Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 14:36:48 +00:00
Compare commits
version3.6...binary-hus
39 commits
| Author | SHA1 | Commit Date |
|---|---|---|
| | 88ae16150b | |
| | caa1ebc227 | |
| | 2bc65a99ca | |
| | 0a2805513e | |
| | c22867b74c | |
| | 2abe665521 | |
| | b0e6c4d365 | |
| | d883c7f34b | |
| | aba871342f | |
| | 37744a9cb1 | |
| | 480516380d | |
| | 60ba712131 | |
| | a7c960dcb0 | |
| | a96f842b3a | |
| | 417ca91e23 | |
| | ef8fadfa18 | |
| | 865c4ca993 | |
| | 31304f481a | |
| | 1bd3637d32 | |
| | 160a683667 | |
| | 49ca03ca06 | |
| | c625348ce1 | |
| | 6d4a74893a | |
| | 5c7499cada | |
| | f522691529 | |
| | ca85573ec1 | |
| | 2c7bba5c63 | |
| | e22f0226d5 | |
| | 0f250305b4 | |
| | 7606f5c130 | |
| | 4f0dcc431c | |
| | 6ca0dd2f9e | |
| | e3e9921f6b | |
| | 867ddd355e | |
| | c60a7452bf | |
| | 68a49d3758 | |
| | ac3d4cf073 | |
| | 9479dd984c | |
| | 3c271302cc | |
.github/ISSUE_TEMPLATE/bug_report.yml (vendored): 6 changes
@@ -69,9 +69,3 @@ body:
   attributes:
     label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
     description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
.github/ISSUE_TEMPLATE/feature_request.yml (vendored): 5 changes
@@ -21,8 +21,3 @@ body:
   attributes:
     label: Feature Request | 功能请求
     description: Feature Request | 功能请求
.gitignore (vendored): 1 change
@@ -152,3 +152,4 @@ request_llms/moss
 media
 flagged
 request_llms/ChatGLM-6b-onnx-u8s8
+.pre-commit-config.yaml
README.md: 14 changes
@@ -2,7 +2,7 @@
 >
 > 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
 >
-> 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗。
+> 2023.12.26: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。

 <br>

@@ -65,7 +65,7 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
 批量注释生成 | [插件] 一键批量生成函数注释
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
-chat分析报告生成 | [插件] 运行后自动生成总结汇报
+⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
 [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF

@@ -111,7 +111,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
 </div>

-- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
+- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + GPT4)
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
 </div>

@@ -308,9 +308,9 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
 </div>

-8. OpenAI音频解析与总结
+8. 基于mermaid的流图、脑图绘制
 <div align="center">
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
 </div>

 9. Latex全文校对纠错

@@ -370,8 +370,8 @@ GPT Academic开发者QQ群:`610599535`

 1. `master` 分支: 主分支,稳定版
 2. `frontier` 分支: 开发分支,测试版
-3. 如何接入其他大模型:[接入其他大模型](request_llms/README.md)
+3. 如何[接入其他大模型](request_llms/README.md)
+4. 访问GPT-Academic的[在线服务并支持我们](https://github.com/binary-husky/gpt_academic/wiki/online)

 ### V:参考与学习
config.py: 29 changes
@@ -89,11 +89,14 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
                     "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
-                    "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "chatglm3", "moss", "claude-2"]
-# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
-#                          "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
+                    "gemini-pro", "chatglm3", "moss", "claude-2"]
+# P.S. 其他可用的模型还包括 [
+# "qwen-turbo", "qwen-plus", "qwen-max"
+# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
+# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
+# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
+# ]


 # 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"

@@ -103,7 +106,11 @@ MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
 # 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
 # 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
 # 也可以是具体的模型路径
-QWEN_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
+QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
+
+
+# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
+DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY


 # 百度千帆(LLM_MODEL="qianfan")

@@ -199,6 +206,10 @@ ANTHROPIC_API_KEY = ""
 CUSTOM_API_KEY_PATTERN = ""


+# Google Gemini API-Key
+GEMINI_API_KEY = ''
+
+
 # HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
 HUGGINGFACE_ACCESS_TOKEN = "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"

@@ -284,6 +295,12 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── ZHIPUAI_API_KEY
 │   └── ZHIPUAI_MODEL
 │
+├── "qwen-turbo" 等通义千问大模型
+│   └── DASHSCOPE_API_KEY
+│
+├── "Gemini"
+│   └── GEMINI_API_KEY
+│
 └── "newbing" Newbing接口不再稳定,不推荐使用
     ├── NEWBING_STYLE
     └── NEWBING_COOKIES

@@ -300,7 +317,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 ├── "jittorllms_pangualpha"
 ├── "jittorllms_llama"
 ├── "deepseekcoder"
-├── "qwen"
+├── "qwen-local"
 ├── RWKV的支持见Wiki
 └── "llama2"
@@ -345,7 +345,7 @@ def get_crazy_functions():
             "Color": "stop",
             "AsButton": False,
             "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-            "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
+            "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
             "Function": HotReload(同时问询_指定模型)
         },
     })

@@ -356,7 +356,7 @@ def get_crazy_functions():
     try:
         from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
         function_plugins.update({
-            "图片生成_DALLE2 (先切换模型到openai或api2d)": {
+            "图片生成_DALLE2 (先切换模型到gpt-*)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,

@@ -367,7 +367,7 @@ def get_crazy_functions():
             },
         })
         function_plugins.update({
-            "图片生成_DALLE3 (先切换模型到openai或api2d)": {
+            "图片生成_DALLE3 (先切换模型到gpt-*)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,

@@ -378,7 +378,7 @@ def get_crazy_functions():
             },
         })
         function_plugins.update({
-            "图片修改_DALLE2 (先切换模型到openai或api2d)": {
+            "图片修改_DALLE2 (先切换模型到gpt-*)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
@@ -139,6 +139,8 @@ def can_multi_process(llm):
     if llm.startswith('gpt-'): return True
     if llm.startswith('api2d-'): return True
     if llm.startswith('azure-'): return True
+    if llm.startswith('spark'): return True
+    if llm.startswith('zhipuai'): return True
     return False

 def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
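The two added lines extend the concurrency whitelist to the Spark and Zhipu endpoints. A minimal standalone sketch of the same prefix check, as a hypothetical copy for illustration rather than the project's own module:

```python
# Hypothetical standalone copy of the prefix whitelist, for illustration only.
def can_multi_process(llm: str) -> bool:
    # API-served models tolerate concurrent requests; local single-process
    # models (chatglm3, moss, ...) fall through to False.
    return llm.startswith(('gpt-', 'api2d-', 'azure-', 'spark', 'zhipuai'))

assert can_multi_process('sparkv3')        # newly allowed by this hunk
assert not can_multi_process('chatglm3')   # still single-threaded
```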
@@ -464,6 +466,9 @@ def read_and_clean_pdf_text(fp):
             return True
         else:
             return False
+    # 对于某些PDF会有第一个段落就以小写字母开头,为了避免索引错误将其更改为大写
+    if starts_with_lowercase_word(meta_txt[0]):
+        meta_txt[0] = meta_txt[0].capitalize()
     for _ in range(100):
         for index, block_txt in enumerate(meta_txt):
             if starts_with_lowercase_word(block_txt):
@@ -250,8 +250,8 @@ def find_main_tex_file(file_manifest, mode):
     else: # if len(canidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回
         canidates_score = []
         # 给出一些判定模板文档的词作为扣分项
-        unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
-        expected_words = ['\input', '\ref', '\cite']
+        unexpected_words = ['\\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
+        expected_words = ['\\input', '\\ref', '\\cite']
         for texf in canidates:
             canidates_score.append(0)
             with open(texf, 'r', encoding='utf8', errors='ignore') as f:
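The doubled backslashes matter because `'\ref'` is not the four characters backslash-r-e-f: Python reads `\r` as a carriage-return escape, so the old `expected_words` entry could never match a real `\ref` command. A quick check:

```python
# '\r' is a carriage-return escape, so the old literal never contained
# the LaTeX command it was meant to score.
old = '\ref'     # carriage return + 'ef'
new = '\\ref'    # backslash + 'ref'
print(len(old), len(new))                  # 3 4
tex_line = r'\input{intro} \ref{fig:1} \cite{smith}'
print(old in tex_line, new in tex_line)    # False True
```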
@@ -65,10 +65,10 @@ def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=F
         # 如果没有找到合适的切分点
         if break_anyway:
             # 是否允许暴力切分
-            prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
+            prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
         else:
             # 不允许直接报错
-            raise RuntimeError(f"存在一行极长的文本!{txt_tocut}")
+            raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")

         # 追加列表
         res.append(prev); fin_len+=len(prev)
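The fix points `force_breakdown` (and the error message) at the still-unconsumed remainder instead of the original input. A simplified sketch of the loop shape, using `len()` in place of the real token counter, shows why splitting `txt_tocut` on every round would re-emit text that was already appended to `res`:

```python
# Simplified sketch of the splitting loop; the real cut() lives in
# crazy_functions and uses a token counter plus empty-line heuristics.
def force_breakdown(txt, limit, get_token_fn):
    # naive hard cut at the limit (characters stand in for tokens here)
    return txt[:limit], txt[limit:]

def cut(limit, get_token_fn, txt_tocut, break_anyway=False):
    res = []
    remain_txt_to_cut = txt_tocut
    while get_token_fn(remain_txt_to_cut) > limit:
        if break_anyway:
            # the fix: split the *remaining* text, not the original txt_tocut,
            # otherwise every round re-emits text that was already consumed
            prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
        else:
            raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")
        res.append(prev)
        remain_txt_to_cut = post
    res.append(remain_txt_to_cut)
    return res

print(cut(5, len, "abcdefghijklmno", break_anyway=True))
# ['abcde', 'fghij', 'klmno']
```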
@@ -104,7 +104,11 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
         web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
-    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+    if prompt.strip() == "":
+        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
+        return
+    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
     resolution = plugin_kwargs.get("advanced_arg", '1024x1024')

@@ -121,7 +125,11 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
 @CatchException
 def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
-    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+    if prompt.strip() == "":
+        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
+        return
+    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
     resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
@@ -229,4 +229,3 @@ services:
       # 不使用代理网络拉取最新代码
       command: >
         bash -c "python3 -u main.py"
-

@@ -1,2 +1 @@
 # 此Dockerfile不再维护,请前往docs/GithubAction+ChatGLM+Moss
-
@@ -341,4 +341,3 @@ https://github.com/oobabooga/one-click-installers
 # المزيد:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -355,4 +355,3 @@ https://github.com/oobabooga/one-click-installers
 # More:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -354,4 +354,3 @@ https://github.com/oobabooga/one-click-installers
 # Plus:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
 # Weitere:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
 # Altre risorse:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -342,4 +342,3 @@ https://github.com/oobabooga/one-click-installers
 # その他:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
 # 더보기:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -355,4 +355,3 @@ https://github.com/oobabooga/instaladores-de-um-clique
 # Mais:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-

@@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
 # Больше:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
-
@@ -7,13 +7,27 @@ sample = """
 """
 import re


 def preprocess_newbing_out(s):
-    pattern = r'\^(\d+)\^' # 匹配^数字^
-    pattern2 = r'\[(\d+)\]' # 匹配^数字^
-    sub = lambda m: '\['+m.group(1)+'\]' # 将匹配到的数字作为替换值
-    result = re.sub(pattern, sub, s) # 替换操作
-    if '[1]' in result:
-        result += '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>' + "<br/>".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '</small>'
+    pattern = r"\^(\d+)\^"  # 匹配^数字^
+    pattern2 = r"\[(\d+)\]"  # 匹配^数字^
+
+    def sub(m):
+        return "\\[" + m.group(1) + "\\]"  # 将匹配到的数字作为替换值
+
+    result = re.sub(pattern, sub, s)  # 替换操作
+    if "[1]" in result:
+        result += (
+            '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>'
+            + "<br/>".join(
+                [
+                    re.sub(pattern2, sub, r)
+                    for r in result.split("\n")
+                    if r.startswith("[")
+                ]
+            )
+            + "</small>"
+        )
     return result
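For reference, the rewrite turns NewBing's `^n^` citation markers into escaped `\[n\]` so the markdown renderer shows them literally; a standalone check of the reformatted regex:

```python
import re

pattern = r"\^(\d+)\^"   # matches ^n^ citation markers

def sub(m):
    return "\\[" + m.group(1) + "\\]"

print(re.sub(pattern, sub, "Bing cites a source^2^ here."))
# Bing cites a source\[2\] here.
```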
@@ -28,37 +42,39 @@ def close_up_code_segment_during_stream(gpt_reply):
         str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。

     """
-    if '```' not in gpt_reply:
+    if "```" not in gpt_reply:
         return gpt_reply
-    if gpt_reply.endswith('```'):
+    if gpt_reply.endswith("```"):
         return gpt_reply

     # 排除了以上两个情况,我们
-    segments = gpt_reply.split('```')
+    segments = gpt_reply.split("```")
     n_mark = len(segments) - 1
     if n_mark % 2 == 1:
         # print('输出代码片段中!')
-        return gpt_reply+'\n```'
+        return gpt_reply + "\n```"
     else:
         return gpt_reply
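The function's invariant is easy to state: an odd number of triple-backtick fences means a code block is still open mid-stream, so exactly one closing fence is appended. A self-contained check, with the fence built programmatically so this snippet can itself sit inside a fenced block:

```python
fence = "`" * 3  # build the fence without writing a literal triple backtick
partial = "Here is code:\n" + fence + "python\nprint(1)"
if partial.count(fence) % 2 == 1:   # odd fence count = block still open
    partial += "\n" + fence         # temporarily close it for rendering
assert partial.count(fence) == 2
```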
import markdown
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache


def markdown_convertion(txt):
    """
    将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
    """
    pre = '<div class="markdown-body">'
-   suf = '</div>'
+   suf = "</div>"
    if txt.startswith(pre) and txt.endswith(suf):
        # print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
-       return txt # 已经被转化过,不需要再次转化
+       return txt  # 已经被转化过,不需要再次转化

    markdown_extension_configs = {
-       'mdx_math': {
-           'enable_dollar_delimiter': True,
-           'use_gitlab_delimiters': False,
+       "mdx_math": {
+           "enable_dollar_delimiter": True,
+           "use_gitlab_delimiters": False,
        },
    }
    find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'

@@ -72,19 +88,19 @@ def markdown_convertion(txt):

    def replace_math_no_render(match):
        content = match.group(1)
-       if 'mode=display' in match.group(0):
-           content = content.replace('\n', '</br>')
-           return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
+       if "mode=display" in match.group(0):
+           content = content.replace("\n", "</br>")
+           return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
        else:
-           return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
+           return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'

    def replace_math_render(match):
        content = match.group(1)
-       if 'mode=display' in match.group(0):
-           if '\\begin{aligned}' in content:
-               content = content.replace('\\begin{aligned}', '\\begin{array}')
-               content = content.replace('\\end{aligned}', '\\end{array}')
-               content = content.replace('&', ' ')
+       if "mode=display" in match.group(0):
+           if "\\begin{aligned}" in content:
+               content = content.replace("\\begin{aligned}", "\\begin{array}")
+               content = content.replace("\\end{aligned}", "\\end{array}")
+               content = content.replace("&", " ")
            content = tex2mathml_catch_exception(content, display="block")
            return content
        else:

@@ -94,37 +110,58 @@ def markdown_convertion(txt):
        """
        解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
        """
-       content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
-       content = content.replace('</script>\n</script>', '</script>')
+       content = content.replace(
+           '<script type="math/tex">\n<script type="math/tex; mode=display">',
+           '<script type="math/tex; mode=display">',
+       )
+       content = content.replace("</script>\n</script>", "</script>")
        return content

-   if ('$' in txt) and ('```' not in txt): # 有$标识的公式符号,且没有代码段```的标识
+   if ("$" in txt) and ("```" not in txt):  # 有$标识的公式符号,且没有代码段```的标识
        # convert everything to html format
-       split = markdown.markdown(text='---')
-       convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
+       split = markdown.markdown(text="---")
+       convert_stage_1 = markdown.markdown(
+           text=txt,
+           extensions=["mdx_math", "fenced_code", "tables", "sane_lists"],
+           extension_configs=markdown_extension_configs,
+       )
        convert_stage_1 = markdown_bug_hunt(convert_stage_1)
        # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
        # 1. convert to easy-to-copy tex (do not render math)
-       convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
+       convert_stage_2_1, n = re.subn(
+           find_equation_pattern,
+           replace_math_no_render,
+           convert_stage_1,
+           flags=re.DOTALL,
+       )
        # 2. convert to rendered equation
-       convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
+       convert_stage_2_2, n = re.subn(
+           find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
+       )
        # cat them together
-       return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
+       return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
    else:
-       return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
+       return (
+           pre
+           + markdown.markdown(
+               txt, extensions=["fenced_code", "codehilite", "tables", "sane_lists"]
+           )
+           + suf
+       )


sample = preprocess_newbing_out(sample)
sample = close_up_code_segment_during_stream(sample)
sample = markdown_convertion(sample)
-with open('tmp.html', 'w', encoding='utf8') as f:
-   f.write("""
+with open("tmp.html", "w", encoding="utf8") as f:
+   f.write(
+       """

<head>
    <title>My Website</title>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>

-""")
+"""
+   )
    f.write(sample)
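The pipeline being reformatted here relies on python-markdown's `mdx_math` extension (from the python-markdown-math package) to wrap TeX in `<script type="math/tex">` tags, which the two `re.subn` passes then rewrite into a copy-friendly variant and a rendered variant. A minimal sketch of stage 1 in isolation:

```python
import markdown  # plus: pip install python-markdown-math

# Stage 1 in isolation: mdx_math wraps $...$ into <script type="math/tex">
html = markdown.markdown(
    text="Euler: $e^{i\\pi}+1=0$",
    extensions=["mdx_math", "fenced_code", "tables", "sane_lists"],
    extension_configs={"mdx_math": {"enable_dollar_delimiter": True}},
)
print(html)  # ...<script type="math/tex">e^{i\pi}+1=0</script>...
```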
@@ -61,4 +61,3 @@ VI 两种音频监听模式切换时,需要刷新页面才有效。
 VII 非localhost运行+非https情况下无法打开录音功能的坑:https://blog.csdn.net/weixin_39461487/article/details/109594434

 ## 5.点击函数插件区“实时音频采集” 或者其他音频交互功能
-
main.py: 13 changes
@@ -15,7 +15,7 @@ help_menu_description = \

 def main():
     import gradio as gr
-    if gr.__version__ not in ['3.32.6']:
+    if gr.__version__ not in ['3.32.6', '3.32.7']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
@@ -139,17 +139,17 @@ def main():
             with gr.Row():
                 switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
             with gr.Row():
-                with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
+                with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
                     file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")

-        with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden"):
+        with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
             with gr.Row():
                 with gr.Tab("上传文件", elem_id="interact-panel"):
                     gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
                     file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")

-                with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
+                with gr.Tab("更换模型", elem_id="interact-panel"):
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                     temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)

@@ -161,10 +161,9 @@ def main():
                     checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
                                                   value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
                     checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
-                                                    value=[], label="显示/隐藏自定义菜单", elem_id='cbs').style(container=False)
+                                                    value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
                     dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
-                    dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode,
-                    )
+                    dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
                 with gr.Tab("帮助", elem_id="interact-panel"):
                     gr.Markdown(help_menu_description)
@@ -28,6 +28,9 @@ from .bridge_chatglm3 import predict as chatglm3_ui
 from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
 from .bridge_qianfan import predict as qianfan_ui

+from .bridge_google_gemini import predict as genai_ui
+from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
+
 colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']

 class LazyloadTiktoken(object):
@@ -246,6 +249,22 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
+    "gemini-pro": {
+        "fn_with_ui": genai_ui,
+        "fn_without_ui": genai_noui,
+        "endpoint": None,
+        "max_token": 1024 * 32,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "gemini-pro-vision": {
+        "fn_with_ui": genai_ui,
+        "fn_without_ui": genai_noui,
+        "endpoint": None,
+        "max_token": 1024 * 32,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
 }

 # -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
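Each `model_info` entry bundles a UI and a non-UI predict function with a tokenizer and a context limit, so adding Gemini is purely declarative. A hypothetical illustration of how such a registry gets consumed (the real dispatch lives elsewhere in bridge_all.py):

```python
# Hypothetical consumer of a model_info-style registry, for illustration.
def predict_no_ui(model: str, inputs: str, model_info: dict, **kwargs) -> str:
    entry = model_info[model]                 # KeyError means unknown model
    if entry["token_cnt"](inputs) > entry["max_token"]:
        raise ValueError("input exceeds the model's context window")
    return entry["fn_without_ui"](inputs, **kwargs)
```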
@@ -431,14 +450,14 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "qwen" in AVAIL_LLM_MODELS:
+if "qwen-local" in AVAIL_LLM_MODELS:
     try:
-        from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
-        from .bridge_qwen import predict as qwen_ui
+        from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
+        from .bridge_qwen_local import predict as qwen_local_ui
         model_info.update({
-            "qwen": {
-                "fn_with_ui": qwen_ui,
-                "fn_without_ui": qwen_noui,
+            "qwen-local": {
+                "fn_with_ui": qwen_local_ui,
+                "fn_without_ui": qwen_local_noui,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -447,16 +466,32 @@ if "qwen" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "chatgpt_website" in AVAIL_LLM_MODELS:   # 接入一些逆向工程https://github.com/acheong08/ChatGPT-to-API/
+if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:   # zhipuai
     try:
-        from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui
-        from .bridge_chatgpt_website import predict as chatgpt_website_ui
+        from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
+        from .bridge_qwen import predict as qwen_ui
         model_info.update({
-            "chatgpt_website": {
-                "fn_with_ui": chatgpt_website_ui,
-                "fn_without_ui": chatgpt_website_noui,
-                "endpoint": openai_endpoint,
-                "max_token": 4096,
+            "qwen-turbo": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 6144,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             },
+            "qwen-plus": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 30720,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "qwen-max": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 28672,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            }
@@ -102,20 +102,25 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     result = ''
     json_data = None
     while True:
-        try: chunk = next(stream_response).decode()
+        try: chunk = next(stream_response)
         except StopIteration:
             break
         except requests.exceptions.ConnectionError:
-            chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
-        if len(chunk)==0: continue
-        if not chunk.startswith('data:'):
-            error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
+            chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
+        chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
+        if len(chunk_decoded)==0: continue
+        if not chunk_decoded.startswith('data:'):
+            error_msg = get_full_error(chunk, stream_response).decode()
             if "reduce the length" in error_msg:
                 raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
             else:
                 raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
-        if ('data: [DONE]' in chunk): break # api2d 正常完成
-        json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
+        if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
+        # 提前读取一些信息 (用于判断异常)
+        if has_choices and not choice_valid:
+            # 一些垃圾第三方接口的出现这样的错误
+            continue
+        json_data = chunkjson['choices'][0]
         delta = json_data["delta"]
         if len(delta) == 0: break
         if "role" in delta: continue
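The refactor routes every raw SSE chunk through a `decode_chunk` helper that pre-computes the JSON payload and a set of sanity flags, so the loop can skip malformed `choices` arrays from third-party relays. A sketch of what such a helper plausibly does (an assumption for illustration; the real one is defined elsewhere in bridge_chatgpt.py):

```python
import json

# Assumed shape of a decode_chunk-style helper, for illustration only.
def decode_chunk(chunk: bytes):
    chunk_decoded = chunk.decode()
    chunkjson, has_choices, choice_valid, has_content, has_role = None, False, False, False, False
    try:
        chunkjson = json.loads(chunk_decoded[6:])   # strip the 'data: ' prefix
        has_choices = 'choices' in chunkjson
        if has_choices:
            choice_valid = len(chunkjson['choices']) > 0
        if choice_valid:
            delta = chunkjson['choices'][0].get('delta', {})
            has_content = 'content' in delta
            has_role = 'role' in delta
    except Exception:
        pass  # non-JSON keep-alives and '[DONE]' sentinels fall through
    return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
```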
request_llms/bridge_google_gemini.py (new regular file): 109 lines
@@ -0,0 +1,109 @@
# encoding: utf-8
# @Time : 2023/12/21
# @Author : Spike
# @Descr :
import json
import re
import os
import time
from request_llms.com_google import GoogleChatInit
from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc

proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                  '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'


def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
                                  console_slience=False):
    # 检查API_KEY
    if get_conf("GEMINI_API_KEY") == "":
        raise ValueError(f"请配置 GEMINI_API_KEY。")

    genai = GoogleChatInit()
    watch_dog_patience = 5  # 看门狗的耐心, 设置5秒即可
    gpt_replying_buffer = ''
    stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
    for response in stream_response:
        results = response.decode()
        match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
        error_match = re.search(r'\"message\":\s*\"(.*?)\"', results, flags=re.DOTALL)
        if match:
            try:
                paraphrase = json.loads('{"text": "%s"}' % match.group(1))
            except:
                raise ValueError(f"解析GEMINI消息出错。")
            buffer = paraphrase['text']
            gpt_replying_buffer += buffer
            if len(observe_window) >= 1:
                observe_window[0] = gpt_replying_buffer
            if len(observe_window) >= 2:
                if (time.time() - observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
        if error_match:
            raise RuntimeError(f'{gpt_replying_buffer} 对话错误')
    return gpt_replying_buffer


def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
    # 检查API_KEY
    if get_conf("GEMINI_API_KEY") == "":
        yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
        return

    if "vision" in llm_kwargs["llm_model"]:
        have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
        def make_media_input(inputs, image_paths):
            for image_path in image_paths:
                inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
            return inputs
        if have_recent_file:
            inputs = make_media_input(inputs, image_paths)

    chatbot.append((inputs, ""))
    yield from update_ui(chatbot=chatbot, history=history)
    genai = GoogleChatInit()
    retry = 0
    while True:
        try:
            stream_response = genai.generate_chat(inputs, llm_kwargs, history, system_prompt)
            break
        except Exception as e:
            retry += 1
            chatbot[-1] = ((chatbot[-1][0], trimmed_format_exc()))
            yield from update_ui(chatbot=chatbot, history=history, msg="请求失败")  # 刷新界面
            return
    gpt_replying_buffer = ""
    gpt_security_policy = ""
    history.extend([inputs, ''])
    for response in stream_response:
        results = response.decode("utf-8")  # 被这个解码给耍了。。
        gpt_security_policy += results
        match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
        error_match = re.search(r'\"message\":\s*\"(.*)\"', results, flags=re.DOTALL)
        if match:
            try:
                paraphrase = json.loads('{"text": "%s"}' % match.group(1))
            except:
                raise ValueError(f"解析GEMINI消息出错。")
            gpt_replying_buffer += paraphrase['text']  # 使用 json 解析库进行处理
            chatbot[-1] = (inputs, gpt_replying_buffer)
            history[-1] = gpt_replying_buffer
            yield from update_ui(chatbot=chatbot, history=history)
        if error_match:
            history = history[-2]  # 错误的不纳入对话
            chatbot[-1] = (inputs, gpt_replying_buffer + f"对话错误,请查看message\n\n```\n{error_match.group(1)}\n```")
            yield from update_ui(chatbot=chatbot, history=history)
            raise RuntimeError('对话错误')
    if not gpt_replying_buffer:
        history = history[-2]  # 错误的不纳入对话
        chatbot[-1] = (inputs, gpt_replying_buffer + f"触发了Google的安全访问策略,没有回答\n\n```\n{gpt_security_policy}\n```")
        yield from update_ui(chatbot=chatbot, history=history)


if __name__ == '__main__':
    import sys
    llm_kwargs = {'llm_model': 'gemini-pro'}
    result = predict('Write long a story about a magic backpack.', llm_kwargs, llm_kwargs, [])
    for i in result:
        print(i)
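Rather than parsing each streamed line as JSON, the bridge above pulls the incremental `"text"` field out with a regex, then re-wraps the escaped capture so `json.loads` handles `\n`, `\"` and friends. A standalone check of that pattern:

```python
import json, re

# Standalone check of the streaming "text" extraction used above.
line = '      "text": "Hello\\nworld"'
match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', line, flags=re.DOTALL)
if match:
    # re-wrap the still-escaped capture so json.loads decodes \n, \" etc.
    piece = json.loads('{"text": "%s"}' % match.group(1))["text"]
    print(piece)  # prints "Hello" and "world" on two lines
```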
@@ -1,59 +1,62 @@
-model_name = "Qwen"
-cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
+import time
+import os
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import check_packages, report_exception

-from toolbox import ProxyNetworkActivate, get_conf
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
+model_name = 'Qwen'

+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+    """
+    ⭐多线程方法
+    函数的说明请见 request_llms/bridge_all.py
+    """
+    watch_dog_patience = 5
+    response = ""
+
+    from .com_qwenapi import QwenRequestInstance
+    sri = QwenRequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+        if len(observe_window) >= 1:
+            observe_window[0] = response
+        if len(observe_window) >= 2:
+            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+    return response

-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 Local Model
-# ------------------------------------------------------------------------------------------------------------------------
-class GetQwenLMHandle(LocalLLMHandle):
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+    """
+    ⭐单线程方法
+    函数的说明请见 request_llms/bridge_all.py
+    """
+    chatbot.append((inputs, ""))
+    yield from update_ui(chatbot=chatbot, history=history)

-    def load_model_info(self):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
-        self.model_name = model_name
-        self.cmd_to_install = cmd_to_install
+    # 尝试导入依赖,如果缺少依赖,则给出安装建议
+    try:
+        check_packages(["dashscope"])
+    except:
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return

-    def load_model_and_tokenizer(self):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
-        # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
-        from transformers import AutoModelForCausalLM, AutoTokenizer
-        from transformers.generation import GenerationConfig
-        with ProxyNetworkActivate('Download_LLM'):
-            model_id = get_conf('QWEN_MODEL_SELECTION')
-            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
-            # use fp16
-            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
-            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True)  # 可指定不同的生成长度、top_p等相关超参
-            self._model = model
+    # 检查DASHSCOPE_API_KEY
+    if get_conf("DASHSCOPE_API_KEY") == "":
+        yield from update_ui_lastest_msg(f"请配置 DASHSCOPE_API_KEY。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return

-        return self._model, self._tokenizer
+    if additional_fn is not None:
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

-    def llm_stream_generator(self, **kwargs):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
-        def adaptor(kwargs):
-            query = kwargs['query']
-            max_length = kwargs['max_length']
-            top_p = kwargs['top_p']
-            temperature = kwargs['temperature']
-            history = kwargs['history']
-            return query, max_length, top_p, temperature, history
+    # 开始接收回复
+    from .com_qwenapi import QwenRequestInstance
+    sri = QwenRequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+        chatbot[-1] = (inputs, response)
+        yield from update_ui(chatbot=chatbot, history=history)

-        query, max_length, top_p, temperature, history = adaptor(kwargs)
-
-        for response in self._model.chat_stream(self._tokenizer, query, history=history):
-            yield response
-
-    def try_to_import_special_deps(self, **kwargs):
-        # import something that will raise error if the user does not install requirement_*.txt
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
-        import importlib
-        importlib.import_module('modelscope')
-
-
-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 GPT-Academic Interface
-# ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
+    # 总结输出
+    if response == f"[Local Message] 等待{model_name}响应中 ...":
+        response = f"[Local Message] {model_name}响应异常 ..."
+    history.extend([inputs, response])
+    yield from update_ui(chatbot=chatbot, history=history)
@@ -0,0 +1,59 @@
model_name = "Qwen_Local"
cmd_to_install = "`pip install -r request_llms/requirements_qwen_local.txt`"

from toolbox import ProxyNetworkActivate, get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns



# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetQwenLMHandle(LocalLLMHandle):

    def load_model_info(self):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
        # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from transformers.generation import GenerationConfig
        with ProxyNetworkActivate('Download_LLM'):
            model_id = get_conf('QWEN_LOCAL_MODEL_SELECTION')
            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
            # use fp16
            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True)  # 可指定不同的生成长度、top_p等相关超参
            self._model = model

        return self._model, self._tokenizer

    def llm_stream_generator(self, **kwargs):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
        def adaptor(kwargs):
            query = kwargs['query']
            max_length = kwargs['max_length']
            top_p = kwargs['top_p']
            temperature = kwargs['temperature']
            history = kwargs['history']
            return query, max_length, top_p, temperature, history

        query, max_length, top_p, temperature, history = adaptor(kwargs)

        for response in self._model.chat_stream(self._tokenizer, query, history=history):
            yield response

    def try_to_import_special_deps(self, **kwargs):
        # import something that will raise error if the user does not install requirement_*.txt
        # 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
        import importlib
        importlib.import_module('modelscope')


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
@@ -26,7 +26,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",

     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt, use_image_api=False):
         if len(observe_window) >= 1:
             observe_window[0] = response
         if len(observe_window) >= 2:

@@ -52,7 +52,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # 开始接收回复
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)
request_llms/com_google.py (new regular file): 228 lines
@@ -0,0 +1,228 @@
# encoding: utf-8
# @Time : 2023/12/25
# @Author : Spike
# @Descr :
import json
import os
import re
import requests
from typing import List, Dict, Tuple
from toolbox import get_conf, encode_image, get_pictures_list

proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")

"""
========================================================================
第五部分 一些文件处理方法
files_filter_handler 根据type过滤文件
input_encode_handler 提取input中的文件,并解析
file_manifest_filter_html 根据type过滤文件, 并解析为html or md 文本
link_mtime_to_md 文件增加本地时间参数,避免下载到缓存文件
html_view_blank 超链接
html_local_file 本地文件取相对路径
to_markdown_tabs 文件list 转换为 md tab
"""


def files_filter_handler(file_list):
    new_list = []
    filter_ = [
        "png",
        "jpg",
        "jpeg",
        "bmp",
        "svg",
        "webp",
        "ico",
        "tif",
        "tiff",
        "raw",
        "eps",
    ]
    for file in file_list:
        file = str(file).replace("file=", "")
        if os.path.exists(file):
            if str(os.path.basename(file)).split(".")[-1] in filter_:
                new_list.append(file)
    return new_list


def input_encode_handler(inputs, llm_kwargs):
    if llm_kwargs["most_recent_uploaded"].get("path"):
        image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
    md_encode = []
    for md_path in image_paths:
        type_ = os.path.splitext(md_path)[1].replace(".", "")
        type_ = "jpeg" if type_ == "jpg" else type_
        md_encode.append({"data": encode_image(md_path), "type": type_})
    return inputs, md_encode


def file_manifest_filter_html(file_list, filter_: list = None, md_type=False):
    new_list = []
    if not filter_:
        filter_ = [
            "png",
            "jpg",
            "jpeg",
            "bmp",
            "svg",
            "webp",
            "ico",
            "tif",
            "tiff",
            "raw",
            "eps",
        ]
    for file in file_list:
        if str(os.path.basename(file)).split(".")[-1] in filter_:
            new_list.append(html_local_img(file, md=md_type))
        elif os.path.exists(file):
            new_list.append(link_mtime_to_md(file))
        else:
            new_list.append(file)
    return new_list


def link_mtime_to_md(file):
    link_local = html_local_file(file)
    link_name = os.path.basename(file)
    a = f"[{link_name}]({link_local}?{os.path.getmtime(file)})"
    return a


def html_local_file(file):
    base_path = os.path.dirname(__file__)  # 项目目录
    if os.path.exists(str(file)):
        file = f'file={file.replace(base_path, ".")}'
    return file


def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
    style = ""
    if max_width is not None:
        style += f"max-width: {max_width};"
    if max_height is not None:
        style += f"max-height: {max_height};"
    __file = html_local_file(__file)
    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
    if md:
        a = f""
    return a


def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
    """
    Args:
        head: 表头:[]
        tabs: 表值:[[列1], [列2], [列3], [列4]]
        alignment: :--- 左对齐, :---: 居中对齐, ---: 右对齐
        column: True to keep data in columns, False to keep data in rows (default).
    Returns:
        A string representation of the markdown table.
    """
    if column:
        transposed_tabs = list(map(list, zip(*tabs)))
    else:
        transposed_tabs = tabs
    # Find the maximum length among the columns
    max_len = max(len(column) for column in transposed_tabs)

    tab_format = "| %s "
    tabs_list = "".join([tab_format % i for i in head]) + "|\n"
    tabs_list += "".join([tab_format % alignment for i in head]) + "|\n"

    for i in range(max_len):
        row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
        row_data = file_manifest_filter_html(row_data, filter_=None)
        tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"

    return tabs_list


class GoogleChatInit:
    def __init__(self):
        self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"

    def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
        headers, payload = self.generate_message_payload(
            inputs, llm_kwargs, history, system_prompt
        )
        response = requests.post(
            url=self.url_gemini,
            headers=headers,
            data=json.dumps(payload),
            stream=True,
            proxies=proxies,
            timeout=TIMEOUT_SECONDS,
        )
        return response.iter_lines()

    def __conversation_user(self, user_input, llm_kwargs):
        what_i_have_asked = {"role": "user", "parts": []}
        if "vision" not in self.url_gemini:
            input_ = user_input
            encode_img = []
        else:
            input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
        what_i_have_asked["parts"].append({"text": input_})
        if encode_img:
            for data in encode_img:
                what_i_have_asked["parts"].append(
                    {
                        "inline_data": {
                            "mime_type": f"image/{data['type']}",
                            "data": data["data"],
                        }
                    }
                )
        return what_i_have_asked

    def __conversation_history(self, history, llm_kwargs):
        messages = []
        conversation_cnt = len(history) // 2
        if conversation_cnt:
            for index in range(0, 2 * conversation_cnt, 2):
                what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
                what_gpt_answer = {
                    "role": "model",
                    "parts": [{"text": history[index + 1]}],
                }
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
        return messages

    def generate_message_payload(
        self, inputs, llm_kwargs, history, system_prompt
    ) -> Tuple[Dict, Dict]:
        messages = [
            # {"role": "system", "parts": [{"text": system_prompt}]},  # gemini 不允许对话轮次为偶数,所以这个没有用,看后续支持吧。。。
            # {"role": "user", "parts": [{"text": ""}]},
            # {"role": "model", "parts": [{"text": ""}]}
        ]
        self.url_gemini = self.url_gemini.replace(
            "%m", llm_kwargs["llm_model"]
        ).replace("%k", get_conf("GEMINI_API_KEY"))
        header = {"Content-Type": "application/json"}
        if "vision" not in self.url_gemini:  # 不是vision 才处理history
            messages.extend(
                self.__conversation_history(history, llm_kwargs)
            )  # 处理 history
        messages.append(self.__conversation_user(inputs, llm_kwargs))  # 处理用户对话
        payload = {
            "contents": messages,
            "generationConfig": {
                # "maxOutputTokens": 800,
                "stopSequences": str(llm_kwargs.get("stop", "")).split(" "),
                "temperature": llm_kwargs.get("temperature", 1),
                "topP": llm_kwargs.get("top_p", 0.8),
                "topK": 10,
            },
        }
        return header, payload


if __name__ == "__main__":
    google = GoogleChatInit()
    # print(gootle.generate_message_payload('你好呀', {}, ['123123', '3123123'], ''))
    # gootle.input_encode_handle('123123[123123](./123123), ')
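A small usage sketch for `to_markdown_tabs` as defined above. Note that with the default `column=False`, each inner list of `tabs` is one *column* (the `[列1], [列2]` layout from the docstring), and rows are assembled index by index:

```python
# Usage sketch for to_markdown_tabs: each inner list is one column.
head = ["Model", "Max tokens"]
tabs = [["qwen-turbo", "qwen-plus"], ["6144", "30720"]]
print(to_markdown_tabs(head, tabs))
# | Model | Max tokens |
# | :---: | :---: |
# | qwen-turbo | 6144 |
# | qwen-plus | 30720 |
```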
request_llms/com_qwenapi.py (new regular file): 94 lines
@@ -0,0 +1,94 @@
from http import HTTPStatus
from toolbox import get_conf
import threading
import logging

timeout_bot_msg = '[Local Message] Request timeout. Network error.'

class QwenRequestInstance():
    def __init__(self):
        import dashscope
        self.time_to_yield_event = threading.Event()
        self.time_to_exit_event = threading.Event()
        self.result_buf = ""

        def validate_key():
            DASHSCOPE_API_KEY = get_conf("DASHSCOPE_API_KEY")
            if DASHSCOPE_API_KEY == '': return False
            return True

        if not validate_key():
            raise RuntimeError('请配置 DASHSCOPE_API_KEY')
        dashscope.api_key = get_conf("DASHSCOPE_API_KEY")


    def generate(self, inputs, llm_kwargs, history, system_prompt):
        # import _thread as thread
        from dashscope import Generation
        QWEN_MODEL = {
            'qwen-turbo': Generation.Models.qwen_turbo,
            'qwen-plus': Generation.Models.qwen_plus,
            'qwen-max': Generation.Models.qwen_max,
        }[llm_kwargs['llm_model']]
        top_p = llm_kwargs.get('top_p', 0.8)
        if top_p == 0: top_p += 1e-5
        if top_p == 1: top_p -= 1e-5

        self.result_buf = ""
        responses = Generation.call(
            model=QWEN_MODEL,
            messages=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
            top_p=top_p,
            temperature=llm_kwargs.get('temperature', 1.0),
            result_format='message',
            stream=True,
            incremental_output=True
        )

        for response in responses:
            if response.status_code == HTTPStatus.OK:
                if response.output.choices[0].finish_reason == 'stop':
                    yield self.result_buf
                    break
                elif response.output.choices[0].finish_reason == 'length':
                    self.result_buf += "[Local Message] 生成长度过长,后续输出被截断"
                    yield self.result_buf
                    break
                else:
                    self.result_buf += response.output.choices[0].message.content
                    yield self.result_buf
            else:
                self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}"
                yield self.result_buf
                break
        logging.info(f'[raw_input] {inputs}')
        logging.info(f'[response] {self.result_buf}')
        return self.result_buf


def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
    conversation_cnt = len(history) // 2
    if system_prompt == '': system_prompt = 'Hello!'
    messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
            what_i_have_asked["content"] = history[index]
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
            what_gpt_answer["content"] = history[index+1]
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "":
                    continue
                if what_gpt_answer["content"] == timeout_bot_msg:
                    continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['content'] = what_gpt_answer['content']
    what_i_ask_now = {}
    what_i_ask_now["role"] = "user"
    what_i_ask_now["content"] = inputs
    messages.append(what_i_ask_now)
    return messages
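What `generate_message_payload` produces for a short dialogue: the synthetic leading user/assistant pair folds the system prompt in while keeping the roles strictly alternating, which the DashScope message format appears to expect.

```python
# Sketch: the payload produced by generate_message_payload above.
messages = generate_message_payload(
    inputs="什么是质子?",
    llm_kwargs={},
    history=["你好", "你好,有什么可以帮你?"],
    system_prompt="You are a helpful assistant.",
)
# [{'role': 'user', 'content': 'You are a helpful assistant.'},
#  {'role': 'assistant', 'content': 'Certainly!'},
#  {'role': 'user', 'content': '你好'},
#  {'role': 'assistant', 'content': '你好,有什么可以帮你?'},
#  {'role': 'user', 'content': '什么是质子?'}]
```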
@@ -72,12 +72,12 @@ class SparkRequestInstance():

         self.result_buf = ""

-    def generate(self, inputs, llm_kwargs, history, system_prompt):
+    def generate(self, inputs, llm_kwargs, history, system_prompt, use_image_api=False):
         llm_kwargs = llm_kwargs
         history = history
         system_prompt = system_prompt
         import _thread as thread
-        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt))
+        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt, use_image_api))
         while True:
             self.time_to_yield_event.wait(timeout=1)
             if self.time_to_yield_event.is_set():

@@ -86,7 +86,7 @@ class SparkRequestInstance():
         return self.result_buf


-    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
+    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt, use_image_api):
         if llm_kwargs['llm_model'] == 'sparkv2':
             gpt_url = self.gpt_url_v2
         elif llm_kwargs['llm_model'] == 'sparkv3':

@@ -94,10 +94,12 @@ class SparkRequestInstance():
         else:
             gpt_url = self.gpt_url
         file_manifest = []
-        if llm_kwargs.get('most_recent_uploaded'):
+        if use_image_api and llm_kwargs.get('most_recent_uploaded'):
             if llm_kwargs['most_recent_uploaded'].get('path'):
                 file_manifest = get_pictures_list(llm_kwargs['most_recent_uploaded']['path'])
-                gpt_url = self.gpt_url_img
+                if len(file_manifest) > 0:
+                    print('正在使用讯飞图片理解API')
+                    gpt_url = self.gpt_url_img
         wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
         websocket.enableTrace(False)
         wsUrl = wsParam.create_url()
@@ -5,4 +5,3 @@ accelerate
|
||||
matplotlib
|
||||
huggingface_hub
|
||||
triton
|
||||
|
||||
|
||||
@@ -1,4 +1 @@
|
||||
modelscope
|
||||
transformers_stream_generator
|
||||
auto-gptq
|
||||
optimum
|
||||
dashscope
|
||||
|
||||
@@ -0,0 +1,5 @@
|
||||
modelscope
|
||||
transformers_stream_generator
|
||||
auto-gptq
|
||||
optimum
|
||||
urllib3<2
@@ -3,12 +3,14 @@
# """
def validate_path():
    import os, sys
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')

    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)

validate_path()  # validate path so you can run from base directory


validate_path()  # validate path so you can run from base directory
if __name__ == "__main__":
    # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
    # from request_llms.bridge_moss import predict_no_ui_long_connection

@@ -18,19 +20,19 @@ if __name__ == "__main__":
    # from request_llms.bridge_internlm import predict_no_ui_long_connection
    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
    from request_llms.bridge_qwen import predict_no_ui_long_connection
    from request_llms.bridge_qwen_local import predict_no_ui_long_connection

    # from request_llms.bridge_spark import predict_no_ui_long_connection
    # from request_llms.bridge_zhipu import predict_no_ui_long_connection
    # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection

    llm_kwargs = {
        'max_length': 4096,
        'top_p': 1,
        'temperature': 1,
        "max_length": 4096,
        "top_p": 1,
        "temperature": 1,
    }

    result = predict_no_ui_long_connection( inputs="请问什么是质子?",
                                            llm_kwargs=llm_kwargs,
                                            history=["你好", "我好!"],
                                            sys_prompt="")
    print('final result:', result)
    result = predict_no_ui_long_connection(
        inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
    )
    print("final result:", result)


@@ -29,16 +29,20 @@ md = """
请随时告诉我您的需求,我会尽力提供帮助。如果您有任何问题或需要解答的议题,请随时提问。
"""


def validate_path():
    import os, sys
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')

    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)
validate_path()  # validate path so you can run from base directory


validate_path()  # validate path so you can run from base directory
from toolbox import markdown_convertion

html = markdown_convertion(md)
print(html)
with open('test.html', 'w', encoding='utf-8') as f:
with open("test.html", "w", encoding="utf-8") as f:
    f.write(html)
@@ -4,16 +4,28 @@


import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path()  # 返回项目根路径


def validate_path():
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(dir_name + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)


validate_path()  # 返回项目根路径

if __name__ == "__main__":
    from tests.test_utils import plugin_test

    # plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})

    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")

    plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix")
    plugin_test(
        plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
        main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
    )

    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')

@@ -61,4 +73,3 @@ if __name__ == "__main__":

    # advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
    # plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)


@@ -9,45 +9,52 @@ from functools import wraps
import sys
import os


def chat_to_markdown_str(chat):
    result = ""
    for i, cc in enumerate(chat):
        result += f'\n\n{cc[0]}\n\n{cc[1]}'
        if i != len(chat)-1:
            result += '\n\n---'
        result += f"\n\n{cc[0]}\n\n{cc[1]}"
        if i != len(chat) - 1:
            result += "\n\n---"
    return result


def silence_stdout(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        _original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')
        sys.stdout.reconfigure(encoding='utf-8')
        sys.stdout = open(os.devnull, "w")
        sys.stdout.reconfigure(encoding="utf-8")
        for q in func(*args, **kwargs):
            sys.stdout = _original_stdout
            yield q
            sys.stdout = open(os.devnull, 'w')
            sys.stdout.reconfigure(encoding='utf-8')
            sys.stdout = open(os.devnull, "w")
            sys.stdout.reconfigure(encoding="utf-8")
        sys.stdout.close()
        sys.stdout = _original_stdout

    return wrapper


def silence_stdout_fn(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        _original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')
        sys.stdout.reconfigure(encoding='utf-8')
        sys.stdout = open(os.devnull, "w")
        sys.stdout.reconfigure(encoding="utf-8")
        result = func(*args, **kwargs)
        sys.stdout.close()
        sys.stdout = _original_stdout
        return result

    return wrapper
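A minimal usage sketch for the two silencers above (the `noisy` function is invented for illustration):

```python
# Any chatty function can be wrapped without changing its return value.
def noisy(x):
    print("lots of log output...")
    return x * 2

quiet = silence_stdout_fn(noisy)
assert quiet(21) == 42  # same result, nothing printed to the terminal
```

`silence_stdout` does the same for generator functions, restoring stdout around each yielded value so the caller can still print.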
class VoidTerminal():

class VoidTerminal:
    def __init__(self) -> None:
        pass


vt = VoidTerminal()
vt.get_conf = silence_stdout_fn(get_conf)
vt.set_conf = silence_stdout_fn(set_conf)

@@ -56,9 +63,27 @@ vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
vt.chat_to_markdown_str = (chat_to_markdown_str)
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
    vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
vt.chat_to_markdown_str = chat_to_markdown_str
(
    proxies,
    WEB_PORT,
    LLM_MODEL,
    CONCURRENT_COUNT,
    AUTHENTICATION,
    CHATBOT_HEIGHT,
    LAYOUT,
    API_KEY,
) = vt.get_conf(
    "proxies",
    "WEB_PORT",
    "LLM_MODEL",
    "CONCURRENT_COUNT",
    "AUTHENTICATION",
    "CHATBOT_HEIGHT",
    "LAYOUT",
    "API_KEY",
)


def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
    from rich.live import Live

@@ -69,9 +94,9 @@ def plugin_test(main_input, plugin, advanced_arg=None, debug=True):

    plugin = vt.get_plugin_handle(plugin)
    plugin_kwargs = vt.get_plugin_default_kwargs()
    plugin_kwargs['main_input'] = main_input
    plugin_kwargs["main_input"] = main_input
    if advanced_arg is not None:
        plugin_kwargs['plugin_kwargs'] = advanced_arg
        plugin_kwargs["plugin_kwargs"] = advanced_arg
    if debug:
        my_working_plugin = (plugin)(**plugin_kwargs)
    else:

@@ -4,14 +4,25 @@


import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path()  # 返回项目根路径


def validate_path():
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(dir_name + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)


validate_path()  # 返回项目根路径

if __name__ == "__main__":
    from tests.test_utils import plugin_test

    plugin_test(plugin='crazy_functions.知识库问答->知识库文件注入', main_input="./README.md")
    plugin_test(plugin="crazy_functions.知识库问答->知识库文件注入", main_input="./README.md")

    plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="What is the installation method?")
    plugin_test(
        plugin="crazy_functions.知识库问答->读取知识库作答",
        main_input="What is the installation method?",
    )

    plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="远程云服务器部署?")
    plugin_test(plugin="crazy_functions.知识库问答->读取知识库作答", main_input="远程云服务器部署?")

@@ -94,6 +94,10 @@
    background-color: var(--block-background-fill) !important;
}

#cbsc {
    background-color: var(--block-background-fill) !important;
}

#interact-panel .form {
    border: hidden
}
409
themes/common.js
@@ -74,6 +74,7 @@ function toast_up(msg) {
    m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255); background-color: rgba(0, 0, 100, 0.6); padding: 10px 15px; margin: 0 0 0 -60px; border-radius: 4px; position: fixed; top: 50%; left: 50%; width: auto; text-align: center;";
    document.body.appendChild(m);
}

function toast_down() {
    var m = document.getElementById('toast_up');
    if (m) {

@@ -81,6 +82,97 @@ function toast_down() {
    }
}

function begin_loading_status() {
    // Create the loader div and add styling
    var loader = document.createElement('div');
    loader.id = 'Js_File_Loading';
    var C1 = document.createElement('div');
    var C2 = document.createElement('div');
    // var C3 = document.createElement('span');
    // C3.textContent = '上传中...'
    // C3.style.position = "fixed";
    // C3.style.top = "50%";
    // C3.style.left = "50%";
    // C3.style.width = "80px";
    // C3.style.height = "80px";
    // C3.style.margin = "-40px 0 0 -40px";

    C1.style.position = "fixed";
    C1.style.top = "50%";
    C1.style.left = "50%";
    C1.style.width = "80px";
    C1.style.height = "80px";
    C1.style.borderLeft = "12px solid #00f3f300";
    C1.style.borderRight = "12px solid #00f3f300";
    C1.style.borderTop = "12px solid #82aaff";
    C1.style.borderBottom = "12px solid #82aaff"; // Added for effect
    C1.style.borderRadius = "50%";
    C1.style.margin = "-40px 0 0 -40px";
    C1.style.animation = "spinAndPulse 2s linear infinite";

    C2.style.position = "fixed";
    C2.style.top = "50%";
    C2.style.left = "50%";
    C2.style.width = "40px";
    C2.style.height = "40px";
    C2.style.borderLeft = "12px solid #00f3f300";
    C2.style.borderRight = "12px solid #00f3f300";
    C2.style.borderTop = "12px solid #33c9db";
    C2.style.borderBottom = "12px solid #33c9db"; // Added for effect
    C2.style.borderRadius = "50%";
    C2.style.margin = "-20px 0 0 -20px";
    C2.style.animation = "spinAndPulse2 2s linear infinite";

    loader.appendChild(C1);
    loader.appendChild(C2);
    // loader.appendChild(C3);
    document.body.appendChild(loader); // Add the loader to the body

    // Set the CSS animation keyframes for spin and pulse to be synchronized
    var styleSheet = document.createElement('style');
    styleSheet.id = 'Js_File_Loading_Style';
    styleSheet.textContent = `
    @keyframes spinAndPulse {
        0% { transform: rotate(0deg) scale(1); }
        25% { transform: rotate(90deg) scale(1.1); }
        50% { transform: rotate(180deg) scale(1); }
        75% { transform: rotate(270deg) scale(0.9); }
        100% { transform: rotate(360deg) scale(1); }
    }

    @keyframes spinAndPulse2 {
        0% { transform: rotate(-90deg);}
        25% { transform: rotate(-180deg);}
        50% { transform: rotate(-270deg);}
        75% { transform: rotate(-360deg);}
        100% { transform: rotate(-450deg);}
    }
    `;
    document.head.appendChild(styleSheet);
}


function cancel_loading_status() {
    // remove the loader from the body
    var loadingElement = document.getElementById('Js_File_Loading');
    if (loadingElement) {
        document.body.removeChild(loadingElement);
    }
    var loadingStyle = document.getElementById('Js_File_Loading_Style');
    if (loadingStyle) {
        document.head.removeChild(loadingStyle);
    }
    // create new listen event
    let clearButton = document.querySelectorAll('div[id*="elem_upload"] button[aria-label="Clear"]');
    for (let button of clearButton) {
        button.addEventListener('click', function () {
            setTimeout(function () {
                register_upload_event();
            }, 50);
        });
    }
}


// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 2 部分: 复制按钮
@@ -94,8 +186,7 @@ function addCopyButton(botElement) {

    const messageBtnColumnElement = botElement.querySelector('.message-btn-row');
    if (messageBtnColumnElement) {
        // Do something if .message-btn-column exists, for example, remove it
        // messageBtnColumnElement.remove();
        // if .message-btn-column exists
        return;
    }

@@ -154,32 +245,53 @@ function chatbotContentChanged(attempt = 1, force = false) {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

function chatbotAutoHeight() {
    // 自动调整高度
    // 自动调整高度:立即
    function update_height() {
        var { panel_height_target, chatbot_height, chatbot } = get_elements(true);
        if (panel_height_target != chatbot_height) {
            var pixelString = panel_height_target.toString() + 'px';
        var { height_target, chatbot_height, chatbot } = get_elements(true);
        if (height_target != chatbot_height) {
            var pixelString = height_target.toString() + 'px';
            chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
        }
    }

    // 自动调整高度:缓慢
    function update_height_slow() {
        var { panel_height_target, chatbot_height, chatbot } = get_elements();
        if (panel_height_target != chatbot_height) {
            new_panel_height = (panel_height_target - chatbot_height) * 0.5 + chatbot_height;
            if (Math.abs(new_panel_height - panel_height_target) < 10) {
                new_panel_height = panel_height_target;
        var { height_target, chatbot_height, chatbot } = get_elements();
        if (height_target != chatbot_height) {
            // sign = (height_target - chatbot_height)/Math.abs(height_target - chatbot_height);
            // speed = Math.max(Math.abs(height_target - chatbot_height), 1);
            new_panel_height = (height_target - chatbot_height) * 0.5 + chatbot_height;
            if (Math.abs(new_panel_height - height_target) < 10) {
                new_panel_height = height_target;
            }
            // console.log(chatbot_height, panel_height_target, new_panel_height);
            var pixelString = new_panel_height.toString() + 'px';
            chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
        }
    }
    monitoring_input_box()
    update_height();
    setInterval(function () {
        update_height_slow()
    }, 50); // 每100毫秒执行一次
    window.addEventListener('resize', function() { update_height(); });
    window.addEventListener('scroll', function() { update_height_slow(); });
    setInterval(function () { update_height_slow() }, 50); // 每50毫秒执行一次
}

swapped = false;
function swap_input_area() {
    // Get the elements to be swapped
    var element1 = document.querySelector("#input-panel");
    var element2 = document.querySelector("#basic-panel");

    // Get the parent of the elements
    var parent = element1.parentNode;

    // Get the next sibling of element2
    var nextSibling = element2.nextSibling;

    // Swap the elements
    parent.insertBefore(element2, element1);
    parent.insertBefore(element1, nextSibling);
    if (swapped) {swapped = false;}
    else {swapped = true;}
}
function get_elements(consider_state_panel = false) {

@@ -191,19 +303,42 @@ function get_elements(consider_state_panel = false) {
    const panel2 = document.querySelector('#basic-panel').getBoundingClientRect()
    const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect();
    // const panel4 = document.querySelector('#interact-panel').getBoundingClientRect();
    const panel5 = document.querySelector('#input-panel2').getBoundingClientRect();
    const panel_active = document.querySelector('#state-panel').getBoundingClientRect();
    if (consider_state_panel || panel_active.height < 25) {
        document.state_panel_height = panel_active.height;
    }
    // 25 是chatbot的label高度, 16 是右侧的gap
    var panel_height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16 * 2;
    var height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16 * 2;
    // 禁止动态的state-panel高度影响
    panel_height_target = panel_height_target + (document.state_panel_height - panel_active.height)
    var panel_height_target = parseInt(panel_height_target);
    height_target = height_target + (document.state_panel_height - panel_active.height)
    var height_target = parseInt(height_target);
    var chatbot_height = chatbot.style.height;
    // 交换输入区位置,使得输入区始终可用
    if (!swapped){
        if (panel1.top!=0 && (panel1.bottom + panel1.top)/2 < 0){ swap_input_area(); }
    }
    else if (swapped){
        if (panel2.top!=0 && panel2.top > 0){ swap_input_area(); }
    }
    // 调整高度
    const err_tor = 5;
    if (Math.abs(panel1.left - chatbot.getBoundingClientRect().left) < err_tor){
        // 是否处于窄屏模式
        height_target = window.innerHeight * 0.6;
    }else{
        // 调整高度
        const chatbot_height_exceed = 15;
        const chatbot_height_exceed_m = 10;
        b_panel = Math.max(panel1.bottom, panel2.bottom, panel3.bottom)
        if (b_panel >= window.innerHeight - chatbot_height_exceed) {
            height_target = window.innerHeight - chatbot.getBoundingClientRect().top - chatbot_height_exceed_m;
        }
        else if (b_panel < window.innerHeight * 0.75) {
            height_target = window.innerHeight * 0.8;
        }
    }
    var chatbot_height = parseInt(chatbot_height);
    return { panel_height_target, chatbot_height, chatbot };
    return { height_target, chatbot_height, chatbot };
}
@@ -217,9 +352,47 @@ var elem_upload_float = null;
var elem_input_main = null;
var elem_input_float = null;
var elem_chatbot = null;
var elem_upload_component_float = null;
var elem_upload_component = null;
var exist_file_msg = '⚠️请先删除上传区(左上方)中的历史文件,再尝试上传。'

function add_func_paste(input) {
function locate_upload_elems(){
    elem_upload = document.getElementById('elem_upload')
    elem_upload_float = document.getElementById('elem_upload_float')
    elem_input_main = document.getElementById('user_input_main')
    elem_input_float = document.getElementById('user_input_float')
    elem_chatbot = document.getElementById('gpt-chatbot')
    elem_upload_component_float = elem_upload_float.querySelector("input[type=file]");
    elem_upload_component = elem_upload.querySelector("input[type=file]");
}

async function upload_files(files) {
    let totalSizeMb = 0
    elem_upload_component_float = elem_upload_float.querySelector("input[type=file]");
    if (files && files.length > 0) {
        // 执行具体的上传逻辑
        if (elem_upload_component_float) {
            for (let i = 0; i < files.length; i++) {
                // 将从文件数组中获取的文件大小(单位为字节)转换为MB,
                totalSizeMb += files[i].size / 1024 / 1024;
            }
            // 检查文件总大小是否超过20MB
            if (totalSizeMb > 20) {
                toast_push('⚠️文件夹大于 20MB 🚀上传文件中', 3000);
            }
            let event = new Event("change");
            Object.defineProperty(event, "target", { value: elem_upload_component_float, enumerable: true });
            Object.defineProperty(event, "currentTarget", { value: elem_upload_component_float, enumerable: true });
            Object.defineProperty(elem_upload_component_float, "files", { value: files, enumerable: true });
            elem_upload_component_float.dispatchEvent(event);
        } else {
            console.log(exist_file_msg);
            toast_push(exist_file_msg, 3000);
        }
    }
}

function register_func_paste(input) {
    let paste_files = [];
    if (input) {
        input.addEventListener("paste", async function (e) {

@@ -245,7 +418,7 @@ function add_func_paste(input) {
    }
}

function add_func_drag(elem) {
function register_func_drag(elem) {
    if (elem) {
        const dragEvents = ["dragover"];
        const leaveEvents = ["dragleave", "dragend", "drop"];

@@ -281,113 +454,74 @@ function add_func_drag(elem) {
    }
}

async function upload_files(files) {
    const uploadInputElement = elem_upload_float.querySelector("input[type=file]");
    let totalSizeMb = 0
    if (files && files.length > 0) {
        // 执行具体的上传逻辑
        if (uploadInputElement) {
            for (let i = 0; i < files.length; i++) {
                // 将从文件数组中获取的文件大小(单位为字节)转换为MB,
                totalSizeMb += files[i].size / 1024 / 1024;
            }
            // 检查文件总大小是否超过20MB
            if (totalSizeMb > 20) {
                toast_push('⚠️文件夹大于 20MB 🚀上传文件中', 3000)
                // return; // 如果超过了指定大小, 可以不进行后续上传操作
            }
            // 监听change事件, 原生Gradio可以实现
            // uploadInputElement.addEventListener('change', function(){replace_input_string()});
            let event = new Event("change");
            Object.defineProperty(event, "target", { value: uploadInputElement, enumerable: true });
            Object.defineProperty(event, "currentTarget", { value: uploadInputElement, enumerable: true });
            Object.defineProperty(uploadInputElement, "files", { value: files, enumerable: true });
            uploadInputElement.dispatchEvent(event);
        } else {
            toast_push(exist_file_msg, 3000)
        }
    }
}

function begin_loading_status() {
    // Create the loader div and add styling
    var loader = document.createElement('div');
    loader.id = 'Js_File_Loading';
    loader.style.position = "absolute";
    loader.style.top = "50%";
    loader.style.left = "50%";
    loader.style.width = "60px";
    loader.style.height = "60px";
    loader.style.border = "16px solid #f3f3f3";
    loader.style.borderTop = "16px solid #3498db";
    loader.style.borderRadius = "50%";
    loader.style.animation = "spin 2s linear infinite";
    loader.style.transform = "translate(-50%, -50%)";
    document.body.appendChild(loader); // Add the loader to the body
    // Set the CSS animation keyframes
    var styleSheet = document.createElement('style');
    // styleSheet.type = 'text/css';
    styleSheet.id = 'Js_File_Loading_Style'
    styleSheet.innerText = `
    @keyframes spin {
        0% { transform: rotate(0deg); }
        100% { transform: rotate(360deg); }
    }`;
    document.head.appendChild(styleSheet);
}

function cancel_loading_status() {
    var loadingElement = document.getElementById('Js_File_Loading');
    if (loadingElement) {
        document.body.removeChild(loadingElement); // remove the loader from the body
    }
    var loadingStyle = document.getElementById('Js_File_Loading_Style');
    if (loadingStyle) {
        document.head.removeChild(loadingStyle);
    }
    let clearButton = document.querySelectorAll('div[id*="elem_upload"] button[aria-label="Clear"]');
    for (let button of clearButton) {
        button.addEventListener('click', function () {
            setTimeout(function () {
                register_upload_event();
            }, 50);
function elem_upload_component_pop_message(elem) {
    if (elem) {
        const dragEvents = ["dragover"];
        const leaveEvents = ["dragleave", "dragend", "drop"];
        dragEvents.forEach(event => {
            elem.addEventListener(event, function (e) {
                e.preventDefault();
                e.stopPropagation();
                if (elem_upload_float.querySelector("input[type=file]")) {
                    toast_up('⚠️释放以上传文件')
                } else {
                    toast_up(exist_file_msg)
                }
            });
        });
    }
}

function register_upload_event() {
    elem_upload_float = document.getElementById('elem_upload_float')
    const upload_component = elem_upload_float.querySelector("input[type=file]");
    if (upload_component) {
        upload_component.addEventListener('change', function (event) {
        leaveEvents.forEach(event => {
            elem.addEventListener(event, function (e) {
                toast_down();
                e.preventDefault();
                e.stopPropagation();
            });
        });
        elem.addEventListener("drop", async function (e) {
            toast_push('正在上传中,请稍等。', 2000);
            begin_loading_status();
        });
    }
}

function register_upload_event() {
    locate_upload_elems();
    if (elem_upload_float) {
        _upload = document.querySelector("#elem_upload_float div.center.boundedheight.flex")
        elem_upload_component_pop_message(_upload);
    }
    if (elem_upload_component_float) {
        elem_upload_component_float.addEventListener('change', function (event) {
            toast_push('正在上传中,请稍等。', 2000);
            begin_loading_status();
        });
    }
    if (elem_upload_component) {
        elem_upload_component.addEventListener('change', function (event) {
            toast_push('正在上传中,请稍等。', 2000);
            begin_loading_status();
        });
    }else{
        toast_push("oppps", 3000);
    }
}

function monitoring_input_box() {
    register_upload_event();

    elem_upload = document.getElementById('elem_upload')
    elem_upload_float = document.getElementById('elem_upload_float')
    elem_input_main = document.getElementById('user_input_main')
    elem_input_float = document.getElementById('user_input_float')
    elem_chatbot = document.getElementById('gpt-chatbot')

    if (elem_input_main) {
        if (elem_input_main.querySelector("textarea")) {
            add_func_paste(elem_input_main.querySelector("textarea"))
            register_func_paste(elem_input_main.querySelector("textarea"))
        }
    }
    if (elem_input_float) {
        if (elem_input_float.querySelector("textarea")) {
            add_func_paste(elem_input_float.querySelector("textarea"))
            register_func_paste(elem_input_float.querySelector("textarea"))
        }
    }
    if (elem_chatbot) {
        add_func_drag(elem_chatbot)
        register_func_drag(elem_chatbot)
    }

}
@@ -441,8 +575,62 @@ function audio_fn_init() {
    }
}

function minor_ui_adjustment() {
    let cbsc_area = document.getElementById('cbsc');
    cbsc_area.style.paddingTop = '15px';
    var bar_btn_width = [];
    // 自动隐藏超出范围的toolbar按钮
    function auto_hide_toolbar() {
        var qq = document.getElementById('tooltip');
        var tab_nav = qq.getElementsByClassName('tab-nav');
        if (tab_nav.length == 0){ return; }
        var btn_list = tab_nav[0].getElementsByTagName('button')
        if (btn_list.length == 0){ return; }
        // 获取页面宽度
        var page_width = document.documentElement.clientWidth;
        // 总是保留的按钮数量
        const always_preserve = 2;
        // 获取最后一个按钮的右侧位置
        var cur_right = btn_list[always_preserve-1].getBoundingClientRect().right;
        if (bar_btn_width.length == 0){
            // 首次运行,记录每个按钮的宽度
            for (var i = 0; i < btn_list.length; i++) {
                bar_btn_width.push(btn_list[i].getBoundingClientRect().width);
            }
        }
        // 处理每一个按钮
        for (var i = always_preserve; i < btn_list.length; i++) {
            var element = btn_list[i];
            var element_right = element.getBoundingClientRect().right;
            if (element_right!=0){ cur_right = element_right; }
            if (element.style.display === 'none') {
                if ((cur_right + bar_btn_width[i]) < (page_width * 0.37)) {
                    // 恢复显示当前按钮
                    element.style.display = 'block';
                    // console.log('show');
                    return;
                }else{
                    return;
                }
            } else {
                if (cur_right > (page_width * 0.38)) {
                    // 隐藏当前按钮以及右侧所有按钮
                    for (var j = i; j < btn_list.length; j++) {
                        if (btn_list[j].style.display !== 'none') {
                            btn_list[j].style.display = 'none';
                        }
                    }
                    // console.log('show');
                    return;
                }
            }
        }
    }

    setInterval(function () {
        auto_hide_toolbar()
    }, 200); // 每50毫秒执行一次
}

// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 6 部分: JS初始化函数

@@ -450,6 +638,7 @@ function audio_fn_init() {

function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
    audio_fn_init();
    minor_ui_adjustment();
    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
    var chatbotObserver = new MutationObserver(() => {
        chatbotContentChanged(1);

@@ -479,4 +479,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */
@@ -1,18 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')

CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)

def adjust_theme():


def adjust_theme():
    try:
        color_er = gr.themes.utils.colors.fuchsia
        set_theme = gr.themes.Default(
            primary_hue=gr.themes.utils.colors.orange,
            neutral_hue=gr.themes.utils.colors.gray,
            font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
            font_mono=["ui-monospace", "Consolas", "monospace"])
            font=[
                "Helvetica",
                "Microsoft YaHei",
                "ui-sans-serif",
                "sans-serif",
                "system-ui",
            ],
            font_mono=["ui-monospace", "Consolas", "monospace"],
        )
        set_theme.set(
            # Colors
            input_background_fill_dark="*neutral_800",

@@ -59,7 +67,7 @@ def adjust_theme():
            button_cancel_text_color_dark="white",
        )

        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
        with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
            js = f"<script>{f.read()}</script>"

        # 添加一个萌萌的看板娘

@@ -69,21 +77,26 @@ def adjust_theme():
        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
        <script src="file=docs/waifu_plugin/autoload.js"></script>
        """
        if not hasattr(gr, 'RawTemplateResponse'):
        if not hasattr(gr, "RawTemplateResponse"):
            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
        gradio_original_template_fn = gr.RawTemplateResponse

        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
            res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
            res.init_headers()
            return res
        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template

        gr.routes.templates.TemplateResponse = (
            gradio_new_template_fn  # override gradio template
        )
    except:
        set_theme = None
        print('gradio版本较旧, 不能自定义字体和颜色')
        print("gradio版本较旧, 不能自定义字体和颜色")
    return set_theme

with open(os.path.join(theme_dir, 'contrast.css'), "r", encoding="utf-8") as f:

with open(os.path.join(theme_dir, "contrast.css"), "r", encoding="utf-8") as f:
    advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
    advanced_css += f.read()
@@ -303,4 +303,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */


@@ -1,17 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
theme_dir = os.path.dirname(__file__)
def adjust_theme():

CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)


def adjust_theme():
    try:
        color_er = gr.themes.utils.colors.fuchsia
        set_theme = gr.themes.Default(
            primary_hue=gr.themes.utils.colors.orange,
            neutral_hue=gr.themes.utils.colors.gray,
            font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
            font_mono=["ui-monospace", "Consolas", "monospace"])
            font=[
                "Helvetica",
                "Microsoft YaHei",
                "ui-sans-serif",
                "sans-serif",
                "system-ui",
            ],
            font_mono=["ui-monospace", "Consolas", "monospace"],
        )
        set_theme.set(
            # Colors
            input_background_fill_dark="*neutral_800",

@@ -58,7 +67,7 @@ def adjust_theme():
            button_cancel_text_color_dark="white",
        )

        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
        with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
            js = f"<script>{f.read()}</script>"

        # 添加一个萌萌的看板娘

@@ -68,21 +77,26 @@ def adjust_theme():
        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
        <script src="file=docs/waifu_plugin/autoload.js"></script>
        """
        if not hasattr(gr, 'RawTemplateResponse'):
        if not hasattr(gr, "RawTemplateResponse"):
            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
        gradio_original_template_fn = gr.RawTemplateResponse

        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
            res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
            res.init_headers()
            return res
        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template

        gr.routes.templates.TemplateResponse = (
            gradio_new_template_fn  # override gradio template
        )
    except:
        set_theme = None
        print('gradio版本较旧, 不能自定义字体和颜色')
        print("gradio版本较旧, 不能自定义字体和颜色")
    return set_theme

with open(os.path.join(theme_dir, 'default.css'), "r", encoding="utf-8") as f:

with open(os.path.join(theme_dir, "default.css"), "r", encoding="utf-8") as f:
    advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
    advanced_css += f.read()
@@ -2,29 +2,36 @@ import logging
import os
import gradio as gr
from toolbox import get_conf, ProxyNetworkActivate
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')

CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)


def dynamic_set_theme(THEME):
    set_theme = gr.themes.ThemeClass()
    with ProxyNetworkActivate('Download_Gradio_Theme'):
        logging.info('正在下载Gradio主题,请稍等。')
        if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
        if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
    with ProxyNetworkActivate("Download_Gradio_Theme"):
        logging.info("正在下载Gradio主题,请稍等。")
        if THEME.startswith("Huggingface-"):
            THEME = THEME.lstrip("Huggingface-")
        if THEME.startswith("huggingface-"):
            THEME = THEME.lstrip("huggingface-")
        set_theme = set_theme.from_hub(THEME.lower())
    return set_theme


def adjust_theme():
    try:
        set_theme = gr.themes.ThemeClass()
        with ProxyNetworkActivate('Download_Gradio_Theme'):
            logging.info('正在下载Gradio主题,请稍等。')
            THEME = get_conf('THEME')
            if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
            if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
        with ProxyNetworkActivate("Download_Gradio_Theme"):
            logging.info("正在下载Gradio主题,请稍等。")
            THEME = get_conf("THEME")
            if THEME.startswith("Huggingface-"):
                THEME = THEME.lstrip("Huggingface-")
            if THEME.startswith("huggingface-"):
                THEME = THEME.lstrip("huggingface-")
            set_theme = set_theme.from_hub(THEME.lower())

        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
        with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
            js = f"<script>{f.read()}</script>"

        # 添加一个萌萌的看板娘

@@ -34,20 +41,26 @@ def adjust_theme():
        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
        <script src="file=docs/waifu_plugin/autoload.js"></script>
        """
        if not hasattr(gr, 'RawTemplateResponse'):
        if not hasattr(gr, "RawTemplateResponse"):
            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
        gradio_original_template_fn = gr.RawTemplateResponse

        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
            res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
            res.init_headers()
            return res
        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template
    except Exception as e:

        gr.routes.templates.TemplateResponse = (
            gradio_new_template_fn  # override gradio template
        )
    except Exception:
        set_theme = None
        from toolbox import trimmed_format_exc
        logging.error('gradio版本较旧, 不能自定义字体和颜色:', trimmed_format_exc())

        logging.error("gradio版本较旧, 不能自定义字体和颜色:", trimmed_format_exc())
    return set_theme

with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:

with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
    advanced_css = f.read()
@@ -197,12 +197,12 @@ footer {
}
textarea.svelte-1pie7s6 {
    background: #e7e6e6 !important;
    width: 96% !important;
    width: 100% !important;
}

.dark textarea.svelte-1pie7s6 {
    background: var(--input-background-fill) !important;
    width: 96% !important;
    width: 100% !important;
}

.dark input[type=number].svelte-1cl284s {

@@ -256,13 +256,13 @@ textarea.svelte-1pie7s6 {
    max-height: 95% !important;
    overflow-y: auto !important;
}*/
.app.svelte-1mya07g.svelte-1mya07g {
/* .app.svelte-1mya07g.svelte-1mya07g {
    max-width: 100%;
    position: relative;
    padding: var(--size-4);
    width: 100%;
    height: 100%;
}
} */

.gradio-container-3-32-2 h1 {
    font-weight: 700 !important;

@@ -508,12 +508,14 @@ ol:not(.options), ul:not(.options) {
[data-testid = "bot"] {
    max-width: 85%;
    border-bottom-left-radius: 0 !important;
    box-shadow: 2px 2px 0px 1px rgba(0, 0, 0, 0.06);
    background-color: var(--message-bot-background-color-light) !important;
}
[data-testid = "user"] {
    max-width: 85%;
    width: auto !important;
    border-bottom-right-radius: 0 !important;
    box-shadow: 2px 2px 0px 1px rgba(0, 0, 0, 0.06);
    background-color: var(--message-user-background-color-light) !important;
}
.dark [data-testid = "bot"] {
@@ -1,9 +1,11 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')

CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)


def adjust_theme():
    try:
        set_theme = gr.themes.Soft(

@@ -50,7 +52,6 @@ def adjust_theme():
            c900="#2B2B2B",
            c950="#171717",
        ),

        radius_size=gr.themes.sizes.radius_sm,
        ).set(
            button_primary_background_fill="*primary_500",

@@ -75,7 +76,7 @@ def adjust_theme():
            chatbot_code_background_color_dark="*neutral_950",
        )

        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
        with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
            js = f"<script>{f.read()}</script>"

        # 添加一个萌萌的看板娘

@@ -86,24 +87,29 @@ def adjust_theme():
        <script src="file=docs/waifu_plugin/autoload.js"></script>
        """

        with open(os.path.join(theme_dir, 'green.js'), 'r', encoding='utf8') as f:
        with open(os.path.join(theme_dir, "green.js"), "r", encoding="utf8") as f:
            js += f"<script>{f.read()}</script>"

        if not hasattr(gr, 'RawTemplateResponse'):
        if not hasattr(gr, "RawTemplateResponse"):
            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
        gradio_original_template_fn = gr.RawTemplateResponse

        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
            res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
            res.init_headers()
            return res
        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template

        gr.routes.templates.TemplateResponse = (
            gradio_new_template_fn  # override gradio template
        )
    except:
        set_theme = None
        print('gradio版本较旧, 不能自定义字体和颜色')
        print("gradio版本较旧, 不能自定义字体和颜色")
    return set_theme

with open(os.path.join(theme_dir, 'green.css'), "r", encoding="utf-8") as f:

with open(os.path.join(theme_dir, "green.css"), "r", encoding="utf-8") as f:
    advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
    advanced_css += f.read()
@@ -10,29 +10,33 @@ from toolbox import get_conf
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""


def load_dynamic_theme(THEME):
    adjust_dynamic_theme = None
    if THEME == 'Chuanhu-Small-and-Beautiful':
    if THEME == "Chuanhu-Small-and-Beautiful":
        from .green import adjust_theme, advanced_css
        theme_declaration = "<h2 align=\"center\" class=\"small\">[Chuanhu-Small-and-Beautiful主题]</h2>"
    elif THEME == 'High-Contrast':

        theme_declaration = (
            '<h2 align="center" class="small">[Chuanhu-Small-and-Beautiful主题]</h2>'
        )
    elif THEME == "High-Contrast":
        from .contrast import adjust_theme, advanced_css

        theme_declaration = ""
    elif '/' in THEME:
    elif "/" in THEME:
        from .gradios import adjust_theme, advanced_css
        from .gradios import dynamic_set_theme

        adjust_dynamic_theme = dynamic_set_theme(THEME)
        theme_declaration = ""
    else:
        from .default import adjust_theme, advanced_css

        theme_declaration = ""
    return adjust_theme, advanced_css, theme_declaration, adjust_dynamic_theme

adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf('THEME'))


adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf("THEME"))


"""

@@ -42,26 +46,26 @@ cookie相关工具函数
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""


def init_cookie(cookies, chatbot):
    # 为每一位访问的用户赋予一个独一无二的uuid编码
    cookies.update({'uuid': uuid.uuid4()})
    cookies.update({"uuid": uuid.uuid4()})
    return cookies


def to_cookie_str(d):
    # Pickle the dictionary and encode it as a string
    pickled_dict = pickle.dumps(d)
    cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
    cookie_value = base64.b64encode(pickled_dict).decode("utf-8")
    return cookie_value


def from_cookie_str(c):
    # Decode the base64-encoded string and unpickle it into a dictionary
    pickled_dict = base64.b64decode(c.encode('utf-8'))
    pickled_dict = base64.b64decode(c.encode("utf-8"))
    return pickle.loads(pickled_dict)
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
第 3 部分

@@ -114,5 +118,3 @@ js_code_for_persistent_cookie_init = """(persistent_cookie) => {
    return getCookie("persistent_cookie");
}
"""
178
toolbox.py
@@ -11,8 +11,10 @@ import glob
import math
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache

pj = os.path.join
default_user_name = 'default_user'

"""
========================================================================
第一部分

@@ -26,6 +28,7 @@ default_user_name = 'default_user'
========================================================================
"""


class ChatBotWithCookies(list):
    def __init__(self, cookie):
        """

@@ -67,18 +70,18 @@ def ArgsGeneralWrapper(f):
        else:
            user_name = default_user_name
        cookies.update({
            'top_p':top_p,
            'top_p': top_p,
            'api_key': cookies['api_key'],
            'llm_model': llm_model,
            'temperature':temperature,
            'temperature': temperature,
            'user_name': user_name,
        })
        llm_kwargs = {
            'api_key': cookies['api_key'],
            'llm_model': llm_model,
            'top_p':top_p,
            'top_p': top_p,
            'max_length': max_length,
            'temperature':temperature,
            'temperature': temperature,
            'client_ip': request.client.host,
            'most_recent_uploaded': cookies.get('most_recent_uploaded')
        }

@@ -103,8 +106,10 @@ def ArgsGeneralWrapper(f):
        final_cookies = chatbot_with_cookie.get_cookies()
        # len(args) != 0 代表“提交”键对话通道,或者基础功能通道
        if len(args) != 0 and 'files_to_promote' in final_cookies and len(final_cookies['files_to_promote']) > 0:
            chatbot_with_cookie.append(["检测到**滞留的缓存文档**,请及时处理。", "请及时点击“**保存当前对话**”获取所有滞留文档。"])
            chatbot_with_cookie.append(
                ["检测到**滞留的缓存文档**,请及时处理。", "请及时点击“**保存当前对话**”获取所有滞留文档。"])
            yield from update_ui(chatbot_with_cookie, final_cookies['history'], msg="检测到被滞留的缓存文档")

    return decorated


@@ -129,6 +134,7 @@ def update_ui(chatbot, history, msg='正常', **kwargs):  # 刷新界面

    yield cookies, chatbot_gr, history, msg


def update_ui_lastest_msg(lastmsg, chatbot, history, delay=1):  # 刷新界面
    """
    刷新用户界面

@@ -147,6 +153,7 @@ def trimmed_format_exc():
    replace_path = "."
    return str.replace(current_path, replace_path)


def CatchException(f):
    """
    装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。

@@ -164,9 +171,9 @@ def CatchException(f):
            if len(chatbot_with_cookie) == 0:
                chatbot_with_cookie.clear()
                chatbot_with_cookie.append(["插件调度异常", "异常原因"])
            chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0],
                                       f"[Local Message] 插件调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
            yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}')  # 刷新界面
            chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0], f"[Local Message] 插件调用出错: \n\n{tb_str} \n")
            yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}')  # 刷新界面

    return decorated


@@ -209,6 +216,7 @@ def HotReload(f):
========================================================================
"""


def get_reduce_token_percent(text):
    """
    * 此函数未来将被弃用

@@ -220,9 +228,9 @@ def get_reduce_token_percent(text):
        EXCEED_ALLO = 500  # 稍微留一点余地,否则在回复时会因余量太少出问题
        max_limit = float(match[0]) - EXCEED_ALLO
        current_tokens = float(match[1])
        ratio = max_limit/current_tokens
        ratio = max_limit / current_tokens
        assert ratio > 0 and ratio < 1
        return ratio, str(int(current_tokens-max_limit))
        return ratio, str(int(current_tokens - max_limit))
    except:
        return 0.5, '不详'
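To make the arithmetic above concrete, a worked example with invented numbers (assuming the regex captured the two token counts in order):

```python
# match = ('4097', '5000'): a 4097-token limit against a 5000-token request.
# max_limit = 4097 - 500 = 3597.0
# ratio = 3597.0 / 5000 = 0.7194       (0 < ratio < 1, so the assert holds)
# returned overflow string: str(int(5000 - 3597.0)) == '1403'
```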
@@ -268,8 +276,6 @@ def regular_txt_to_markdown(text):
    return text


def report_exception(chatbot, history, a, b):
    """
    向chatbot中添加错误信息

@@ -352,7 +358,8 @@ def markdown_convertion(txt):
        """
        解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
        """
        content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
        content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
                                  '<script type="math/tex; mode=display">')
        content = content.replace('</script>\n</script>', '</script>')
        return content

@@ -363,16 +370,16 @@ def markdown_convertion(txt):
        if '```' in txt and '```reference' not in txt: return False
        if '$' not in txt and '\\[' not in txt: return False
        mathpatterns = {
            r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False},  # $...$
            r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True},  # $$...$$
            r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False},  # \[...\]
            # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False},  # \(...\)
            # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True},  # \begin...\end
            # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False},  # $`...`$
            r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False},          # $...$
            r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True},          # $$...$$
            r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False},            # \[...\]
            # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False},          # \(...\)
            # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True},  # \begin...\end
            # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False},          # $`...`$
        }
        matches = []
        for pattern, property in mathpatterns.items():
            flags = re.ASCII|re.DOTALL if property['allow_multi_lines'] else re.ASCII
            flags = re.ASCII | re.DOTALL if property['allow_multi_lines'] else re.ASCII
            matches.extend(re.findall(pattern, txt, flags))
        if len(matches) == 0: return False
        contain_any_eq = False

@@ -389,7 +396,7 @@ def markdown_convertion(txt):
    def fix_markdown_indent(txt):
        # fix markdown indent
        if (' - ' not in txt) or ('. ' not in txt):
            return txt # do not need to fix, fast escape
            return txt  # do not need to fix, fast escape
        # walk through the lines and fix non-standard indentation
        lines = txt.split("\n")
        pattern = re.compile(r'^\s+-')

@@ -401,7 +408,7 @@ def markdown_convertion(txt):
            stripped_string = line.lstrip()
            num_spaces = len(line) - len(stripped_string)
            if (num_spaces % 4) == 3:
                num_spaces_should_be = math.ceil(num_spaces/4) * 4
                num_spaces_should_be = math.ceil(num_spaces / 4) * 4
                lines[i] = ' ' * num_spaces_should_be + stripped_string
        return '\n'.join(lines)
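A short sketch of the indent rule above (the strings are invented, and the nested helper is treated as if it were directly callable for the sake of the example):

```python
# A 3-space bullet (3 % 4 == 3) is promoted to the next multiple of four,
# so markdown parsers treat it as a properly nested sub-item.
before = "1. item\n   - sub"
after = fix_markdown_indent(before)
assert after == "1. item\n    - sub"
```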
@@ -409,7 +416,8 @@ def markdown_convertion(txt):
    if is_equation(txt):  # 有$标识的公式符号,且没有代码段```的标识
        # convert everything to html format
        split = markdown.markdown(text='---')
        convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'fenced_code'], extension_configs=markdown_extension_configs)
        convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'fenced_code'],
                                            extension_configs=markdown_extension_configs)
        convert_stage_1 = markdown_bug_hunt(convert_stage_1)
        # 1. convert to easy-to-copy tex (do not render math)
        convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)

@@ -441,8 +449,7 @@ def close_up_code_segment_during_stream(gpt_reply):
    segments = gpt_reply.split('```')
    n_mark = len(segments) - 1
    if n_mark % 2 == 1:
        # print('输出代码片段中!')
        return gpt_reply+'\n```'
        return gpt_reply + '\n```'  # 输出代码片段中!
    else:
        return gpt_reply
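A minimal sketch of the stream-closing helper above (the `fence` indirection just avoids literal backticks inside this example):

```python
fence = "`" * 3  # ```
streaming = f"Here is code: {fence}python\nprint(1)"   # fence still open
assert close_up_code_segment_during_stream(streaming) == streaming + "\n" + fence
balanced = f"Done: {fence}python\nprint(1)\n{fence}"   # already closed
assert close_up_code_segment_during_stream(balanced) == balanced
```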
|
||||
|
||||
@@ -559,6 +566,7 @@ def file_already_in_downloadzone(file, user_path):
|
||||
except:
|
||||
return False
|
||||
|
||||
|
||||
def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
|
||||
# 将文件复制一份到下载区
|
||||
import shutil
|
||||
@@ -581,8 +589,10 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
|
||||
if not os.path.exists(new_path): shutil.copyfile(file, new_path)
|
||||
# 将文件添加到chatbot cookie中
|
||||
if chatbot is not None:
|
||||
if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
|
||||
else: current = []
|
||||
if 'files_to_promote' in chatbot._cookies:
|
||||
current = chatbot._cookies['files_to_promote']
|
||||
else:
|
||||
current = []
|
||||
if new_path not in current: # 避免把同一个文件添加多次
|
||||
chatbot._cookies.update({'files_to_promote': [new_path] + current})
|
||||
return new_path
|
||||
@@ -605,8 +615,10 @@ def del_outdated_uploads(outdate_time_seconds, target_path_base=None):
|
||||
for subdirectory in glob.glob(f'{user_upload_dir}/*'):
|
||||
subdirectory_time = os.path.getmtime(subdirectory)
|
||||
if subdirectory_time < one_hour_ago:
|
||||
try: shutil.rmtree(subdirectory)
|
||||
except: pass
|
||||
try:
|
||||
shutil.rmtree(subdirectory)
|
||||
except:
|
||||
pass
|
||||
return
|
||||
|
||||
|
||||
@@ -681,7 +693,7 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
     os.makedirs(target_path_base, exist_ok=True)

     # 移除过时的旧文件从而节省空间&保护隐私
-    outdate_time_seconds = 3600 # 一小时
+    outdate_time_seconds = 3600  # 一小时
     del_outdated_uploads(outdate_time_seconds, get_upload_folder(user_name))

     # 逐个文件转移到目标路径
@@ -690,12 +702,7 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
         file_origin_name = os.path.basename(file.orig_name)
         this_file_path = pj(target_path_base, file_origin_name)
         shutil.move(file.name, this_file_path)
-        upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path+'.extract')
-
-    if "浮动输入区" in checkboxes:
-        txt, txt2 = "", target_path_base
-    else:
-        txt, txt2 = target_path_base, ""
+        upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path + '.extract')

     # 整理文件集合 输出消息
     moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
@@ -703,7 +710,11 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
     chatbot.append(['我上传了文件,请查收',
                     f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
                     f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
-                    f'\n\n现在您点击任意函数插件时,以上文件将被作为输入参数'+upload_msg])
+                    f'\n\n现在您点击任意函数插件时,以上文件将被作为输入参数' + upload_msg])
+
+    txt, txt2 = target_path_base, ""
+    if "浮动输入区" in checkboxes:
+        txt, txt2 = txt2, txt

     # 记录近期文件
     cookies.update({
@@ -732,34 +743,40 @@ def on_report_generated(cookies, files, chatbot):
     chatbot.append(['报告如何远程获取?', f'报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。{file_links}'])
     return cookies, report_files, chatbot

+
 def load_chat_cookies():
     API_KEY, LLM_MODEL, AZURE_API_KEY = get_conf('API_KEY', 'LLM_MODEL', 'AZURE_API_KEY')
     AZURE_CFG_ARRAY, NUM_CUSTOM_BASIC_BTN = get_conf('AZURE_CFG_ARRAY', 'NUM_CUSTOM_BASIC_BTN')

     # deal with azure openai key
     if is_any_api_key(AZURE_API_KEY):
-        if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY
-        else: API_KEY = AZURE_API_KEY
+        if is_any_api_key(API_KEY):
+            API_KEY = API_KEY + ',' + AZURE_API_KEY
+        else:
+            API_KEY = AZURE_API_KEY
     if len(AZURE_CFG_ARRAY) > 0:
         for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
             if not azure_model_name.startswith('azure'):
                 raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
             AZURE_API_KEY_ = azure_cfg_dict["AZURE_API_KEY"]
             if is_any_api_key(AZURE_API_KEY_):
-                if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY_
-                else: API_KEY = AZURE_API_KEY_
+                if is_any_api_key(API_KEY):
+                    API_KEY = API_KEY + ',' + AZURE_API_KEY_
+                else:
+                    API_KEY = AZURE_API_KEY_

     customize_fn_overwrite_ = {}
     for k in range(NUM_CUSTOM_BASIC_BTN):
         customize_fn_overwrite_.update({
             "自定义按钮" + str(k+1):{
-               "Title": r"",
-               "Prefix": r"请在自定义菜单中定义提示词前缀.",
-               "Suffix": r"请在自定义菜单中定义提示词后缀",
+                "Title": r"",
+                "Prefix": r"请在自定义菜单中定义提示词前缀.",
+                "Suffix": r"请在自定义菜单中定义提示词后缀",
             }
         })
     return {'api_key': API_KEY, 'llm_model': LLM_MODEL, 'customize_fn_overwrite': customize_fn_overwrite_}

+
 def is_openai_api_key(key):
     CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
     if len(CUSTOM_API_KEY_PATTERN) != 0:
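The `load_chat_cookies` refactor above is purely stylistic; the function still folds every valid Azure key into one comma-separated `API_KEY` string, which `is_any_api_key` later splits and validates per entry. A toy version of that merge rule (fake keys, trivial validator):

```python
def merge_key(api_key: str, extra_key: str) -> str:
    # Mirrors the branch in the hunk: append when a key is already set,
    # otherwise the extra key becomes the whole key string.
    if not extra_key:
        return api_key
    return api_key + ',' + extra_key if api_key else extra_key

print(merge_key('openai-key1', 'azure-key3'))  # 'openai-key1,azure-key3'
print(merge_key('', 'azure-key3'))             # 'azure-key3'
```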
@@ -768,14 +785,17 @@ def is_openai_api_key(key):
     API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
     return bool(API_MATCH_ORIGINAL)

+
 def is_azure_api_key(key):
     API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
     return bool(API_MATCH_AZURE)

+
 def is_api2d_key(key):
     API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
     return bool(API_MATCH_API2D)

+
 def is_any_api_key(key):
     if ',' in key:
         keys = key.split(',')
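The three validators above are only re-spaced, but they document the expected key shapes: OpenAI keys are `sk-` plus 48 alphanumerics, Azure keys are bare 32 alphanumerics, API2D keys are `fk` + 6 chars + `-` + 32 chars. Checking with deliberately fake keys of the right shape:

```python
import re

fake_openai = 'sk-' + 'a' * 48
fake_azure = 'b' * 32
fake_api2d = 'fk' + 'c' * 6 + '-' + 'd' * 32

print(bool(re.match(r"sk-[a-zA-Z0-9]{48}$", fake_openai)))               # True
print(bool(re.match(r"[a-zA-Z0-9]{32}$", fake_azure)))                   # True
print(bool(re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", fake_api2d)))  # True
print(bool(re.match(r"sk-[a-zA-Z0-9]{48}$", 'sk-tooshort')))             # False
```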
@@ -785,8 +805,9 @@ def is_any_api_key(key):
     else:
         return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key)

+
 def what_keys(keys):
-    avail_key_list = {'OpenAI Key':0, "Azure Key":0, "API2D Key":0}
+    avail_key_list = {'OpenAI Key': 0, "Azure Key": 0, "API2D Key": 0}
     key_list = keys.split(',')

     for k in key_list:
@@ -803,6 +824,7 @@ def what_keys(keys):

     return f"检测到: OpenAI Key {avail_key_list['OpenAI Key']} 个, Azure Key {avail_key_list['Azure Key']} 个, API2D Key {avail_key_list['API2D Key']} 个"

+
 def select_api_key(keys, llm_model):
     import random
     avail_key_list = []
@@ -826,6 +848,7 @@ def select_api_key(keys, llm_model):
     api_key = random.choice(avail_key_list) # 随机负载均衡
     return api_key

+
 def read_env_variable(arg, default_value):
     """
     环境变量可以是 `GPT_ACADEMIC_CONFIG`(优先),也可以直接是`CONFIG`
@@ -856,7 +879,7 @@ def read_env_variable(arg, default_value):
         env_arg = env_arg.strip()
         if env_arg == 'True': r = True
         elif env_arg == 'False': r = False
-        else: print('enter True or False, but have:', env_arg); r = default_value
+        else: print('Enter True or False, but have:', env_arg); r = default_value
     elif isinstance(default_value, int):
         r = int(env_arg)
     elif isinstance(default_value, float):
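Only a log message changes in this hunk, but the surrounding branches show how `read_env_variable` works: the raw environment string is coerced into the type of the config default. A compact sketch of that dispatch (hypothetical variable name, assuming it mirrors the visible `bool`/`int`/`float` branches):

```python
import os

def coerce_env(name: str, default):
    raw = os.environ.get(name)
    if raw is None:
        return default
    raw = raw.strip()
    if isinstance(default, bool):   # bool first: bool is a subclass of int
        return raw == 'True'
    if isinstance(default, (int, float)):
        return type(default)(raw)
    return raw

os.environ['GPT_ACADEMIC_TIMEOUT'] = '30'        # invented name, for illustration
print(coerce_env('GPT_ACADEMIC_TIMEOUT', 10.0))  # 30.0, typed by the default
```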
@@ -880,12 +903,13 @@ def read_env_variable(arg, default_value):
     print亮绿(f"[ENV_VAR] 成功读取环境变量{arg}")
     return r

+
 @lru_cache(maxsize=128)
 def read_single_conf_with_lru_cache(arg):
     from colorful import print亮红, print亮绿, print亮蓝
     try:
         # 优先级1. 获取环境变量作为配置
-        default_ref = getattr(importlib.import_module('config'), arg) # 读取默认值作为数据类型转换的参考
+        default_ref = getattr(importlib.import_module('config'), arg)  # 读取默认值作为数据类型转换的参考
         r = read_env_variable(arg, default_ref)
     except:
         try:
@@ -899,7 +923,7 @@ def read_single_conf_with_lru_cache(arg):
     if arg == 'API_URL_REDIRECT':
         oai_rd = r.get("https://api.openai.com/v1/chat/completions", None) # API_URL_REDIRECT填写格式是错误的,请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`
         if oai_rd and not oai_rd.endswith('/completions'):
-            print亮红( "\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
+            print亮红("\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
             time.sleep(5)
     if arg == 'API_KEY':
         print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和Azure的api-key。也支持同时填写多个api-key,如API_KEY=\"openai-key1,openai-key2,azure-key3\"")
@@ -907,9 +931,9 @@ def read_single_conf_with_lru_cache(arg):
         if is_any_api_key(r):
             print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功")
         else:
-            print亮红( "[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式,请在config文件中修改API密钥之后再运行。")
+            print亮红("[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式,请在config文件中修改API密钥之后再运行。")
     if arg == 'proxies':
-        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None # 检查USE_PROXY,防止proxies单独起作用
+        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None  # 检查USE_PROXY,防止proxies单独起作用
         if r is None:
             print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。')
         else:
@@ -953,17 +977,20 @@ class DummyWith():
     在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用,
     而在上下文执行结束时,__exit__()方法则会被调用。
     """
+
     def __enter__(self):
         return self
+
     def __exit__(self, exc_type, exc_value, traceback):
         return


 def run_gradio_in_subpath(demo, auth, port, custom_path):
     """
     把gradio的运行地址更改到指定的二次路径上
     """
-    def is_path_legal(path: str)->bool:
+
+    def is_path_legal(path: str) -> bool:
         '''
         check path for sub url
         path: path to check
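`DummyWith` (reformatted above) exists so call sites can always write a single `with` statement and decide at runtime whether the block runs under a real context manager or a no-op. A small usage sketch of that pattern (the file-logging branch is invented for illustration):

```python
class DummyWith():
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        return

def optional_log(enabled: bool):
    # One call site, two behaviors: a real context manager when enabled,
    # the do-nothing stand-in otherwise.
    return open('run.log', 'a') if enabled else DummyWith()

with optional_log(enabled=False):
    print('the body runs either way')
```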
@@ -1039,14 +1066,15 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
     while n_token > max_token_limit:
         where = np.argmax(everything_token)
         encoded = tokenizer.encode(everything[where], disallowed_special=())
-        clipped_encoded = encoded[:len(encoded)-delta]
-        everything[where] = tokenizer.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char
+        clipped_encoded = encoded[:len(encoded) - delta]
+        everything[where] = tokenizer.decode(clipped_encoded)[:-1]  # -1 to remove the may-be illegal char
         everything_token[where] = get_token_num(everything[where])
         n_token = get_token_num('\n'.join(everything))

     history = everything[1:]
     return history

+
 """
 ========================================================================
 第三部分
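The `clip_history` loop adjusted above implements a greedy trim: while the total token count exceeds the budget, the longest entry is repeatedly shortened by a fixed delta. A tokenizer-free sketch of the same strategy, using character counts as a stand-in for tokens:

```python
def clip_longest_first(entries, max_total, delta=64):
    # Shorten the longest entry until everything fits the budget,
    # echoing the np.argmax-based loop in the hunk above.
    while sum(len(e) for e in entries) > max_total:
        where = max(range(len(entries)), key=lambda i: len(entries[i]))
        entries[where] = entries[where][:max(len(entries[where]) - delta, 0)]
    return entries

history = ['question ' * 60, 'short answer', 'reply ' * 40]
print([len(e) for e in clip_longest_first(history, 400)])
```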
@@ -1058,6 +1086,7 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
 ========================================================================
 """

+
 def zip_folder(source_folder, dest_folder, zip_name):
     import zipfile
     import os
@@ -1089,15 +1118,18 @@ def zip_folder(source_folder, dest_folder, zip_name):

     print(f"Zip file created at {zip_file}")

+
 def zip_result(folder):
     t = gen_time_str()
     zip_folder(folder, get_log_folder(), f'{t}-result.zip')
     return pj(get_log_folder(), f'{t}-result.zip')

+
 def gen_time_str():
     import time
     return time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())

+
 def get_log_folder(user=default_user_name, plugin_name='shared'):
     if user is None: user = default_user_name
     PATH_LOGGING = get_conf('PATH_LOGGING')
@@ -1108,29 +1140,36 @@ def get_log_folder(user=default_user_name, plugin_name='shared'):
     if not os.path.exists(_dir): os.makedirs(_dir)
     return _dir

+
 def get_upload_folder(user=default_user_name, tag=None):
     PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
     if user is None: user = default_user_name
-    if tag is None or len(tag)==0:
+    if tag is None or len(tag) == 0:
         target_path_base = pj(PATH_PRIVATE_UPLOAD, user)
     else:
         target_path_base = pj(PATH_PRIVATE_UPLOAD, user, tag)
     return target_path_base

+
 def is_the_upload_folder(string):
     PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
     pattern = r'^PATH_PRIVATE_UPLOAD[\\/][A-Za-z0-9_-]+[\\/]\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}$'
     pattern = pattern.replace('PATH_PRIVATE_UPLOAD', PATH_PRIVATE_UPLOAD)
-    if re.match(pattern, string): return True
-    else: return False
+    if re.match(pattern, string):
+        return True
+    else:
+        return False

+
 def get_user(chatbotwithcookies):
     return chatbotwithcookies._cookies.get('user_name', default_user_name)

+
 class ProxyNetworkActivate():
     """
-    这段代码定义了一个名为TempProxy的空上下文管理器, 用于给一小段代码上代理
+    这段代码定义了一个名为ProxyNetworkActivate的空上下文管理器, 用于给一小段代码上代理
     """
+
     def __init__(self, task=None) -> None:
         self.task = task
         if not task:
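`is_the_upload_folder` (expanded above) recognizes upload paths purely by shape: a configured prefix, a user segment, and a timestamp directory. The template-substitution trick is easy to test in isolation (assumed default prefix, hypothetical paths):

```python
import re

PATH_PRIVATE_UPLOAD = 'private_upload'  # assumed value; the real one comes from get_conf
pattern = r'^PATH_PRIVATE_UPLOAD[\\/][A-Za-z0-9_-]+[\\/]\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}$'
pattern = pattern.replace('PATH_PRIVATE_UPLOAD', PATH_PRIVATE_UPLOAD)

print(bool(re.match(pattern, 'private_upload/alice/2023-12-26-10-30-00')))  # True
print(bool(re.match(pattern, 'private_upload/alice/not-a-timestamp')))      # False
```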
@@ -1158,12 +1197,14 @@ class ProxyNetworkActivate():
         if 'HTTPS_PROXY' in os.environ: os.environ.pop('HTTPS_PROXY')
         return

+
 def objdump(obj, file='objdump.tmp'):
     import pickle
     with open(file, 'wb+') as f:
         pickle.dump(obj, f)
     return

+
 def objload(file='objdump.tmp'):
     import pickle, os
     if not os.path.exists(file):
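The `__exit__` at the top of this hunk is half of `ProxyNetworkActivate`'s contract: set proxy environment variables on entry, pop them on exit, so only the wrapped block goes through the proxy. A self-contained sketch of that pattern (placeholder proxy address, simplified logic):

```python
import os

class TempEnvProxy():
    """Set HTTP(S)_PROXY for the duration of a with-block, then remove them."""
    def __init__(self, proxy='http://127.0.0.1:7890'):  # placeholder address
        self.proxy = proxy

    def __enter__(self):
        os.environ['HTTP_PROXY'] = self.proxy
        os.environ['HTTPS_PROXY'] = self.proxy
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if 'HTTP_PROXY' in os.environ: os.environ.pop('HTTP_PROXY')
        if 'HTTPS_PROXY' in os.environ: os.environ.pop('HTTPS_PROXY')
        return

with TempEnvProxy():
    pass  # network calls here would inherit the proxy env vars
```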
@@ -1171,6 +1212,7 @@ def objload(file='objdump.tmp'):
     with open(file, 'rb') as f:
         return pickle.load(f)

+
 def Singleton(cls):
     """
     一个单实例装饰器
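`objdump`/`objload` are a tiny pickle-based scratchpad for passing objects between runs. Their round trip in a few lines, written against the signatures visible in the diff (assuming toolbox.py is importable from the working directory):

```python
from toolbox import objdump, objload  # assumption: run from the repo root

objdump({'step': 3, 'notes': ['a', 'b']}, file='objdump.tmp')
print(objload(file='objdump.tmp'))  # {'step': 3, 'notes': ['a', 'b']}
```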
@@ -1184,6 +1226,7 @@ def Singleton(cls):

     return _singleton

+
 """
 ========================================================================
 第四部分
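Only `return _singleton` and the docstring ("a single-instance decorator") are visible in this hunk; the standard closure-based singleton decorator it belongs to looks roughly like this (a generic reconstruction, not necessarily the project's exact body):

```python
def Singleton(cls):
    """A single-instance decorator: every call returns the same object."""
    _instance = {}

    def _singleton(*args, **kwargs):
        # Create the instance on first call, then always return the same one.
        if cls not in _instance:
            _instance[cls] = cls(*args, **kwargs)
        return _instance[cls]

    return _singleton

@Singleton
class Cache:
    pass

print(Cache() is Cache())  # True
```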
@@ -1197,6 +1240,7 @@ def Singleton(cls):
 ========================================================================
 """

+
 def set_conf(key, value):
     from toolbox import read_single_conf_with_lru_cache, get_conf
     read_single_conf_with_lru_cache.cache_clear()
@@ -1205,10 +1249,12 @@ def set_conf(key, value):
     altered = get_conf(key)
     return altered

+
 def set_multi_conf(dic):
     for k, v in dic.items(): set_conf(k, v)
     return

+
 def get_plugin_handle(plugin_name):
     """
     e.g. plugin_name = 'crazy_functions.批量Markdown翻译->Markdown翻译指定语言'
@@ -1220,12 +1266,14 @@ def get_plugin_handle(plugin_name):
     f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
     return f_hot_reload

+
 def get_chat_handle():
     """
     """
     from request_llms.bridge_all import predict_no_ui_long_connection
     return predict_no_ui_long_connection

+
 def get_plugin_default_kwargs():
     """
     """
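`get_plugin_handle` resolves a plugin from the `'module->function'` strings shown in its docstring via `importlib`. The resolution step in isolation (a simplified sketch mirroring the split/getattr logic visible in the hunk, demonstrated on a stdlib module):

```python
import importlib

def resolve_handle(plugin_name: str):
    # 'package.module->function' -> live function object
    module, fn_name = plugin_name.split('->')
    return getattr(importlib.import_module(module), fn_name)

print(resolve_handle('os.path->join')('a', 'b'))  # 'a/b' on POSIX
```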
@@ -1234,9 +1282,9 @@ def get_plugin_default_kwargs():
     llm_kwargs = {
         'api_key': cookies['api_key'],
         'llm_model': cookies['llm_model'],
-        'top_p':1.0,
+        'top_p': 1.0,
         'max_length': None,
-        'temperature':1.0,
+        'temperature': 1.0,
     }
     chatbot = ChatBotWithCookies(llm_kwargs)

@@ -1252,6 +1300,7 @@ def get_plugin_default_kwargs():
     }
     return DEFAULT_FN_GROUPS_kwargs

+
 def get_chat_default_kwargs():
     """
     """
@@ -1259,9 +1308,9 @@ def get_chat_default_kwargs():
     llm_kwargs = {
         'api_key': cookies['api_key'],
         'llm_model': cookies['llm_model'],
-        'top_p':1.0,
+        'top_p': 1.0,
         'max_length': None,
-        'temperature':1.0,
+        'temperature': 1.0,
     }
     default_chat_kwargs = {
         "inputs": "Hello there, are you ready?",
@@ -1284,15 +1333,15 @@ def get_pictures_list(path):

 def have_any_recent_upload_image_files(chatbot):
     _5min = 5 * 60
-    if chatbot is None: return False, None # chatbot is None
+    if chatbot is None: return False, None  # chatbot is None
     most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
-    if not most_recent_uploaded: return False, None # most_recent_uploaded is None
+    if not most_recent_uploaded: return False, None  # most_recent_uploaded is None
     if time.time() - most_recent_uploaded["time"] < _5min:
         most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
         path = most_recent_uploaded['path']
         file_manifest = get_pictures_list(path)
         if len(file_manifest) == 0: return False, None
-        return True, file_manifest # most_recent_uploaded is new
+        return True, file_manifest  # most_recent_uploaded is new
     else:
         return False, None  # most_recent_uploaded is too old

@@ -1307,6 +1356,7 @@ def get_max_token(llm_kwargs):
     from request_llms.bridge_all import model_info
     return model_info[llm_kwargs['llm_model']]['max_token']

+
 def check_packages(packages=[]):
     import importlib.util
     for p in packages:
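The toolbox.py diff is truncated inside `check_packages`; a dependency probe of this kind is conventionally built on `importlib.util.find_spec`, which the function already imports. A sketch of that idiom (assumed behavior, since the body is cut off here):

```python
import importlib.util

def check_packages(packages=[]):
    # Raise early with a readable message if a dependency is missing.
    for p in packages:
        if importlib.util.find_spec(p) is None:
            raise ModuleNotFoundError(f'Package "{p}" is required but not installed.')

check_packages(['json', 're'])  # stdlib modules: passes silently
```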
4 version
@@ -1,5 +1,5 @@
 {
-    "version": 3.64,
+    "version": 3.65,
     "show_feature": true,
-    "new_feature": "支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区 <-> 修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版"
+    "new_feature": "支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区 <-> 修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版"
 }