Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced: 2025-12-06 14:36:48 +00:00

Comparing commits: version3.1 ... version3.3 (100 commits)
Commits in this comparison, by SHA1:

b6d2766e59, 73ce471a0e, 4e113139c8, 081acc6404, 1a999497d7, 6137963355, 22bffdb737, 75adcbffeb, 4451770061, 09c413a272, ddb6c90a8f, 71590426f9, b3e5cdb3a5, 6595ab813e, d1efbd26da, f04683732e, cb0241db78, a097b6cd03, 487ffe7888, 51424a7d08, 06e8e8f9a6, 0512b311f8, 81d53d0726, a141c5ccdc, e361d741c3, f5bc58dbde, e7b73f3041, ed8db8c8ae, df97213d3b, 97443d1f83, 59bed52faf, 3814c3a915, d98d0a291e, ee94fa6dc4, d2e46f6684, 5948dcacd5, 3041858e7f, 9c2a6bc413, 1cf8b6c6c8, 781ef4487c, 4a494354b1, 385c775aa5, 518385dea2, 4d1eea7bd5, 9cb51ccc70, 94dc398163, 65317e33af, 06fbdf43af, ab61418410, 0785ff2aed, 676fe40d39, 0b89673ee9, 2f4e050612, 87d963bda5, 07807e4653, 2b96217f2b, 13342c2988, 95f8b2824a, 4065d6e234, d3dcd432e8, 7d14de79bf, 15c6b52b5f, c0f1b5bc8e, bd62c6be68, 70bd21f09a, a0f15f1512, 4575046ce1, 33ea7391b5, e90eee2d8e, 7d44210a48, 206f4138b6, 6d2807f499, f1234937c6, 7beea951c6, 6f7e8076c7, ae24fab441, 880be21bf7, 559b3cd6bb, 9d9df8aa57, 64548d33a9, c3cafd8d6f, e9a6efef7f, 89a75e26c3, 1139d395f2, e20070939c, 3236fcca21, de0ed4a6f5, 0ff838443e, cfbfb68618, 9945d5048a, f0ff1f2c64, 7dd73e1330, 4cfbacdb26, 26af2b1bb4, 20bec70160, 9b5f088793, 3a561a70db, 11e33ec657, d1926725d3, 2f9a4e1618
.gitignore (vendored), 1 changed line:

```diff
@@ -145,3 +145,4 @@ cradle*
 debug*
 private*
 crazy_functions/test_project/pdf_and_word
+crazy_functions/test_samples
```
README.md, 36 changed lines:

```diff
@@ -1,4 +1,9 @@
+> **Note**
+>
+> 本项目依赖的Gradio组件的新版pip包(Gradio 3.26~3.27)有严重bug。所以,请在安装时严格选择requirements.txt中**指定的版本**。
+>
+> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
+
 # <img src="docs/logo.png" width="40" > ChatGPT 学术优化
 
@@ -20,24 +25,26 @@ If you like this project, please give it a Star. If you've come up with more use
 --- | ---
 一键润色 | 支持一键润色、一键查找论文语法错误
 一键中英互译 | 一键中英互译
-一键代码解释 | 可以正确显示代码、解释代码
+一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
-[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持配置代理服务器
+[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持代理连接OpenAI/Google等,秒解锁ChatGPT互联网[实时信息聚合](https://www.bilibili.com/video/BV1om4y127ck/)能力
-模块化设计 | 支持自定义高阶的函数插件与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
 [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
-读论文 | [函数插件] 一键解读latex论文全文并生成摘要
+读论文、翻译论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要
-Latex全文翻译、润色 | [函数插件] 一键翻译或润色latex论文
+Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
 批量注释生成 | [函数插件] 一键批量生成函数注释
+Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
 chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
-Markdown中英互译 | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
-[arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
-[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你选择有趣的文章
+[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
-公式/图片/表格显示 | 可以同时显示公式的tex形式和渲染形式,支持公式、代码高亮
+[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
-多线程函数插件支持 | 支持多线调用chatgpt,一键处理海量文本或程序
+互联网信息聚合+GPT | [函数插件] 一键让ChatGPT先Google搜索,再回答问题,信息流永不过时
+公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
+多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
+更多LLM模型接入 | 新加入Newbing测试接口(新必应AI)
 huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
 …… | ……
 
@@ -173,6 +180,8 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 2. 使用WSL2(Windows Subsystem for Linux 子系统)
 请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
 
+3. 如何在二级网址(如`http://localhost/subpath`)下运行
+请访问[FastAPI运行说明](docs/WithFastapi.md)
+
 ## 安装-代理配置
 1. 常规方法
@@ -268,12 +277,15 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
+<img src="https://user-images.githubusercontent.com/96192199/233779501-5ce826f0-6cca-4d59-9e5f-b4eacb8cc15f.png" width="800" >
+
 </div>
 
 
 
 ## Todo 与 版本规划:
-- version 3.2+ (todo): 函数插件支持更多参数接口
+- version 3.3+ (todo): NewBing支持
+- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
 - version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡
 - version 3.0: 对chatglm和其他小型llm的支持
 - version 2.6: 重构了插件结构,提高了交互性,加入更多插件
```
config.py, 15 changed lines:

```diff
@@ -45,7 +45,7 @@ MAX_RETRY = 2
 
 # OpenAI模型选择是(gpt4现在只对申请成功的人开放,体验gpt-4可以试试api2d)
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing"]
 
 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
@@ -57,6 +57,15 @@ CONCURRENT_COUNT = 100
 # [("username", "password"), ("username2", "password2"), ...]
 AUTHENTICATION = []
 
 # 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
-# 格式 {"https://api.openai.com/v1/chat/completions": "重定向的URL"}
+# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
 API_URL_REDIRECT = {}
+
+# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
+CUSTOM_PATH = "/"
+
+# 如果需要使用newbing,把newbing的长长的cookie放到这里
+NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+your bing cookies here
+"""
```
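Because these settings are plain module-level assignments, a private override file can replace any of them; the bridge_chatgpt hunk near the end of this diff notes that a git-ignored `config_private.py` takes precedence over `config.py` when present. A sketch of such an override (the URL and cookie values are placeholders, not values from this diff):

```python
# config_private.py: git-ignored; overrides config.py when present.
# All concrete values below are illustrative placeholders.

# Reroute OpenAI-style requests through a self-hosted relay:
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://my-relay.example.com/v1/chat/completions",
}

# Try the experimental NewBing bridge added in this comparison:
NEWBING_STYLE = "balanced"   # one of "creative", "balanced", "precise"
NEWBING_COOKIES = """
paste the long cookie string exported from a logged-in bing.com session here
"""
```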
The plugin registry (`get_crazy_functions`), changed hunks:

```diff
@@ -19,12 +19,25 @@ def get_crazy_functions():
     from crazy_functions.解析项目源代码 import 解析一个Lua项目
     from crazy_functions.解析项目源代码 import 解析一个CSharp项目
     from crazy_functions.总结word文档 import 总结word文档
+    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
+    from crazy_functions.对话历史存档 import 对话历史存档
     function_plugins = {
 
         "解析整个Python项目": {
             "Color": "stop",    # 按钮颜色
             "Function": HotReload(解析一个Python项目)
         },
+        "保存当前的对话": {
+            "AsButton":False,
+            "Function": HotReload(对话历史存档)
+        },
+        "[测试功能] 解析Jupyter Notebook文件": {
+            "Color": "stop",
+            "AsButton":False,
+            "Function": HotReload(解析ipynb文件),
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
+        },
         "批量总结Word文档": {
             "Color": "stop",
             "Function": HotReload(总结word文档)
@@ -168,7 +181,7 @@ def get_crazy_functions():
             "AsButton": False, # 加入下拉菜单中
             "Function": HotReload(Markdown英译中)
         },
     })
 
     ###################### 第三组插件 ###########################
@@ -181,7 +194,7 @@ def get_crazy_functions():
             "Function": HotReload(下载arxiv论文并翻译摘要)
         }
     })
 
     from crazy_functions.联网的ChatGPT import 连接网络回答问题
     function_plugins.update({
         "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": {
@@ -191,5 +204,25 @@ def get_crazy_functions():
         }
     })
 
+    from crazy_functions.解析项目源代码 import 解析任意code项目
+    function_plugins.update({
+        "解析项目源代码(手动指定和筛选源代码文件类型)": {
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
+            "Function": HotReload(解析任意code项目)
+        },
+    })
+    from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
+    function_plugins.update({
+        "询问多个GPT模型(手动指定询问哪些模型)": {
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
+            "Function": HotReload(同时问询_指定模型)
+        },
+    })
     ###################### 第n组插件 ###########################
     return function_plugins
```
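Every entry's `Function` is wrapped in `HotReload`, which is why the README can promise 热更新: the button holds a wrapper, not the plugin itself. A minimal sketch of such a wrapper (an illustration of the pattern; the project's toolbox implementation may differ in details):

```python
import importlib

def hot_reload(func):
    """Re-import func's defining module on every call, so edits to a
    plugin file take effect without restarting the Gradio app."""
    def wrapper(*args, **kwargs):
        module = importlib.reload(importlib.import_module(func.__module__))
        fresh_func = getattr(module, func.__name__)
        yield from fresh_func(*args, **kwargs)  # plugins are generators that stream UI updates
    return wrapper
```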
The self-test script, changed hunks:

```diff
@@ -108,6 +108,13 @@ def test_联网回答问题():
     print("当前问答:", cb[-1][-1].replace("\n"," "))
     for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])
 
+def test_解析ipynb文件():
+    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
+    txt = "crazy_functions/test_samples"
+    for cookies, cb, hist, msg in 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+        print(cb)
+
+
 # test_解析一个Python项目()
 # test_Latex英文润色()
 # test_Markdown中译英()
@@ -116,9 +123,8 @@ def test_联网回答问题():
 # test_总结word文档()
 # test_下载arxiv论文并翻译摘要()
 # test_解析一个Cpp项目()
-test_联网回答问题()
+# test_联网回答问题()
+test_解析ipynb文件()
 
 
 input("程序完成,回车退出。")
 print("退出。")
```
crazy_utils.py, changed hunks:

```diff
@@ -1,5 +1,4 @@
-import traceback
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, trimmed_format_exc
 
 def input_clipping(inputs, history, max_token_limit):
     import numpy as np
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 return mutable[0] # 放弃
         except:
             # 【第三种情况】:其他错误:重试几次
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     if max_workers == -1: # 读取配置文件
         try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
         except: max_workers = 8
-        if max_workers <= 0 or max_workers >= 20: max_workers = 8
+        if max_workers <= 0: max_workers = 3
     # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
     if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
         max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                 mutable[index][2] = "输入过长已放弃"
                 return gpt_say # 放弃
         except:
             # 【第三种情况】:其他错误
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
```
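解析项目源代码 below imports `input_clipping` from this module to trim its summary prompt to a 2560-token budget. A rough sketch of that operation under the contract the call site implies (the real helper, whose numpy import is visible above, may split and weight inputs vs. history differently; `count_tokens` stands in for a tokenizer-based counter):

```python
def input_clipping_sketch(inputs, history, max_token_limit, count_tokens):
    """Drop the oldest history entries until `inputs` plus `history`
    fit within `max_token_limit` tokens; return what survives."""
    history = list(history)
    while history and count_tokens(inputs) + sum(map(count_tokens, history)) > max_token_limit:
        history.pop(0)  # oldest context is the cheapest to sacrifice
    return inputs, history

# inputs, feed = input_clipping_sketch(i_say, this_iteration_history, 2560, lambda s: len(s) // 4)
```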
crazy_functions/对话历史存档.py, new file (42 lines):

```diff
@@ -0,0 +1,42 @@
+from toolbox import CatchException, update_ui
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+
+def write_chat_to_file(chatbot, file_name=None):
+    """
+    将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
+    """
+    import os
+    import time
+    if file_name is None:
+        file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
+    os.makedirs('./gpt_log/', exist_ok=True)
+    with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
+        for i, contents in enumerate(chatbot):
+            for content in contents:
+                try: # 这个bug没找到触发条件,暂时先这样顶一下
+                    if type(content) != str: content = str(content)
+                except:
+                    continue
+                f.write(content)
+                f.write('\n\n')
+            f.write('<hr color="red"> \n\n')
+
+    res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}')
+    print(res)
+    return res
+
+@CatchException
+def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs 插件模型的参数,暂时没有用武之地
+    chatbot 聊天显示框的句柄,用于显示给用户
+    history 聊天历史,前情提要
+    system_prompt 给gpt的静默提醒
+    web_port 当前软件运行的端口号
+    """
+    chatbot.append(("保存当前对话", f"[Local Message] {write_chat_to_file(chatbot)}"))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
```
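`write_chat_to_file` only depends on the chatbot's list-of-pairs shape, so it can be smoke-tested without the UI (paths as in the code above):

```python
# Standalone check: chatbot is a list of (question, answer) pairs.
from crazy_functions.对话历史存档 import write_chat_to_file

chatbot = [("你好", "你好,有什么可以帮您?"), ("1+1=?", "2")]
print(write_chat_to_file(chatbot))            # 对话历史写入:<abs path>/gpt_log/chatGPT对话历史<timestamp>.html
print(write_chat_to_file(chatbot, "t.html"))  # explicit file name, still under ./gpt_log/
```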
crazy_functions/解析JupyterNotebook.py, new file (145 lines):

```diff
@@ -0,0 +1,145 @@
+from toolbox import update_ui
+from toolbox import CatchException, report_execption, write_results_to_file
+fast_debug = True
+
+
+class PaperFileGroup():
+    def __init__(self):
+        self.file_paths = []
+        self.file_contents = []
+        self.sp_file_contents = []
+        self.sp_file_index = []
+        self.sp_file_tag = []
+
+        # count_token
+        from request_llm.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(
+            enc.encode(txt, disallowed_special=()))
+        self.get_token_num = get_token_num
+
+    def run_file_split(self, max_token_limit=1900):
+        """
+        将长文本分离开来
+        """
+        for index, file_content in enumerate(self.file_contents):
+            if self.get_token_num(file_content) < max_token_limit:
+                self.sp_file_contents.append(file_content)
+                self.sp_file_index.append(index)
+                self.sp_file_tag.append(self.file_paths[index])
+            else:
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+                    file_content, self.get_token_num, max_token_limit)
+                for j, segment in enumerate(segments):
+                    self.sp_file_contents.append(segment)
+                    self.sp_file_index.append(index)
+                    self.sp_file_tag.append(
+                        self.file_paths[index] + f".part-{j}.txt")
+
+
+def parseNotebook(filename, enable_markdown=1):
+    import json
+
+    CodeBlocks = []
+    with open(filename, 'r', encoding='utf-8', errors='replace') as f:
+        notebook = json.load(f)
+    for cell in notebook['cells']:
+        if cell['cell_type'] == 'code' and cell['source']:
+            # remove blank lines
+            cell['source'] = [line for line in cell['source'] if line.strip() != '']
+            CodeBlocks.append("".join(cell['source']))
+        elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']:
+            cell['source'] = [line for line in cell['source'] if line.strip() != '']
+            CodeBlocks.append("Markdown:"+"".join(cell['source']))
+
+    Code = ""
+    for idx, code in enumerate(CodeBlocks):
+        Code += f"This is {idx+1}th code block: \n"
+        Code += code+"\n"
+
+    return Code
+
+
+def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
+    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+
+    enable_markdown = plugin_kwargs.get("advanced_arg", "1")
+    try:
+        enable_markdown = int(enable_markdown)
+    except ValueError:
+        enable_markdown = 1
+
+    pfg = PaperFileGroup()
+
+    for fp in file_manifest:
+        file_content = parseNotebook(fp, enable_markdown=enable_markdown)
+        pfg.file_paths.append(fp)
+        pfg.file_contents.append(file_content)
+
+    # <-------- 拆分过长的IPynb文件 ---------->
+    pfg.run_file_split(max_token_limit=1024)
+    n_split = len(pfg.sp_file_contents)
+
+    inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." +
+                    r"If a block starts with `Markdown` which means it's a markdown block in ipynbipynb. " +
+                    r"Start a new line for a block and block num use Chinese." +
+                    f"\n\n{frag}" for frag in pfg.sp_file_contents]
+    inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag]
+    sys_prompt_array = ["You are a professional programmer."] * n_split
+
+    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
+        inputs_array=inputs_array,
+        inputs_show_user_array=inputs_show_user_array,
+        llm_kwargs=llm_kwargs,
+        chatbot=chatbot,
+        history_array=[[""] for _ in range(n_split)],
+        sys_prompt_array=sys_prompt_array,
+        # max_workers=5,  # OpenAI所允许的最大并行过载
+        scroller_max_len=80
+    )
+
+    # <-------- 整理结果,退出 ---------->
+    block_result = " \n".join(gpt_response_collection)
+    chatbot.append(("解析的结果如下", block_result))
+    history.extend(["解析的结果如下", block_result])
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+    # <-------- 写入文件,退出 ---------->
+    res = write_results_to_file(history)
+    chatbot.append(("完成了吗?", res))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+
+@CatchException
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    chatbot.append([
+        "函数插件功能?",
+        "对IPynb文件进行解析。Contributor: codycjy."])
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+    history = []  # 清空历史
+    import glob
+    import os
+    if os.path.exists(txt):
+        project_folder = txt
+    else:
+        if txt == "":
+            txt = '空空如也的输入栏'
+        report_execption(chatbot, history,
+                         a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    if txt.endswith('.ipynb'):
+        file_manifest = [txt]
+    else:
+        file_manifest = [f for f in glob.glob(
+            f'{project_folder}/**/*.ipynb', recursive=True)]
+    if len(file_manifest) == 0:
+        report_execption(chatbot, history,
+                         a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, )
```
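`parseNotebook` flattens a notebook into a single numbered prompt string, tagging surviving markdown cells with a `Markdown:` prefix. A quick check against a hand-built two-cell notebook (the file name is arbitrary):

```python
import json
from crazy_functions.解析JupyterNotebook import parseNotebook

nb = {"cells": [
    {"cell_type": "markdown", "source": ["# Title\n", "\n"]},
    {"cell_type": "code", "source": ["x = 1\n", "\n", "print(x)\n"]},
]}
with open("demo.ipynb", "w", encoding="utf-8") as f:
    json.dump(nb, f)

print(parseNotebook("demo.ipynb", enable_markdown=1))
# This is 1th code block:
# Markdown:# Title
# This is 2th code block:
# x = 1
# print(x)
```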
crazy_functions/解析项目源代码.py, changed hunks:

```diff
@@ -1,5 +1,6 @@
 from toolbox import update_ui
 from toolbox import CatchException, report_execption, write_results_to_file
+from .crazy_utils import input_clipping
 
 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
@@ -11,7 +12,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
     history_array = []
     sys_prompt_array = []
     report_part_1 = []
 
     assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。"
     ############################## <第一步,逐个文件分析,多线程> ##################################
     for index, fp in enumerate(file_manifest):
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
         previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
         previous_iteration_files_string = ', '.join(previous_iteration_files)
         current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
-        i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string})。'
+        i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
         inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
         this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
         this_iteration_history.append(last_iteration_result)
+        # 裁剪input
+        inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
         result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
-            history=this_iteration_history, # 迭代之前的分析
+            inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+            history=this_iteration_history_feed, # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
         report_part_2.extend([i_say, result])
         last_iteration_result = result
@@ -222,8 +225,8 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 
 @CatchException
 def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
@@ -243,9 +246,9 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 
 @CatchException
 def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
@@ -263,4 +266,45 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
+
+
+@CatchException
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    txt_pattern = plugin_kwargs.get("advanced_arg")
+    txt_pattern = txt_pattern.replace(",", ",")
+    # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
+    pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
+    if not pattern_include: pattern_include = ["*"] # 不输入即全部匹配
+    # 将要忽略匹配的文件后缀(例如: ^*.c, ^*.cpp, ^*.py)
+    pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
+    pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
+    # 将要忽略匹配的文件名(例如: ^README.md)
+    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+    # 生成正则表达式
+    pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+    pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
+
+    history.clear()
+    import glob, os, re
+    if os.path.exists(txt):
+        project_folder = txt
+    else:
+        if txt == "": txt = '空空如也的输入栏'
+        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    # 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件
+    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
+    if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
+        extract_folder_path = maybe_dir[0]
+    else:
+        extract_folder_path = project_folder
+    # 按输入的匹配模式寻找上传的非压缩文件和已解压的文件
+    file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
+                     os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
+    if len(file_manifest) == 0:
+        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
```
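The 解析任意code项目 pattern grammar is easiest to see on the example the ArgsReminder itself gives; a small trace of the list comprehensions above on that input:

```python
# Trace of the pattern grammar on the ArgsReminder example: "*.c, ^*.cpp, config.toml, ^*.toml"
txt_pattern = "*.c, ^*.cpp, config.toml, ^*.toml"

pattern_include = [p.lstrip(" ,").rstrip(" ,") for p in txt_pattern.split(",")
                   if p != "" and not p.strip().startswith("^")]
print(pattern_include)        # ['*.c', 'config.toml'], fed to glob.glob(..., recursive=True)

pattern_except_suffix = [p.lstrip(" ^*.,").rstrip(" ,") for p in txt_pattern.split(" ")
                         if p != "" and p.strip().startswith("^*.")]
print(pattern_except_suffix)  # ['cpp', 'toml'], before the hard-coded archive suffixes are appended

pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix + ['zip', 'rar', '7z', 'tar', 'gz']) + r')$'
print(pattern_except)         # /[^/]+\.(cpp|toml|zip|rar|7z|tar|gz)$
# Note the escape hatch in the manifest filter above: a file rejected by this regex is
# still kept when the include pattern itself ends with the same suffix, which is how the
# explicit "config.toml" survives the blanket "^*.toml" exclusion.
```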
crazy_functions/询问多个大语言模型.py, changed hunk:

```diff
@@ -25,6 +25,35 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
         retry_times_at_unknown_error=0
     )
+
+    history.append(txt)
+    history.append(gpt_say)
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
+
+
+@CatchException
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行
+    chatbot 聊天显示框的句柄,用于显示给用户
+    history 聊天历史,前情提要
+    system_prompt 给gpt的静默提醒
+    web_port 当前软件运行的端口号
+    """
+    history = []    # 清空历史,以免输入溢出
+    chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
+
+    # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
+    llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=txt, inputs_show_user=txt,
+        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+        sys_prompt=system_prompt,
+        retry_times_at_unknown_error=0
+    )
+
     history.append(txt)
     history.append(gpt_say)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
```
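The only thing 同时问询_指定模型 changes before calling the ordinary request helper is the `llm_model` string; the LLM bridge is expected to fan the request out on `&`. A toy sketch of that contract (illustrative only; the real fan-out lives in the request_llm layer and runs the models concurrently):

```python
def fan_out_models(llm_model: str, ask_one):
    """Split an '&'-joined spec like 'chatglm&gpt-3.5-turbo&api2d-gpt-4'
    and collect one reply per model. `ask_one` stands in for a bridge call."""
    return {model: ask_one(model) for model in llm_model.split('&')}

print(fan_out_models('chatglm&gpt-3.5-turbo', lambda m: f"reply from {m}"))
# {'chatglm': 'reply from chatglm', 'gpt-3.5-turbo': 'reply from gpt-3.5-turbo'}
```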
crazy_functions/谷歌检索小助手.py, changed hunks:

```diff
@@ -70,6 +70,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import arxiv
+        import math
         from bs4 import BeautifulSoup
     except:
         report_execption(chatbot, history,
@@ -80,25 +81,26 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 
     # 清空历史,以免输入溢出
     history = []
 
     meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
-    if len(meta_paper_info_list[:10]) > 0:
-        i_say = "下面是一些学术文献的数据,请从中提取出以下内容。" + \
-            "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
-            f"以下是信息源:{str(meta_paper_info_list[:10])}"
-
-        inputs_show_user = f"请分析此页面中出现的所有文章:{txt}"
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say, inputs_show_user=inputs_show_user,
-            llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-            sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown格式。你必须逐个文献进行处理。"
-        )
-
-        history.extend([ "第一批", gpt_say ])
-        meta_paper_info_list = meta_paper_info_list[10:]
-
-    chatbot.append(["状态?", "已经全部完成"])
+    batchsize = 5
+    for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
+        if len(meta_paper_info_list[:batchsize]) > 0:
+            i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
+                "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
+                f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
+
+            inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
+            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+                inputs=i_say, inputs_show_user=inputs_show_user,
+                llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
+                sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。"
+            )
+
+            history.extend([ f"第{batch+1}批", gpt_say ])
+            meta_paper_info_list = meta_paper_info_list[batchsize:]
+
+    chatbot.append(["状态?",
+        "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
     msg = '正常'
     yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
     res = write_results_to_file(history)
```
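The batching arithmetic is worth a sanity check: the loop consumes the list in place, so the last batch may be short.

```python
import math

papers = list(range(23))        # stand-in for meta_paper_info_list
batchsize = 5
for batch in range(math.ceil(len(papers)/batchsize)):  # ceil(23/5) = 5 iterations
    chunk = papers[:batchsize]
    papers = papers[batchsize:]
    print(batch + 1, len(chunk))                       # batches of 5, 5, 5, 5, 3
```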
docs/WithFastapi.md, new file (43 lines):

```diff
@@ -0,0 +1,43 @@
+# Running with fastapi
+
+We currently support fastapi in order to solve sub-path deploy issue.
+
+1. change CUSTOM_PATH setting in `config.py`
+
+``` sh
+nano config.py
+```
+
+2. Edit main.py
+
+```diff
+    auto_opentab_delay()
+    - demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+    + demo.queue(concurrency_count=CONCURRENT_COUNT)
+
+    - # 如果需要在二级路径下运行
+    - # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    - # if CUSTOM_PATH != "/":
+    - #     from toolbox import run_gradio_in_subpath
+    - #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    - # else:
+    - #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
+    + 如果需要在二级路径下运行
+    + CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    + if CUSTOM_PATH != "/":
+    +     from toolbox import run_gradio_in_subpath
+    +     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    + else:
+    +     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
+    if __name__ == "__main__":
+        main()
+```
+
+3. Go!
+
+``` sh
+python main.py
+```
```
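The doc leaves `run_gradio_in_subpath` opaque; the standard way to put a Gradio Blocks app under a URL prefix is to mount it into a FastAPI app, which is presumably what the toolbox helper does. A minimal illustration using Gradio's public `gr.mount_gradio_app` API (the port and path here are placeholders):

```python
# Sketch: serve a Gradio app under /subpath via FastAPI + uvicorn.
from fastapi import FastAPI
import gradio as gr
import uvicorn

with gr.Blocks() as demo:
    gr.Markdown("hello from a sub-path")

app = FastAPI()
app = gr.mount_gradio_app(app, demo, path="/subpath")  # reachable at http://localhost:8000/subpath

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```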
main.py, 40 changed lines:

```diff
@@ -45,7 +45,7 @@ def main():
 
     gr_L1 = lambda: gr.Row().style()
     gr_L2 = lambda scale: gr.Column(scale=scale)
     if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
         gr_L2 = lambda scale: gr.Row()
         CHATBOT_HEIGHT /= 2
@@ -88,9 +88,12 @@ def main():
                 with gr.Row():
                     with gr.Accordion("更多函数插件", open=True):
                         dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
-                        with gr.Column(scale=1):
+                        with gr.Row():
                             dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False)
-                        with gr.Column(scale=1):
+                        with gr.Row():
+                            plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
+                                                             placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
+                        with gr.Row():
                             switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
                 with gr.Row():
                     with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up:
@@ -100,7 +103,7 @@ def main():
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                     temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
                     max_length_sl = gr.Slider(minimum=256, maximum=4096, value=512, step=1, interactive=True, label="Local LLM MaxLength",)
-                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
+                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
 
                     gr.Markdown(description)
@@ -122,11 +125,12 @@ def main():
             ret.update({area_input_secondary: gr.update(visible=("底部输入区" in a))})
             ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
             ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
+            ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "底部输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
         # 整理反复出现的控件句柄组合
-        input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt]
+        input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
         output_combo = [cookies, chatbot, history, status]
         predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=input_combo, outputs=output_combo)
         # 提交按钮、重置按钮
@@ -153,20 +157,22 @@ def main():
         # 函数插件-下拉菜单与随变按钮的互动
         def on_dropdown_changed(k):
             variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
-            return {switchy_bt: gr.update(value=k, variant=variant)}
-        dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt] )
+            ret = {switchy_bt: gr.update(value=k, variant=variant)}
+            if crazy_fns[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
+                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
+            else:
+                ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
+            return ret
+        dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )
         def on_md_dropdown_changed(k):
             return {chatbot: gr.update(label="当前模型:"+k)}
         md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
         # 随变按钮的回调函数注册
         def route(k, *args, **kwargs):
             if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
             yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
-        # def expand_file_area(file_upload, area_file_up):
-        #     if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
-        # click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
         cancel_handles.append(click_handle)
         # 终止按钮的回调函数注册
         stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
@@ -178,7 +184,7 @@ def main():
         print(f"如果浏览器没有自动打开,请复制并转到以下URL:")
         print(f"\t(亮色主题): http://localhost:{PORT}")
         print(f"\t(暗色主题): http://localhost:{PORT}/?__dark-theme=true")
         def open():
             time.sleep(2)       # 打开浏览器
             webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
         threading.Thread(target=open, name="open-browser", daemon=True).start()
@@ -188,5 +194,13 @@ def main():
     auto_opentab_delay()
     demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
 
+    # 如果需要在二级路径下运行
+    # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    # if CUSTOM_PATH != "/":
+    #     from toolbox import run_gradio_in_subpath
+    #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    # else:
+    #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
 if __name__ == "__main__":
     main()
```
The LLM bridge registry, changed hunks:

```diff
@@ -11,7 +11,7 @@
 import tiktoken
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf
+from toolbox import get_conf, trimmed_format_exc
 
 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
 from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
 from .bridge_chatglm import predict as chatglm_ui
 
+from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
+from .bridge_newbing import predict as newbing_ui
+
 # from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
 # from .bridge_tgui import predict as tgui_ui
 
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
 API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
+newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
 # 兼容旧版的配置
 try:
     API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
 # 新版配置
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
+if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
 
 
 # 获取tokenizer
@@ -116,7 +121,15 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
+    # newbing
+    "newbing": {
+        "fn_with_ui": newbing_ui,
+        "fn_without_ui": newbing_noui,
+        "endpoint": newbing_endpoint,
+        "max_token": 4096,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
 }
 
 
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
         try:
             return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
         except Exception as e:
-            from toolbox import get_conf
-            import traceback
-            proxies, = get_conf('proxies')
-            tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
             observe_window[0] = tb_str
             return tb_str
     return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
 
     def mutex_manager(window_mutex, observe_window):
         while True:
-            time.sleep(0.5)
+            time.sleep(0.25)
            if not window_mutex[-1]: break
             # 看门狗(watchdog)
             for i in range(n_model):
```
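`model_info` is the dispatch table: registering "newbing" is one dict entry, and callers select the streaming (UI) or blocking entry point by key. A lookup in the shape the registry implies:

```python
# Sketch: pick a bridge function out of the registry defined above.
from request_llm.bridge_all import model_info

def get_bridge(llm_model: str, with_ui: bool):
    info = model_info[llm_model]          # KeyError -> model was never registered
    fn = info["fn_with_ui"] if with_ui else info["fn_without_ui"]
    return fn, info["max_token"]          # e.g. the "newbing" bridge and its 4096-token budget

fn, budget = get_bridge("newbing", with_ui=False)
```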
@@ -32,6 +32,7 @@ class GetGLMHandle(Process):
|
|||||||
return self.chatglm_model is not None
|
return self.chatglm_model is not None
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
|
# 子进程执行
|
||||||
# 第一次运行,加载参数
|
# 第一次运行,加载参数
|
||||||
retry = 0
|
retry = 0
|
||||||
while True:
|
while True:
|
||||||
@@ -53,17 +54,24 @@ class GetGLMHandle(Process):
             self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
             raise RuntimeError("不能正常加载ChatGLM的参数!")

-        # Enter the task-waiting state
         while True:
+            # Enter the task-waiting state
             kwargs = self.child.recv()
+            # Message received; start handling the request
             try:
                 for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
                     self.child.send(response)
+                    # # Receive a possible mid-stream termination command (if any)
+                    # if self.child.poll():
+                    #     command = self.child.recv()
+                    #     if command == '[Terminate]': break
             except:
                 self.child.send('[Local Message] Call ChatGLM fail.')
+            # Request handled; start the next loop
             self.child.send('[Finish]')

     def stream_chat(self, **kwargs):
+        # Runs in the main process
         self.parent.send(kwargs)
         while True:
             res = self.parent.recv()
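Both ChatGLM hunks above rely on the same pattern used throughout this release: the model lives in a daemon child process and streams fragments back over a multiprocessing.Pipe, with the string '[Finish]' as the end-of-stream sentinel. A stripped-down sketch of the pattern (illustrative only, not the repo's code):

    from multiprocessing import Process, Pipe

    class WorkerHandle(Process):
        def __init__(self):
            super().__init__(daemon=True)
            self.parent, self.child = Pipe()
            self.start()

        def run(self):                          # child process
            while True:
                kwargs = self.child.recv()      # block until the main process sends a task
                for fragment in ("partial ", "answer"):   # stand-in for stream_chat
                    self.child.send(fragment)
                self.child.send('[Finish]')     # end-of-stream sentinel

        def stream_chat(self, **kwargs):        # main process
            self.parent.send(kwargs)
            while True:
                res = self.parent.recv()
                if res == '[Finish]': break
                yield res

    if __name__ == '__main__':
        print(list(WorkerHandle().stream_chat(query="hi")))   # ['partial ', 'answer']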
@@ -130,14 +138,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # fetch the preprocessing function (if any)
         inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]

+    # Assemble the history
     history_feedin = []
     history_feedin.append(["What can I do?", system_prompt] )
     for i in range(len(history)//2):
         history_feedin.append([history[2*i], history[2*i+1]] )

+    # Start receiving ChatGLM's reply
     for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)

+    # Wrap up the output
     history.extend([inputs, response])
     yield from update_ui(chatbot=chatbot, history=history)
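The flat history list alternates user and assistant turns; the loop above folds it into [question, answer] pairs for ChatGLM, after seeding the system prompt as a fake first exchange. For example:

    history = ["q1", "a1", "q2", "a2"]
    history_feedin = [["What can I do?", "You are a helpful assistant."]]  # the seeded system-prompt pair
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]])
    # history_feedin == [['What can I do?', 'You are a helpful assistant.'],
    #                    ['q1', 'a1'], ['q2', 'a2']]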
request_llm/bridge_chatgpt.py

@@ -21,7 +21,7 @@ import importlib

 # config_private.py holds private secrets such as the API key and proxy URL
 # At load time, check first for a private config_private file (not tracked by git); if present, it overrides the original config file
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
 proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
     get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
@@ -145,7 +145,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求")  # refresh the UI
         return

-    history.append(inputs); history.append(" ")
+    history.append(inputs); history.append("")

     retry = 0
     while True:
@@ -198,21 +198,24 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         chunk_decoded = chunk.decode()
         error_msg = chunk_decoded
         if "reduce the length" in error_msg:
-            chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长,或历史数据过长. 历史缓存数据现已释放,您可以请再次尝试.")
-            history = []  # clear the history
+            if len(history) >= 2: history[-1] = ""; history[-2] = ""  # clear the overflowing input: history[-2] is this turn's input, history[-1] is this turn's output
+            history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
+                                   max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token']))  # release at least half of the history
+            chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
+            # history = []  # clear the history
         elif "does not exist" in error_msg:
             chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
         elif "Incorrect API key" in error_msg:
             chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.")
         elif "exceeded your current quota" in error_msg:
             chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.")
         elif "bad forward key" in error_msg:
             chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
         elif "Not enough point" in error_msg:
             chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
         else:
             from toolbox import regular_txt_to_markdown
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
         yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg)  # refresh the UI
         return
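The switch from history = [] to clip_history keeps as much recent context as still fits. Going by the clip_history docstring later in this diff, it trims the longest entries bit by bit until the history's token count drops below the limit; a rough sketch of that idea (an assumption-laden simplification, not the repo's implementation, which is truncated at the end of this page):

    def clip_history_sketch(history, get_token_num, max_token_limit):
        # Keep halving the longest entry until the whole history fits the budget.
        while history and sum(get_token_num(h) for h in history) > max_token_limit:
            longest = max(range(len(history)), key=lambda i: get_token_num(history[i]))
            history[longest] = history[longest][:len(history[longest]) // 2]
        return history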
request_llm/bridge_newbing.py (new file, 250 lines)

@@ -0,0 +1,250 @@
"""
========================================================================
Part 1: from EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
from .edge_gpt import NewbingChatbot
load_message = "等待NewBing响应。"

"""
========================================================================
Part 2: the subprocess worker (the calling body)
========================================================================
"""
import time
import json
import re
import asyncio
import importlib
import threading
from toolbox import update_ui, get_conf, trimmed_format_exc
from multiprocessing import Process, Pipe

def preprocess_newbing_out(s):
    pattern = r'\^(\d+)\^'                 # match ^digits^
    sub = lambda m: '\['+m.group(1)+'\]'   # use the matched digits as the replacement
    result = re.sub(pattern, sub, s)       # perform the substitution
    if '[1]' in result:
        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
    return result

def preprocess_newbing_out_simple(result):
    if '[1]' in result:
        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
    return result

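preprocess_newbing_out rewrites NewBing's ^n^ citation markers into escaped Markdown brackets and, when citations are present, repeats the reference lines inside a fenced block at the end. For example:

    text = "Result^1^ with sources.\n[1]: https://example.com"
    print(preprocess_newbing_out(text))
    # Result\[1\] with sources.
    # [1]: https://example.com
    #
    # ```
    # [1]: https://example.com
    # ```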
class NewBingHandle(Process):
    def __init__(self):
        super().__init__(daemon=True)
        self.parent, self.child = Pipe()
        self.newbing_model = None
        self.info = ""
        self.success = True
        self.local_history = []
        self.check_dependency()
        self.start()
        self.threadLock = threading.Lock()

    def check_dependency(self):
        try:
            self.success = False
            import certifi, httpx, rich
            self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
            self.success = True
        except:
            self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
            self.success = False

    def ready(self):
        return self.newbing_model is not None
    async def async_run(self):
        # Read the configuration
        NEWBING_STYLE, = get_conf('NEWBING_STYLE')
        from request_llm.bridge_all import model_info
        endpoint = model_info['newbing']['endpoint']
        while True:
            # Wait
            kwargs = self.child.recv()
            question=kwargs['query']
            history=kwargs['history']
            system_prompt=kwargs['system_prompt']

            # Reset if needed
            if len(self.local_history) > 0 and len(history)==0:
                await self.newbing_model.reset()
                self.local_history = []

            # Start asking
            prompt = ""
            if system_prompt not in self.local_history:
                self.local_history.append(system_prompt)
                prompt += system_prompt + '\n'

            # Append the history
            for ab in history:
                a, b = ab
                if a not in self.local_history:
                    self.local_history.append(a)
                    prompt += a + '\n'
                if b not in self.local_history:
                    self.local_history.append(b)
                    prompt += b + '\n'

            # The question itself
            prompt += question
            self.local_history.append(question)

            # Submit
            async for final, response in self.newbing_model.ask_stream(
                prompt=question,
                conversation_style=NEWBING_STYLE,   # ["creative", "balanced", "precise"]
                wss_link=endpoint,                  # "wss://sydney.bing.com/sydney/ChatHub"
            ):
                if not final:
                    print(response)
                    self.child.send(str(response))
                else:
                    print('-------- receive final ---------')
                    self.child.send('[Finish]')
    def run(self):
        """
        This function runs in the child process
        """
        # First run: load the parameters
        self.success = False
        self.local_history = []
        if (self.newbing_model is None) or (not self.success):
            # Proxy settings
            proxies, = get_conf('proxies')
            if proxies is None:
                self.proxies_https = None
            else:
                self.proxies_https = proxies['https']
            # cookie
            NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
            try:
                cookies = json.loads(NEWBING_COOKIES)
            except:
                self.success = False
                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
                self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
                self.child.send('[Fail]')
                self.child.send('[Finish]')
                raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")

            try:
                self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
            except:
                self.success = False
                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
                self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
                self.child.send('[Fail]')
                self.child.send('[Finish]')
                raise RuntimeError(f"不能加载Newbing组件。")

        self.success = True
        try:
            # Enter the task-waiting state
            asyncio.run(self.async_run())
        except Exception:
            tb_str = '```\n' + trimmed_format_exc() + '```'
            self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
            self.child.send('[Fail]')
            self.child.send('[Finish]')
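run() expects the NEWBING_COOKIES setting to hold a JSON array of cookie objects in the browser-export format, because _Conversation in edge_gpt.py below feeds each entry to the session as cookie["name"] / cookie["value"]. A made-up example of the expected shape:

    # Illustrative shape only; the cookie names and values are placeholders.
    NEWBING_COOKIES = '''
    [
        {"name": "_U",   "value": "<value copied from the browser>"},
        {"name": "MUID", "value": "<another cookie value>"}
    ]
    '''
    import json
    cookies = json.loads(NEWBING_COOKIES)   # exactly what run() does before building NewbingChatbot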
def stream_chat(self, **kwargs):
|
||||||
|
"""
|
||||||
|
这个函数运行在主进程
|
||||||
|
"""
|
||||||
|
self.threadLock.acquire()
|
||||||
|
self.parent.send(kwargs) # 发送请求到子进程
|
||||||
|
while True:
|
||||||
|
res = self.parent.recv() # 等待newbing回复的片段
|
||||||
|
if res == '[Finish]':
|
||||||
|
break # 结束
|
||||||
|
elif res == '[Fail]':
|
||||||
|
self.success = False
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
yield res # newbing回复的片段
|
||||||
|
self.threadLock.release()
|
||||||
|
|
||||||
|
|
||||||
|
"""
|
||||||
|
========================================================================
|
||||||
|
第三部分:主进程统一调用函数接口
|
||||||
|
========================================================================
|
||||||
|
"""
|
||||||
|
global newbing_handle
|
||||||
|
newbing_handle = None
|
||||||
|
|
||||||
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
    """
    Multi-threaded method; see request_llm/bridge_all.py for the function's documentation
    """
    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        observe_window[0] = load_message + "\n\n" + newbing_handle.info
        if not newbing_handle.success:
            error = newbing_handle.info
            newbing_handle = None
            raise RuntimeError(error)

    # There is no sys_prompt interface, so put the prompt into the history
    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )

    watch_dog_patience = 5  # watchdog patience; 5 seconds is plenty
    response = ""
    observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        observe_window[0] = preprocess_newbing_out_simple(response)
        if len(observe_window) >= 2:
            if (time.time()-observe_window[1]) > watch_dog_patience:
                raise RuntimeError("程序终止。")
    return preprocess_newbing_out_simple(response)
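The observe_window convention above doubles as a watchdog: slot 0 carries the newest text back to the caller, slot 1 is a keep-alive timestamp the caller keeps refreshing, and the loop aborts once it goes stale for more than watch_dog_patience seconds. A compact illustration (the caller side is an assumption):

    import time

    observe_window = ["", time.time()]   # [latest text, last keep-alive timestamp]
    watch_dog_patience = 5

    def on_fragment(fragment):
        observe_window[0] = fragment     # report progress to the caller
        if (time.time() - observe_window[1]) > watch_dog_patience:
            raise RuntimeError("程序终止。")   # the caller stopped feeding the dog

    observe_window[1] = time.time()      # a live caller refreshes this periodically
    on_fragment("partial answer")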
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
    """
    Single-threaded method; see request_llm/bridge_all.py for the function's documentation
    """
    chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))

    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
        yield from update_ui(chatbot=chatbot, history=[])
        if not newbing_handle.success:
            newbing_handle = None
            return

    if additional_fn is not None:
        import core_functional
        importlib.reload(core_functional)    # hot-reload the prompts
        core_functional = core_functional.get_core_functions()
        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # fetch the preprocessing function (if any)
        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]

    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )

    chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
    response = "[Local Message]: 等待NewBing响应中 ..."
    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        chatbot[-1] = (inputs, preprocess_newbing_out(response))
        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")

    history.extend([inputs, preprocess_newbing_out(response)])
    yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
request_llm/edge_gpt.py (new file, 409 lines)

@@ -0,0 +1,409 @@
"""
========================================================================
Part 1: from EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""

import argparse
import asyncio
import json
import os
import random
import re
import ssl
import sys
import uuid
from enum import Enum
from typing import Generator
from typing import Literal
from typing import Optional
from typing import Union
import websockets.client as websockets

DELIMITER = "\x1e"


# Generate random IP between range 13.104.0.0/14
FORWARDED_IP = (
    f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
)
HEADERS = {
    "accept": "application/json",
    "accept-language": "en-US,en;q=0.9",
    "content-type": "application/json",
    "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
    "sec-ch-ua-arch": '"x86"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-full-version": '"109.0.1518.78"',
    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": "",
    "sec-ch-ua-platform": '"Windows"',
    "sec-ch-ua-platform-version": '"15.0.0"',
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "x-ms-client-request-id": str(uuid.uuid4()),
    "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
    "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
    "Referrer-Policy": "origin-when-cross-origin",
    "x-forwarded-for": FORWARDED_IP,
}

HEADERS_INIT_CONVER = {
    "authority": "edgeservices.bing.com",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
    "accept-language": "en-US,en;q=0.9",
    "cache-control": "max-age=0",
    "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
    "sec-ch-ua-arch": '"x86"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-full-version": '"110.0.1587.69"',
    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": '""',
    "sec-ch-ua-platform": '"Windows"',
    "sec-ch-ua-platform-version": '"15.0.0"',
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "none",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
    "x-edge-shopping-flag": "1",
    "x-forwarded-for": FORWARDED_IP,
}
def get_ssl_context():
    import certifi
    ssl_context = ssl.create_default_context()
    ssl_context.load_verify_locations(certifi.where())
    return ssl_context


class NotAllowedToAccess(Exception):
    pass


class ConversationStyle(Enum):
    creative = "h3imaginative,clgalileo,gencontentv3"
    balanced = "galileo"
    precise = "h3precise,clgalileo"


CONVERSATION_STYLE_TYPE = Optional[
    Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
]


def _append_identifier(msg: dict) -> str:
    """
    Appends special character to end of message to identify end of message
    """
    # Convert dict to json string
    return json.dumps(msg) + DELIMITER


def _get_ran_hex(length: int = 32) -> str:
    """
    Returns random hex string
    """
    return "".join(random.choice("0123456789abcdef") for _ in range(length))
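_append_identifier frames every JSON message with the \x1e record separator; that is how the ChatHub websocket traffic delimits messages, and ask_stream below splits incoming text on the same DELIMITER. For instance:

    msg = _append_identifier({"protocol": "json", "version": 1})
    # msg == '{"protocol": "json", "version": 1}\x1e'
    frames = "first\x1esecond\x1e".split(DELIMITER)   # -> ['first', 'second', '']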
class _ChatHubRequest:
    """
    Request object for ChatHub
    """

    def __init__(
        self,
        conversation_signature: str,
        client_id: str,
        conversation_id: str,
        invocation_id: int = 0,
    ) -> None:
        self.struct: dict = {}

        self.client_id: str = client_id
        self.conversation_id: str = conversation_id
        self.conversation_signature: str = conversation_signature
        self.invocation_id: int = invocation_id

    def update(
        self,
        prompt,
        conversation_style,
        options,
    ) -> None:
        """
        Updates request object
        """
        if options is None:
            options = [
                "deepleo",
                "enable_debug_commands",
                "disable_emoji_spoken_text",
                "enablemm",
            ]
        if conversation_style:
            if not isinstance(conversation_style, ConversationStyle):
                conversation_style = getattr(ConversationStyle, conversation_style)
            options = [
                "nlu_direct_response_filter",
                "deepleo",
                "disable_emoji_spoken_text",
                "responsible_ai_policy_235",
                "enablemm",
                conversation_style.value,
                "dtappid",
                "cricinfo",
                "cricinfov2",
                "dv3sugg",
            ]
        self.struct = {
            "arguments": [
                {
                    "source": "cib",
                    "optionsSets": options,
                    "sliceIds": [
                        "222dtappid",
                        "225cricinfo",
                        "224locals0",
                    ],
                    "traceId": _get_ran_hex(32),
                    "isStartOfSession": self.invocation_id == 0,
                    "message": {
                        "author": "user",
                        "inputMethod": "Keyboard",
                        "text": prompt,
                        "messageType": "Chat",
                    },
                    "conversationSignature": self.conversation_signature,
                    "participant": {
                        "id": self.client_id,
                    },
                    "conversationId": self.conversation_id,
                },
            ],
            "invocationId": str(self.invocation_id),
            "target": "chat",
            "type": 4,
        }
        self.invocation_id += 1
class _Conversation:
    """
    Conversation API
    """

    def __init__(
        self,
        cookies,
        proxy,
    ) -> None:
        self.struct: dict = {
            "conversationId": None,
            "clientId": None,
            "conversationSignature": None,
            "result": {"value": "Success", "message": None},
        }
        import httpx
        self.proxy = proxy
        proxy = (
            proxy
            or os.environ.get("all_proxy")
            or os.environ.get("ALL_PROXY")
            or os.environ.get("https_proxy")
            or os.environ.get("HTTPS_PROXY")
            or None
        )
        if proxy is not None and proxy.startswith("socks5h://"):
            proxy = "socks5://" + proxy[len("socks5h://") :]
        self.session = httpx.Client(
            proxies=proxy,
            timeout=30,
            headers=HEADERS_INIT_CONVER,
        )
        for cookie in cookies:
            self.session.cookies.set(cookie["name"], cookie["value"])

        # Send GET request
        response = self.session.get(
            url=os.environ.get("BING_PROXY_URL")
            or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
        )
        if response.status_code != 200:
            response = self.session.get(
                "https://edge.churchless.tech/edgesvc/turing/conversation/create",
            )
        if response.status_code != 200:
            print(f"Status code: {response.status_code}")
            print(response.text)
            print(response.url)
            raise Exception("Authentication failed")
        try:
            self.struct = response.json()
        except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
            raise Exception(
                "Authentication failed. You have not been accepted into the beta.",
            ) from exc
        if self.struct["result"]["value"] == "UnauthorizedRequest":
            raise NotAllowedToAccess(self.struct["result"]["message"])
class _ChatHub:
    """
    Chat API
    """

    def __init__(self, conversation) -> None:
        self.wss = None
        self.request: _ChatHubRequest
        self.loop: bool
        self.task: asyncio.Task
        print(conversation.struct)
        self.request = _ChatHubRequest(
            conversation_signature=conversation.struct["conversationSignature"],
            client_id=conversation.struct["clientId"],
            conversation_id=conversation.struct["conversationId"],
        )

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()
        # Check if websocket is closed
        self.wss = await websockets.connect(
            wss_link,
            extra_headers=HEADERS,
            max_size=None,
            ssl=get_ssl_context()
        )
        await self._initial_handshake()
        # Construct a ChatHub request
        self.request.update(
            prompt=prompt,
            conversation_style=conversation_style,
            options=options,
        )
        # Send request
        await self.wss.send(_append_identifier(self.request.struct))
        final = False
        while not final:
            objects = str(await self.wss.recv()).split(DELIMITER)
            for obj in objects:
                if obj is None or not obj:
                    continue
                response = json.loads(obj)
                if response.get("type") != 2 and raw:
                    yield False, response
                elif response.get("type") == 1 and response["arguments"][0].get(
                    "messages",
                ):
                    resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
                        0
                    ]["body"][0].get("text")
                    yield False, resp_txt
                elif response.get("type") == 2:
                    final = True
                    yield True, response

    async def _initial_handshake(self) -> None:
        await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
        await self.wss.recv()

    async def close(self) -> None:
        """
        Close the connection
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()
class NewbingChatbot:
    """
    Combines everything to make it seamless
    """

    def __init__(
        self,
        cookies,
        proxy
    ) -> None:
        if cookies is None:
            cookies = {}
        self.cookies = cookies
        self.proxy = proxy
        self.chat_hub: _ChatHub = _ChatHub(
            _Conversation(self.cookies, self.proxy),
        )

    async def ask(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        options: dict = None,
    ) -> dict:
        """
        Ask a question to the bot
        """
        async for final, response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            options=options,
        ):
            if final:
                return response
        await self.chat_hub.wss.close()
        return None

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        async for response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            raw=raw,
            options=options,
        ):
            yield response

    async def close(self) -> None:
        """
        Close the connection
        """
        await self.chat_hub.close()

    async def reset(self) -> None:
        """
        Reset the conversation
        """
        await self.close()
        self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
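Outside the subprocess plumbing of bridge_newbing.py, the class can also be driven directly with asyncio; a minimal sketch (the cookie list and proxy are placeholders):

    import asyncio

    async def demo():
        bot = NewbingChatbot(cookies=[{"name": "_U", "value": "<cookie>"}], proxy=None)
        async for final, response in bot.ask_stream(
                prompt="Hello",
                conversation_style="balanced",                     # creative / balanced / precise
                wss_link="wss://sydney.bing.com/sydney/ChatHub"):  # the endpoint noted in bridge_newbing.py
            if not final:
                print(response)   # the streamed text so far
        await bot.close()

    # asyncio.run(demo())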
request_llm/requirements_newbing.txt (new file, 8 lines)

@@ -0,0 +1,8 @@
BingImageCreator
certifi
httpx
prompt_toolkit
requests
rich
websockets
httpx[socks]
theme.py (247 lines)

@@ -137,6 +137,16 @@ advanced_css = """

 /* Inline code: light-grey background, rounded corners and spacing. */
 .markdown-body code {
+    display: inline;
+    white-space: break-spaces;
+    border-radius: 6px;
+    margin: 0 2px 0 2px;
+    padding: .2em .4em .1em .4em;
+    background-color: rgba(13, 17, 23, 0.95);
+    color: #c9d1d9;
+}
+
+.dark .markdown-body code {
     display: inline;
     white-space: break-spaces;
     border-radius: 6px;
@@ -144,8 +154,19 @@ advanced_css = """
     padding: .2em .4em .1em .4em;
     background-color: rgba(175,184,193,0.2);
 }

 /* Code-block styling: background colour, padding, margin, rounded corners. */
 .markdown-body pre code {
+    display: block;
+    overflow: auto;
+    white-space: pre;
+    background-color: rgba(13, 17, 23, 0.95);
+    border-radius: 10px;
+    padding: 1em;
+    margin: 1em 2em 1em 0.5em;
+}
+
+.dark .markdown-body pre code {
     display: block;
     overflow: auto;
     white-space: pre;
@@ -160,72 +181,162 @@ advanced_css = """
 if CODE_HIGHLIGHT:
     advanced_css += """

-.hll { background-color: #ffffcc }
-.c { color: #3D7B7B; font-style: italic } /* Comment */
-.err { border: 1px solid #FF0000 } /* Error */
-.k { color: hsl(197, 94%, 51%); font-weight: bold } /* Keyword */
-.o { color: #666666 } /* Operator */
-.ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */
-.cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */
-.cp { color: #9C6500 } /* Comment.Preproc */
-.cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */
-.c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */
-.cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
-.gd { color: #A00000 } /* Generic.Deleted */
-.ge { font-style: italic } /* Generic.Emph */
-.gr { color: #E40000 } /* Generic.Error */
-.gh { color: #000080; font-weight: bold } /* Generic.Heading */
-.gi { color: #008400 } /* Generic.Inserted */
-.go { color: #717171 } /* Generic.Output */
-.gp { color: #000080; font-weight: bold } /* Generic.Prompt */
-.gs { font-weight: bold } /* Generic.Strong */
-.gu { color: #800080; font-weight: bold } /* Generic.Subheading */
-.gt { color: #a9dd00 } /* Generic.Traceback */
-.kc { color: #008000; font-weight: bold } /* Keyword.Constant */
-.kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
-.kn { color: #008000; font-weight: bold } /* Keyword.Namespace */
-.kp { color: #008000 } /* Keyword.Pseudo */
-.kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
-.kt { color: #B00040 } /* Keyword.Type */
-.m { color: #666666 } /* Literal.Number */
-.s { color: #BA2121 } /* Literal.String */
-.na { color: #687822 } /* Name.Attribute */
-.nb { color: #e5f8c3 } /* Name.Builtin */
-.nc { color: #ffad65; font-weight: bold } /* Name.Class */
-.no { color: #880000 } /* Name.Constant */
-.nd { color: #AA22FF } /* Name.Decorator */
-.ni { color: #717171; font-weight: bold } /* Name.Entity */
-.ne { color: #CB3F38; font-weight: bold } /* Name.Exception */
-.nf { color: #f9f978 } /* Name.Function */
-.nl { color: #767600 } /* Name.Label */
-.nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
-.nt { color: #008000; font-weight: bold } /* Name.Tag */
-.nv { color: #19177C } /* Name.Variable */
-.ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
-.w { color: #bbbbbb } /* Text.Whitespace */
-.mb { color: #666666 } /* Literal.Number.Bin */
-.mf { color: #666666 } /* Literal.Number.Float */
-.mh { color: #666666 } /* Literal.Number.Hex */
-.mi { color: #666666 } /* Literal.Number.Integer */
-.mo { color: #666666 } /* Literal.Number.Oct */
-.sa { color: #BA2121 } /* Literal.String.Affix */
-.sb { color: #BA2121 } /* Literal.String.Backtick */
-.sc { color: #BA2121 } /* Literal.String.Char */
-.dl { color: #BA2121 } /* Literal.String.Delimiter */
-.sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
-.s2 { color: #2bf840 } /* Literal.String.Double */
-.se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */
-.sh { color: #BA2121 } /* Literal.String.Heredoc */
-.si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */
-.sx { color: #008000 } /* Literal.String.Other */
-.sr { color: #A45A77 } /* Literal.String.Regex */
-.s1 { color: #BA2121 } /* Literal.String.Single */
-.ss { color: #19177C } /* Literal.String.Symbol */
-.bp { color: #008000 } /* Name.Builtin.Pseudo */
-.fm { color: #0000FF } /* Name.Function.Magic */
-.vc { color: #19177C } /* Name.Variable.Class */
-.vg { color: #19177C } /* Name.Variable.Global */
-.vi { color: #19177C } /* Name.Variable.Instance */
-.vm { color: #19177C } /* Name.Variable.Magic */
-.il { color: #666666 } /* Literal.Number.Integer.Long */
+.codehilite .hll { background-color: #6e7681 }
+.codehilite .c { color: #8b949e; font-style: italic } /* Comment */
+.codehilite .err { color: #f85149 } /* Error */
+.codehilite .esc { color: #c9d1d9 } /* Escape */
+.codehilite .g { color: #c9d1d9 } /* Generic */
+.codehilite .k { color: #ff7b72 } /* Keyword */
+.codehilite .l { color: #a5d6ff } /* Literal */
+.codehilite .n { color: #c9d1d9 } /* Name */
+.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */
+.codehilite .x { color: #c9d1d9 } /* Other */
+.codehilite .p { color: #c9d1d9 } /* Punctuation */
+.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */
+.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */
+.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */
+.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */
+.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */
+.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */
+.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */
+.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */
+.codehilite .gr { color: #ffa198 } /* Generic.Error */
+.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */
+.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */
+.codehilite .go { color: #8b949e } /* Generic.Output */
+.codehilite .gp { color: #8b949e } /* Generic.Prompt */
+.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */
+.codehilite .gu { color: #79c0ff } /* Generic.Subheading */
+.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */
+.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */
+.codehilite .kc { color: #79c0ff } /* Keyword.Constant */
+.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */
+.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */
+.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */
+.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */
+.codehilite .kt { color: #ff7b72 } /* Keyword.Type */
+.codehilite .ld { color: #79c0ff } /* Literal.Date */
+.codehilite .m { color: #a5d6ff } /* Literal.Number */
+.codehilite .s { color: #a5d6ff } /* Literal.String */
+.codehilite .na { color: #c9d1d9 } /* Name.Attribute */
+.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */
+.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */
+.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */
+.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */
+.codehilite .ni { color: #ffa657 } /* Name.Entity */
+.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */
+.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */
+.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */
+.codehilite .nn { color: #ff7b72 } /* Name.Namespace */
+.codehilite .nx { color: #c9d1d9 } /* Name.Other */
+.codehilite .py { color: #79c0ff } /* Name.Property */
+.codehilite .nt { color: #7ee787 } /* Name.Tag */
+.codehilite .nv { color: #79c0ff } /* Name.Variable */
+.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */
+.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */
+.codehilite .w { color: #6e7681 } /* Text.Whitespace */
+.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */
+.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */
+.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */
+.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */
+.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */
+.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */
+.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */
+.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */
+.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */
+.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */
+.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */
+.codehilite .se { color: #79c0ff } /* Literal.String.Escape */
+.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */
+.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */
+.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */
+.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */
+.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */
+.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */
+.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */
+.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */
+.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */
+.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */
+.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */
+.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */
+.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */
+
+.dark .codehilite .hll { background-color: #2C3B41 }
+.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */
+.dark .codehilite .err { color: #FF5370 } /* Error */
+.dark .codehilite .esc { color: #89DDFF } /* Escape */
+.dark .codehilite .g { color: #EEFFFF } /* Generic */
+.dark .codehilite .k { color: #BB80B3 } /* Keyword */
+.dark .codehilite .l { color: #C3E88D } /* Literal */
+.dark .codehilite .n { color: #EEFFFF } /* Name */
+.dark .codehilite .o { color: #89DDFF } /* Operator */
+.dark .codehilite .p { color: #89DDFF } /* Punctuation */
+.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */
+.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */
+.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */
+.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */
+.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */
+.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */
+.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */
+.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */
+.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */
+.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */
+.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */
+.dark .codehilite .go { color: #79d618 } /* Generic.Output */
+.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */
+.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */
+.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */
+.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */
+.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */
+.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */
+.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */
+.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */
+.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */
+.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */
+.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */
+.dark .codehilite .m { color: #F78C6C } /* Literal.Number */
+.dark .codehilite .s { color: #C3E88D } /* Literal.String */
+.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */
+.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */
+.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */
+.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */
+.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */
+.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */
+.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */
+.dark .codehilite .nf { color: #82AAFF } /* Name.Function */
+.dark .codehilite .nl { color: #82AAFF } /* Name.Label */
+.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */
+.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */
+.dark .codehilite .py { color: #FFCB6B } /* Name.Property */
+.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */
+.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */
+.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */
+.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */
+.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */
+.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */
+.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */
+.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */
+.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */
+.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */
+.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */
+.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */
+.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */
+.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */
+.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */
+.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */
+.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */
+.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */
+.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */
+.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */
+.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */
+.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */
+.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */
+.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */
+.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */
+.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */
+.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */
+.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
+.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
+.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */

 """
toolbox.py (153 lines)

@@ -5,7 +5,20 @@ import inspect
 import re
 from latex2mathml.converter import convert as tex2mathml
 from functools import wraps, lru_cache
-############################### Plugin I/O adapter area #######################################
+
+"""
+========================================================================
+Part 1
+The plugin input/output adapter area
+    - ChatBotWithCookies: a Chatbot class that carries cookies, the basis for more powerful features
+    - ArgsGeneralWrapper: a decorator that reorganizes the order and structure of the input arguments
+    - update_ui: refresh the UI with yield from update_ui(chatbot, history)
+    - CatchException: surface any problem raised inside a plugin on the UI
+    - HotReload: hot-reload for plugins
+    - trimmed_format_exc: print the traceback, hiding absolute paths for safety
+========================================================================
+"""

 class ChatBotWithCookies(list):
     def __init__(self, cookie):
         self._cookies = cookie
@@ -20,33 +33,35 @@ class ChatBotWithCookies(list):
     def get_cookies(self):
         return self._cookies


 def ArgsGeneralWrapper(f):
     """
     A decorator that reorganizes the order and structure of the input arguments.
     """
-    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args):
+    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg, *args):
         txt_passon = txt
         if txt == "" and txt2 != "": txt_passon = txt2
         # Bring in a chatbot that carries cookies
         cookies.update({
             'top_p':top_p,
             'temperature':temperature,
         })
         llm_kwargs = {
             'api_key': cookies['api_key'],
             'llm_model': llm_model,
             'top_p':top_p,
             'max_length': max_length,
             'temperature':temperature,
         }
         plugin_kwargs = {
-            # nothing here yet
+            "advanced_arg": plugin_advanced_arg,
         }
         chatbot_with_cookie = ChatBotWithCookies(cookies)
         chatbot_with_cookie.write_list(chatbot)
         yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
     return decorated


 def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     """
     Refresh the user interface
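With this change every function plugin receives the contents of the advanced-argument textbox as plugin_kwargs["advanced_arg"]; a hypothetical plugin would read it like so:

    def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        # Hypothetical plugin, for illustration only.
        advanced_arg = plugin_kwargs.get("advanced_arg", "")
        chatbot.append((txt, f"advanced arg = {advanced_arg!r}"))
        yield from update_ui(chatbot=chatbot, history=history)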
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
     yield chatbot.get_cookies(), chatbot, history, msg

+def trimmed_format_exc():
+    import os, traceback
+    str = traceback.format_exc()
+    current_path = os.getcwd()
+    replace_path = "."
+    return str.replace(current_path, replace_path)
+
 def CatchException(f):
     """
     A decorator that catches any exception raised in function f, wraps it into a generator, and shows it in the chat.
     """
+
     @wraps(f)
     def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
         try:
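trimmed_format_exc simply rewrites the current working directory to "." inside the traceback, so absolute paths never reach the UI; for instance:

    try:
        1 / 0
    except Exception:
        print(trimmed_format_exc())
        # Traceback (most recent call last):
        #   File "./demo.py", line 2, in <module>   <- cwd replaced by "." (illustrative output)
        # ZeroDivisionError: division by zero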
@@ -66,7 +89,7 @@ def CatchException(f):
             from check_proxy import check_proxy
             from toolbox import get_conf
             proxies, = get_conf('proxies')
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             if chatbot is None or len(chatbot) == 0:
                 chatbot = [["插件调度异常", "异常原因"]]
             chatbot[-1] = (chatbot[-1][0],
@@ -93,7 +116,23 @@ def HotReload(f):
     return decorated


-####################################### Other small tools #####################################
+"""
+========================================================================
+Part 2
+Other small tools:
+    - write_results_to_file: write results into a markdown file
+    - regular_txt_to_markdown: convert plain text into Markdown-formatted text
+    - report_execption: append a simple unexpected-error message to the chatbot
+    - text_divide_paragraph: split text on paragraph separators and emit HTML with paragraph tags
+    - markdown_convertion: convert markdown into good-looking html, combining several methods
+    - format_io: take over gradio's default markdown handling
+    - on_file_uploaded: handle file uploads (with automatic decompression)
+    - on_report_generated: automatically project generated reports into the file-upload area
+    - clip_history: automatically truncate the history when the context grows too long
+    - get_conf: read the settings
+    - select_api_key: pick an available api-key for the current model type
+========================================================================
+"""

 def get_reduce_token_percent(text):
     """
@@ -113,7 +152,6 @@ def get_reduce_token_percent(text):
     return 0.5, '不详'


-
 def write_results_to_file(history, file_name=None):
     """
     Write the conversation record `history` to a file in Markdown format. If no file name is given, generate one from the current time.
@@ -219,7 +257,7 @@ def markdown_convertion(txt):
             return content
         else:
             return tex2mathml_catch_exception(content)

     def markdown_bug_hunt(content):
         """
         Fix an mdx_math bug (a redundant <script> when a begin command is wrapped in a single $)
@@ -227,7 +265,7 @@ def markdown_convertion(txt):
         content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
         content = content.replace('</script>\n</script>', '</script>')
         return content

     if ('$' in txt) and ('```' not in txt):  # formulas flagged with $, and no ``` code-block markers
         # convert everything to html format
@@ -248,7 +286,7 @@ def markdown_convertion(txt):
     def close_up_code_segment_during_stream(gpt_reply):
         """
         While GPT is still in the middle of emitting a code block (the opening ``` is out, but the closing ``` is not yet), append the closing ```

         Args:
             gpt_reply (str): the reply string returned by the GPT model
@@ -369,6 +407,9 @@ def find_recent_files(directory):


 def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+    """
+    Callback invoked when files are uploaded
+    """
     if len(files) == 0:
         return chatbot, txt
     import shutil
```diff
@@ -388,8 +429,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
         shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
         err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                    dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
-    moved_files = [fp for fp in glob.glob(
-        'private_upload/**/*', recursive=True)]
+    moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
     if "底部输入区" in checkboxes:
         txt = ""
         txt2 = f'private_upload/{time_tag}'
```
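Taken together, the hunk's upload path is: copy each file into a time-tagged folder under `private_upload/`, attempt archive extraction beside it, then list everything recursively. A self-contained sketch of that flow — `extract_archive` is the project's helper, so the stub here is an assumption added only for runnability:

```python
import glob, os, shutil, time

def extract_archive_stub(file_path, dest_dir):
    # stand-in for the project's extract_archive; returns an error string ('' on success)
    return ''

def handle_uploads_sketch(uploaded_paths):
    time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
    os.makedirs(f'private_upload/{time_tag}', exist_ok=True)
    err_msg = ''
    for p in uploaded_paths:
        name = os.path.basename(p)
        shutil.copy(p, f'private_upload/{time_tag}/{name}')
        err_msg += extract_archive_stub(f'private_upload/{time_tag}/{name}',
                                        dest_dir=f'private_upload/{time_tag}/{name}.extract')
    # recursive=True lets ** descend into folders produced by extraction
    moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
    return moved_files, err_msg
```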
```diff
@@ -508,10 +548,10 @@ def clear_line_break(txt):
 class DummyWith():
     """
     This code defines an empty context manager named DummyWith;
-    its purpose is... er... nothing at all, i.e. to stand in for another context manager without changing the code structure.
+    its purpose is... er... precisely to do nothing, i.e. to stand in for another context manager without changing the code structure.
     A context manager is a Python object meant to be used with the `with` statement,
     ensuring that certain resources are correctly initialized and cleaned up while a code block runs.
     A context manager must implement two methods, __enter__() and __exit__().
     __enter__() is called before the code block starts executing,
     and __exit__() is called when the context ends.
     """
```
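The value of such a null object is that call sites keep a single `with` statement and merely swap the manager. A short usage example, completing the class with the `__enter__` that this hunk does not show:

```python
import threading

class DummyWith():
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        return

def do_work(thread_safe: bool):
    lock = threading.Lock()
    # one code path, two behaviours: a real lock, or a do-nothing stand-in
    with (lock if thread_safe else DummyWith()):
        return "done"

print(do_work(True), do_work(False))
```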
```diff
@@ -520,3 +560,86 @@ class DummyWith():
 
     def __exit__(self, exc_type, exc_value, traceback):
         return
+
+
+def run_gradio_in_subpath(demo, auth, port, custom_path):
+    """
+    Move gradio's serving address onto the given sub-path
+    """
+    def is_path_legal(path: str)->bool:
+        '''
+        check path for sub url
+        path: path to check
+        return value: do sub url wrap
+        '''
+        if path == "/": return True
+        if len(path) == 0:
+            print("ilegal custom path: {}\npath must not be empty\ndeploy on root url".format(path))
+            return False
+        if path[0] == '/':
+            if path[1] != '/':
+                print("deploy on sub-path {}".format(path))
+                return True
+            return False
+        print("ilegal custom path: {}\npath should begin with \'/\'\ndeploy on root url".format(path))
+        return False
+
+    if not is_path_legal(custom_path): raise RuntimeError('Ilegal custom path')
+    import uvicorn
+    import gradio as gr
+    from fastapi import FastAPI
+    app = FastAPI()
+    if custom_path != "/":
+        @app.get("/")
+        def read_main():
+            return {"message": f"Gradio is running at: {custom_path}"}
+    app = gr.mount_gradio_app(app, demo, path=custom_path)
+    uvicorn.run(app, host="0.0.0.0", port=port)  # , auth=auth
+
+
+def clip_history(inputs, history, tokenizer, max_token_limit):
+    """
+    Reduce the length of the history by clipping.
+    This function searches for the longest entries and clips them, little by little,
+    until the token count of the history drops below the threshold.
+    """
+    import numpy as np
+    from request_llm.bridge_all import model_info
+    def get_token_num(txt):
+        return len(tokenizer.encode(txt, disallowed_special=()))
+    input_token_num = get_token_num(inputs)
+    if input_token_num < max_token_limit * 3 / 4:
+        # When the input takes up less than 3/4 of the limit, clip as follows:
+        # 1. reserve headroom for the input
+        max_token_limit = max_token_limit - input_token_num
+        # 2. reserve headroom for the output
+        max_token_limit = max_token_limit - 128
+        # 3. if the remaining budget is too small, clear the history outright
+        if max_token_limit < 128:
+            history = []
+            return history
+    else:
+        # When the input takes up more than 3/4 of the limit, clear the history outright
+        history = []
+        return history
+
+    everything = ['']
+    everything.extend(history)
+    n_token = get_token_num('\n'.join(everything))
+    everything_token = [get_token_num(e) for e in everything]
+
+    # granularity of each clipping step
+    delta = max(everything_token) // 16
+
+    while n_token > max_token_limit:
+        where = np.argmax(everything_token)
+        encoded = tokenizer.encode(everything[where], disallowed_special=())
+        clipped_encoded = encoded[:len(encoded)-delta]
+        everything[where] = tokenizer.decode(clipped_encoded)[:-1]  # -1 to drop a possibly broken trailing char
+        everything_token[where] = get_token_num(everything[where])
+        n_token = get_token_num('\n'.join(everything))
+
+    history = everything[1:]
+    return history
```
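`clip_history` only needs a tokenizer exposing `encode`/`decode`, so a toy character-level tokenizer is enough to watch one clipping step. This demo is illustrative — the project itself passes a tiktoken-style tokenizer — and re-enacts a single iteration of the loop above:

```python
import numpy as np

class ToyTokenizer:
    # character-level stand-in for the tiktoken-style tokenizer the project passes in
    def encode(self, txt, disallowed_special=()):
        return [ord(c) for c in txt]
    def decode(self, ids):
        return ''.join(chr(i) for i in ids)

tok = ToyTokenizer()
everything = ['', 'short question', 'x' * 500, 'short answer']    # [inputs slot, *history]
everything_token = [len(tok.encode(e)) for e in everything]
delta = max(everything_token) // 16                               # granularity rule from the diff
where = int(np.argmax(everything_token))                          # clip the longest entry first
encoded = tok.encode(everything[where])
everything[where] = tok.decode(encoded[:len(encoded) - delta])[:-1]
print(len(everything[where]))   # 500 -> 468: one delta-sized clip plus the trailing-char guard
```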
version (4 changed lines)

```diff
@@ -1,5 +1,5 @@
 {
-"version": 3.15,
+"version": 3.3,
 "show_feature": true,
-"new_feature": "添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
+"new_feature": "支持NewBing !! <-> 保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
 }
```