Compare commits

...

26 commits

Author SHA1 Message Commit date
Yuki
163f12c533 # fix com_zhipuglm.py illegal temperature problem (#1687)
* Update com_zhipuglm.py

# fix: users hit an invalid temperature argument error when using the zhipuai interface
2024-04-08 12:17:07 +08:00
binary-husky
bdd46c5dd1 Version 3.74: Merge latest updates on dev branch (frontier) (#1621)
* Update version to 3.74

* Add support for Yi Model API (#1635)

* Update to support the Yi (零一万物) models

* Remove newbing

* Update config

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Refactor function signatures in bridge files

* fix qwen api change

* rename and ref functions

* rename and move some cookie functions

* Add the haiku model and endpoint configuration notes (#1626)

* haiku added

* Add haiku and endpoint configuration notes

* Haiku added

* Sync the notes with the latest endpoint

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Add file access authentication under the private_upload directory (#1596)

* File access authentication under the private_upload directory

* minor fastapi adjustment

* Add logging functionality to enable saving
conversation records

* waiting to fix username retrieve

* support secondary web path

* allow accessing default user dir

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* remove yaml deps

* fix favicon

* fix abs path auth problem

* forget to write a return

* add `dashscope` to deps

* fix GHSA-v9q9-xj86-953p

* Patch for unauthorized access via overlapping usernames (#1681)

* add cohere model api access

* cohere + can_multi_thread

* fix block user access(fail)

* fix fastapi bug

* change cohere api endpoint

* explain version

---------

Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: Skyzayre <120616113+Skyzayre@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
2024-04-08 11:49:30 +08:00
binary-husky
ae51a0e686 fix GHSA-v9q9-xj86-953p 2024-04-05 20:47:11 +08:00
binary-husky
f2582ea137 fix qwen api change 2024-04-03 12:17:41 +08:00
binary-husky
ddd2fd84da fix checkbox bugs 2024-04-02 19:42:55 +08:00
binary-husky
6c90ff80ea add prompt and temperature to cookie 2024-04-02 18:02:00 +08:00
binary-husky
cb7c0703be Update requirements.txt (#1668) 2024-04-01 11:30:50 +08:00
binary-husky
5181cd441d change pip install url due to server failure (#1667) 2024-04-01 11:20:14 +08:00
binary-husky
216d4374e7 fix color list overflow 2024-04-01 00:11:32 +08:00
iluem
8af6c0cab6 Qhaoduoyu patch 1: pickle to json to increase security (#1648)
* Update theme.py

fix bugs

* Update theme.py

fix bugs

* change var names

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-25 09:54:30 +08:00
binary-husky
67ad041372 fix issue #1640 2024-03-20 18:09:37 +08:00
binary-husky
725c72229c update docker compose 2024-03-20 17:37:03 +08:00
Menghuan1918
e42ede512b Update Claude3 api request and fix some bugs (#1641)
* Update version to 3.74

* Add support for Yi Model API (#1635)

* Update to support the Yi (零一万物) models

* Remove newbing

* Update config

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update claude request to http type

* Update for endpoint

* Add support for other types of pictures

* Update pip packages

* Fix console_slience issue while error handling

* revert version changes

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-20 17:22:23 +08:00
binary-husky
84ccc9e64c fix claude + oneapi error 2024-03-17 14:53:28 +08:00
binary-husky
c172847e19 add python annotations for toolbox functions 2024-03-16 22:54:33 +08:00
binary-husky
d166d25eb4 resolve invalid escape sequence warning
to support python3.12
2024-03-11 18:10:05 +08:00
binary-husky
516bbb1331 Update README.md 2024-03-11 17:40:16 +08:00
binary-husky
c3140ce344 merge frontier branch (#1620)
* Zhipu SDK update: adapt to the latest Zhipu SDK and support GLM-4V (#1502)

* Adapt google gemini; extract files from the user input

* Adapt to the latest Zhipu SDK; support glm-4v

* requirements.txt fix

* pending history check

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update "生成多种Mermaid图表" plugin: Separate out the file reading function (#1520)

* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update with read md file and update PROMPTS

* Return the PROMPTS as the test found that the initial version worked best

* Update Mermaid chart generation function

* version 3.71

* Resolve issue #1510

* Remove unnecessary text from sys_prompt in 解析历史输入 function

* Remove sys_prompt message in 解析历史输入 function

* Update bridge_all.py: supports gpt-4-turbo-preview (#1517)

* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Update config.py: supports gpt-4-turbo-preview (#1516)

* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Refactor 解析历史输入 function to handle file input

* Update Mermaid chart generation functionality

* rename files and functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Integrate the Mathpix OCR feature (#1468)

* Update Latex输出PDF结果.py

Use Mathpix to translate a PDF into Chinese and recompile it into a PDF

* Update config.py

add mathpix appid & appkey

* Add 'PDF翻译中文并重新编译PDF' feature to plugins.

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* fix zhipuai

* check picture

* remove glm-4 due to bug

* Update config

* Check MATHPIX_APPID

* Remove unnecessary code and update
function_plugins dictionary

* capture non-standard token overflow

* bug fix #1524

* change mermaid style

* Support zoom in/out/reset for mermaid charts via mouse scroll and drag (#1530)

* Support zoom in/out/reset for mermaid charts via mouse scroll and drag

* Tweaks inconclusive; staging the work for now

* update

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* ver 3.72

* change live2d

* save the status of `clear btn` in cookie

* Persist front-end selections

* js ui bug fix

* reset btn bug fix

* update live2d tips

* fix missing get_token_num method

* fix live2d toggle switch

* fix persistent custom btn with cookie

* fix zhipuai feedback with core functionality

* Refactor button update and clean up functions

* tailing space removal

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY
configuration

* Prompt fix; improve the mind-map prompts (#1537)

* Adapt google gemini; extract files from the user input

* Improve the mind-map prompts

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY
configuration

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Improve the “PDF翻译中文并重新编译PDF” plugin (#1602)

* Add gemini_endpoint to API_URL_REDIRECT (#1560)

* Add gemini_endpoint to API_URL_REDIRECT

* Update gemini-pro and gemini-pro-vision model_info
endpoints

* Update to support new claude models (#1606)

* Add anthropic library and update claude models

* Update bridge_claude.py to add support for image input and fix several bugs.

* Add a Claude_3_Models variable to limit the number of images

* Refactor code to improve readability and
maintainability

* minor claude bug fix

* more flexible one-api support

* reformat config

* fix one-api new access bug

* dummy

* compat non-standard api

* version 3.73

---------

Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
2024-03-11 17:26:09 +08:00
binary-husky
cd18663800 compat non-standard api - 2 2024-03-10 17:13:54 +08:00
binary-husky
dbf1322836 compat non-standard api 2024-03-10 17:07:59 +08:00
XIao
98dd3ae1c0 Moonshot: add available models in config.py (#1603)
* Support the Moonshot (月之暗面) API

* Fix wording

* Improve the no-UI return value; keep uploading conversation history files to moonshot

* fix

* Add the available-model options to config

* add `can_multi_thread` model attr (#1598)

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-05 16:07:05 +08:00
binary-husky
3036709496 add can_multi_thread model attr (#1598) 2024-03-05 15:58:18 +08:00
XIao
8e9c07644f Support the Moonshot API and file-based conversations (#1597)
* Support the Moonshot (月之暗面) API

* Fix wording
2024-03-03 23:42:17 +08:00
binary-husky
90d96b77e6 handle qianfan chat error 2024-02-29 00:36:06 +08:00
binary-husky
66c876a9ca Update README.md 2024-02-26 22:56:09 +08:00
binary-husky
0665eb75ed Update README.md (#1581) 2024-02-26 22:52:00 +08:00
103 files changed, with 2,690 insertions and 1,035 deletions

View file

@@ -1,7 +1,6 @@
> [!IMPORTANT]
+> 2024.3.11: 恭迎Claude3和Moonshot,全力支持Qwen、GLM、DeepseekCoder等中文大语言模型
> 2024.1.18: 更新3.70版本,支持Mermaid绘图库让大模型绘制脑图
-> 2024.1.17: 恭迎GLM4,全力支持Qwen、GLM、DeepseekCoder等国内中文大语言基座模型
-> 2024.1.17: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
<br>
@@ -253,8 +252,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
# Advanced Usage
### I自定义新的便捷按钮学术快捷键
-任意文本编辑器打开`core_functional.py`,添加如下条目,然后重启程序。(如果按钮已存在,那么可以直接修改(前缀、后缀都已支持热修改),无需重启程序即可生效。)
-例如
+现在已可以通过UI中的`界面外观`菜单中的`自定义菜单`添加新的便捷按钮。如果需要在代码中定义,请使用任意文本编辑器打开`core_functional.py`,添加如下条目即可:
```python
"超级英译中": {

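    # A hypothetical completion of the truncated snippet above (not part of this diff):
    # the field names follow the "Prefix"/"Suffix"/"Visible"/"PreProcess" keys visible in
    # the core_functional.py diff further down this page; the prompt text is illustrative only.
    "Prefix": "请把下面的英文段落翻译成地道的中文,保留术语原文:\n\n",  # prepended before the user input
    "Suffix": "",           # appended after the user input, e.g. to close a quote opened in Prefix
    "Visible": True,        # whether the button is shown in the UI by default
    "PreProcess": None,     # optional preprocessing hook applied to the input first
},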
View file

@@ -47,7 +47,7 @@ def backup_and_download(current_version, remote_version):
shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
proxies = get_conf('proxies')
try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
-except: r = requests.get('https://public.gpt-academic.top/publish/master.zip', proxies=proxies, stream=True)
+except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
zip_file_path = backup_dir+'/master.zip'
with open(zip_file_path, 'wb+') as f:
f.write(r.content)
@@ -81,7 +81,7 @@ def patch_and_restart(path):
dir_util.copy_tree(path_new_version, './')
print亮绿('代码已经更新,即将更新pip包依赖……')
for i in reversed(range(5)): time.sleep(1); print(i)
try:
import subprocess
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
except:
@@ -113,7 +113,7 @@ def auto_update(raise_error=False):
import json
proxies = get_conf('proxies')
try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
-except: response = requests.get("https://public.gpt-academic.top/publish/version", proxies=proxies, timeout=5)
+except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
remote_json_data = json.loads(response.text)
remote_version = remote_json_data['version']
if remote_json_data["show_feature"]:
@@ -159,7 +159,7 @@ def warm_up_modules():
enc.encode("模块预热", disallowed_special=())
enc = model_info["gpt-4"]['tokenizer']
enc.encode("模块预热", disallowed_special=())
def warm_up_vectordb():
print('正在执行一些模块的预热 ...')
from toolbox import ProxyNetworkActivate
@@ -167,7 +167,7 @@ def warm_up_vectordb():
import nltk
with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")
if __name__ == '__main__':
import os
os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染

View file

@@ -3,7 +3,7 @@ from sys import stdout
if platform.system()=="Linux":
    pass
else:
    from colorama import init
    init()

View file

@@ -30,7 +30,33 @@ if USE_PROXY:
else:
proxies = None
-# ------------------------------------ 以下配置可以优化体验, 但大部分场合下并不需要修改 ------------------------------------
+# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
+LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
+AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
+                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
+                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-3-turbo",
+                    "gemini-pro", "chatglm3"
+                    ]
+# --- --- --- ---
+# P.S. 其他可用的模型还包括
+# AVAIL_LLM_MODELS = [
+#   "qianfan", "deepseekcoder",
+#   "spark", "sparkv2", "sparkv3", "sparkv3.5",
+#   "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
+#   "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
+#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125"
+#   "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
+#   "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
+#   "yi-34b-chat-0205", "yi-34b-chat-200k"
+# ]
+# --- --- --- ---
+# 此外,为了更灵活地接入one-api多模型管理界面,您还可以在接入one-api时,
+# 使用"one-api-*"前缀直接使用非标准方式接入的模型,例如
+# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)"]
+# --- --- --- ---
+# --------------- 以下配置可以优化体验 ---------------
# 重新URL重新定向,实现更换API_URL的作用高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
@@ -85,20 +111,6 @@ MAX_RETRY = 2
DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
-# 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
-LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
-                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
-                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-3-turbo",
-                    "gemini-pro", "chatglm3", "claude-2"]
-# P.S. 其他可用的模型还包括 [
-# "moss", "qwen-turbo", "qwen-plus", "qwen-max"
-# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
-# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
-# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
-# ]
# 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
@@ -127,6 +139,7 @@ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
# 设置gradio的并行线程数不需要修改
CONCURRENT_COUNT = 100
@@ -144,7 +157,8 @@ ADD_WAIFU = False
AUTHENTICATION = []
-# 如果需要在二级路径下运行(常规情况下,不要修改!!需要配合修改main.py才能生效!
+# 如果需要在二级路径下运行(常规情况下,不要修改!!
+# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
CUSTOM_PATH = "/"
@@ -172,14 +186,8 @@ AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.
AZURE_CFG_ARRAY = {}
-# 使用Newbing (不推荐使用,未来将删除)
-NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
-NEWBING_COOKIES = """
-put your new bing cookies here
-"""
-# 阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
+# 阿里云实时语音识别 配置难度较高
+# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
ENABLE_AUDIO = False
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
@@ -198,16 +206,18 @@ ZHIPUAI_API_KEY = ""
ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
-# # 火山引擎YUNQUE大模型
-# YUNQUE_SECRET_KEY = ""
-# YUNQUE_ACCESS_KEY = ""
-# YUNQUE_MODEL = ""
# Claude API KEY
ANTHROPIC_API_KEY = ""
+# 月之暗面 API KEY
+MOONSHOT_API_KEY = ""
+# 零一万物(Yi Model) API KEY
+YIMODEL_API_KEY = ""
# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
MATHPIX_APPID = ""
MATHPIX_APPKEY = ""
@@ -266,7 +276,11 @@ PLUGIN_HOT_RELOAD = False
# 自定义按钮的最大数量限制
NUM_CUSTOM_BASIC_BTN = 4
"""
+--------------- 配置关联关系说明 ---------------
在线大模型配置关联关系示意图
├── "gpt-3.5-turbo" 等openai模型
@@ -290,7 +304,7 @@ NUM_CUSTOM_BASIC_BTN = 4
│   ├── XFYUN_API_SECRET
│   └── XFYUN_API_KEY
-├── "claude-1-100k" 等claude模型
+├── "claude-3-opus-20240229" 等claude模型
│   └── ANTHROPIC_API_KEY
├── "stack-claude"
@@ -305,15 +319,19 @@ NUM_CUSTOM_BASIC_BTN = 4
├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
│   └── ZHIPUAI_API_KEY
+├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
+│   └── YIMODEL_API_KEY
├── "qwen-turbo" 等通义千问大模型
│   └── DASHSCOPE_API_KEY
├── "Gemini"
│   └── GEMINI_API_KEY
-└── "newbing" Newbing接口不再稳定,不推荐使用
-    ├── NEWBING_STYLE
-    └── NEWBING_COOKIES
+└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
+    ├── AVAIL_LLM_MODELS
+    ├── API_KEY
+    └── API_URL_REDIRECT
本地大模型示意图
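For orientation (not part of the diff): a minimal config.py sketch of the one-api hookup that the updated dependency tree above describes — the model entry goes into AVAIL_LLM_MODELS, the key into API_KEY, and the gateway address into API_URL_REDIRECT. The URL and key below are placeholders.

```python
# Hypothetical override of config.py values; only the key names are taken from this diff,
# the concrete values are placeholders for illustration.
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "one-api-claude-3-sonnet-20240229(max_token=100000)"]
API_KEY = "sk-xxxxxxxxxxxxxxxx"   # token issued by your one-api deployment (placeholder)
API_URL_REDIRECT = {
    # route OpenAI-style chat requests to the one-api gateway (placeholder address)
    "https://api.openai.com/v1/chat/completions": "http://localhost:3000/v1/chat/completions",
}
```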

View file

@@ -34,16 +34,16 @@ def get_core_functions():
# [6] 文本预处理 (可选参数,默认 None,举例写个函数移除所有的换行符
"PreProcess": None,
},
"总结绘制脑图": {
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
-"Prefix": r"",
+"Prefix": '''"""\n\n''',
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
"Suffix":
# dedent() 函数用于去除多行字符串的缩进
-dedent("\n"+r'''
-==============================
+dedent("\n\n"+r'''
+"""
使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如
@@ -57,15 +57,15 @@ def get_core_functions():
C --> |"箭头名2"| F["节点名6"]
```
-警告
+注意
1使用中文
2节点名字使用引号包裹,如["Laptop"]
3`|` 和 `"`之间不要存在空格
4根据情况选择flowchart LR从左到右或者flowchart TD从上到下
'''),
},
"查找语法错误": {
"Prefix": r"Help me ensure that the grammar and the spelling is correct. "
r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
@@ -85,14 +85,14 @@ def get_core_functions():
"Suffix": r"",
"PreProcess": clear_line_break, # 预处理:清除换行符
},
"中译英": {
"Prefix": r"Please translate following sentence to English:" + "\n\n",
"Suffix": r"",
},
"学术英中互译": {
"Prefix": build_gpt_academic_masked_string_langbased(
text_show_chinese=
@@ -112,29 +112,29 @@ def get_core_functions():
) + "\n\n",
"Suffix": r"",
},
"英译中": {
"Prefix": r"翻译成地道的中文:" + "\n\n",
"Suffix": r"",
"Visible": False,
},
"找图片": {
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片" + "\n\n",
"Suffix": r"",
"Visible": False,
},
"解释代码": {
"Prefix": r"请解释以下代码:" + "\n```\n",
"Suffix": "\n```\n",
},
"参考文献转Bib": {
"Prefix": r"Here are some bibliography items, please transform them into bibtex style."
r"Note that, reference styles maybe more than one kind, you should transform each item correctly."

View file

@@ -46,7 +46,7 @@ class PaperFileGroup():
manifest.append(path + '.polish.tex')
f.write(res)
return manifest
def zip_result(self):
import os, time
folder = os.path.dirname(self.file_paths[0])
@@ -59,7 +59,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
# <-------- 读取Latex文件,删除其中的所有注释 ---------->
pfg = PaperFileGroup()
for index, fp in enumerate(file_manifest):
@@ -73,31 +73,31 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
pfg.file_paths.append(fp)
pfg.file_contents.append(clean_tex_content)
# <-------- 拆分过长的latex文件 ---------->
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
# <-------- 多线程润色开始 ---------->
if language == 'en':
if mode == 'polish':
-inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " +
-"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
+inputs_array = [r"Below is a section from an academic paper, polish this section to meet the academic standard, " +
+r"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
else:
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
r"Answer me only with the revised text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
elif language == 'zh':
if mode == 'polish':
-inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式" +
+inputs_array = [r"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
else:
-inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式" +
+inputs_array = [r"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
@@ -113,7 +113,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
scroller_max_len = 80
)
# <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ---------->
try:
pfg.sp_file_result = []
for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
@@ -124,7 +124,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
except:
print(trimmed_format_exc())
# <-------- 整理结果,退出 ---------->
create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)

View file

@@ -39,7 +39,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
import time, os, re
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
# <-------- 读取Latex文件,删除其中的所有注释 ---------->
pfg = PaperFileGroup()
for index, fp in enumerate(file_manifest):
@@ -53,11 +53,11 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
pfg.file_paths.append(fp)
pfg.file_contents.append(clean_tex_content)
# <-------- 拆分过长的latex文件 ---------->
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
# <-------- 抽取摘要 ---------->
# if language == 'en':
# abs_extract_inputs = f"Please write an abstract for this paper"
@@ -70,14 +70,14 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
# sys_prompt="Your job is to collect information from materials。",
# )
# <-------- 多线程润色开始 ---------->
if language == 'en->zh':
inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
elif language == 'zh->en':
inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
@@ -93,7 +93,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
scroller_max_len = 80
)
# <-------- 整理结果,退出 ---------->
create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
res = write_history_to_file(gpt_response_collection, create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)

View file

@@ -1,4 +1,4 @@
-from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
+from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_upload, map_file_to_sha256
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, json, tarfile
@@ -40,7 +40,7 @@ def switch_prompt(pfg, mode, more_requirement):
def desend_to_extracted_folder_if_exist(project_folder):
"""
Descend into the extracted folder if it exists, otherwise return the original folder.
Args:
@@ -56,7 +56,7 @@ def desend_to_extracted_folder_if_exist(project_folder):
def move_project(project_folder, arxiv_id=None):
"""
Create a new work folder and copy the project folder to it.
Args:
@@ -112,9 +112,9 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt[:10]
if not txt.startswith('https://arxiv.org'):
return txt, None # 是本地文件,跳过下载
# <-------------- inspect format ------------->
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
yield from update_ui(chatbot=chatbot, history=history)
@@ -214,7 +214,7 @@ def pdf2tex_project(pdf_file_path):
return None
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
@@ -291,7 +291,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
@@ -326,7 +326,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
except tarfile.ReadError as e:
yield from update_ui_lastest_msg(
"无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
chatbot=chatbot, history=history)
return
@@ -385,7 +385,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
@@ -438,47 +438,101 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
-    # <-------------- convert pdf into tex ------------->
-    project_folder = pdf2tex_project(file_manifest[0])
-    # Translate English Latex to Chinese Latex, and compile it
-    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
-    if len(file_manifest) == 0:
-        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-    # <-------------- if is a zip/tar file ------------->
-    project_folder = desend_to_extracted_folder_if_exist(project_folder)
-    # <-------------- move latex project away from temp folder ------------->
-    project_folder = move_project(project_folder)
-    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
-    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
-        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
-                                chatbot, history, system_prompt, mode='translate_zh',
-                                switch_prompt=_switch_prompt_)
-    # <-------------- compile PDF ------------->
-    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
-                                 main_file_modified='merge_translate_zh', mode='translate_zh',
-                                 work_folder_original=project_folder, work_folder_modified=project_folder,
-                                 work_folder=project_folder)
-    # <-------------- zip PDF ------------->
-    zip_res = zip_result(project_folder)
-    if success:
-        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
-        yield from update_ui(chatbot=chatbot, history=history);
-        time.sleep(1) # 刷新界面
-        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
-    else:
-        chatbot.append((f"失败了",
-            '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
-        yield from update_ui(chatbot=chatbot, history=history);
-        time.sleep(1) # 刷新界面
-        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
-    # <-------------- we are done ------------->
-    return success
+    hash_tag = map_file_to_sha256(file_manifest[0])
+    # <-------------- check repeated pdf ------------->
+    chatbot.append([f"检查PDF是否被重复上传", "正在检查..."])
+    yield from update_ui(chatbot=chatbot, history=history)
+    repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)
+    except_flag = False
+    if repeat:
+        yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
+        try:
+            trans_html_file = [f for f in glob.glob(f'{project_folder}/**/*.trans.html', recursive=True)][0]
+            promote_file_to_downloadzone(trans_html_file, rename_file=None, chatbot=chatbot)
+            translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
+            promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
+            comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
+            promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)
+            zip_res = zip_result(project_folder)
+            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
+            return True
+        except:
+            report_exception(chatbot, history, b=f"发现重复上传,但是无法找到相关文件")
+            yield from update_ui(chatbot=chatbot, history=history)
+            chatbot.append([f"没有相关文件", '尝试重新翻译PDF...'])
+            yield from update_ui(chatbot=chatbot, history=history)
+            except_flag = True
+    elif not repeat or except_flag:
+        yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)
+        # <-------------- convert pdf into tex ------------->
+        chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
+        yield from update_ui(chatbot=chatbot, history=history)
+        project_folder = pdf2tex_project(file_manifest[0])
+        if project_folder is None:
+            report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"PDF转换为tex项目失败")
+            yield from update_ui(chatbot=chatbot, history=history)
+            return False
+        # <-------------- translate latex file into Chinese ------------->
+        yield from update_ui_lastest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
+        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
+        if len(file_manifest) == 0:
+            report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
+            yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+            return
+        # <-------------- if is a zip/tar file ------------->
+        project_folder = desend_to_extracted_folder_if_exist(project_folder)
+        # <-------------- move latex project away from temp folder ------------->
+        project_folder = move_project(project_folder)
+        # <-------------- set a hash tag for repeat-checking ------------->
+        with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
+            f.write(hash_tag)
+            f.close()
+        # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
+        if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
+            yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
+                                    chatbot, history, system_prompt, mode='translate_zh',
+                                    switch_prompt=_switch_prompt_)
+        # <-------------- compile PDF ------------->
+        yield from update_ui_lastest_msg("正在将翻译好的项目tex项目编译为PDF...", chatbot=chatbot, history=history)
+        success = yield from 编译Latex(chatbot, history, main_file_original='merge',
+                                     main_file_modified='merge_translate_zh', mode='translate_zh',
+                                     work_folder_original=project_folder, work_folder_modified=project_folder,
+                                     work_folder=project_folder)
+        # <-------------- zip PDF ------------->
+        zip_res = zip_result(project_folder)
+        if success:
+            chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
+            yield from update_ui(chatbot=chatbot, history=history);
+            time.sleep(1) # 刷新界面
+            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
+        else:
+            chatbot.append((f"失败了",
+                '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
+            yield from update_ui(chatbot=chatbot, history=history);
+            time.sleep(1) # 刷新界面
+            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
+        # <-------------- we are done ------------->
+        return success
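The toolbox helpers imported at the top of this file (map_file_to_sha256, check_repeat_upload) are not shown in this diff. The sketch below is an assumption about their behavior, inferred only from how the new code uses them: a content hash of the uploaded PDF, plus the `<hash>.tag` marker written in the "set a hash tag for repeat-checking" step. The search root is a placeholder.

```python
import glob, hashlib, os

def map_file_to_sha256(path, chunk_size=1 << 20):
    # Assumed behavior: hash the uploaded file so identical uploads map to the same tag.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def check_repeat_upload(pdf_path, hash_tag, search_root='private_upload'):
    # Assumed behavior: a previous run left a "<hash_tag>.tag" marker in its work folder;
    # finding one means the same PDF was processed before and that folder can be reused.
    for tag in glob.glob(f'{search_root}/**/{hash_tag}.tag', recursive=True):
        return True, os.path.dirname(tag)
    return False, None
```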

View file

@@ -72,7 +72,7 @@ class PluginMultiprocessManager:
if file_type.lower() in ['png', 'jpg']:
image_path = os.path.abspath(fp)
self.chatbot.append([
'检测到新生图像:',
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
])
yield from update_ui(chatbot=self.chatbot, history=self.history)
@@ -114,21 +114,21 @@ class PluginMultiprocessManager:
self.cnt = 1
self.parent_conn = self.launch_subprocess_with_pipe() # ⭐⭐⭐
repeated, cmd_to_autogen = self.send_command(txt)
if txt == 'exit':
self.chatbot.append([f"结束", "结束信号已明确,终止AutoGen程序。"])
yield from update_ui(chatbot=self.chatbot, history=self.history)
self.terminate()
return "terminate"
# patience = 10
while True:
time.sleep(0.5)
if not self.alive:
# the heartbeat watchdog might have it killed
self.terminate()
return "terminate"
if self.parent_conn.poll():
self.feed_heartbeat_watchdog()
if "[GPT-Academic] 等待中" in self.chatbot[-1][-1]:
self.chatbot.pop(-1) # remove the last line
@@ -152,8 +152,8 @@ class PluginMultiprocessManager:
yield from update_ui(chatbot=self.chatbot, history=self.history)
if msg.cmd == "interact":
yield from self.overwatch_workdir_file_change()
self.chatbot.append([f"程序抵达用户反馈节点.", msg.content +
"\n\n等待您的进一步指令." +
"\n\n(1) 一般情况下您不需要说什么, 清空输入区, 然后直接点击“提交”以继续. " +
"\n\n(2) 如果您需要补充些什么, 输入要反馈的内容, 直接点击“提交”以继续. " +
"\n\n(3) 如果您想终止程序, 输入exit, 直接点击“提交”以终止AutoGen并解锁. "

View file

@@ -8,7 +8,7 @@ class WatchDog():
self.interval = interval
self.msg = msg
self.kill_dog = False
def watch(self):
while True:
if self.kill_dog: break

View file

@@ -46,7 +46,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
@@ -69,7 +69,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
sys_prompt_array=[arguments.system_prompt for _ in (batch)],
max_workers=10 # OpenAI所允许的最大并行过载
)
with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
for b, r in zip(batch, res[1::2]):
f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')
@@ -95,12 +95,12 @@ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
arguments = string_to_options(arguments=args)
pre_seq_len = arguments.pre_seq_len # 128

View file

@@ -135,13 +135,25 @@ def request_gpt_model_in_new_thread_with_ui_alive(
yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息
return final_result
-def can_multi_process(llm):
-    if llm.startswith('gpt-'): return True
-    if llm.startswith('api2d-'): return True
-    if llm.startswith('azure-'): return True
-    if llm.startswith('spark'): return True
-    if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
-    return False
+def can_multi_process(llm) -> bool:
+    from request_llms.bridge_all import model_info
+
+    def default_condition(llm) -> bool:
+        # legacy condition
+        if llm.startswith('gpt-'): return True
+        if llm.startswith('api2d-'): return True
+        if llm.startswith('azure-'): return True
+        if llm.startswith('spark'): return True
+        if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
+        return False
+
+    if llm in model_info:
+        if 'can_multi_thread' in model_info[llm]:
+            return model_info[llm]['can_multi_thread']
+        else:
+            return default_condition(llm)
+    else:
+        return default_condition(llm)
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_show_user_array, llm_kwargs,
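For reference (not part of the diff): a sketch of how a model entry could declare the `can_multi_thread` attribute that the patched can_multi_process() now reads from request_llms.bridge_all.model_info. Every field value below is a placeholder; only the "can_multi_thread" key is what the new code actually consults.

```python
# Hypothetical model_info entry (placeholder values) opting a backend into multi-threaded requests.
model_info = {
    "moonshot-v1-8k": {
        "fn_with_ui": None,        # placeholder: streaming chat handler
        "fn_without_ui": None,     # placeholder: no-UI handler
        "endpoint": None,          # placeholder: API endpoint
        "max_token": 8192,
        "can_multi_thread": True,  # read by can_multi_process() before the legacy prefix check
    },
}
```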

View file

@@ -10,7 +10,7 @@ class FileNode:
self.parenting_ship = []
self.comment = ""
self.comment_maxlen_show = 50
@staticmethod
def add_linebreaks_at_spaces(string, interval=10):
return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))

View file

@@ -8,7 +8,7 @@ import random
class MiniGame_ASCII_Art(GptAcademicGameBaseState):
def step(self, prompt, chatbot, history):
if self.step_cnt == 0:
chatbot.append(["我画你猜(动物)", "请稍等..."])
else:
if prompt.strip() == 'exit':

View file

@@ -88,8 +88,8 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
self.story = []
chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动,一起写故事,因此你每次写的故事段落应少于300字结局除外'
def generate_story_image(self, story_paragraph):
try:
from crazy_functions.图片生成 import gen_image
@@ -98,13 +98,13 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
return f'<br/><div align="center"><img src="file={image_path}"></div>'
except:
return ''
def step(self, prompt, chatbot, history):
"""
首先,处理游戏初始化等特殊情况
"""
if self.step_cnt == 0:
self.begin_game_step_0(prompt, chatbot, history)
self.lock_plugin(chatbot)
self.cur_task = 'head_start'
@@ -132,7 +132,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
inputs_ = prompts_hs.format(headstart=self.headstart)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '故事开头', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
@@ -147,7 +147,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
@@ -166,7 +166,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
@@ -181,10 +181,10 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_,
'请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
)
self.cur_task = 'user_choice'
@@ -200,7 +200,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
# # 配图

View file

@@ -5,7 +5,7 @@ def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return "```" + matches[0] + "```" # code block
raise RuntimeError("GPT is not generating proper code.")
@@ -13,10 +13,10 @@ def is_same_thing(a, b, llm_kwargs):
from pydantic import BaseModel, Field
class IsSameThing(BaseModel):
is_same_thing: bool = Field(description="determine whether two objects are same thing.", default=False)
def run_gpt_fn(inputs, sys_prompt, history=[]):
return predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs,
history=history, sys_prompt=sys_prompt, observe_window=[]
)
@@ -24,7 +24,7 @@ def is_same_thing(a, b, llm_kwargs):
inputs_01 = "Identity whether the user input and the target is the same thing: \n target object: {a} \n user input object: {b} \n\n\n".format(a=a, b=b)
inputs_01 += "\n\n\n Note that the user may describe the target object with a different language, e.g. cat and 猫 are the same thing."
analyze_res_cot_01 = run_gpt_fn(inputs_01, "", [])
inputs_02 = inputs_01 + gpt_json_io.format_instructions
analyze_res = run_gpt_fn(inputs_02, "", [inputs_01, analyze_res_cot_01])

View file

@@ -41,11 +41,11 @@ def is_function_successfully_generated(fn_path, class_name, return_dict):
# Now you can create an instance of the class
instance = some_class()
return_dict['success'] = True
return
except:
return_dict['traceback'] = trimmed_format_exc()
return
def subprocess_worker(code, file_path, return_dict):
return_dict['result'] = None
return_dict['success'] = False

View file

@@ -1,4 +1,4 @@
import platform
import pickle
import multiprocessing

View file

@@ -89,7 +89,7 @@ class GptJsonIO():
error + "\n\n" + \
"Now, fix this json string. \n\n"
return prompt
def generate_output_auto_repair(self, response, gpt_gen_fn):
"""
response: string containing canidate json

View file

@@ -90,16 +90,16 @@ class LatexPaperSplit():
"版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
"项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
# 请您不要删除或修改这行警告,除非您是论文的原作者如果您是论文原作者,欢迎加REAME中的QQ联系开发者
self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
self.title = "unknown"
self.abstract = "unknown"
def read_title_and_abstract(self, txt):
try:
title, abstract = find_title_and_abs(txt)
if title is not None:
self.title = title.replace('\n', ' ').replace('\\\\', ' ').replace(' ', '').replace(' ', '')
if abstract is not None:
self.abstract = abstract.replace('\n', ' ').replace('\\\\', ' ').replace(' ', '').replace(' ', '')
except:
pass
@@ -111,7 +111,7 @@ class LatexPaperSplit():
result_string = ""
node_cnt = 0
line_cnt = 0
for node in self.nodes:
if node.preserve:
line_cnt += node.string.count('\n')
@@ -144,7 +144,7 @@ class LatexPaperSplit():
return result_string
def split(self, txt, project_folder, opts):
"""
break down latex file to a linked list,
each node use a preserve flag to indicate whether it should
@@ -155,7 +155,7 @@ class LatexPaperSplit():
manager = multiprocessing.Manager()
return_dict = manager.dict()
p = multiprocessing.Process(
target=split_subprocess,
args=(txt, project_folder, return_dict, opts))
p.start()
p.join()
@@ -217,13 +217,13 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
# <-------- 寻找主tex文件 ---------->
maintex = find_main_tex_file(file_manifest, mode)
chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
time.sleep(3)
# <-------- 读取Latex文件, 将多文件tex工程融合为一个巨型tex ---------->
main_tex_basename = os.path.basename(maintex)
assert main_tex_basename.endswith('.tex')
main_tex_basename_bare = main_tex_basename[:-4]
@@ -240,13 +240,13 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
f.write(merged_content)
# <-------- 精细切分latex文件 ---------->
chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
lps = LatexPaperSplit()
lps.read_title_and_abstract(merged_content)
res = lps.split(merged_content, project_folder, opts) # 消耗时间的函数
# <-------- 拆分过长的latex片段 ---------->
pfg = LatexPaperFileGroup()
for index, r in enumerate(res):
pfg.file_paths.append('segment-' + str(index))
@@ -255,17 +255,17 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
# <-------- 根据需要切换prompt ---------->
inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]
if os.path.exists(pj(project_folder,'temp.pkl')):
# <-------- 【仅调试】如果存在调试缓存文件,则跳过GPT请求环节 ---------->
pfg = objload(file=pj(project_folder,'temp.pkl'))
else:
# <-------- gpt 多线程请求 ---------->
history_array = [[""] for _ in range(n_split)]
# LATEX_EXPERIMENTAL, = get_conf('LATEX_EXPERIMENTAL')
# if LATEX_EXPERIMENTAL:
@@ -284,32 +284,32 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
scroller_max_len = 40
)
# <-------- 文本碎片重组为完整的tex片段 ---------->
pfg.sp_file_result = []
for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
pfg.sp_file_result.append(gpt_say)
pfg.merge_result()
# <-------- 临时存储用于调试 ---------->
pfg.get_token_num = None
objdump(pfg, file=pj(project_folder,'temp.pkl'))
write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
# <-------- 写出文件 ---------->
msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}"
final_tex = lps.merge_result(pfg.file_result, mode, msg)
objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))
with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)
# <-------- 整理结果, 退出 ---------->
chatbot.append((f"完成了吗?", 'GPT结果已输出, 即将编译PDF'))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------- 返回 ---------->
return project_folder + f'/merge_{mode}.tex'
@@ -362,7 +362,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
# 只有第二步成功,才能继续下面的步骤
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面
@@ -393,9 +393,9 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
results_ += f"原始PDF编译是否成功: {original_pdf_success};"
results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
yield from update_ui_lastest_msg(f'{n_fix}编译结束:<br/>{results_}...', chatbot, history) # 刷新Gradio前端界面
if diff_pdf_success:
@@ -409,7 +409,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
# 将两个PDF拼接
if original_pdf_success:
try:
from .latex_toolbox import merge_pdfs
concat_pdf = pj(work_folder_modified, f'comparison.pdf')
@@ -425,7 +425,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
if n_fix>=max_try: break
n_fix += 1
can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
tex_name=f'{main_file_modified}.tex',
tex_name_pure=f'{main_file_modified}',
@@ -445,14 +445,14 @@ def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
import shutil
from crazy_functions.pdf_fns.report_gen_html import construct_html
from toolbox import gen_time_str
ch = construct_html()
orig = ""
trans = ""
final = []
for c,r in zip(sp_file_contents, sp_file_result):
final.append(c)
final.append(r)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
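The temp.pkl logic above is a debug shortcut: once the expensive multi-threaded GPT pass has run, its result object is dumped to disk and reloaded on later runs so the request stage can be skipped. A minimal sketch of that cache pattern, assuming simple pickle-backed objdump/objload helpers (sketched here; the repository's own helpers and payload may differ):

```python
import os
import pickle

def objdump(obj, file='debug_cache.pkl'):
    with open(file, 'wb') as f:
        pickle.dump(obj, f)

def objload(file='debug_cache.pkl'):
    with open(file, 'rb') as f:
        return pickle.load(f)

def expensive_stage():
    return {"segments": ["..."], "results": ["..."]}   # stand-in for the GPT round trip

cache = 'temp.pkl'
if os.path.exists(cache):
    pfg = objload(file=cache)        # debug only: skip the GPT requests entirely
else:
    pfg = expensive_stage()
    objdump(pfg, file=cache)         # persist so the next run can reuse the result
```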

@@ -85,8 +85,8 @@ def write_numpy_to_wave(filename, rate, data, add_header=False):
def is_speaker_speaking(vad, data, sample_rate):
# Function to detect if the speaker is speaking
# The WebRTC VAD only accepts 16-bit mono PCM audio,
# sampled at 8000, 16000, 32000 or 48000 Hz.
# A frame must be either 10, 20, or 30 ms in duration:
frame_duration = 30
n_bit_each = int(sample_rate * frame_duration / 1000)*2 # x2 because audio is 16 bit (2 bytes)
@@ -94,7 +94,7 @@ def is_speaker_speaking(vad, data, sample_rate):
for t in range(len(data)):
if t!=0 and t % n_bit_each == 0:
res_list.append(vad.is_speech(data[t-n_bit_each:t], sample_rate))
info = ''.join(['^' if r else '.' for r in res_list])
info = info[:10]
if any(res_list):
@@ -186,10 +186,10 @@ class AliyunASR():
keep_alive_last_send_time = time.time()
while not self.stop:
# time.sleep(self.capture_interval)
audio = rad.read(uuid.hex)
if audio is not None:
# convert to pcm file
temp_file = f'{temp_folder}/{uuid.hex}.pcm' #
dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE) # 48000 --> 16000
write_numpy_to_wave(temp_file, NEW_SAMPLERATE, dsdata)
# read pcm binary
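The frame-size arithmetic above matters because WebRTC VAD rejects frames of the wrong length: for 16-bit mono PCM, a 30 ms frame at 16000 Hz is 480 samples, i.e. 960 bytes. A small sketch of slicing a byte buffer into such frames and classifying them (the buffer is synthetic silence and the loop shape is illustrative; vad.is_speech(frame, sample_rate) is the standard webrtcvad call):

```python
import webrtcvad

sample_rate = 16000                     # one of the rates WebRTC VAD accepts
frame_duration_ms = 30                  # frames must be 10, 20 or 30 ms
bytes_per_frame = int(sample_rate * frame_duration_ms / 1000) * 2   # *2: 16-bit = 2 bytes/sample

vad = webrtcvad.Vad(2)                  # aggressiveness 0..3
data = b"\x00\x00" * (sample_rate // 1000 * frame_duration_ms) * 5  # 5 frames of silence

flags = []
for t in range(0, len(data), bytes_per_frame):
    frame = data[t:t + bytes_per_frame]
    if len(frame) == bytes_per_frame:
        flags.append(vad.is_speech(frame, sample_rate))

print(''.join('^' if f else '.' for f in flags))   # e.g. "....." for pure silence
```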

@@ -3,12 +3,12 @@ from scipy import interpolate
def Singleton(cls):
_instance = {}
def _singleton(*args, **kargs):
if cls not in _instance:
_instance[cls] = cls(*args, **kargs)
return _instance[cls]
return _singleton
@@ -39,7 +39,7 @@ class RealtimeAudioDistribution():
else:
res = None
return res
def change_sample_rate(audio, old_sr, new_sr):
duration = audio.shape[0] / old_sr
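The Singleton decorator above caches one instance per class in a module-level dict, so every later construction returns the same object. A short usage sketch (the decorated class name is illustrative):

```python
def Singleton(cls):
    _instance = {}
    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)   # construct once, then always reuse
        return _instance[cls]
    return _singleton

@Singleton
class RealtimeAudioBuffer:
    def __init__(self):
        self.data = {}

a = RealtimeAudioBuffer()
b = RealtimeAudioBuffer()
assert a is b          # both names point at the single cached instance
```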

@@ -40,7 +40,7 @@ class GptAcademicState():
class GptAcademicGameBaseState():
"""
1. first init: __init__ ->
"""
def init_game(self, chatbot, lock_plugin):
self.plugin_name = None
@@ -53,7 +53,7 @@ class GptAcademicGameBaseState():
raise ValueError("callback_fn is None")
chatbot._cookies['lock_plugin'] = self.callback_fn
self.dump_state(chatbot)
def get_plugin_name(self):
if self.plugin_name is None:
raise ValueError("plugin_name is None")
@@ -71,7 +71,7 @@ class GptAcademicGameBaseState():
state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
if state is not None:
state = pickle.loads(state)
else:
state = cls()
state.init_game(chatbot, lock_plugin)
state.plugin_name = plugin_name
@@ -79,7 +79,7 @@ class GptAcademicGameBaseState():
state.chatbot = chatbot
state.callback_fn = callback_fn
return state
def continue_game(self, prompt, chatbot, history):
# 游戏主体
yield from self.step(prompt, chatbot, history)
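sync_state above rebuilds a per-plugin state object from the chatbot cookies: if a pickled blob is stored under plugin_state/&lt;name&gt; it is unpickled, otherwise a fresh state is initialised and later dumped back. A minimal sketch of that round trip, with a plain dict standing in for the cookie store (class and helper names are illustrative):

```python
import pickle

class GameState:
    def __init__(self):
        self.step_cnt = 0

def sync_state(cookies: dict, plugin_name: str) -> GameState:
    blob = cookies.get(f'plugin_state/{plugin_name}')
    return pickle.loads(blob) if blob is not None else GameState()

def dump_state(cookies: dict, plugin_name: str, state: GameState) -> None:
    cookies[f'plugin_state/{plugin_name}'] = pickle.dumps(state)

cookies = {}
s = sync_state(cookies, 'MiniGame_ResumeStory')   # first call: fresh state
s.step_cnt += 1
dump_state(cookies, 'MiniGame_ResumeStory', s)
print(sync_state(cookies, 'MiniGame_ResumeStory').step_cnt)   # 1, restored from the cookie
```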

@@ -35,7 +35,7 @@ def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=F
remain_txt_to_cut_storage = ""
# 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
while True:
if get_token_fn(remain_txt_to_cut) <= limit:
# 如果剩余文本的token数小于限制,那么就不用切了
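The cut helper above repeatedly peels a prefix off the remaining text until what is left fits under the token limit. A toy sketch of that greedy loop, using character count as a stand-in for get_token_fn and newline as the only break point (the real plugin counts tokens with the model's tokenizer and has extra storage/break rules):

```python
def cut(limit, get_token_fn, txt_tocut, sep='\n'):
    """Greedily split `txt_tocut` into pieces whose token count stays under `limit`."""
    pieces, remain = [], txt_tocut
    while get_token_fn(remain) > limit:
        lines = remain.split(sep)
        head, idx = '', 0
        # grow the head line by line while it still fits under the limit
        while idx < len(lines) and get_token_fn(head + lines[idx] + sep) <= limit:
            head += lines[idx] + sep
            idx += 1
        if not head:                               # a single line exceeds the limit: hard cut
            head, remain = remain[:limit], remain[limit:]
        else:
            remain = sep.join(lines[idx:])
        pieces.append(head)
    pieces.append(remain)
    return pieces

print(cut(12, len, "alpha\nbeta\ngamma\ndelta\nepsilon"))
# ['alpha\nbeta\n', 'gamma\ndelta\n', 'epsilon']
```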

@@ -64,8 +64,8 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat
# 再做一个小修改重新修改当前part的标题,默认用英文的
cur_value += value
translated_res_array.append(cur_value)
res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + translated_res_array,
file_basename = f"{gen_time_str()}-translated_only.md",
file_fullname = None,
auto_caption = False)
promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
@@ -144,11 +144,11 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files)
# -=-=-=-=-=-=-=-= 写出HTML文件 -=-=-=-=-=-=-=-=
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = inputs_show_user_array[i//2]
else:
@@ -159,7 +159,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
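gpt_response_collection interleaves prompts and replies, so even indices hold the original fragments and odd indices hold the translations; the HTML report walks the list in that parity. A small sketch of pairing such an interleaved list back into (original, translation) rows (function name and sample strings are illustrative):

```python
def pair_up(interleaved):
    """[orig0, trans0, orig1, trans1, ...] -> [(orig0, trans0), (orig1, trans1), ...]"""
    rows = []
    for i, k in enumerate(interleaved):
        if i % 2 == 0:
            orig = k                    # even index: the original fragment
        else:
            rows.append((orig, k))      # odd index: its translation, close the pair
    return rows

print(pair_up(["Hello", "你好", "World", "世界"]))
# [('Hello', '你好'), ('World', '世界')]
```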

@@ -22,10 +22,10 @@ def extract_text_from_files(txt, chatbot, history):
file_manifest = []
excption = ""
if txt == "":
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
#查找输入区内容中的文件
file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
@@ -35,12 +35,12 @@ def extract_text_from_files(txt, chatbot, history):
if file_doc:
excption = "word"
return False, final_result, page_one, file_manifest, excption
file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
if file_num == 0:
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
if file_pdf:
try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
import fitz
@@ -61,7 +61,7 @@ def extract_text_from_files(txt, chatbot, history):
file_content = f.read()
file_content = file_content.encode('utf-8', 'ignore').decode()
headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE) #接下来提取md中的一级/二级标题作为摘要
if len(headers) > 0:
page_one.append("\n".join(headers)) #合并所有的标题,以换行符分割
else:
page_one.append("")
@@ -81,5 +81,5 @@ def extract_text_from_files(txt, chatbot, history):
page_one.append(file_content[:200])
final_result.append(file_content)
file_manifest.append(os.path.relpath(fp, folder_word))
return True, final_result, page_one, file_manifest, excption
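The Markdown branch above summarizes each .md file by its headings. A quick sketch of that heading extraction with the same regex (the sample text is illustrative):

```python
import re

file_content = "# Title\nsome text\n# Second section\nmore text\n## subsection\n"
headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE)   # matches "# " level-1 headings
page_one = "\n".join(headers) if headers else ""
print(page_one)   # Title
                  # Second section
```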

@@ -28,7 +28,7 @@ EMBEDDING_DEVICE = "cpu"
# 基于上下文的prompt模版,请务必保留"{question}"和"{context}"
PROMPT_TEMPLATE = """已知信息:
{context}
根据上述已知信息,简洁和专业的来回答用户的问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题” 或 “没有提供足够的相关信息”,不允许在答案中添加编造成分,答案请使用中文。 问题是:{question}"""
@@ -58,7 +58,7 @@ OPEN_CROSS_DOMAIN = False
def similarity_search_with_score_by_vector(
self, embedding: List[float], k: int = 4
) -> List[Tuple[Document, float]]:
def seperate_list(ls: List[int]) -> List[List[int]]:
lists = []
ls1 = [ls[0]]
@@ -200,7 +200,7 @@ class LocalDocQA:
return vs_path, loaded_files
else:
raise RuntimeError("文件加载失败,请检查文件格式是否正确")
def get_loaded_file(self, vs_path):
ds = self.vector_store.docstore
return set([ds._dict[k].metadata['source'].split(vs_path)[-1] for k in ds._dict])
@@ -290,10 +290,10 @@ class knowledge_archive_interface():
self.threadLock.acquire()
# import uuid
self.current_id = id
self.qa_handle, self.kai_path = construct_vector_store(
vs_id=self.current_id,
vs_path=vs_path,
files=file_manifest,
sentence_size=100,
history=[],
one_conent="",
@@ -304,7 +304,7 @@ class knowledge_archive_interface():
def get_current_archive_id(self):
return self.current_id
def get_loaded_file(self, vs_path):
return self.qa_handle.get_loaded_file(vs_path)
@@ -312,10 +312,10 @@ class knowledge_archive_interface():
self.threadLock.acquire()
if not self.current_id == id:
self.current_id = id
self.qa_handle, self.kai_path = construct_vector_store(
vs_id=self.current_id,
vs_path=vs_path,
files=[],
sentence_size=100,
history=[],
one_conent="",
@@ -329,7 +329,7 @@ class knowledge_archive_interface():
query = txt,
vs_path = self.kai_path,
score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
vector_search_top_k=VECTOR_SEARCH_TOP_K,
chunk_conent=True,
chunk_size=CHUNK_SIZE,
text2vec = self.get_chinese_text2vec(),
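PROMPT_TEMPLATE above is the retrieval-augmented prompt: the retrieved passages are substituted for {context} and the user's question for {question} before the model is called. A tiny sketch of that substitution step (the retrieved chunks and question are illustrative):

```python
PROMPT_TEMPLATE = """已知信息:
{context}
根据上述已知信息,简洁和专业的来回答用户的问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题” 或 “没有提供足够的相关信息”,不允许在答案中添加编造成分,答案请使用中文。 问题是:{question}"""

retrieved_chunks = ["GPT Academic 支持 LaTeX 全文翻译。", "翻译结果会输出为 PDF。"]
question = "这个项目能翻译 LaTeX 论文吗?"

prompt = PROMPT_TEMPLATE.format(context="\n".join(retrieved_chunks), question=question)
print(prompt)   # this filled-in string is what gets sent to the LLM
```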

@@ -35,9 +35,9 @@ def get_recent_file_prompt_support(chatbot):
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded['path']
prompt = "\nAdditional Information:\n"
prompt = "In case that this plugin requires a path or a file as argument,"
prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`"
prompt += f"Only use it when necessary, otherwise, you can ignore this file."
return prompt
def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
@@ -82,7 +82,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
msg += "\n但您可以尝试再试一次\n"
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
return
# ⭐ ⭐ ⭐ 确认插件参数
if not have_any_recent_upload_files(chatbot):
appendix_info = ""
@@ -99,7 +99,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
"you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
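The ">> " transformation above quotes every line of the user's requirement so it stands apart from the surrounding instructions and the appended format_instructions. A one-line sketch of that quoting step (the sample requirement is illustrative):

```python
txt = "把这个文件夹里的PDF都翻译一下\n输出中文"
quoted = ">> " + txt.rstrip('\n').replace('\n', '\n>> ')
print(quoted)
# >> 把这个文件夹里的PDF都翻译一下
# >> 输出中文
```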

@@ -10,7 +10,7 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
if not ALLOW_RESET_CONFIG:
yield from update_ui_lastest_msg(
lastmsg=f"当前配置不允许被修改如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
chatbot=chatbot, history=history, delay=2
)
return
@@ -35,7 +35,7 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
@@ -45,11 +45,11 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
ok = (explicit_conf in txt)
if ok:
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
chatbot=chatbot, history=history, delay=1
)
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
chatbot=chatbot, history=history, delay=2
)
@@ -69,7 +69,7 @@ def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history
ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
if not ALLOW_RESET_CONFIG:
yield from update_ui_lastest_msg(
lastmsg=f"当前配置不允许被修改如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
chatbot=chatbot, history=history, delay=2
)
return

@@ -6,7 +6,7 @@ class VoidTerminalState():
def reset_state(self):
self.has_provided_explaination = False
def lock_plugin(self, chatbot):
chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
chatbot._cookies['plugin_state'] = pickle.dumps(self)

@@ -144,8 +144,8 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
try:
import bs4
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
@@ -157,12 +157,12 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
try:
pdf_path, info = download_arxiv_(txt)
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"下载pdf文件未成功")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 翻译摘要等
i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
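The try/except import above is the plugin convention for optional dependencies: instead of crashing, it reports a pip command the user can run. A minimal sketch of that guard, with a plain print standing in for report_exception (the helper name and exit behaviour are illustrative assumptions):

```python
def require(module_name: str, pip_name: str):
    """Import an optional dependency, or tell the user how to install it."""
    try:
        return __import__(module_name)
    except ImportError:
        print(f"导入软件依赖失败。使用该模块需要额外依赖,安装方法 pip install --upgrade {pip_name}")
        return None

bs4 = require("bs4", "beautifulsoup4")
if bs4 is None:
    raise SystemExit(1)   # the real plugin refreshes the UI and returns instead
```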

@@ -12,9 +12,9 @@ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_
# 选择游戏
cls = MiniGame_ResumeStory
# 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
state = cls.sync_state(chatbot,
llm_kwargs,
cls,
plugin_name='MiniGame_ResumeStory',
callback_fn='crazy_functions.互动小游戏->随机小游戏',
lock_plugin=True
@@ -30,9 +30,9 @@ def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system
# 选择游戏
cls = MiniGame_ASCII_Art
# 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
state = cls.sync_state(chatbot,
llm_kwargs,
cls,
plugin_name='MiniGame_ASCII_Art',
callback_fn='crazy_functions.互动小游戏->随机小游戏1',
lock_plugin=True

@@ -38,7 +38,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=inputs, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt="When you want to show an image, use markdown format. e.g. ![image_description](image_url). If there are no image url provided, answer 'no image url provided'"
)
chatbot[-1] = [chatbot[-1][0], gpt_say]

@@ -6,10 +6,10 @@
- 将图像转为灰度图像
- 将csv文件转excel表格
Testing:
- Crop the image, keeping the bottom half.
- Swap the blue channel and red channel of the image.
- Convert the image to grayscale.
- Convert the CSV file to an Excel spreadsheet.
"""
@@ -29,12 +29,12 @@ import multiprocessing
templete = """
```python
import ... # Put dependencies here, e.g. import numpy as np.
class TerminalFunction(object): # Do not change the name of the class, The name of the class must be `TerminalFunction`
def run(self, path): # The name of the function must be `run`, it takes only a positional argument.
# rewrite the function you have just written here
...
return generated_file_path
```
@@ -48,7 +48,7 @@ def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return matches[0].strip('python') # code block
for match in matches:
if 'class TerminalFunction' in match:
@@ -68,8 +68,8 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
# 第一步
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
sys_prompt= r"You are a world-class programmer."
)
history.extend([i_say, gpt_say])
@@ -82,33 +82,33 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
]
i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt= r"You are a programmer. You need to replace `...` with valid packages, do not give `...` in your answer!"
)
code_to_return = gpt_say
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# # 第三步
# i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
# i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=inputs_show_user,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
# # # 第三步
# i_say = "Show me how to use `pip` to install packages to run the code above. "
# i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=i_say,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
installation_advance = ""
return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
@@ -117,7 +117,7 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
def for_immediate_show_off_when_possible(file_type, fp, chatbot):
if file_type in ['png', 'jpg']:
image_path = os.path.abspath(fp)
chatbot.append(['这是一张图片, 展示如下:',
f'本地文件地址: <br/>`{image_path}`<br/>'+
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
])
@@ -177,7 +177,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
yield from update_ui_lastest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
return # 2. 如果没有文件
# 读取文件
file_type = file_list[0].split('.')[-1]
@@ -185,7 +185,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
if is_the_upload_folder(txt):
yield from update_ui_lastest_msg(f"请在输入框内填写需求, 然后再次点击该插件! 至于您的文件,不用担心, 文件路径 {txt} 已经被记忆. ", chatbot, history, 1)
return
# 开始干正事
MAX_TRY = 3
for j in range(MAX_TRY): # 最多重试5次
@@ -238,7 +238,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
# chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 顺利完成,收尾
res = str(res)
if os.path.exists(res):
@@ -248,5 +248,5 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
else:
chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
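One detail worth noting in get_code_block above: str.strip('python') removes any of the characters p, y, t, h, o, n from both ends of the match rather than the literal "python" prefix, which is usually harmless when the block starts with a language tag and ends with a newline. A stricter variant is sketched below as an alternative illustration, not the repository's implementation:

```python
import re

def get_code_block(reply: str) -> str:
    """Return the body of the single ```python fenced block in `reply`."""
    matches = re.findall(r"```([\s\S]*?)```", reply)
    if len(matches) != 1:
        raise RuntimeError("expected exactly one code block")
    body = matches[0]
    # Remove the language tag precisely instead of strip('python'),
    # which strips *characters* from both ends rather than a prefix.
    return re.sub(r"^\s*python\s*\n", "", body, count=1)

reply = "Here you go:\n```python\nprint('hi')\n```"
print(get_code_block(reply))   # print('hi')
```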

@@ -21,8 +21,8 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
i_say = "请写bash命令实现以下功能" + txt
# 开始
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新

@@ -7,7 +7,7 @@ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", qual
from request_llms.bridge_all import model_info
proxies = get_conf('proxies')
# Set up OpenAI API key and model
api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
# 'https://api.openai.com/v1/chat/completions'
@@ -113,7 +113,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
image_url, image_path = gen_image(llm_kwargs, prompt, resolution)
chatbot.append([prompt,
f'图像中转网址: <br/>`{image_url}`<br/>'+
f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
f'本地文件地址: <br/>`{image_path}`<br/>'+
@@ -144,7 +144,7 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
elif part in ['vivid', 'natural']:
style = part
image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
chatbot.append([prompt,
f'图像中转网址: <br/>`{image_url}`<br/>'+
f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
f'本地文件地址: <br/>`{image_path}`<br/>'+
@@ -164,7 +164,7 @@ class ImageEditState(GptAcademicState):
confirm = (len(file_manifest) >= 1 and file_manifest[0].endswith('.png') and os.path.exists(file_manifest[0]))
file = None if not confirm else file_manifest[0]
return confirm, file
def lock_plugin(self, chatbot):
chatbot._cookies['lock_plugin'] = 'crazy_functions.图片生成->图片修改_DALLE2'
self.dump_state(chatbot)
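图片生成_DALLE3 above reads resolution, quality and style out of a single free-form advanced_arg string, token by token. A small sketch of that kind of parsing; the 'vivid'/'natural' styles mirror the hunk, while the resolution and quality candidates and the defaults are assumptions for illustration:

```python
def parse_dalle3_args(advanced_arg: str):
    resolution, quality, style = '1024x1024', 'standard', 'vivid'   # assumed defaults
    for part in advanced_arg.split():
        if part in ['1024x1024', '1792x1024', '1024x1792']:
            resolution = part
        elif part in ['hd', 'standard']:
            quality = part
        elif part in ['vivid', 'natural']:
            style = part
    return resolution, quality, style

print(parse_dalle3_args("1792x1024 hd natural"))   # ('1792x1024', 'hd', 'natural')
```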

@@ -57,11 +57,11 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
if get_conf("AUTOGEN_USE_DOCKER"):
import docker
except:
chatbot.append([ f"处理任务: {txt}",
f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pyautogen docker```。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import autogen
@@ -72,7 +72,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
chatbot.append([f"处理任务: {txt}", f"缺少docker运行环境"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 解锁插件
chatbot.get_cookies()['lock_plugin'] = None
persistent_class_multi_user_manager = GradioMultiuserManagerForPersistentClasses()

@@ -66,7 +66,7 @@ def read_file_to_chat(chatbot, history, file_name):
i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
chatbot.append([i_say, gpt_say])
chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
return chatbot, history
@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
@@ -80,7 +80,7 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
user_request 当前用户的请求信息IP地址等
"""
chatbot.append(("保存当前对话",
f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
@@ -108,9 +108,9 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
if txt == "": txt = '空空如也的输入栏'
import glob
local_history = "<br/>".join([
"`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
recursive=True
)])
chatbot.append([f"正在查找对话历史文件html格式: {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
@@ -139,7 +139,7 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
import glob, os
local_history = "<br/>".join([
"`"+hide_cwd(f)+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
)])
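Both archive plugins above enumerate saved chat HTML files with a recursive glob under the user's log folder. A compact sketch of that listing step (the folder layout and file prefix are illustrative assumptions):

```python
import glob
import os

log_folder = os.path.expanduser("~/gpt_log/default_user/chat_history")   # assumed layout
f_prefix = "对话历史存档"

files = glob.glob(f"{log_folder}/**/{f_prefix}*.html", recursive=True)    # walk all subfolders
local_history = "<br/>".join("`" + os.path.relpath(f, log_folder) + "`" for f in files)
print(local_history or "no archives found")
```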

@@ -40,10 +40,10 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。'
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say_show_user,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=[],
sys_prompt="总结文章。"
)
@@ -56,10 +56,10 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
if len(paper_fragments) > 1:
i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=this_paper_history,
sys_prompt="总结文章。"
)

@@ -53,7 +53,7 @@ class PaperFileGroup():
def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
# <-------- 读取Markdown文件,删除其中的所有注释 ----------> # <-------- 读取Markdown文件,删除其中的所有注释 ---------->
pfg = PaperFileGroup() pfg = PaperFileGroup()
for index, fp in enumerate(file_manifest): for index, fp in enumerate(file_manifest):
@@ -63,23 +63,23 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
pfg.file_paths.append(fp)
pfg.file_contents.append(file_content)
# <-------- 拆分过长的Markdown文件 ---------->
pfg.run_file_split(max_token_limit=1500)
n_split = len(pfg.sp_file_contents)
# <-------- 多线程翻译开始 ---------->
if language == 'en->zh':
inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
elif language == 'zh->en':
inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
else:
inputs_array = [f"This is a Markdown file, translate it into {language}, do not modify any existing Markdown commands, only answer me with translated results:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
@@ -103,7 +103,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
except:
logging.error(trimmed_format_exc())
# <-------- 整理结果,退出 ---------->
create_report_file_name = gen_time_str() + f"-chatgpt.md"
res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)
@@ -255,7 +255,7 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history,
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
language = plugin_kwargs.get("advanced_arg", 'Chinese')
yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language=language)
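The three branches above differ only in the instruction prefix: for every Markdown fragment they build one prompt, one user-facing label and one system prompt, and the three arrays stay index-aligned for the multi-threaded request step. A minimal standalone sketch of that pattern follows; the helper name and the sample data are made up for illustration and are not part of the plugin.

def build_translation_prompts(fragments, tags, target_language="Chinese"):
    # one entry per fragment; the three lists must stay index-aligned
    instruction = (f"This is a Markdown file, translate it into {target_language}, "
                   "do not modify any existing Markdown commands:")
    inputs_array = [instruction + f"\n\n{frag}" for frag in fragments]
    inputs_show_user_array = [f"翻译 {tag}" for tag in tags]
    sys_prompt_array = ["You are a professional academic paper translator."] * len(fragments)
    return inputs_array, inputs_show_user_array, sys_prompt_array

# example with two dummy fragments of one file
inputs, labels, sys_prompts = build_translation_prompts(
    ["# Title\nHello world", "Second chunk of the same file"],
    ["doc.md (1/2)", "doc.md (2/2)"],
    target_language="English")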


@@ -17,7 +17,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
file_content, page_one = read_and_clean_pdf_text(file_name) # 尝试按照章节切割PDF
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
TOKEN_LIMIT_PER_FRAGMENT = 2500
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
@@ -25,7 +25,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果,我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
############################## <第 1 步,从摘要中提取高价值信息,放到history中> ##################################
final_results = []
final_results.append(paper_meta)
@@ -44,10 +44,10 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}"
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
llm_kwargs, chatbot,
history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
sys_prompt="Extract the main idea of this section with Chinese." # 提示
)
iteration_results.append(gpt_say)
last_iteration_result = gpt_say
@@ -67,15 +67,15 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
- (2):What are the past methods? What are the problems with them? Is the approach well motivated?
- (3):What is the research methodology proposed in this paper?
- (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
Follow the format of the output that follows:
1. Title: xxx\n\n
2. Authors: xxx\n\n
3. Affiliation: xxx\n\n
4. Keywords: xxx\n\n
5. Urls: xxx or xxx , xxx \n\n
6. Summary: \n\n
- (1):xxx;\n
- (2):xxx;\n
- (3):xxx;\n
- (4):xxx.\n\n
Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible,
@@ -85,8 +85,8 @@ do not have too much repetitive information, numerical values using the original
file_write_buffer.extend(final_results)
i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user='开始最终总结',
llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results,
sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters"
)
final_results.append(gpt_say)
@@ -114,8 +114,8 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
try:
import fitz
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
@@ -134,7 +134,7 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")


@@ -85,10 +85,10 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
msg = '正常'
# ** gpt request **
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say_show_user,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=[],
sys_prompt="总结文章。"
) # 带超时倒计时
@@ -106,10 +106,10 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
msg = '正常'
# ** gpt request **
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=history,
sys_prompt="总结文章。"
) # 带超时倒计时
@@ -138,8 +138,8 @@ def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, histo
try:
import pdfminer, bs4
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return


@@ -76,8 +76,8 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd')
success = success or success_mmd
file_manifest += file_manifest_mmd
chatbot.append(["文件列表:", ", ".join([e.split('/')[-1] for e in file_manifest])]);
yield from update_ui( chatbot=chatbot, history=history)
# 检测输入参数,如没有给定输入参数,直接退出
if not success:
if txt == "": txt = '空空如也的输入栏'


@@ -68,7 +68,7 @@ def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwa
with open(grobid_json_res, 'w+', encoding='utf8') as f:
f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
@@ -97,7 +97,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
# 为了更好的效果,我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
# 单线,获取文章meta信息
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
@@ -121,7 +121,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
)
gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
# 整理报告的格式
for i,k in enumerate(gpt_response_collection_md):
if i%2==0:
gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}] \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]\n "
else:
@@ -139,18 +139,18 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
# write html
try:
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
else:
gpt_response_collection_html[i] = gpt_response_collection_html[i]
final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:


@@ -27,7 +27,7 @@ def eval_manim(code):
class_name = get_class_name(code)
try:
time_str = gen_time_str()
subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
shutil.move(f'media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{time_str}.mp4')
@@ -36,7 +36,7 @@ def eval_manim(code):
output = e.output.decode()
print(f"Command returned non-zero exit status {e.returncode}: {output}.")
return f"Evaluating python script failed: {e.output}."
except:
print('generating mp4 failed')
return "Generating mp4 failed."
@@ -45,7 +45,7 @@ def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) != 1:
raise RuntimeError("GPT is not generating proper code.")
return matches[0].strip('python') # code block
@@ -61,7 +61,7 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
user_request 当前用户的请求信息IP地址等
"""
# 清空历史,以免输入溢出
history = []
# 基本信息:功能、贡献者
chatbot.append([
@@ -73,24 +73,24 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
# 尝试导入依赖, 如果缺少依赖, 则给出安装建议
dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面
if not dep_ok: return
# 输入
i_say = f'Generate a animation to show: ' + txt
demo = ["Here is some examples of manim", examples_of_manim()]
_, demo = input_clipping(inputs="", history=demo, max_token_limit=2560)
# 开始
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
sys_prompt=
r"Write a animation script with 3blue1brown's manim. "+
r"Please begin with `from manim import *`. " +
r"Answer me with a code block wrapped by ```."
)
chatbot.append(["开始生成动画", "..."])
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# 将代码转为动画
code = get_code_block(gpt_say)
res = eval_manim(code)
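The conversion step above only works if get_code_block pulls exactly one fenced block out of the model reply before eval_manim executes it. A self-contained sketch of that extraction logic; the sample reply string is invented for illustration.

import re

def extract_single_code_block(reply: str) -> str:
    # non-greedy match of everything between one pair of ``` fences
    matches = re.findall(r"```([\s\S]*?)```", reply)
    if len(matches) != 1:
        raise RuntimeError("expected exactly one code block in the reply")
    # strip('python') trims characters from the set {p, y, t, h, o, n} at both ends,
    # which in practice removes the leading "python" language tag after the opening fence
    return matches[0].strip('python')

sample_reply = "Sure, here it is:\n```python\nfrom manim import *\nclass Demo(Scene):\n    pass\n```"
print(extract_single_code_block(sample_reply))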


@@ -15,7 +15,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
file_content, page_one = read_and_clean_pdf_text(file_name) # 尝试按照章节切割PDF
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
TOKEN_LIMIT_PER_FRAGMENT = 2500
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
@@ -23,7 +23,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果,我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
############################## <第 1 步,从摘要中提取高价值信息,放到history中> ##################################
final_results = []
final_results.append(paper_meta)
@@ -42,10 +42,10 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]} ...."
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
llm_kwargs, chatbot,
history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
sys_prompt="Extract the main idea of this section, answer me with Chinese." # 提示
)
iteration_results.append(gpt_say)
last_iteration_result = gpt_say
@@ -76,8 +76,8 @@ def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chat
try:
import fitz
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return


@@ -16,7 +16,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
@@ -27,7 +27,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
if not fast_debug: time.sleep(2)
if not fast_debug:
res = write_history_to_file(history)
promote_file_to_downloadzone(res, chatbot=chatbot)
chatbot.append(("完成了吗?", res))


@@ -179,15 +179,15 @@ def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
llm_kwargs, chatbot,
history=["The main content of the previous section is?", last_iteration_result], # 迭代上一次的结果
sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese." # 提示
)
results.append(gpt_say)
last_iteration_result = gpt_say
############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
gpt_say = plugin_kwargs.get("advanced_arg", "") #将图表类型参数赋值为插件参数
results_txt = '\n'.join(results) #合并摘要
if gpt_say not in ['1','2','3','4','5','6','7','8','9']: #如插件参数不正确则使用对话模型判断
i_say_show_user = f'接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制'; gpt_say = "[Local Message] 收到。" # 用户提示
@@ -198,7 +198,7 @@ def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt=""
)
if gpt_say in ['1','2','3','4','5','6','7','8','9']: #判断返回是否正确
@@ -228,12 +228,12 @@ def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt=""
)
history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
@CatchException
def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
@@ -249,11 +249,11 @@ def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history,
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
\n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if os.path.exists(txt): #如输入区无内容则直接解析历史记录
from crazy_functions.pdf_fns.parse_word import extract_text_from_files
file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
@@ -264,15 +264,15 @@ def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history,
if excption != "":
if excption == "word":
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。")
elif excption == "pdf":
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
elif excption == "word_pip":
report_exception(chatbot, history,
a=f"解析项目: {txt}",


@@ -9,7 +9,7 @@ install_msg ="""
3. python -m pip install unstructured[all-docs] --upgrade
4. python -c 'import nltk; nltk.download("punkt")'
"""
@CatchException
@@ -56,7 +56,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
chatbot.append(["没有找到任何可读取文件", "当前支持的格式包括: txt, md, docx, pptx, pdf, json等"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# < -------------------预热文本向量化模组--------------- >
chatbot.append(['<br/>'.join(file_manifest), "正在预热文本向量化模组, 如果是第一次运行, 将消耗较长时间下载中文向量化模型..."])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -109,8 +109,8 @@ def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
chatbot.append((txt, f'[知识库 {kai_id}] ' + prompt))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt=system_prompt
)
history.extend((prompt, gpt_say))


@@ -40,10 +40,10 @@ def scrape_text(url, proxies) -> str:
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
'Content-Type': 'text/plain',
}
try:
response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
except:
return "无法连接到该网页"
soup = BeautifulSoup(response.text, "html.parser")
for script in soup(["script", "style"]):
@@ -66,7 +66,7 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
user_request 当前用户的请求信息IP地址等
"""
history = [] # 清空历史,以免输入溢出
chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
"[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
@@ -91,13 +91,13 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
# ------------- < 第3步ChatGPT综合 > -------------
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
inputs=i_say,
history=history,
max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
)
chatbot[-1] = (i_say, gpt_say)
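For reference, the scrape_text fragment above boils down to a fetch-and-strip routine: request the page, correct the charset when the server defaults to ISO-8859-1, drop script/style tags, and return the visible text. A simplified, self-contained sketch of that routine (the header set and proxy handling are reduced to the essentials):

import requests
from bs4 import BeautifulSoup

def fetch_page_text(url, proxies=None, timeout=8):
    headers = {'User-Agent': 'Mozilla/5.0', 'Content-Type': 'text/plain'}
    try:
        response = requests.get(url, headers=headers, proxies=proxies, timeout=timeout)
        # servers that omit a charset are reported as ISO-8859-1; fall back to the detected encoding
        if response.encoding == "ISO-8859-1":
            response.encoding = response.apparent_encoding
    except requests.RequestException:
        return "无法连接到该网页"
    soup = BeautifulSoup(response.text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.extract()  # remove non-visible content before extracting text
    return "\n".join(line.strip() for line in soup.get_text().splitlines() if line.strip())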


@@ -33,7 +33,7 @@ explain_msg = """
- 「请调用插件,解析python源代码项目,代码我刚刚打包拖到上传区了」
- 「请问Transformer网络的结构是怎样的?」
2. 您可以打开插件下拉菜单以了解本项目的各种能力。
3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词,您的意图可以被识别的更准确。
@@ -67,7 +67,7 @@ class UserIntention(BaseModel):
def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=txt, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt=system_prompt
)
chatbot[-1] = [txt, gpt_say]
@@ -115,7 +115,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
if is_the_upload_folder(txt):
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
if is_certain or (state.has_provided_explaination):
# 如果意图明确,跳过提示环节
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
@@ -152,7 +152,7 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
analyze_res = run_gpt_fn(inputs, "")
try:
user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
except JsonStringError as e:
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
@@ -161,7 +161,7 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
pass
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
chatbot=chatbot, history=history, delay=0)
# 用户意图: 修改本项目的配置


@@ -82,13 +82,13 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
history=this_iteration_history_feed, # 迭代之前的分析
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=summary,
inputs_show_user=summary,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=[i_say, result], # 迭代之前的分析
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
@@ -345,9 +345,12 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
# 将要忽略匹配的文件名(例如: ^README.md)
- pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+ pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.") # 移除左边通配符,移除右侧逗号,转义点号
+ for _ in txt_pattern.split(" ") # 以空格分割
+ if (_ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")) # ^开始,但不是^*.开始
+ ]
# 生成正则表达式
- pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+ pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
history.clear()
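To make the exclusion rule above concrete, the snippet below builds the same kind of regex and checks it against a few paths; the suffix/name lists and the file paths are made-up examples, not values from the diff.

import re

# hypothetical exclusion lists, as the two list comprehensions above would produce them
pattern_except_suffix = ['md', 'zip', 'rar', '7z', 'tar', 'gz']
pattern_except_name = [r'README\.md']

pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''

for path in ['/project/docs/notes.md', '/project/archive.zip', '/project/README.md', '/project/main.py']:
    print(path, bool(re.search(pattern_except, path)))  # only main.py survives the filter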


@@ -20,8 +20,8 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
# llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
llm_kwargs['llm_model'] = MULTI_QUERY_LLM_MODELS # 支持任意数量的llm接口,用&符号分隔
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=txt, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt=system_prompt,
retry_times_at_unknown_error=0
)
@@ -52,8 +52,8 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=txt, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt=system_prompt,
retry_times_at_unknown_error=0
)


@@ -39,7 +39,7 @@ class AsyncGptTask():
try:
MAX_TOKEN_ALLO = 2560
i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
observe_window=observe_window[index], console_slience=True)
except ConnectionAbortedError as token_exceed_err:
print('至少一个线程任务Token溢出而失败', e)
@@ -120,7 +120,7 @@ class InterviewAssistant(AliyunASR):
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
self.plugin_wd.feed()
if self.event_on_result_chg.is_set():
# called when some words have finished
self.event_on_result_chg.clear()
chatbot[-1] = list(chatbot[-1])
@@ -151,7 +151,7 @@ class InterviewAssistant(AliyunASR):
# add gpt task 创建子线程请求gpt,避免线程阻塞
history = chatbot2history(chatbot)
self.agt.add_async_gpt_task(self.buffered_sentence, len(chatbot)-1, llm_kwargs, history, system_prompt)
self.buffered_sentence = ""
chatbot.append(["[ 请讲话 ]", "[ 正在等您说完问题 ]"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -20,10 +20,10 @@ def get_meta_information(url, chatbot, history):
proxies = get_conf('proxies')
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7',
'Cache-Control':'max-age=0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'Connection': 'keep-alive'
}
try:
@@ -95,7 +95,7 @@ def get_meta_information(url, chatbot, history):
)
try: paper = next(search.results())
except: paper = None
is_match = paper is not None and string_similar(title, paper.title) > 0.90
# 如果在Arxiv上匹配失败,检索文章的历史版本的题目
@@ -146,8 +146,8 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
import math
from bs4 import BeautifulSoup
except:
report_exception(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4 arxiv```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
@@ -163,7 +163,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
if len(meta_paper_info_list[:batchsize]) > 0:
i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
"1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开is_paper_in_arxiv;4、引用数量cite;5、中文摘要翻译。" + \
f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
@@ -175,11 +175,11 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
history.extend([ f"第{batch+1}批", gpt_say ])
meta_paper_info_list = meta_paper_info_list[batchsize:]
chatbot.append(["状态?",
"已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
msg = '正常'
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
path = write_history_to_file(history)
promote_file_to_downloadzone(path, chatbot=chatbot)
chatbot.append(("完成了吗?", path));
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
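The matching step above accepts an arxiv hit only when string_similar(title, paper.title) exceeds 0.90. The project ships its own string_similar helper; one plausible way such a score can be computed (an assumption, not necessarily the project's actual implementation) is difflib's sequence ratio:

import difflib

def title_similarity(a: str, b: str) -> float:
    # ratio() is in [0, 1]; values above ~0.9 usually indicate the same title up to casing/punctuation
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(title_similarity("Attention Is All You Need", "Attention is all you need"))  # ~1.0
print(title_similarity("Attention Is All You Need", "Deep Residual Learning"))     # clearly below 0.9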


@@ -40,7 +40,7 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
"""
history = [] # 清空历史,以免输入溢出
chatbot.append((
"您正在调用插件:历史上的今天",
"[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板该函数只有20多行代码。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR" + 高阶功能模板函数示意图))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
for i in range(5):
@@ -48,8 +48,8 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。'
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。"
)
chatbot[-1] = (i_say, gpt_say)
@@ -84,15 +84,15 @@ def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if txt == "": txt = "空白的输入栏" # 调皮一下
i_say_show_user = f'请绘制有关“{txt}”的逻辑关系图。'
i_say = PROMPT.format(subject=txt)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say,
inputs_show_user=i_say_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt=""
)
history.append(i_say); history.append(gpt_say)


@@ -1,12 +1,12 @@
## ===================================================
# docker-compose.yml
## ===================================================
# 1. 请在以下方案中选择任意一种,然后删除其他的方案
# 2. 修改你选择的方案中的environment环境变量,详情请见github wiki或者config.py
# 3. 选择一种暴露服务端口的方法,并对相应的配置做出修改:
# 方法1: 适用于Linux,很方便,可惜windows不支持与宿主的网络融合为一体,这个是默认配置
# network_mode: "host"
# 方法2: 适用于所有系统包括Windows和MacOS端口映射,把容器的端口映射到宿主的端口注意您需要先删除network_mode: "host",再追加以下内容)
# ports:
# - "12345:12345" # 注意12345必须与WEB_PORT环境变量相互对应
# 4. 最后`docker-compose up`运行
@@ -25,7 +25,7 @@
## ===================================================
## ===================================================
## 方案零 部署项目的全部能力这个是包含cuda和latex的大型镜像。如果您网速慢、硬盘小或没有显卡,则不推荐使用这个
## ===================================================
version: '3'
services:
@@ -63,10 +63,10 @@ services:
# count: 1
# capabilities: [gpu]
# WEB_PORT暴露方法1: 适用于Linux与宿主的网络融合
network_mode: "host"
# WEB_PORT暴露方法2: 适用于所有系统端口映射
# ports:
# - "12345:12345" # 12345必须与WEB_PORT相互对应
@@ -75,10 +75,8 @@ services:
bash -c "python3 -u main.py"
## ===================================================
## 方案一 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
## ===================================================
version: '3'
services:
@@ -97,16 +95,16 @@ services:
# DEFAULT_WORKER_NUM: ' 10 '
# AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
- # 与宿主的网络融合
+ # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
network_mode: "host"
- # 不使用代理网络拉取最新代码
+ # 启动命令
command: >
bash -c "python3 -u main.py"
### ===================================================
### 方案二 如果需要运行ChatGLM + Qwen + MOSS等本地模型
### ===================================================
version: '3'
services:
@@ -130,8 +128,10 @@ services:
devices:
- /dev/nvidia0:/dev/nvidia0
- # 与宿主的网络融合
+ # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
network_mode: "host"
+ # 启动命令
command: >
bash -c "python3 -u main.py"
@@ -139,8 +139,9 @@ services:
# command: >
# bash -c "pip install -r request_llms/requirements_qwen.txt && python3 -u main.py"
### ===================================================
### 方案三 如果需要运行ChatGPT + LLAMA + 盘古 + RWKV本地模型
### ===================================================
version: '3'
services:
@@ -164,16 +165,16 @@ services:
devices:
- /dev/nvidia0:/dev/nvidia0
- # 与宿主的网络融合
+ # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
network_mode: "host"
- # 不使用代理网络拉取最新代码
+ # 启动命令
command: >
python3 -u main.py
## ===================================================
## 方案四 ChatGPT + Latex
## ===================================================
version: '3'
services:
@@ -190,16 +191,16 @@ services:
DEFAULT_WORKER_NUM: ' 10 '
WEB_PORT: ' 12303 '
- # 与宿主的网络融合
+ # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
network_mode: "host"
- # 不使用代理网络拉取最新代码
+ # 启动命令
command: >
bash -c "python3 -u main.py"
## ===================================================
## 方案五 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md
## ===================================================
version: '3'
services:
@@ -223,9 +224,9 @@ services:
# (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
# (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '
- # 与宿主的网络融合
+ # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
network_mode: "host"
- # 不使用代理网络拉取最新代码
+ # 启动命令
command: >
bash -c "python3 -u main.py"

main.py

@@ -13,9 +13,20 @@ help_menu_description = \
</br></br>如何语音对话: 请阅读Wiki </br></br>如何语音对话: 请阅读Wiki
</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交网页刷新后失效""" </br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交网页刷新后失效"""
def enable_log(PATH_LOGGING):
import logging, uuid
admin_log_path = os.path.join(PATH_LOGGING, "admin")
os.makedirs(admin_log_path, exist_ok=True)
log_dir = os.path.join(admin_log_path, "chat_secrets.log")
try:logging.basicConfig(filename=log_dir, level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
except:logging.basicConfig(filename=log_dir, level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
# Disable logging output from the 'httpx' logger
logging.getLogger("httpx").setLevel(logging.WARNING)
print(f"所有对话记录将自动保存在本地目录{log_dir}, 请注意自我隐私保护哦!")
def main(): def main():
import gradio as gr import gradio as gr
if gr.__version__ not in ['3.32.8']: if gr.__version__ not in ['3.32.9']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.") raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
from request_llms.bridge_all import predict from request_llms.bridge_all import predict
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
@@ -23,25 +34,19 @@ def main():
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION') proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT') CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU') ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
- DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
- INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
+ NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
+ DARK_MODE, INIT_SYS_PROMPT, ADD_WAIFU = get_conf('DARK_MODE', 'INIT_SYS_PROMPT', 'ADD_WAIFU')
# 如果WEB_PORT是-1, 则随机选取WEB端口 # 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2 from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
- from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
+ from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}" title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
- # 问询记录, python 版本建议3.9+(越新越好)
- import logging, uuid
- os.makedirs(PATH_LOGGING, exist_ok=True)
- try:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
- except:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
- # Disable logging output from the 'httpx' logger
- logging.getLogger("httpx").setLevel(logging.WARNING)
- print(f"所有问询记录将自动保存在本地目录./{PATH_LOGGING}/chat_secrets.log, 请注意自我隐私保护哦!")
+ # 对话、日志记录
+ enable_log(PATH_LOGGING)
# 一些普通功能模块 # 一些普通功能模块
from core_functional import get_core_functions from core_functional import get_core_functions
@@ -74,9 +79,9 @@ def main():
cancel_handles = [] cancel_handles = []
customize_btns = {} customize_btns = {}
predefined_btns = {} predefined_btns = {}
- with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
+ with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block:
gr.HTML(title_html)
- secret_css, dark_mode, py_pickle_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
+ secret_css, web_cookie_cache = gr.Textbox(visible=False), gr.Textbox(visible=False)
cookies = gr.State(load_chat_cookies()) cookies = gr.State(load_chat_cookies())
with gr_L1(): with gr_L1():
with gr_L2(scale=2, elem_id="gpt-chat"): with gr_L2(scale=2, elem_id="gpt-chat"):
@@ -152,9 +157,13 @@ def main():
with gr.Tab("更换模型", elem_id="interact-panel"): with gr.Tab("更换模型", elem_id="interact-panel"):
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
+ temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
- system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT)
+ system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")
temperature.change(None, inputs=[temperature], outputs=None,
_js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
system_prompt.change(None, inputs=[system_prompt], outputs=None,
_js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")
with gr.Tab("界面外观", elem_id="interact-panel"): with gr.Tab("界面外观", elem_id="interact-panel"):
theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False) theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
@@ -194,64 +203,19 @@ def main():
with gr.Column(scale=1, min_width=70): with gr.Column(scale=1, min_width=70):
basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm") basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm") basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix, clean_up=False):
ret = {}
# 读取之前的自定义按钮
customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
# 更新新的自定义按钮
customize_fn_overwrite_.update({
basic_btn_dropdown_:
{
"Title":basic_fn_title,
"Prefix":basic_fn_prefix,
"Suffix":basic_fn_suffix,
}
}
)
if clean_up:
customize_fn_overwrite_ = {}
cookies_.update(customize_fn_overwrite_) # 更新cookie
visible = (not clean_up) and (basic_fn_title != "")
if basic_btn_dropdown_ in customize_btns:
# 是自定义按钮,不是预定义按钮
ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
else:
# 是预定义按钮
ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
ret.update({cookies: cookies_})
try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
except: persistent_cookie_ = {}
persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
ret.update({py_pickle_cookie: persistent_cookie_}) # write persistent cookie
return ret
from shared_utils.cookie_manager import assign_btn__fn_builder
assign_btn = assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache)
# update btn
- h = basic_fn_confirm.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
-                            [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
- h.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+ h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
+                            [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
+ h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
# clean up btn
- h2 = basic_fn_clean.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
-                           [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
- h2.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+ h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
+                           [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
+ h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
def persistent_cookie_reload(persistent_cookie_, cookies_):
ret = {}
for k in customize_btns:
ret.update({customize_btns[k]: gr.update(visible=False, value="")})
try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
except: return ret
customize_fn_overwrite_ = persistent_cookie_.get("custom_bnt", {})
cookies_['customize_fn_overwrite'] = customize_fn_overwrite_
ret.update({cookies: cookies_})
for k,v in persistent_cookie_["custom_bnt"].items():
if v['Title'] == "": continue
if k in customize_btns: ret.update({customize_btns[k]: gr.update(visible=True, value=v['Title'])})
else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
return ret
# 功能区显示开关与功能区的互动 # 功能区显示开关与功能区的互动
def fn_area_visibility(a): def fn_area_visibility(a):
@@ -371,11 +335,14 @@ def main():
audio_mic.stream(deal_audio, inputs=[audio_mic, cookies]) audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
- demo.load(init_cookie, inputs=[cookies], outputs=[cookies])
- demo.load(persistent_cookie_reload, inputs = [py_pickle_cookie, cookies],
-           outputs = [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
- demo.load(None, inputs=[dark_mode], outputs=None, _js="""(dark_mode)=>{apply_cookie_for_checkbox(dark_mode);}""") # 配置暗色主题或亮色主题
- demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
+ app_block.load(assign_user_uuid, inputs=[cookies], outputs=[cookies])
+ from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
+ load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
+ app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
+           outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
+ app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}")""") # 配置暗色主题或亮色主题
# gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数 # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
def run_delayed_tasks(): def run_delayed_tasks():
@@ -390,28 +357,15 @@ def main():
threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新 threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面 threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start() # 预热tiktoken模块 threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start() # 预热tiktoken模块
# 运行一些异步任务自动更新、打开浏览器页面、预热tiktoken模块
run_delayed_tasks() run_delayed_tasks()
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
quiet=True,
server_name="0.0.0.0",
ssl_keyfile=None if SSL_KEYFILE == "" else SSL_KEYFILE,
ssl_certfile=None if SSL_CERTFILE == "" else SSL_CERTFILE,
ssl_verify=False,
server_port=PORT,
favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
- # 如果需要在二级路径下运行
- # CUSTOM_PATH = get_conf('CUSTOM_PATH')
- # if CUSTOM_PATH != "/":
- #     from toolbox import run_gradio_in_subpath
- #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- # else:
- #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
- #                 blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
+ # 最后,正式开始服务
+ from shared_utils.fastapi_server import start_app
+ start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE)
if __name__ == "__main__":
    main()
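The removed demo.queue(...).launch(...) call above is now hidden behind start_app. Below is a minimal sketch of what such a wrapper has to cover, reconstructed purely from the deleted launch arguments; the real shared_utils/fastapi_server.start_app wraps a FastAPI server and is more involved, so treat every line as an assumption.

# Hypothetical sketch only: reproduces the old launch behaviour behind the new start_app signature
def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE):
    import os
    app_block.queue(concurrency_count=CONCURRENT_COUNT).launch(
        quiet=True,
        server_name="0.0.0.0",
        server_port=PORT,
        ssl_keyfile=None if SSL_KEYFILE == "" else SSL_KEYFILE,
        ssl_certfile=None if SSL_CERTFILE == "" else SSL_CERTFILE,
        ssl_verify=False,
        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
        favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
        # the project additionally blocks f"{PATH_LOGGING}/admin" (see the removed launch call above)
        blocked_paths=["config.py", "config_private.py", "docker-compose.yml", "Dockerfile"])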


@@ -1,7 +1,7 @@
""" """
Translate this project to other languages (experimental, please open an issue if there is any bug) Translate this project to other languages (experimental, please open an issue if there is any bug)
Usage: Usage:
1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider) 1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider)
@@ -11,20 +11,20 @@
3. modify TransPrompt (below ↓) 3. modify TransPrompt (below ↓)
TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
4. Run `python multi_language.py`. 4. Run `python multi_language.py`.
Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes. Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
(You can also run `CACHE_ONLY=True python multi_language.py` to use cached translation mapping) (You can also run `CACHE_ONLY=True python multi_language.py` to use cached translation mapping)
5. Find the translated program in `multi-language\English\*` 5. Find the translated program in `multi-language\English\*`
P.S. P.S.
- The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there. - The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there.
- If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request - If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request
- If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request
- Welcome any Pull Request, regardless of language - Welcome any Pull Request, regardless of language
""" """
@@ -58,7 +58,7 @@ if not os.path.exists(CACHE_FOLDER):
def lru_file_cache(maxsize=128, ttl=None, filename=None): def lru_file_cache(maxsize=128, ttl=None, filename=None):
""" """
Decorator that caches a function's return value after being called with given arguments. Decorator that caches a function's return value after being called with given arguments.
It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache. It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache.
maxsize: Maximum size of the cache. Defaults to 128. maxsize: Maximum size of the cache. Defaults to 128.
ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache. ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache.
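A hedged usage sketch of the decorator described above: only the decorator name and its maxsize/ttl/filename parameters come from the docstring, while the decorated function is invented for illustration.

@lru_file_cache(maxsize=128, ttl=None, filename="translation_cache")
def query_llm_translation(text):
    # illustrative placeholder; in the real script the cached call is the LLM translation request
    return text.upper()

# identical arguments are served from the on-disk cache instead of re-querying
print(query_llm_translation("你好"))
print(query_llm_translation("你好"))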
@@ -151,7 +151,7 @@ def map_to_json(map, language):
def read_map_from_json(language): def read_map_from_json(language):
if os.path.exists(f'docs/translate_{language.lower()}.json'): if os.path.exists(f'docs/translate_{language.lower()}.json'):
with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f: with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f:
res = json.load(f) res = json.load(f)
res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)} res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)}
return res return res
@@ -168,7 +168,7 @@ def advanced_split(splitted_string, spliter, include_spliter=False):
splitted[i] += spliter splitted[i] += spliter
splitted[i] = splitted[i].strip() splitted[i] = splitted[i].strip()
for i in reversed(range(len(splitted))): for i in reversed(range(len(splitted))):
if not contains_chinese(splitted[i]): if not contains_chinese(splitted[i]):
splitted.pop(i) splitted.pop(i)
splitted_string_tmp.extend(splitted) splitted_string_tmp.extend(splitted)
else: else:
@@ -183,12 +183,12 @@ def trans(word_to_translate, language, special=False):
if len(word_to_translate) == 0: return {} if len(word_to_translate) == 0: return {}
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
cookies = load_chat_cookies() cookies = load_chat_cookies()
llm_kwargs = { llm_kwargs = {
'api_key': cookies['api_key'], 'api_key': cookies['api_key'],
'llm_model': cookies['llm_model'], 'llm_model': cookies['llm_model'],
'top_p':1.0, 'top_p':1.0,
'max_length': None, 'max_length': None,
'temperature':0.4, 'temperature':0.4,
} }
@@ -204,12 +204,12 @@ def trans(word_to_translate, language, special=False):
sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" for _ in inputs_array] sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" for _ in inputs_array]
chatbot = ChatBotWithCookies(llm_kwargs) chatbot = ChatBotWithCookies(llm_kwargs)
gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_array,
inputs_show_user_array, inputs_show_user_array,
llm_kwargs, llm_kwargs,
chatbot, chatbot,
history_array, history_array,
sys_prompt_array, sys_prompt_array,
) )
while True: while True:
try: try:
@@ -224,7 +224,7 @@ def trans(word_to_translate, language, special=False):
try: try:
res_before_trans = eval(result[i-1]) res_before_trans = eval(result[i-1])
res_after_trans = eval(result[i]) res_after_trans = eval(result[i])
if len(res_before_trans) != len(res_after_trans): if len(res_before_trans) != len(res_after_trans):
raise RuntimeError raise RuntimeError
for a,b in zip(res_before_trans, res_after_trans): for a,b in zip(res_before_trans, res_after_trans):
translated_result[a] = b translated_result[a] = b
@@ -246,12 +246,12 @@ def trans_json(word_to_translate, language, special=False):
if len(word_to_translate) == 0: return {} if len(word_to_translate) == 0: return {}
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
cookies = load_chat_cookies() cookies = load_chat_cookies()
llm_kwargs = { llm_kwargs = {
'api_key': cookies['api_key'], 'api_key': cookies['api_key'],
'llm_model': cookies['llm_model'], 'llm_model': cookies['llm_model'],
'top_p':1.0, 'top_p':1.0,
'max_length': None, 'max_length': None,
'temperature':0.4, 'temperature':0.4,
} }
@@ -261,18 +261,18 @@ def trans_json(word_to_translate, language, special=False):
word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) word_to_translate_split = split_list(word_to_translate, N_EACH_REQ)
inputs_array = [{k:"#" for k in s} for s in word_to_translate_split] inputs_array = [{k:"#" for k in s} for s in word_to_translate_split]
inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array] inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array]
inputs_show_user_array = inputs_array inputs_show_user_array = inputs_array
history_array = [[] for _ in inputs_array] history_array = [[] for _ in inputs_array]
sys_prompt_array = [TransPrompt for _ in inputs_array] sys_prompt_array = [TransPrompt for _ in inputs_array]
chatbot = ChatBotWithCookies(llm_kwargs) chatbot = ChatBotWithCookies(llm_kwargs)
gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_array,
inputs_show_user_array, inputs_show_user_array,
llm_kwargs, llm_kwargs,
chatbot, chatbot,
history_array, history_array,
sys_prompt_array, sys_prompt_array,
) )
while True: while True:
try: try:
@@ -336,7 +336,7 @@ def step_1_core_key_translate():
cached_translation = read_map_from_json(language=LANG_STD) cached_translation = read_map_from_json(language=LANG_STD)
cached_translation_keys = list(cached_translation.keys()) cached_translation_keys = list(cached_translation.keys())
for d in chinese_core_keys_norepeat: for d in chinese_core_keys_norepeat:
if d not in cached_translation_keys: if d not in cached_translation_keys:
need_translate.append(d) need_translate.append(d)
if CACHE_ONLY: if CACHE_ONLY:
@@ -379,7 +379,7 @@ def step_1_core_key_translate():
# read again # read again
with open(file_path, 'r', encoding='utf-8') as f: with open(file_path, 'r', encoding='utf-8') as f:
content = f.read() content = f.read()
for k, v in chinese_core_keys_norepeat_mapping.items(): for k, v in chinese_core_keys_norepeat_mapping.items():
content = content.replace(k, v) content = content.replace(k, v)
@@ -390,7 +390,7 @@ def step_1_core_key_translate():
def step_2_core_key_translate(): def step_2_core_key_translate():
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# step2 # step2
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
def load_string(strings, string_input): def load_string(strings, string_input):
@@ -423,7 +423,7 @@ def step_2_core_key_translate():
splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False) splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False)
splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False) splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False)
splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False) splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False)
# -------------------------------------- # --------------------------------------
for j, s in enumerate(splitted_string): # .com for j, s in enumerate(splitted_string): # .com
if '.com' in s: continue if '.com' in s: continue
@@ -457,7 +457,7 @@ def step_2_core_key_translate():
comments_arr = [] comments_arr = []
for code_sp in content.splitlines(): for code_sp in content.splitlines():
comments = re.findall(r'#.*$', code_sp) comments = re.findall(r'#.*$', code_sp)
for comment in comments: for comment in comments:
load_string(strings=comments_arr, string_input=comment) load_string(strings=comments_arr, string_input=comment)
string_literals.extend(comments_arr) string_literals.extend(comments_arr)
@@ -479,7 +479,7 @@ def step_2_core_key_translate():
cached_translation = read_map_from_json(language=LANG) cached_translation = read_map_from_json(language=LANG)
cached_translation_keys = list(cached_translation.keys()) cached_translation_keys = list(cached_translation.keys())
for d in chinese_literal_names_norepeat: for d in chinese_literal_names_norepeat:
if d not in cached_translation_keys: if d not in cached_translation_keys:
need_translate.append(d) need_translate.append(d)
if CACHE_ONLY: if CACHE_ONLY:
@@ -504,18 +504,18 @@ def step_2_core_key_translate():
# read again # read again
with open(file_path, 'r', encoding='utf-8') as f: with open(file_path, 'r', encoding='utf-8') as f:
content = f.read() content = f.read()
for k, v in cached_translation.items(): for k, v in cached_translation.items():
if v is None: continue if v is None: continue
if '"' in v: if '"' in v:
v = v.replace('"', "`") v = v.replace('"', "`")
if '\'' in v: if '\'' in v:
v = v.replace('\'', "`") v = v.replace('\'', "`")
content = content.replace(k, v) content = content.replace(k, v)
with open(file_path, 'w', encoding='utf-8') as f: with open(file_path, 'w', encoding='utf-8') as f:
f.write(content) f.write(content)
if file.strip('.py') in cached_translation: if file.strip('.py') in cached_translation:
file_new = cached_translation[file.strip('.py')] + '.py' file_new = cached_translation[file.strip('.py')] + '.py'
file_path_new = os.path.join(root, file_new) file_path_new = os.path.join(root, file_new)


@@ -8,10 +8,10 @@
具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁 具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
2. predict_no_ui_long_connection(...) 2. predict_no_ui_long_connection(...)
""" """
- import tiktoken, copy
+ import tiktoken, copy, re
from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor
- from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask
+ from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask, read_one_api_model_name
from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui from .bridge_chatgpt import predict as chatgpt_ui
@@ -34,6 +34,9 @@ from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
from .bridge_zhipu import predict as zhipu_ui from .bridge_zhipu import predict as zhipu_ui
from .bridge_cohere import predict as cohere_ui
from .bridge_cohere import predict_no_ui_long_connection as cohere_noui
colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044'] colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
class LazyloadTiktoken(object): class LazyloadTiktoken(object):
@@ -61,6 +64,11 @@ API_URL_REDIRECT, AZURE_ENDPOINT, AZURE_ENGINE = get_conf("API_URL_REDIRECT", "A
openai_endpoint = "https://api.openai.com/v1/chat/completions" openai_endpoint = "https://api.openai.com/v1/chat/completions"
api2d_endpoint = "https://openai.api2d.net/v1/chat/completions" api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub" newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
claude_endpoint = "https://api.anthropic.com/v1/messages"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
cohere_endpoint = 'https://api.cohere.ai/v1/chat'
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/' if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15' azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
# 兼容旧版的配置 # 兼容旧版的配置
@@ -75,7 +83,10 @@ except:
if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint] if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint] if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint] if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
if gemini_endpoint in API_URL_REDIRECT: gemini_endpoint = API_URL_REDIRECT[gemini_endpoint]
if claude_endpoint in API_URL_REDIRECT: claude_endpoint = API_URL_REDIRECT[claude_endpoint]
if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[cohere_endpoint]
# 获取tokenizer # 获取tokenizer
tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo") tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
@@ -94,7 +105,7 @@ model_info = {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
- "max_token": 4096,
+ "max_token": 16385,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -126,7 +137,16 @@ model_info = {
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
"gpt-3.5-turbo-1106": {#16k "gpt-3.5-turbo-1106": { #16k
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"max_token": 16385,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"gpt-3.5-turbo-0125": { #16k
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint, "endpoint": openai_endpoint,
@@ -282,7 +302,7 @@ model_info = {
"gemini-pro": {
"fn_with_ui": genai_ui,
"fn_without_ui": genai_noui,
- "endpoint": None,
+ "endpoint": gemini_endpoint,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
@@ -290,13 +310,56 @@ model_info = {
"gemini-pro-vision": { "gemini-pro-vision": {
"fn_with_ui": genai_ui, "fn_with_ui": genai_ui,
"fn_without_ui": genai_noui, "fn_without_ui": genai_noui,
"endpoint": gemini_endpoint,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
# cohere
"cohere-command-r-plus": {
"fn_with_ui": cohere_ui,
"fn_without_ui": cohere_noui,
"can_multi_thread": True,
"endpoint": cohere_endpoint,
"max_token": 1024 * 4,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
}
# -=-=-=-=-=-=- 月之暗面 -=-=-=-=-=-=-
from request_llms.bridge_moonshot import predict as moonshot_ui
from request_llms.bridge_moonshot import predict_no_ui_long_connection as moonshot_no_ui
model_info.update({
"moonshot-v1-8k": {
"fn_with_ui": moonshot_ui,
"fn_without_ui": moonshot_no_ui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 1024 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"moonshot-v1-32k": {
"fn_with_ui": moonshot_ui,
"fn_without_ui": moonshot_no_ui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 1024 * 32, "max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
- }
+ "moonshot-v1-128k": {
"fn_with_ui": moonshot_ui,
"fn_without_ui": moonshot_no_ui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 1024 * 128,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
})
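All of these entries land in the same model_info dispatch table. A condensed sketch of how bridge_all resolves a model name later in this file is shown below; the key names mirror the entries above, but the helper itself is illustrative rather than part of the project.

def resolve_backend(model: str):
    # e.g. resolve_backend("moonshot-v1-8k") -> (moonshot_no_ui, 1024 * 8)
    entry = model_info[model]          # a KeyError here means the model was never registered
    return entry["fn_without_ui"], entry["max_token"]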
# -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=- # -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
for model in AVAIL_LLM_MODELS: for model in AVAIL_LLM_MODELS:
if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()): if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()):
@@ -312,25 +375,67 @@ for model in AVAIL_LLM_MODELS:
model_info.update({model: mi}) model_info.update({model: mi})
# -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=- # -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=-
- if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
+ # claude家族
+ claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229"]
+ if any(item in claude_models for item in AVAIL_LLM_MODELS):
from .bridge_claude import predict_no_ui_long_connection as claude_noui from .bridge_claude import predict_no_ui_long_connection as claude_noui
from .bridge_claude import predict as claude_ui from .bridge_claude import predict as claude_ui
model_info.update({
- "claude-1-100k": {
+ "claude-instant-1.2": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
- "endpoint": None,
+ "endpoint": claude_endpoint,
- "max_token": 8196,
+ "max_token": 100000,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
}) })
model_info.update({
- "claude-2": {
+ "claude-2.0": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
- "endpoint": None,
+ "endpoint": claude_endpoint,
- "max_token": 8196,
+ "max_token": 100000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-2.1": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-haiku-20240307": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-sonnet-20240229": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-opus-20240229": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
@@ -400,22 +505,6 @@ if "stack-claude" in AVAIL_LLM_MODELS:
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
} }
}) })
if "newbing-free" in AVAIL_LLM_MODELS:
try:
from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
from .bridge_newbingfree import predict as newbingfree_ui
model_info.update({
"newbing-free": {
"fn_with_ui": newbingfree_ui,
"fn_without_ui": newbingfree_noui,
"endpoint": newbing_endpoint,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
})
except:
print(trimmed_format_exc())
if "newbing" in AVAIL_LLM_MODELS: # same with newbing-free if "newbing" in AVAIL_LLM_MODELS: # same with newbing-free
try: try:
from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
@@ -448,6 +537,7 @@ if "chatglmft" in AVAIL_LLM_MODELS: # same with newbing-free
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# -=-=-=-=-=-=- 上海AI-LAB书生大模型 -=-=-=-=-=-=-
if "internlm" in AVAIL_LLM_MODELS: if "internlm" in AVAIL_LLM_MODELS:
try: try:
from .bridge_internlm import predict_no_ui_long_connection as internlm_noui from .bridge_internlm import predict_no_ui_long_connection as internlm_noui
@@ -480,6 +570,7 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# -=-=-=-=-=-=- 通义-本地模型 -=-=-=-=-=-=-
if "qwen-local" in AVAIL_LLM_MODELS: if "qwen-local" in AVAIL_LLM_MODELS:
try: try:
from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
@@ -488,6 +579,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
"qwen-local": { "qwen-local": {
"fn_with_ui": qwen_local_ui, "fn_with_ui": qwen_local_ui,
"fn_without_ui": qwen_local_noui, "fn_without_ui": qwen_local_noui,
"can_multi_thread": False,
"endpoint": None, "endpoint": None,
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -496,6 +588,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# -=-=-=-=-=-=- 通义-在线模型 -=-=-=-=-=-=-
if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai
try: try:
from .bridge_qwen import predict_no_ui_long_connection as qwen_noui from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
@@ -504,6 +597,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"qwen-turbo": { "qwen-turbo": {
"fn_with_ui": qwen_ui, "fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui, "fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 6144, "max_token": 6144,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -512,6 +606,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"qwen-plus": { "qwen-plus": {
"fn_with_ui": qwen_ui, "fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui, "fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 30720, "max_token": 30720,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -520,6 +615,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"qwen-max": { "qwen-max": {
"fn_with_ui": qwen_ui, "fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui, "fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 28672, "max_token": 28672,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -528,7 +624,35 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
- if "spark" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
+ # -=-=-=-=-=-=- 零一万物模型 -=-=-=-=-=-=-
if "yi-34b-chat-0205" in AVAIL_LLM_MODELS or "yi-34b-chat-200k" in AVAIL_LLM_MODELS: # zhipuai
try:
from .bridge_yimodel import predict_no_ui_long_connection as yimodel_noui
from .bridge_yimodel import predict as yimodel_ui
model_info.update({
"yi-34b-chat-0205": {
"fn_with_ui": yimodel_ui,
"fn_without_ui": yimodel_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 4000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-34b-chat-200k": {
"fn_with_ui": yimodel_ui,
"fn_without_ui": yimodel_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
except:
print(trimmed_format_exc())
# -=-=-=-=-=-=- 讯飞星火认知大模型 -=-=-=-=-=-=-
if "spark" in AVAIL_LLM_MODELS:
try: try:
from .bridge_spark import predict_no_ui_long_connection as spark_noui from .bridge_spark import predict_no_ui_long_connection as spark_noui
from .bridge_spark import predict as spark_ui from .bridge_spark import predict as spark_ui
@@ -536,6 +660,7 @@ if "spark" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
"spark": { "spark": {
"fn_with_ui": spark_ui, "fn_with_ui": spark_ui,
"fn_without_ui": spark_noui, "fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -552,6 +677,7 @@ if "sparkv2" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
"sparkv2": { "sparkv2": {
"fn_with_ui": spark_ui, "fn_with_ui": spark_ui,
"fn_without_ui": spark_noui, "fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -568,6 +694,7 @@ if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS: # 讯飞
"sparkv3": { "sparkv3": {
"fn_with_ui": spark_ui, "fn_with_ui": spark_ui,
"fn_without_ui": spark_noui, "fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -576,6 +703,7 @@ if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS: # 讯飞
"sparkv3.5": { "sparkv3.5": {
"fn_with_ui": spark_ui, "fn_with_ui": spark_ui,
"fn_without_ui": spark_noui, "fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None, "endpoint": None,
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
@@ -600,6 +728,7 @@ if "llama2" in AVAIL_LLM_MODELS: # llama2
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# -=-=-=-=-=-=- 智谱 -=-=-=-=-=-=-
if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai 是glm-4的别名,向后兼容配置 if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai 是glm-4的别名,向后兼容配置
try: try:
model_info.update({ model_info.update({
@@ -614,6 +743,7 @@ if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai 是glm-4的别名,向后兼容
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# -=-=-=-=-=-=- 幻方-深度求索大模型 -=-=-=-=-=-=-
if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
try: try:
from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
@@ -630,26 +760,34 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
}) })
except: except:
print(trimmed_format_exc()) print(trimmed_format_exc())
# if "skylark" in AVAIL_LLM_MODELS:
# try:
# from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
# from .bridge_skylark2 import predict as skylark_ui
# model_info.update({
# "skylark": {
# "fn_with_ui": skylark_ui,
# "fn_without_ui": skylark_noui,
# "endpoint": None,
# "max_token": 4096,
# "tokenizer": tokenizer_gpt35,
# "token_cnt": get_token_num_gpt35,
# }
# })
# except:
# print(trimmed_format_exc())
- # <-- 用于定义和切换多个azure模型 -->
- AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
+ # -=-=-=-=-=-=- one-api 对齐支持 -=-=-=-=-=-=-
+ for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
# 为了更灵活地接入one-api多模型管理界面,设计了此接口,例子AVAIL_LLM_MODELS = ["one-api-mixtral-8x7b(max_token=6666)"]
# 其中
# "one-api-" 是前缀(必要)
# "mixtral-8x7b" 是模型名(必要)
# "(max_token=6666)" 是配置(非必要)
try:
_, max_token_tmp = read_one_api_model_name(model)
except:
print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
continue
model_info.update({
model: {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"max_token": max_token_tmp,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
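The comments above define the one-api-<model>(max_token=N) naming convention, and the unpacking `_, max_token_tmp = read_one_api_model_name(model)` implies a (name, max_token) return pair. A rough, illustrative parser for that convention follows; it is not the project's actual toolbox implementation.

import re

def parse_one_api_model_name(model: str):
    # "one-api-mixtral-8x7b(max_token=6666)" -> ("one-api-mixtral-8x7b", 6666)
    match = re.search(r"\(max_token=(\d+)\)", model)
    max_token = int(match.group(1)) if match else 4096   # assumed fallback when no config suffix is given
    name = re.sub(r"\(max_token=\d+\)", "", model)
    return name, max_token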
# -=-=-=-=-=-=- azure模型对齐支持 -=-=-=-=-=-=-
AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY") # <-- 用于定义和切换多个azure模型 -->
if len(AZURE_CFG_ARRAY) > 0: if len(AZURE_CFG_ARRAY) > 0:
for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items(): for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
# 可能会覆盖之前的配置,但这是意料之中的 # 可能会覆盖之前的配置,但这是意料之中的
@@ -678,7 +816,7 @@ def LLM_CATCH_EXCEPTION(f):
""" """
装饰器函数,将错误显示出来 装饰器函数,将错误显示出来
""" """
- def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
+ def decorated(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list, console_slience:bool):
try: try:
return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
except Exception as e: except Exception as e:
@@ -688,9 +826,9 @@ def LLM_CATCH_EXCEPTION(f):
return decorated return decorated
- def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
"""
- 发送至LLM,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
+ 发送至LLM,等待回复,一次性完成,不显示中间过程。但内部(尽可能地)用stream的方法避免中途网线被掐。
inputs inputs
是本次问询的输入 是本次问询的输入
sys_prompt: sys_prompt:
@@ -708,7 +846,6 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
model = llm_kwargs['llm_model'] model = llm_kwargs['llm_model']
n_model = 1 n_model = 1
if '&' not in model: if '&' not in model:
assert not model.startswith("tgui"), "TGUI不支持函数插件的实现"
# 如果只询问1个大语言模型 # 如果只询问1个大语言模型
method = model_info[model]["fn_without_ui"] method = model_info[model]["fn_without_ui"]
@@ -743,7 +880,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
# 观察窗window # 观察窗window
chat_string = [] chat_string = []
for i in range(n_model): for i in range(n_model):
- chat_string.append( f"{str(models[i])} 说】: <font color=\"{colors[i]}\"> {window_mutex[i][0]} </font>" )
+ color = colors[i%len(colors)]
+ chat_string.append( f"{str(models[i])} 说】: <font color=\"{color}\"> {window_mutex[i][0]} </font>" )
res = '<br/><br/>\n\n---\n\n'.join(chat_string) res = '<br/><br/>\n\n---\n\n'.join(chat_string)
# # # # # # # # # # # # # # # # # # # # # #
observe_window[0] = res observe_window[0] = res
@@ -760,22 +898,30 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
time.sleep(1) time.sleep(1)
for i, future in enumerate(futures): # wait and get for i, future in enumerate(futures): # wait and get
- return_string_collect.append( f"{str(models[i])} 说】: <font color=\"{colors[i]}\"> {future.result()} </font>" )
+ color = colors[i%len(colors)]
+ return_string_collect.append( f"{str(models[i])} 说】: <font color=\"{color}\"> {future.result()} </font>" )
window_mutex[-1] = False # stop mutex thread window_mutex[-1] = False # stop mutex thread
res = '<br/><br/>\n\n---\n\n'.join(return_string_collect) res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
return res return res
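With the '&' handling above, several backends can be queried in one call. A hedged usage example follows: the llm_kwargs fields mirror the dict built in multi_language.py, while the API key and model pairing are placeholders.

# Illustrative only: ask two configured models the same question and read the merged answer
import time
from request_llms.bridge_all import predict_no_ui_long_connection

observe_window = ["", time.time()]            # [streamed text so far, watchdog timestamp]
llm_kwargs = {
    'api_key': 'sk-...',                      # placeholder
    'llm_model': 'gpt-3.5-turbo&claude-2.1',  # '&' joins the models queried in parallel
    'top_p': 1.0,
    'max_length': None,
    'temperature': 0.4,
}
answer = predict_no_ui_long_connection("用一句话介绍你自己", llm_kwargs,
                                        history=[], sys_prompt="", observe_window=observe_window)
# each model's reply is returned in its own <font>-colored block, joined by horizontal rules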
- def predict(inputs, llm_kwargs, *args, **kwargs):
+ def predict(inputs:str, llm_kwargs:dict, *args, **kwargs):
""" """
发送至LLM,流式获取输出。 发送至LLM,流式获取输出。
用于基础的对话功能。 用于基础的对话功能。
- inputs 是本次问询的输入
- top_p, temperature是LLM的内部调优参数
- history 是之前的对话列表注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误
- chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
- additional_fn代表点击的哪个按钮,按钮见functional.py
+ 完整参数列表:
+     predict(
+         inputs:str, # 是本次问询的输入
+         llm_kwargs:dict, # 是LLM的内部调优参数
+         plugin_kwargs:dict, # 是插件的内部参数
+         chatbot:ChatBotWithCookies, # 原样传递,负责向用户前端展示对话,兼顾前端状态的功能
+         history:list=[], # 是之前的对话列表
+         system_prompt:str='', # 系统静默prompt
+         stream:bool=True, # 是否流式输出(已弃用)
+         additional_fn:str=None # 基础功能区按钮的附加功能
+     ):
""" """
inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm") inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")


@@ -56,15 +56,15 @@ class GetGLM2Handle(LocalLLMHandle):
query, max_length, top_p, temperature, history = adaptor(kwargs) query, max_length, top_p, temperature, history = adaptor(kwargs)
for response, history in self._model.stream_chat(self._tokenizer, for response, history in self._model.stream_chat(self._tokenizer,
query, query,
history, history,
max_length=max_length, max_length=max_length,
top_p=top_p, top_p=top_p,
temperature=temperature, temperature=temperature,
): ):
yield response yield response
def try_to_import_special_deps(self, **kwargs): def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt # import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行 # 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行


@@ -55,15 +55,15 @@ class GetGLM3Handle(LocalLLMHandle):
query, max_length, top_p, temperature, history = adaptor(kwargs) query, max_length, top_p, temperature, history = adaptor(kwargs)
for response, history in self._model.stream_chat(self._tokenizer, for response, history in self._model.stream_chat(self._tokenizer,
query, query,
history, history,
max_length=max_length, max_length=max_length,
top_p=top_p, top_p=top_p,
temperature=temperature, temperature=temperature,
): ):
yield response yield response
def try_to_import_special_deps(self, **kwargs): def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt # import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行 # 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行


@@ -37,7 +37,7 @@ class GetGLMFTHandle(Process):
self.check_dependency() self.check_dependency()
self.start() self.start()
self.threadLock = threading.Lock() self.threadLock = threading.Lock()
def check_dependency(self): def check_dependency(self):
try: try:
import sentencepiece import sentencepiece
@@ -101,7 +101,7 @@ class GetGLMFTHandle(Process):
break break
except Exception as e: except Exception as e:
retry += 1 retry += 1
if retry > 3: if retry > 3:
self.child.send('[Local Message] Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数。') self.child.send('[Local Message] Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数。')
raise RuntimeError("不能正常加载ChatGLMFT的参数") raise RuntimeError("不能正常加载ChatGLMFT的参数")
@@ -113,7 +113,7 @@ class GetGLMFTHandle(Process):
for response, history in self.chatglmft_model.stream_chat(self.chatglmft_tokenizer, **kwargs): for response, history in self.chatglmft_model.stream_chat(self.chatglmft_tokenizer, **kwargs):
self.child.send(response) self.child.send(response)
# # 中途接收可能的终止指令(如果有的话) # # 中途接收可能的终止指令(如果有的话)
# if self.child.poll(): # if self.child.poll():
# command = self.child.recv() # command = self.child.recv()
# if command == '[Terminate]': break # if command == '[Terminate]': break
except: except:
@@ -133,11 +133,12 @@ class GetGLMFTHandle(Process):
else: else:
break break
self.threadLock.release() self.threadLock.release()
global glmft_handle global glmft_handle
glmft_handle = None glmft_handle = None
################################################################################# #################################################################################
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
""" """
多线程方法 多线程方法
函数的说明请见 request_llms/bridge_all.py 函数的说明请见 request_llms/bridge_all.py
@@ -146,7 +147,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if glmft_handle is None: if glmft_handle is None:
glmft_handle = GetGLMFTHandle() glmft_handle = GetGLMFTHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glmft_handle.info if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glmft_handle.info
if not glmft_handle.success: if not glmft_handle.success:
error = glmft_handle.info error = glmft_handle.info
glmft_handle = None glmft_handle = None
raise RuntimeError(error) raise RuntimeError(error)
@@ -161,7 +162,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
response = "" response = ""
for response in glmft_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): for response in glmft_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
if len(observe_window) >= 1: observe_window[0] = response if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2: if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience: if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。") raise RuntimeError("程序终止。")
return response return response
@@ -180,7 +181,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
glmft_handle = GetGLMFTHandle() glmft_handle = GetGLMFTHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + glmft_handle.info) chatbot[-1] = (inputs, load_message + "\n\n" + glmft_handle.info)
yield from update_ui(chatbot=chatbot, history=[]) yield from update_ui(chatbot=chatbot, history=[])
if not glmft_handle.success: if not glmft_handle.success:
glmft_handle = None glmft_handle = None
return return


@@ -59,7 +59,7 @@ class GetONNXGLMHandle(LocalLLMHandle):
temperature=temperature, temperature=temperature,
): ):
yield answer yield answer
def try_to_import_special_deps(self, **kwargs): def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt # import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行 # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行


@@ -21,7 +21,9 @@ import random
# config_private.py放自己的秘密如API和代理网址 # config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件 # 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
- from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder
+ from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
+ from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
+ from toolbox import ChatBotWithCookies
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \ proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY') get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
@@ -47,14 +49,14 @@ def decode_chunk(chunk):
choice_valid = False choice_valid = False
has_content = False has_content = False
has_role = False has_role = False
try: try:
chunkjson = json.loads(chunk_decoded[6:]) chunkjson = json.loads(chunk_decoded[6:])
has_choices = 'choices' in chunkjson has_choices = 'choices' in chunkjson
if has_choices: choice_valid = (len(chunkjson['choices']) > 0) if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"]) if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None) if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"] if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except: except:
pass pass
return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
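For reference, decode_chunk expects standard OpenAI-style server-sent-event lines; the payload below is illustrative only.

chunk = b'data: {"choices": [{"delta": {"role": "assistant", "content": "Hi"}}]}'
# chunk_decoded[6:] strips the leading "data: " prefix before json.loads;
# for this chunk, decode_chunk reports has_choices=True, choice_valid=True, has_content=True, has_role=True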
@@ -68,7 +70,7 @@ def verify_endpoint(endpoint):
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint) raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
return endpoint return endpoint
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False):
""" """
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs inputs
@@ -103,13 +105,13 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
json_data = None json_data = None
while True: while True:
try: chunk = next(stream_response) try: chunk = next(stream_response)
except StopIteration: except StopIteration:
break break
except requests.exceptions.ConnectionError: except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。 chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk) chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if len(chunk_decoded)==0: continue if len(chunk_decoded)==0: continue
if not chunk_decoded.startswith('data:'): if not chunk_decoded.startswith('data:'):
error_msg = get_full_error(chunk, stream_response).decode() error_msg = get_full_error(chunk, stream_response).decode()
if "reduce the length" in error_msg: if "reduce the length" in error_msg:
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg) raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
@@ -125,11 +127,12 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
json_data = chunkjson['choices'][0]
delta = json_data["delta"]
if len(delta) == 0: break
- if "role" in delta: continue
- if "content" in delta:
+ if (not has_content) and has_role: continue
+ if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
+ if has_content: # has_role = True/False
result += delta["content"]
if not console_slience: print(delta["content"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
observe_window[0] += delta["content"]
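A minimal illustration (not part of the diff) of what the rewritten branch above changes: instead of re-checking keys in delta, it reuses the has_content / has_role flags from decode_chunk, so role-only frames and malformed third-party frames are skipped rather than raising:

    def consume(delta, has_content, has_role, result=""):
        # mirrors the new branch order shown above
        if len(delta) == 0:
            return result, True                  # empty delta -> end of stream
        if (not has_content) and has_role:
            return result, False                 # role-only frame, skip
        if (not has_content) and (not has_role):
            return result, False                 # nonstandard third-party frame, skip instead of raising
        return result + delta["content"], False  # normal content frame

    result, done = consume({"role": "assistant"}, has_content=False, has_role=True)
    result, done = consume({"content": "Hi"}, has_content=True, has_role=False, result=result)
    print(result)   # -> "Hi"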
@@ -145,7 +148,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
return result
- def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+ history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
发送至chatGPT,流式获取输出。
用于基础的对话功能。
@@ -171,7 +175,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
raw_input = inputs
- logging.info(f'[raw_input] {raw_input}')
+ # logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
@@ -187,7 +191,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。") chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面 yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return return
# 检查endpoint是否合法 # 检查endpoint是否合法
try: try:
from .bridge_all import model_info from .bridge_all import model_info
@@ -197,7 +201,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, tb_str) chatbot[-1] = (inputs, tb_str)
yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面 yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
return return
history.append(inputs); history.append("") history.append(inputs); history.append("")
retry = 0 retry = 0
@@ -214,7 +218,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = "" gpt_replying_buffer = ""
is_head_of_the_stream = True is_head_of_the_stream = True
if stream: if stream:
stream_response = response.iter_lines() stream_response = response.iter_lines()
@@ -226,21 +230,21 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
# 首先排除一个one-api没有done数据包的第三方Bug情形
if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
break
# 其他情况,直接返回报错
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if has_choices and not choice_valid:
@@ -252,7 +256,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
# 前者是API2D的结束条件,后者是OPENAI的结束条件
if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
# 判定为数据流的结束,gpt_replying_buffer也写完了
- logging.info(f'[response] {gpt_replying_buffer}')
+ # logging.info(f'[response] {gpt_replying_buffer}')
+ log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
break
# 处理数据流的主体
status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
@@ -264,7 +269,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
# 一些第三方接口的出现这样的错误,兼容一下吧
continue
else:
- # 一些垃圾第三方接口出现这样的错误
+ # 至此已经超出了正常接口应该进入的范围,一些垃圾第三方接口出现这样的错误
+ if chunkjson['choices'][0]["delta"]["content"] is None: continue # 一些垃圾第三方接口出现这样的错误,兼容一下吧
gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
history[-1] = gpt_replying_buffer
@@ -285,7 +291,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
elif "does not exist" in error_msg:
@@ -324,7 +330,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"Authorization": f"Bearer {api_key}"
}
if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
if llm_kwargs['llm_model'].startswith('azure-'):
headers.update({"api-key": api_key})
if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
@@ -356,10 +362,13 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('api2d-'):
model = llm_kwargs['llm_model'][len('api2d-'):]
+ if llm_kwargs['llm_model'].startswith('one-api-'):
+ model = llm_kwargs['llm_model'][len('one-api-'):]
+ model, _ = read_one_api_model_name(model)
if model == "gpt-3.5-random": # 随机选择, 绕过openai访问频率限制
model = random.choice([
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo-0613",
@@ -370,7 +379,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
payload = {
"model": model,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,


@@ -27,7 +27,7 @@ timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check
def report_invalid_key(key):
if get_conf("BLOCK_INVALID_APIKEY"):
# 实验性功能,自动检测并屏蔽失效的KEY,请勿使用
from request_llms.key_manager import ApiKeyManager
api_key = ApiKeyManager().add_key_to_blacklist(key)
@@ -51,13 +51,13 @@ def decode_chunk(chunk):
choice_valid = False
has_content = False
has_role = False
try:
chunkjson = json.loads(chunk_decoded[6:])
has_choices = 'choices' in chunkjson
if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except:
pass
return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
@@ -103,7 +103,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
def make_media_input(inputs, image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
@@ -122,7 +122,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
# 检查endpoint是否合法
try:
from .bridge_all import model_info
@@ -150,7 +150,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
@@ -162,21 +162,21 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
# 首先排除一个one-api没有done数据包的第三方Bug情形
if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
break
# 其他情况,直接返回报错
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if has_choices and not choice_valid:
@@ -220,7 +220,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg,
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
elif "does not exist" in error_msg:
@@ -260,7 +260,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
"Authorization": f"Bearer {api_key}"
}
if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
if llm_kwargs['llm_model'].startswith('azure-'):
headers.update({"api-key": api_key})
if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
@@ -294,7 +294,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
payload = {
"model": model,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,


@@ -73,12 +73,12 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
result = ''
while True:
try: chunk = next(stream_response).decode()
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
if len(chunk)==0: continue
if not chunk.startswith('data:'):
error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
if "reduce the length" in error_msg:
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
@@ -89,14 +89,14 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
delta = json_data["delta"]
if len(delta) == 0: break
if "role" in delta: continue
if "content" in delta:
result += delta["content"]
if not console_slience: print(delta["content"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: observe_window[0] += delta["content"]
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
else: raise RuntimeError("意外Json结构"+delta)
@@ -132,7 +132,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
history.append(inputs); history.append("")
retry = 0
@@ -151,7 +151,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
@@ -165,12 +165,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="非Openai官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# print(chunk.decode()[6:])
if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
chunk_decoded = chunk.decode()
@@ -203,7 +203,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
# history = [] # 清除历史
@@ -264,7 +264,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
payload = {
"model": llm_kwargs['llm_model'].strip('api2d-'),
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,


@@ -9,15 +9,15 @@
具备多线程调用能力的函数
2. predict_no_ui_long_connection支持多线程
"""
- import os
- import json
- import time
- import gradio as gr
import logging
+ import os
+ import time
import traceback
+ import json
import requests
- import importlib
+ from toolbox import get_conf, update_ui, trimmed_format_exc, encode_image, every_image_file_in_path, log_chat
+ picture_system_prompt = "\n当回复图像时,必须说明正在回复哪张图像。所有图像仅在最后一个问题中提供,即使它们在历史记录中被提及。请使用'这是第X张图像:'的格式来指明您正在描述的是哪张图像。"
+ Claude_3_Models = ["claude-3-haiku-20240307", "claude-3-sonnet-20240229", "claude-3-opus-20240229"]
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
@@ -39,6 +39,34 @@ def get_full_error(chunk, stream_response):
break
return chunk
def decode_chunk(chunk):
# 提前读取一些信息(用于判断异常)
chunk_decoded = chunk.decode()
chunkjson = None
is_last_chunk = False
need_to_pass = False
if chunk_decoded.startswith('data:'):
try:
chunkjson = json.loads(chunk_decoded[6:])
except:
need_to_pass = True
pass
elif chunk_decoded.startswith('event:'):
try:
event_type = chunk_decoded.split(':')[1].strip()
if event_type == 'content_block_stop' or event_type == 'message_stop':
is_last_chunk = True
elif event_type == 'content_block_start' or event_type == 'message_start':
need_to_pass = True
pass
except:
need_to_pass = True
pass
else:
need_to_pass = True
pass
return need_to_pass, chunkjson, is_last_chunk
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
@@ -54,50 +82,67 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
observe_window = None
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
- from anthropic import Anthropic
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
- prompt = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
- retry = 0
if len(ANTHROPIC_API_KEY) == 0:
raise RuntimeError("没有设置ANTHROPIC_API_KEY选项")
+ if inputs == "": inputs = "空空如也的输入栏"
+ headers, message = generate_payload(inputs, llm_kwargs, history, sys_prompt, image_paths=None)
+ retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
- anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
- # endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- # with ProxyNetworkActivate()
- stream = anthropic.completions.create(
- prompt=prompt,
- max_tokens_to_sample=4096, # The maximum number of tokens to generate before stopping.
- model=llm_kwargs['llm_model'],
- stream=True,
- temperature = llm_kwargs['temperature']
- )
- break
- except Exception as e:
+ endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+ response = requests.post(endpoint, headers=headers, json=message,
+ proxies=proxies, stream=True, timeout=TIMEOUT_SECONDS);break
+ except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
+ stream_response = response.iter_lines()
result = ''
- try:
- for completion in stream:
- result += completion.completion
- if not console_slience: print(completion.completion, end='')
- if observe_window is not None:
- # 观测窗,把已经获取的数据显示出去
- if len(observe_window) >= 1: observe_window[0] += completion.completion
- # 看门狗,如果超过期限没有喂狗,则终止
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("用户取消了程序。")
- except Exception as e:
- traceback.print_exc()
+ while True:
+ try: chunk = next(stream_response)
+ except StopIteration:
+ break
+ except requests.exceptions.ConnectionError:
+ chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
+ need_to_pass, chunkjson, is_last_chunk = decode_chunk(chunk)
+ if chunk:
+ try:
+ if need_to_pass:
+ pass
+ elif is_last_chunk:
+ # logging.info(f'[response] {result}')
+ break
+ else:
+ if chunkjson and chunkjson['type'] == 'content_block_delta':
+ result += chunkjson['delta']['text']
+ print(chunkjson['delta']['text'], end='')
+ if observe_window is not None:
+ # 观测窗,把已经获取的数据显示出去
+ if len(observe_window) >= 1:
+ observe_window[0] += chunkjson['delta']['text']
+ # 看门狗,如果超过期限没有喂狗,则终止
+ if len(observe_window) >= 2:
+ if (time.time()-observe_window[1]) > watch_dog_patience:
+ raise RuntimeError("用户取消了程序。")
+ except Exception as e:
+ chunk = get_full_error(chunk, stream_response)
+ chunk_decoded = chunk.decode()
+ error_msg = chunk_decoded
+ print(error_msg)
+ raise RuntimeError("Json解析不合常规")
return result
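As a reading aid (not part of the diff), a small sketch of how the new decode_chunk used above classifies Anthropic-style SSE lines; the sample lines are hypothetical, and the helper body mirrors the one shown earlier in this file:

    import json

    def decode_chunk(chunk):
        # same classification logic as the new helper shown above
        chunk_decoded = chunk.decode()
        chunkjson, is_last_chunk, need_to_pass = None, False, False
        if chunk_decoded.startswith('data:'):
            try: chunkjson = json.loads(chunk_decoded[6:])
            except: need_to_pass = True
        elif chunk_decoded.startswith('event:'):
            event_type = chunk_decoded.split(':')[1].strip()
            if event_type in ('content_block_stop', 'message_stop'): is_last_chunk = True
            elif event_type in ('content_block_start', 'message_start'): need_to_pass = True
        else:
            need_to_pass = True
        return need_to_pass, chunkjson, is_last_chunk

    # hypothetical SSE lines, for illustration only
    print(decode_chunk(b'event: message_start'))                                        # (True, None, False)  -> ignored
    print(decode_chunk(b'data: {"type":"content_block_delta","delta":{"text":"Hi"}}'))  # (False, {...}, False) -> text appended
    print(decode_chunk(b'event: message_stop'))                                         # (False, None, True)   -> stream ends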
def make_media_input(history,inputs,image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
@@ -109,23 +154,33 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
- from anthropic import Anthropic
+ if inputs == "": inputs = "空空如也的输入栏"
if len(ANTHROPIC_API_KEY) == 0:
chatbot.append((inputs, "没有设置ANTHROPIC_API_KEY"))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
return
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
- raw_input = inputs
- logging.info(f'[raw_input] {raw_input}')
- chatbot.append((inputs, ""))
- yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
+ have_recent_file, image_paths = every_image_file_in_path(chatbot)
+ if len(image_paths) > 20:
+ chatbot.append((inputs, "图片数量超过api上限(20张)"))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")
+ return
+ if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and have_recent_file:
+ if inputs == "" or inputs == "空空如也的输入栏": inputs = "请描述给出的图片"
+ system_prompt += picture_system_prompt # 由于没有单独的参数保存包含图片的历史,所以只能通过提示词对第几张图片进行定位
+ chatbot.append((make_media_input(history,inputs, image_paths), ""))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
+ else:
+ chatbot.append((inputs, ""))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
try:
- prompt = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
+ headers, message = generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths)
except RuntimeError as e:
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
@@ -138,91 +193,117 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
# make a POST request to the API endpoint, stream=True
from .bridge_all import model_info
- anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
- # endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- # with ProxyNetworkActivate()
- stream = anthropic.completions.create(
- prompt=prompt,
- max_tokens_to_sample=4096, # The maximum number of tokens to generate before stopping.
- model=llm_kwargs['llm_model'],
- stream=True,
- temperature = llm_kwargs['temperature']
- )
- break
- except:
+ endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+ response = requests.post(endpoint, headers=headers, json=message,
+ proxies=proxies, stream=True, timeout=TIMEOUT_SECONDS);break
+ except requests.exceptions.ReadTimeout as e:
retry += 1
- chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
- retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
- yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
+ traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
+ if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
+ stream_response = response.iter_lines()
gpt_replying_buffer = ""
- for completion in stream:
- try:
- gpt_replying_buffer = gpt_replying_buffer + completion.completion
- history[-1] = gpt_replying_buffer
- chatbot[-1] = (history[-2], history[-1])
- yield from update_ui(chatbot=chatbot, history=history, msg='正常') # 刷新界面
- except Exception as e:
- from toolbox import regular_txt_to_markdown
- tb_str = '```\n' + trimmed_format_exc() + '```'
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str}")
- yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + tb_str) # 刷新界面
- return
+ while True:
+ try: chunk = next(stream_response)
+ except StopIteration:
+ break
+ except requests.exceptions.ConnectionError:
+ chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
+ need_to_pass, chunkjson, is_last_chunk = decode_chunk(chunk)
+ if chunk:
+ try:
+ if need_to_pass:
+ pass
+ elif is_last_chunk:
+ log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
+ # logging.info(f'[response] {gpt_replying_buffer}')
+ break
+ else:
+ if chunkjson and chunkjson['type'] == 'content_block_delta':
+ gpt_replying_buffer += chunkjson['delta']['text']
+ history[-1] = gpt_replying_buffer
+ chatbot[-1] = (history[-2], history[-1])
+ yield from update_ui(chatbot=chatbot, history=history, msg='正常') # 刷新界面
+ except Exception as e:
+ chunk = get_full_error(chunk, stream_response)
+ chunk_decoded = chunk.decode()
+ error_msg = chunk_decoded
+ print(error_msg)
+ raise RuntimeError("Json解析不合常规")
def multiple_picture_types(image_paths):
"""
根据图片类型返回image/jpeg, image/png, image/gif, image/webp,无法判断则返回image/jpeg
"""
for image_path in image_paths:
if image_path.endswith('.jpeg') or image_path.endswith('.jpg'):
return 'image/jpeg'
elif image_path.endswith('.png'):
return 'image/png'
elif image_path.endswith('.gif'):
return 'image/gif'
elif image_path.endswith('.webp'):
return 'image/webp'
return 'image/jpeg'
- # https://github.com/jtsang4/claude-to-chatgpt/blob/main/claude_to_chatgpt/adapter.py
- def convert_messages_to_prompt(messages):
- prompt = ""
- role_map = {
- "system": "Human",
- "user": "Human",
- "assistant": "Assistant",
- }
- for message in messages:
- role = message["role"]
- content = message["content"]
- transformed_role = role_map[role]
- prompt += f"\n\n{transformed_role.capitalize()}: {content}"
- prompt += "\n\nAssistant: "
- return prompt
- def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
+ def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
- from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
conversation_cnt = len(history) // 2
- messages = [{"role": "system", "content": system_prompt}]
+ messages = []
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
- what_i_have_asked["content"] = history[index]
+ what_i_have_asked["content"] = [{"type": "text", "text": history[index]}]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
- what_gpt_answer["content"] = history[index+1]
+ what_gpt_answer["content"] = [{"type": "text", "text": history[index+1]}]
- if what_i_have_asked["content"] != "":
- if what_gpt_answer["content"] == "": continue
- if what_gpt_answer["content"] == timeout_bot_msg: continue
+ if what_i_have_asked["content"][0]["text"] != "":
+ if what_i_have_asked["content"][0]["text"] == "": continue
+ if what_i_have_asked["content"][0]["text"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
- messages[-1]['content'] = what_gpt_answer['content']
+ messages[-1]['content'][0]['text'] = what_gpt_answer['content'][0]['text']
- what_i_ask_now = {}
- what_i_ask_now["role"] = "user"
- what_i_ask_now["content"] = inputs
+ if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and image_paths:
+ what_i_ask_now = {}
+ what_i_ask_now["role"] = "user"
+ what_i_ask_now["content"] = []
+ for image_path in image_paths:
+ what_i_ask_now["content"].append({
+ "type": "image",
+ "source": {
+ "type": "base64",
+ "media_type": multiple_picture_types(image_paths),
+ "data": encode_image(image_path),
+ }
+ })
+ what_i_ask_now["content"].append({"type": "text", "text": inputs})
+ else:
+ what_i_ask_now = {}
+ what_i_ask_now["role"] = "user"
+ what_i_ask_now["content"] = [{"type": "text", "text": inputs}]
messages.append(what_i_ask_now)
- prompt = convert_messages_to_prompt(messages)
- return prompt
+ # 开始整理headers与message
+ headers = {
+ 'x-api-key': ANTHROPIC_API_KEY,
+ 'anthropic-version': '2023-06-01',
+ 'content-type': 'application/json'
+ }
+ payload = {
+ 'model': llm_kwargs['llm_model'],
+ 'max_tokens': 4096,
+ 'messages': messages,
+ 'temperature': llm_kwargs['temperature'],
+ 'stream': True,
+ 'system': system_prompt
+ }
+ return headers, payload
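For orientation (not part of the diff), a hedged sketch of the request shape that the new generate_payload produces. The endpoint URL below is the public Anthropic Messages API address and is an assumption of this sketch; in the project the endpoint is read from model_info/config rather than hard-coded, and the bridge streams the response:

    import requests

    ANTHROPIC_API_KEY = "sk-ant-..."   # placeholder key
    headers = {
        'x-api-key': ANTHROPIC_API_KEY,
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json',
    }
    payload = {
        'model': 'claude-3-haiku-20240307',
        'max_tokens': 4096,
        'messages': [{"role": "user", "content": [{"type": "text", "text": "Hello"}]}],
        'temperature': 1.0,
        'stream': False,   # the bridge uses stream=True; False keeps this sketch short
        'system': "You are a helpful assistant.",
    }
    # Assumed endpoint; the bridge takes it from model_info[...]['endpoint'] instead.
    resp = requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=payload, timeout=60)
    print(resp.json().get("content", resp.text))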


@@ -0,0 +1,328 @@
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
"""
该文件中主要包含三个函数
不具备多线程能力的函数:
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
2. predict_no_ui_long_connection支持多线程
"""
import json
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
import random
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
from toolbox import ChatBotWithCookies
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
def get_full_error(chunk, stream_response):
"""
获取完整的从Cohere返回的报错
"""
while True:
try:
chunk += next(stream_response)
except:
break
return chunk
def decode_chunk(chunk):
# 提前读取一些信息 (用于判断异常)
chunk_decoded = chunk.decode()
chunkjson = None
has_choices = False
choice_valid = False
has_content = False
has_role = False
try:
chunkjson = json.loads(chunk_decoded)
has_choices = 'choices' in chunkjson
if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except:
pass
return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
from functools import lru_cache
@lru_cache(maxsize=32)
def verify_endpoint(endpoint):
"""
检查endpoint是否可用
"""
if "你亲手写的api名称" in endpoint:
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
return endpoint
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False):
"""
发送,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
是本次问询的输入
sys_prompt:
系统静默prompt
llm_kwargs
内部调优参数
history
是之前的对话列表
observe_window = None
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
stream_response = response.iter_lines()
result = ''
json_data = None
while True:
try: chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if chunkjson['event_type'] == 'stream-start': continue
if chunkjson['event_type'] == 'text-generation':
result += chunkjson["text"]
if not console_slience: print(chunkjson["text"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
observe_window[0] += chunkjson["text"]
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
if chunkjson['event_type'] == 'stream-end': break
return result
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
发送至chatGPT,流式获取输出。
用于基础的对话功能。
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
# if is_any_api_key(inputs):
# chatbot._cookies['api_key'] = inputs
# chatbot.append(("输入已识别为Cohere的api_key", what_keys(inputs)))
# yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
# return
# elif not is_any_api_key(chatbot._cookies['api_key']):
# chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案在config.py中配置。"))
# yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面
# return
user_input = inputs
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
raw_input = inputs
# logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
# check mis-behavior
if is_the_upload_folder(user_input):
chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
time.sleep(2)
try:
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
except RuntimeError as e:
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
# 检查endpoint是否合法
try:
from .bridge_all import model_info
endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
except:
tb_str = '```\n' + trimmed_format_exc() + '```'
chatbot[-1] = (inputs, tb_str)
yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
return
history.append(inputs); history.append("")
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
while True:
try:
chunk = next(stream_response)
except StopIteration:
# 非Cohere官方接口的出现这样的报错,Cohere和API2D不会走这里
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
# 其他情况,直接返回报错
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="非Cohere官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if chunkjson:
try:
if chunkjson['event_type'] == 'stream-start':
continue
if chunkjson['event_type'] == 'text-generation':
gpt_replying_buffer = gpt_replying_buffer + chunkjson["text"]
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
if chunkjson['event_type'] == 'stream-end':
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
break
except Exception as e:
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
print(error_msg)
return
def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
from .bridge_all import model_info
Cohere_website = ' 请登录Cohere查看详情 https://platform.Cohere.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
elif "does not exist" in error_msg:
chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
elif "Incorrect API key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. Cohere以提供了不正确的API_KEY为由, 拒绝服务. " + Cohere_website)
elif "exceeded your current quota" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. Cohere以账户额度不足为由, 拒绝服务." + Cohere_website)
elif "account is not active" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
elif "associated with a deactivated account" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
elif "API key has been deactivated" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
elif "bad forward key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
elif "Not enough point" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
else:
from toolbox import regular_txt_to_markdown
tb_str = '```\n' + trimmed_format_exc() + '```'
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
return chatbot, history
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
# if not is_any_api_key(llm_kwargs['api_key']):
# raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案在config.py中配置。")
api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}"
}
if API_ORG.startswith('org-'): headers.update({"Cohere-Organization": API_ORG})
if llm_kwargs['llm_model'].startswith('azure-'):
headers.update({"api-key": api_key})
if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
headers.update({"api-key": azure_api_key_unshared})
conversation_cnt = len(history) // 2
messages = [{"role": "SYSTEM", "message": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "USER"
what_i_have_asked["message"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "CHATBOT"
what_gpt_answer["message"] = history[index+1]
if what_i_have_asked["message"] != "":
if what_gpt_answer["message"] == "": continue
if what_gpt_answer["message"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['message'] = what_gpt_answer['message']
model = llm_kwargs['llm_model']
if model.startswith('cohere-'): model = model[len('cohere-'):]
payload = {
"model": model,
"message": inputs,
"chat_history": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,
"stream": stream,
"presence_penalty": 0,
"frequency_penalty": 0,
}
return headers,payload
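A small local sketch (not part of the diff) of how the streaming loop above consumes Cohere chat events; the sample events are hypothetical, but the event_type / text fields match what decode_chunk and predict parse:

    # hypothetical event stream, for illustration only
    events = [
        {"event_type": "stream-start"},
        {"event_type": "text-generation", "text": "Hello"},
        {"event_type": "text-generation", "text": " world"},
        {"event_type": "stream-end"},
    ]

    gpt_replying_buffer = ""
    for chunkjson in events:
        if chunkjson['event_type'] == 'stream-start':
            continue                                   # handshake frame, nothing to show
        if chunkjson['event_type'] == 'text-generation':
            gpt_replying_buffer += chunkjson["text"]   # accumulate the partial reply
        if chunkjson['event_type'] == 'stream-end':
            break                                      # final frame, reply is complete
    print(gpt_replying_buffer)   # -> "Hello world"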


@@ -88,7 +88,7 @@ class GetCoderLMHandle(LocalLLMHandle):
temperature = kwargs['temperature']
history = kwargs['history']
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
history.append({ 'role': 'user', 'content': query})
messages = history
@@ -97,14 +97,14 @@ class GetCoderLMHandle(LocalLLMHandle):
inputs = inputs[:, -max_length:]
inputs = inputs.to(self._model.device)
generation_kwargs = dict(
inputs=inputs,
max_new_tokens=max_length,
do_sample=False,
top_p=top_p,
streamer = self._streamer,
top_k=50,
temperature=temperature,
num_return_sequences=1,
eos_token_id=32021,
)
thread = Thread(target=self._model.generate, kwargs=generation_kwargs, daemon=True)
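For context (not part of the diff), a hedged sketch of the generate-in-a-thread pattern used above: model.generate runs in a background thread while a TextIteratorStreamer yields tokens to the caller. The model id and prompt below are placeholders, and the eos_token_id=32021 shown in the diff is specific to the DeepSeek-Coder tokenizer:

    from threading import Thread
    from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

    model_name = "deepseek-ai/deepseek-coder-6.7b-instruct"   # placeholder model id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    inputs = tokenizer("write a quicksort in python", return_tensors="pt").input_ids.to(model.device)
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    generation_kwargs = dict(inputs=inputs, max_new_tokens=512, do_sample=False,
                             top_p=0.8, top_k=50, temperature=0.7,
                             num_return_sequences=1, streamer=streamer)

    # generate() blocks, so it runs in a daemon thread while we consume the streamer
    Thread(target=model.generate, kwargs=generation_kwargs, daemon=True).start()
    for new_text in streamer:
        print(new_text, end="", flush=True)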


@@ -7,6 +7,7 @@ import re
import os
import time
from request_llms.com_google import GoogleChatInit
+ from toolbox import ChatBotWithCookies
from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc
proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
@@ -20,7 +21,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if get_conf("GEMINI_API_KEY") == "":
raise ValueError(f"请配置 GEMINI_API_KEY。")
- genai = GoogleChatInit()
+ genai = GoogleChatInit(llm_kwargs)
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
gpt_replying_buffer = ''
stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
@@ -44,7 +45,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
return gpt_replying_buffer
- def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+ history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
# 检查API_KEY
if get_conf("GEMINI_API_KEY") == "":
yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
@@ -61,7 +63,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
return
def make_media_input(inputs, image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
@@ -70,7 +72,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
- genai = GoogleChatInit()
+ genai = GoogleChatInit(llm_kwargs)
retry = 0
while True:
try:


@@ -82,7 +82,7 @@ class GetInternlmHandle(LocalLLMHandle):
history = kwargs['history']
real_prompt = combine_history(prompt, history)
return model, tokenizer, real_prompt, max_length, top_p, temperature
model, tokenizer, prompt, max_length, top_p, temperature = adaptor()
prefix_allowed_tokens_fn = None
logits_processor = None
@@ -183,7 +183,7 @@ class GetInternlmHandle(LocalLLMHandle):
outputs, model_kwargs, is_encoder_decoder=False
)
unfinished_sequences = unfinished_sequences.mul((min(next_tokens != i for i in eos_token_id)).long())
output_token_ids = input_ids[0].cpu().tolist()
output_token_ids = output_token_ids[input_length:]
for each_eos_token_id in eos_token_id:
@@ -196,7 +196,7 @@ class GetInternlmHandle(LocalLLMHandle):
if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
return
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------


@@ -1,10 +1,10 @@
- from transformers import AutoModel, AutoTokenizer
import time
import threading
import importlib
from toolbox import update_ui, get_conf
from multiprocessing import Process, Pipe
+ from transformers import AutoModel, AutoTokenizer
load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存CPU或显存GPU,也许会导致低配计算机卡死 ……"
@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,11 +102,12 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global llama_glm_handle
llama_glm_handle = None
#################################################################################
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -115,7 +116,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if llama_glm_handle is None:
llama_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + llama_glm_handle.info
if not llama_glm_handle.success:
error = llama_glm_handle.info
llama_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +131,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +150,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
llama_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + llama_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not llama_glm_handle.success:
llama_glm_handle = None
return
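
The annotated signature above keeps the observe_window convention shared by all of these bridge modules: observe_window[0] carries the partial response for another thread to display, and observe_window[1] is a timestamp the caller keeps refreshing to feed the watchdog. A small self-contained sketch of that contract (names and timings are illustrative only):

import time, threading

def worker(observe_window, watch_dog_patience=5):
    response = ""
    for token in ("thinking", " ...", " done"):
        time.sleep(0.2)
        response += token
        if len(observe_window) >= 1:
            observe_window[0] = response            # expose partial text to the caller
        if len(observe_window) >= 2:
            if time.time() - observe_window[1] > watch_dog_patience:
                raise RuntimeError("程序终止。")     # caller stopped feeding the watchdog
    return response

observe_window = ["", time.time()]
t = threading.Thread(target=worker, args=(observe_window,))
t.start()
while t.is_alive():
    observe_window[1] = time.time()   # feed the watchdog while we still want the answer
    time.sleep(0.1)
print(observe_window[0])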


@@ -1,10 +1,10 @@
- from transformers import AutoModel, AutoTokenizer
import time
import threading
import importlib
from toolbox import update_ui, get_conf
from multiprocessing import Process, Pipe
+ from transformers import AutoModel, AutoTokenizer
load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存CPU或显存GPU,也许会导致低配计算机卡死 ……"
@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,11 +102,12 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global pangu_glm_handle
pangu_glm_handle = None
#################################################################################
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -115,7 +116,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if pangu_glm_handle is None:
pangu_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + pangu_glm_handle.info
if not pangu_glm_handle.success:
error = pangu_glm_handle.info
pangu_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +131,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in pangu_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +150,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
pangu_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + pangu_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not pangu_glm_handle.success:
pangu_glm_handle = None
return


@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,11 +102,12 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global rwkv_glm_handle
rwkv_glm_handle = None
#################################################################################
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -115,7 +116,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if rwkv_glm_handle is None:
rwkv_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + rwkv_glm_handle.info
if not rwkv_glm_handle.success:
error = rwkv_glm_handle.info
rwkv_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +131,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in rwkv_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +150,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
rwkv_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + rwkv_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not rwkv_glm_handle.success:
rwkv_glm_handle = None
return


@@ -48,7 +48,7 @@ class GetLlamaHandle(LocalLLMHandle):
history = kwargs['history']
console_slience = kwargs.get('console_slience', True)
return query, max_length, top_p, temperature, history, console_slience
def convert_messages_to_prompt(query, history):
prompt = ""
for a, b in history:
@@ -56,7 +56,7 @@ class GetLlamaHandle(LocalLLMHandle):
prompt += "\n{b}" + b
prompt += f"\n[INST]{query}[/INST]"
return prompt
query, max_length, top_p, temperature, history, console_slience = adaptor(kwargs)
prompt = convert_messages_to_prompt(query, history)
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
@@ -70,13 +70,13 @@ class GetLlamaHandle(LocalLLMHandle):
thread = Thread(target=self._model.generate, kwargs=generation_kwargs)
thread.start()
generated_text = ""
for new_text in streamer:
generated_text += new_text
if not console_slience: print(new_text, end='')
yield generated_text.lstrip(prompt_tk_back).rstrip("</s>")
if not console_slience: print()
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
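
For context, the loop above consumes tokens from a streamer while generate() runs in a worker thread. A minimal sketch of the same pattern with Hugging Face transformers' TextIteratorStreamer; the tiny gpt2 checkpoint is only an example and is not what the repository loads:

from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "gpt2"   # example model for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, ", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=32)

# generate() blocks, so it runs in a background thread while we consume the streamer here
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
generated_text = ""
for new_text in streamer:
    generated_text += new_text
    print(new_text, end="", flush=True)
thread.join()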


@@ -0,0 +1,197 @@
# encoding: utf-8
# @Time : 2024/3/3
# @Author : Spike
# @Descr :
import json
import os
import time
import logging
from toolbox import get_conf, update_ui, log_chat
from toolbox import ChatBotWithCookies
import requests
class MoonShotInit:
def __init__(self):
self.llm_model = None
self.url = 'https://api.moonshot.cn/v1/chat/completions'
self.api_key = get_conf('MOONSHOT_API_KEY')
def __converter_file(self, user_input: str):
what_ask = []
for f in user_input.splitlines():
if os.path.exists(f):
files = []
if os.path.isdir(f):
file_list = os.listdir(f)
files.extend([os.path.join(f, file) for file in file_list])
else:
files.append(f)
for file in files:
if file.split('.')[-1] in ['pdf']:
with open(file, 'r') as fp:
from crazy_functions.crazy_utils import read_and_clean_pdf_text
file_content, _ = read_and_clean_pdf_text(fp)
what_ask.append({"role": "system", "content": file_content})
return what_ask
def __converter_user(self, user_input: str):
what_i_ask_now = {"role": "user", "content": user_input}
return what_i_ask_now
def __conversation_history(self, history):
conversation_cnt = len(history) // 2
messages = []
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = {
"role": "user",
"content": str(history[index])
}
what_gpt_answer = {
"role": "assistant",
"content": str(history[index + 1])
}
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
return messages
def _analysis_content(self, chuck):
chunk_decoded = chuck.decode("utf-8")
chunk_json = {}
content = ""
try:
chunk_json = json.loads(chunk_decoded[6:])
content = chunk_json['choices'][0]["delta"].get("content", "")
except:
pass
return chunk_decoded, chunk_json, content
def generate_payload(self, inputs, llm_kwargs, history, system_prompt, stream):
self.llm_model = llm_kwargs['llm_model']
llm_kwargs.update({'use-key': self.api_key})
messages = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
messages.extend(self.__converter_file(inputs))
for i in history[0::2]: # 历史文件继续上传
messages.extend(self.__converter_file(i))
messages.extend(self.__conversation_history(history))
messages.append(self.__converter_user(inputs))
header = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
}
payload = {
"model": self.llm_model,
"messages": messages,
"temperature": llm_kwargs.get('temperature', 0.3), # 1.0,
"top_p": llm_kwargs.get('top_p', 1.0), # 1.0,
"n": llm_kwargs.get('n_choices', 1),
"stream": stream
}
return payload, header
def generate_messages(self, inputs, llm_kwargs, history, system_prompt, stream):
payload, headers = self.generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
response = requests.post(self.url, headers=headers, json=payload, stream=stream)
chunk_content = ""
gpt_bro_result = ""
for chuck in response.iter_lines():
chunk_decoded, check_json, content = self._analysis_content(chuck)
chunk_content += chunk_decoded
if content:
gpt_bro_result += content
yield content, gpt_bro_result, ''
else:
error_msg = msg_handle_error(llm_kwargs, chunk_decoded)
if error_msg:
yield error_msg, gpt_bro_result, error_msg
break
def msg_handle_error(llm_kwargs, chunk_decoded):
use_ket = llm_kwargs.get('use-key', '')
api_key_encryption = use_ket[:8] + '****' + use_ket[-5:]
openai_website = f' 请登录OpenAI查看详情 https://platform.openai.com/signup api-key: `{api_key_encryption}`'
error_msg = ''
if "does not exist" in chunk_decoded:
error_msg = f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格."
elif "Incorrect API key" in chunk_decoded:
error_msg = f"[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务." + openai_website
elif "exceeded your current quota" in chunk_decoded:
error_msg = "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website
elif "account is not active" in chunk_decoded:
error_msg = "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website
elif "associated with a deactivated account" in chunk_decoded:
error_msg = "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website
elif "API key has been deactivated" in chunk_decoded:
error_msg = "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website
elif "bad forward key" in chunk_decoded:
error_msg = "[Local Message] Bad forward key. API2D账户额度不足."
elif "Not enough point" in chunk_decoded:
error_msg = "[Local Message] Not enough point. API2D账户点数不足."
elif 'error' in str(chunk_decoded).lower():
try:
error_msg = json.dumps(json.loads(chunk_decoded[:6]), indent=4, ensure_ascii=False)
except:
error_msg = chunk_decoded
return error_msg
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
chatbot.append([inputs, ""])
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
gpt_bro_init = MoonShotInit()
history.extend([inputs, ''])
stream_response = gpt_bro_init.generate_messages(inputs, llm_kwargs, history, system_prompt, stream)
for content, gpt_bro_result, error_bro_meg in stream_response:
chatbot[-1] = [inputs, gpt_bro_result]
history[-1] = gpt_bro_result
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if error_bro_meg:
chatbot[-1] = [inputs, error_bro_meg]
history = history[:-2]
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
break
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_bro_result)
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
console_slience=False):
gpt_bro_init = MoonShotInit()
watch_dog_patience = 60 # 看门狗的耐心, 设置10秒即可
stream_response = gpt_bro_init.generate_messages(inputs, llm_kwargs, history, sys_prompt, True)
moonshot_bro_result = ''
for content, moonshot_bro_result, error_bro_meg in stream_response:
moonshot_bro_result = moonshot_bro_result
if error_bro_meg:
if len(observe_window) >= 3:
observe_window[2] = error_bro_meg
return f'{moonshot_bro_result} 对话错误'
# 观测窗
if len(observe_window) >= 1:
observe_window[0] = moonshot_bro_result
if len(observe_window) >= 2:
if (time.time() - observe_window[1]) > watch_dog_patience:
observe_window[2] = "请求超时,程序终止。"
raise RuntimeError(f"{moonshot_bro_result} 程序终止。")
return moonshot_bro_result
if __name__ == '__main__':
moon_ai = MoonShotInit()
for g in moon_ai.generate_messages('hello', {'llm_model': 'moonshot-v1-8k'},
[], '', True):
print(g)
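
For reference, _analysis_content above strips server-sent-events framing before parsing: each streamed line looks like data: {...}, and the slice [6:] drops the six-character "data: " prefix. A tiny standalone sketch of that step (the sample payload is made up):

import json

chunk = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'
chunk_decoded = chunk.decode("utf-8")
content = ""
try:
    chunk_json = json.loads(chunk_decoded[6:])   # drop the leading "data: "
    content = chunk_json["choices"][0]["delta"].get("content", "")
except (json.JSONDecodeError, KeyError, IndexError):
    pass                                         # keep-alive lines are simply ignored
print(content)   # -> Hello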


@@ -18,7 +18,7 @@ class GetGLMHandle(Process):
if self.check_dependency():
self.start()
self.threadLock = threading.Lock()
def check_dependency(self): # 主进程执行
try:
import datasets, os
@@ -54,9 +54,9 @@ class GetGLMHandle(Process):
from models.tokenization_moss import MossTokenizer
parser = argparse.ArgumentParser()
parser.add_argument("--model_name", default="fnlp/moss-moon-003-sft-int4",
choices=["fnlp/moss-moon-003-sft",
"fnlp/moss-moon-003-sft-int8",
"fnlp/moss-moon-003-sft-int4"], type=str)
parser.add_argument("--gpu", default="0", type=str)
args = parser.parse_args()
@@ -76,7 +76,7 @@ class GetGLMHandle(Process):
config = MossConfig.from_pretrained(model_path)
self.tokenizer = MossTokenizer.from_pretrained(model_path)
if num_gpus > 1:
print("Waiting for all devices to be ready, it may take a few minutes...")
with init_empty_weights():
raw_model = MossForCausalLM._from_config(config, torch_dtype=torch.float16)
@@ -135,15 +135,15 @@ class GetGLMHandle(Process):
inputs = self.tokenizer(self.prompt, return_tensors="pt")
with torch.no_grad():
outputs = self.model.generate(
inputs.input_ids.cuda(),
attention_mask=inputs.attention_mask.cuda(),
max_length=2048,
do_sample=True,
top_k=40,
top_p=0.8,
temperature=0.7,
repetition_penalty=1.02,
num_return_sequences=1,
eos_token_id=106068,
pad_token_id=self.tokenizer.pad_token_id)
response = self.tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
@@ -167,11 +167,12 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global moss_handle
moss_handle = None
#################################################################################
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -180,7 +181,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if moss_handle is None:
moss_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + moss_handle.info
if not moss_handle.success:
error = moss_handle.info
moss_handle = None
raise RuntimeError(error)
@@ -194,7 +195,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
response = ""
for response in moss_handle.stream_chat(query=inputs, history=history_feedin, sys_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -213,7 +214,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
moss_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + moss_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not moss_handle.success:
moss_handle = None
return
else:


@@ -117,7 +117,8 @@ def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
raise RuntimeError(dec['error_msg'])
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -160,3 +161,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
yield from update_ui(chatbot=chatbot, history=history, msg="异常") # 刷新界面
return
+ except RuntimeError as e:
+     tb_str = '```\n' + trimmed_format_exc() + '```'
+     chatbot[-1] = (chatbot[-1][0], tb_str)
+     yield from update_ui(chatbot=chatbot, history=history, msg="异常") # 刷新界面
+     return
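
The new except branch above renders the traceback as a fenced block so it displays verbatim in the chat area. A rough standalone equivalent using only the standard library (the repo's trimmed_format_exc helper is assumed to produce a shortened traceback string):

import traceback

def format_exc_for_chat() -> str:
    # wrap the traceback in a fenced block so a markdown chat view shows it verbatim
    return "```\n" + traceback.format_exc() + "```"

try:
    raise RuntimeError("quota exceeded")
except RuntimeError:
    print(format_exc_for_chat())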


@@ -5,7 +5,8 @@ from toolbox import check_packages, report_exception
model_name = 'Qwen'
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -47,6 +48,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+ chatbot[-1] = (inputs, "")
+ yield from update_ui(chatbot=chatbot, history=history)
# 开始接收回复
from .com_qwenapi import QwenRequestInstance


@@ -45,7 +45,7 @@ class GetQwenLMHandle(LocalLLMHandle):
for response in self._model.chat_stream(self._tokenizer, query, history=history):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行


@@ -9,7 +9,8 @@ def validate_key():
if YUNQUE_SECRET_KEY == '': return False
return True
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
⭐ 多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -13,7 +13,8 @@ def validate_key():
return False
return True
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -76,7 +76,7 @@ async def run(context, max_token, temperature, top_p, addr, port):
pass
elif content["msg"] in ["process_generating", "process_completed"]:
yield content["output"]["data"][0]
# You can search for your desired end indicator and
# stop generation by closing the websocket here
if (content["msg"] == "process_completed"):
break
@@ -117,12 +117,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
async def get_result(mutable):
# "tgui:galactica-1.3b@localhost:7860"
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
top_p=llm_kwargs['top_p'], addr=addr, port=port):
print(response[len(mutable[0]):])
mutable[0] = response
if (time.time() - mutable[1]) > 3:
print('exit when no listener')
break
asyncio.run(get_result(mutable))
@@ -154,12 +154,12 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
def run_coorotine(observe_window):
async def get_result(observe_window):
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
top_p=llm_kwargs['top_p'], addr=addr, port=port):
print(response[len(observe_window[0]):])
observe_window[0] = response
if (time.time() - observe_window[1]) > 5:
print('exit when no listener')
break
asyncio.run(get_result(observe_window))


@@ -0,0 +1,283 @@
# 借鉴自同目录下的bridge_chatgpt.py
"""
该文件中主要包含三个函数
不具备多线程能力的函数:
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
2. predict_no_ui_long_connection支持多线程
"""
import json
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
import random
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, trimmed_format_exc, is_the_upload_folder, read_one_api_model_name
proxies, TIMEOUT_SECONDS, MAX_RETRY, YIMODEL_API_KEY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'YIMODEL_API_KEY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
def get_full_error(chunk, stream_response):
"""
获取完整的从Openai返回的报错
"""
while True:
try:
chunk += next(stream_response)
except:
break
return chunk
def decode_chunk(chunk):
# 提前读取一些信息(用于判断异常)
chunk_decoded = chunk.decode()
chunkjson = None
is_last_chunk = False
try:
chunkjson = json.loads(chunk_decoded[6:])
is_last_chunk = chunkjson.get("lastOne", False)
except:
pass
return chunk_decoded, chunkjson, is_last_chunk
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
是本次问询的输入
sys_prompt:
系统静默prompt
llm_kwargs
chatGPT的内部调优参数
history
是之前的对话列表
observe_window = None
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
if inputs == "": inputs = "空空如也的输入栏"
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
stream_response = response.iter_lines()
result = ''
is_head_of_the_stream = True
while True:
try: chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if is_last_chunk:
# 判定为数据流的结束,gpt_replying_buffer也写完了
logging.info(f'[response] {result}')
break
result += chunkjson['choices'][0]["delta"]["content"]
if not console_slience: print(chunkjson['choices'][0]["delta"]["content"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
observe_window[0] += chunkjson['choices'][0]["delta"]["content"]
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
except Exception as e:
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
print(error_msg)
raise RuntimeError("Json解析不合常规")
return result
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
发送至chatGPT,流式获取输出。
用于基础的对话功能。
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if len(YIMODEL_API_KEY) == 0:
raise RuntimeError("没有设置YIMODEL_API_KEY选项")
if inputs == "": inputs = "空空如也的输入栏"
user_input = inputs
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
# check mis-behavior
if is_the_upload_folder(user_input):
chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
time.sleep(2)
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
history.append(inputs); history.append("")
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
while True:
try:
chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if is_last_chunk:
# 判定为数据流的结束,gpt_replying_buffer也写完了
logging.info(f'[response] {gpt_replying_buffer}')
break
# 处理数据流的主体
status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
# 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
except Exception as e:
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
print(error_msg)
return
def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
from .bridge_all import model_info
if "bad_request" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 已经超过了模型的最大上下文或是模型格式错误,请尝试削减单次输入的文本量。")
elif "authentication_error" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. 请确保API key有效。")
elif "not_found" in error_msg:
chatbot[-1] = (chatbot[-1][0], f"[Local Message] {llm_kwargs['llm_model']} 无效,请确保使用小写的模型名称。")
elif "rate_limit" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 遇到了控制请求速率限制,请一分钟后重试。")
elif "system_busy" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 系统繁忙,请一分钟后重试。")
else:
from toolbox import regular_txt_to_markdown
tb_str = '```\n' + trimmed_format_exc() + '```'
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
return chatbot, history
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
api_key = f"Bearer {YIMODEL_API_KEY}"
headers = {
"Content-Type": "application/json",
"Authorization": api_key
}
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('one-api-'):
model = llm_kwargs['llm_model'][len('one-api-'):]
model, _ = read_one_api_model_name(model)
tokens = 600 if llm_kwargs['llm_model'] == 'yi-34b-chat-0205' else 4096 #yi-34b-chat-0205只有4k上下文...
payload = {
"model": model,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"stream": stream,
"max_tokens": tokens
}
try:
print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
except:
print('输入中可能存在乱码。')
return headers,payload
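
Note how decode_chunk above keys end-of-stream detection on a lastOne field in the streamed JSON rather than an OpenAI-style [DONE] sentinel. A small sketch of that check on a fabricated chunk (the payload shape is illustrative):

import json

def decode_chunk(chunk: bytes):
    chunk_decoded = chunk.decode()
    chunkjson, is_last_chunk = None, False
    try:
        chunkjson = json.loads(chunk_decoded[6:])        # strip the "data: " prefix
        is_last_chunk = chunkjson.get("lastOne", False)  # the stream flags its final frame
    except Exception:
        pass
    return chunk_decoded, chunkjson, is_last_chunk

sample = b'data: {"lastOne": true, "choices": [{"delta": {"content": ""}}]}'
_, payload, done = decode_chunk(sample)
print(done)   # -> True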


@@ -1,7 +1,8 @@
import time
import os
- from toolbox import update_ui, get_conf, update_ui_lastest_msg
+ from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
+ from toolbox import ChatBotWithCookies
model_name = '智谱AI大模型'
zhipuai_default_model = 'glm-4'
@@ -16,7 +17,8 @@ def make_media_input(inputs, image_paths):
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                   observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -42,7 +44,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
return response
- def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+             history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
⭐单线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -90,4 +93,5 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = [inputs, response]
yield from update_ui(chatbot=chatbot, history=history)
history.extend([inputs, response])
+ log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
yield from update_ui(chatbot=chatbot, history=history)


@@ -119,7 +119,7 @@ class ChatGLMModel():
past_key_values = { k: v for k, v in zip(past_names, past_key_values) }
next_token = self.sample_next_token(logits[0, -1], top_k=top_k, top_p=top_p, temperature=temperature)
output_tokens += [next_token]
if next_token == self.eop_token_id or len(output_tokens) > max_generated_tokens:


@@ -114,8 +114,10 @@ def html_local_img(__file, layout="left", max_width=None, max_height=None, md=Tr
class GoogleChatInit:
-     def __init__(self):
-         self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"
+     def __init__(self, llm_kwargs):
+         from .bridge_all import model_info
+         endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+         self.url_gemini = endpoint + "/%m:streamGenerateContent?key=%k"
def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
headers, payload = self.generate_message_payload(
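
The hunk above stops hard-coding the Gemini base URL and instead builds it from the per-model endpoint registered in bridge_all's model_info, keeping the %m (model) and %k (API key) placeholders in the template. A sketch of how such a template is typically expanded; the replace() calls below are an illustrative assumption, not a quote of the repository's code:

model_info = {
    "gemini-pro": {"endpoint": "https://generativelanguage.googleapis.com/v1beta/models"},
}

llm_model, api_key = "gemini-pro", "YOUR_KEY"
url_template = model_info[llm_model]["endpoint"] + "/%m:streamGenerateContent?key=%k"
url = url_template.replace("%m", llm_model).replace("%k", api_key)
print(url)
# -> https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent?key=YOUR_KEY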


@@ -48,6 +48,10 @@ class QwenRequestInstance():
for response in responses:
if response.status_code == HTTPStatus.OK:
if response.output.choices[0].finish_reason == 'stop':
+     try:
+         self.result_buf += response.output.choices[0].message.content
+     except:
+         pass
yield self.result_buf
break
elif response.output.choices[0].finish_reason == 'length':


@@ -8,7 +8,7 @@ from toolbox import get_conf, encode_image, get_pictures_list
import logging, os
- def input_encode_handler(inputs, llm_kwargs):
+ def input_encode_handler(inputs:str, llm_kwargs:dict):
if llm_kwargs["most_recent_uploaded"].get("path"):
image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
md_encode = []
@@ -28,7 +28,7 @@ class ZhipuChatInit:
self.zhipu_bro = ZhipuAI(api_key=ZHIPUAI_API_KEY)
self.model = ''
- def __conversation_user(self, user_input: str, llm_kwargs):
+ def __conversation_user(self, user_input: str, llm_kwargs:dict):
if self.model not in ["glm-4v"]:
return {"role": "user", "content": user_input}
else:
@@ -41,7 +41,7 @@ class ZhipuChatInit:
what_i_have_asked['content'].append(img_d)
return what_i_have_asked
- def __conversation_history(self, history, llm_kwargs):
+ def __conversation_history(self, history:list, llm_kwargs:dict):
messages = []
conversation_cnt = len(history) // 2
if conversation_cnt:
@@ -55,22 +55,67 @@ class ZhipuChatInit:
messages.append(what_gpt_answer)
return messages
- def __conversation_message_payload(self, inputs, llm_kwargs, history, system_prompt):
+ @staticmethod
+ def preprocess_param(param, default=0.95, min_val=0.01, max_val=0.99):
+     """预处理参数,保证其在允许范围内,并处理精度问题"""
+     try:
+         param = float(param)
+     except ValueError:
+         return default
+     if param <= min_val:
+         return min_val
+     elif param >= max_val:
+         return max_val
+     else:
+         return round(param, 2)  # 可挑选精度,目前是两位小数
+ def __conversation_message_payload(self, inputs:str, llm_kwargs:dict, history:list, system_prompt:str):
messages = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
self.model = llm_kwargs['llm_model']
messages.extend(self.__conversation_history(history, llm_kwargs)) # 处理 history
+ if inputs.strip() == "": # 处理空输入导致报错的问题 https://github.com/binary-husky/gpt_academic/issues/1640 提示 {"error":{"code":"1214","message":"messages[1]:content和tool_calls 字段不能同时为空"}
+     inputs = "." # 空格、换行、空字符串都会报错,所以用最没有意义的一个点代替
messages.append(self.__conversation_user(inputs, llm_kwargs)) # 处理用户对话
+ """
+ 采样温度,控制输出的随机性,必须为正数
+ 取值范围是:(0.0, 1.0),不能等于 0,默认值为 0.95,
+ 值越大,会使输出更随机,更具创造性;
+ 值越小,输出会更加稳定或确定
+ 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数
+ """
+ temperature = self.preprocess_param(
+     param=llm_kwargs.get('temperature', 0.95),
+     default=0.95,
+     min_val=0.01,
+     max_val=0.99
+ )
+ """
+ 用温度取样的另一种方法,称为核取样
+ 取值范围是:(0.0, 1.0) 开区间,
+ 不能等于 0 或 1,默认值为 0.7
+ 模型考虑具有 top_p 概率质量 tokens 的结果
+ 例如0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens
+ 建议您根据应用场景调整 top_p 或 temperature 参数,
+ 但不要同时调整两个参数
+ """
+ top_p = self.preprocess_param(
+     param=llm_kwargs.get('top_p', 0.70),
+     default=0.70,
+     min_val=0.01,
+     max_val=0.99
+ )
response = self.zhipu_bro.chat.completions.create(
model=self.model, messages=messages, stream=True,
- temperature=llm_kwargs.get('temperature', 0.95) * 0.95, # 只能传默认的 temperature 和 top_p
- top_p=llm_kwargs.get('top_p', 0.7) * 0.7,
- max_tokens=llm_kwargs.get('max_tokens', 1024 * 4), # 最大输出模型的一半
+ temperature=temperature,
+ top_p=top_p,
+ max_tokens=llm_kwargs.get('max_tokens', 1024 * 4),
)
return response
- def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
+ def generate_chat(self, inputs:str, llm_kwargs:dict, history:list, system_prompt:str):
self.model = llm_kwargs['llm_model']
response = self.__conversation_message_payload(inputs, llm_kwargs, history, system_prompt)
bro_results = ''
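
The preprocess_param helper added above clamps temperature and top_p before the request is built: the accompanying comments note that zhipuai only accepts values in the open interval (0, 1), so out-of-range or non-numeric UI values are normalized first. A standalone copy of the function with a few illustrative calls:

def preprocess_param(param, default=0.95, min_val=0.01, max_val=0.99):
    """Clamp a sampling parameter into (0, 1) and round it to two decimals."""
    try:
        param = float(param)
    except ValueError:
        return default
    if param <= min_val:
        return min_val
    elif param >= max_val:
        return max_val
    return round(param, 2)

print(preprocess_param(1.0))     # 0.99 -- 1.0 would be rejected by the API
print(preprocess_param(0))       # 0.01 -- 0 would be rejected as well
print(preprocess_param(0.567))   # 0.57 -- rounded to two decimals
print(preprocess_param("oops"))  # 0.95 -- falls back to the default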


@@ -2,12 +2,12 @@ import random
def Singleton(cls):
_instance = {}
def _singleton(*args, **kargs):
if cls not in _instance:
_instance[cls] = cls(*args, **kargs)
return _instance[cls]
return _singleton
@@ -16,7 +16,7 @@ class OpenAI_ApiKeyManager():
def __init__(self, mode='blacklist') -> None:
# self.key_avail_list = []
self.key_black_list = []
def add_key_to_blacklist(self, key):
self.key_black_list.append(key)


@@ -1,6 +1,7 @@
import time
import threading
from toolbox import update_ui, Singleton
+ from toolbox import ChatBotWithCookies
from multiprocessing import Process, Pipe
from contextlib import redirect_stdout
from request_llms.queued_pipe import create_queue_pipe
@@ -90,7 +91,7 @@ class LocalLLMHandle(Process):
return self.state
def set_state(self, new_state):
# ⭐run in main process or 🏃‍♂️🏃‍♂️🏃‍♂️ run in child process
if self.is_main_process:
self.state = new_state
else:
@@ -178,8 +179,8 @@ class LocalLLMHandle(Process):
r = self.parent.recv()
continue
break
return
def stream_chat(self, **kwargs):
# ⭐run in main process
if self.get_state() == "`准备就绪`":
@@ -214,7 +215,7 @@
def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='classic'):
load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存CPU或显存GPU,也许会导致低配计算机卡死 ……"
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=[], console_slience:bool=False):
"""
refer to request_llms/bridge_all.py
"""
@@ -260,7 +261,8 @@ def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='cla
raise RuntimeError("程序终止。")
return response
- def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+             history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
refer to request_llms/bridge_all.py
"""


@@ -1,4 +1,4 @@
- https://public.agent-matrix.com/publish/gradio-3.32.8-py3-none-any.whl
+ https://public.agent-matrix.com/publish/gradio-3.32.9-py3-none-any.whl
gradio-client==0.8
pypdf2==2.12.1
zhipuai>=2
@@ -8,6 +8,7 @@ pydantic==2.5.2
protobuf==3.18
transformers>=4.27.1
scipdf_parser>=0.52
+ anthropic>=0.18.1
python-markdown-math
pymdown-extensions
websocket-client
@@ -16,7 +17,7 @@ prompt_toolkit
latex2mathml
python-docx
mdtex2html
- anthropic
+ dashscope
pyautogen
colorama
Markdown
@@ -25,4 +26,4 @@ pymupdf
openai
arxiv
numpy
rich
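
The dependency hunks above move the project-hosted gradio wheel URL from 3.32.8 to 3.32.9, pin anthropic to >=0.18.1, and add dashscope. A quick, optional sanity check (stdlib only) that the installed versions match the updated pins:

from importlib.metadata import version, PackageNotFoundError

for pkg in ("gradio", "anthropic", "dashscope"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")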


@@ -0,0 +1,61 @@
from typing import Callable
def load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)->Callable:
def load_web_cookie_cache(persistent_cookie_, cookies_):
import gradio as gr
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
ret = {}
for k in customize_btns:
ret.update({customize_btns[k]: gr.update(visible=False, value="")})
try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
except: return ret
customize_fn_overwrite_ = persistent_cookie_.get("custom_bnt", {})
cookies_['customize_fn_overwrite'] = customize_fn_overwrite_
ret.update({cookies: cookies_})
for k,v in persistent_cookie_["custom_bnt"].items():
if v['Title'] == "": continue
if k in customize_btns: ret.update({customize_btns[k]: gr.update(visible=True, value=v['Title'])})
else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
return ret
return load_web_cookie_cache
def assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache)->Callable:
def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix, clean_up=False):
import gradio as gr
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
ret = {}
# 读取之前的自定义按钮
customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
# 更新新的自定义按钮
customize_fn_overwrite_.update({
basic_btn_dropdown_:
{
"Title":basic_fn_title,
"Prefix":basic_fn_prefix,
"Suffix":basic_fn_suffix,
}
}
)
if clean_up:
customize_fn_overwrite_ = {}
cookies_.update(customize_fn_overwrite_) # 更新cookie
visible = (not clean_up) and (basic_fn_title != "")
if basic_btn_dropdown_ in customize_btns:
# 是自定义按钮,不是预定义按钮
ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
else:
# 是预定义按钮
ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
ret.update({cookies: cookies_})
try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
except: persistent_cookie_ = {}
persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
ret.update({web_cookie_cache: persistent_cookie_}) # write persistent cookie
return ret
return assign_btn
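
For orientation, both builders above read and write a persistent browser cookie whose payload is a dict keyed by "custom_bnt", with one entry per customized button. A sketch of its shape; the keys follow the code in the diff, while the concrete button name and values are invented examples:

persistent_cookie = {
    "custom_bnt": {
        "CustomButton1": {                    # hypothetical button key
            "Title": "Summarize",             # label shown on the button
            "Prefix": "Please summarize:\n",  # text prepended to the user input
            "Suffix": "",                     # text appended to the user input
        },
    },
}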


@@ -0,0 +1,211 @@
"""
Tests:
- custom_path false / no user auth:
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
-- block __pycache__ access(yes)
-- rel (yes)
-- abs (yes)
-- block user access(fail) http://localhost:45013/file=gpt_log/admin/chat_secrets.log
-- fix(commit f6bf05048c08f5cd84593f7fdc01e64dec1f584a)-> block successful
- custom_path yes("/cc/gptac") / no user auth:
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
-- block __pycache__ access(yes)
-- block user access(yes)
- custom_path yes("/cc/gptac/") / no user auth:
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
-- block user access(yes)
- custom_path yes("/cc/gptac/") / + user auth:
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
-- block user access(yes)
-- block user-wise access (yes)
- custom_path no + user auth:
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
-- block user access(yes)
-- block user-wise access (yes)
queue cocurrent effectiveness
-- upload file(yes)
-- download file(yes)
-- websocket(yes)
"""
import os, requests, threading, time
import uvicorn

def _authorize_user(path_or_url, request, gradio_app):
    from toolbox import get_conf, default_user_name
    PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
    sensitive_path = None
    path_or_url = os.path.relpath(path_or_url)
    if path_or_url.startswith(PATH_LOGGING):
        sensitive_path = PATH_LOGGING
    if path_or_url.startswith(PATH_PRIVATE_UPLOAD):
        sensitive_path = PATH_PRIVATE_UPLOAD
    if sensitive_path:
        token = request.cookies.get("access-token") or request.cookies.get("access-token-unsecure")
        user = gradio_app.tokens.get(token)  # get user
        allowed_users = [user, 'autogen', default_user_name]  # three user paths that can be accessed
        for user_allowed in allowed_users:
            # exact match
            if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
                return True
        return False  # "越权访问!"
    return True

class Server(uvicorn.Server):
    # A server that runs in a separate thread
    def install_signal_handlers(self):
        pass

    def run_in_thread(self):
        self.thread = threading.Thread(target=self.run, daemon=True)
        self.thread.start()
        while not self.started:
            time.sleep(1e-3)

    def close(self):
        self.should_exit = True
        self.thread.join()

def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE):
    import uvicorn
    import fastapi
    import gradio as gr
    from fastapi import FastAPI
    from gradio.routes import App
    from toolbox import get_conf
    CUSTOM_PATH, PATH_LOGGING = get_conf('CUSTOM_PATH', 'PATH_LOGGING')

    # --- --- configurate gradio app block --- ---
    app_block: gr.Blocks
    app_block.ssl_verify = False
    app_block.auth_message = '请登录'
    app_block.favicon_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "docs/logo.png")
    app_block.auth = AUTHENTICATION if len(AUTHENTICATION) != 0 else None
    app_block.blocked_paths = ["config.py", "__pycache__", "config_private.py", "docker-compose.yml", "Dockerfile", f"{PATH_LOGGING}/admin"]
    app_block.dev_mode = False
    app_block.config = app_block.get_config_file()
    app_block.enable_queue = True
    app_block.queue(concurrency_count=CONCURRENT_COUNT)
    app_block.validate_queue_settings()
    app_block.show_api = False
    app_block.config = app_block.get_config_file()
    max_threads = 40
    app_block.max_threads = max(
        app_block._queue.max_thread_count if app_block.enable_queue else 0, max_threads
    )
    app_block.is_colab = False
    app_block.is_kaggle = False
    app_block.is_sagemaker = False
    gradio_app = App.create_app(app_block)

    # --- --- replace gradio endpoint to forbid access to sensitive files --- ---
    if len(AUTHENTICATION) > 0:
        dependencies = []
        endpoint = None
        for route in list(gradio_app.router.routes):
            if route.path == "/file/{path:path}":
                gradio_app.router.routes.remove(route)
            if route.path == "/file={path_or_url:path}":
                dependencies = route.dependencies
                endpoint = route.endpoint
                gradio_app.router.routes.remove(route)

        @gradio_app.get("/file/{path:path}", dependencies=dependencies)
        @gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
        @gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
        async def file(path_or_url: str, request: fastapi.Request):
            if len(AUTHENTICATION) > 0:
                if not _authorize_user(path_or_url, request, gradio_app):
                    return "越权访问!"
            return await endpoint(path_or_url, request)

    # --- --- app_lifespan --- ---
    from contextlib import asynccontextmanager

    @asynccontextmanager
    async def app_lifespan(app):
        async def startup_gradio_app():
            if gradio_app.get_blocks().enable_queue:
                gradio_app.get_blocks().startup_events()
        async def shutdown_gradio_app():
            pass
        await startup_gradio_app()   # startup logic here
        yield                        # The application will serve requests after this point
        await shutdown_gradio_app()  # cleanup/shutdown logic here

    # --- --- FastAPI --- ---
    fastapi_app = FastAPI(lifespan=app_lifespan)
    fastapi_app.mount(CUSTOM_PATH, gradio_app)

    # --- --- favicon --- ---
    if CUSTOM_PATH != '/':
        from fastapi.responses import FileResponse
        @fastapi_app.get("/favicon.ico")
        async def favicon():
            return FileResponse(app_block.favicon_path)

    # --- --- uvicorn.Config --- ---
    ssl_keyfile = None if SSL_KEYFILE == "" else SSL_KEYFILE
    ssl_certfile = None if SSL_CERTFILE == "" else SSL_CERTFILE
    server_name = "0.0.0.0"
    config = uvicorn.Config(
        fastapi_app,
        host=server_name,
        port=PORT,
        reload=False,
        log_level="warning",
        ssl_keyfile=ssl_keyfile,
        ssl_certfile=ssl_certfile,
    )
    server = Server(config)
    url_host_name = "localhost" if server_name == "0.0.0.0" else server_name
    if ssl_keyfile is not None:
        if ssl_certfile is None:
            raise ValueError(
                "ssl_certfile must be provided if ssl_keyfile is provided."
            )
        path_to_local_server = f"https://{url_host_name}:{PORT}/"
    else:
        path_to_local_server = f"http://{url_host_name}:{PORT}/"
    if CUSTOM_PATH != '/':
        path_to_local_server += CUSTOM_PATH.lstrip('/').rstrip('/') + '/'

    # --- --- begin --- ---
    server.run_in_thread()

    # --- --- after server launch --- ---
    app_block.server = server
    app_block.server_name = server_name
    app_block.local_url = path_to_local_server
    app_block.protocol = (
        "https"
        if app_block.local_url.startswith("https") or app_block.is_colab
        else "http"
    )
    if app_block.enable_queue:
        app_block._queue.set_url(path_to_local_server)
    forbid_proxies = {
        "http": "",
        "https": "",
    }
    requests.get(f"{app_block.local_url}startup-events", verify=app_block.ssl_verify, proxies=forbid_proxies)
    app_block.is_running = True
    app_block.block_thread()
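
To make the access rule concrete, here is a small, self-contained restatement of the path check inside _authorize_user; the user names and paths are made up for illustration, and the real function additionally resolves the user from gradio's access-token cookie and reads the sensitive directories from the config:

    import os

    def _is_path_allowed(path, sensitive_path, allowed_users):
        # Keep only the first two path segments, e.g. "private_upload/alice",
        # and require them to match one of the allowed users' directories.
        head = os.sep.join(os.path.relpath(path).split(os.sep)[:2])
        return any(head == os.path.join(sensitive_path, u) for u in allowed_users)

    allowed = ["alice", "autogen", "default_user"]   # logged-in user, autogen, default user
    print(_is_path_allowed("private_upload/alice/2024-04-08/report.pdf", "private_upload", allowed))  # True
    print(_is_path_allowed("private_upload/bob/2024-04-08/report.pdf", "private_upload", allowed))    # False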


@@ -28,6 +28,11 @@ def is_api2d_key(key):
     return bool(API_MATCH_API2D)
 
+def is_cohere_api_key(key):
+    API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{40}$", key)
+    return bool(API_MATCH_AZURE)
+
 def is_any_api_key(key):
     if ',' in key:
         keys = key.split(',')
@@ -35,7 +40,7 @@ def is_any_api_key(key):
             if is_any_api_key(k): return True
         return False
     else:
-        return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key)
+        return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key) or is_cohere_api_key(key)
 
 def what_keys(keys):
@@ -62,7 +67,7 @@ def select_api_key(keys, llm_model):
     avail_key_list = []
     key_list = keys.split(',')
 
-    if llm_model.startswith('gpt-'):
+    if llm_model.startswith('gpt-') or llm_model.startswith('one-api-'):
         for k in key_list:
             if is_openai_api_key(k): avail_key_list.append(k)
@@ -74,8 +79,12 @@ def select_api_key(keys, llm_model):
         for k in key_list:
             if is_azure_api_key(k): avail_key_list.append(k)
 
+    if llm_model.startswith('cohere-'):
+        for k in key_list:
+            if is_cohere_api_key(k): avail_key_list.append(k)
+
     if len(avail_key_list) == 0:
-        raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(右下角更换模型菜单中可切换openai,azure,claude,api2d等请求源)。")
+        raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(左上角更换模型菜单中可切换openai,azure,claude,cohere等请求源)。")
 
     api_key = random.choice(avail_key_list)  # 随机负载均衡
     return api_key

shared_utils/map_names.py (normal file, 34 lines added)

@@ -0,0 +1,34 @@
import re

mapping_dic = {
    # "qianfan": "qianfan文心一言大模型",
    # "zhipuai": "zhipuai智谱GLM4超级模型🔥",
    # "gpt-4-1106-preview": "gpt-4-1106-preview新调优版本GPT-4🔥",
    # "gpt-4-vision-preview": "gpt-4-vision-preview识图模型GPT-4V",
}

rev_mapping_dic = {}
for k, v in mapping_dic.items():
    rev_mapping_dic[v] = k

def map_model_to_friendly_names(m):
    if m in mapping_dic:
        return mapping_dic[m]
    return m

def map_friendly_names_to_model(m):
    if m in rev_mapping_dic:
        return rev_mapping_dic[m]
    return m

def read_one_api_model_name(model: str):
    """return real model name and max_token.
    """
    max_token_pattern = r"\(max_token=(\d+)\)"
    match = re.search(max_token_pattern, model)
    if match:
        max_token_tmp = match.group(1)  # 获取 max_token 的值
        max_token_tmp = int(max_token_tmp)
        model = re.sub(max_token_pattern, "", model)  # 从原字符串中删除 "(max_token=...)"
    else:
        max_token_tmp = 4096
    return model, max_token_tmp
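
A quick usage example for read_one_api_model_name; the model string is a hypothetical one-api entry, shown only to illustrate how the (max_token=...) suffix is parsed and stripped:

    from shared_utils.map_names import read_one_api_model_name

    print(read_one_api_model_name("one-api-claude-3-opus(max_token=32000)"))  # ('one-api-claude-3-opus', 32000)
    print(read_one_api_model_name("one-api-gpt-4"))                           # ('one-api-gpt-4', 4096), the default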


@@ -59,7 +59,7 @@ def apply_gpt_academic_string_mask_langbased(string, lang_reference):
lang_reference = "hello world" lang_reference = "hello world"
输出1 输出1
"注意,lang_reference这段文字是英语" "注意,lang_reference这段文字是英语"
输入2 输入2
string = "注意,lang_reference这段文字是中文" # 注意这里没有掩码tag,所以不会被处理 string = "注意,lang_reference这段文字是中文" # 注意这里没有掩码tag,所以不会被处理
lang_reference = "hello world" lang_reference = "hello world"


@@ -11,28 +11,45 @@ def validate_path():
 validate_path()  # validate path so you can run from base directory
 
-if __name__ == "__main__":
-    # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
-    # from request_llms.bridge_moss import predict_no_ui_long_connection
-    # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
-    # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
-    # from request_llms.bridge_claude import predict_no_ui_long_connection
-    # from request_llms.bridge_internlm import predict_no_ui_long_connection
-    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
-    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
-    from request_llms.bridge_qwen_local import predict_no_ui_long_connection
-    # from request_llms.bridge_spark import predict_no_ui_long_connection
-    # from request_llms.bridge_zhipu import predict_no_ui_long_connection
-    # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
-
-    llm_kwargs = {
-        "max_length": 4096,
-        "top_p": 1,
-        "temperature": 1,
-    }
-
-    result = predict_no_ui_long_connection(
-        inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
-    )
-    print("final result:", result)
+if "在线模型":
+    if __name__ == "__main__":
+        from request_llms.bridge_cohere import predict_no_ui_long_connection
+        # from request_llms.bridge_spark import predict_no_ui_long_connection
+        # from request_llms.bridge_zhipu import predict_no_ui_long_connection
+        # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
+        llm_kwargs = {
+            "llm_model": "command-r-plus",
+            "max_length": 4096,
+            "top_p": 1,
+            "temperature": 1,
+        }
+        result = predict_no_ui_long_connection(
+            inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt="系统"
+        )
+        print("final result:", result)
+        print("final result:", result)
+
+if "本地模型":
+    if __name__ == "__main__":
+        # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
+        # from request_llms.bridge_moss import predict_no_ui_long_connection
+        # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
+        # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
+        # from request_llms.bridge_claude import predict_no_ui_long_connection
+        # from request_llms.bridge_internlm import predict_no_ui_long_connection
+        # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+        # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+        # from request_llms.bridge_qwen_local import predict_no_ui_long_connection
+        llm_kwargs = {
+            "max_length": 4096,
+            "top_p": 1,
+            "temperature": 1,
+        }
+        result = predict_no_ui_long_connection(
+            inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
+        )
+        print("final result:", result)


@@ -2,6 +2,76 @@
// 第 1 部分: 工具函数
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

function push_data_to_gradio_component(DAT, ELEM_ID, TYPE) {
    // TYPE === "str" / "float" / "no_conversion"
    if (TYPE == "str") {
        // convert dat to string: do nothing
    }
    else if (TYPE == "no_conversion") {
        // do nothing
    }
    else if (TYPE == "float") {
        // convert dat to float
        DAT = parseFloat(DAT);
    }
    const myEvent = new CustomEvent('gpt_academic_update_gradio_component', {
        detail: {
            data: DAT,
            elem_id: ELEM_ID,
        }
    });
    window.dispatchEvent(myEvent);
}

async function get_gradio_component(ELEM_ID) {
    function waitFor(ELEM_ID) {
        return new Promise((resolve) => {
            const myEvent = new CustomEvent('gpt_academic_get_gradio_component_value', {
                detail: {
                    elem_id: ELEM_ID,
                    resolve,
                }
            });
            window.dispatchEvent(myEvent);
        });
    }
    let result = await waitFor(ELEM_ID);
    return result;
}

async function get_data_from_gradio_component(ELEM_ID) {
    let comp = await get_gradio_component(ELEM_ID);
    return comp.props.value;
}

function update_array(arr, item, mode) {
    // // Remove "输入清除键"
    // p = update_array(p, "输入清除键", "remove");
    // console.log(p); // Should log: ["基础功能区", "函数插件区"]
    // // Add "输入清除键"
    // p = update_array(p, "输入清除键", "add");
    // console.log(p); // Should log: ["基础功能区", "函数插件区", "输入清除键"]
    const index = arr.indexOf(item);
    if (mode === "remove") {
        if (index !== -1) {
            // Item found, remove it
            arr.splice(index, 1);
        }
    } else if (mode === "add") {
        if (index === -1) {
            // Item not found, add it
            arr.push(item);
        }
    }
    return arr;
}
function gradioApp() {
    // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
    const elems = document.getElementsByTagName('gradio-app');
@@ -14,6 +84,7 @@ function gradioApp() {
    return elem.shadowRoot ? elem.shadowRoot : elem;
}

function setCookie(name, value, days) {
    var expires = "";
@@ -26,6 +97,7 @@ function setCookie(name, value, days) {
    document.cookie = name + "=" + value + expires + "; path=/";
}

function getCookie(name) {
    var decodedCookie = decodeURIComponent(document.cookie);
    var cookies = decodedCookie.split(';');
@@ -41,6 +113,7 @@ function getCookie(name) {
    return null;
}

let toastCount = 0;
function toast_push(msg, duration) {
    duration = isNaN(duration) ? 3000 : duration;
@@ -63,6 +136,7 @@ function toast_push(msg, duration) {
    toastCount++;
}

function toast_up(msg) {
    var m = document.getElementById('toast_up');
    if (m) {
@@ -75,6 +149,7 @@ function toast_up(msg) {
    document.body.appendChild(m);
}

function toast_down() {
    var m = document.getElementById('toast_up');
    if (m) {
@@ -82,6 +157,7 @@
    }
}

function begin_loading_status() {
    // Create the loader div and add styling
    var loader = document.createElement('div');
@@ -256,6 +332,7 @@ function do_something_but_not_too_frequently(min_interval, func) {
    }
}

function chatbotContentChanged(attempt = 1, force = false) {
    // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
    for (var i = 0; i < attempt; i++) {
@@ -272,7 +349,6 @@ function chatbotContentChanged(attempt = 1, force = false) {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 3 部分: chatbot动态高度调整
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function chatbotAutoHeight() {
    // 自动调整高度:立即
    function update_height() {
@@ -304,6 +380,7 @@ function chatbotAutoHeight() {
    setInterval(function () { update_height_slow() }, 50); // 每50毫秒执行一次
}

swapped = false;
function swap_input_area() {
    // Get the elements to be swapped
@@ -323,6 +400,7 @@ function swap_input_area() {
    else { swapped = true; }
}

function get_elements(consider_state_panel = false) {
    var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
    if (!chatbot) {
@@ -420,6 +498,7 @@ async function upload_files(files) {
    }
}

function register_func_paste(input) {
    let paste_files = [];
    if (input) {
@@ -446,6 +525,7 @@ function register_func_paste(input) {
    }
}

function register_func_drag(elem) {
    if (elem) {
        const dragEvents = ["dragover"];
@@ -482,6 +562,7 @@ function register_func_drag(elem) {
    }
}

function elem_upload_component_pop_message(elem) {
    if (elem) {
        const dragEvents = ["dragover"];
@@ -511,6 +592,7 @@ function elem_upload_component_pop_message(elem) {
    }
}

function register_upload_event() {
    locate_upload_elems();
    if (elem_upload_float) {
@@ -533,6 +615,7 @@ function register_upload_event() {
    }
}

function monitoring_input_box() {
    register_upload_event();
@@ -566,7 +649,6 @@ window.addEventListener("DOMContentLoaded", function () {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 5 部分: 音频按钮样式变化
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function audio_fn_init() {
    let audio_component = document.getElementById('elem_audio');
    if (audio_component) {
@@ -603,6 +685,7 @@ function audio_fn_init() {
    }
}

function minor_ui_adjustment() {
    let cbsc_area = document.getElementById('cbsc');
    cbsc_area.style.paddingTop = '15px';
@@ -695,21 +778,6 @@ function limit_scroll_position() {
// 第 7 部分: JS初始化函数
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
-    audio_fn_init();
-    minor_ui_adjustment();
-    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
-    var chatbotObserver = new MutationObserver(() => {
-        chatbotContentChanged(1);
-    });
-    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
-    if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
-    if (LAYOUT === "LEFT-RIGHT") { limit_scroll_position(); }
-    // setInterval(function () { uml("mermaid") }, 5000); // 每50毫秒执行一次
-}
function loadLive2D() {
    try {
        $("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
@@ -731,12 +799,12 @@ function loadLive2D() {
        live2d_settings['canTakeScreenshot'] = false;
        live2d_settings['canTurnToHomePage'] = false;
        live2d_settings['canTurnToAboutPage'] = false;
        live2d_settings['showHitokoto'] = false;      // 显示一言
        live2d_settings['showF12Status'] = false;     // 显示加载状态
        live2d_settings['showF12Message'] = false;    // 显示看板娘消息
        live2d_settings['showF12OpenMsg'] = false;    // 显示控制台打开提示
        live2d_settings['showCopyMessage'] = false;   // 显示 复制内容 提示
        live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
        /* 在 initModel 前添加 */
        initModel("file=themes/waifu_plugin/waifu-tips.json");
    }
@@ -746,7 +814,8 @@ function loadLive2D() {
    } catch (err) { console.log("[Error] JQuery is not defined.") }
}

-function get_checkbox_selected_items(elem_id){
+function get_checkbox_selected_items(elem_id) {
    display_panel_arr = [];
    document.getElementById(elem_id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
        // Get the span text
@@ -760,51 +829,52 @@ function get_checkbox_selected_items(elem_id){
     return display_panel_arr;
 }
 
-function set_checkbox(key, bool, set_twice=false) {
-    set_success = false;
-    elem_ids = ["cbsc", "cbs"]
-    elem_ids.forEach(id => {
-        document.getElementById(id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
-            // Get the span text
-            const spanText = label.querySelector('span').textContent;
-            if (spanText === key) {
-                if (bool){
-                    label.classList.add('selected');
-                } else {
-                    if (label.classList.contains('selected')) {
-                        label.classList.remove('selected');
-                    }
-                }
-                if (set_twice){
-                    setTimeout(() => {
-                        if (bool){
-                            label.classList.add('selected');
-                        } else {
-                            if (label.classList.contains('selected')) {
-                                label.classList.remove('selected');
-                            }
-                        }
-                    }, 5000);
-                }
-                label.querySelector('input').checked = bool;
-                set_success = true;
-                return
-            }
-        });
-    });
-    if (!set_success){
-        console.log("设置checkbox失败,没有找到对应的key")
-    }
-}
+function gpt_academic_gradio_saveload(
+    save_or_load,            // save_or_load==="save" / save_or_load==="load"
+    elem_id,                 // element id
+    cookie_key,              // cookie key
+    save_value = "",         // save value
+    load_type = "str",       // type==="str" / type==="float"
+    load_default = false,    // load default value
+    load_default_value = ""
+) {
+    if (save_or_load === "load") {
+        let value = getCookie(cookie_key);
+        if (value) {
+            console.log('加载cookie', elem_id, value)
+            push_data_to_gradio_component(value, elem_id, load_type);
+        }
+        else {
+            if (load_default) {
+                console.log('加载cookie的默认值', elem_id, load_default_value)
+                push_data_to_gradio_component(load_default_value, elem_id, load_type);
+            }
+        }
+    }
+    if (save_or_load === "save") {
+        setCookie(cookie_key, save_value, 365);
+    }
+}
 
-function apply_cookie_for_checkbox(dark) {
-    // console.log("apply_cookie_for_checkboxes")
-    let searchString = "输入清除键";
-    let bool_value = "False";
-    ////////////////// darkmode ///////////////////
+async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout) {
+    // 第一部分,布局初始化
+    audio_fn_init();
+    minor_ui_adjustment();
+    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
+    var chatbotObserver = new MutationObserver(() => {
+        chatbotContentChanged(1);
+    });
+    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
+    if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
+    if (layout === "LEFT-RIGHT") { limit_scroll_position(); }
+    // 第二部分,读取Cookie,初始化界面
+    let searchString = "";
+    let bool_value = "";
+    // darkmode 深色模式
     if (getCookie("js_darkmode_cookie")) {
         dark = getCookie("js_darkmode_cookie")
     }
@@ -819,29 +889,41 @@ function apply_cookie_for_checkbox(dark) {
         }
     }
 
-    ////////////////////// clearButton ///////////////////////////
+    // SysPrompt 系统静默提示词
+    gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
+    // Temperature 大模型温度参数
+    gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");
+    // clearButton 自动清除按钮
     if (getCookie("js_clearbtn_show_cookie")) {
         // have cookie
         bool_value = getCookie("js_clearbtn_show_cookie")
         bool_value = bool_value == "True";
         searchString = "输入清除键";
         if (bool_value) {
-            let clearButton = document.getElementById("elem_clear");
-            let clearButton2 = document.getElementById("elem_clear2");
-            clearButton.style.display = "block";
-            clearButton2.style.display = "block";
-            set_checkbox(searchString, true);
+            // make btns appear
+            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
+            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
+            // deal with checkboxes
+            let arr_with_clear_btn = update_array(
+                await get_data_from_gradio_component('cbs'), "输入清除键", "add"
+            )
+            push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
         } else {
-            let clearButton = document.getElementById("elem_clear");
-            let clearButton2 = document.getElementById("elem_clear2");
-            clearButton.style.display = "none";
-            clearButton2.style.display = "none";
-            set_checkbox(searchString, false);
+            // make btns disappear
+            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
+            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
+            // deal with checkboxes
+            let arr_without_clear_btn = update_array(
+                await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
+            )
+            push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
         }
     }
 
-    ////////////////////// live2d ///////////////////////////
+    // live2d 显示
     if (getCookie("js_live2d_show_cookie")) {
         // have cookie
         searchString = "添加Live2D形象";
@@ -849,17 +931,23 @@ function apply_cookie_for_checkbox(dark) {
         bool_value = bool_value == "True";
         if (bool_value) {
             loadLive2D();
-            set_checkbox(searchString, true);
+            let arr_with_live2d = update_array(
+                await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
+            )
+            push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
         } else {
-            $('.waifu').hide();
-            set_checkbox(searchString, false);
+            try {
+                $('.waifu').hide();
+                let arr_without_live2d = update_array(
+                    await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
+                )
+                push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
+            } catch (error) {
+            }
         }
     } else {
         // do not have cookie
-        // get conf
-        display_panel_arr = get_checkbox_selected_items("cbsc");
-        searchString = "添加Live2D形象";
-        if (display_panel_arr.includes(searchString)) {
+        if (live2d) {
             loadLive2D();
         } else {
         }

Some files were not shown because too many files changed in this diff.