Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 14:36:48 +00:00

Compare commits: binary-husky ... version3.7
68 commits
| SHA1 |
|---|
| 163f12c533 |
| bdd46c5dd1 |
| ae51a0e686 |
| f2582ea137 |
| ddd2fd84da |
| 6c90ff80ea |
| cb7c0703be |
| 5181cd441d |
| 216d4374e7 |
| 8af6c0cab6 |
| 67ad041372 |
| 725c72229c |
| e42ede512b |
| 84ccc9e64c |
| c172847e19 |
| d166d25eb4 |
| 516bbb1331 |
| c3140ce344 |
| cd18663800 |
| dbf1322836 |
| 98dd3ae1c0 |
| 3036709496 |
| 8e9c07644f |
| 90d96b77e6 |
| 66c876a9ca |
| 0665eb75ed |
| 6b784035fa |
| 8bb3d84912 |
| a0193cf227 |
| b72289bfb0 |
| bdfe3862eb |
| dae180b9ea |
| e359fff040 |
| 2e9b4a5770 |
| e0c5859cf9 |
| b9b1e12dc9 |
| 8814026ec3 |
| 3025d5be45 |
| 6c13bb7b46 |
| c27e559f10 |
| cdb5288f49 |
| 49c6fcfe97 |
| 45fa0404eb |
| f889ef7625 |
| a93bf4410d |
| 1c0764753a |
| c847209ac9 |
| 4f9d40c14f |
| 91926d24b7 |
| ef311c4859 |
| 82795d3817 |
| 49e28a5a00 |
| 01def2e329 |
| 2291be2b28 |
| c89ec7969f |
| 1506c19834 |
| a6fdc493b7 |
| 113067c6ab |
| 7b6828ab07 |
| d818c38dfe |
| 08b4e9796e |
| b55d573819 |
| 06b0e800a2 |
| 7bbaf05961 |
| dd2a97e7a9 |
| e579006c4a |
| 031f19b6dd |
| 142b516749 |
README.md: 63 changed lines
@@ -1,7 +1,6 @@
 > [!IMPORTANT]
+> 2024.3.11: 恭迎Claude3和Moonshot,全力支持Qwen、GLM、DeepseekCoder等中文大语言模型!
 > 2024.1.18: 更新3.70版本,支持Mermaid绘图库(让大模型绘制脑图)
-> 2024.1.17: 恭迎GLM4,全力支持Qwen、GLM、DeepseekCoder等国内中文大语言基座模型!
-> 2024.1.17: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
 > 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。

 <br>
@@ -55,6 +54,11 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 功能(⭐= 近期新增功能) | 描述
 --- | ---
 ⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱GLM4](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
+⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
+⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
+⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
+⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
+⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
 润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
 模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -63,21 +67,16 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
 批量注释生成 | [插件] 一键批量生成函数注释
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
-⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
 [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
 [谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
 互联网信息聚合+GPT | [插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
-⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
-⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
 公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
-⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
 启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
 更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
 ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
-⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
 更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
 </div>

@@ -116,6 +115,25 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <br><br>

 # Installation

+```mermaid
+flowchart TD
+    A{"安装方法"} --> W1("I. 🔑直接运行 (Windows, Linux or MacOS)")
+    W1 --> W11["1. Python pip包管理依赖"]
+    W1 --> W12["2. Anaconda包管理依赖(推荐⭐)"]
+
+    A --> W2["II. 🐳使用Docker (Windows, Linux or MacOS)"]
+
+    W2 --> k1["1. 部署项目全部能力的大镜像(推荐⭐)"]
+    W2 --> k2["2. 仅在线模型(GPT, GLM4等)镜像"]
+    W2 --> k3["3. 在线模型 + Latex的大镜像"]
+
+    A --> W4["IV. 🚀其他部署方法"]
+    W4 --> C1["1. Windows/MacOS 一键安装运行脚本(推荐⭐)"]
+    W4 --> C2["2. Huggingface, Sealos远程部署"]
+    W4 --> C4["3. ... 其他 ..."]
+```
+
 ### 安装方法I:直接运行 (Windows, Linux or MacOS)

 1. 下载项目
@@ -129,7 +147,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼

 在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

-「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保更新或其他用户无法轻易查看您的私有配置 」。
+「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,从而确保自动更新时不会丢失配置 」。

 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。

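The README hunk above documents the config precedence: `环境变量` > `config_private.py` > `config.py`. A minimal sketch of that override logic under those stated rules; the helper name `read_single_conf` and its structure are illustrative, not the project's actual `toolbox.get_conf` implementation:

```python
import importlib
import os

def read_single_conf(key: str, default=None):
    """Resolve one config entry with the precedence documented above:
    environment variable > config_private.py > config.py.
    Hypothetical helper, for illustration only."""
    # 1. Environment variables win unconditionally.
    if key in os.environ:
        return os.environ[key]
    # 2. config_private.py, if present, overrides config.py.
    try:
        private_cfg = importlib.import_module("config_private")
        if hasattr(private_cfg, key):
            return getattr(private_cfg, key)
    except ModuleNotFoundError:
        pass  # no private config file, fall through to the defaults
    # 3. Fall back to the tracked config.py defaults.
    cfg = importlib.import_module("config")
    return getattr(cfg, key, default)

# Example: resolve the default model name.
# print(read_single_conf("LLM_MODEL", "gpt-3.5-turbo"))
```

Keeping secrets in `config_private.py` (untracked) also means a `git pull` or auto-update never overwrites them, which is exactly the motivation the new README wording gives.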
@@ -234,8 +252,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
 # Advanced Usage
 ### I:自定义新的便捷按钮(学术快捷键)

-任意文本编辑器打开`core_functional.py`,添加如下条目,然后重启程序。(如果按钮已存在,那么可以直接修改(前缀、后缀都已支持热修改),无需重启程序即可生效。)
-例如
+现在已可以通过UI中的`界面外观`菜单中的`自定义菜单`添加新的便捷按钮。如果需要在代码中定义,请使用任意文本编辑器打开`core_functional.py`,添加如下条目即可:

 ```python
 "超级英译中": {
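The README hunk above is cut off right after the opening line of the `超级英译中` entry. For orientation, a complete entry of this shape, using only the fields that appear elsewhere in this diff (`Prefix`, `Suffix`, `Color`); the prompt wording below is illustrative, not the repository's original text:

```python
"超级英译中": {
    # Prefix: prepended to whatever the user typed; the task instruction goes here.
    "Prefix": "请你充当一名科技文章的翻译,将下面的英文段落翻译为中文,"
              "保留专业术语的英文原文,输出时不要重复原文:\n\n",
    # Suffix: appended after the user input, e.g. to close a quotation opened in Prefix.
    "Suffix": "",
    # Optional button color; "secondary" is the default used in core_functional.py.
    "Color": "secondary",
},
```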
@@ -358,6 +375,32 @@ GPT Academic开发者QQ群:`610599535`
 - 某些浏览器翻译插件干扰此软件前端的运行
 - 官方Gradio目前有很多兼容性问题,请**务必使用`requirement.txt`安装Gradio**

+```mermaid
+timeline LR
+    title GPT-Academic项目发展历程
+    section 2.x
+        1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
+        2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
+        2.6: 重构了插件结构: 提高了交互性: 加入更多插件
+    section 3.x
+        3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
+        3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
+        3.4: 加入arxiv论文翻译: 加入latex论文批改功能
+        3.44: 正式支持Azure: 优化界面易用性
+        3.46: 自定义ChatGLM2微调模型: 实时语音对话
+        3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
+        3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
+        3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
+        3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
+        3.56: 动态追加基础功能按钮: 新汇报PDF汇总页面
+        3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
+        3.60: 引入AutoGen
+        3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
+        3.80(TODO): 优化AutoGen插件主题: 设计衍生插件
+
+```

 ### III:主题
 可以通过修改`THEME`选项(config.py)变更主题
 1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)

@@ -47,7 +47,7 @@ def backup_and_download(current_version, remote_version):
     shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
     proxies = get_conf('proxies')
     try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
-    except: r = requests.get('https://public.gpt-academic.top/publish/master.zip', proxies=proxies, stream=True)
+    except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
     zip_file_path = backup_dir+'/master.zip'
     with open(zip_file_path, 'wb+') as f:
         f.write(r.content)
@@ -113,7 +113,7 @@ def auto_update(raise_error=False):
         import json
         proxies = get_conf('proxies')
         try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
-        except: response = requests.get("https://public.gpt-academic.top/publish/version", proxies=proxies, timeout=5)
+        except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
         remote_json_data = json.loads(response.text)
         remote_version = remote_json_data['version']
         if remote_json_data["show_feature"]:
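Both hunks above swap the fallback mirror from public.gpt-academic.top to public.agent-matrix.com while keeping the same try/except download pattern: try the primary GitHub source, fall back to the project mirror. A small sketch of that pattern factored into one reusable helper; the function name and the narrowed exception type are my additions, not repository code:

```python
import requests

def get_with_fallback(urls, proxies=None, timeout=5, **kwargs):
    """Try each URL in order and return the first successful response.
    Mirrors the try/except chain used by backup_and_download/auto_update."""
    last_error = None
    for url in urls:
        try:
            return requests.get(url, proxies=proxies, timeout=timeout, **kwargs)
        except requests.RequestException as e:
            last_error = e  # remember the failure, try the next mirror
    if last_error is None:
        raise ValueError("no URLs given")
    raise last_error

# Example, using the two version endpoints from the hunk above:
# r = get_with_fallback([
#     "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version",
#     "https://public.agent-matrix.com/publish/version",
# ])
```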
config.py: 103 changed lines
@@ -30,7 +30,33 @@ if USE_PROXY:
 else:
     proxies = None

-# ------------------------------------ 以下配置可以优化体验, 但大部分场合下并不需要修改 ------------------------------------
+# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
+LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
+AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
+                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
+                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-3-turbo",
+                    "gemini-pro", "chatglm3"
+                    ]
+# --- --- --- ---
+# P.S. 其他可用的模型还包括
+# AVAIL_LLM_MODELS = [
+#   "qianfan", "deepseekcoder",
+#   "spark", "sparkv2", "sparkv3", "sparkv3.5",
+#   "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
+#   "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
+#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125"
+#   "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
+#   "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
+#   "yi-34b-chat-0205", "yi-34b-chat-200k"
+# ]
+# --- --- --- ---
+# 此外,为了更灵活地接入one-api多模型管理界面,您还可以在接入one-api时,
+# 使用"one-api-*"前缀直接使用非标准方式接入的模型,例如
+# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)"]
+# --- --- --- ---
+
+
+# --------------- 以下配置可以优化体验 ---------------

 # 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
 # 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
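The hunk above moves the model block up to "[step 3]" and restates the invariant in its comment: LLM_MODEL, the default selection, must be contained in AVAIL_LLM_MODELS. A one-line startup check that would enforce it (illustrative; the project may or may not validate this elsewhere):

```python
import config

# LLM_MODEL is the default selection, so it must be one of the offered models.
assert config.LLM_MODEL in config.AVAIL_LLM_MODELS, \
    f"LLM_MODEL={config.LLM_MODEL!r} is missing from AVAIL_LLM_MODELS"
```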
@@ -85,20 +111,6 @@ MAX_RETRY = 2
 DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']


-# 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
-LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
-                    "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
-                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]
-# P.S. 其他可用的模型还包括 [
-# "moss", "qwen-turbo", "qwen-plus", "qwen-max"
-# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
-# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
-# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
-# ]
-
-
 # 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
 MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"

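As the comment above says, MULTI_QUERY_LLM_MODELS packs several model names into one `&`-separated string. Recovering the list is a plain `split`; a quick sketch (the plugin's real parsing code is not shown in this diff):

```python
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3&azure-gpt-4"

# Each name must also appear in AVAIL_LLM_MODELS.
models = MULTI_QUERY_LLM_MODELS.split('&')
print(models)  # ['gpt-3.5-turbo', 'chatglm3', 'azure-gpt-4']
```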
@@ -127,6 +139,7 @@ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
 LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
+

 # 设置gradio的并行线程数(不需要修改)
 CONCURRENT_COUNT = 100

@@ -144,7 +157,8 @@ ADD_WAIFU = False
 AUTHENTICATION = []


-# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
+# 如果需要在二级路径下运行(常规情况下,不要修改!!)
+# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
 CUSTOM_PATH = "/"

@@ -172,14 +186,8 @@ AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.
 AZURE_CFG_ARRAY = {}


-# 使用Newbing (不推荐使用,未来将删除)
-NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
-NEWBING_COOKIES = """
-put your new bing cookies here
-"""
-
-
-# 阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
+# 阿里云实时语音识别 配置难度较高
+# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
 ENABLE_AUDIO = False
 ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
 ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
@@ -195,19 +203,26 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

 # 接入智谱大模型
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
+ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写


-# # 火山引擎YUNQUE大模型
-# YUNQUE_SECRET_KEY = ""
-# YUNQUE_ACCESS_KEY = ""
-# YUNQUE_MODEL = ""
-
-
 # Claude API KEY
 ANTHROPIC_API_KEY = ""


+# 月之暗面 API KEY
+MOONSHOT_API_KEY = ""
+
+
+# 零一万物(Yi Model) API KEY
+YIMODEL_API_KEY = ""
+
+
+# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
+MATHPIX_APPID = ""
+MATHPIX_APPKEY = ""
+
+
 # 自定义API KEY格式
 CUSTOM_API_KEY_PATTERN = ""

@@ -261,7 +276,11 @@ PLUGIN_HOT_RELOAD = False
 # 自定义按钮的最大数量限制
 NUM_CUSTOM_BASIC_BTN = 4

+
+
 """
+--------------- 配置关联关系说明 ---------------
+
 在线大模型配置关联关系示意图
 │
 ├── "gpt-3.5-turbo" 等openai模型
@@ -285,7 +304,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── XFYUN_API_SECRET
 │   └── XFYUN_API_KEY
 │
-├── "claude-1-100k" 等claude模型
+├── "claude-3-opus-20240229" 等claude模型
 │   └── ANTHROPIC_API_KEY
 │
 ├── "stack-claude"
@@ -297,9 +316,11 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── BAIDU_CLOUD_API_KEY
 │   └── BAIDU_CLOUD_SECRET_KEY
 │
-├── "zhipuai" 智谱AI大模型chatglm_turbo
-│   ├── ZHIPUAI_API_KEY
-│   └── ZHIPUAI_MODEL
+├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
+│   └── ZHIPUAI_API_KEY
+│
+├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
+│   └── YIMODEL_API_KEY
 │
 ├── "qwen-turbo" 等通义千问大模型
 │   └── DASHSCOPE_API_KEY
@@ -307,9 +328,10 @@ NUM_CUSTOM_BASIC_BTN = 4
 ├── "Gemini"
 │   └── GEMINI_API_KEY
 │
-└── "newbing" Newbing接口不再稳定,不推荐使用
-    ├── NEWBING_STYLE
-    └── NEWBING_COOKIES
+└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
+    ├── AVAIL_LLM_MODELS
+    ├── API_KEY
+    └── API_URL_REDIRECT


 本地大模型示意图
@@ -351,6 +373,9 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   └── ALIYUN_SECRET
 │
 └── PDF文档精准解析
-    └── GROBID_URLS
+    ├── GROBID_URLS
+    ├── MATHPIX_APPID
+    └── MATHPIX_APPKEY

 """

@@ -3,18 +3,27 @@
 # 'stop' 颜色对应 theme.py 中的 color_er
 import importlib
 from toolbox import clear_line_break
+from toolbox import apply_gpt_academic_string_mask_langbased
+from toolbox import build_gpt_academic_masked_string_langbased
 from textwrap import dedent

 def get_core_functions():
     return {
+
-        "英语学术润色": {
-            # [1*] 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
-            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
-                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
-                      r"Firstly, you should provide the polished paragraph. "
-                      r"Secondly, you should list all your modification and explain the reasons to do so in markdown table." + "\n\n",
-            # [2*] 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
+        "学术语料润色": {
+            # [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等。
+            # 这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
+            "Prefix": build_gpt_academic_masked_string_langbased(
+                text_show_english=
+                    r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
+                    r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
+                    r"Firstly, you should provide the polished paragraph. "
+                    r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
+                text_show_chinese=
+                    r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
+                    r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
+                ) + "\n\n",
+            # [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
             "Suffix": r"",
             # [3] 按钮颜色 (可选参数,默认 secondary)
             "Color": r"secondary",
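The hunk above introduces two toolbox helpers: build_gpt_academic_masked_string_langbased packs an English and a Chinese prompt variant into one string, and apply_gpt_academic_string_mask_langbased (used further down in handle_core_functionality) later picks the variant matching the language of a reference string. The real implementations are not part of this diff; the sketch below is a hypothetical stand-in that only illustrates the inferred behavior, and the marker tokens and CJK probe are invented:

```python
# Hypothetical stand-ins for the two toolbox helpers named in the hunk above.
BEGIN, SPLIT, END = "<lang_en>", "<lang_zh>", "</lang>"

def build_gpt_academic_masked_string_langbased(text_show_english: str,
                                               text_show_chinese: str) -> str:
    """Pack both language variants into one 'masked' string for later resolution."""
    return f"{BEGIN}{text_show_english}{SPLIT}{text_show_chinese}{END}"

def contains_cjk(text: str) -> bool:
    """Crude language probe: any CJK Unified Ideograph counts as Chinese."""
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

def apply_gpt_academic_string_mask_langbased(string: str, lang_reference: str) -> str:
    """Replace each masked span with the variant matching lang_reference."""
    while BEGIN in string:
        head, rest = string.split(BEGIN, 1)
        en, rest = rest.split(SPLIT, 1)
        zh, tail = rest.split(END, 1)
        string = head + (zh if contains_cjk(lang_reference) else en) + tail
    return string

# Example: a Chinese input selects the Chinese prompt variant.
prefix = build_gpt_academic_masked_string_langbased("Polish this paragraph: ",
                                                    "请润色以下段落:")
print(apply_gpt_academic_string_mask_langbased(prefix + "你好", lang_reference="你好"))
```

This is why the diff renames the button from "英语学术润色" to "学术语料润色": one button now serves both languages, resolved at click time from the user's input.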
@@ -29,11 +38,13 @@ def get_core_functions():

         "总结绘制脑图": {
             # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
-            "Prefix": r"",
+            "Prefix": '''"""\n\n''',
             # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
             "Suffix":
-                dedent("\n"+r'''
-==============================
+                # dedent() 函数用于去除多行字符串的缩进
+                dedent("\n\n"+r'''
+"""
+
 使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:

 以下是对以上文本的总结,以mermaid flowchart的形式展示:
@@ -46,7 +57,7 @@ def get_core_functions():
 C --> |"箭头名2"| F["节点名6"]
 ```

-警告:
+注意:
 (1)使用中文
 (2)节点名字使用引号包裹,如["Laptop"]
 (3)`|` 和 `"`之间不要存在空格
@@ -83,14 +94,22 @@ def get_core_functions():


         "学术英中互译": {
-            "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
-                      r"I will provide you with some paragraphs in one language " +
-                      r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
-                      r"Do not repeat the original provided paragraphs after translation. " +
-                      r"You should use artificial intelligence tools, " +
-                      r"such as natural language processing, and rhetorical knowledge " +
-                      r"and experience about effective writing techniques to reply. " +
-                      r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
+            "Prefix": build_gpt_academic_masked_string_langbased(
+                text_show_chinese=
+                    r"I want you to act as a scientific English-Chinese translator, "
+                    r"I will provide you with some paragraphs in one language "
+                    r"and your task is to accurately and academically translate the paragraphs only into the other language. "
+                    r"Do not repeat the original provided paragraphs after translation. "
+                    r"You should use artificial intelligence tools, "
+                    r"such as natural language processing, and rhetorical knowledge "
+                    r"and experience about effective writing techniques to reply. "
+                    r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
+                text_show_english=
+                    r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
+                    r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
+                    r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
+                    r"你需要翻译的文本如下:"
+                ) + "\n\n",
             "Suffix": r"",
         },

@@ -140,7 +159,11 @@ def handle_core_functionality(additional_fn, inputs, history, chatbot):
     if "PreProcess" in core_functional[additional_fn]:
         if core_functional[additional_fn]["PreProcess"] is not None:
             inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
-    inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+    # 为字符串加上上面定义的前缀和后缀。
+    inputs = apply_gpt_academic_string_mask_langbased(
+        string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
+        lang_reference = inputs,
+    )
     if core_functional[additional_fn].get("AutoClearHistory", False):
         history = []
     return inputs, history

@@ -32,10 +32,9 @@ def get_crazy_functions():
     from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
     from crazy_functions.Latex全文润色 import Latex中文润色
     from crazy_functions.Latex全文润色 import Latex英文纠错
-    from crazy_functions.Latex全文翻译 import Latex中译英
-    from crazy_functions.Latex全文翻译 import Latex英译中
     from crazy_functions.批量Markdown翻译 import Markdown中译英
     from crazy_functions.虚空终端 import 虚空终端
+    from crazy_functions.生成多种Mermaid图表 import 生成多种Mermaid图表

     function_plugins = {
         "虚空终端": {
@@ -71,6 +70,15 @@ def get_crazy_functions():
             "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
             "Function": HotReload(清除缓存),
         },
+        "生成多种Mermaid图表(从当前对话或路径(.pdf/.md/.docx)中生产图表)": {
+            "Group": "对话",
+            "Color": "stop",
+            "AsButton": False,
+            "Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
+            "Function": HotReload(生成多种Mermaid图表),
+            "AdvancedArgs": True,
+            "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
+        },
         "批量总结Word文档": {
             "Group": "学术",
             "Color": "stop",
@@ -237,13 +245,7 @@ def get_crazy_functions():
             "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex英文润色),
         },
-        "英文Latex项目全文纠错(输入路径或上传压缩包)": {
-            "Group": "学术",
-            "Color": "stop",
-            "AsButton": False,  # 加入下拉菜单中
-            "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex英文纠错),
-        },
         "中文Latex项目全文润色(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
@@ -252,6 +254,14 @@ def get_crazy_functions():
             "Function": HotReload(Latex中文润色),
         },
         # 已经被新插件取代
+        # "英文Latex项目全文纠错(输入路径或上传压缩包)": {
+        #     "Group": "学术",
+        #     "Color": "stop",
+        #     "AsButton": False,  # 加入下拉菜单中
+        #     "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
+        #     "Function": HotReload(Latex英文纠错),
+        # },
+        # 已经被新插件取代
         # "Latex项目全文中译英(输入路径或上传压缩包)": {
         #     "Group": "学术",
         #     "Color": "stop",
@@ -522,7 +532,9 @@ def get_crazy_functions():
         print("Load function plugin failed")

     try:
-        from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
+        from crazy_functions.Latex输出PDF import Latex英文纠错加PDF对比
+        from crazy_functions.Latex输出PDF import Latex翻译中文并重新编译PDF
+        from crazy_functions.Latex输出PDF import PDF翻译中文并重新编译PDF

         function_plugins.update(
             {
@@ -533,38 +545,39 @@ def get_crazy_functions():
                 "AdvancedArgs": True,
                 "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
                 "Function": HotReload(Latex英文纠错加PDF对比),
-            }
-        }
-        )
-        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
-
-        function_plugins.update(
-            {
+            },
             "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
                 "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                 "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
                 "Function": HotReload(Latex翻译中文并重新编译PDF),
-            }
-        }
-        )
-        function_plugins.update(
-            {
+            },
             "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
                 "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                 "Info": "本地Latex论文精细翻译 | 输入参数是路径",
                 "Function": HotReload(Latex翻译中文并重新编译PDF),
+            },
+            "PDF翻译中文并重新编译PDF(上传PDF)[需Latex]": {
+                "Group": "学术",
+                "Color": "stop",
+                "AsButton": False,
+                "AdvancedArgs": True,
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
+                "Function": HotReload(PDF翻译中文并重新编译PDF)
             }
         }
         )
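Every registration hunk above follows the same schema. A minimal template for a plugin entry of this kind, using only the field names shown in this diff; the plugin name, its stub body, and the standalone `function_plugins = {}` are placeholders, not repository code:

```python
from toolbox import HotReload  # the hot-reload wrapper used by every entry above

def 我的示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    ...  # placeholder body; real plugins live in crazy_functions/

function_plugins = {}  # stand-in for the dict built inside get_crazy_functions()
function_plugins.update({
    "我的示例插件(输入路径)": {
        "Group": "学术",        # function group shown in the UI
        "Color": "stop",        # button color
        "AsButton": False,      # False: dropdown menu entry instead of a button
        "AdvancedArgs": True,   # expose the advanced-argument textbox
        "ArgsReminder": "在此处填写高级参数的提示语",   # placeholder hint text
        "Info": "插件功能的一句话说明 | 输入参数为路径",
        "Function": HotReload(我的示例插件),
    }
})
```

Note the signature ends in user_request, matching the parameter rename that the later hunks in this comparison apply across all plugins.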
@@ -1,232 +0,0 @@
-from collections.abc import Callable, Iterable, Mapping
-from typing import Any
-from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc
-from toolbox import promote_file_to_downloadzone, get_log_folder
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import input_clipping, try_install_deps
-from multiprocessing import Process, Pipe
-import os
-import time
-
-templete = """
-```python
-import ...  # Put dependencies here, e.g. import numpy as np
-
-class TerminalFunction(object): # Do not change the name of the class, The name of the class must be `TerminalFunction`
-
-    def run(self, path): # The name of the function must be `run`, it takes only a positional argument.
-        # rewrite the function you have just written here
-        ...
-        return generated_file_path
-```
-"""
-
-def inspect_dependency(chatbot, history):
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-    return True
-
-def get_code_block(reply):
-    import re
-    pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
-    matches = re.findall(pattern, reply) # find all code blocks in text
-    if len(matches) == 1:
-        return matches[0].strip('python') # code block
-    for match in matches:
-        if 'class TerminalFunction' in match:
-            return match.strip('python') # code block
-    raise RuntimeError("GPT is not generating proper code.")
-
-def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
-    # 输入
-    prompt_compose = [
-        f'Your job:\n'
-        f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
-        f"2. You should write this function to perform following task: " + txt + "\n",
-        f"3. Wrap the output python function with markdown codeblock."
-    ]
-    i_say = "".join(prompt_compose)
-    demo = []
-
-    # 第一步
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say, inputs_show_user=i_say,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
-        sys_prompt= r"You are a programmer."
-    )
-    history.extend([i_say, gpt_say])
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-    # 第二步
-    prompt_compose = [
-        "If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
-        templete
-    ]
-    i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say, inputs_show_user=inputs_show_user,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
-        sys_prompt= r"You are a programmer."
-    )
-    code_to_return = gpt_say
-    history.extend([i_say, gpt_say])
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-    # # 第三步
-    # i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
-    # i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
-    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
-    #     inputs=i_say, inputs_show_user=inputs_show_user,
-    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
-    #     sys_prompt= r"You are a programmer."
-    # )
-    # # # 第三步
-    # i_say = "Show me how to use `pip` to install packages to run the code above. "
-    # i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
-    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
-    #     inputs=i_say, inputs_show_user=i_say,
-    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
-    #     sys_prompt= r"You are a programmer."
-    # )
-    installation_advance = ""
-
-    return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
-
-def make_module(code):
-    module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
-    with open(f'{get_log_folder()}/{module_file}.py', 'w', encoding='utf8') as f:
-        f.write(code)
-
-    def get_class_name(class_string):
-        import re
-        # Use regex to extract the class name
-        class_name = re.search(r'class (\w+)\(', class_string).group(1)
-        return class_name
-
-    class_name = get_class_name(code)
-    return f"{get_log_folder().replace('/', '.')}.{module_file}->{class_name}"
-
-def init_module_instance(module):
-    import importlib
-    module_, class_ = module.split('->')
-    init_f = getattr(importlib.import_module(module_), class_)
-    return init_f()
-
-def for_immediate_show_off_when_possible(file_type, fp, chatbot):
-    if file_type in ['png', 'jpg']:
-        image_path = os.path.abspath(fp)
-        chatbot.append(['这是一张图片, 展示如下:',
-            f'本地文件地址: <br/>`{image_path}`<br/>'+
-            f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
-        ])
-    return chatbot
-
-def subprocess_worker(instance, file_path, return_dict):
-    return_dict['result'] = instance.run(file_path)
-
-def have_any_recent_upload_files(chatbot):
-    _5min = 5 * 60
-    if not chatbot: return False    # chatbot is None
-    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
-    if not most_recent_uploaded: return False   # most_recent_uploaded is None
-    if time.time() - most_recent_uploaded["time"] < _5min: return True  # most_recent_uploaded is new
-    else: return False  # most_recent_uploaded is too old
-
-def get_recent_file_prompt_support(chatbot):
-    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
-    path = most_recent_uploaded['path']
-    return path
-
-@CatchException
-def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    """
-    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
-    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
-    plugin_kwargs   插件模型的参数,暂时没有用武之地
-    chatbot         聊天显示框的句柄,用于显示给用户
-    history         聊天历史,前情提要
-    system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
-    """
-    raise NotImplementedError
-
-    # 清空历史,以免输入溢出
-    history = []; clear_file_downloadzone(chatbot)
-
-    # 基本信息:功能、贡献者
-    chatbot.append([
-        "函数插件功能?",
-        "CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
-    ])
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    if have_any_recent_upload_files(chatbot):
-        file_path = get_recent_file_prompt_support(chatbot)
-    else:
-        chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # 读取文件
-    if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
-    recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
-    file_path = recently_uploaded_files[-1]
-    file_type = file_path.split('.')[-1]
-
-    # 粗心检查
-    if is_the_upload_folder(txt):
-        chatbot.append([
-            "...",
-            f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
-        ])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
-    # 开始干正事
-    for j in range(5):  # 最多重试5次
-        try:
-            code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
-                yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
-            code = get_code_block(code)
-            res = make_module(code)
-            instance = init_module_instance(res)
-            break
-        except Exception as e:
-            chatbot.append([f"第{j}次代码生成尝试,失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
-            yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # 代码生成结束, 开始执行
-    try:
-        import multiprocessing
-        manager = multiprocessing.Manager()
-        return_dict = manager.dict()
-
-        p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
-        # only has 10 seconds to run
-        p.start(); p.join(timeout=10)
-        if p.is_alive(): p.terminate(); p.join()
-        p.close()
-        res = return_dict['result']
-        # res = instance.run(file_path)
-    except Exception as e:
-        chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
-        # chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
-    # 顺利完成,收尾
-    res = str(res)
-    if os.path.exists(res):
-        chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
-        new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
-        chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-    else:
-        chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-"""
-测试:
-    裁剪图像,保留下半部分
-    交换图像的蓝色通道和红色通道
-    将图像转为灰度图像
-    将csv文件转excel表格
-"""
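One detail worth flagging in the deleted plugin above: `matches[0].strip('python')` strips any of the characters p, y, t, h, o, n from both ends of the block, rather than removing a `python` language tag, so code that begins or ends with those letters can be silently mangled. A stricter extractor, as a hedged sketch (this is not code from the repository):

```python
import re

def extract_python_block(reply: str) -> str:
    """Return the body of the first fenced code block, dropping a leading
    'python' language tag without eating letters from the code itself."""
    matches = re.findall(r"```([\s\S]*?)```", reply)
    if not matches:
        raise RuntimeError("GPT is not generating proper code.")
    block = matches[0]
    # str.strip('python') treats its argument as a character set;
    # remove only an explicit language tag on the first line instead.
    first_line, _, rest = block.partition('\n')
    if first_line.strip() == 'python':
        return rest
    return block

# Example:
# extract_python_block("```python\nprint('hi')\n```")  -> "print('hi')\n"
```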
@@ -81,8 +81,8 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
     # <-------- 多线程润色开始 ---------->
     if language == 'en':
         if mode == 'polish':
-            inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " +
-                            "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
+            inputs_array = [r"Below is a section from an academic paper, polish this section to meet the academic standard, " +
+                            r"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
                             f"\n\n{frag}" for frag in pfg.sp_file_contents]
         else:
             inputs_array = [r"Below is a section from an academic paper, proofread this section." +
@@ -93,10 +93,10 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
         sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
     elif language == 'zh':
         if mode == 'polish':
-            inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
+            inputs_array = [r"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
                             f"\n\n{frag}" for frag in pfg.sp_file_contents]
         else:
-            inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
+            inputs_array = [r"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
                             f"\n\n{frag}" for frag in pfg.sp_file_contents]
         inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
         sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
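The two hunks above convert these prompt literals to raw strings because they embed LaTeX commands: in a plain or f-string, `\s` and `\c` are undefined escape sequences, and current CPython only preserves the backslash by leniency while emitting a warning. A quick illustration of the difference:

```python
# Non-raw literal: '\s' and '\c' are not valid escapes; newer Pythons emit a
# SyntaxWarning and the backslash survives only by interpreter leniency.
bad  = "do not modify \section or \cite"
# Raw string: backslashes are kept verbatim, no warning, intent is explicit.
good = r"do not modify \section or \cite"

assert good == "do not modify \\section or \\cite"
```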
@@ -135,11 +135,11 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch


 @CatchException
-def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
-        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用“Latex英文纠错+高亮”插件)"])
+        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用「Latex英文纠错+高亮修正位置(需Latex)插件」"])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

     # 尝试导入依赖,如果缺少依赖,则给出安装建议
@@ -173,7 +173,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -209,7 +209,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -106,7 +106,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch


 @CatchException
-def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -143,7 +143,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom


 @CatchException
-def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
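The hunks above, like the earlier core plugins, rename the last plugin parameter from web_port to user_request. A sketch of the resulting entry-point shape, with a placeholder plugin name and body (only the signature and the yield-based UI refresh are taken from the code shown in this diff):

```python
from toolbox import CatchException, update_ui

@CatchException
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # user_request replaces the old web_port argument across all plugins in this comparison.
    chatbot.append(["函数插件功能?", "占位说明文本。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
```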
crazy_functions/Latex输出PDF.py: 538 added lines (new regular file)
@@ -0,0 +1,538 @@
|
|||||||
|
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_upload, map_file_to_sha256
|
||||||
|
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
|
||||||
|
from functools import partial
|
||||||
|
import glob, os, requests, time, json, tarfile
|
||||||
|
|
||||||
|
pj = os.path.join
|
||||||
|
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
|
||||||
|
|
||||||
|
|
||||||
|
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
|
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
|
||||||
|
def switch_prompt(pfg, mode, more_requirement):
|
||||||
|
"""
|
||||||
|
Generate prompts and system prompts based on the mode for proofreading or translating.
|
||||||
|
Args:
|
||||||
|
- pfg: Proofreader or Translator instance.
|
||||||
|
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
- inputs_array: A list of strings containing prompts for users to respond to.
|
||||||
|
- sys_prompt_array: A list of strings containing prompts for system prompts.
|
||||||
|
"""
|
||||||
|
n_split = len(pfg.sp_file_contents)
|
||||||
|
if mode == 'proofread_en':
|
||||||
|
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
|
||||||
|
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
|
||||||
|
r"Answer me only with the revised text:" +
|
||||||
|
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||||
|
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
|
||||||
|
elif mode == 'translate_zh':
|
||||||
|
inputs_array = [
|
||||||
|
r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
|
||||||
|
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
|
||||||
|
r"Answer me only with the translated text:" +
|
||||||
|
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||||
|
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
|
||||||
|
else:
|
||||||
|
assert False, "未知指令"
|
||||||
|
return inputs_array, sys_prompt_array
|
||||||

def desend_to_extracted_folder_if_exist(project_folder):
    """
    Descend into the extracted folder if it exists, otherwise return the original folder.

    Args:
    - project_folder: A string specifying the folder path.

    Returns:
    - A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
    """
    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
    if len(maybe_dir) == 0: return project_folder
    if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
    return project_folder


def move_project(project_folder, arxiv_id=None):
    """
    Create a new work folder and copy the project folder to it.

    Args:
    - project_folder: A string specifying the folder path of the project.

    Returns:
    - A string specifying the path to the new work folder.
    """
    import shutil, time
    time.sleep(2)  # avoid time string conflict
    if arxiv_id is not None:
        new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
    else:
        new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
    try:
        shutil.rmtree(new_workfolder)
    except:
        pass

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder, '*'))
    items = [item for item in items if os.path.basename(item) != '__MACOSX']
    if len(glob.glob(pj(project_folder, '*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

    shutil.copytree(src=project_folder, dst=new_workfolder)
    return new_workfolder


def arxiv_download(chatbot, history, txt, allow_cache=True):
    def check_cached_translation_pdf(arxiv_id):
        translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
        if not os.path.exists(translation_dir):
            os.makedirs(translation_dir)
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            target_file_compare = pj(translation_dir, 'comparison.pdf')
            if os.path.exists(target_file_compare):
                promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
            return target_file
        return False

    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False

    if ('.' in txt) and ('/' not in txt) and is_float(txt):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]

    if not txt.startswith('https://arxiv.org'):
        return txt, None  # 是本地文件,跳过下载

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
    yield from update_ui(chatbot=chatbot, history=history)
    time.sleep(1)  # 刷新界面

    url_ = txt  # https://arxiv.org/abs/1707.06690
    if not txt.startswith('https://arxiv.org/abs/'):
        msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}。"
        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history)  # 刷新界面
        return msg, None
    # <-------------- set format ------------->
    arxiv_id = url_.split('/abs/')[-1]
    if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
    cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
    if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id

    url_tar = url_.replace('/abs/', '/e-print/')
    translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
    extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
    os.makedirs(translation_dir, exist_ok=True)

    # <-------------- download arxiv source file ------------->
    dst = pj(translation_dir, arxiv_id + '.tar')
    if os.path.exists(dst):
        yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history)  # 刷新界面
    else:
        yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history)  # 刷新界面
        proxies = get_conf('proxies')
        r = requests.get(url_tar, proxies=proxies)
        with open(dst, 'wb+') as f:
            f.write(r.content)
    # <-------------- extract file ------------->
    yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history)  # 刷新界面
    from toolbox import extract_archive
    extract_archive(file_path=dst, dest_dir=extract_dst)
    return extract_dst, arxiv_id
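The two `is_float` checks above normalize a bare arxiv ID into a full `/abs/` URL before any download happens. For illustration (these inputs are invented):

```python
# Illustrative inputs -> what the normalization above produces:
#   '1707.06690'      -> 'https://arxiv.org/abs/1707.06690'   (bare ID, first branch)
#   '1707.06690v2'    -> 'https://arxiv.org/abs/1707.06690'   (versioned ID: txt[:10] keeps YYMM.NNNNN)
#   '/home/me/paper'  -> returned unchanged and treated as a local folder (no '/abs/' prefix added)
```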

def pdf2tex_project(pdf_file_path):
    # Mathpix API credentials
    app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
    headers = {"app_id": app_id, "app_key": app_key}

    # Step 1: Send PDF file for processing
    options = {
        "conversion_formats": {"tex.zip": True},
        "math_inline_delimiters": ["$", "$"],
        "rm_spaces": True
    }

    response = requests.post(url="https://api.mathpix.com/v3/pdf",
                             headers=headers,
                             data={"options_json": json.dumps(options)},
                             files={"file": open(pdf_file_path, "rb")})

    if response.ok:
        pdf_id = response.json()["pdf_id"]
        print(f"PDF processing initiated. PDF ID: {pdf_id}")

        # Step 2: Check processing status
        while True:
            conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
            conversion_data = conversion_response.json()

            if conversion_data["status"] == "completed":
                print("PDF processing completed.")
                break
            elif conversion_data["status"] == "error":
                print("Error occurred during processing.")
                return None  # bail out instead of polling an "error" status forever
            else:
                print(f"Processing status: {conversion_data['status']}")
                time.sleep(5)  # wait for a few seconds before checking again

        # Step 3: Save results to local files
        output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
        response = requests.get(url, headers=headers)
        file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
        output_name = f"{file_name_wo_dot}.tex.zip"
        output_path = os.path.join(output_dir, output_name)
        with open(output_path, "wb") as output_file:
            output_file.write(response.content)
        print(f"tex.zip file saved at: {output_path}")

        import zipfile
        unzip_dir = os.path.join(output_dir, file_name_wo_dot)
        with zipfile.ZipFile(output_path, 'r') as zip_ref:
            zip_ref.extractall(unzip_dir)

        return unzip_dir

    else:
        print(f"Error sending PDF for processing. Status code: {response.status_code}")
        return None
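`pdf2tex_project` polls the Mathpix status endpoint in an unbounded loop. A bounded variant (a sketch only, assuming the same `/v3/pdf/{pdf_id}` endpoint and headers as above) would cap the total wait:

```python
import time
import requests

def wait_for_mathpix(pdf_id, headers, max_wait_sec=600, poll_interval=5):
    # Sketch of a bounded polling loop: True on success, False on error or timeout.
    deadline = time.time() + max_wait_sec
    while time.time() < deadline:
        status = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers).json()["status"]
        if status == "completed":
            return True
        if status == "error":
            return False
        time.sleep(poll_interval)
    return False  # timed out
```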

# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append(["函数插件功能?",
                    "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id=None)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='proofread_en',
                                      switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req.replace("--no-cache", "", 1).strip()  # drop the flag itself; the original `more_req.lstrip(...)` discarded its result, and lstrip strips characters rather than a prefix
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    try:
        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    except tarfile.ReadError as e:
        yield from update_ui_lastest_msg(
            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
            chatbot=chatbot, history=history)
        return

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='translate_zh',
                                      switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req.replace("--no-cache", "", 1).strip()  # drop the flag itself; the original `more_req.lstrip(...)` discarded its result, and lstrip strips characters rather than a prefix
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    if len(file_manifest) != 1:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
    if len(app_id) == 0 or len(app_key) == 0:
        report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    hash_tag = map_file_to_sha256(file_manifest[0])

    # <-------------- check repeated pdf ------------->
    chatbot.append([f"检查PDF是否被重复上传", "正在检查..."])
    yield from update_ui(chatbot=chatbot, history=history)
    repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)

    except_flag = False

    if repeat:
        yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)

        try:
            trans_html_file = [f for f in glob.glob(f'{project_folder}/**/*.trans.html', recursive=True)][0]
            promote_file_to_downloadzone(trans_html_file, rename_file=None, chatbot=chatbot)

            translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
            promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)

            comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
            promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)

            zip_res = zip_result(project_folder)
            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

            return True

        except:
            report_exception(chatbot, history, b=f"发现重复上传,但是无法找到相关文件")
            yield from update_ui(chatbot=chatbot, history=history)

            chatbot.append([f"没有相关文件", '尝试重新翻译PDF...'])
            yield from update_ui(chatbot=chatbot, history=history)

            except_flag = True

    if not repeat or except_flag:  # a plain `if` rather than `elif`: when the cached-file lookup above fails, except_flag is True and the retranslation below must still run
        yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)

        # <-------------- convert pdf into tex ------------->
        chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
        yield from update_ui(chatbot=chatbot, history=history)
        project_folder = pdf2tex_project(file_manifest[0])
        if project_folder is None:
            report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"PDF转换为tex项目失败")
            yield from update_ui(chatbot=chatbot, history=history)
            return False

        # <-------------- translate latex file into Chinese ------------->
        yield from update_ui_lastest_msg("正在将tex项目翻译为中文...", chatbot=chatbot, history=history)
        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
        if len(file_manifest) == 0:
            report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
            yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
            return

        # <-------------- if is a zip/tar file ------------->
        project_folder = desend_to_extracted_folder_if_exist(project_folder)

        # <-------------- move latex project away from temp folder ------------->
        project_folder = move_project(project_folder)

        # <-------------- set a hash tag for repeat-checking ------------->
        with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
            f.write(hash_tag)  # the with-block closes the file; no explicit f.close() is needed

        # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
        if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
            yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                          chatbot, history, system_prompt, mode='translate_zh',
                                          switch_prompt=_switch_prompt_)

        # <-------------- compile PDF ------------->
        yield from update_ui_lastest_msg("正在将翻译好的tex项目编译为PDF...", chatbot=chatbot, history=history)
        success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                       main_file_modified='merge_translate_zh', mode='translate_zh',
                                       work_folder_original=project_folder, work_folder_modified=project_folder,
                                       work_folder=project_folder)

        # <-------------- zip PDF ------------->
        zip_res = zip_result(project_folder)
        if success:
            chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
            yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
        else:
            chatbot.append((f"失败了",
                            '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
            yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
            promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

        # <-------------- we are done ------------->
        return success
@@ -1,306 +0,0 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
    """
    Generate prompts and system prompts based on the mode for proofreading or translating.
    Args:
    - pfg: Proofreader or Translator instance.
    - mode: A string specifying the mode, either 'proofread' or 'translate_zh'.

    Returns:
    - inputs_array: A list of strings containing prompts for users to respond to.
    - sys_prompt_array: A list of strings containing prompts for system prompts.
    """
    n_split = len(pfg.sp_file_contents)
    if mode == 'proofread_en':
        inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
                        r"Answer me only with the revised text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif mode == 'translate_zh':
        inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                        r"Answer me only with the translated text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
    else:
        assert False, "未知指令"
    return inputs_array, sys_prompt_array


def desend_to_extracted_folder_if_exist(project_folder):
    """
    Descend into the extracted folder if it exists, otherwise return the original folder.

    Args:
    - project_folder: A string specifying the folder path.

    Returns:
    - A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
    """
    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
    if len(maybe_dir) == 0: return project_folder
    if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
    return project_folder


def move_project(project_folder, arxiv_id=None):
    """
    Create a new work folder and copy the project folder to it.

    Args:
    - project_folder: A string specifying the folder path of the project.

    Returns:
    - A string specifying the path to the new work folder.
    """
    import shutil, time
    time.sleep(2)  # avoid time string conflict
    if arxiv_id is not None:
        new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
    else:
        new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
    try:
        shutil.rmtree(new_workfolder)
    except:
        pass

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder,'*'))
    items = [item for item in items if os.path.basename(item)!='__MACOSX']
    if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

    shutil.copytree(src=project_folder, dst=new_workfolder)
    return new_workfolder


def arxiv_download(chatbot, history, txt, allow_cache=True):
    def check_cached_translation_pdf(arxiv_id):
        translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
        if not os.path.exists(translation_dir):
            os.makedirs(translation_dir)
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            target_file_compare = pj(translation_dir, 'comparison.pdf')
            if os.path.exists(target_file_compare):
                promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
            return target_file
        return False

    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False

    if ('.' in txt) and ('/' not in txt) and is_float(txt):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]
    if not txt.startswith('https://arxiv.org'):
        return txt, None

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
    yield from update_ui(chatbot=chatbot, history=history)
    time.sleep(1)  # 刷新界面

    url_ = txt  # https://arxiv.org/abs/1707.06690
    if not txt.startswith('https://arxiv.org/abs/'):
        msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}。"
        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history)  # 刷新界面
        return msg, None
    # <-------------- set format ------------->
    arxiv_id = url_.split('/abs/')[-1]
    if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
    cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
    if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id

    url_tar = url_.replace('/abs/', '/e-print/')
    translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
    extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
    os.makedirs(translation_dir, exist_ok=True)

    # <-------------- download arxiv source file ------------->
    dst = pj(translation_dir, arxiv_id+'.tar')
    if os.path.exists(dst):
        yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history)  # 刷新界面
    else:
        yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history)  # 刷新界面
        proxies = get_conf('proxies')
        r = requests.get(url_tar, proxies=proxies)
        with open(dst, 'wb+') as f:
            f.write(r.content)
    # <-------------- extract file ------------->
    yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history)  # 刷新界面
    from toolbox import extract_archive
    extract_archive(file_path=dst, dest_dir=extract_dst)
    return extract_dst, arxiv_id
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([ "函数插件功能?",
        "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
            f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id=None)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req.lstrip("--no-cache")
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
            f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success
@@ -35,7 +35,11 @@ def gpt_academic_generate_oai_reply(
 class AutoGenGeneral(PluginMultiprocessManager):
     def gpt_academic_print_override(self, user_proxy, message, sender):
         # ⭐⭐ run in subprocess
-        self.child_conn.send(PipeCom("show", sender.name + "\n\n---\n\n" + message["content"]))
+        try:
+            print_msg = sender.name + "\n\n---\n\n" + message["content"]
+        except:
+            print_msg = sender.name + "\n\n---\n\n" + message
+        self.child_conn.send(PipeCom("show", print_msg))

     def gpt_academic_get_human_input(self, user_proxy, message):
         # ⭐⭐ run in subprocess
@@ -62,33 +66,33 @@ class AutoGenGeneral(PluginMultiprocessManager):
     def exe_autogen(self, input):
         # ⭐⭐ run in subprocess
         input = input.content
-        with ProxyNetworkActivate("AutoGen"):
-            code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
-            agents = self.define_agents()
-            user_proxy = None
-            assistant = None
-            for agent_kwargs in agents:
-                agent_cls = agent_kwargs.pop('cls')
-                kwargs = {
-                    'llm_config':self.llm_kwargs,
-                    'code_execution_config':code_execution_config
-                }
-                kwargs.update(agent_kwargs)
-                agent_handle = agent_cls(**kwargs)
-                agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
-                for d in agent_handle._reply_func_list:
-                    if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
-                        d['reply_func'] = gpt_academic_generate_oai_reply
-                if agent_kwargs['name'] == 'user_proxy':
-                    agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
-                    user_proxy = agent_handle
-                if agent_kwargs['name'] == 'assistant': assistant = agent_handle
-            try:
-                if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
-                user_proxy.initiate_chat(assistant, message=input)
-            except Exception as e:
-                tb_str = '```\n' + trimmed_format_exc() + '```'
-                self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))
+        code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
+        agents = self.define_agents()
+        user_proxy = None
+        assistant = None
+        for agent_kwargs in agents:
+            agent_cls = agent_kwargs.pop('cls')
+            kwargs = {
+                'llm_config':self.llm_kwargs,
+                'code_execution_config':code_execution_config
+            }
+            kwargs.update(agent_kwargs)
+            agent_handle = agent_cls(**kwargs)
+            agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
+            for d in agent_handle._reply_func_list:
+                if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
+                    d['reply_func'] = gpt_academic_generate_oai_reply
+            if agent_kwargs['name'] == 'user_proxy':
+                agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
+                user_proxy = agent_handle
+            if agent_kwargs['name'] == 'assistant': assistant = agent_handle
+        try:
+            if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
+            with ProxyNetworkActivate("AutoGen"):
+                user_proxy.initiate_chat(assistant, message=input)
+        except Exception as e:
+            tb_str = '```\n' + trimmed_format_exc() + '```'
+            self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))

     def subprocess_worker(self, child_conn):
         # ⭐⭐ run in subprocess
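The try/except added in the first hunk guards against AutoGen handing `gpt_academic_print_override` a plain string instead of a dict with a `content` key. A slightly more explicit sketch of the same fallback (the helper name is hypothetical):

```python
# Equivalent sketch: accept either a dict-style message or a bare string.
def format_agent_message(sender_name, message):
    content = message["content"] if isinstance(message, dict) else message
    return sender_name + "\n\n---\n\n" + content
```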
@@ -9,7 +9,7 @@ class PipeCom:
 

 class PluginMultiprocessManager:
-    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
         # ⭐ run in main process
         self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
         self.previous_work_dir_files = {}
@@ -18,7 +18,7 @@ class PluginMultiprocessManager:
         self.chatbot = chatbot
         self.history = history
         self.system_prompt = system_prompt
-        # self.web_port = web_port
+        # self.user_request = user_request
         self.alive = True
         self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
         self.last_user_input = ""
@@ -32,7 +32,7 @@ def string_to_options(arguments):
     return args


 @CatchException
-def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
+    user_request    当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
@@ -80,7 +80,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst


 @CatchException
-def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -88,7 +88,7 @@ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
+    user_request    当前用户的请求信息(IP地址等)
     """
     import subprocess
     history = []    # 清空历史,以免输入溢出
@@ -135,13 +135,25 @@ def request_gpt_model_in_new_thread_with_ui_alive(
     yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息
     return final_result

-def can_multi_process(llm):
-    if llm.startswith('gpt-'): return True
-    if llm.startswith('api2d-'): return True
-    if llm.startswith('azure-'): return True
-    if llm.startswith('spark'): return True
-    if llm.startswith('zhipuai'): return True
-    return False
+def can_multi_process(llm) -> bool:
+    from request_llms.bridge_all import model_info
+
+    def default_condition(llm) -> bool:
+        # legacy condition
+        if llm.startswith('gpt-'): return True
+        if llm.startswith('api2d-'): return True
+        if llm.startswith('azure-'): return True
+        if llm.startswith('spark'): return True
+        if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
+        return False
+
+    if llm in model_info:
+        if 'can_multi_thread' in model_info[llm]:
+            return model_info[llm]['can_multi_thread']
+        else:
+            return default_condition(llm)
+    else:
+        return default_condition(llm)

 def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array, inputs_show_user_array, llm_kwargs,
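The rewritten `can_multi_process` consults the `model_info` registry first and only falls back to the legacy prefix rules. A minimal sketch of the lookup order (the registry entries here are hypothetical, not the project's real registry):

```python
# Hypothetical registry entries, for illustration only:
model_info = {
    "my-local-llm":  {"can_multi_thread": False},  # explicit flag wins over prefix rules
    "gpt-3.5-turbo": {},                           # no flag -> legacy prefix rule applies
}
# can_multi_process("my-local-llm")  -> False  (flag found in model_info)
# can_multi_process("gpt-3.5-turbo") -> True   (default_condition: startswith 'gpt-')
# can_multi_process("unknown-model") -> False  (not registered, no matching prefix)
```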
@@ -284,8 +296,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         # 在前端打印些好玩的东西
         for thread_index, _ in enumerate(worker_done):
             print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                replace('\n', '').replace('`', '.').replace(
-                    ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
+                replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
             observe_win.append(print_something_really_funny)
         # 在前端打印些好玩的东西
         stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
@@ -0,0 +1,122 @@
import os
from textwrap import indent

class FileNode:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.is_leaf = False
        self.level = 0
        self.parenting_ship = []
        self.comment = ""
        self.comment_maxlen_show = 50

    @staticmethod
    def add_linebreaks_at_spaces(string, interval=10):
        return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))

    def sanitize_comment(self, comment):
        if len(comment) > self.comment_maxlen_show: suf = '...'
        else: suf = ''
        comment = comment[:self.comment_maxlen_show]
        comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('`', '').replace('$', '')
        comment = self.add_linebreaks_at_spaces(comment, 10)
        return '`' + comment + suf + '`'

    def add_file(self, file_path, file_comment):
        directory_names, file_name = os.path.split(file_path)
        current_node = self
        level = 1
        if directory_names == "":
            new_node = FileNode(file_name)
            current_node.children.append(new_node)
            new_node.is_leaf = True
            new_node.comment = self.sanitize_comment(file_comment)
            new_node.level = level
            current_node = new_node
        else:
            dnamesplit = directory_names.split(os.sep)
            for i, directory_name in enumerate(dnamesplit):
                found_child = False
                level += 1
                for child in current_node.children:
                    if child.name == directory_name:
                        current_node = child
                        found_child = True
                        break
                if not found_child:
                    new_node = FileNode(directory_name)
                    current_node.children.append(new_node)
                    new_node.level = level - 1
                    current_node = new_node
            term = FileNode(file_name)
            term.level = level
            term.comment = self.sanitize_comment(file_comment)
            term.is_leaf = True
            current_node.children.append(term)

    def print_files_recursively(self, level=0, code="R0"):
        print(' '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
        for j, child in enumerate(self.children):
            child.print_files_recursively(level=level+1, code=code+str(j))
            self.parenting_ship.extend(child.parenting_ship)
            p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            p2 = """ --> """
            p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
            edge_code = p1 + p2 + p3
            if edge_code in self.parenting_ship:
                continue
            self.parenting_ship.append(edge_code)
        if self.comment != "":
            pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            pc2 = f""" -.-x """
            pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
            edge_code = pc1 + pc2 + pc3
            self.parenting_ship.append(edge_code)


MERMAID_TEMPLATE = r"""
```mermaid
flowchart LR
    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
    classDef Comment stroke-dasharray: 5 5
    subgraph {graph_name}
{relationship}
    end
```
"""

def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
    # Create the root node
    file_tree_struct = FileNode("root")
    # Build the tree structure
    for file_path, file_comment in zip(file_manifest, file_comments):
        file_tree_struct.add_file(file_path, file_comment)
    file_tree_struct.print_files_recursively()
    cc = "\n".join(file_tree_struct.parenting_ship)
    ccc = indent(cc, prefix=" "*8)
    return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)

if __name__ == "__main__":
    # File manifest
    file_manifest = [
        "cradle_void_terminal.ipynb",
        "tests/test_utils.py",
        "tests/test_plugins.py",
        "tests/test_llms.py",
        "config.py",
        "build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
        "crazy_functions/latex_fns/latex_actions.py",
        "crazy_functions/latex_fns/latex_toolbox.py"
    ]
    file_comments = [
        "根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
        "包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
        "用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
        "包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
        "用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
        "是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
        "用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
        "包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
    ]
    print(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))
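For a sense of what `build_file_tree_mermaid_diagram` emits, a two-file manifest would produce roughly the following (a hand-written approximation; node codes follow the `R0`, `R00`, ... scheme above, and comment nodes use the dashed `Comment` class):

```mermaid
flowchart LR
    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
    classDef Comment stroke-dasharray: 5 5
    subgraph 项目文件树
        R00["🗎config.py"] -.-x CR00["`config`"]:::Comment
        R0[["📁root"]] --> R00["🗎config.py"]
        R010["🗎test_llms.py"] -.-x CR010["`tests`"]:::Comment
        R01[["📁tests"]] --> R010["🗎test_llms.py"]
        R0[["📁root"]] --> R01[["📁tests"]]
    end
```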
@@ -0,0 +1,85 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re

def extract_text_from_files(txt, chatbot, history):
    """
    查找pdf/md/word并获取文本内容并返回状态以及文本

    输入参数 Args:
        chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化)
        history (list): List of chat history (历史,对话历史列表)

    输出 Returns:
        文件是否存在(bool)
        final_result(list):文本内容
        page_one(list):第一页内容/摘要
        file_manifest(list):文件路径
        excption(string):需要用户手动处理的信息,如没出错则保持为空
    """

    final_result = []
    page_one = []
    file_manifest = []
    excption = ""

    if txt == "":
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, excption  # 如输入区内容不是文件则直接返回输入区内容

    # 查找输入区内容中的文件
    file_pdf, pdf_manifest, folder_pdf = get_files_from_everything(txt, '.pdf')
    file_md, md_manifest, folder_md = get_files_from_everything(txt, '.md')
    file_word, word_manifest, folder_word = get_files_from_everything(txt, '.docx')
    file_doc, doc_manifest, folder_doc = get_files_from_everything(txt, '.doc')

    if file_doc:
        excption = "word"
        return False, final_result, page_one, file_manifest, excption

    file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
    if file_num == 0:
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, excption  # 如输入区内容不是文件则直接返回输入区内容

    if file_pdf:
        try:  # 尝试导入依赖,如果缺少依赖,则给出安装建议
            import fitz
        except:
            excption = "pdf"
            return False, final_result, page_one, file_manifest, excption
        for index, fp in enumerate(pdf_manifest):
            file_content, pdf_one = read_and_clean_pdf_text(fp)  # (尝试)按照章节切割PDF
            file_content = file_content.encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
            pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
            final_result.append(file_content)
            page_one.append(pdf_one)
            file_manifest.append(os.path.relpath(fp, folder_pdf))

    if file_md:
        for index, fp in enumerate(md_manifest):
            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
                file_content = f.read()
            file_content = file_content.encode('utf-8', 'ignore').decode()
            headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE)  # 接下来提取md中的一级/二级标题作为摘要
            if len(headers) > 0:
                page_one.append("\n".join(headers))  # 合并所有的标题,以换行符分割
            else:
                page_one.append("")
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_md))

    if file_word:
        try:  # 尝试导入依赖,如果缺少依赖,则给出安装建议
            from docx import Document
        except:
            excption = "word_pip"
            return False, final_result, page_one, file_manifest, excption
        for index, fp in enumerate(word_manifest):
            doc = Document(fp)
            file_content = '\n'.join([p.text for p in doc.paragraphs])
            file_content = file_content.encode('utf-8', 'ignore').decode()
            page_one.append(file_content[:200])
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_word))

    return True, final_result, page_one, file_manifest, excption
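上面的 extract_text_from_files 返回五元组,调用方应先检查 excption 再消费结果。下面是一个最小调用草稿(路径 ./my_docs 为假设值;chatbot/history 在该函数内部未被使用,传空列表即可):

```python
from crazy_functions.pdf_fns.parse_word import extract_text_from_files

# 示意:./my_docs 为假设路径,目录下可混合放置 pdf/md/docx 文件
ok, texts, summaries, manifest, excption = extract_text_from_files("./my_docs", chatbot=[], history=[])
if excption:
    print(f"需要用户手动处理: {excption}")   # 例如 "pdf" 表示缺少 pymupdf 依赖
elif ok:
    for path, summary in zip(manifest, summaries):
        print(path, "->", summary[:50])
```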
@@ -130,7 +130,7 @@ def get_name(_url_):


 @CatchException
-def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
     import glob
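从这一处开始,大量插件函数把最后一个形参从 web_port 改为 user_request(当前用户的请求信息,如IP地址)。一个符合新签名的最小插件草稿如下(函数名“示例插件”为假设;CatchException 与 update_ui 即本 diff 中多处出现的 toolbox 接口):

```python
from toolbox import CatchException, update_ui

@CatchException
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # user_request 携带请求来源信息(IP地址等),不再是端口号
    history = []  # 清空历史,以免输入溢出
    chatbot.append((txt, "[Local Message] 插件已收到请求。"))
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
```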
@@ -5,7 +5,7 @@ from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing

 @CatchException
-def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     from crazy_functions.game_fns.game_interactive_story import MiniGame_ResumeStory
     # 清空历史
     history = []
@@ -23,7 +23,7 @@ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_


 @CatchException
-def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     from crazy_functions.game_fns.game_ascii_art import MiniGame_ASCII_Art
     # 清空历史
     history = []
@@ -3,7 +3,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive


 @CatchException
-def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -11,7 +11,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
@@ -139,7 +139,7 @@ def get_recent_file_prompt_support(chatbot):
     return path

 @CatchException
-def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -147,7 +147,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     # 清空历史
@@ -4,7 +4,7 @@ from .crazy_utils import input_clipping
 import copy, json

 @CatchException
-def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -12,7 +12,7 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     chatbot 聊天显示框的句柄, 用于显示给用户
     history 聊天历史, 前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 清空历史, 以免输入溢出
     history = []
@@ -93,7 +93,7 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da


 @CatchException
-def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -101,7 +101,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     if prompt.strip() == "":
@@ -123,7 +123,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys


 @CatchException
-def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     if prompt.strip() == "":
         chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
@@ -209,7 +209,7 @@ class ImageEditState(GptAcademicState):
         return all([x['value'] is not None for x in self.req])

 @CatchException
-def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 尚未完成
     history = []    # 清空历史
     state = ImageEditState.get_state(chatbot, ImageEditState)
@@ -21,7 +21,7 @@ def remove_model_prefix(llm):


 @CatchException
-def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -29,7 +29,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 检查当前的模型是否符合要求
     supported_llms = [
@@ -51,13 +51,6 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     if model_info[llm_kwargs['llm_model']]["endpoint"] is not None: # 如果不是本地模型,加载API_KEY
         llm_kwargs['api_key'] = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])

-    # 检查当前的模型是否符合要求
-    API_URL_REDIRECT = get_conf('API_URL_REDIRECT')
-    if len(API_URL_REDIRECT) > 0:
-        chatbot.append([f"处理任务: {txt}", f"暂不支持中转."])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import autogen
@@ -96,7 +89,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
         history = []
         chatbot.append(["正在启动: 多智能体终端", "插件动态生成, 执行开始, 作者 Microsoft & Binary-Husky."])
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+        executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
         persistent_class_multi_user_manager.set(persistent_key, executor)
         exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="create")
@@ -69,7 +69,7 @@ def read_file_to_chat(chatbot, history, file_name):
     return chatbot, history

 @CatchException
-def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -77,7 +77,7 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     chatbot.append(("保存当前对话",
@@ -91,7 +91,7 @@ def hide_cwd(str):
     return str.replace(current_path, replace_path)

 @CatchException
-def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -99,7 +99,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     from .crazy_utils import get_files_from_everything
     success, file_manifest, _ = get_files_from_everything(txt, type='.html')
@@ -126,7 +126,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         return

 @CatchException
-def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -134,7 +134,7 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     import glob, os
@@ -79,7 +79,7 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot


 @CatchException
-def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
@@ -153,7 +153,7 @@ def get_files_from_everything(txt, preference=''):


 @CatchException
-def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -193,7 +193,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -226,7 +226,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -101,7 +101,7 @@ do not have too much repetitive information, numerical values using the original


 @CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
@@ -124,7 +124,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo


 @CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
@@ -48,7 +48,7 @@ def markdown_to_dict(article_content):


 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
@@ -10,7 +10,7 @@ import os


 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
@@ -50,7 +50,7 @@ def get_code_block(reply):
     return matches[0].strip('python') # code block

 @CatchException
-def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -58,7 +58,7 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 清空历史,以免输入溢出
     history = []
@@ -63,7 +63,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro


 @CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
@@ -36,7 +36,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,


 @CatchException
-def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
296 crazy_functions/生成多种Mermaid图表.py 普通文件
@@ -0,0 +1,296 @@
from toolbox import CatchException, update_ui, report_exception
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
import datetime

#以下是每类图表的PROMPT
SELECT_PROMPT = """
“{subject}”
=============
以上是从文章中提取的摘要,将会使用这些摘要绘制图表。请你选择一个合适的图表类型:
1 流程图
2 序列图
3 类图
4 饼图
5 甘特图
6 状态图
7 实体关系图
8 象限提示图
不需要解释原因,仅需要输出单个不带任何标点符号的数字。
"""
#没有思维导图!!!测试发现模型始终会优先选择思维导图
#流程图
PROMPT_1 = """
请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
```mermaid
graph TD
    P(编程) --> L1(Python)
    P(编程) --> L2(C)
    P(编程) --> L3(C++)
    P(编程) --> L4(Javascipt)
    P(编程) --> L5(PHP)
```
"""
#序列图
PROMPT_2 = """
请你给出围绕“{subject}”的序列图,使用mermaid语法,mermaid语法举例:
```mermaid
sequenceDiagram
    participant A as 用户
    participant B as 系统
    A->>B: 登录请求
    B->>A: 登录成功
    A->>B: 获取数据
    B->>A: 返回数据
```
"""
#类图
PROMPT_3 = """
请你给出围绕“{subject}”的类图,使用mermaid语法,mermaid语法举例:
```mermaid
classDiagram
    Class01 <|-- AveryLongClass : Cool
    Class03 *-- Class04
    Class05 o-- Class06
    Class07 .. Class08
    Class09 --> C2 : Where am i?
    Class09 --* C3
    Class09 --|> Class07
    Class07 : equals()
    Class07 : Object[] elementData
    Class01 : size()
    Class01 : int chimp
    Class01 : int gorilla
    Class08 <--> C2: Cool label
```
"""
#饼图
PROMPT_4 = """
请你给出围绕“{subject}”的饼图,使用mermaid语法,mermaid语法举例:
```mermaid
pie title Pets adopted by volunteers
    "狗" : 386
    "猫" : 85
    "兔子" : 15
```
"""
#甘特图
PROMPT_5 = """
请你给出围绕“{subject}”的甘特图,使用mermaid语法,mermaid语法举例:
```mermaid
gantt
    title 项目开发流程
    dateFormat  YYYY-MM-DD
    section 设计
    需求分析 :done, des1, 2024-01-06,2024-01-08
    原型设计 :active, des2, 2024-01-09, 3d
    UI设计 : des3, after des2, 5d
    section 开发
    前端开发 :2024-01-20, 10d
    后端开发 :2024-01-20, 10d
```
"""
#状态图
PROMPT_6 = """
请你给出围绕“{subject}”的状态图,使用mermaid语法,mermaid语法举例:
```mermaid
stateDiagram-v2
    [*] --> Still
    Still --> [*]
    Still --> Moving
    Moving --> Still
    Moving --> Crash
    Crash --> [*]
```
"""
#实体关系图
PROMPT_7 = """
请你给出围绕“{subject}”的实体关系图,使用mermaid语法,mermaid语法举例:
```mermaid
erDiagram
    CUSTOMER ||--o{ ORDER : places
    ORDER ||--|{ LINE-ITEM : contains
    CUSTOMER {
        string name
        string id
    }
    ORDER {
        string orderNumber
        date orderDate
        string customerID
    }
    LINE-ITEM {
        number quantity
        string productID
    }
```
"""
#象限提示图
PROMPT_8 = """
请你给出围绕“{subject}”的象限图,使用mermaid语法,mermaid语法举例:
```mermaid
graph LR
    A[Hard skill] --> B(Programming)
    A[Hard skill] --> C(Design)
    D[Soft skill] --> E(Coordination)
    D[Soft skill] --> F(Communication)
```
"""
#思维导图
PROMPT_9 = """
{subject}
==========
请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,mermaid语法举例:
```mermaid
mindmap
  root((mindmap))
    Origins
      Long history
      ::icon(fa fa-book)
      Popularisation
        British popular psychology author Tony Buzan
    Research
      On effectiveness<br/>and features
      On Automatic creation
    Uses
      Creative techniques
      Strategic planning
      Argument mapping
    Tools
      Pen and paper
      Mermaid
```
"""

def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
    ############################## <第 0 步,切割输入> ##################################
    # 借用PDF切割中的函数对文本进行切割
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    txt = str(history).encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
    ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
    results = []
    MAX_WORD_TOTAL = 4096
    n_txt = len(txt)
    last_iteration_result = "从以下文本中提取摘要。"
    if n_txt >= 20: print('文章极长,不能达到预期效果')
    for i in range(n_txt):
        NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
        i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                           llm_kwargs, chatbot,
                                                                           history=["The main content of the previous section is?", last_iteration_result],  # 迭代上一次的结果
                                                                           sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese."  # 提示
                                                                           )
        results.append(gpt_say)
        last_iteration_result = gpt_say
    ############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    gpt_say = plugin_kwargs.get("advanced_arg", "")             #将图表类型参数赋值为插件参数
    results_txt = '\n'.join(results)                            #合并摘要
    if gpt_say not in ['1','2','3','4','5','6','7','8','9']:    #如插件参数不正确则使用对话模型判断
        i_say_show_user = f'接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制'; gpt_say = "[Local Message] 收到。"  # 用户提示
        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[])  # 更新UI
        i_say = SELECT_PROMPT.format(subject=results_txt)
        i_say_show_user = f'请判断适合使用的流程图类型,其中数字对应关系为:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图。由于不管提供文本是什么,模型大概率认为"思维导图"最合适,因此思维导图仅能通过参数调用。'
        for i in range(3):
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
                sys_prompt=""
            )
            if gpt_say in ['1','2','3','4','5','6','7','8','9']:    #判断返回是否正确
                break
        if gpt_say not in ['1','2','3','4','5','6','7','8','9']:
            gpt_say = '1'
    ############################## <第 3 步,根据选择的图表类型绘制图表> ##################################
    if gpt_say == '1':
        i_say = PROMPT_1.format(subject=results_txt)
    elif gpt_say == '2':
        i_say = PROMPT_2.format(subject=results_txt)
    elif gpt_say == '3':
        i_say = PROMPT_3.format(subject=results_txt)
    elif gpt_say == '4':
        i_say = PROMPT_4.format(subject=results_txt)
    elif gpt_say == '5':
        i_say = PROMPT_5.format(subject=results_txt)
    elif gpt_say == '6':
        i_say = PROMPT_6.format(subject=results_txt)
    elif gpt_say == '7':
        i_say = PROMPT_7.replace("{subject}", results_txt)  #由于实体关系图用到了{}符号
    elif gpt_say == '8':
        i_say = PROMPT_8.format(subject=results_txt)
    elif gpt_say == '9':
        i_say = PROMPT_9.format(subject=results_txt)
    i_say_show_user = f'请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。'
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say,
        inputs_show_user=i_say_show_user,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt=""
    )
    history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新

@CatchException
def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
    web_port 当前软件运行的端口号
    """
    import os

    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    if os.path.exists(txt):     #如输入区无内容则直接解析历史记录
        from crazy_functions.pdf_fns.parse_word import extract_text_from_files
        file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
    else:
        file_exist = False
        excption = ""
        file_manifest = []

    if excption != "":
        if excption == "word":
            report_exception(chatbot, history,
                             a = f"解析项目: {txt}",
                             b = f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。")

        elif excption == "pdf":
            report_exception(chatbot, history,
                             a = f"解析项目: {txt}",
                             b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")

        elif excption == "word_pip":
            report_exception(chatbot, history,
                             a=f"解析项目: {txt}",
                             b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")

        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    else:
        if not file_exist:
            history.append(txt)     #如输入区不是文件则将输入区内容加入历史记录
            i_say_show_user = f'首先你从历史记录中提取摘要。'; gpt_say = "[Local Message] 收到。"  # 用户提示
            chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
            yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
        else:
            file_num = len(file_manifest)
            for i in range(file_num):   #依次处理文件
                i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"; gpt_say = "[Local Message] 收到。"  # 用户提示
                chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
                history = []    #如输入区内容为文件则清空历史记录
                history.append(final_result[i])
                yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
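第 3 步中挑选 PROMPT 的 if/elif 链可以等价地写成查表形式。下面是一个行为一致的重构草稿(build_chart_prompt 为假设名称,并非文件中的真实函数;PROMPT_7 因示例中含 {} 符号而单独用 replace 处理):

```python
PROMPTS = {'1': PROMPT_1, '2': PROMPT_2, '3': PROMPT_3, '4': PROMPT_4,
           '5': PROMPT_5, '6': PROMPT_6, '8': PROMPT_8, '9': PROMPT_9}

def build_chart_prompt(chart_type: str, results_txt: str) -> str:
    # 实体关系图(7)的mermaid示例里用到了花括号,str.format会报错,故用replace
    if chart_type == '7':
        return PROMPT_7.replace("{subject}", results_txt)
    # 未命中的类型回退到流程图,与原逻辑的缺省行为一致
    return PROMPTS.get(chart_type, PROMPT_1).format(subject=results_txt)
```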
@@ -13,7 +13,7 @@ install_msg ="""
 """

 @CatchException
-def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -21,7 +21,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出

@@ -84,7 +84,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新

 @CatchException
-def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
+def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request=-1):
     # resolve deps
     try:
         # from zh_langchain import construct_vector_store
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text

 @CatchException
-def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,7 +63,7 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text

 @CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,7 +63,7 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
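两个联网问答插件都依赖 scrape_text(url, proxies) 抓取网页正文,其函数体未包含在本 diff 中。下面是一个功能同构的极简草稿(假设安装了 requests 与 beautifulsoup4,细节与项目内的真实实现可能不同):

```python
import requests
from bs4 import BeautifulSoup

def scrape_text(url, proxies) -> str:
    # 极简示意:抓取网页并抽取可见文本;真实实现还需处理编码、超时与异常分支
    headers = {'User-Agent': 'Mozilla/5.0'}
    resp = requests.get(url, headers=headers, proxies=proxies, timeout=15)
    resp.encoding = resp.apparent_encoding
    soup = BeautifulSoup(resp.text, 'html.parser')
    lines = (line.strip() for line in soup.get_text().splitlines())
    return '\n'.join(line for line in lines if line)
```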
@@ -104,7 +104,7 @@ def analyze_intention_with_simple_rules(txt):


 @CatchException
-def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     disable_auto_promotion(chatbot=chatbot)
     # 获取当前虚空终端状态
     state = VoidTerminalState.get_state(chatbot)
@@ -121,7 +121,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
         state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
         state.unlock_plugin(chatbot=chatbot)
         yield from update_ui(chatbot=chatbot, history=history)
-        yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+        yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
         return
     else:
         # 如果意图模糊,提示
@@ -133,7 +133,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt


-def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []
     chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -12,6 +12,12 @@ class PaperFileGroup():
         self.sp_file_index = []
         self.sp_file_tag = []

+        # count_token
+        from request_llms.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+        self.get_token_num = get_token_num
+
     def run_file_split(self, max_token_limit=1900):
         """
         将长文本分离开来
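新增的 get_token_num 以模型对应的 tokenizer 统计文本的 token 数,供 run_file_split 按 token 上限切分。一个脱离本项目也能运行的等价示意(假设已 pip install tiktoken;项目内实际是通过 model_info 取 tokenizer,统计口径一致):

```python
import tiktoken  # 假设:pip install tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def get_token_num(txt: str) -> int:
    # 与 PaperFileGroup 中新增的实现相同的统计方式
    return len(enc.encode(txt, disallowed_special=()))

print(get_token_num("将长文本分离开来"))  # 打印该字符串占用的 token 数
```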
@@ -109,7 +115,7 @@ def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

 @CatchException
-def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     chatbot.append([
         "函数插件功能?",
         "对IPynb文件进行解析。Contributor: codycjy."])
@@ -83,7 +83,8 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
             history=this_iteration_history_feed,   # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)

-    summary = "请用一句话概括这些文件的整体功能"
+    diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
+    summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
     summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
         inputs=summary,
         inputs_show_user=summary,
@@ -104,9 +105,12 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
     chatbot.append(("完成了吗?", res))
     yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面

+def make_diagram(this_iteration_files, result, this_iteration_history_feed):
+    from crazy_functions.diagram_fns.file_tree import build_file_tree_mermaid_diagram
+    return build_file_tree_mermaid_diagram(this_iteration_history_feed[0::2], this_iteration_history_feed[1::2], "项目示意图")
+
 @CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob
     file_manifest = [f for f in glob.glob('./*.py')] + \
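新增的 make_diagram 依赖 this_iteration_history_feed 的“文件内容-分析结果”交替结构:偶数下标是文件、奇数下标是对应点评。切片行为示意(数据为假设值):

```python
# 假设的历史记录:问答交替排列
history_feed = ["file_a.py 的内容", "对 file_a.py 的分析",
                "file_b.py 的内容", "对 file_b.py 的分析"]

files    = history_feed[0::2]   # 取偶数位 -> 文件内容列表
comments = history_feed[1::2]   # 取奇数位 -> 对应分析列表
print(list(zip(files, comments)))
```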
@@ -119,7 +123,7 @@ def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -137,7 +141,7 @@ def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -155,7 +159,7 @@ def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -175,7 +179,7 @@ def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, his
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -197,7 +201,7 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system


 @CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -219,7 +223,7 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys


 @CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -248,7 +252,7 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s


 @CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -269,7 +273,7 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -289,7 +293,7 @@ def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -311,7 +315,7 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst


 @CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -331,7 +335,7 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s


 @CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     txt_pattern = plugin_kwargs.get("advanced_arg")
     txt_pattern = txt_pattern.replace(",", ",")
     # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
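以上十余个“解析一个X项目”插件的主体逻辑相同:按后缀收集文件清单,再交给 解析源代码新。其共性可以抽象成下面这个示意函数(collect_manifest 为假设名称,并非项目内的真实函数;项目内实际用 glob 递归匹配各自的后缀组合):

```python
import glob

def collect_manifest(project_folder, suffixes):
    # 示意:递归收集指定后缀的文件,等价于各插件中重复出现的收集写法
    manifest = []
    for suf in suffixes:
        manifest += glob.glob(f'{project_folder}/**/*{suf}', recursive=True)
    return manifest

print(collect_manifest('.', ['.py']))   # 收集当前目录下所有Python文件
```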
@@ -341,9 +345,12 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
     pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
     # 将要忽略匹配的文件名(例如: ^README.md)
-    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.")  # 移除左边通配符,移除右侧逗号,转义点号
+                           for _ in txt_pattern.split(" ")  # 以空格分割
+                           if (_ != "" and _.strip().startswith("^") and not _.strip().startswith("^*."))  # ^开始,但不是^*.开始
+                           ]
     # 生成正则表达式
-    pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+    pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
     pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''

     history.clear()
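该改动把正则改写为原始字符串(r'...')以消除 \. 的转义告警,并把列表推导拆行加注释,匹配行为不变。下面用假设的输入演示 pattern_except 的实际效果:

```python
import re

# 假设:用户未额外指定后缀;忽略文件名为 README.md
pattern_except_suffix = ['zip', 'rar', '7z', 'tar', 'gz']
pattern_except_name = [r'README\.md']

pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''

print(bool(re.search(pattern_except, '/proj/a.zip')))      # True:压缩包被排除
print(bool(re.search(pattern_except, '/proj/README.md')))  # True:命中忽略文件名
print(bool(re.search(pattern_except, '/proj/main.py')))    # False:正常参与解析
```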
@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime
 @CatchException
-def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -10,7 +10,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     MULTI_QUERY_LLM_MODELS = get_conf('MULTI_QUERY_LLM_MODELS')
@@ -32,7 +32,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt


 @CatchException
-def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
@@ -166,7 +166,7 @@ class InterviewAssistant(AliyunASR):


 @CatchException
-def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # pip install -U openai-whisper
     chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -44,7 +44,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo


 @CatchException
-def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -132,7 +132,7 @@ def get_meta_information(url, chatbot, history):
     return profile

 @CatchException
-def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     disable_auto_promotion(chatbot=chatbot)
     # 基本信息:功能、贡献者
     chatbot.append([
@@ -11,7 +11,7 @@ import os


 @CatchException
-def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     if txt:
         show_say = txt
         prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
@@ -32,7 +32,7 @@ def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt


 @CatchException
-def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     chatbot.append(['清除本地缓存数据', '执行中. 删除数据'])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -1,19 +1,47 @@
 from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime
+
+高阶功能模板函数示意图 = f"""
+```mermaid
+flowchart TD
+    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
+    subgraph 函数调用["函数调用过程"]
+        AA["输入栏用户输入的文本(txt)"] --> BB["gpt模型参数(llm_kwargs)"]
+        BB --> CC["插件模型参数(plugin_kwargs)"]
+        CC --> DD["对话显示框的句柄(chatbot)"]
+        DD --> EE["对话历史(history)"]
+        EE --> FF["系统提示词(system_prompt)"]
+        FF --> GG["当前用户信息(web_port)"]
+
+        A["开始(查询5天历史事件)"]
+        A --> B["获取当前月份和日期"]
+        B --> C["生成历史事件查询提示词"]
+        C --> D["调用大模型"]
+        D --> E["更新界面"]
+        E --> F["记录历史"]
+        F --> |"下一天"| B
+    end
+```
+"""
+
 @CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
+    # 高阶功能模板函数示意图:https://mermaid.live/edit#pako:eNptk1tvEkEYhv8KmattQpvlvOyFCcdeeaVXuoYssBwie8gyhCIlqVoLhrbbtAWNUpEGUkyMEDW2Fmn_DDOL_8LZHdOwxrnamX3f7_3mmZk6yKhZCfAgV1KrmYKoQ9fDuKC4yChX0nld1Aou1JzjznQ5fWmejh8LYHW6vG2a47YAnlCLNSIRolnenKBXI_zRIBrcuqRT890u7jZx7zMDt-AaMbnW1--5olGiz2sQjwfoQxsZL0hxplSSU0-rop4vrzmKR6O2JxYjHmwcL2Y_HDatVMkXlf86YzHbGY9bO5j8XE7O8Nsbc3iNB3ukL2SMcH-XIQBgWoVOZzxuOxOJOyc63EPGV6ZQLENVrznViYStTiaJ2vw2M2d9bByRnOXkgCnXylCSU5quyto_IcmkbdvctELmJ-j1ASW3uB3g5xOmKqVTmqr_Na3AtuS_dtBFm8H90XJyHkDDT7S9xXWb4HGmRChx64AOL5HRpUm411rM5uh4H78Z4V7fCZzytjZz2seto9XaNPFue07clLaVZF8UNLygJ-VES8lah_n-O-5Ozc7-77NzJ0-K0yr0ZYrmHdqAk50t2RbA4qq9uNohBASw7YpSgaRkLWCCAtxAlnRZLGbJba9bPwUAC5IsCYAnn1kpJ1ZKUACC0iBSsQLVBzUlA3ioVyQ3qGhZEUrxokiehAz4nFgqk1VNVABfB1uAD_g2_AGPl-W8nMcbCvsDblADfNCz4feyobDPy3rYEMtxwYYbPFNVUoHdCPmDHBv2cP4AMfrCbiBli-Q-3afv0X6WdsIjW2-10fgDy1SAig
+
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
     plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
-    chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!"))
+    chatbot.append((
+        "您正在调用插件:历史上的今天",
+        "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!" + 高阶功能模板函数示意图))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     for i in range(5):
         currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
@@ -43,7 +71,7 @@ graph TD
|
|||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
@CatchException
|
@CatchException
|
||||||
def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
|
||||||
"""
|
"""
|
||||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||||
@@ -51,7 +79,7 @@ def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
|
|||||||
chatbot 聊天显示框的句柄,用于显示给用户
|
chatbot 聊天显示框的句柄,用于显示给用户
|
||||||
history 聊天历史,前情提要
|
history 聊天历史,前情提要
|
||||||
system_prompt 给gpt的静默提醒
|
system_prompt 给gpt的静默提醒
|
||||||
web_port 当前软件运行的端口号
|
user_request 当前用户的请求信息(IP地址等)
|
||||||
"""
|
"""
|
||||||
history = [] # 清空历史,以免输入溢出
|
history = [] # 清空历史,以免输入溢出
|
||||||
chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
|
chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
|
||||||
|
|||||||
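
After posting the intro message, 高阶功能模板函数 loops over the next five days and issues one model call per day, refreshing the UI between calls so the page stays responsive. A sketch of that loop body follows; the helper name and call style come from the import at the top of the hunk above, the prompt wording is illustrative, and `llm_kwargs`, `chatbot`, `history` are supplied by the enclosing plugin:

```python
import datetime

for i in range(5):
    day = datetime.date.today() + datetime.timedelta(days=i)
    i_say = f"历史中哪些事件发生在{day.month}月{day.day}日?列举两条并发送相关图片。"
    # one blocking model call per day, with the UI kept alive in a side thread
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线。")
    chatbot[-1] = (i_say, gpt_say)
    history.append(i_say); history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
```
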
@@ -1,12 +1,12 @@
 ## ===================================================
 # docker-compose.yml
 ## ===================================================
 # 1. 请在以下方案中选择任意一种,然后删除其他的方案
 # 2. 修改你选择的方案中的environment环境变量,详情请见github wiki或者config.py
 # 3. 选择一种暴露服务端口的方法,并对相应的配置做出修改:
-# 【方法1: 适用于Linux,很方便,可惜windows不支持】与宿主的网络融合为一体,这个是默认配置
+# 「方法1: 适用于Linux,很方便,可惜windows不支持」与宿主的网络融合为一体,这个是默认配置
 #     network_mode: "host"
-# 【方法2: 适用于所有系统包括Windows和MacOS】端口映射,把容器的端口映射到宿主的端口(注意您需要先删除network_mode: "host",再追加以下内容)
+# 「方法2: 适用于所有系统包括Windows和MacOS」端口映射,把容器的端口映射到宿主的端口(注意您需要先删除network_mode: "host",再追加以下内容)
 #     ports:
 #       - "12345:12345"  # 注意!12345必须与WEB_PORT环境变量相互对应
 # 4. 最后`docker-compose up`运行
@@ -25,7 +25,7 @@
 ## ===================================================

 ## ===================================================
-## 【方案零】 部署项目的全部能力(这个是包含cuda和latex的大型镜像。如果您网速慢、硬盘小或没有显卡,则不推荐使用这个)
+## 「方案零」 部署项目的全部能力(这个是包含cuda和latex的大型镜像。如果您网速慢、硬盘小或没有显卡,则不推荐使用这个)
 ## ===================================================
 version: '3'
 services:
@@ -63,10 +63,10 @@ services:
     #           count: 1
     #           capabilities: [gpu]

-    # 【WEB_PORT暴露方法1: 适用于Linux】与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

-    # 【WEB_PORT暴露方法2: 适用于所有系统】端口映射
+    # 「WEB_PORT暴露方法2: 适用于所有系统」端口映射
     # ports:
     #   - "12345:12345"  # 12345必须与WEB_PORT相互对应

@@ -75,10 +75,8 @@ services:
       bash -c "python3 -u main.py"

-
-
 ## ===================================================
-## 【方案一】 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
+## 「方案一」 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
 ## ===================================================
 version: '3'
 services:
@@ -97,16 +95,16 @@ services:
       # DEFAULT_WORKER_NUM: ' 10 '
      # AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '

-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"


 ### ===================================================
-### 【方案二】 如果需要运行ChatGLM + Qwen + MOSS等本地模型
+### 「方案二」 如果需要运行ChatGLM + Qwen + MOSS等本地模型
 ### ===================================================
 version: '3'
 services:
@@ -130,8 +128,10 @@ services:
     devices:
       - /dev/nvidia0:/dev/nvidia0

-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

+    # 启动命令
     command: >
       bash -c "python3 -u main.py"

@@ -139,8 +139,9 @@ services:
     # command: >
     #   bash -c "pip install -r request_llms/requirements_qwen.txt && python3 -u main.py"


 ### ===================================================
-### 【方案三】 如果需要运行ChatGPT + LLAMA + 盘古 + RWKV本地模型
+### 「方案三」 如果需要运行ChatGPT + LLAMA + 盘古 + RWKV本地模型
 ### ===================================================
 version: '3'
 services:
@@ -164,16 +165,16 @@ services:
     devices:
       - /dev/nvidia0:/dev/nvidia0

-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       python3 -u main.py


 ## ===================================================
-## 【方案四】 ChatGPT + Latex
+## 「方案四」 ChatGPT + Latex
 ## ===================================================
 version: '3'
 services:
@@ -190,16 +191,16 @@ services:
       DEFAULT_WORKER_NUM: ' 10 '
       WEB_PORT: ' 12303 '

-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"


 ## ===================================================
-## 【方案五】 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
+## 「方案五」 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
 ## ===================================================
 version: '3'
 services:
@@ -223,9 +224,9 @@ services:
       # (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
      # (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '

-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"

-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
      bash -c "python3 -u main.py"
@@ -13,7 +13,7 @@ COPY . .
 RUN pip3 install -r requirements.txt

 # 安装语音插件的额外依赖
-RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+RUN pip3 install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git

 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
@@ -165,7 +165,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和

 3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。

-4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
+4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。

 ## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py
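
The summary above names two cooperating pieces: `read_file_to_chat` parses an archive back into the chat display, while 对话历史存档 saves or deletes depending on what the user asked. A self-contained toy version of that save/load flow — the directory, file format, and function bodies here are illustrative, not the repository's implementation:

```python
import json, os

LOG_DIR = "gpt_log"  # assumed archive location, not necessarily the project's

def save_chat(chatbot, history, name="chat_archive.json"):
    # persist the current conversation, as 对话历史存档 does on a normal call
    os.makedirs(LOG_DIR, exist_ok=True)
    with open(os.path.join(LOG_DIR, name), "w", encoding="utf-8") as f:
        json.dump({"chatbot": chatbot, "history": history}, f, ensure_ascii=False)

def read_file_to_chat(chatbot, history, file_name):
    # parse a saved archive and refresh the visible chat, as item 3 above describes
    with open(file_name, "r", encoding="utf-8") as f:
        data = json.load(f)
    chatbot[:] = data["chatbot"]
    history[:] = data["history"]
```
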
@@ -1668,7 +1668,7 @@
 "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
 "Langchain知识库": "LangchainKnowledgeBase",
 "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
-"Latex输出PDF结果": "OutputPDFFromLatex",
+"Latex输出PDF": "OutputPDFFromLatex",
 "Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
 "sprint亮靛": "SprintIndigo",
 "寻找Latex主文件": "FindLatexMainFile",
@@ -3004,5 +3004,7 @@
 "1. 上传图片": "TranslatedText",
 "保存状态": "TranslatedText",
 "GPT-Academic对话存档": "TranslatedText",
-"Arxiv论文精细翻译": "TranslatedText"
+"Arxiv论文精细翻译": "TranslatedText",
+"from crazy_functions.AdvancedFunctionTemplate import 测试图表渲染": "from crazy_functions.AdvancedFunctionTemplate import test_chart_rendering",
+"测试图表渲染": "test_chart_rendering"
 }

@@ -1492,7 +1492,7 @@
 "交互功能模板函数": "InteractiveFunctionTemplateFunction",
 "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
 "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
-"Latex输出PDF结果": "LatexOutputPDFResult",
+"Latex输出PDF": "LatexOutputPDFResult",
 "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
 "语音助手": "VoiceAssistant",
 "微调数据集生成": "FineTuneDatasetGeneration",

@@ -16,7 +16,7 @@
 "批量Markdown翻译": "BatchTranslateMarkdown",
 "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
 "Langchain知识库": "LangchainKnowledgeBase",
-"Latex输出PDF结果": "OutputPDFFromLatex",
+"Latex输出PDF": "OutputPDFFromLatex",
 "把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
 "Latex精细分解与转化": "DecomposeAndConvertLatex",
 "解析一个C项目的头文件": "ParseCProjectHeaderFiles",
@@ -97,5 +97,12 @@
 "多智能体": "MultiAgent",
 "图片生成_DALLE2": "ImageGeneration_DALLE2",
 "图片生成_DALLE3": "ImageGeneration_DALLE3",
-"图片修改_DALLE2": "ImageModification_DALLE2"
+"图片修改_DALLE2": "ImageModification_DALLE2",
+"生成多种Mermaid图表": "GenerateMultipleMermaidCharts",
+"知识库文件注入": "InjectKnowledgeBaseFiles",
+"PDF翻译中文并重新编译PDF": "TranslatePDFToChineseAndRecompilePDF",
+"随机小游戏": "RandomMiniGame",
+"互动小游戏": "InteractiveMiniGame",
+"解析历史输入": "ParseHistoricalInput",
+"高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram"
 }

@@ -1468,7 +1468,7 @@
 "交互功能模板函数": "InteractiveFunctionTemplateFunctions",
 "交互功能函数模板": "InteractiveFunctionFunctionTemplates",
 "Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
-"Latex输出PDF结果": "OutputPDFFromLatex",
+"Latex输出PDF": "OutputPDFFromLatex",
 "Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
 "语音助手": "VoiceAssistant",
 "微调数据集生成": "FineTuneDatasetGeneration",
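
These dictionaries appear to be the project's multi-language translation maps (docs/translate_*.json): keys are Chinese identifiers in the source tree, values are the English names used when generating a translated mirror of the code, so every newly added plugin needs one entry per identifier — which is exactly what the added lines supply. Applying such a map is a longest-key-first substitution; a minimal sketch with an illustrative two-entry map:

```python
# illustrative map; the real files are the docs/translate_*.json shown above
trans_map = {
    "测试图表渲染": "test_chart_rendering",
    "高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram",
}

def translate_source(code: str, mapping: dict) -> str:
    # replace longer keys first so a long identifier wins over any shorter prefix
    for zh in sorted(mapping, key=len, reverse=True):
        code = code.replace(zh, mapping[zh])
    return code

print(translate_source(
    "from crazy_functions.AdvancedFunctionTemplate import 测试图表渲染", trans_map))
```
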
@@ -3,7 +3,7 @@

 ## 1. 安装额外依赖
 ```
-pip install --upgrade pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+pip install --upgrade pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
 ```

 如果因为特色网络问题导致上述命令无法执行:
@@ -1,30 +0,0 @@
-try {
-    $("<link>").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head');
-    $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
-    $.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() {
-        $.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() {
-            /* 可直接修改部分参数 */
-            live2d_settings['hitokotoAPI'] = "hitokoto.cn";  // 一言 API
-            live2d_settings['modelId'] = 5;                  // 默认模型 ID
-            live2d_settings['modelTexturesId'] = 1;          // 默认材质 ID
-            live2d_settings['modelStorage'] = false;         // 不储存模型 ID
-            live2d_settings['waifuSize'] = '210x187';
-            live2d_settings['waifuTipsSize'] = '187x52';
-            live2d_settings['canSwitchModel'] = true;
-            live2d_settings['canSwitchTextures'] = true;
-            live2d_settings['canSwitchHitokoto'] = false;
-            live2d_settings['canTakeScreenshot'] = false;
-            live2d_settings['canTurnToHomePage'] = false;
-            live2d_settings['canTurnToAboutPage'] = false;
-            live2d_settings['showHitokoto'] = false;         // 显示一言
-            live2d_settings['showF12Status'] = false;        // 显示加载状态
-            live2d_settings['showF12Message'] = false;       // 显示看板娘消息
-            live2d_settings['showF12OpenMsg'] = false;       // 显示控制台打开提示
-            live2d_settings['showCopyMessage'] = false;      // 显示 复制内容 提示
-            live2d_settings['showWelcomeMessage'] = true;    // 显示进入面页欢迎词
-
-            /* 在 initModel 前添加 */
-            initModel("file=docs/waifu_plugin/waifu-tips.json");
-        }});
-    }});
-} catch(err) { console.log("[Error] JQuery is not defined.") }
main.py
@@ -13,35 +13,40 @@ help_menu_description = \
 </br></br>如何语音对话: 请阅读Wiki
 </br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"""

+def enable_log(PATH_LOGGING):
+    import logging, uuid
+    admin_log_path = os.path.join(PATH_LOGGING, "admin")
+    os.makedirs(admin_log_path, exist_ok=True)
+    log_dir = os.path.join(admin_log_path, "chat_secrets.log")
+    try:logging.basicConfig(filename=log_dir, level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    except:logging.basicConfig(filename=log_dir, level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    # Disable logging output from the 'httpx' logger
+    logging.getLogger("httpx").setLevel(logging.WARNING)
+    print(f"所有对话记录将自动保存在本地目录{log_dir}, 请注意自我隐私保护哦!")
+
 def main():
     import gradio as gr
-    if gr.__version__ not in ['3.32.6', '3.32.7']:
+    if gr.__version__ not in ['3.32.9']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
     # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
-    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
-    DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
+    NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
-    INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
+    DARK_MODE, INIT_SYS_PROMPT, ADD_WAIFU = get_conf('DARK_MODE', 'INIT_SYS_PROMPT', 'ADD_WAIFU')

     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     from check_proxy import get_current_version
-    from themes.theme import adjust_theme, advanced_css, theme_declaration
+    from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
-    from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
+    from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
-    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
+    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
     title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"

-    # 问询记录, python 版本建议3.9+(越新越好)
+    # 对话、日志记录
-    import logging, uuid
+    enable_log(PATH_LOGGING)
-    os.makedirs(PATH_LOGGING, exist_ok=True)
-    try:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
-    except:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
-    # Disable logging output from the 'httpx' logger
-    logging.getLogger("httpx").setLevel(logging.WARNING)
-    print(f"所有问询记录将自动保存在本地目录./{PATH_LOGGING}/chat_secrets.log, 请注意自我隐私保护哦!")

     # 一些普通功能模块
     from core_functional import get_core_functions
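
One detail worth noting in the new `enable_log` helper: the try/except around `logging.basicConfig` exists because the `encoding` keyword is only accepted on Python 3.9 and newer; on older interpreters the first call raises and the fallback runs without it. The same idea with an explicit version check (illustrative, not the repository's code):

```python
import logging, os, sys

def safe_basic_config(log_file):
    kwargs = dict(filename=log_file, level=logging.INFO,
                  format="%(asctime)s %(levelname)-8s %(message)s",
                  datefmt="%Y-%m-%d %H:%M:%S")
    if sys.version_info >= (3, 9):      # basicConfig gained `encoding` in 3.9
        kwargs["encoding"] = "utf-8"
    logging.basicConfig(**kwargs)

os.makedirs("gpt_log/admin", exist_ok=True)   # mirrors the admin subdirectory above
safe_basic_config("gpt_log/admin/chat_secrets.log")
```
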
@@ -65,7 +70,7 @@ def main():
     proxy_info = check_proxy(proxies)

     gr_L1 = lambda: gr.Row().style()
-    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id)
+    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id, min_width=400)
     if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
         gr_L2 = lambda scale, elem_id: gr.Row()
@@ -74,9 +79,9 @@ def main():
     cancel_handles = []
     customize_btns = {}
     predefined_btns = {}
-    with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
+    with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block:
         gr.HTML(title_html)
-        secret_css, dark_mode, persistent_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
+        secret_css, web_cookie_cache = gr.Textbox(visible=False), gr.Textbox(visible=False)
         cookies = gr.State(load_chat_cookies())
         with gr_L1():
             with gr_L2(scale=2, elem_id="gpt-chat"):
@@ -98,6 +103,7 @@ def main():
                     audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
                 with gr.Row():
                     status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
+
             with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
                 with gr.Row():
                     for k in range(NUM_CUSTOM_BASIC_BTN):
@@ -142,7 +148,6 @@ def main():
             with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
                 file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")

-
         with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
             with gr.Row():
                 with gr.Tab("上传文件", elem_id="interact-panel"):
@@ -152,16 +157,21 @@ def main():
                 with gr.Tab("更换模型", elem_id="interact-panel"):
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
-                    temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
+                    temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
                     max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
-                    system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT)
+                    system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")
+                    temperature.change(None, inputs=[temperature], outputs=None,
+                        _js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
+                    system_prompt.change(None, inputs=[system_prompt], outputs=None,
+                        _js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")

                 with gr.Tab("界面外观", elem_id="interact-panel"):
                     theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
-                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
-                        value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
-                    checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
-                        value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
+                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
+                    opt = ["自定义菜单"]
+                    value=[]
+                    if ADD_WAIFU: opt += ["添加Live2D形象"]; value += ["添加Live2D形象"]
+                    checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
                     dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
                     dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
                 with gr.Tab("帮助", elem_id="interact-panel"):
@@ -178,7 +188,7 @@ def main():
                 submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
                 resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                 stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
-                clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
+                clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")


         with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
@@ -192,69 +202,31 @@ def main():
                     basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
                 with gr.Column(scale=1, min_width=70):
                     basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
-                    basic_fn_load = gr.Button("加载已保存", variant="primary"); basic_fn_load.style(size="sm")
-            def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix):
-                ret = {}
-                customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
-                customize_fn_overwrite_.update({
-                    basic_btn_dropdown_:
-                    {
-                        "Title":basic_fn_title,
-                        "Prefix":basic_fn_prefix,
-                        "Suffix":basic_fn_suffix,
-                    }
-                })
-                cookies_.update(customize_fn_overwrite_)
-                if basic_btn_dropdown_ in customize_btns:
-                    ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
-                else:
-                    ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
-                ret.update({cookies: cookies_})
-                try: persistent_cookie_ = from_cookie_str(persistent_cookie_)   # persistent cookie to dict
-                except: persistent_cookie_ = {}
-                persistent_cookie_["custom_bnt"] = customize_fn_overwrite_     # dict update new value
-                persistent_cookie_ = to_cookie_str(persistent_cookie_)         # persistent cookie to dict
-                ret.update({persistent_cookie: persistent_cookie_})            # write persistent cookie
-                return ret
-
-            def reflesh_btn(persistent_cookie_, cookies_):
-                ret = {}
-                for k in customize_btns:
-                    ret.update({customize_btns[k]: gr.update(visible=False, value="")})
-                try: persistent_cookie_ = from_cookie_str(persistent_cookie_)   # persistent cookie to dict
-                except: return ret
-                customize_fn_overwrite_ = persistent_cookie_.get("custom_bnt", {})
-                cookies_['customize_fn_overwrite'] = customize_fn_overwrite_
-                ret.update({cookies: cookies_})
-                for k,v in persistent_cookie_["custom_bnt"].items():
-                    if v['Title'] == "": continue
-                    if k in customize_btns: ret.update({customize_btns[k]: gr.update(visible=True, value=v['Title'])})
-                    else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
-                return ret
-
-            basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
-            h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
-                                    [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
-            # save persistent cookie
-            h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
+                    basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
+            from shared_utils.cookie_manager import assign_btn__fn_builder
+            assign_btn = assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache)
+            # update btn
+            h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
+                                        [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
+            h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
+            # clean up btn
+            h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
+                                        [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
+            h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")

         # 功能区显示开关与功能区的互动
         def fn_area_visibility(a):
             ret = {}
-            ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
-            ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))})
             ret.update({area_input_primary: gr.update(visible=("浮动输入区" not in a))})
             ret.update({area_input_secondary: gr.update(visible=("浮动输入区" in a))})
-            ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
-            ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
             ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "浮动输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, plugin_advanced_arg] )
+        checkboxes.select(None, [checkboxes], None, _js=js_code_show_or_hide)

         # 功能区显示开关与功能区的互动
         def fn_area_visibility_2(a):
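
The refactor above moves the per-button cookie logic out of main.py into shared_utils.cookie_manager, whose `*__fn_builder` helpers return callbacks with the surrounding Gradio components captured in a closure. A sketch of that pattern, with the builder signature taken from the call site above but the body illustrative only:

```python
# Sketch of the fn_builder pattern implied by the call sites above — the real
# assign_btn__fn_builder lives in shared_utils/cookie_manager.py and may differ.
def assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache):
    def assign_btn(web_cookie_cache_, cookies_, dropdown_, title_, prefix_, suffix_, clean_up=False):
        ret = {}
        # ... update cookies_ and the matching button, then serialize the
        # customization back into the browser-side cookie cache ...
        ret.update({cookies: cookies_, web_cookie_cache: web_cookie_cache_})
        return ret
    return assign_btn  # a closure over the UI handles, usable directly as a Gradio callback
```

Building the callback this way lets one shared module produce handlers for any set of buttons without importing main.py's component variables.
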
@@ -262,6 +234,7 @@ def main():
             ret.update({area_customize: gr.update(visible=("自定义菜单" in a))})
             return ret
         checkboxes_2.select(fn_area_visibility_2, [checkboxes_2], [area_customize] )
+        checkboxes_2.select(None, [checkboxes_2], None, _js=js_code_show_or_hide_group2)

         # 整理反复出现的控件句柄组合
         input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
@@ -272,15 +245,17 @@ def main():
         cancel_handles.append(txt2.submit(**predict_args))
         cancel_handles.append(submitBtn.click(**predict_args))
         cancel_handles.append(submitBtn2.click(**predict_args))
-        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        clearBtn.click(lambda: ("",""), None, [txt, txt2])
-        clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset)   # 先在前端快速清除chatbot&status
+        resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset)  # 先在前端快速清除chatbot&status
+        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+        clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
         if AUTO_CLEAR_TXT:
-            submitBtn.click(lambda: ("",""), None, [txt, txt2])
-            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
-            txt.submit(lambda: ("",""), None, [txt, txt2])
-            txt2.submit(lambda: ("",""), None, [txt, txt2])
+            submitBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+            submitBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
+            txt.submit(None, None, [txt, txt2], _js=js_code_clear)
+            txt2.submit(None, None, [txt, txt2], _js=js_code_clear)
         # 基础功能区的回调函数注册
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
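
The reset buttons are now bound twice: a `_js` handler wipes the visible chat instantly in the browser, then a Python handler clears the server-side history as the comments in the hunk note. The same dual-binding idea in isolation (Gradio 3.x style, illustrative — the project's actual `js_code_reset` is more elaborate):

```python
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    history = gr.State([])
    status = gr.Markdown("ready")
    reset = gr.Button("重置")
    # 1) front-end: wipe the visible widgets immediately, no server round-trip
    reset.click(None, None, [chatbot, status], _js="() => [[], '已重置']")
    # 2) back-end: authoritatively clear the real history afterwards
    reset.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])

demo.launch()
```

Binding front-end first hides the latency of the queued Python callback, which matters when the event queue is busy with streaming generations.
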
@@ -360,11 +335,14 @@ def main():
             audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])


-        demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-        darkmode_js = js_code_for_darkmode_init
-        demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
-        demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js)  # 配置暗色主题或亮色主题
-        demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
+        app_block.load(assign_user_uuid, inputs=[cookies], outputs=[cookies])
+
+        from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
+        load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
+        app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
+            outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
+
+        app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}")""")  # 配置暗色主题或亮色主题

     # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
     def run_delayed_tasks():
@@ -379,28 +357,15 @@ def main():

         threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start()  # 查看自动更新
         threading.Thread(target=open_browser, name="open-browser", daemon=True).start()  # 打开浏览器页面
         threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start()       # 预热tiktoken模块

+    # 运行一些异步任务:自动更新、打开浏览器页面、预热tiktoken模块
     run_delayed_tasks()
-    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
-        quiet=True,
-        server_name="0.0.0.0",
-        ssl_keyfile=None if SSL_KEYFILE == "" else SSL_KEYFILE,
-        ssl_certfile=None if SSL_CERTFILE == "" else SSL_CERTFILE,
-        ssl_verify=False,
-        server_port=PORT,
-        favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
-        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
-        blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])

-    # 如果需要在二级路径下运行
-    # CUSTOM_PATH = get_conf('CUSTOM_PATH')
-    # if CUSTOM_PATH != "/":
-    #     from toolbox import run_gradio_in_subpath
-    #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
-    # else:
-    #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
-    #                 blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
+    # 最后,正式开始服务
+    from shared_utils.fastapi_server import start_app
+    start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE)

 if __name__ == "__main__":
     main()
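
Launching moves from Gradio's built-in `demo.queue().launch()` to `start_app` in shared_utils/fastapi_server. That module's internals are not shown in this diff; the generic way a `gr.Blocks` app is served through FastAPI/uvicorn, which is roughly what such a helper wraps, looks like this (a sketch under those assumptions, with auth handling omitted):

```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

def start_app_sketch(app_block: gr.Blocks, concurrent: int, auth, port: int,
                     ssl_keyfile=None, ssl_certfile=None):
    app_block.queue(concurrency_count=concurrent)   # enable the event queue
    fastapi_app = FastAPI()
    # mount the Gradio UI onto the FastAPI app at the root path
    fastapi_app = gr.mount_gradio_app(fastapi_app, app_block, path="/")
    # (auth handling omitted in this sketch)
    uvicorn.run(fastapi_app, host="0.0.0.0", port=port,
                ssl_keyfile=ssl_keyfile or None, ssl_certfile=ssl_certfile or None)
```

Owning the FastAPI layer makes it possible to add custom routes and request filtering that `demo.launch()` alone does not expose.
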
@@ -8,10 +8,10 @@
     具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
     2. predict_no_ui_long_connection(...)
 """
-import tiktoken, copy
+import tiktoken, copy, re
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
+from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask, read_one_api_model_name

 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -31,6 +31,12 @@ from .bridge_qianfan import predict as qianfan_ui
 from .bridge_google_gemini import predict as genai_ui
 from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui

+from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
+from .bridge_zhipu import predict as zhipu_ui
+
+from .bridge_cohere import predict as cohere_ui
+from .bridge_cohere import predict_no_ui_long_connection as cohere_noui
+
 colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']

 class LazyloadTiktoken(object):
@@ -58,6 +64,11 @@ API_URL_REDIRECT, AZURE_ENDPOINT, AZURE_ENGINE = get_conf("API_URL_REDIRECT", "A
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
 newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
+gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
+claude_endpoint = "https://api.anthropic.com/v1/messages"
+yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
+cohere_endpoint = 'https://api.cohere.ai/v1/chat'
+
 if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
 azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
 # 兼容旧版的配置
@@ -72,7 +83,10 @@ except:
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
 if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
+if gemini_endpoint in API_URL_REDIRECT: gemini_endpoint = API_URL_REDIRECT[gemini_endpoint]
+if claude_endpoint in API_URL_REDIRECT: claude_endpoint = API_URL_REDIRECT[claude_endpoint]
+if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
+if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[cohere_endpoint]

 # 获取tokenizer
 tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
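
Each newly added endpoint repeats the same `API_URL_REDIRECT` lookup, which lets users reroute any provider URL (for example through a proxy) via config. The pattern in compact, table-driven form (illustrative, not the repository's code):

```python
# example config value: map an official endpoint to a user-supplied replacement
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://my-proxy.example/v1/chat/completions",
}

endpoints = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "claude": "https://api.anthropic.com/v1/messages",
    "cohere": "https://api.cohere.ai/v1/chat",
}
# replace any endpoint the user has redirected; leave the rest untouched
endpoints = {name: API_URL_REDIRECT.get(url, url) for name, url in endpoints.items()}
```
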
@@ -91,7 +105,7 @@ model_info = {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
         "endpoint": openai_endpoint,
-        "max_token": 4096,
+        "max_token": 16385,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
@@ -123,7 +137,16 @@ model_info = {
         "token_cnt": get_token_num_gpt35,
     },

-    "gpt-3.5-turbo-1106": {#16k
+    "gpt-3.5-turbo-1106": { #16k
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 16385,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+
+    "gpt-3.5-turbo-0125": { #16k
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
         "endpoint": openai_endpoint,
@@ -150,6 +173,15 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },

+    "gpt-4-turbo-preview": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+
     "gpt-4-1106-preview": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -159,6 +191,15 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },

+    "gpt-4-0125-preview": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+
     "gpt-3.5-random": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -197,16 +238,25 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },

-    # api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
-    "api2d-gpt-3.5-turbo": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": api2d_endpoint,
-        "max_token": 4096,
+    # 智谱AI
+    "glm-4": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-3-turbo": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 4,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },

+    # api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
     "api2d-gpt-4": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -252,7 +302,7 @@ model_info = {
     "gemini-pro": {
         "fn_with_ui": genai_ui,
         "fn_without_ui": genai_noui,
-        "endpoint": None,
+        "endpoint": gemini_endpoint,
         "max_token": 1024 * 32,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
@@ -260,13 +310,56 @@ model_info = {
     "gemini-pro-vision": {
         "fn_with_ui": genai_ui,
         "fn_without_ui": genai_noui,
+        "endpoint": gemini_endpoint,
+        "max_token": 1024 * 32,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+
+    # cohere
+    "cohere-command-r-plus": {
+        "fn_with_ui": cohere_ui,
+        "fn_without_ui": cohere_noui,
+        "can_multi_thread": True,
+        "endpoint": cohere_endpoint,
+        "max_token": 1024 * 4,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+
+}
+
+# -=-=-=-=-=-=- 月之暗面 -=-=-=-=-=-=-
+from request_llms.bridge_moonshot import predict as moonshot_ui
+from request_llms.bridge_moonshot import predict_no_ui_long_connection as moonshot_no_ui
+model_info.update({
+    "moonshot-v1-8k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
+        "endpoint": None,
+        "max_token": 1024 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "moonshot-v1-32k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
         "endpoint": None,
         "max_token": 1024 * 32,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
-}
+    "moonshot-v1-128k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
+        "endpoint": None,
+        "max_token": 1024 * 128,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    }
+})
 # -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
 for model in AVAIL_LLM_MODELS:
     if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()):
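
Every entry in `model_info` is a routing record: the UI and non-UI prediction callables, the endpoint, and tokenizer metadata. Dispatch is therefore just a dictionary lookup on the model name, which is why adding Moonshot or GLM support above amounts to one import plus one `update` call. Schematically (a simplified illustration, not the file's actual consumer code):

```python
def get_predict_fn(model_info, llm_model, with_ui=True):
    mi = model_info[llm_model]          # KeyError => the model was never registered
    fn = mi["fn_with_ui"] if with_ui else mi["fn_without_ui"]
    return fn, mi["endpoint"], mi["max_token"]

# registering one more model is just another dict entry, e.g. (hypothetical names):
# model_info.update({"my-model": {
#     "fn_with_ui": my_ui, "fn_without_ui": my_noui, "endpoint": None,
#     "max_token": 8192, "tokenizer": tokenizer_gpt35, "token_cnt": get_token_num_gpt35}})
```
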
```diff
@@ -282,25 +375,67 @@ for model in AVAIL_LLM_MODELS:
         model_info.update({model: mi})

 # -=-=-=-=-=-=- the models below are newly added and may carry extra dependencies -=-=-=-=-=-=-
-if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
+# the claude family
+claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229"]
+if any(item in claude_models for item in AVAIL_LLM_MODELS):
     from .bridge_claude import predict_no_ui_long_connection as claude_noui
     from .bridge_claude import predict as claude_ui
     model_info.update({
-        "claude-1-100k": {
+        "claude-instant-1.2": {
             "fn_with_ui": claude_ui,
             "fn_without_ui": claude_noui,
-            "endpoint": None,
-            "max_token": 8196,
+            "endpoint": claude_endpoint,
+            "max_token": 100000,
             "tokenizer": tokenizer_gpt35,
             "token_cnt": get_token_num_gpt35,
         },
     })
     model_info.update({
-        "claude-2": {
+        "claude-2.0": {
             "fn_with_ui": claude_ui,
             "fn_without_ui": claude_noui,
-            "endpoint": None,
-            "max_token": 8196,
+            "endpoint": claude_endpoint,
+            "max_token": 100000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-2.1": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-haiku-20240307": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-sonnet-20240229": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-opus-20240229": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
             "tokenizer": tokenizer_gpt35,
             "token_cnt": get_token_num_gpt35,
         },
```
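Every entry registered above exposes the same duck-typed interface, so callers never import a bridge module directly; they look the model up in `model_info` and call the stored function. A minimal dispatch sketch (the kwargs dict is illustrative; the real one is assembled by the WebUI):

```python
# Dispatch through the registry: the same field names work for every backend.
entry = model_info["claude-2.1"]
llm_kwargs = {'llm_model': 'claude-2.1', 'temperature': 1.0}       # illustrative kwargs
assert entry["token_cnt"]("How are you?") <= entry["max_token"]    # token budget check
reply = entry["fn_without_ui"](                                    # the non-UI callable
    "How are you?", llm_kwargs, history=[], sys_prompt="",
    observe_window=[], console_slience=True)
```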
```diff
@@ -370,22 +505,6 @@ if "stack-claude" in AVAIL_LLM_MODELS:
             "token_cnt": get_token_num_gpt35,
         }
     })
-if "newbing-free" in AVAIL_LLM_MODELS:
-    try:
-        from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
-        from .bridge_newbingfree import predict as newbingfree_ui
-        model_info.update({
-            "newbing-free": {
-                "fn_with_ui": newbingfree_ui,
-                "fn_without_ui": newbingfree_noui,
-                "endpoint": newbing_endpoint,
-                "max_token": 4096,
-                "tokenizer": tokenizer_gpt35,
-                "token_cnt": get_token_num_gpt35,
-            }
-        })
-    except:
-        print(trimmed_format_exc())
 if "newbing" in AVAIL_LLM_MODELS:   # same with newbing-free
     try:
         from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
@@ -418,6 +537,7 @@ if "chatglmft" in AVAIL_LLM_MODELS:   # same with newbing-free
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Shanghai AI-Lab InternLM -=-=-=-=-=-=-
 if "internlm" in AVAIL_LLM_MODELS:
     try:
         from .bridge_internlm import predict_no_ui_long_connection as internlm_noui
@@ -450,6 +570,7 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Tongyi Qwen, local model -=-=-=-=-=-=-
 if "qwen-local" in AVAIL_LLM_MODELS:
     try:
         from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
@@ -458,6 +579,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
             "qwen-local": {
                 "fn_with_ui": qwen_local_ui,
                 "fn_without_ui": qwen_local_noui,
+                "can_multi_thread": False,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -466,6 +588,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Tongyi Qwen, online model -=-=-=-=-=-=-
 if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:   # zhipuai
     try:
         from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
@@ -474,6 +597,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
             "qwen-turbo": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 6144,
                 "tokenizer": tokenizer_gpt35,
@@ -482,6 +606,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
             "qwen-plus": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 30720,
                 "tokenizer": tokenizer_gpt35,
@@ -490,6 +615,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
             "qwen-max": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 28672,
                 "tokenizer": tokenizer_gpt35,
@@ -498,7 +624,35 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
         })
     except:
         print(trimmed_format_exc())
-if "spark" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
+# -=-=-=-=-=-=- 01.AI (Yi) models -=-=-=-=-=-=-
+if "yi-34b-chat-0205" in AVAIL_LLM_MODELS or "yi-34b-chat-200k" in AVAIL_LLM_MODELS:   # zhipuai
+    try:
+        from .bridge_yimodel import predict_no_ui_long_connection as yimodel_noui
+        from .bridge_yimodel import predict as yimodel_ui
+        model_info.update({
+            "yi-34b-chat-0205": {
+                "fn_with_ui": yimodel_ui,
+                "fn_without_ui": yimodel_noui,
+                "can_multi_thread": False,  # for now, concurrency is extremely low by default, so disable it
+                "endpoint": yimodel_endpoint,
+                "max_token": 4000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-34b-chat-200k": {
+                "fn_with_ui": yimodel_ui,
+                "fn_without_ui": yimodel_noui,
+                "can_multi_thread": False,  # for now, concurrency is extremely low by default, so disable it
+                "endpoint": yimodel_endpoint,
+                "max_token": 200000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+        })
+    except:
+        print(trimmed_format_exc())
+# -=-=-=-=-=-=- iFlytek Spark cognitive model -=-=-=-=-=-=-
+if "spark" in AVAIL_LLM_MODELS:
     try:
         from .bridge_spark import predict_no_ui_long_connection as spark_noui
         from .bridge_spark import predict as spark_ui
@@ -506,6 +660,7 @@ if "spark" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
             "spark": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -522,6 +677,7 @@ if "sparkv2" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
             "sparkv2": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -530,7 +686,7 @@ if "sparkv2" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
         })
     except:
         print(trimmed_format_exc())
-if "sparkv3" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
+if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
     try:
         from .bridge_spark import predict_no_ui_long_connection as spark_noui
         from .bridge_spark import predict as spark_ui
@@ -538,6 +694,16 @@ if "sparkv3" in AVAIL_LLM_MODELS:   # iFlytek Spark cognitive model
             "sparkv3": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "sparkv3.5": {
+                "fn_with_ui": spark_ui,
+                "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
```
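A recurring change across these hunks is the new `can_multi_thread` capability flag. Callers that fan requests out across threads can consult it before parallelizing; backends with tight concurrency quotas (the local Qwen model, the Yi endpoints) declare `False` and get called serially. A sketch of such a guard, assuming the `model_info` registry above (the helper itself is illustrative, not from the source):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fan-out helper guarded by the new capability flag; the flag
# comes from the diff above, the helper is illustrative only.
def ask_many(questions, model, llm_kwargs):
    entry = model_info[model]
    method = entry["fn_without_ui"]
    if not entry.get("can_multi_thread", False):
        # serial fallback for backends with very low concurrency quotas
        return [method(q, llm_kwargs, [], "", [], True) for q in questions]
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(method, q, llm_kwargs, [], "", [], True) for q in questions]
        return [f.result() for f in futures]
```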
```diff
@@ -562,22 +728,22 @@ if "llama2" in AVAIL_LLM_MODELS: # llama2
         })
     except:
         print(trimmed_format_exc())
-if "zhipuai" in AVAIL_LLM_MODELS:   # zhipuai
+# -=-=-=-=-=-=- Zhipu -=-=-=-=-=-=-
+if "zhipuai" in AVAIL_LLM_MODELS:   # zhipuai is an alias of glm-4, kept for backward-compatible configs
     try:
-        from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
-        from .bridge_zhipu import predict as zhipu_ui
         model_info.update({
             "zhipuai": {
                 "fn_with_ui": zhipu_ui,
                 "fn_without_ui": zhipu_noui,
                 "endpoint": None,
-                "max_token": 4096,
+                "max_token": 10124 * 8,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
-            }
+            },
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- High-Flyer DeepSeek models -=-=-=-=-=-=-
 if "deepseekcoder" in AVAIL_LLM_MODELS:   # deepseekcoder
     try:
         from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
@@ -594,26 +760,34 @@ if "deepseekcoder" in AVAIL_LLM_MODELS:   # deepseekcoder
         })
     except:
         print(trimmed_format_exc())
-# if "skylark" in AVAIL_LLM_MODELS:
-#     try:
-#         from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
-#         from .bridge_skylark2 import predict as skylark_ui
-#         model_info.update({
-#             "skylark": {
-#                 "fn_with_ui": skylark_ui,
-#                 "fn_without_ui": skylark_noui,
-#                 "endpoint": None,
-#                 "max_token": 4096,
-#                 "tokenizer": tokenizer_gpt35,
-#                 "token_cnt": get_token_num_gpt35,
-#             }
-#         })
-#     except:
-#         print(trimmed_format_exc())
-
-
-# <-- used to define and switch between multiple azure models -->
-AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
+# -=-=-=-=-=-=- one-api alignment support -=-=-=-=-=-=-
+for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
+    # This interface was designed for flexible access to the one-api multi-model management UI,
+    # e.g. AVAIL_LLM_MODELS = ["one-api-mixtral-8x7b(max_token=6666)"]
+    # where
+    #   "one-api-" is the prefix (required)
+    #   "mixtral-8x7b" is the model name (required)
+    #   "(max_token=6666)" is the configuration (optional)
+    try:
+        _, max_token_tmp = read_one_api_model_name(model)
+    except:
+        print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
+        continue
+    model_info.update({
+        model: {
+            "fn_with_ui": chatgpt_ui,
+            "fn_without_ui": chatgpt_noui,
+            "endpoint": openai_endpoint,
+            "max_token": max_token_tmp,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+
+
+# -=-=-=-=-=-=- azure model alignment support -=-=-=-=-=-=-
+AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY") # <-- used to define and switch between multiple azure models -->
 if len(AZURE_CFG_ARRAY) > 0:
     for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
         # this may override an earlier configuration, but that is expected
```
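`read_one_api_model_name` (imported from `toolbox` later in this diff) splits the decorated model string into a bare name plus a `max_token` budget; its real implementation is not shown here. A plausible sketch of the parsing contract, with the regex and the fallback value as assumptions:

```python
import re

# Hypothetical re-implementation of toolbox.read_one_api_model_name, shown only
# to document the naming contract; the real function is not part of this diff.
def read_one_api_model_name_sketch(model: str):
    # "one-api-mixtral-8x7b(max_token=6666)" -> ("one-api-mixtral-8x7b", 6666)
    default_max_token = 4096  # assumed fallback when "(max_token=...)" is absent
    match = re.search(r"\(max_token=(\d+)\)", model)
    if match is None:
        return model, default_max_token
    return model[:match.start()], int(match.group(1))

assert read_one_api_model_name_sketch("one-api-mixtral-8x7b(max_token=6666)") == ("one-api-mixtral-8x7b", 6666)
```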
```diff
@@ -642,7 +816,7 @@ def LLM_CATCH_EXCEPTION(f):
     """
     decorator function that surfaces errors to the caller
     """
-    def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
+    def decorated(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list, console_slience:bool):
        try:
            return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
        except Exception as e:
```
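The decorator wraps every `fn_without_ui` call so that a crash in one backend thread surfaces in the observation window instead of silently killing the worker. The diff only shows its head; a minimal sketch of the presumable except-branch (the exact error formatting is an assumption):

```python
from toolbox import trimmed_format_exc  # used throughout this diff for tracebacks

# Minimal sketch of LLM_CATCH_EXCEPTION's error path, assuming the wrapper
# reports failures through observe_window[0]; the real body is outside this hunk.
def LLM_CATCH_EXCEPTION_sketch(f):
    def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
        try:
            return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
        except Exception:
            tb_str = '```\n' + trimmed_format_exc() + '```'   # fenced traceback for the UI
            observe_window[0] = tb_str                        # surface the error to the watcher
            return tb_str
    return decorated
```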
```diff
@@ -652,9 +826,9 @@ def LLM_CATCH_EXCEPTION(f):
     return decorated


-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
     """
-    Send to the LLM and wait for the reply, completed in one shot without showing intermediate output; internally a streaming request is used to avoid the connection being cut midway.
+    Send to the LLM and wait for the reply, completed in one shot without showing intermediate output; internally a streaming request is used (as far as possible) to avoid the connection being cut midway.
     inputs:
         the input of this query
     sys_prompt:
@@ -668,10 +842,10 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
     """
     import threading, time, copy

+    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
     model = llm_kwargs['llm_model']
     n_model = 1
     if '&' not in model:
-        assert not model.startswith("tgui"), "TGUI不支持函数插件的实现"
-
         # if only one LLM is queried:
         method = model_info[model]["fn_without_ui"]
@@ -706,7 +880,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
         # observation window
         chat_string = []
         for i in range(n_model):
-            chat_string.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {window_mutex[i][0]} </font>" )
+            color = colors[i%len(colors)]
+            chat_string.append( f"【{str(models[i])} 说】: <font color=\"{color}\"> {window_mutex[i][0]} </font>" )
         res = '<br/><br/>\n\n---\n\n'.join(chat_string)
         # # # # # # # # # # #
         observe_window[0] = res
@@ -723,24 +898,33 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
             time.sleep(1)

     for i, future in enumerate(futures):  # wait and get
-        return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {future.result()} </font>" )
+        color = colors[i%len(colors)]
+        return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{color}\"> {future.result()} </font>" )

     window_mutex[-1] = False # stop mutex thread
     res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
     return res


-def predict(inputs, llm_kwargs, *args, **kwargs):
+def predict(inputs:str, llm_kwargs:dict, *args, **kwargs):
     """
     Send to the LLM and stream the output.
     Used for the basic chat functionality.
-    inputs is the input of this query
-    top_p, temperature are the LLM's internal tuning parameters
-    history is the list of prior turns (note that overly long inputs or history will trigger a token-overflow error)
-    chatbot is the conversation list shown in the WebUI; modify it and yield it back to update the chat interface directly
-    additional_fn indicates which button was clicked; see functional.py
+    Full parameter list:
+        predict(
+            inputs:str,                  # the input of this query
+            llm_kwargs:dict,             # the LLM's internal tuning parameters
+            plugin_kwargs:dict,          # the plugin's internal parameters
+            chatbot:ChatBotWithCookies,  # passed through as-is; renders the dialog to the frontend and carries frontend state
+            history:list=[],             # the list of prior turns
+            system_prompt:str='',        # the silent system prompt
+            stream:bool=True,            # whether to stream the output (deprecated)
+            additional_fn:str=None       # extra behavior bound to the basic-function buttons
+        ):
     """
+
+    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
     method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]  # if this line raises, check the AVAIL_LLM_MODELS option in config
     yield from method(inputs, llm_kwargs, *args, **kwargs)
```
```diff
@@ -137,7 +137,8 @@ class GetGLMFTHandle(Process):
 global glmft_handle
 glmft_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     multithreaded method
     for documentation see request_llms/bridge_all.py
```
```diff
@@ -21,7 +21,9 @@ import random

 # config_private.py holds your private secrets such as API keys and proxy URLs
 # at load time, the private config_private file (not tracked by git) is checked first; if it exists, it overrides config.py
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
+from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
+from toolbox import ChatBotWithCookies
 proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
     get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')

@@ -68,7 +70,7 @@ def verify_endpoint(endpoint):
         raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
     return endpoint

-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False):
     """
     Send to chatGPT and wait for the reply, completed in one shot without showing intermediate output; internally a streaming request is used to avoid the connection being cut midway.
     inputs:
@@ -113,6 +115,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
                 error_msg = get_full_error(chunk, stream_response).decode()
                 if "reduce the length" in error_msg:
                     raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
+                elif """type":"upstream_error","param":"307""" in error_msg:
+                    raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
                 else:
                     raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
             if ('data: [DONE]' in chunk_decoded): break # api2d finished normally
@@ -123,8 +127,9 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
             json_data = chunkjson['choices'][0]
             delta = json_data["delta"]
             if len(delta) == 0: break
-            if "role" in delta: continue
-            if "content" in delta:
+            if (not has_content) and has_role: continue
+            if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
+            if has_content: # has_role = True/False
                 result += delta["content"]
                 if not console_slience: print(delta["content"], end='')
                 if observe_window is not None:
```
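The rewritten loop no longer probes `delta` directly; it relies on pre-computed `has_content` / `has_role` flags so that half-formed chunks from third-party gateways cannot raise a `KeyError`. The `decode_chunk` helper added to `request_llms/bridge_cohere.py` later in this diff shows the pattern; a condensed, self-contained version for OpenAI-style chunks:

```python
import json

# Condensed version of the decode_chunk pattern used in this diff: extract
# safety flags from one SSE chunk before the main loop ever indexes into it.
def decode_openai_chunk(chunk: bytes):
    chunk_decoded = chunk.decode()
    chunkjson, has_content, has_role = None, False, False
    try:
        chunkjson = json.loads(chunk_decoded.lstrip("data:").strip())
        delta = chunkjson['choices'][0]['delta']
        has_content = delta.get('content') is not None
        has_role = 'role' in delta
    except Exception:
        pass  # malformed chunk: both flags stay False and the caller skips it
    return chunkjson, has_content, has_role

# A role-only preamble chunk is skipped; a content chunk is consumed.
preamble = b'data: {"choices":[{"delta":{"role":"assistant"}}]}'
content  = b'data: {"choices":[{"delta":{"content":"Hello"}}]}'
assert decode_openai_chunk(preamble)[1:] == (False, True)
assert decode_openai_chunk(content)[1:] == (True, False)
```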
```diff
@@ -143,7 +148,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     return result


-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
     """
     Send to chatGPT and stream the output.
     Used for the basic chat functionality.
@@ -169,7 +175,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

     raw_input = inputs
-    logging.info(f'[raw_input] {raw_input}')
+    # logging.info(f'[raw_input] {raw_input}')
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
@@ -250,7 +256,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 # the former is API2D's termination condition, the latter is OpenAI's
                 if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
                     # the data stream is judged to have ended and gpt_replying_buffer is complete
-                    logging.info(f'[response] {gpt_replying_buffer}')
+                    # logging.info(f'[response] {gpt_replying_buffer}')
+                    log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
                    break
                # handle the main body of the data stream
                status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
```
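Note the design choice here: plain `logging.info` calls are commented out in favor of the new `log_chat` helper from `toolbox`, which records one structured input/output pair per completed exchange instead of interleaving chat content with ordinary application logs. Only its call shape is visible in this diff; the storage backend (file layout, rotation) is defined in `toolbox` and is not shown:

```python
# The call shape used throughout this diff, invoked once per finished reply.
log_chat(
    llm_model=llm_kwargs["llm_model"],   # which backend produced the answer
    input_str=inputs,                    # the user-side prompt
    output_str=gpt_replying_buffer,      # the fully assembled reply
)
```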
```diff
@@ -262,7 +269,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                     # some third-party endpoints produce this error; tolerate it
                     continue
                 else:
-                    # some garbage third-party endpoints produce this error
+                    # past this point we are outside what a well-behaved endpoint should return; some garbage third-party endpoints end up here
+                    if chunkjson['choices'][0]["delta"]["content"] is None: continue # some garbage third-party endpoints produce this error; tolerate it
                     gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]

                 history[-1] = gpt_replying_buffer
@@ -354,6 +362,9 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
     model = llm_kwargs['llm_model']
     if llm_kwargs['llm_model'].startswith('api2d-'):
         model = llm_kwargs['llm_model'][len('api2d-'):]
+    if llm_kwargs['llm_model'].startswith('one-api-'):
+        model = llm_kwargs['llm_model'][len('one-api-'):]
+        model, _ = read_one_api_model_name(model)

     if model == "gpt-3.5-random": # pick randomly to sidestep OpenAI rate limits
         model = random.choice([
```
```diff
@@ -9,15 +9,15 @@
     functions with multithreading capability
     2. predict_no_ui_long_connection: supports multithreading
 """

-import os
-import json
-import time
-import gradio as gr
 import logging
+import os
+import time
 import traceback
+import json
 import requests
-import importlib
+from toolbox import get_conf, update_ui, trimmed_format_exc, encode_image, every_image_file_in_path, log_chat
+
+picture_system_prompt = "\n当回复图像时,必须说明正在回复哪张图像。所有图像仅在最后一个问题中提供,即使它们在历史记录中被提及。请使用'这是第X张图像:'的格式来指明您正在描述的是哪张图像。"
+Claude_3_Models = ["claude-3-haiku-20240307", "claude-3-sonnet-20240229", "claude-3-opus-20240229"]

 # config_private.py holds your private secrets such as API keys and proxy URLs
 # at load time, the private config_private file (not tracked by git) is checked first; if it exists, it overrides config.py
```
```diff
@@ -39,6 +39,34 @@ def get_full_error(chunk, stream_response):
             break
     return chunk

+def decode_chunk(chunk):
+    # read some fields ahead of time (used to detect anomalies)
+    chunk_decoded = chunk.decode()
+    chunkjson = None
+    is_last_chunk = False
+    need_to_pass = False
+    if chunk_decoded.startswith('data:'):
+        try:
+            chunkjson = json.loads(chunk_decoded[6:])
+        except:
+            need_to_pass = True
+            pass
+    elif chunk_decoded.startswith('event:'):
+        try:
+            event_type = chunk_decoded.split(':')[1].strip()
+            if event_type == 'content_block_stop' or event_type == 'message_stop':
+                is_last_chunk = True
+            elif event_type == 'content_block_start' or event_type == 'message_start':
+                need_to_pass = True
+                pass
+        except:
+            need_to_pass = True
+            pass
+    else:
+        need_to_pass = True
+        pass
+    return need_to_pass, chunkjson, is_last_chunk
+
+
 def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
     """
```
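`decode_chunk` classifies each raw SSE line from the Anthropic Messages API before the main loop touches it: `event:` lines mark stream boundaries, `data:` lines carry JSON deltas, and anything else is skipped. A self-contained check against synthetic stream lines (the JSON shape is simplified down to the two fields the loop below actually reads, `chunkjson['type']` and `chunkjson['delta']['text']`):

```python
# Synthetic Anthropic-style SSE lines, simplified to what this bridge consumes.
samples = [
    b'event: message_start',                                            # skipped
    b'data: {"type": "content_block_delta", "delta": {"text": "Hi"}}',  # consumed
    b'event: message_stop',                                             # ends the stream
]
for chunk in samples:
    need_to_pass, chunkjson, is_last_chunk = decode_chunk(chunk)
    if need_to_pass:
        continue
    if is_last_chunk:
        break
    if chunkjson and chunkjson['type'] == 'content_block_delta':
        print(chunkjson['delta']['text'], end='')  # prints "Hi"
```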
```diff
@@ -54,50 +82,67 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     observe_window = None:
         used to pass the partial output across threads; most of the time it exists only for a fancy visual effect and can be left empty. observe_window[0]: observation window. observe_window[1]: watchdog
     """
-    from anthropic import Anthropic
     watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
-    prompt = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
-    retry = 0
     if len(ANTHROPIC_API_KEY) == 0:
         raise RuntimeError("没有设置ANTHROPIC_API_KEY选项")
+    if inputs == "": inputs = "空空如也的输入栏"
+    headers, message = generate_payload(inputs, llm_kwargs, history, sys_prompt, image_paths=None)
+    retry = 0
+
+
     while True:
         try:
             # make a POST request to the API endpoint, stream=False
             from .bridge_all import model_info
-            anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
-            # endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
-            # with ProxyNetworkActivate()
-            stream = anthropic.completions.create(
-                prompt=prompt,
-                max_tokens_to_sample=4096,       # The maximum number of tokens to generate before stopping.
-                model=llm_kwargs['llm_model'],
-                stream=True,
-                temperature = llm_kwargs['temperature']
-            )
-            break
-        except Exception as e:
+            endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+            response = requests.post(endpoint, headers=headers, json=message,
+                                     proxies=proxies, stream=True, timeout=TIMEOUT_SECONDS);break
+        except requests.exceptions.ReadTimeout as e:
             retry += 1
             traceback.print_exc()
             if retry > MAX_RETRY: raise TimeoutError
             if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
+    stream_response = response.iter_lines()
     result = ''
-    try:
-        for completion in stream:
-            result += completion.completion
-            if not console_slience: print(completion.completion, end='')
-            if observe_window is not None:
-                # observation window: surface the data fetched so far
-                if len(observe_window) >= 1: observe_window[0] += completion.completion
-                # watchdog: if the dog is not fed within the deadline, terminate
-                if len(observe_window) >= 2:
-                    if (time.time()-observe_window[1]) > watch_dog_patience:
-                        raise RuntimeError("用户取消了程序。")
-    except Exception as e:
-        traceback.print_exc()
+    while True:
+        try: chunk = next(stream_response)
+        except StopIteration:
+            break
+        except requests.exceptions.ConnectionError:
+            chunk = next(stream_response) # it failed once; retry one more time? beyond that there is nothing we can do.
+        need_to_pass, chunkjson, is_last_chunk = decode_chunk(chunk)
+        if chunk:
+            try:
+                if need_to_pass:
+                    pass
+                elif is_last_chunk:
+                    # logging.info(f'[response] {result}')
+                    break
+                else:
+                    if chunkjson and chunkjson['type'] == 'content_block_delta':
+                        result += chunkjson['delta']['text']
+                        print(chunkjson['delta']['text'], end='')
+                        if observe_window is not None:
+                            # observation window: surface the data fetched so far
+                            if len(observe_window) >= 1:
+                                observe_window[0] += chunkjson['delta']['text']
+                            # watchdog: if the dog is not fed within the deadline, terminate
+                            if len(observe_window) >= 2:
+                                if (time.time()-observe_window[1]) > watch_dog_patience:
+                                    raise RuntimeError("用户取消了程序。")
+            except Exception as e:
+                chunk = get_full_error(chunk, stream_response)
+                chunk_decoded = chunk.decode()
+                error_msg = chunk_decoded
+                print(error_msg)
+                raise RuntimeError("Json解析不合常规")
+
     return result

+
+def make_media_input(history,inputs,image_paths):
+    for image_path in image_paths:
+        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
+    return inputs
+

 def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
     """
```
```diff
@@ -109,7 +154,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     chatbot is the conversation list shown in the WebUI; modify it and yield it back to update the chat interface directly
     additional_fn indicates which button was clicked; see functional.py
     """
-    from anthropic import Anthropic
+    if inputs == "": inputs = "空空如也的输入栏"
     if len(ANTHROPIC_API_KEY) == 0:
         chatbot.append((inputs, "没有设置ANTHROPIC_API_KEY"))
         yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
@@ -119,13 +164,23 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

-    raw_input = inputs
-    logging.info(f'[raw_input] {raw_input}')
-    chatbot.append((inputs, ""))
-    yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
+    have_recent_file, image_paths = every_image_file_in_path(chatbot)
+    if len(image_paths) > 20:
+        chatbot.append((inputs, "图片数量超过api上限(20张)"))
+        yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")
+        return
+
+    if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and have_recent_file:
+        if inputs == "" or inputs == "空空如也的输入栏": inputs = "请描述给出的图片"
+        system_prompt += picture_system_prompt  # there is no separate field that stores an image-bearing history, so the prompt is the only way to pin down which image is which
+        chatbot.append((make_media_input(history,inputs, image_paths), ""))
+        yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
+    else:
+        chatbot.append((inputs, ""))
+        yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI

     try:
-        prompt = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
+        headers, message = generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths)
     except RuntimeError as e:
         chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
         yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # refresh the UI
```
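`every_image_file_in_path` and `encode_image` come from `toolbox` and are not expanded in this diff; the Claude 3 image path only needs the latter to return a file's raw bytes as a base64 string, as required by the `"source": {"type": "base64", ...}` blocks built in `generate_payload` below. A plausible sketch of that contract (the real helper may differ in details):

```python
import base64

# Hypothetical stand-in for toolbox.encode_image, shown only to make the
# payload construction below concrete; the real helper is not in this diff.
def encode_image_sketch(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")  # raw bytes -> base64 text
```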
```diff
@@ -138,91 +193,117 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         try:
             # make a POST request to the API endpoint, stream=True
             from .bridge_all import model_info
-            anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
-            # endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
-            # with ProxyNetworkActivate()
-            stream = anthropic.completions.create(
-                prompt=prompt,
-                max_tokens_to_sample=4096,       # The maximum number of tokens to generate before stopping.
-                model=llm_kwargs['llm_model'],
-                stream=True,
-                temperature = llm_kwargs['temperature']
-            )
-
-            break
-        except:
+            endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+            response = requests.post(endpoint, headers=headers, json=message,
+                                     proxies=proxies, stream=True, timeout=TIMEOUT_SECONDS);break
+        except requests.exceptions.ReadTimeout as e:
             retry += 1
-            chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
-            retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
-            yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # refresh the UI
+            traceback.print_exc()
             if retry > MAX_RETRY: raise TimeoutError
+            if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
+    stream_response = response.iter_lines()
     gpt_replying_buffer = ""
-    for completion in stream:
-        try:
-            gpt_replying_buffer = gpt_replying_buffer + completion.completion
-            history[-1] = gpt_replying_buffer
-            chatbot[-1] = (history[-2], history[-1])
-            yield from update_ui(chatbot=chatbot, history=history, msg='正常') # refresh the UI
-
-        except Exception as e:
-            from toolbox import regular_txt_to_markdown
-            tb_str = '```\n' + trimmed_format_exc() + '```'
-            chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str}")
-            yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + tb_str) # refresh the UI
-            return
+
+    while True:
+        try: chunk = next(stream_response)
+        except StopIteration:
+            break
+        except requests.exceptions.ConnectionError:
+            chunk = next(stream_response) # it failed once; retry one more time? beyond that there is nothing we can do.
+        need_to_pass, chunkjson, is_last_chunk = decode_chunk(chunk)
+        if chunk:
+            try:
+                if need_to_pass:
+                    pass
+                elif is_last_chunk:
+                    log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
+                    # logging.info(f'[response] {gpt_replying_buffer}')
+                    break
+                else:
+                    if chunkjson and chunkjson['type'] == 'content_block_delta':
+                        gpt_replying_buffer += chunkjson['delta']['text']
+                        history[-1] = gpt_replying_buffer
+                        chatbot[-1] = (history[-2], history[-1])
+                        yield from update_ui(chatbot=chatbot, history=history, msg='正常') # refresh the UI
+
+            except Exception as e:
+                chunk = get_full_error(chunk, stream_response)
+                chunk_decoded = chunk.decode()
+                error_msg = chunk_decoded
+                print(error_msg)
+                raise RuntimeError("Json解析不合常规")
+
+def multiple_picture_types(image_paths):
+    """
+    Return image/jpeg, image/png, image/gif, or image/webp according to the image type; fall back to image/jpeg when undecidable
+    """
+    for image_path in image_paths:
+        if image_path.endswith('.jpeg') or image_path.endswith('.jpg'):
+            return 'image/jpeg'
+        elif image_path.endswith('.png'):
+            return 'image/png'
+        elif image_path.endswith('.gif'):
+            return 'image/gif'
+        elif image_path.endswith('.webp'):
+            return 'image/webp'
+    return 'image/jpeg'
+
+def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
+
-# https://github.com/jtsang4/claude-to-chatgpt/blob/main/claude_to_chatgpt/adapter.py
-def convert_messages_to_prompt(messages):
-    prompt = ""
-    role_map = {
-        "system": "Human",
-        "user": "Human",
-        "assistant": "Assistant",
-    }
-    for message in messages:
-        role = message["role"]
-        content = message["content"]
-        transformed_role = role_map[role]
-        prompt += f"\n\n{transformed_role.capitalize()}: {content}"
-    prompt += "\n\nAssistant: "
-    return prompt
-
-def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
     """
     Integrate all the information, pick the LLM model, and build the http request in preparation for sending it
     """
-    from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

     conversation_cnt = len(history) // 2

-    messages = [{"role": "system", "content": system_prompt}]
+    messages = []

     if conversation_cnt:
         for index in range(0, 2*conversation_cnt, 2):
             what_i_have_asked = {}
             what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
+            what_i_have_asked["content"] = [{"type": "text", "text": history[index]}]
             what_gpt_answer = {}
             what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
-            if what_i_have_asked["content"] != "":
-                if what_gpt_answer["content"] == "": continue
-                if what_gpt_answer["content"] == timeout_bot_msg: continue
+            what_gpt_answer["content"] = [{"type": "text", "text": history[index+1]}]
+            if what_i_have_asked["content"][0]["text"] != "":
+                if what_i_have_asked["content"][0]["text"] == "": continue
+                if what_i_have_asked["content"][0]["text"] == timeout_bot_msg: continue
                 messages.append(what_i_have_asked)
                 messages.append(what_gpt_answer)
             else:
-                messages[-1]['content'] = what_gpt_answer['content']
+                messages[-1]['content'][0]['text'] = what_gpt_answer['content'][0]['text']

-    what_i_ask_now = {}
-    what_i_ask_now["role"] = "user"
-    what_i_ask_now["content"] = inputs
+    if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and image_paths:
+        what_i_ask_now = {}
+        what_i_ask_now["role"] = "user"
+        what_i_ask_now["content"] = []
+        for image_path in image_paths:
+            what_i_ask_now["content"].append({
+                "type": "image",
+                "source": {
+                    "type": "base64",
+                    "media_type": multiple_picture_types(image_paths),
+                    "data": encode_image(image_path),
+                }
+            })
+        what_i_ask_now["content"].append({"type": "text", "text": inputs})
+    else:
+        what_i_ask_now = {}
+        what_i_ask_now["role"] = "user"
+        what_i_ask_now["content"] = [{"type": "text", "text": inputs}]
     messages.append(what_i_ask_now)
-    prompt = convert_messages_to_prompt(messages)
-
-    return prompt
+    # start assembling headers and message
+    headers = {
+        'x-api-key': ANTHROPIC_API_KEY,
+        'anthropic-version': '2023-06-01',
+        'content-type': 'application/json'
+    }
+    payload = {
+        'model': llm_kwargs['llm_model'],
+        'max_tokens': 4096,
+        'messages': messages,
+        'temperature': llm_kwargs['temperature'],
+        'stream': True,
+        'system': system_prompt
+    }
+    return headers, payload
```
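After this change the bridge no longer depends on the `anthropic` SDK at all: `generate_payload` returns plain `headers`/`payload` dictionaries and the request goes out over `requests` with the repo's proxy settings. A minimal standalone call in the same shape (the endpoint URL is an assumption standing in for `claude_endpoint`, which is configured elsewhere in the repo, as is `ANTHROPIC_API_KEY`):

```python
import requests

# Minimal sketch of the request this bridge now sends.
headers = {
    'x-api-key': ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json',
}
payload = {
    'model': 'claude-3-haiku-20240307',
    'max_tokens': 4096,
    'messages': [{"role": "user", "content": [{"type": "text", "text": "Hello"}]}],
    'temperature': 1.0,
    'stream': True,
    'system': "You are a helpful assistant.",
}
response = requests.post("https://api.anthropic.com/v1/messages",  # assumed endpoint
                         headers=headers, json=payload, stream=True, timeout=30)
for line in response.iter_lines():
    need_to_pass, chunkjson, is_last_chunk = decode_chunk(line)  # reuse the helper above
    if is_last_chunk: break
    if chunkjson and chunkjson['type'] == 'content_block_delta':
        print(chunkjson['delta']['text'], end='')
```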
328
request_llms/bridge_cohere.py
普通文件
328
request_llms/bridge_cohere.py
普通文件
@@ -0,0 +1,328 @@
|
|||||||
|
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
|
||||||
|
|
||||||
|
"""
|
||||||
|
该文件中主要包含三个函数
|
||||||
|
|
||||||
|
不具备多线程能力的函数:
|
||||||
|
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
|
||||||
|
|
||||||
|
具备多线程调用能力的函数
|
||||||
|
2. predict_no_ui_long_connection:支持多线程
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import time
|
||||||
|
import gradio as gr
|
||||||
|
import logging
|
||||||
|
import traceback
|
||||||
|
import requests
|
||||||
|
import importlib
|
||||||
|
import random
|
||||||
|
|
||||||
|
# config_private.py放自己的秘密如API和代理网址
|
||||||
|
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
|
||||||
|
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
|
||||||
|
from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
|
||||||
|
from toolbox import ChatBotWithCookies
|
||||||
|
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
|
||||||
|
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
|
||||||
|
|
||||||
|
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
|
||||||
|
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
|
||||||
|
|
||||||
|
def get_full_error(chunk, stream_response):
|
||||||
|
"""
|
||||||
|
获取完整的从Cohere返回的报错
|
||||||
|
"""
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
chunk += next(stream_response)
|
||||||
|
except:
|
||||||
|
break
|
||||||
|
return chunk
|
||||||
|
|
||||||
|
def decode_chunk(chunk):
|
||||||
|
# 提前读取一些信息 (用于判断异常)
|
||||||
|
chunk_decoded = chunk.decode()
|
||||||
|
chunkjson = None
|
||||||
|
has_choices = False
|
||||||
|
choice_valid = False
|
||||||
|
has_content = False
|
||||||
|
has_role = False
|
||||||
|
try:
|
||||||
|
chunkjson = json.loads(chunk_decoded)
|
||||||
|
has_choices = 'choices' in chunkjson
|
||||||
|
if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
|
||||||
|
if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
|
||||||
|
if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
|
||||||
|
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
|
||||||
|
|
||||||
|
from functools import lru_cache
|
||||||
|
@lru_cache(maxsize=32)
|
||||||
|
def verify_endpoint(endpoint):
|
||||||
|
"""
|
||||||
|
检查endpoint是否可用
|
||||||
|
"""
|
||||||
|
if "你亲手写的api名称" in endpoint:
|
||||||
|
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
|
||||||
|
return endpoint
|
||||||
|
|
||||||
|
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False):
|
||||||
|
"""
|
||||||
|
发送,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
|
||||||
|
inputs:
|
||||||
|
是本次问询的输入
|
||||||
|
sys_prompt:
|
||||||
|
系统静默prompt
|
||||||
|
llm_kwargs:
|
||||||
|
内部调优参数
|
||||||
|
history:
|
||||||
|
是之前的对话列表
|
||||||
|
observe_window = None:
|
||||||
|
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
|
||||||
|
"""
|
||||||
|
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
|
||||||
|
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
|
||||||
|
retry = 0
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
# make a POST request to the API endpoint, stream=False
|
||||||
|
from .bridge_all import model_info
|
||||||
|
endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
|
||||||
|
response = requests.post(endpoint, headers=headers, proxies=proxies,
|
||||||
|
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
|
||||||
|
except requests.exceptions.ReadTimeout as e:
|
||||||
|
retry += 1
|
||||||
|
traceback.print_exc()
|
||||||
|
if retry > MAX_RETRY: raise TimeoutError
|
||||||
|
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
|
||||||
|
|
||||||
|
stream_response = response.iter_lines()
|
||||||
|
result = ''
|
||||||
|
json_data = None
|
||||||
|
while True:
|
||||||
|
try: chunk = next(stream_response)
|
||||||
|
except StopIteration:
|
||||||
|
break
|
||||||
|
except requests.exceptions.ConnectionError:
|
||||||
|
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
|
||||||
|
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
|
||||||
|
if chunkjson['event_type'] == 'stream-start': continue
|
||||||
|
if chunkjson['event_type'] == 'text-generation':
|
||||||
|
result += chunkjson["text"]
|
||||||
|
if not console_slience: print(chunkjson["text"], end='')
|
||||||
|
if observe_window is not None:
|
||||||
|
# 观测窗,把已经获取的数据显示出去
|
||||||
|
if len(observe_window) >= 1:
|
||||||
|
observe_window[0] += chunkjson["text"]
|
||||||
|
# 看门狗,如果超过期限没有喂狗,则终止
|
||||||
|
if len(observe_window) >= 2:
|
||||||
|
if (time.time()-observe_window[1]) > watch_dog_patience:
|
||||||
|
raise RuntimeError("用户取消了程序。")
|
||||||
|
if chunkjson['event_type'] == 'stream-end': break
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
|
||||||
|
history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
|
||||||
|
"""
|
||||||
|
发送至chatGPT,流式获取输出。
|
||||||
|
用于基础的对话功能。
|
||||||
|
inputs 是本次问询的输入
|
||||||
|
top_p, temperature是chatGPT的内部调优参数
|
||||||
|
history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
|
||||||
|
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
|
||||||
|
additional_fn代表点击的哪个按钮,按钮见functional.py
|
||||||
|
"""
|
||||||
|
# if is_any_api_key(inputs):
|
||||||
|
# chatbot._cookies['api_key'] = inputs
|
||||||
|
# chatbot.append(("输入已识别为Cohere的api_key", what_keys(inputs)))
|
||||||
|
# yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
|
||||||
|
# return
|
||||||
|
# elif not is_any_api_key(chatbot._cookies['api_key']):
|
||||||
|
# chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。"))
|
||||||
|
# yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面
|
||||||
|
# return
|
||||||
|
|
||||||
|
user_input = inputs
|
||||||
|
if additional_fn is not None:
|
||||||
|
from core_functional import handle_core_functionality
|
||||||
|
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||||
|
|
||||||
|
raw_input = inputs
|
||||||
|
# logging.info(f'[raw_input] {raw_input}')
|
||||||
|
chatbot.append((inputs, ""))
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
|
||||||
|
|
||||||
|
# check mis-behavior
|
||||||
|
if is_the_upload_folder(user_input):
|
||||||
|
chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
try:
|
||||||
|
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
|
||||||
|
except RuntimeError as e:
|
||||||
|
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
|
||||||
|
return
|
||||||
|
|
||||||
|
# 检查endpoint是否合法
|
||||||
|
try:
|
||||||
|
from .bridge_all import model_info
|
||||||
|
endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
|
||||||
|
except:
|
||||||
|
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||||
|
chatbot[-1] = (inputs, tb_str)
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
|
||||||
|
return
|
||||||
|
|
||||||
|
history.append(inputs); history.append("")
|
||||||
|
|
||||||
|
retry = 0
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
# make a POST request to the API endpoint, stream=True
|
||||||
|
response = requests.post(endpoint, headers=headers, proxies=proxies,
|
||||||
|
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
|
||||||
|
except:
|
||||||
|
retry += 1
|
||||||
|
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
|
||||||
|
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
|
||||||
|
if retry > MAX_RETRY: raise TimeoutError
|
||||||
|
|
||||||
|
gpt_replying_buffer = ""
|
||||||
|
|
||||||
|
is_head_of_the_stream = True
|
||||||
|
if stream:
|
||||||
|
stream_response = response.iter_lines()
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
chunk = next(stream_response)
|
||||||
|
except StopIteration:
|
||||||
|
# 非Cohere官方接口的出现这样的报错,Cohere和API2D不会走这里
|
||||||
|
chunk_decoded = chunk.decode()
|
||||||
|
error_msg = chunk_decoded
|
||||||
|
# 其他情况,直接返回报错
|
||||||
|
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="非Cohere官方接口返回了错误:" + chunk.decode()) # 刷新界面
|
||||||
|
return
|
||||||
|
|
||||||
|
# 提前读取一些信息 (用于判断异常)
|
||||||
|
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
|
||||||
|
|
||||||
|
if chunkjson:
|
||||||
|
try:
|
||||||
|
if chunkjson['event_type'] == 'stream-start':
|
||||||
|
continue
|
||||||
|
if chunkjson['event_type'] == 'text-generation':
|
||||||
|
gpt_replying_buffer = gpt_replying_buffer + chunkjson["text"]
|
||||||
|
history[-1] = gpt_replying_buffer
|
||||||
|
chatbot[-1] = (history[-2], history[-1])
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
|
||||||
                if chunkjson['event_type'] == 'stream-end':
                    log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
                    history[-1] = gpt_replying_buffer
                    chatbot[-1] = (history[-2], history[-1])
                    yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
                    break
            except Exception as e:
                yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
                chunk = get_full_error(chunk, stream_response)
                chunk_decoded = chunk.decode()
                error_msg = chunk_decoded
                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
                yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
                print(error_msg)
                return


def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
    from .bridge_all import model_info
    Cohere_website = ' 请登录Cohere查看详情 https://platform.Cohere.com/signup'
    if "reduce the length" in error_msg:
        if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出
        history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
                               max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
    elif "does not exist" in error_msg:
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
    elif "Incorrect API key" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. Cohere以提供了不正确的API_KEY为由, 拒绝服务. " + Cohere_website)
    elif "exceeded your current quota" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. Cohere以账户额度不足为由, 拒绝服务." + Cohere_website)
    elif "account is not active" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
    elif "associated with a deactivated account" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
    elif "API key has been deactivated" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. Cohere以账户失效为由, 拒绝服务." + Cohere_website)
    elif "bad forward key" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
    elif "Not enough point" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
    else:
        from toolbox import regular_txt_to_markdown
        tb_str = '```\n' + trimmed_format_exc() + '```'
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
    return chatbot, history


def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
    """
    整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
    """
    # if not is_any_api_key(llm_kwargs['api_key']):
    #     raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")

    api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    if API_ORG.startswith('org-'): headers.update({"Cohere-Organization": API_ORG})
    if llm_kwargs['llm_model'].startswith('azure-'):
        headers.update({"api-key": api_key})
        if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
            azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
            headers.update({"api-key": azure_api_key_unshared})

    conversation_cnt = len(history) // 2

    messages = [{"role": "SYSTEM", "message": system_prompt}]
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "USER"
            what_i_have_asked["message"] = history[index]
            what_gpt_answer = {}
            what_gpt_answer["role"] = "CHATBOT"
            what_gpt_answer["message"] = history[index+1]
            if what_i_have_asked["message"] != "":
                if what_gpt_answer["message"] == "": continue
                if what_gpt_answer["message"] == timeout_bot_msg: continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['message'] = what_gpt_answer['message']

    model = llm_kwargs['llm_model']
    if model.startswith('cohere-'): model = model[len('cohere-'):]
    payload = {
        "model": model,
        "message": inputs,
        "chat_history": messages,
        "temperature": llm_kwargs['temperature'],  # 1.0,
        "top_p": llm_kwargs['top_p'],  # 1.0,
        "n": 1,
        "stream": stream,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    }

    return headers, payload
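For orientation, a minimal sketch of the request body that generate_payload above assembles. Unlike the OpenAI schema, Cohere's chat endpoint takes the current turn in a top-level "message" field, prior turns in "chat_history", and uppercase SYSTEM/USER/CHATBOT roles. All field values below are illustrative placeholders, not output captured from the code ("command-r" simply stands in for a model name after the "cohere-" prefix has been stripped):

# Hypothetical illustration of the payload shape built by generate_payload above.
example_payload = {
    "model": "command-r",                        # "cohere-" prefix already stripped
    "message": "请概括这段文字",                    # the current user turn
    "chat_history": [
        {"role": "SYSTEM", "message": "You are a helpful assistant."},
        {"role": "USER", "message": "你好"},
        {"role": "CHATBOT", "message": "你好,有什么可以帮你?"},
    ],
    "temperature": 1.0,
    "top_p": 1.0,
    "n": 1,
    "stream": True,
    "presence_penalty": 0,
    "frequency_penalty": 0,
}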
@@ -7,6 +7,7 @@ import re
 import os
 import time
 from request_llms.com_google import GoogleChatInit
+from toolbox import ChatBotWithCookies
 from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc
 
 proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
@@ -20,7 +21,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     if get_conf("GEMINI_API_KEY") == "":
         raise ValueError(f"请配置 GEMINI_API_KEY。")
 
-    genai = GoogleChatInit()
+    genai = GoogleChatInit(llm_kwargs)
     watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
     gpt_replying_buffer = ''
     stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
@@ -44,7 +45,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     return gpt_replying_buffer
 
 
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
     # 检查API_KEY
     if get_conf("GEMINI_API_KEY") == "":
         yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
@@ -57,6 +59,10 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
 
     if "vision" in llm_kwargs["llm_model"]:
         have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+        if not have_recent_file:
+            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+            yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+            return
         def make_media_input(inputs, image_paths):
             for image_path in image_paths:
                 inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
@@ -66,7 +72,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
 
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history)
-    genai = GoogleChatInit()
+    genai = GoogleChatInit(llm_kwargs)
     retry = 0
     while True:
         try:
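The recurring change in these hunks is the GoogleChatInit constructor: it now receives llm_kwargs so the Gemini endpoint can be resolved per model from bridge_all's model_info table rather than being hard-coded (the matching constructor body appears in the com_google.py hunk further below). A hedged sketch of the difference, with an illustrative model name:

# Illustrative only; 'gemini-pro' stands in for whatever model the user picked.
llm_kwargs = {'llm_model': 'gemini-pro'}

# New style: the endpoint comes from the per-model registry, so mirrors or
# self-hosted proxies configured in bridge_all.model_info are honored.
genai = GoogleChatInit(llm_kwargs)

# Old style (removed): GoogleChatInit() pinned every request to the public
# generativelanguage.googleapis.com URL, making the endpoint unconfigurable.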
@@ -1,10 +1,10 @@
-
-from transformers import AutoModel, AutoTokenizer
 import time
 import threading
 import importlib
 from toolbox import update_ui, get_conf
 from multiprocessing import Process, Pipe
+from transformers import AutoModel, AutoTokenizer
+
 
 load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
 
@@ -106,7 +106,8 @@ class GetGLMHandle(Process):
 global llama_glm_handle
 llama_glm_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -1,10 +1,10 @@
-
-from transformers import AutoModel, AutoTokenizer
 import time
 import threading
 import importlib
 from toolbox import update_ui, get_conf
 from multiprocessing import Process, Pipe
+from transformers import AutoModel, AutoTokenizer
+
 
 load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
 
@@ -106,7 +106,8 @@ class GetGLMHandle(Process):
 global pangu_glm_handle
 pangu_glm_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -106,7 +106,8 @@ class GetGLMHandle(Process):
 global rwkv_glm_handle
 rwkv_glm_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
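All three jittorllms hunks (llama, pangualpha, rwkv) apply the same two-part edit used across this changeset: the heavy transformers import moves below the lightweight ones, and predict_no_ui_long_connection gains type annotations. The annotated signature, extracted as a sketch (the annotations only document the existing contract; each bridge supplies its own body):

def predict_no_ui_long_connection(inputs: str, llm_kwargs: dict, history: list = [],
                                  sys_prompt: str = "",
                                  observe_window: list = [], console_slience: bool = False):
    # Placeholder body for illustration; the real implementations are
    # per-model and documented in request_llms/bridge_all.py.
    raise NotImplementedError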
request_llms/bridge_moonshot.py (new file, 197 lines)
@@ -0,0 +1,197 @@
# encoding: utf-8
# @Time   : 2024/3/3
# @Author : Spike
# @Descr  :
import json
import os
import time
import logging

from toolbox import get_conf, update_ui, log_chat
from toolbox import ChatBotWithCookies

import requests


class MoonShotInit:

    def __init__(self):
        self.llm_model = None
        self.url = 'https://api.moonshot.cn/v1/chat/completions'
        self.api_key = get_conf('MOONSHOT_API_KEY')

    def __converter_file(self, user_input: str):
        what_ask = []
        for f in user_input.splitlines():
            if os.path.exists(f):
                files = []
                if os.path.isdir(f):
                    file_list = os.listdir(f)
                    files.extend([os.path.join(f, file) for file in file_list])
                else:
                    files.append(f)
                for file in files:
                    if file.split('.')[-1] in ['pdf']:
                        with open(file, 'r') as fp:
                            from crazy_functions.crazy_utils import read_and_clean_pdf_text
                            file_content, _ = read_and_clean_pdf_text(fp)
                            what_ask.append({"role": "system", "content": file_content})
        return what_ask

    def __converter_user(self, user_input: str):
        what_i_ask_now = {"role": "user", "content": user_input}
        return what_i_ask_now

    def __conversation_history(self, history):
        conversation_cnt = len(history) // 2
        messages = []
        if conversation_cnt:
            for index in range(0, 2 * conversation_cnt, 2):
                what_i_have_asked = {
                    "role": "user",
                    "content": str(history[index])
                }
                what_gpt_answer = {
                    "role": "assistant",
                    "content": str(history[index + 1])
                }
                if what_i_have_asked["content"] != "":
                    if what_gpt_answer["content"] == "": continue
                    messages.append(what_i_have_asked)
                    messages.append(what_gpt_answer)
                else:
                    messages[-1]['content'] = what_gpt_answer['content']
        return messages

    def _analysis_content(self, chuck):
        chunk_decoded = chuck.decode("utf-8")
        chunk_json = {}
        content = ""
        try:
            chunk_json = json.loads(chunk_decoded[6:])  # 去掉SSE前缀 "data: " 后再解析
            content = chunk_json['choices'][0]["delta"].get("content", "")
        except:
            pass
        return chunk_decoded, chunk_json, content

    def generate_payload(self, inputs, llm_kwargs, history, system_prompt, stream):
        self.llm_model = llm_kwargs['llm_model']
        llm_kwargs.update({'use-key': self.api_key})
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.extend(self.__converter_file(inputs))
        for i in history[0::2]:  # 历史文件继续上传
            messages.extend(self.__converter_file(i))
        messages.extend(self.__conversation_history(history))
        messages.append(self.__converter_user(inputs))
        header = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.api_key}",
        }
        payload = {
            "model": self.llm_model,
            "messages": messages,
            "temperature": llm_kwargs.get('temperature', 0.3),  # 1.0,
            "top_p": llm_kwargs.get('top_p', 1.0),  # 1.0,
            "n": llm_kwargs.get('n_choices', 1),
            "stream": stream
        }
        return payload, header

    def generate_messages(self, inputs, llm_kwargs, history, system_prompt, stream):
        payload, headers = self.generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
        response = requests.post(self.url, headers=headers, json=payload, stream=stream)

        chunk_content = ""
        gpt_bro_result = ""
        for chuck in response.iter_lines():
            chunk_decoded, check_json, content = self._analysis_content(chuck)
            chunk_content += chunk_decoded
            if content:
                gpt_bro_result += content
                yield content, gpt_bro_result, ''
            else:
                error_msg = msg_handle_error(llm_kwargs, chunk_decoded)
                if error_msg:
                    yield error_msg, gpt_bro_result, error_msg
                    break


def msg_handle_error(llm_kwargs, chunk_decoded):
    use_ket = llm_kwargs.get('use-key', '')
    api_key_encryption = use_ket[:8] + '****' + use_ket[-5:]
    openai_website = f' 请登录OpenAI查看详情 https://platform.openai.com/signup api-key: `{api_key_encryption}`'
    error_msg = ''
    if "does not exist" in chunk_decoded:
        error_msg = f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格."
    elif "Incorrect API key" in chunk_decoded:
        error_msg = f"[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务." + openai_website
    elif "exceeded your current quota" in chunk_decoded:
        error_msg = "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website
    elif "account is not active" in chunk_decoded:
        error_msg = "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website
    elif "associated with a deactivated account" in chunk_decoded:
        error_msg = "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website
    elif "API key has been deactivated" in chunk_decoded:
        error_msg = "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website
    elif "bad forward key" in chunk_decoded:
        error_msg = "[Local Message] Bad forward key. API2D账户额度不足."
    elif "Not enough point" in chunk_decoded:
        error_msg = "[Local Message] Not enough point. API2D账户点数不足."
    elif 'error' in str(chunk_decoded).lower():
        try:
            error_msg = json.dumps(json.loads(chunk_decoded[6:]), indent=4, ensure_ascii=False)
        except:
            error_msg = chunk_decoded
    return error_msg


def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
    chatbot.append([inputs, ""])

    if additional_fn is not None:
        from core_functional import handle_core_functionality
        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
    yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")  # 刷新界面
    gpt_bro_init = MoonShotInit()
    history.extend([inputs, ''])
    stream_response = gpt_bro_init.generate_messages(inputs, llm_kwargs, history, system_prompt, stream)
    for content, gpt_bro_result, error_bro_meg in stream_response:
        chatbot[-1] = [inputs, gpt_bro_result]
        history[-1] = gpt_bro_result
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        if error_bro_meg:
            chatbot[-1] = [inputs, error_bro_meg]
            history = history[:-2]
            yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
            break
    log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_bro_result)


def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
                                  console_slience=False):
    gpt_bro_init = MoonShotInit()
    watch_dog_patience = 60  # 看门狗的耐心, 设置60秒即可
    stream_response = gpt_bro_init.generate_messages(inputs, llm_kwargs, history, sys_prompt, True)
    moonshot_bro_result = ''
    for content, moonshot_bro_result, error_bro_meg in stream_response:
        moonshot_bro_result = moonshot_bro_result
        if error_bro_meg:
            if len(observe_window) >= 3:
                observe_window[2] = error_bro_meg
            return f'{moonshot_bro_result} 对话错误'
        # 观测窗
        if len(observe_window) >= 1:
            observe_window[0] = moonshot_bro_result
        if len(observe_window) >= 2:
            if (time.time() - observe_window[1]) > watch_dog_patience:
                observe_window[2] = "请求超时,程序终止。"
                raise RuntimeError(f"{moonshot_bro_result} 程序终止。")
    return moonshot_bro_result


if __name__ == '__main__':
    moon_ai = MoonShotInit()
    for g in moon_ai.generate_messages('hello', {'llm_model': 'moonshot-v1-8k'},
                                       [], '', True):
        print(g)
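The `chunk_decoded[6:]` slices in `_analysis_content` and `msg_handle_error` above rely on the server-sent-events framing Moonshot shares with OpenAI-style APIs: each payload line arrives as `data: {json}`, so the first six characters are protocol prefix rather than JSON. A self-contained illustration of that parsing step (the sample bytes are invented for demonstration):

import json

# Invented sample of one SSE line as it would arrive from the API.
raw_chunk = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'

decoded = raw_chunk.decode("utf-8")
assert decoded.startswith("data: ")          # the 6-character SSE prefix
body = json.loads(decoded[6:])               # same slice as _analysis_content above
print(body["choices"][0]["delta"].get("content", ""))  # -> "Hello"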
@@ -171,7 +171,8 @@ class GetGLMHandle(Process):
 global moss_handle
 moss_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -117,7 +117,8 @@ def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
         raise RuntimeError(dec['error_msg'])
 
 
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     ⭐多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -146,9 +147,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     yield from update_ui(chatbot=chatbot, history=history)
     # 开始接收回复
     try:
+        response = f"[Local Message] 等待{model_name}响应中 ..."
         for response in generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
             chatbot[-1] = (inputs, response)
             yield from update_ui(chatbot=chatbot, history=history)
+        history.extend([inputs, response])
+        yield from update_ui(chatbot=chatbot, history=history)
     except ConnectionAbortedError as e:
         from .bridge_all import model_info
         if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出
@@ -157,10 +161,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
         yield from update_ui(chatbot=chatbot, history=history, msg="异常") # 刷新界面
         return
-
-    # 总结输出
-    response = f"[Local Message] {model_name}响应异常 ..."
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
-    history.extend([inputs, response])
-    yield from update_ui(chatbot=chatbot, history=history)
+    except RuntimeError as e:
+        tb_str = '```\n' + trimmed_format_exc() + '```'
+        chatbot[-1] = (chatbot[-1][0], tb_str)
+        yield from update_ui(chatbot=chatbot, history=history, msg="异常") # 刷新界面
+        return
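The `response = f"[Local Message] 等待{model_name}响应中 ..."` line added here (and repeated in the Qwen, 云雀 and 星火 hunks below) pre-binds the loop variable so that a stream which raises or yields nothing cannot leave `response` undefined when the code after the loop touches it. A minimal reproduction of the failure mode this guards against (illustrative only):

def empty_stream():
    # A generator that terminates without yielding, like a failed API stream.
    return
    yield

# Without pre-binding, the loop body never runs and `response` stays unbound;
# reading it afterwards would raise NameError.

# The guard used in these bridges: bind a placeholder before iterating.
response = "[Local Message] 等待响应中 ..."
for response in empty_stream():
    pass
print(response)  # prints the placeholder instead of crashing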
@@ -5,7 +5,8 @@ from toolbox import check_packages, report_exception
 
 model_name = 'Qwen'
 
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     ⭐多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -47,10 +48,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     if additional_fn is not None:
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+        chatbot[-1] = (inputs, "")
+        yield from update_ui(chatbot=chatbot, history=history)
 
     # 开始接收回复
     from .com_qwenapi import QwenRequestInstance
     sri = QwenRequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)
@@ -9,7 +9,8 @@ def validate_key():
     if YUNQUE_SECRET_KEY == '': return False
     return True
 
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     ⭐ 多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -56,6 +57,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # 开始接收回复
     from .com_skylark2api import YUNQUERequestInstance
     sri = YUNQUERequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)
@@ -13,7 +13,8 @@ def validate_key():
         return False
     return True
 
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     ⭐多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -52,6 +53,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # 开始接收回复
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)
request_llms/bridge_yimodel.py (new file, 283 lines)
@@ -0,0 +1,283 @@
# 借鉴自同目录下的bridge_chatgpt.py

"""
    该文件中主要包含三个函数

    不具备多线程能力的函数:
    1. predict: 正常对话时使用,具备完备的交互功能,不可多线程

    具备多线程调用能力的函数
    2. predict_no_ui_long_connection:支持多线程
"""

import json
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
import random

# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, trimmed_format_exc, is_the_upload_folder, read_one_api_model_name
proxies, TIMEOUT_SECONDS, MAX_RETRY, YIMODEL_API_KEY = \
    get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'YIMODEL_API_KEY')

timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                  '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'

def get_full_error(chunk, stream_response):
    """
    获取完整的从Openai返回的报错
    """
    while True:
        try:
            chunk += next(stream_response)
        except:
            break
    return chunk

def decode_chunk(chunk):
    # 提前读取一些信息(用于判断异常)
    chunk_decoded = chunk.decode()
    chunkjson = None
    is_last_chunk = False
    try:
        chunkjson = json.loads(chunk_decoded[6:])
        is_last_chunk = chunkjson.get("lastOne", False)
    except:
        pass
    return chunk_decoded, chunkjson, is_last_chunk

def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
    """
    发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
    inputs:
        是本次问询的输入
    sys_prompt:
        系统静默prompt
    llm_kwargs:
        chatGPT的内部调优参数
    history:
        是之前的对话列表
    observe_window = None:
        用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
    """
    watch_dog_patience = 5  # 看门狗的耐心, 设置5秒即可
    if inputs == "": inputs = "空空如也的输入栏"
    headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
    retry = 0
    while True:
        try:
            # make a POST request to the API endpoint, stream=False
            from .bridge_all import model_info
            endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
            response = requests.post(endpoint, headers=headers, proxies=proxies,
                                     json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
        except requests.exceptions.ReadTimeout as e:
            retry += 1
            traceback.print_exc()
            if retry > MAX_RETRY: raise TimeoutError
            if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')

    stream_response = response.iter_lines()
    result = ''
    is_head_of_the_stream = True
    while True:
        try: chunk = next(stream_response)
        except StopIteration:
            break
        except requests.exceptions.ConnectionError:
            chunk = next(stream_response)  # 失败了,重试一次?再失败就没办法了。
        chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)
        if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
            # 数据流的第一帧不携带content
            is_head_of_the_stream = False; continue
        if chunk:
            try:
                if is_last_chunk:
                    # 判定为数据流的结束,gpt_replying_buffer也写完了
                    logging.info(f'[response] {result}')
                    break
                result += chunkjson['choices'][0]["delta"]["content"]
                if not console_slience: print(chunkjson['choices'][0]["delta"]["content"], end='')
                if observe_window is not None:
                    # 观测窗,把已经获取的数据显示出去
                    if len(observe_window) >= 1:
                        observe_window[0] += chunkjson['choices'][0]["delta"]["content"]
                    # 看门狗,如果超过期限没有喂狗,则终止
                    if len(observe_window) >= 2:
                        if (time.time()-observe_window[1]) > watch_dog_patience:
                            raise RuntimeError("用户取消了程序。")
            except Exception as e:
                chunk = get_full_error(chunk, stream_response)
                chunk_decoded = chunk.decode()
                error_msg = chunk_decoded
                print(error_msg)
                raise RuntimeError("Json解析不合常规")
    return result


def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
    """
    发送至chatGPT,流式获取输出。
    用于基础的对话功能。
    inputs 是本次问询的输入
    top_p, temperature是chatGPT的内部调优参数
    history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
    chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
    additional_fn代表点击的哪个按钮,按钮见functional.py
    """
    if len(YIMODEL_API_KEY) == 0:
        raise RuntimeError("没有设置YIMODEL_API_KEY选项")
    if inputs == "": inputs = "空空如也的输入栏"
    user_input = inputs
    if additional_fn is not None:
        from core_functional import handle_core_functionality
        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

    raw_input = inputs
    logging.info(f'[raw_input] {raw_input}')
    chatbot.append((inputs, ""))
    yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")  # 刷新界面

    # check mis-behavior
    if is_the_upload_folder(user_input):
        chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
        yield from update_ui(chatbot=chatbot, history=history, msg="正常")  # 刷新界面
        time.sleep(2)

    headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)

    from .bridge_all import model_info
    endpoint = model_info[llm_kwargs['llm_model']]['endpoint']

    history.append(inputs); history.append("")

    retry = 0
    while True:
        try:
            # make a POST request to the API endpoint, stream=True
            response = requests.post(endpoint, headers=headers, proxies=proxies,
                                     json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
        except:
            retry += 1
            chatbot[-1] = (chatbot[-1][0], timeout_bot_msg)
            retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
            yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg)  # 刷新界面
            if retry > MAX_RETRY: raise TimeoutError

    gpt_replying_buffer = ""

    is_head_of_the_stream = True
    if stream:
        stream_response = response.iter_lines()
        while True:
            try:
                chunk = next(stream_response)
            except StopIteration:
                break
            except requests.exceptions.ConnectionError:
                chunk = next(stream_response)  # 失败了,重试一次?再失败就没办法了。

            # 提前读取一些信息 (用于判断异常)
            chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)

            if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
                # 数据流的第一帧不携带content
                is_head_of_the_stream = False; continue

            if chunk:
                try:
                    if is_last_chunk:
                        # 判定为数据流的结束,gpt_replying_buffer也写完了
                        logging.info(f'[response] {gpt_replying_buffer}')
                        break
                    # 处理数据流的主体
                    status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
                    gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
                    # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
                    history[-1] = gpt_replying_buffer
                    chatbot[-1] = (history[-2], history[-1])
                    yield from update_ui(chatbot=chatbot, history=history, msg=status_text)  # 刷新界面
                except Exception as e:
                    yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规")  # 刷新界面
                    chunk = get_full_error(chunk, stream_response)
                    chunk_decoded = chunk.decode()
                    error_msg = chunk_decoded
                    chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
                    yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg)  # 刷新界面
                    print(error_msg)
                    return

def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
    from .bridge_all import model_info
    if "bad_request" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] 已经超过了模型的最大上下文或是模型格式错误,请尝试削减单次输入的文本量。")
    elif "authentication_error" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. 请确保API key有效。")
    elif "not_found" in error_msg:
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] {llm_kwargs['llm_model']} 无效,请确保使用小写的模型名称。")
    elif "rate_limit" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] 遇到了控制请求速率限制,请一分钟后重试。")
    elif "system_busy" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] 系统繁忙,请一分钟后重试。")
    else:
        from toolbox import regular_txt_to_markdown
        tb_str = '```\n' + trimmed_format_exc() + '```'
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
    return chatbot, history

def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
    """
    整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
    """
    api_key = f"Bearer {YIMODEL_API_KEY}"

    headers = {
        "Content-Type": "application/json",
        "Authorization": api_key
    }

    conversation_cnt = len(history) // 2

    messages = [{"role": "system", "content": system_prompt}]
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
            what_i_have_asked["content"] = history[index]
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
            what_gpt_answer["content"] = history[index+1]
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "": continue
                if what_gpt_answer["content"] == timeout_bot_msg: continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['content'] = what_gpt_answer['content']

    what_i_ask_now = {}
    what_i_ask_now["role"] = "user"
    what_i_ask_now["content"] = inputs
    messages.append(what_i_ask_now)
    model = llm_kwargs['llm_model']
    if llm_kwargs['llm_model'].startswith('one-api-'):
        model = llm_kwargs['llm_model'][len('one-api-'):]
        model, _ = read_one_api_model_name(model)
    tokens = 600 if llm_kwargs['llm_model'] == 'yi-34b-chat-0205' else 4096  # yi-34b-chat-0205只有4k上下文...
    payload = {
        "model": model,
        "messages": messages,
        "temperature": llm_kwargs['temperature'],  # 1.0,
        "stream": stream,
        "max_tokens": tokens
    }
    try:
        print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
    except:
        print('输入中可能存在乱码。')
    return headers, payload
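The observe_window docstring above describes a small cross-thread protocol used throughout these bridges: slot 0 is a buffer the worker keeps overwriting with partial output, slot 1 is a timestamp the UI thread refreshes to "feed the watchdog", and the worker aborts once the feed goes stale. A minimal sketch of both sides (the worker below is hypothetical, not the module's own code):

import time, threading

observe_window = ["", time.time()]   # [0] partial output, [1] last watchdog feed

def worker(patience=5):
    # Simulates a bridge's streaming loop: append output, check the watchdog.
    for token in ["你", "好", "!"]:
        observe_window[0] += token
        if time.time() - observe_window[1] > patience:
            raise RuntimeError("用户取消了程序。")   # same abort used above
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.start()
while t.is_alive():
    observe_window[1] = time.time()   # the UI thread feeds the watchdog
    time.sleep(0.05)
print(observe_window[0])              # -> 你好!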
@@ -1,16 +1,24 @@
 import time
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
-from toolbox import check_packages, report_exception
+import os
+from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
+from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
+from toolbox import ChatBotWithCookies
 
 model_name = '智谱AI大模型'
+zhipuai_default_model = 'glm-4'
 
 def validate_key():
     ZHIPUAI_API_KEY = get_conf("ZHIPUAI_API_KEY")
     if ZHIPUAI_API_KEY == '': return False
     return True
 
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def make_media_input(inputs, image_paths):
+    for image_path in image_paths:
+        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
+    return inputs
+
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     ⭐多线程方法
     函数的说明请见 request_llms/bridge_all.py
@@ -18,32 +26,39 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     watch_dog_patience = 5
     response = ""
 
+    if llm_kwargs["llm_model"] == "zhipuai":
+        llm_kwargs["llm_model"] = zhipuai_default_model
+
     if validate_key() is False:
         raise RuntimeError('请配置ZHIPUAI_API_KEY')
 
-    from .com_zhipuapi import ZhipuRequestInstance
-    sri = ZhipuRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+    # 开始接收回复
+    from .com_zhipuglm import ZhipuChatInit
+    zhipu_bro_init = ZhipuChatInit()
+    for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, sys_prompt):
         if len(observe_window) >= 1:
            observe_window[0] = response
        if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+            if (time.time() - observe_window[1]) > watch_dog_patience:
+                raise RuntimeError("程序终止。")
     return response
 
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+
+def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
     """
     ⭐单线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
-    chatbot.append((inputs, ""))
+    chatbot.append([inputs, ""])
     yield from update_ui(chatbot=chatbot, history=history)
 
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         check_packages(["zhipuai"])
     except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install zhipuai==1.0.7```。",
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
                                          chatbot=chatbot, history=history, delay=0)
         return
 
     if validate_key() is False:
@@ -53,16 +68,30 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     if additional_fn is not None:
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+        chatbot[-1] = [inputs, ""]
 
-    # 开始接收回复
-    from .com_zhipuapi import ZhipuRequestInstance
-    sri = ZhipuRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
-        chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
+    if llm_kwargs["llm_model"] == "zhipuai":
+        llm_kwargs["llm_model"] = zhipuai_default_model
 
-    # 总结输出
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
+    if llm_kwargs["llm_model"] in ["glm-4v"]:
+        have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+        if not have_recent_file:
+            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+            yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+            return
+        if have_recent_file:
+            inputs = make_media_input(inputs, image_paths)
+            chatbot[-1] = [inputs, ""]
+            yield from update_ui(chatbot=chatbot, history=history)
+
+    # 开始接收回复
+    from .com_zhipuglm import ZhipuChatInit
+    zhipu_bro_init = ZhipuChatInit()
+    for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, system_prompt):
+        chatbot[-1] = [inputs, response]
+        yield from update_ui(chatbot=chatbot, history=history)
     history.extend([inputs, response])
+    log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
     yield from update_ui(chatbot=chatbot, history=history)
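The glm-4v branch above hinges on have_any_recent_upload_image_files returning a (found, paths) pair; when images exist they are embedded into the chat display via make_media_input and later base64-encoded for the API (see com_zhipuglm.py below). A hedged sketch of the flow, with a stub standing in for the real toolbox helper:

import os

def make_media_input(inputs, image_paths):
    # Same HTML-embedding trick as the function added in the hunk above.
    for image_path in image_paths:
        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
    return inputs

# Stub for illustration; the real helper inspects the chatbot's upload cookie.
def have_any_recent_upload_image_files_stub():
    return True, ["./demo.jpg"]

have_recent_file, image_paths = have_any_recent_upload_image_files_stub()
if have_recent_file:
    print(make_media_input("描述这张图片", image_paths))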
@@ -7,7 +7,7 @@ import os
 import re
 import requests
 from typing import List, Dict, Tuple
-from toolbox import get_conf, encode_image, get_pictures_list
+from toolbox import get_conf, encode_image, get_pictures_list, to_markdown_tabs
 
 proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")
 
@@ -112,38 +112,12 @@ def html_local_img(__file, layout="left", max_width=None, max_height=None, md=Tr
     return a
 
 
-def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
-    """
-    Args:
-        head: 表头:[]
-        tabs: 表值:[[列1], [列2], [列3], [列4]]
-        alignment: :--- 左对齐, :---: 居中对齐, ---: 右对齐
-        column: True to keep data in columns, False to keep data in rows (default).
-    Returns:
-        A string representation of the markdown table.
-    """
-    if column:
-        transposed_tabs = list(map(list, zip(*tabs)))
-    else:
-        transposed_tabs = tabs
-    # Find the maximum length among the columns
-    max_len = max(len(column) for column in transposed_tabs)
-
-    tab_format = "| %s "
-    tabs_list = "".join([tab_format % i for i in head]) + "|\n"
-    tabs_list += "".join([tab_format % alignment for i in head]) + "|\n"
-
-    for i in range(max_len):
-        row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
-        row_data = file_manifest_filter_html(row_data, filter_=None)
-        tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"
-
-    return tabs_list
-
-
 class GoogleChatInit:
-    def __init__(self):
-        self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"
+    def __init__(self, llm_kwargs):
+        from .bridge_all import model_info
+        endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+        self.url_gemini = endpoint + "/%m:streamGenerateContent?key=%k"
 
     def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
         headers, payload = self.generate_message_payload(
@@ -48,6 +48,10 @@ class QwenRequestInstance():
         for response in responses:
             if response.status_code == HTTPStatus.OK:
                 if response.output.choices[0].finish_reason == 'stop':
+                    try:
+                        self.result_buf += response.output.choices[0].message.content
+                    except:
+                        pass
                     yield self.result_buf
                     break
                 elif response.output.choices[0].finish_reason == 'length':
@@ -65,6 +65,7 @@ class SparkRequestInstance():
         self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
         self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
         self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
+        self.gpt_url_v35 = "wss://spark-api.xf-yun.com/v3.5/chat"
         self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"
 
         self.time_to_yield_event = threading.Event()
@@ -91,6 +92,8 @@ class SparkRequestInstance():
             gpt_url = self.gpt_url_v2
         elif llm_kwargs['llm_model'] == 'sparkv3':
             gpt_url = self.gpt_url_v3
+        elif llm_kwargs['llm_model'] == 'sparkv3.5':
+            gpt_url = self.gpt_url_v35
         else:
             gpt_url = self.gpt_url
         file_manifest = []
@@ -190,6 +193,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest)
         "spark": "general",
         "sparkv2": "generalv2",
         "sparkv3": "generalv3",
+        "sparkv3.5": "generalv3.5",
     }
     domains_select = domains[llm_kwargs['llm_model']]
     if file_manifest: domains_select = 'image'
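Adding a Spark version therefore touches two lookup points: the websocket URL chosen in the elif-chain and the `domains` dict passed in the request parameters. A condensed sketch of the same mapping as a single table (values taken from the hunks above; the consolidated dict is an illustration, not the module's actual structure):

# Illustrative consolidation of the two lookups added above.
SPARK_ENDPOINTS = {
    "spark":     ("ws://spark-api.xf-yun.com/v1.1/chat",  "general"),
    "sparkv2":   ("ws://spark-api.xf-yun.com/v2.1/chat",  "generalv2"),
    "sparkv3":   ("ws://spark-api.xf-yun.com/v3.1/chat",  "generalv3"),
    "sparkv3.5": ("wss://spark-api.xf-yun.com/v3.5/chat", "generalv3.5"),
}

gpt_url, domain = SPARK_ENDPOINTS["sparkv3.5"]
print(gpt_url, domain)  # wss://spark-api.xf-yun.com/v3.5/chat generalv3.5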
request_llms/com_zhipuapi.py (deleted, 70 lines)
@@ -1,70 +0,0 @@
-from toolbox import get_conf
-import threading
-import logging
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error.'
-
-class ZhipuRequestInstance():
-    def __init__(self):
-
-        self.time_to_yield_event = threading.Event()
-        self.time_to_exit_event = threading.Event()
-
-        self.result_buf = ""
-
-    def generate(self, inputs, llm_kwargs, history, system_prompt):
-        # import _thread as thread
-        import zhipuai
-        ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
-        zhipuai.api_key = ZHIPUAI_API_KEY
-        self.result_buf = ""
-        response = zhipuai.model_api.sse_invoke(
-            model=ZHIPUAI_MODEL,
-            prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-            top_p=llm_kwargs['top_p']*0.7,  # 智谱的API抽风,手动*0.7给做个线性变换
-            temperature=llm_kwargs['temperature']*0.95,  # 智谱的API抽风,手动*0.95给做个线性变换
-        )
-        for event in response.events():
-            if event.event == "add":
-                # if self.result_buf == "" and event.data.startswith(" "):
-                #     event.data = event.data.lstrip(" ") # 每次智谱为啥都要带个空格开头呢?
-                self.result_buf += event.data
-                yield self.result_buf
-            elif event.event == "error" or event.event == "interrupted":
-                raise RuntimeError("Unknown error:" + event.data)
-            elif event.event == "finish":
-                yield self.result_buf
-                break
-            else:
-                raise RuntimeError("Unknown error:" + str(event))
-        if self.result_buf == "":
-            yield "智谱没有返回任何数据, 请检查ZHIPUAI_API_KEY和ZHIPUAI_MODEL是否填写正确."
-        logging.info(f'[raw_input] {inputs}')
-        logging.info(f'[response] {self.result_buf}')
-        return self.result_buf
-
-def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
-    conversation_cnt = len(history) // 2
-    messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
-    if conversation_cnt:
-        for index in range(0, 2*conversation_cnt, 2):
-            what_i_have_asked = {}
-            what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
-            what_gpt_answer = {}
-            what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
-            if what_i_have_asked["content"] != "":
-                if what_gpt_answer["content"] == "":
-                    continue
-                if what_gpt_answer["content"] == timeout_bot_msg:
-                    continue
-                messages.append(what_i_have_asked)
-                messages.append(what_gpt_answer)
-            else:
-                messages[-1]['content'] = what_gpt_answer['content']
-    what_i_ask_now = {}
-    what_i_ask_now["role"] = "user"
-    what_i_ask_now["content"] = inputs
-    messages.append(what_i_ask_now)
-    return messages
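This deletion pairs with the new com_zhipuglm.py below: the old zhipuai SDK's `model_api.sse_invoke` event stream gives way to the OpenAI-style `ZhipuAI().chat.completions.create(stream=True)` client. A side-by-side sketch of the two calling conventions, drawn from the code in this changeset (API keys are placeholders; both snippets would hit the network if actually run):

# Old SDK (deleted above): module-level api_key plus an SSE event loop.
import zhipuai
zhipuai.api_key = "YOUR_KEY"                      # placeholder
response = zhipuai.model_api.sse_invoke(
    model="chatglm_turbo",
    prompt=[{"role": "user", "content": "你好"}],
)
for event in response.events():
    print(event.event, event.data)

# New SDK (used by com_zhipuglm.py below): client object, OpenAI-style chunks.
from zhipuai import ZhipuAI
client = ZhipuAI(api_key="YOUR_KEY")              # placeholder
stream = client.chat.completions.create(
    model="glm-4",
    messages=[{"role": "user", "content": "你好"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content, end="")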
129
request_llms/com_zhipuglm.py
普通文件
129
request_llms/com_zhipuglm.py
普通文件
@@ -0,0 +1,129 @@
# encoding: utf-8
# @Time   : 2024/1/22
# @Author : Kilig947 & binary husky
# @Descr  : 兼容最新的智谱Ai
from zhipuai import ZhipuAI
from toolbox import get_conf, encode_image, get_pictures_list
import logging, os


def input_encode_handler(inputs: str, llm_kwargs: dict):
    md_encode = []
    if llm_kwargs["most_recent_uploaded"].get("path"):
        image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
        for md_path in image_paths:
            type_ = os.path.splitext(md_path)[1].replace(".", "")
            type_ = "jpeg" if type_ == "jpg" else type_
            md_encode.append({"data": encode_image(md_path), "type": type_})
    return inputs, md_encode


class ZhipuChatInit:

    def __init__(self):
        ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
        if len(ZHIPUAI_MODEL) > 0:
            logging.error('ZHIPUAI_MODEL 配置项选项已经弃用,请在LLM_MODEL中配置')
        self.zhipu_bro = ZhipuAI(api_key=ZHIPUAI_API_KEY)
        self.model = ''

    def __conversation_user(self, user_input: str, llm_kwargs: dict):
        if self.model not in ["glm-4v"]:
            return {"role": "user", "content": user_input}
        else:
            input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
            what_i_have_asked = {"role": "user", "content": []}
            what_i_have_asked['content'].append({"type": 'text', "text": user_input})
            if encode_img:
                img_d = {"type": "image_url",
                         "image_url": {'url': encode_img}}
                what_i_have_asked['content'].append(img_d)
            return what_i_have_asked

    def __conversation_history(self, history: list, llm_kwargs: dict):
        messages = []
        conversation_cnt = len(history) // 2
        if conversation_cnt:
            for index in range(0, 2 * conversation_cnt, 2):
                what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
                what_gpt_answer = {
                    "role": "assistant",
                    "content": history[index + 1]
                }
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
        return messages

    @staticmethod
    def preprocess_param(param, default=0.95, min_val=0.01, max_val=0.99):
        """预处理参数,保证其在允许范围内,并处理精度问题"""
        try:
            param = float(param)
        except ValueError:
            return default

        if param <= min_val:
            return min_val
        elif param >= max_val:
            return max_val
        else:
            return round(param, 2)  # 可挑选精度,目前是两位小数

    def __conversation_message_payload(self, inputs: str, llm_kwargs: dict, history: list, system_prompt: str):
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        self.model = llm_kwargs['llm_model']
        messages.extend(self.__conversation_history(history, llm_kwargs))  # 处理 history
        if inputs.strip() == "":  # 处理空输入导致报错的问题 https://github.com/binary-husky/gpt_academic/issues/1640 提示 {"error":{"code":"1214","message":"messages[1]:content和tool_calls 字段不能同时为空"}
            inputs = "."  # 空格、换行、空字符串都会报错,所以用最没有意义的一个点代替
        messages.append(self.__conversation_user(inputs, llm_kwargs))  # 处理用户对话
        """
        采样温度,控制输出的随机性,必须为正数
        取值范围是:(0.0, 1.0),不能等于 0,默认值为 0.95,
        值越大,会使输出更随机,更具创造性;
        值越小,输出会更加稳定或确定
        建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数
        """
        temperature = self.preprocess_param(
            param=llm_kwargs.get('temperature', 0.95),
            default=0.95,
            min_val=0.01,
            max_val=0.99
        )
        """
        用温度取样的另一种方法,称为核取样
        取值范围是:(0.0, 1.0) 开区间,
        不能等于 0 或 1,默认值为 0.7
        模型考虑具有 top_p 概率质量 tokens 的结果
        例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens
        建议您根据应用场景调整 top_p 或 temperature 参数,
        但不要同时调整两个参数
        """
        top_p = self.preprocess_param(
            param=llm_kwargs.get('top_p', 0.70),
            default=0.70,
            min_val=0.01,
            max_val=0.99
        )
        response = self.zhipu_bro.chat.completions.create(
            model=self.model, messages=messages, stream=True,
            temperature=temperature,
            top_p=top_p,
            max_tokens=llm_kwargs.get('max_tokens', 1024 * 4),
        )
        return response

    def generate_chat(self, inputs: str, llm_kwargs: dict, history: list, system_prompt: str):
        self.model = llm_kwargs['llm_model']
        response = self.__conversation_message_payload(inputs, llm_kwargs, history, system_prompt)
        bro_results = ''
        for chunk in response:
            bro_results += chunk.choices[0].delta.content
            yield chunk.choices[0].delta.content, bro_results


if __name__ == '__main__':
    zhipu = ZhipuChatInit()
    # generate_chat is a generator; iterate it to actually trigger the request
    for chunk, full_response in zhipu.generate_chat('你好', {'llm_model': 'glm-4'}, [], '你是WPSAi'):
        print(chunk, end='')
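A quick sanity check of the clamping helper above (a minimal sketch; the expected values follow directly from the clamp-and-round logic of `preprocess_param`):

```python
from request_llms.com_zhipuglm import ZhipuChatInit

assert ZhipuChatInit.preprocess_param("not-a-number") == 0.95  # unparsable -> default
assert ZhipuChatInit.preprocess_param(0.0) == 0.01             # clamped up to min_val
assert ZhipuChatInit.preprocess_param(1.7) == 0.99             # clamped down to max_val
assert ZhipuChatInit.preprocess_param(0.456) == 0.46           # rounded to two decimals
```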
@@ -1,6 +1,7 @@
 import time
 import threading
 from toolbox import update_ui, Singleton
+from toolbox import ChatBotWithCookies
 from multiprocessing import Process, Pipe
 from contextlib import redirect_stdout
 from request_llms.queued_pipe import create_queue_pipe
@@ -214,7 +215,7 @@ class LocalLLMHandle(Process):
 def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='classic'):
     load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"

-    def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+    def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=[], console_slience:bool=False):
         """
         refer to request_llms/bridge_all.py
         """
@@ -260,7 +261,8 @@ def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='classic')
         raise RuntimeError("程序终止。")
         return response

-    def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+    def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
+                history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
         """
         refer to request_llms/bridge_all.py
         """
@@ -1,12 +1,14 @@
-https://fastly.jsdelivr.net/gh/binary-husky/gradio-fix@gpt-academic/release/gradio-3.32.7-py3-none-any.whl
+https://public.agent-matrix.com/publish/gradio-3.32.9-py3-none-any.whl
+gradio-client==0.8
 pypdf2==2.12.1
-zhipuai<2
+zhipuai>=2
 tiktoken>=0.3.3
 requests[socks]
-pydantic==1.10.11
+pydantic==2.5.2
 protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
+anthropic>=0.18.1
 python-markdown-math
 pymdown-extensions
 websocket-client
@@ -15,7 +17,7 @@ prompt_toolkit
 latex2mathml
 python-docx
 mdtex2html
-anthropic
+dashscope
 pyautogen
 colorama
 Markdown
@@ -4,62 +4,47 @@ import os
 import math
 from textwrap import dedent
 from functools import lru_cache
-from pymdownx.superfences import fence_div_format, fence_code_format
+from pymdownx.superfences import fence_code_format
 from latex2mathml.converter import convert as tex2mathml
 from shared_utils.config_loader import get_conf as get_conf
+from shared_utils.text_mask import apply_gpt_academic_string_mask
 pj = os.path.join
 default_user_name = 'default_user'

 markdown_extension_configs = {
-    'mdx_math': {
-        'enable_dollar_delimiter': True,
-        'use_gitlab_delimiters': False,
+    "mdx_math": {
+        "enable_dollar_delimiter": True,
+        "use_gitlab_delimiters": False,
     },
 }

 code_highlight_configs = {
     "pymdownx.superfences": {
-        'css_class': 'codehilite',
+        "css_class": "codehilite",
         "custom_fences": [
-            {
-                'name': 'mermaid',
-                'class': 'mermaid',
-                'format': fence_code_format
-            }
-        ]
+            {"name": "mermaid", "class": "mermaid", "format": fence_code_format}
+        ],
     },
     "pymdownx.highlight": {
-        'css_class': 'codehilite',
-        'guess_lang': True,
+        "css_class": "codehilite",
+        "guess_lang": True,
         # 'auto_title': True,
         # 'linenums': True
-    }
+    },
 }

-def text_divide_paragraph(text):
-    """
-    将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。
-    """
-    pre = '<div class="markdown-body">'
-    suf = '</div>'
-    if text.startswith(pre) and text.endswith(suf):
-        return text
-
-    if '```' in text:
-        # careful input
-        return text
-    elif '</div>' in text:
-        # careful input
-        return text
-    else:
-        # whatever input
-        lines = text.split("\n")
-        for i, line in enumerate(lines):
-            lines[i] = lines[i].replace(" ", "&nbsp;")
-        text = "</br>".join(lines)
-        return pre + text + suf
+code_highlight_configs_block_mermaid = {
+    "pymdownx.superfences": {
+        "css_class": "codehilite",
+        # "custom_fences": [
+        #     {"name": "mermaid", "class": "mermaid", "format": fence_code_format}
+        # ],
+    },
+    "pymdownx.highlight": {
+        "css_class": "codehilite",
+        "guess_lang": True,
+        # 'auto_title': True,
+        # 'linenums': True
+    },
+}


 def tex2mathml_catch_exception(content, *args, **kwargs):
     try:
@@ -71,20 +56,20 @@ def tex2mathml_catch_exception(content, *args, **kwargs):

 def replace_math_no_render(match):
     content = match.group(1)
-    if 'mode=display' in match.group(0):
-        content = content.replace('\n', '</br>')
-        return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
+    if "mode=display" in match.group(0):
+        content = content.replace("\n", "</br>")
+        return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
     else:
-        return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
+        return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'


 def replace_math_render(match):
     content = match.group(1)
-    if 'mode=display' in match.group(0):
-        if '\\begin{aligned}' in content:
-            content = content.replace('\\begin{aligned}', '\\begin{array}')
-            content = content.replace('\\end{aligned}', '\\end{array}')
-            content = content.replace('&', ' ')
+    if "mode=display" in match.group(0):
+        if "\\begin{aligned}" in content:
+            content = content.replace("\\begin{aligned}", "\\begin{array}")
+            content = content.replace("\\end{aligned}", "\\end{array}")
+            content = content.replace("&", " ")
         content = tex2mathml_catch_exception(content, display="block")
         return content
     else:
@@ -95,9 +80,11 @@ def markdown_bug_hunt(content):
     """
     解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
     """
-    content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
-                              '<script type="math/tex; mode=display">')
-    content = content.replace('</script>\n</script>', '</script>')
+    content = content.replace(
+        '<script type="math/tex">\n<script type="math/tex; mode=display">',
+        '<script type="math/tex; mode=display">',
+    )
+    content = content.replace("</script>\n</script>", "</script>")
     return content
@@ -105,25 +92,29 @@ def is_equation(txt):
     """
     判定是否为公式 | 测试1 写出洛伦兹定律,使用tex格式公式 测试2 给出柯西不等式,使用latex格式 测试3 写出麦克斯韦方程组
     """
-    if '```' in txt and '```reference' not in txt: return False
-    if '$' not in txt and '\\[' not in txt: return False
+    if "```" in txt and "```reference" not in txt:
+        return False
+    if "$" not in txt and "\\[" not in txt:
+        return False
     mathpatterns = {
-        r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False},  # $...$
-        r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True},  # $$...$$
-        r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False},  # \[...\]
+        r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False},  # $...$
+        r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True},  # $$...$$
+        r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False},  # \[...\]
         # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False},  # \(...\)
         # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True},  # \begin...\end
         # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False},  # $`...`$
     }
     matches = []
     for pattern, property in mathpatterns.items():
-        flags = re.ASCII | re.DOTALL if property['allow_multi_lines'] else re.ASCII
+        flags = re.ASCII | re.DOTALL if property["allow_multi_lines"] else re.ASCII
         matches.extend(re.findall(pattern, txt, flags))
-    if len(matches) == 0: return False
+    if len(matches) == 0:
+        return False
     contain_any_eq = False
-    illegal_pattern = re.compile(r'[^\x00-\x7F]|echo')
+    illegal_pattern = re.compile(r"[^\x00-\x7F]|echo")
     for match in matches:
-        if len(match) != 3: return False
+        if len(match) != 3:
+            return False
         eq_canidate = match[1]
         if illegal_pattern.search(eq_canidate):
             return False
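To make the detection above concrete, here is a minimal sketch using the first active pattern (the expected results follow from the regex alone; `is_equation` additionally applies the non-ASCII `illegal_pattern` filter and the code-fence guard):

```python
import re

# Inline $...$ math, as in the first mathpatterns entry above.
inline = re.compile(r"(?<!\\|\$)(\$)([^\$]+)(\$)", re.ASCII)

print(bool(inline.search(r"Lorentz force: $F = q(E + v \times B)$")))  # True
print(bool(inline.search("price is $5 and $")))   # True: the regex alone cannot tell money
                                                  # from math, hence the extra filters
print(bool(inline.search("no math here")))        # False
```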
@@ -134,27 +125,28 @@ def is_equation(txt):

 def fix_markdown_indent(txt):
     # fix markdown indent
-    if (' - ' not in txt) or ('. ' not in txt):
+    if (" - " not in txt) or (". " not in txt):
         # do not need to fix, fast escape
         return txt
     # walk through the lines and fix non-standard indentation
     lines = txt.split("\n")
-    pattern = re.compile(r'^\s+-')
+    pattern = re.compile(r"^\s+-")
     activated = False
     for i, line in enumerate(lines):
-        if line.startswith('- ') or line.startswith('1. '):
+        if line.startswith("- ") or line.startswith("1. "):
             activated = True
         if activated and pattern.match(line):
             stripped_string = line.lstrip()
             num_spaces = len(line) - len(stripped_string)
             if (num_spaces % 4) == 3:
                 num_spaces_should_be = math.ceil(num_spaces / 4) * 4
-                lines[i] = ' ' * num_spaces_should_be + stripped_string
-    return '\n'.join(lines)
+                lines[i] = " " * num_spaces_should_be + stripped_string
+    return "\n".join(lines)


 FENCED_BLOCK_RE = re.compile(
-    dedent(r'''
+    dedent(
+        r"""
         (?P<fence>^[ \t]*(?:~{3,}|`{3,}))[ ]*                      # opening fence
         ((\{(?P<attrs>[^\}\n]*)\})|                                # (optional {attrs} or
         (\.?(?P<lang>[\w#.+-]*)[ ]*)?                              # optional (.)lang
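A minimal sketch of the indent-repair rule above (assuming `fix_markdown_indent` is in scope): an indent of 3 (mod 4) spaces under an active list is promoted to the next multiple of four. Note the fast-escape guard requires both `". "` and `" - "` to be present.

```python
txt = "1. item\n   - nested"        # 3-space indent; '. ' and ' - ' guards both hit
print(fix_markdown_indent(txt))     # "1. item\n    - nested" (indent promoted to 4)
```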
@@ -162,16 +154,17 @@ FENCED_BLOCK_RE = re.compile(
         \n                                                         # newline (end of opening fence)
         (?P<code>.*?)(?<=\n)                                       # the code block
         (?P=fence)[ ]*$                                            # closing fence
-    '''),
-    re.MULTILINE | re.DOTALL | re.VERBOSE
+        """
+    ),
+    re.MULTILINE | re.DOTALL | re.VERBOSE,
 )


 def get_line_range(re_match_obj, txt):
     start_pos, end_pos = re_match_obj.regs[0]
-    num_newlines_before = txt[:start_pos+1].count('\n')
+    num_newlines_before = txt[: start_pos + 1].count("\n")
     line_start = num_newlines_before
-    line_end = num_newlines_before + txt[start_pos:end_pos].count('\n')+1
+    line_end = num_newlines_before + txt[start_pos:end_pos].count("\n") + 1
     return line_start, line_end
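A quick sketch of how the fence regex and `get_line_range` cooperate (assuming both names above are in scope; note that the hunks above skip a few context lines of the pattern, so it is not reproduced standalone here):

```python
txt = "intro\n```python\nprint('hi')\n```\ntail"
m = FENCED_BLOCK_RE.search(txt)
print(m.group("lang"))         # 'python'
print(get_line_range(m, txt))  # (1, 4): block spans lines 1..3, line_end is one past the last
```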
@@ -181,12 +174,14 @@ def fix_code_segment_indent(txt):
     txt_tmp = txt
     while True:
         re_match_obj = FENCED_BLOCK_RE.search(txt_tmp)
-        if not re_match_obj: break
-        if len(lines) == 0: lines = txt.split("\n")
+        if not re_match_obj:
+            break
+        if len(lines) == 0:
+            lines = txt.split("\n")

         # 清空 txt_tmp 对应的位置方便下次搜索
         start_pos, end_pos = re_match_obj.regs[0]
-        txt_tmp = txt_tmp[:start_pos] + ' '*(end_pos-start_pos) + txt_tmp[end_pos:]
+        txt_tmp = txt_tmp[:start_pos] + " " * (end_pos - start_pos) + txt_tmp[end_pos:]
         line_start, line_end = get_line_range(re_match_obj, txt)

         # 获取公共缩进
@@ -202,26 +197,26 @@ def fix_code_segment_indent(txt):
             num_spaces_should_be = math.ceil(shared_indent_cnt / 4) * 4
             for i in range(line_start, line_end):
                 add_n = num_spaces_should_be - shared_indent_cnt
-                lines[i] = ' ' * add_n + lines[i]
+                lines[i] = " " * add_n + lines[i]
             if not change_any:  # 遇到第一个
                 change_any = True

     if change_any:
-        return '\n'.join(lines)
+        return "\n".join(lines)
     else:
         return txt


 @lru_cache(maxsize=128)  # 使用 lru缓存 加快转换速度
 def markdown_convertion(txt):
     """
     将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
     """
     pre = '<div class="markdown-body">'
-    suf = '</div>'
+    suf = "</div>"
     if txt.startswith(pre) and txt.endswith(suf):
         # print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
         return txt  # 已经被转化过,不需要再次转化

     find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
@@ -229,18 +224,47 @@ def markdown_convertion(txt):
     # txt = fix_code_segment_indent(txt)
     if is_equation(txt):  # 有$标识的公式符号,且没有代码段```的标识
         # convert everything to html format
-        split = markdown.markdown(text='---')
-        convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'pymdownx.superfences', 'pymdownx.highlight'],
-                                            extension_configs={**markdown_extension_configs, **code_highlight_configs})
+        split = markdown.markdown(text="---")
+        convert_stage_1 = markdown.markdown(
+            text=txt,
+            extensions=[
+                "sane_lists",
+                "tables",
+                "mdx_math",
+                "pymdownx.superfences",
+                "pymdownx.highlight",
+            ],
+            extension_configs={**markdown_extension_configs, **code_highlight_configs},
+        )
         convert_stage_1 = markdown_bug_hunt(convert_stage_1)
         # 1. convert to easy-to-copy tex (do not render math)
-        convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
+        convert_stage_2_1, n = re.subn(
+            find_equation_pattern,
+            replace_math_no_render,
+            convert_stage_1,
+            flags=re.DOTALL,
+        )
         # 2. convert to rendered equation
-        convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
+        convert_stage_2_2, n = re.subn(
+            find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
+        )
         # cat them together
-        return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
+        return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
     else:
-        return pre + markdown.markdown(txt, extensions=['sane_lists', 'tables', 'pymdownx.superfences', 'pymdownx.highlight'], extension_configs=code_highlight_configs) + suf
+        return (
+            pre
+            + markdown.markdown(
+                txt,
+                extensions=[
+                    "sane_lists",
+                    "tables",
+                    "pymdownx.superfences",
+                    "pymdownx.highlight",
+                ],
+                extension_configs=code_highlight_configs,
+            )
+            + suf
+        )


 def close_up_code_segment_during_stream(gpt_reply):
@@ -254,20 +278,67 @@ def close_up_code_segment_during_stream(gpt_reply):
         str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。

     """
-    if '```' not in gpt_reply:
+    if "```" not in gpt_reply:
         return gpt_reply
-    if gpt_reply.endswith('```'):
+    if gpt_reply.endswith("```"):
         return gpt_reply

     # 排除了以上两个情况,我们
-    segments = gpt_reply.split('```')
+    segments = gpt_reply.split("```")
     n_mark = len(segments) - 1
     if n_mark % 2 == 1:
-        return gpt_reply + '\n```'  # 输出代码片段中!
+        return gpt_reply + "\n```"  # 输出代码片段中!
     else:
         return gpt_reply


+def special_render_issues_for_mermaid(text):
+    # 用不太优雅的方式处理一个core_functional.py中出现的mermaid渲染特例:
+    # 我不希望"总结绘制脑图"prompt中的mermaid渲染出来
+    @lru_cache(maxsize=1)
+    def get_special_case():
+        from core_functional import get_core_functions
+        special_case = get_core_functions()["总结绘制脑图"]["Suffix"]
+        return special_case
+    if text.endswith(get_special_case()):
+        text = text.replace("```mermaid", "```")
+    return text
+
+
+def compat_non_markdown_input(text):
+    """
+    改善非markdown输入的显示效果,例如将空格转换为&nbsp;,将换行符转换为</br>等。
+    """
+    if "```" in text:
+        # careful input:markdown输入
+        text = special_render_issues_for_mermaid(text)  # 处理特殊的渲染问题
+        return text
+    elif "</div>" in text:
+        # careful input:html输入
+        return text
+    else:
+        # whatever input:非markdown输入
+        lines = text.split("\n")
+        for i, line in enumerate(lines):
+            lines[i] = lines[i].replace(" ", "&nbsp;")  # 空格转换为&nbsp;
+        text = "</br>".join(lines)  # 换行符转换为</br>
+        return text
+
+
+@lru_cache(maxsize=128)  # 使用lru缓存
+def simple_markdown_convertion(text):
+    pre = '<div class="markdown-body">'
+    suf = "</div>"
+    if text.startswith(pre) and text.endswith(suf):
+        return text  # 已经被转化过,不需要再次转化
+    text = compat_non_markdown_input(text)  # 兼容非markdown输入
+    text = markdown.markdown(
+        text,
+        extensions=["pymdownx.superfences", "tables", "pymdownx.highlight"],
+        extension_configs=code_highlight_configs,
+    )
+    return pre + text + suf


 def format_io(self, y):
     """
     将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。
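A quick sketch of the stream-closing behavior of `close_up_code_segment_during_stream` defined in the hunk above (assuming it is in scope): an odd number of ``` marks means a fence is still open, so one is appended.

```python
partial = "text ```python\nprint(1)"          # one fence mark -> block still open
print(close_up_code_segment_during_stream(partial))
# text ```python
# print(1)
# ```
```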
@@ -275,13 +346,16 @@ def format_io(self, y):
     if y is None or y == []:
         return []
     i_ask, gpt_reply = y[-1]
-    # 输入部分太自由,预处理一波
-    if i_ask is not None: i_ask = text_divide_paragraph(i_ask)
+    i_ask = apply_gpt_academic_string_mask(i_ask, mode="show_render")
+    gpt_reply = apply_gpt_academic_string_mask(gpt_reply, mode="show_render")
     # 当代码输出半截的时候,试着补上后个```
-    if gpt_reply is not None: gpt_reply = close_up_code_segment_during_stream(gpt_reply)
-    # process
+    if gpt_reply is not None:
+        gpt_reply = close_up_code_segment_during_stream(gpt_reply)
+    # 处理提问与输出
     y[-1] = (
-        None if i_ask is None else markdown.markdown(i_ask, extensions=['pymdownx.superfences', 'tables', 'pymdownx.highlight'], extension_configs=code_highlight_configs),
-        None if gpt_reply is None else markdown_convertion(gpt_reply)
+        # 输入部分
+        None if i_ask is None else simple_markdown_convertion(i_ask),
+        # 输出部分
+        None if gpt_reply is None else markdown_convertion(gpt_reply),
     )
     return y
@@ -52,7 +52,7 @@ def get_plugin_default_kwargs():
     }
     chatbot = ChatBotWithCookies(llm_kwargs)

-    # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
+    # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request
     DEFAULT_FN_GROUPS_kwargs = {
         "main_input": "./README.md",
         "llm_kwargs": llm_kwargs,
@@ -60,7 +60,7 @@ def get_plugin_default_kwargs():
         "chatbot_with_cookie": chatbot,
         "history": [],
         "system_prompt": "You are a good AI.",
-        "web_port": None,
+        "user_request": None,
     }
     return DEFAULT_FN_GROUPS_kwargs
shared_utils/cookie_manager.py (normal file, 61 lines)
@@ -0,0 +1,61 @@
from typing import Callable

def load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns) -> Callable:
    def load_web_cookie_cache(persistent_cookie_, cookies_):
        import gradio as gr
        from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid

        ret = {}
        for k in customize_btns:
            ret.update({customize_btns[k]: gr.update(visible=False, value="")})

        try:
            persistent_cookie_ = from_cookie_str(persistent_cookie_)  # persistent cookie to dict
        except Exception:
            return ret

        customize_fn_overwrite_ = persistent_cookie_.get("custom_bnt", {})
        cookies_['customize_fn_overwrite'] = customize_fn_overwrite_
        ret.update({cookies: cookies_})

        for k, v in persistent_cookie_["custom_bnt"].items():
            if v['Title'] == "":
                continue
            if k in customize_btns:
                ret.update({customize_btns[k]: gr.update(visible=True, value=v['Title'])})
            else:
                ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
        return ret
    return load_web_cookie_cache


def assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache) -> Callable:
    def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix, clean_up=False):
        import gradio as gr
        from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
        ret = {}
        # 读取之前的自定义按钮
        customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
        # 更新新的自定义按钮
        customize_fn_overwrite_.update({
            basic_btn_dropdown_: {
                "Title": basic_fn_title,
                "Prefix": basic_fn_prefix,
                "Suffix": basic_fn_suffix,
            }
        })
        if clean_up:
            customize_fn_overwrite_ = {}
        cookies_.update(customize_fn_overwrite_)  # 更新cookie
        visible = (not clean_up) and (basic_fn_title != "")
        if basic_btn_dropdown_ in customize_btns:
            # 是自定义按钮,不是预定义按钮
            ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
        else:
            # 是预定义按钮
            ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
        ret.update({cookies: cookies_})
        try:
            persistent_cookie_ = from_cookie_str(persistent_cookie_)  # persistent cookie to dict
        except Exception:
            persistent_cookie_ = {}
        persistent_cookie_["custom_bnt"] = customize_fn_overwrite_  # dict update new value
        persistent_cookie_ = to_cookie_str(persistent_cookie_)  # dict to persistent cookie
        ret.update({web_cookie_cache: persistent_cookie_})  # write persistent cookie
        return ret
    return assign_btn
shared_utils/fastapi_server.py (normal file, 211 lines)
@@ -0,0 +1,211 @@
"""
Tests:

- custom_path false / no user auth:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
    -- block __pycache__ access(yes)
    -- rel (yes)
    -- abs (yes)
    -- block user access(fail) http://localhost:45013/file=gpt_log/admin/chat_secrets.log
        -- fix(commit f6bf05048c08f5cd84593f7fdc01e64dec1f584a)-> block successful

- custom_path yes("/cc/gptac") / no user auth:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
    -- block __pycache__ access(yes)
    -- block user access(yes)

- custom_path yes("/cc/gptac/") / no user auth:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
    -- block user access(yes)

- custom_path yes("/cc/gptac/") / + user auth:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
    -- block user access(yes)
    -- block user-wise access (yes)

- custom_path no + user auth:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
    -- block user access(yes)
    -- block user-wise access (yes)

queue concurrent effectiveness:
    -- upload file(yes)
    -- download file(yes)
    -- websocket(yes)
"""

import os, requests, threading, time
import uvicorn

def _authorize_user(path_or_url, request, gradio_app):
    from toolbox import get_conf, default_user_name
    PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
    sensitive_path = None
    path_or_url = os.path.relpath(path_or_url)
    if path_or_url.startswith(PATH_LOGGING):
        sensitive_path = PATH_LOGGING
    if path_or_url.startswith(PATH_PRIVATE_UPLOAD):
        sensitive_path = PATH_PRIVATE_UPLOAD
    if sensitive_path:
        token = request.cookies.get("access-token") or request.cookies.get("access-token-unsecure")
        user = gradio_app.tokens.get(token)  # get user
        allowed_users = [user, 'autogen', default_user_name]  # three user paths that can be accessed
        for user_allowed in allowed_users:
            # exact match
            if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
                return True
        return False  # "越权访问!"
    return True


class Server(uvicorn.Server):
    # A server that runs in a separate thread
    def install_signal_handlers(self):
        pass

    def run_in_thread(self):
        self.thread = threading.Thread(target=self.run, daemon=True)
        self.thread.start()
        while not self.started:
            time.sleep(1e-3)

    def close(self):
        self.should_exit = True
        self.thread.join()


def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE):
    import fastapi
    import gradio as gr
    from fastapi import FastAPI
    from gradio.routes import App
    from toolbox import get_conf
    CUSTOM_PATH, PATH_LOGGING = get_conf('CUSTOM_PATH', 'PATH_LOGGING')

    # --- --- configure gradio app block --- ---
    app_block: gr.Blocks
    app_block.ssl_verify = False
    app_block.auth_message = '请登录'
    app_block.favicon_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "docs/logo.png")
    app_block.auth = AUTHENTICATION if len(AUTHENTICATION) != 0 else None
    app_block.blocked_paths = ["config.py", "__pycache__", "config_private.py", "docker-compose.yml", "Dockerfile", f"{PATH_LOGGING}/admin"]
    app_block.dev_mode = False
    app_block.config = app_block.get_config_file()
    app_block.enable_queue = True
    app_block.queue(concurrency_count=CONCURRENT_COUNT)
    app_block.validate_queue_settings()
    app_block.show_api = False
    app_block.config = app_block.get_config_file()
    max_threads = 40
    app_block.max_threads = max(
        app_block._queue.max_thread_count if app_block.enable_queue else 0, max_threads
    )
    app_block.is_colab = False
    app_block.is_kaggle = False
    app_block.is_sagemaker = False

    gradio_app = App.create_app(app_block)

    # --- --- replace gradio endpoint to forbid access to sensitive files --- ---
    if len(AUTHENTICATION) > 0:
        dependencies = []
        endpoint = None
        for route in list(gradio_app.router.routes):
            if route.path == "/file/{path:path}":
                gradio_app.router.routes.remove(route)
            if route.path == "/file={path_or_url:path}":
                dependencies = route.dependencies
                endpoint = route.endpoint
                gradio_app.router.routes.remove(route)
        @gradio_app.get("/file/{path:path}", dependencies=dependencies)
        @gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
        @gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
        async def file(path_or_url: str, request: fastapi.Request):
            if len(AUTHENTICATION) > 0:
                if not _authorize_user(path_or_url, request, gradio_app):
                    return "越权访问!"
            return await endpoint(path_or_url, request)

    # --- --- app_lifespan --- ---
    from contextlib import asynccontextmanager
    @asynccontextmanager
    async def app_lifespan(app):
        async def startup_gradio_app():
            if gradio_app.get_blocks().enable_queue:
                gradio_app.get_blocks().startup_events()
        async def shutdown_gradio_app():
            pass
        await startup_gradio_app()  # startup logic here
        yield  # The application will serve requests after this point
        await shutdown_gradio_app()  # cleanup/shutdown logic here

    # --- --- FastAPI --- ---
    fastapi_app = FastAPI(lifespan=app_lifespan)
    fastapi_app.mount(CUSTOM_PATH, gradio_app)

    # --- --- favicon --- ---
    if CUSTOM_PATH != '/':
        from fastapi.responses import FileResponse
        @fastapi_app.get("/favicon.ico")
        async def favicon():
            return FileResponse(app_block.favicon_path)

    # --- --- uvicorn.Config --- ---
    ssl_keyfile = None if SSL_KEYFILE == "" else SSL_KEYFILE
    ssl_certfile = None if SSL_CERTFILE == "" else SSL_CERTFILE
    server_name = "0.0.0.0"
    config = uvicorn.Config(
        fastapi_app,
        host=server_name,
        port=PORT,
        reload=False,
        log_level="warning",
        ssl_keyfile=ssl_keyfile,
        ssl_certfile=ssl_certfile,
    )
    server = Server(config)
    url_host_name = "localhost" if server_name == "0.0.0.0" else server_name
    if ssl_keyfile is not None:
        if ssl_certfile is None:
            raise ValueError("ssl_certfile must be provided if ssl_keyfile is provided.")
        path_to_local_server = f"https://{url_host_name}:{PORT}/"
    else:
        path_to_local_server = f"http://{url_host_name}:{PORT}/"
    if CUSTOM_PATH != '/':
        path_to_local_server += CUSTOM_PATH.lstrip('/').rstrip('/') + '/'

    # --- --- begin --- ---
    server.run_in_thread()

    # --- --- after server launch --- ---
    app_block.server = server
    app_block.server_name = server_name
    app_block.local_url = path_to_local_server
    app_block.protocol = (
        "https"
        if app_block.local_url.startswith("https") or app_block.is_colab
        else "http"
    )

    if app_block.enable_queue:
        app_block._queue.set_url(path_to_local_server)

    forbid_proxies = {
        "http": "",
        "https": "",
    }
    requests.get(f"{app_block.local_url}startup-events", verify=app_block.ssl_verify, proxies=forbid_proxies)
    app_block.is_running = True
    app_block.block_thread()
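The per-user rule in `_authorize_user` reduces to an exact match on the first two path components. A minimal sketch of that comparison (hypothetical paths; the real function also resolves the user from the access-token cookie):

```python
import os

sensitive_path = "private_upload"  # e.g. PATH_PRIVATE_UPLOAD
requested = os.path.join("private_upload", "alice", "2024", "doc.pdf")
user_allowed = "alice"

first_two = f"{os.sep}".join(requested.split(os.sep)[:2])
print(first_two == os.path.join(sensitive_path, user_allowed))  # True -> access granted
```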
shared_utils/handle_upload.py (normal file, 137 lines)
@@ -0,0 +1,137 @@
import importlib
import time
import inspect
import re
import os
import base64
import gradio
import shutil
import glob
from shared_utils.config_loader import get_conf


def html_local_file(file):
    base_path = os.path.dirname(__file__)  # 项目目录
    if os.path.exists(str(file)):
        file = f'file={file.replace(base_path, ".")}'
    return file


def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
    style = ""
    if max_width is not None:
        style += f"max-width: {max_width};"
    if max_height is not None:
        style += f"max-height: {max_height};"
    __file = html_local_file(__file)
    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
    if md:
        a = f"![{__file}]({__file})"
    return a


def file_manifest_filter_type(file_list, filter_: list = None):
    new_list = []
    if not filter_:
        filter_ = ["png", "jpg", "jpeg"]
    for file in file_list:
        if str(os.path.basename(file)).split(".")[-1] in filter_:
            new_list.append(html_local_img(file, md=False))
        else:
            new_list.append(file)
    return new_list


def zip_extract_member_new(self, member, targetpath, pwd):
    # 修复中文乱码的问题
    """Extract the ZipInfo object 'member' to a physical
    file on the path targetpath.
    """
    import zipfile
    if not isinstance(member, zipfile.ZipInfo):
        member = self.getinfo(member)

    # build the destination pathname, replacing
    # forward slashes to platform specific separators.
    arcname = member.filename.replace('/', os.path.sep)
    arcname = arcname.encode('cp437', errors='replace').decode('gbk', errors='replace')

    if os.path.altsep:
        arcname = arcname.replace(os.path.altsep, os.path.sep)
    # interpret absolute pathname as relative, remove drive letter or
    # UNC path, redundant separators, "." and ".." components.
    arcname = os.path.splitdrive(arcname)[1]
    invalid_path_parts = ('', os.path.curdir, os.path.pardir)
    arcname = os.path.sep.join(x for x in arcname.split(os.path.sep)
                               if x not in invalid_path_parts)
    if os.path.sep == '\\':
        # filter illegal characters on Windows
        arcname = self._sanitize_windows_name(arcname, os.path.sep)

    targetpath = os.path.join(targetpath, arcname)
    targetpath = os.path.normpath(targetpath)

    # Create all upper directories if necessary.
    upperdirs = os.path.dirname(targetpath)
    if upperdirs and not os.path.exists(upperdirs):
        os.makedirs(upperdirs)

    if member.is_dir():
        if not os.path.isdir(targetpath):
            os.mkdir(targetpath)
        return targetpath

    with self.open(member, pwd=pwd) as source, \
            open(targetpath, "wb") as target:
        shutil.copyfileobj(source, target)

    return targetpath


def extract_archive(file_path, dest_dir):
    import zipfile
    import tarfile
    import os

    # Get the file extension of the input file
    file_extension = os.path.splitext(file_path)[1]

    # Extract the archive based on its extension
    if file_extension == ".zip":
        with zipfile.ZipFile(file_path, "r") as zipobj:
            zipobj._extract_member = lambda a, b, c: zip_extract_member_new(zipobj, a, b, c)  # 修复中文乱码的问题
            zipobj.extractall(path=dest_dir)
            print("Successfully extracted zip archive to {}".format(dest_dir))

    elif file_extension in [".tar", ".gz", ".bz2"]:
        with tarfile.open(file_path, "r:*") as tarobj:
            tarobj.extractall(path=dest_dir)
            print("Successfully extracted tar archive to {}".format(dest_dir))

    # 第三方库,需要预先pip install rarfile
    # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以
    elif file_extension == ".rar":
        try:
            import rarfile

            with rarfile.RarFile(file_path) as rf:
                rf.extractall(path=dest_dir)
            print("Successfully extracted rar archive to {}".format(dest_dir))
        except:
            print("Rar format requires additional dependencies to install")
            return "\n\n解压失败! 需要安装pip install rarfile来解压rar文件。建议:使用zip压缩格式。"

    # 第三方库,需要预先pip install py7zr
    elif file_extension == ".7z":
        try:
            import py7zr

            with py7zr.SevenZipFile(file_path, mode="r") as f:
                f.extractall(path=dest_dir)
            print("Successfully extracted 7z archive to {}".format(dest_dir))
        except:
            print("7z format requires additional dependencies to install")
            return "\n\n解压失败! 需要安装pip install py7zr来解压7z文件"
    else:
        return ""
    return ""
|
|||||||
if len(CUSTOM_API_KEY_PATTERN) != 0:
|
if len(CUSTOM_API_KEY_PATTERN) != 0:
|
||||||
API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
|
API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
|
||||||
else:
|
else:
|
||||||
API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
|
API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$|sess-[a-zA-Z0-9]{40}$", key)
|
||||||
return bool(API_MATCH_ORIGINAL)
|
return bool(API_MATCH_ORIGINAL)
|
||||||
|
|
||||||
|
|
||||||
@@ -28,6 +28,11 @@ def is_api2d_key(key):
|
|||||||
return bool(API_MATCH_API2D)
|
return bool(API_MATCH_API2D)
|
||||||
|
|
||||||
|
|
||||||
|
def is_cohere_api_key(key):
|
||||||
|
API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{40}$", key)
|
||||||
|
return bool(API_MATCH_AZURE)
|
||||||
|
|
||||||
|
|
||||||
def is_any_api_key(key):
|
def is_any_api_key(key):
|
||||||
if ',' in key:
|
if ',' in key:
|
||||||
keys = key.split(',')
|
keys = key.split(',')
|
||||||
@@ -35,7 +40,7 @@ def is_any_api_key(key):
|
|||||||
if is_any_api_key(k): return True
|
if is_any_api_key(k): return True
|
||||||
return False
|
return False
|
||||||
else:
|
else:
|
||||||
return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key)
|
return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key) or is_cohere_api_key(key)
|
||||||
|
|
||||||
|
|
||||||
def what_keys(keys):
|
def what_keys(keys):
|
||||||
@@ -62,7 +67,7 @@ def select_api_key(keys, llm_model):
|
|||||||
avail_key_list = []
|
avail_key_list = []
|
||||||
key_list = keys.split(',')
|
key_list = keys.split(',')
|
||||||
|
|
||||||
if llm_model.startswith('gpt-'):
|
if llm_model.startswith('gpt-') or llm_model.startswith('one-api-'):
|
||||||
for k in key_list:
|
for k in key_list:
|
||||||
if is_openai_api_key(k): avail_key_list.append(k)
|
if is_openai_api_key(k): avail_key_list.append(k)
|
||||||
|
|
||||||
@@ -74,8 +79,12 @@ def select_api_key(keys, llm_model):
|
|||||||
for k in key_list:
|
for k in key_list:
|
||||||
if is_azure_api_key(k): avail_key_list.append(k)
|
if is_azure_api_key(k): avail_key_list.append(k)
|
||||||
|
|
||||||
|
if llm_model.startswith('cohere-'):
|
||||||
|
for k in key_list:
|
||||||
|
if is_cohere_api_key(k): avail_key_list.append(k)
|
||||||
|
|
||||||
if len(avail_key_list) == 0:
|
if len(avail_key_list) == 0:
|
||||||
raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(右下角更换模型菜单中可切换openai,azure,claude,api2d等请求源)。")
|
raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(左上角更换模型菜单中可切换openai,azure,claude,cohere等请求源)。")
|
||||||
|
|
||||||
api_key = random.choice(avail_key_list) # 随机负载均衡
|
api_key = random.choice(avail_key_list) # 随机负载均衡
|
||||||
return api_key
|
return api_key
|
||||||
|
|||||||
shared_utils/map_names.py (normal file, 34 lines)
@@ -0,0 +1,34 @@
import re

mapping_dic = {
    # "qianfan": "qianfan(文心一言大模型)",
    # "zhipuai": "zhipuai(智谱GLM4超级模型🔥)",
    # "gpt-4-1106-preview": "gpt-4-1106-preview(新调优版本GPT-4🔥)",
    # "gpt-4-vision-preview": "gpt-4-vision-preview(识图模型GPT-4V)",
}

rev_mapping_dic = {}
for k, v in mapping_dic.items():
    rev_mapping_dic[v] = k

def map_model_to_friendly_names(m):
    if m in mapping_dic:
        return mapping_dic[m]
    return m

def map_friendly_names_to_model(m):
    if m in rev_mapping_dic:
        return rev_mapping_dic[m]
    return m

def read_one_api_model_name(model: str):
    """Return the real model name and its max_token setting."""
    max_token_pattern = r"\(max_token=(\d+)\)"
    match = re.search(max_token_pattern, model)
    if match:
        max_token_tmp = match.group(1)  # 获取 max_token 的值
        max_token_tmp = int(max_token_tmp)
        model = re.sub(max_token_pattern, "", model)  # 从原字符串中删除 "(max_token=...)"
    else:
        max_token_tmp = 4096
    return model, max_token_tmp
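A quick illustration of the `(max_token=...)` suffix convention parsed above (the model names are hypothetical; the function is assumed importable from shared_utils.map_names as listed):

```python
from shared_utils.map_names import read_one_api_model_name

print(read_one_api_model_name("one-api-claude-3-sonnet(max_token=200000)"))
# ('one-api-claude-3-sonnet', 200000)
print(read_one_api_model_name("one-api-mixtral-8x7b"))
# ('one-api-mixtral-8x7b', 4096)  # default when no suffix is given
```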
shared_utils/text_mask.py (normal file, 107 lines)
@@ -0,0 +1,107 @@
import re
from functools import lru_cache

# 这段代码是使用Python编程语言中的re模块,即正则表达式库,来定义了一个正则表达式模式。
# 这个模式被编译成一个正则表达式对象,存储在名为const_extract_re的变量中,以便于后续快速的匹配和查找操作。
# 这里解释一下正则表达式中的几个特殊字符:
# - . 表示任意单一字符。
# - * 表示前一个字符可以出现0次或多次。
# - ? 在这里用作非贪婪匹配,也就是说它会匹配尽可能少的字符。在(.*?)中,它确保我们匹配的任意文本是尽可能短的,也就是说,它会在</show_llm>和</show_render>标签之前停止匹配。
# - () 括号在正则表达式中表示捕获组。
# - 在这个例子中,(.*?)表示捕获任意长度的文本,直到遇到括号外部最近的限定符,即</show_llm>和</show_render>。

# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=/1=-=-=-=-=-=-=-=-=-=-=-=-=-=/2-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
const_extract_re = re.compile(
    r"<gpt_academic_string_mask><show_llm>(.*?)</show_llm><show_render>(.*?)</show_render></gpt_academic_string_mask>"
)
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=/1=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-/2-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
const_extract_langbased_re = re.compile(
    r"<gpt_academic_string_mask><lang_english>(.*?)</lang_english><lang_chinese>(.*?)</lang_chinese></gpt_academic_string_mask>",
    flags=re.DOTALL,
)

@lru_cache(maxsize=128)
def apply_gpt_academic_string_mask(string, mode="show_all"):
    """
    当字符串中有掩码tag时(<gpt_academic_string_mask><show_...>),根据字符串要给谁看(大模型,还是web渲染),对字符串进行处理,返回处理后的字符串
    示意图:https://mermaid.live/edit#pako:eNqlkUtLw0AUhf9KuOta0iaTplkIPlpduFJwoZEwJGNbzItpita2O6tF8QGKogXFtwu7cSHiq3-mk_oznFR8IYLgrGbuOd9hDrcCpmcR0GDW9ubNPKaBMDauuwI_A9M6YN-3y0bODwxsYos4BdMoBrTg5gwHF-d0mBH6-vqFQe58ed5m9XPW2uteX3Tubrj0ljLYcwxxR3h1zB43WeMs3G19yEM9uapDMe_NG9i2dagKw1Fee4c1D9nGEbtc-5n6HbNtJ8IyHOs8tbs7V2HrlDX2w2Y7XD_5haHEtQiNsOwfMVa_7TzsvrWIuJGo02qTrdwLk9gukQylHv3Afv1ML270s-HZUndrmW1tdA-WfvbM_jMFYuAQ6uCCxVdciTJ1CPLEITpo_GphypeouzXuw6XAmyi7JmgBLZEYlHwLB2S4gHMUO-9DH7tTnvf1CVoFFkBLSOk4QmlRTqpIlaWUHINyNFXjaQWpCYRURUKiWovBYo8X4ymEJFlECQUpqaQkJmuvWygPpg
    """
    if "<gpt_academic_string_mask>" not in string:  # No need to process
        return string

    if mode == "show_all":
        return string
    if mode == "show_llm":
        string = const_extract_re.sub(r"\1", string)
    elif mode == "show_render":
        string = const_extract_re.sub(r"\2", string)
    else:
        raise ValueError("Invalid mode")
    return string


@lru_cache(maxsize=128)
def build_gpt_academic_masked_string(text_show_llm="", text_show_render=""):
    """
    根据字符串要给谁看(大模型,还是web渲染),生成带掩码tag的字符串
    """
    return f"<gpt_academic_string_mask><show_llm>{text_show_llm}</show_llm><show_render>{text_show_render}</show_render></gpt_academic_string_mask>"


@lru_cache(maxsize=128)
def apply_gpt_academic_string_mask_langbased(string, lang_reference):
    """
    当字符串中有掩码tag时(<gpt_academic_string_mask><lang_...>),根据语言,选择提示词,对字符串进行处理,返回处理后的字符串
    例如,如果lang_reference是英文,那么就只显示英文提示词,中文提示词就不显示了

    举例:
        输入1
            string = "注意,lang_reference这段文字是:<gpt_academic_string_mask><lang_english>英语</lang_english><lang_chinese>中文</lang_chinese></gpt_academic_string_mask>"
            lang_reference = "hello world"
        输出1
            "注意,lang_reference这段文字是:英语"

        输入2
            string = "注意,lang_reference这段文字是中文"  # 注意这里没有掩码tag,所以不会被处理
            lang_reference = "hello world"
        输出2
            "注意,lang_reference这段文字是中文"  # 原样返回
    """
    if "<gpt_academic_string_mask>" not in string:  # No need to process
        return string

    def contains_chinese(string):
        chinese_regex = re.compile(u'[\u4e00-\u9fff]+')
        return chinese_regex.search(string) is not None

    mode = "english" if not contains_chinese(lang_reference) else "chinese"
    if mode == "english":
        string = const_extract_langbased_re.sub(r"\1", string)
    elif mode == "chinese":
        string = const_extract_langbased_re.sub(r"\2", string)
    else:
        raise ValueError("Invalid mode")
    return string


@lru_cache(maxsize=128)
def build_gpt_academic_masked_string_langbased(text_show_english="", text_show_chinese=""):
    """
    根据语言,选择提示词,生成带掩码tag的字符串
    """
    return f"<gpt_academic_string_mask><lang_english>{text_show_english}</lang_english><lang_chinese>{text_show_chinese}</lang_chinese></gpt_academic_string_mask>"


if __name__ == "__main__":
    # Test
    input_string = (
        "你好\n"
        + build_gpt_academic_masked_string(text_show_llm="mermaid", text_show_render="")
        + "你好\n"
    )
    print(
        apply_gpt_academic_string_mask(input_string, "show_llm")
    )  # Should print the string with 'mermaid' in place of the mask tags
    print(
        apply_gpt_academic_string_mask(input_string, "show_render")
    )  # Should print the string with the mask tags removed (empty render text)
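A minimal demonstration of the masking round-trip above (the outputs follow directly from the regex substitutions):

```python
s = "draw " + build_gpt_academic_masked_string(text_show_llm="a mermaid diagram",
                                               text_show_render="a diagram")
print(apply_gpt_academic_string_mask(s, mode="show_llm"))     # draw a mermaid diagram
print(apply_gpt_academic_string_mask(s, mode="show_render"))  # draw a diagram
```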
@@ -0,0 +1,41 @@
import unittest

def validate_path():
    import os, sys

    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)


validate_path()  # validate path so you can run from base directory

from shared_utils.key_pattern_manager import is_openai_api_key

class TestKeyPatternManager(unittest.TestCase):
    def test_is_openai_api_key_with_valid_key(self):
        key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        self.assertTrue(is_openai_api_key(key))

        key = "sx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        self.assertFalse(is_openai_api_key(key))

        key = "sess-wg61ZafYHpNz7FFwIH7HGZlbVqUVaeV5tatHCWpl"
        self.assertTrue(is_openai_api_key(key))

        key = "sess-wg61ZafYHpNz7FFwIH7HGZlbVqUVa5tatHCWpl"
        self.assertFalse(is_openai_api_key(key))

    def test_is_openai_api_key_with_invalid_key(self):
        key = "invalid_key"
        self.assertFalse(is_openai_api_key(key))

    def test_is_openai_api_key_with_custom_pattern(self):
        # Assuming you have set a custom pattern in your configuration
        key = "custom-pattern-key"
        self.assertFalse(is_openai_api_key(key))


if __name__ == '__main__':
    unittest.main()
@@ -11,28 +11,45 @@ def validate_path():


 validate_path()  # validate path so you can run from base directory
-if __name__ == "__main__":
-    # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
-    # from request_llms.bridge_moss import predict_no_ui_long_connection
-    # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
-    # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
-    # from request_llms.bridge_claude import predict_no_ui_long_connection
-    # from request_llms.bridge_internlm import predict_no_ui_long_connection
-    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
-    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
-    from request_llms.bridge_qwen_local import predict_no_ui_long_connection
-
-    # from request_llms.bridge_spark import predict_no_ui_long_connection
-    # from request_llms.bridge_zhipu import predict_no_ui_long_connection
-    # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
-
-    llm_kwargs = {
-        "max_length": 4096,
-        "top_p": 1,
-        "temperature": 1,
-    }
-
-    result = predict_no_ui_long_connection(
-        inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
-    )
-    print("final result:", result)
+if "在线模型":
+    if __name__ == "__main__":
+        from request_llms.bridge_cohere import predict_no_ui_long_connection
+        # from request_llms.bridge_spark import predict_no_ui_long_connection
+        # from request_llms.bridge_zhipu import predict_no_ui_long_connection
+        # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
+        llm_kwargs = {
+            "llm_model": "command-r-plus",
+            "max_length": 4096,
+            "top_p": 1,
+            "temperature": 1,
+        }
+        result = predict_no_ui_long_connection(
+            inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt="系统"
+        )
+        print("final result:", result)
+        print("final result:", result)
+
+if "本地模型":
+    if __name__ == "__main__":
+        # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
+        # from request_llms.bridge_moss import predict_no_ui_long_connection
+        # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
+        # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
+        # from request_llms.bridge_claude import predict_no_ui_long_connection
+        # from request_llms.bridge_internlm import predict_no_ui_long_connection
+        # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+        # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+        # from request_llms.bridge_qwen_local import predict_no_ui_long_connection
+        llm_kwargs = {
+            "max_length": 4096,
+            "top_p": 1,
+            "temperature": 1,
+        }
+        result = predict_no_ui_long_connection(
+            inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
+        )
+        print("final result:", result)
@@ -20,10 +20,10 @@ if __name__ == "__main__":

     # plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2307.07522")

     plugin_test(
-        plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
+        plugin="crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF",
         main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
     )
@@ -66,7 +66,7 @@ if __name__ == "__main__":

     # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2210.03629")

     # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" }
     # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
themes/base64.mjs (new regular file, 1 line)
@@ -0,0 +1 @@
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
@@ -59,6 +59,7 @@

 /* Scrollbar Width */
 ::-webkit-scrollbar {
+    height: 12px;
     width: 12px;
 }

themes/common.js (275 lines changed)
@@ -2,6 +2,76 @@
 // Part 1: utility functions
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+function push_data_to_gradio_component(DAT, ELEM_ID, TYPE) {
+    // type, // type==="str" / type==="float"
+    if (TYPE == "str") {
+        // convert dat to string: do nothing
+    }
+    else if (TYPE == "no_conversion") {
+        // do nothing
+    }
+    else if (TYPE == "float") {
+        // convert dat to float
+        DAT = parseFloat(DAT);
+    }
+    const myEvent = new CustomEvent('gpt_academic_update_gradio_component', {
+        detail: {
+            data: DAT,
+            elem_id: ELEM_ID,
+        }
+    });
+    window.dispatchEvent(myEvent);
+}
+
+
+async function get_gradio_component(ELEM_ID) {
+    function waitFor(ELEM_ID) {
+        return new Promise((resolve) => {
+            const myEvent = new CustomEvent('gpt_academic_get_gradio_component_value', {
+                detail: {
+                    elem_id: ELEM_ID,
+                    resolve,
+                }
+            });
+            window.dispatchEvent(myEvent);
+        });
+    }
+    let result = await waitFor(ELEM_ID);
+    return result;
+}
+
+
+async function get_data_from_gradio_component(ELEM_ID) {
+    let comp = await get_gradio_component(ELEM_ID);
+    return comp.props.value;
+}
+
+
+function update_array(arr, item, mode) {
+    // // Remove "输入清除键"
+    // p = update_array(p, "输入清除键", "remove");
+    // console.log(p); // Should log: ["基础功能区", "函数插件区"]
+
+    // // Add "输入清除键"
+    // p = update_array(p, "输入清除键", "add");
+    // console.log(p); // Should log: ["基础功能区", "函数插件区", "输入清除键"]
+
+    const index = arr.indexOf(item);
+    if (mode === "remove") {
+        if (index !== -1) {
+            // Item found, remove it
+            arr.splice(index, 1);
+        }
+    } else if (mode === "add") {
+        if (index === -1) {
+            // Item not found, add it
+            arr.push(item);
+        }
+    }
+    return arr;
+}
+
+
 function gradioApp() {
     // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
     const elems = document.getElementsByTagName('gradio-app');
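Note: the helpers added above talk to the Gradio front end exclusively through DOM CustomEvents; the matching listeners are not in this file but ship with the patched Gradio runtime (binary-husky/gradio-fix). A minimal sketch of what such a listener pair could look like, where component_registry is a purely hypothetical name and not the actual implementation:

// Hypothetical consumer side of the CustomEvent bridge (illustrative only).
const component_registry = {};  // assumed map: elem_id -> Gradio/Svelte component

window.addEventListener('gpt_academic_update_gradio_component', (event) => {
    const { data, elem_id } = event.detail;
    component_registry[elem_id].props.value = data;  // apply the pushed value
});

window.addEventListener('gpt_academic_get_gradio_component_value', (event) => {
    const { elem_id, resolve } = event.detail;
    resolve(component_registry[elem_id]);  // fulfil the Promise created in waitFor()
});

// With such listeners in place, scripts on this side read and write by elem_id:
// push_data_to_gradio_component("hello", "elem_prompt", "str");
// const checked = await get_data_from_gradio_component("cbs");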
@@ -14,6 +84,7 @@ function gradioApp() {
     return elem.shadowRoot ? elem.shadowRoot : elem;
 }
+
 
 function setCookie(name, value, days) {
     var expires = "";
@@ -26,6 +97,7 @@ function setCookie(name, value, days) {
     document.cookie = name + "=" + value + expires + "; path=/";
 }
+
 
 function getCookie(name) {
     var decodedCookie = decodeURIComponent(document.cookie);
     var cookies = decodedCookie.split(';');
@@ -41,6 +113,7 @@ function getCookie(name) {
     return null;
 }
+
 
 let toastCount = 0;
 function toast_push(msg, duration) {
     duration = isNaN(duration) ? 3000 : duration;
@@ -63,6 +136,7 @@ function toast_push(msg, duration) {
     toastCount++;
 }
+
 
 function toast_up(msg) {
     var m = document.getElementById('toast_up');
     if (m) {
@@ -75,6 +149,7 @@ function toast_up(msg) {
     document.body.appendChild(m);
 }
+
 
 function toast_down() {
     var m = document.getElementById('toast_up');
     if (m) {
@@ -82,6 +157,7 @@ function toast_down() {
     }
 }
+
 
 function begin_loading_status() {
     // Create the loader div and add styling
     var loader = document.createElement('div');
@@ -234,7 +310,7 @@ let timeoutID = null;
 let lastInvocationTime = 0;
 let lastArgs = null;
 function do_something_but_not_too_frequently(min_interval, func) {
-    return function(...args) {
+    return function (...args) {
         lastArgs = args;
         const now = Date.now();
         if (!lastInvocationTime || (now - lastInvocationTime) >= min_interval) {
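The hunk above only touches the first line of the rate limiter's returned closure; the rest of do_something_but_not_too_frequently lies outside the diff context. A plausible completion, inferred from the visible state variables (timeoutID, lastInvocationTime, lastArgs) and not taken verbatim from the repository, is a throttle with a trailing call:

// Sketch of a throttle consistent with the visible state (an assumption, not
// the repository's exact body): run at most once per min_interval, and queue
// one trailing call so the most recent arguments are never dropped.
function do_something_but_not_too_frequently_sketch(min_interval, func) {
    let timeoutID = null;
    let lastInvocationTime = 0;
    let lastArgs = null;
    return function (...args) {
        lastArgs = args;
        const now = Date.now();
        if (!lastInvocationTime || (now - lastInvocationTime) >= min_interval) {
            lastInvocationTime = now;
            func(...lastArgs);                      // enough time has passed: run now
        } else if (timeoutID === null) {
            timeoutID = setTimeout(() => {          // otherwise schedule a trailing run
                timeoutID = null;
                lastInvocationTime = Date.now();
                func(...lastArgs);
            }, min_interval - (now - lastInvocationTime));
        }
    };
}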
@@ -256,6 +332,7 @@ function do_something_but_not_too_frequently(min_interval, func) {
     }
 }
+
 
 function chatbotContentChanged(attempt = 1, force = false) {
     // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
     for (var i = 0; i < attempt; i++) {
@@ -263,13 +340,8 @@ function chatbotContentChanged(attempt = 1, force = false) {
             gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
         }, i === 0 ? 0 : 200);
     }
-
-    const run_mermaid_render = do_something_but_not_too_frequently(1000, function () {
-        const blocks = document.querySelectorAll(`pre.mermaid, diagram-div`);
-        if (blocks.length == 0) { return; }
-        uml("mermaid");
-    });
-    run_mermaid_render();
+    // we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
 }
@@ -277,7 +349,6 @@ function chatbotContentChanged(attempt = 1, force = false) {
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 // Part 3: dynamic chatbot height adjustment
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-
 function chatbotAutoHeight() {
     // auto-adjust height: immediately
     function update_height() {
@@ -309,6 +380,7 @@ function chatbotAutoHeight() {
     setInterval(function () { update_height_slow() }, 50);  // run every 50 ms
 }
+
 
 swapped = false;
 function swap_input_area() {
     // Get the elements to be swapped
@@ -328,6 +400,7 @@ function swap_input_area() {
     else { swapped = true; }
 }
+
 
 function get_elements(consider_state_panel = false) {
     var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
     if (!chatbot) {
@@ -349,7 +422,7 @@ function get_elements(consider_state_panel = false) {
     var chatbot_height = chatbot.style.height;
     // swap the input area position so that the input area stays usable
     if (!swapped) {
-        if (panel1.top != 0 && (panel1.bottom + panel1.top) / 2 < 0) { swap_input_area(); }
+        if (panel1.top != 0 && (0.9 * panel1.bottom + 0.1 * panel1.top) < 0) { swap_input_area(); }
     }
     else if (swapped) {
         if (panel2.top != 0 && panel2.top > 0) { swap_input_area(); }
@@ -425,6 +498,7 @@ async function upload_files(files) {
     }
 }
+
 
 function register_func_paste(input) {
     let paste_files = [];
     if (input) {
@@ -451,6 +525,7 @@ function register_func_paste(input) {
     }
 }
+
 
 function register_func_drag(elem) {
     if (elem) {
         const dragEvents = ["dragover"];
@@ -487,6 +562,7 @@ function register_func_drag(elem) {
     }
 }
+
 
 function elem_upload_component_pop_message(elem) {
     if (elem) {
         const dragEvents = ["dragover"];
@@ -516,6 +592,7 @@ function elem_upload_component_pop_message(elem) {
     }
 }
+
 
 function register_upload_event() {
     locate_upload_elems();
     if (elem_upload_float) {
@@ -538,6 +615,7 @@ function register_upload_event() {
     }
 }
+
 
 function monitoring_input_box() {
     register_upload_event();

@@ -571,7 +649,6 @@ window.addEventListener("DOMContentLoaded", function () {
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 // Part 5: audio button style changes
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-
 function audio_fn_init() {
     let audio_component = document.getElementById('elem_audio');
     if (audio_component) {
@@ -608,6 +685,7 @@ function audio_fn_init() {
     }
 }
+
 
 function minor_ui_adjustment() {
     let cbsc_area = document.getElementById('cbsc');
     cbsc_area.style.paddingTop = '15px';
@@ -672,9 +750,9 @@ function limit_scroll_position() {
     let scrollableDiv = document.querySelector('#gpt-chatbot > div.wrap');
     scrollableDiv.addEventListener('wheel', function (e) {
         let preventScroll = false;
-        if (e.deltaX != 0) { prevented_offset = 0; return;}
-        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return;}
-        if (e.deltaY < 0) { prevented_offset = 0; return;}
+        if (e.deltaX != 0) { prevented_offset = 0; return; }
+        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return; }
+        if (e.deltaY < 0) { prevented_offset = 0; return; }
         if (e.deltaY > 0 && this.scrollHeight - this.clientHeight - this.scrollTop <= 1) { preventScroll = true; }

         if (preventScroll) {
@@ -700,7 +778,88 @@ function limit_scroll_position() {
 // Part 7: JS initialization functions
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

-function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
+function loadLive2D() {
+    try {
+        $("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
+        $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
+        $.ajax({
+            url: "file=themes/waifu_plugin/waifu-tips.js", dataType: "script", cache: true, success: function () {
+                $.ajax({
+                    url: "file=themes/waifu_plugin/live2d.js", dataType: "script", cache: true, success: function () {
+                        /* some parameters can be modified directly */
+                        live2d_settings['hitokotoAPI'] = "hitokoto.cn";  // Hitokoto API
+                        live2d_settings['modelId'] = 3;                  // default model ID
+                        live2d_settings['modelTexturesId'] = 44;         // default texture ID
+                        live2d_settings['modelStorage'] = false;         // do not persist the model ID
+                        live2d_settings['waifuSize'] = '210x187';
+                        live2d_settings['waifuTipsSize'] = '187x52';
+                        live2d_settings['canSwitchModel'] = true;
+                        live2d_settings['canSwitchTextures'] = true;
+                        live2d_settings['canSwitchHitokoto'] = false;
+                        live2d_settings['canTakeScreenshot'] = false;
+                        live2d_settings['canTurnToHomePage'] = false;
+                        live2d_settings['canTurnToAboutPage'] = false;
+                        live2d_settings['showHitokoto'] = false;         // show hitokoto quotes
+                        live2d_settings['showF12Status'] = false;        // show loading status
+                        live2d_settings['showF12Message'] = false;       // show waifu messages
+                        live2d_settings['showF12OpenMsg'] = false;       // show console-open hint
+                        live2d_settings['showCopyMessage'] = false;      // show copy-content hint
+                        live2d_settings['showWelcomeMessage'] = true;    // show welcome message on page entry
+                        /* add before initModel */
+                        initModel("file=themes/waifu_plugin/waifu-tips.json");
+                    }
+                });
+            }
+        });
+    } catch (err) { console.log("[Error] JQuery is not defined.") }
+}
+
+
+function get_checkbox_selected_items(elem_id) {
+    let display_panel_arr = [];
+    document.getElementById(elem_id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
+        // Get the span text
+        const spanText = label.querySelector('span').textContent;
+        // Get the input value
+        const checked = label.querySelector('input').checked;
+        if (checked) {
+            display_panel_arr.push(spanText)
+        }
+    });
+    return display_panel_arr;
+}
+
+
+function gpt_academic_gradio_saveload(
+    save_or_load,         // save_or_load==="save" / save_or_load==="load"
+    elem_id,              // element id
+    cookie_key,           // cookie key
+    save_value = "",      // save value
+    load_type = "str",    // type==="str" / type==="float"
+    load_default = false, // load default value
+    load_default_value = ""
+) {
+    if (save_or_load === "load") {
+        let value = getCookie(cookie_key);
+        if (value) {
+            console.log('加载cookie', elem_id, value)
+            push_data_to_gradio_component(value, elem_id, load_type);
+        }
+        else {
+            if (load_default) {
+                console.log('加载cookie的默认值', elem_id, load_default_value)
+                push_data_to_gradio_component(load_default_value, elem_id, load_type);
+            }
+        }
+    }
+    if (save_or_load === "save") {
+        setCookie(cookie_key, save_value, 365);
+    }
+}
+
+
+async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout) {
+    // Part 1: layout initialization
     audio_fn_init();
     minor_ui_adjustment();
     chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
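gpt_academic_gradio_saveload above is deliberately symmetric: "save" writes the raw string into a one-year cookie, while "load" reads it back and pushes it into the component, optionally parsing it as a float. A usage sketch follows; the "load" call mirrors real call sites in the init function continued below, whereas the "save" call site and the value "0.7" are assumptions for illustration:

// Persist a slider value when it changes (assumed call site):
gpt_academic_gradio_saveload("save", "elem_temperature", "js_temperature_cookie", "0.7");
// On the next page load, restore it and convert back to a float:
gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");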
@@ -708,8 +867,90 @@ function GptAcademicJavaScriptInit(dark, prompt, live2d, layout) {
         chatbotContentChanged(1);
     });
     chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
-    if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
-    if (LAYOUT === "LEFT-RIGHT") { limit_scroll_position(); }
-    // setInterval(function () { uml("mermaid") }, 5000); // run every 5000 ms
+    if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
+    if (layout === "LEFT-RIGHT") { limit_scroll_position(); }
+
+    // Part 2: read cookies and initialize the UI
+    let searchString = "";
+    let bool_value = "";
+
+    // dark mode
+    if (getCookie("js_darkmode_cookie")) {
+        dark = getCookie("js_darkmode_cookie")
+    }
+    dark = dark == "True";
+    if (document.querySelectorAll('.dark').length) {
+        if (!dark) {
+            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+        }
+    } else {
+        if (dark) {
+            document.querySelector('body').classList.add('dark');
+        }
+    }
+
+    // SysPrompt: silent system prompt
+    gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
+
+    // Temperature: LLM temperature parameter
+    gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");
+
+    // clearButton: input-clear button
+    if (getCookie("js_clearbtn_show_cookie")) {
+        // have cookie
+        bool_value = getCookie("js_clearbtn_show_cookie")
+        bool_value = bool_value == "True";
+        searchString = "输入清除键";
+
+        if (bool_value) {
+            // make btns appear
+            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
+            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
+            // deal with checkboxes
+            let arr_with_clear_btn = update_array(
+                await get_data_from_gradio_component('cbs'), "输入清除键", "add"
+            )
+            push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
+        } else {
+            // make btns disappear
+            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
+            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
+            // deal with checkboxes
+            let arr_without_clear_btn = update_array(
+                await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
+            )
+            push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
+        }
+    }
+
+    // live2d display
+    if (getCookie("js_live2d_show_cookie")) {
+        // have cookie
+        searchString = "添加Live2D形象";
+        bool_value = getCookie("js_live2d_show_cookie");
+        bool_value = bool_value == "True";
+        if (bool_value) {
+            loadLive2D();
+            let arr_with_live2d = update_array(
+                await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
+            )
+            push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
+        } else {
+            try {
+                $('.waifu').hide();
+                let arr_without_live2d = update_array(
+                    await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
+                )
+                push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
+            } catch (error) {
+            }
+        }
+    } else {
+        // do not have cookie
+        if (live2d) {
+            loadLive2D();
+        } else {
+        }
+    }
 }
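The checkbox branches above show the helpers working together: read the checkbox group, add or remove one label with update_array, then push the updated array back with "no_conversion". Reading alone is simpler; a small sketch using the elem id 'cbs' and labels that already appear in this diff:

// Log which labels of the 'cbs' checkbox group are currently checked.
const selected = get_checkbox_selected_items('cbs');
console.log(selected);  // e.g. ["基础功能区", "函数插件区"]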
themes/common.py (new regular file, 18 lines)
@@ -0,0 +1,18 @@
+from toolbox import get_conf
+CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
+
+def get_common_html_javascript_code():
+    js = "\n"
+    for jsf in [
+        "file=themes/common.js",
+    ]:
+        js += f"""<script src="{jsf}"></script>\n"""
+
+    # Add Live2D
+    if ADD_WAIFU:
+        for jsf in [
+            "file=themes/waifu_plugin/jquery.min.js",
+            "file=themes/waifu_plugin/jquery-ui.min.js",
+        ]:
+            js += f"""<script src="{jsf}"></script>\n"""
+    return js
@@ -67,22 +67,9 @@ def adjust_theme():
         button_cancel_text_color_dark="white",
     )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # Add a cute waifu mascot
-    if ADD_WAIFU:
-        js += """
-            <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-            <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-            <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

     if not hasattr(gr, "RawTemplateResponse"):
         gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
     gradio_original_template_fn = gr.RawTemplateResponse
@@ -67,22 +67,8 @@ def adjust_theme():
         button_cancel_text_color_dark="white",
     )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # Add a cute waifu mascot
-    if ADD_WAIFU:
-        js += """
-            <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-            <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-            <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()
     if not hasattr(gr, "RawTemplateResponse"):
         gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
     gradio_original_template_fn = gr.RawTemplateResponse
@@ -31,23 +31,9 @@ def adjust_theme():
     THEME = THEME.lstrip("huggingface-")
     set_theme = set_theme.from_hub(THEME.lower())

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # Add a cute waifu mascot
-    if ADD_WAIFU:
-        js += """
-            <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-            <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-            <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

     if not hasattr(gr, "RawTemplateResponse"):
         gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
     gradio_original_template_fn = gr.RawTemplateResponse
@@ -76,22 +76,8 @@ def adjust_theme():
         chatbot_code_background_color_dark="*neutral_950",
     )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # Add a cute waifu mascot
-    if ADD_WAIFU:
-        js += """
-            <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-            <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-            <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

     with open(os.path.join(theme_dir, "green.js"), "r", encoding="utf8") as f:
         js += f"<script>{f.read()}</script>"
themes/mermaid.min.js (vendored, 1590 lines changed; file diff hidden because one or more lines are too long)
Some files were not shown because too many files changed in this diff.