Comparing commits

...

71 commits

Author SHA1 Message Commit date
binary-husky
6538c58b8e Update README.md 2023-04-25 18:30:11 +08:00
binary-husky
e35eb9048e Update README.md 2023-04-25 16:48:08 +08:00
binary-husky
a0fa64de47 Update README.md 2023-04-25 16:46:36 +08:00
binary-husky
e04946c816 Update README.md 2023-04-25 16:45:53 +08:00
binary-husky
231c9c2e57 Update README.md 2023-04-25 16:11:35 +08:00
binary-husky
48555f570c Update README.md 2023-04-25 16:11:00 +08:00
binary-husky
7c9195ddd2 Update README.md 2023-04-25 15:50:35 +08:00
binary-husky
5500fbe682 Update README.md 2023-04-25 15:49:57 +08:00
binary-husky
5a83b3b096 version 3.3 2023-04-24 21:10:01 +08:00
binary-husky
4783fd6f37 UP 2023-04-24 21:02:16 +08:00
binary-husky
9a4b56277c Function Refactor 2023-04-24 20:59:10 +08:00
binary-husky
5eea959103 Markdown translation: support GitHub URLs 2023-04-24 20:51:34 +08:00
binary-husky
856df8fb62 Validate conversation context 2023-04-24 20:18:32 +08:00
binary-husky
8e59412c47 Fix unreasonable code in the newbing interaction 2023-04-24 20:14:23 +08:00
binary-husky
8f571ff68f Merge branch 'v3.3' 2023-04-24 19:58:07 +08:00
binary-husky
b6d2766e59 Improve features 2023-04-24 19:54:28 +08:00
binary-husky
73ce471a0e max_worker_limit 2023-04-24 19:24:19 +08:00
binary-husky
4e113139c8 Merge branch 'master' into v3.3 2023-04-24 19:09:44 +08:00
binary-husky
e4c4b28ddf Update README.md 2023-04-24 18:20:33 +08:00
binary-husky
081acc6404 Fix colors 2023-04-24 17:42:24 +08:00
binary-husky
1a999497d7 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-24 17:33:23 +08:00
binary-husky
6137963355 Rescue the previously disastrous code color scheme 2023-04-24 17:33:18 +08:00
binary-husky
22bffdb737 Update README.md 2023-04-24 12:25:10 +08:00
binary-husky
75adcbffeb Update README.md 2023-04-24 12:24:46 +08:00
binary-husky
4451770061 Update README.md 2023-04-24 12:24:29 +08:00
binary-husky
09c413a272 Update README.md 2023-04-24 12:17:58 +08:00
binary-husky
ddb6c90a8f Update README.md 2023-04-24 12:17:04 +08:00
binary-husky
71590426f9 Update README.md 2023-04-24 12:16:49 +08:00
binary-husky
b3e5cdb3a5 Add some comments 2023-04-24 12:08:42 +08:00
binary-husky
6595ab813e Fix counting error 2023-04-24 11:54:15 +08:00
binary-husky
d1efbd26da Fix prompt 2023-04-24 11:48:39 +08:00
binary-husky
f04683732e Bug to be investigated 2023-04-24 11:39:40 +08:00
binary-husky
cb0241db78 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-24 11:34:53 +08:00
binary-husky
a097b6cd03 Reduce the number of papers processed per batch 2023-04-24 11:34:47 +08:00
Your Name
487ffe7888 Merge remote-tracking branch 'origin/master' into v3.3 2023-04-24 02:07:07 +08:00
binary-husky
51424a7d08 Update README.md 2023-04-24 01:57:13 +08:00
binary-husky
06e8e8f9a6 Update README.md 2023-04-24 01:55:53 +08:00
binary-husky
0512b311f8 Update README.md 2023-04-24 01:55:10 +08:00
binary-husky
81d53d0726 Update README.md 2023-04-24 01:47:35 +08:00
binary-husky
a141c5ccdc Update README.md 2023-04-24 01:46:58 +08:00
binary-husky
e361d741c3 Update README.md 2023-04-24 01:44:30 +08:00
binary-husky
f5bc58dbde Update README.md 2023-04-24 01:41:47 +08:00
Your Name
e7b73f3041 update readme 2023-04-24 00:43:57 +08:00
Your Name
ed8db8c8ae README 2023-04-23 23:49:55 +08:00
Your Name
df97213d3b version 3.3 2023-04-23 23:43:07 +08:00
Your Name
97443d1f83 Remove dependency 2023-04-23 23:40:18 +08:00
Your Name
59bed52faf Change how the dependency is referenced 2023-04-23 23:39:54 +08:00
Your Name
3814c3a915 Modify dependencies 2023-04-23 23:36:55 +08:00
Your Name
d98d0a291e Move functions around 2023-04-23 23:34:13 +08:00
Your Name
ee94fa6dc4 Split into two files 2023-04-23 23:32:35 +08:00
Your Name
d2e46f6684 Update prompts 2023-04-23 23:26:23 +08:00
Your Name
5948dcacd5 Add thread lock 2023-04-23 23:25:49 +08:00
Your Name
3041858e7f Improve prompts 2023-04-23 23:16:25 +08:00
Your Name
9c2a6bc413 Improve error messages 2023-04-23 23:13:00 +08:00
Your Name
1cf8b6c6c8 Fix details 2023-04-23 22:47:45 +08:00
Your Name
781ef4487c Fix some details 2023-04-23 22:44:18 +08:00
Your Name
4a494354b1 Show URLs cited in newbing replies 2023-04-23 22:34:24 +08:00
Your Name
385c775aa5 Support newbing on Python versions below 3.10 2023-04-23 20:54:57 +08:00
binary-husky
518385dea2 add newbing, testing 2023-04-23 19:17:09 +08:00
binary-husky
4d1eea7bd5 Update docs 2023-04-23 18:40:58 +08:00
binary-husky
9cb51ccc70 restore default model 2023-04-23 18:38:05 +08:00
binary-husky
94dc398163 restore default model 2023-04-23 18:37:15 +08:00
binary-husky
65317e33af Merge branch 'newbing' into v3.3 2023-04-23 18:35:21 +08:00
binary-husky
06fbdf43af Correct some comments 2023-04-23 18:34:16 +08:00
binary-husky
ab61418410 better traceback 2023-04-23 18:13:30 +08:00
binary-husky
0785ff2aed Fine-tune conversation clipping 2023-04-23 17:45:56 +08:00
binary-husky
676fe40d39 Optimize the truncation strategy for chatgpt conversations 2023-04-23 17:32:44 +08:00
binary-husky
0b89673ee9 Merge pull request #571 from codycjy/notebook_args
feat(jupyter): use args to disable Markdown parse
2023-04-23 11:24:41 +08:00
binary-husky
2f4e050612 Update README.md 2023-04-23 11:22:35 +08:00
binary-husky
2b96217f2b Implement the Newbing chat feature 2023-04-22 21:18:35 +08:00
saltfish
13342c2988 feat(jupyter): use args to disable Markdown parse 2023-04-22 21:11:24 +08:00
18 files changed, with 1105 insertions and 252 deletions

README.md (130 changed lines)

@@ -1,8 +1,13 @@
+> **Note**
+>
+> 本项目依赖的Gradio组件的新版pip包(Gradio 3.26~3.27)有严重bug。所以,请在安装时严格选择requirements.txt中**指定的版本**。
+>
+> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
+>
 # <img src="docs/logo.png" width="40" > ChatGPT 学术优化
-**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests**
+**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发pull requests**
 If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
@@ -20,25 +25,25 @@ If you like this project, please give it a Star. If you've come up with more use
 --- | ---
 一键润色 | 支持一键润色、一键查找论文语法错误
 一键中英互译 | 一键中英互译
-一键代码解释 | 可以正确显示代码、解释代码
+一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
-[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持配置代理服务器
-模块化设计 | 支持自定义高阶的函数插件与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
 [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
-读论文 | [函数插件] 一键解读latex论文全文并生成摘要
-Latex全文翻译、润色 | [函数插件] 一键翻译或润色latex论文
+读论文、翻译论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要
+Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
 批量注释生成 | [函数插件] 一键批量生成函数注释
+Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
 chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
-Markdown中英互译 | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
-[arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
-[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你选择有趣的文章
-公式/图片/表格显示 | 可以同时显示公式的tex形式和渲染形式,支持公式、代码高亮
-多线程函数插件支持 | 支持多线调用chatgpt,一键处理海量文本或程序
+[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
+[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
+互联网信息聚合+GPT | [函数插件] 一键让ChatGPT先Google搜索,再回答问题,信息流永不过时
+公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
+多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
-huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
+更多LLM模型接入 | 新加入Newbing测试接口(新必应AI)
 …… | ……
 </div>
@@ -75,9 +80,6 @@ huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/
 <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
 </div>
-多种大语言模型混合调用[huggingface测试版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)huggingface版不支持chatglm
 ---
 ## 安装-方法1直接运行 (Windows, Linux or MacOS)
@@ -88,14 +90,10 @@ git clone https://github.com/binary-husky/chatgpt_academic.git
 cd chatgpt_academic
 ```
-2. 配置API_KEY和代理设置
-在`config.py`中,配置 海外Proxy 和 OpenAI API KEY,说明如下
-```
-1. 如果你在国内,需要设置海外代理才能够顺利使用OpenAI API,设置方法请仔细阅读config.py1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies
-2. 配置 OpenAI API KEY。支持任意数量的OpenAI的密钥和API2D的密钥共存/负载均衡,多个KEY用英文逗号分隔即可,例如输入 API_KEY="OpenAI密钥1,API2D密钥2,OpenAI密钥3,OpenAI密钥4"
-3. 与代理网络有关的issue网络超时、代理不起作用汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1
-```
+2. 配置API_KEY
+在`config.py`中,配置API KEY等[设置](https://github.com/binary-husky/gpt_academic/issues/1) 。
 P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。
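
The override order described in that P.S. can be pictured with a small sketch. This is an illustration of the documented behavior, not the project's actual `toolbox` code:

```python
import importlib

def read_config_value(name):
    # Prefer config_private.py when it exists and defines the option;
    # fall back to config.py otherwise. config_private.py is ignored by
    # git, so API keys and proxy settings stay out of version control.
    try:
        private = importlib.import_module('config_private')
        if hasattr(private, name):
            return getattr(private, name)
    except ImportError:
        pass
    return getattr(importlib.import_module('config'), name)

API_KEY = read_config_value('API_KEY')
```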
@@ -123,14 +121,8 @@ python main.py
 5. 测试函数插件
 ```
-- 测试Python项目分析
-选择1input区域 输入 `./crazy_functions/test_project/python/dqn` , 然后点击 "解析整个Python项目"
-选择2展开文件上传区,将python文件/包含python文件的压缩包拖拽进去,在出现反馈提示后, 然后点击 "解析整个Python项目"
-- 测试自我代码解读(本项目自译解)
-点击 "[多线程Demo] 解析此项目本身(源码自译解)"
 - 测试函数插件模板函数要求gpt回答历史上的今天发生了什么,您可以根据此函数为模板,实现更复杂的功能
 点击 "[函数插件模板Demo] 历史上的今天"
+- 函数插件区下拉菜单中有更多功能可供选择
 ```
 ## 安装-方法2使用Docker
@@ -141,7 +133,7 @@ python main.py
 # 下载项目
 git clone https://github.com/binary-husky/chatgpt_academic.git
 cd chatgpt_academic
-# 配置 “海外Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
+# 配置 “Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
 用任意文本编辑器编辑 config.py
 # 安装
 docker build -t gpt-academic .
@@ -164,7 +156,6 @@ docker run --rm -it --net=host --gpus=all gpt-academic
 docker run --rm -it --net=host --gpus=all gpt-academic bash
 ```
 ## 安装-方法3其他部署方式需要云服务器知识与经验
 1. 远程云服务器部署
@@ -176,14 +167,6 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 3. 如何在二级网址(如`http://localhost/subpath`)下运行
 请访问[FastAPI运行说明](docs/WithFastapi.md)
-## 安装-代理配置
-1. 常规方法
-[配置代理](https://github.com/binary-husky/chatgpt_academic/issues/1)
-2. 纯新手教程
-[纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
 ---
 ## 自定义新的便捷按钮 / 自定义函数插件
@@ -211,73 +194,9 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 详情请参考[函数插件指南](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
----
-## 部分功能展示
-1. 图片显示:
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
-</div>
-2. 本项目的代码自译解(如果一个程序能够读懂并剖析自己):
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
-</div>
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
-</div>
-3. 其他任意Python/Cpp/Java/Go/Rect/...项目剖析:
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
-</div>
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
-</div>
-4. Latex论文一键阅读理解与摘要生成
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
-</div>
-5. 自动报告生成
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
-<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
-<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
-</div>
-6. 模块化功能设计
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
-<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
-</div>
-7. 源代码转译英文
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
-</div>
-8. 互联网在线信息综合
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
-<img src="https://user-images.githubusercontent.com/96192199/233779501-5ce826f0-6cca-4d59-9e5f-b4eacb8cc15f.png" width="800" >
-</div>
-## Todo 与 版本规划:
-- version 3.2+ (todo): 函数插件支持更多参数接口
+## 版本:
+- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
 - version 3.1: 支持同时问询多个gpt模型支持api2d,支持多个apikey负载均衡
 - version 3.0: 对chatglm和其他小型llm的支持
 - version 2.6: 重构了插件结构,提高了交互性,加入更多插件
@@ -289,7 +208,6 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 - version 2.0: 引入模块化函数插件
 - version 1.0: 基础功能
-chatgpt_academic开发者QQ群734063350
 ## 参考与学习


@@ -45,7 +45,7 @@ MAX_RETRY = 2
 # OpenAI模型选择是gpt4现在只对申请成功的人开放,体验gpt-4可以试试api2d
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing"]
 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
@@ -58,8 +58,14 @@ CONCURRENT_COUNT = 100
 AUTHENTICATION = []
 # 重新URL重新定向,实现更换API_URL的作用常规情况下,不要修改!!
-# 格式 {"https://api.openai.com/v1/chat/completions": "重定向的URL"}
+# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
 API_URL_REDIRECT = {}
 # 如果需要在二级路径下运行(常规情况下,不要修改!!需要配合修改main.py才能生效!
 CUSTOM_PATH = "/"
+# 如果需要使用newbing,把newbing的长长的cookie放到这里
+NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+your bing cookies here
+"""


@@ -21,6 +21,7 @@ def get_crazy_functions():
 from crazy_functions.总结word文档 import 总结word文档
 from crazy_functions.解析JupyterNotebook import 解析ipynb文件
 from crazy_functions.对话历史存档 import 对话历史存档
+from crazy_functions.批量Markdown翻译 import Markdown英译中
 function_plugins = {
     "解析整个Python项目": {
@@ -35,6 +36,8 @@ def get_crazy_functions():
         "Color": "stop",
         "AsButton":False,
         "Function": HotReload(解析ipynb文件),
+        "AdvancedArgs": True, # 调用时,唤起高级参数输入区默认False
+        "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
     },
     "批量总结Word文档": {
         "Color": "stop",
@@ -79,8 +82,14 @@ def get_crazy_functions():
         "Color": "stop", # 按钮颜色
         "Function": HotReload(读文章写摘要)
     },
+    "Markdown/Readme英译中": {
+        # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+        "Color": "stop",
+        "Function": HotReload(Markdown英译中)
+    },
     "批量生成函数注释": {
         "Color": "stop", # 按钮颜色
+        "AsButton": False, # 加入下拉菜单中
         "Function": HotReload(批量生成函数注释)
     },
     "[多线程Demo] 解析此项目本身(源码自译解)": {
@@ -108,7 +117,6 @@ def get_crazy_functions():
 from crazy_functions.Latex全文翻译 import Latex中译英
 from crazy_functions.Latex全文翻译 import Latex英译中
 from crazy_functions.批量Markdown翻译 import Markdown中译英
-from crazy_functions.批量Markdown翻译 import Markdown英译中
 function_plugins.update({
     "批量翻译PDF文档多线程": {
@@ -173,12 +181,7 @@ def get_crazy_functions():
         "AsButton": False, # 加入下拉菜单中
         "Function": HotReload(Markdown中译英)
     },
-    "[测试功能] 批量Markdown英译中输入路径或上传压缩包": {
-        # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
-        "Color": "stop",
-        "AsButton": False, # 加入下拉菜单中
-        "Function": HotReload(Markdown英译中)
-    },
 })
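
The two new keys hook into the plugin call path: when `AdvancedArgs` is true, the UI shows an extra input box whose raw string reaches the plugin as `plugin_kwargs['advanced_arg']`, which the 解析JupyterNotebook hunk below consumes. A minimal plugin-side sketch (the plugin name is hypothetical):

```python
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
    # plugin_kwargs['advanced_arg'] carries whatever the user typed into the
    # advanced-args box; treat it as an untrusted string and supply a default.
    arg = plugin_kwargs.get("advanced_arg", "1")
    try:
        flag = int(arg)
    except ValueError:
        flag = 1
    # ... the plugin body can now branch on `flag` ...
```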


@@ -1,5 +1,4 @@
-import traceback
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, trimmed_format_exc
 def input_clipping(inputs, history, max_token_limit):
     import numpy as np
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
 continue # 返回重试
 else:
 # 【选择放弃】
-tb_str = '```\n' + traceback.format_exc() + '```'
+tb_str = '```\n' + trimmed_format_exc() + '```'
 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
 return mutable[0] # 放弃
 except:
 # 【第三种情况】:其他错误:重试几次
-tb_str = '```\n' + traceback.format_exc() + '```'
+tb_str = '```\n' + trimmed_format_exc() + '```'
 print(tb_str)
 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
 if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
 if max_workers == -1: # 读取配置文件
     try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
     except: max_workers = 8
-    if max_workers <= 0 or max_workers >= 20: max_workers = 8
+    if max_workers <= 0: max_workers = 3
 # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
 if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
     max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
 continue # 返回重试
 else:
 # 【选择放弃】
-tb_str = '```\n' + traceback.format_exc() + '```'
+tb_str = '```\n' + trimmed_format_exc() + '```'
 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
 mutable[index][2] = "输入过长已放弃"
 return gpt_say # 放弃
 except:
 # 【第三种情况】:其他错误
-tb_str = '```\n' + traceback.format_exc() + '```'
+tb_str = '```\n' + trimmed_format_exc() + '```'
 print(tb_str)
 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
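
`trimmed_format_exc` is a new `toolbox` helper standing in for the bare `traceback.format_exc()` calls removed above. A plausible minimal sketch, under the assumption that it only shortens and sanitizes the traceback text:

```python
import os
import traceback

def trimmed_format_exc():
    # Sketch only (the real helper lives in toolbox.py): format the current
    # exception but strip the absolute working directory, keeping tracebacks
    # short and avoiding leaking the local filesystem layout.
    return traceback.format_exc().replace(os.getcwd(), '.')
```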


@@ -84,7 +84,33 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+def get_files_from_everything(txt):
+    import glob, os
+    success = True
+    if txt.startswith('http'):
+        # 网络的远程文件
+        txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
+        txt = txt.replace("/blob/", "/")
+        import requests
+        from toolbox import get_conf
+        proxies, = get_conf('proxies')
+        r = requests.get(txt, proxies=proxies)
+        with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
+        project_folder = './gpt_log/'
+        file_manifest = ['./gpt_log/temp.md']
+    elif txt.endswith('.md'):
+        # 直接给定文件
+        file_manifest = [txt]
+        project_folder = os.path.dirname(txt)
+    elif os.path.exists(txt):
+        # 本地路径,递归搜索
+        project_folder = txt
+        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
+    else:
+        success = False
+    return success, file_manifest, project_folder
 @CatchException
@@ -98,6 +124,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 # 尝试导入依赖,如果缺少依赖,则给出安装建议
 try:
     import tiktoken
+    import glob, os
 except:
     report_execption(chatbot, history,
         a=f"解析项目: {txt}",
@@ -105,19 +132,21 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
 return
 history = [] # 清空历史,以免输入溢出
-import glob, os
-if os.path.exists(txt):
-    project_folder = txt
-else:
+success, file_manifest, project_folder = get_files_from_everything(txt)
+if not success:
+    # 什么都没有
     if txt == "": txt = '空空如也的输入栏'
     report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
     return
-file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
 if len(file_manifest) == 0:
     report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
     return
 yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
@@ -135,6 +164,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 # 尝试导入依赖,如果缺少依赖,则给出安装建议
 try:
     import tiktoken
+    import glob, os
 except:
     report_execption(chatbot, history,
         a=f"解析项目: {txt}",
@@ -142,18 +172,13 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
 return
 history = [] # 清空历史,以免输入溢出
-import glob, os
-if os.path.exists(txt):
-    project_folder = txt
-else:
+success, file_manifest, project_folder = get_files_from_everything(txt)
+if not success:
+    # 什么都没有
     if txt == "": txt = '空空如也的输入栏'
     report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
     return
-if txt.endswith('.md'):
-    file_manifest = [txt]
-else:
-    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
 if len(file_manifest) == 0:
     report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
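
The `http` branch of `get_files_from_everything` is what makes the "Markdown translation: support GitHub URLs" commit work: an ordinary GitHub blob link is rewritten to its raw-content equivalent before being fetched through the configured proxy. For example:

```python
url = "https://github.com/binary-husky/chatgpt_academic/blob/master/README.md"
url = url.replace("https://github.com/", "https://raw.githubusercontent.com/")
url = url.replace("/blob/", "/")
print(url)
# https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/README.md
```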


@@ -67,11 +67,16 @@ def parseNotebook(filename, enable_markdown=1):
 def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+    enable_markdown = plugin_kwargs.get("advanced_arg", "1")
+    try:
+        enable_markdown = int(enable_markdown)
+    except ValueError:
+        enable_markdown = 1
     pfg = PaperFileGroup()
-    print(file_manifest)
     for fp in file_manifest:
-        file_content = parseNotebook(fp, enable_markdown=1)
+        file_content = parseNotebook(fp, enable_markdown=enable_markdown)
         pfg.file_paths.append(fp)
         pfg.file_contents.append(file_content)


@@ -1,5 +1,6 @@
 from toolbox import update_ui
 from toolbox import CatchException, report_execption, write_results_to_file
+from .crazy_utils import input_clipping
 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
 previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
 previous_iteration_files_string = ', '.join(previous_iteration_files)
 current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
-i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string}'
+i_say = f'用一张Markdown表格简要描述以下文件的功能{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能'
 inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
 this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
 this_iteration_history.append(last_iteration_result)
+# 裁剪input
+inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
 result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-    inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
-    history=this_iteration_history, # 迭代之前的分析
+    inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+    history=this_iteration_history_feed, # 迭代之前的分析
     sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
 report_part_2.extend([i_say, result])
 last_iteration_result = result
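
`input_clipping` (imported from `crazy_utils`; its signature appears in the crazy_utils.py hunk earlier) keeps the combined prompt and history under a token budget before the request goes out. A rough sketch of that contract, assuming a caller-supplied token counter:

```python
def input_clipping_sketch(inputs, history, max_token_limit, count_tokens):
    # Assumed behavior: keep the prompt intact and discard the oldest
    # history entries until prompt + history fit within max_token_limit.
    while history and count_tokens(inputs + "".join(history)) > max_token_limit:
        history = history[1:]
    return inputs, history
```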


@@ -70,6 +70,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 # 尝试导入依赖,如果缺少依赖,则给出安装建议
 try:
     import arxiv
+    import math
     from bs4 import BeautifulSoup
 except:
     report_execption(chatbot, history,
@@ -80,25 +81,26 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 # 清空历史,以免输入溢出
 history = []
 meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
-if len(meta_paper_info_list[:10]) > 0:
-    i_say = "下面是一些学术文献的数据,请从中提取出以下内容。" + \
-        "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开is_paper_in_arxiv;4、引用数量cite;5、中文摘要翻译。" + \
-        f"以下是信息源:{str(meta_paper_info_list[:10])}"
-    inputs_show_user = f"请分析此页面中出现的所有文章:{txt}"
+batchsize = 5
+for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
+    if len(meta_paper_info_list[:batchsize]) > 0:
+        i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
+            "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开is_paper_in_arxiv;4、引用数量cite;5、中文摘要翻译。" + \
+            f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
+        inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
             inputs=i_say, inputs_show_user=inputs_show_user,
             llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
             sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。"
         )
-    history.extend([ "", gpt_say ])
-    meta_paper_info_list = meta_paper_info_list[10:]
-chatbot.append(["状态?", "已经全部完成"])
+        history.extend([ f"{batch+1}", gpt_say ])
+        meta_paper_info_list = meta_paper_info_list[batchsize:]
+chatbot.append(["状态?",
+                "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
 msg = '正常'
 yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
 res = write_results_to_file(history)
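
The hunk above turns a single "first 10 papers" request into fixed-size batches of 5, slicing the head off `meta_paper_info_list` each round. The same pattern in isolation:

```python
import math

def iter_batches(items, batchsize=5):
    # ceil(len/batchsize) rounds, consuming the head of the list each time,
    # mirroring the loop added to 谷歌检索小助手.
    for _ in range(math.ceil(len(items) / batchsize)):
        head, items = items[:batchsize], items[batchsize:]
        yield head

print(list(iter_batches(list(range(12)))))  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11]]
```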


@@ -173,9 +173,6 @@ def main():
 yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
 click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
 click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
-# def expand_file_area(file_upload, area_file_up):
-#     if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
-# click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
 cancel_handles.append(click_handle)
 # 终止按钮的回调函数注册
 stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)


@@ -11,7 +11,7 @@
 import tiktoken
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf
+from toolbox import get_conf, trimmed_format_exc
 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
 from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
 from .bridge_chatglm import predict as chatglm_ui
+from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
+from .bridge_newbing import predict as newbing_ui
 # from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
 # from .bridge_tgui import predict as tgui_ui
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
 API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
+newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
 # 兼容旧版的配置
 try:
     API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
 # 新版配置
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
+if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
 # 获取tokenizer
@@ -116,7 +121,15 @@ model_info = {
     "tokenizer": tokenizer_gpt35,
     "token_cnt": get_token_num_gpt35,
 },
+# newbing
+"newbing": {
+    "fn_with_ui": newbing_ui,
+    "fn_without_ui": newbing_noui,
+    "endpoint": newbing_endpoint,
+    "max_token": 4096,
+    "tokenizer": tokenizer_gpt35,
+    "token_cnt": get_token_num_gpt35,
+},
 }
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
 try:
     return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
 except Exception as e:
-    from toolbox import get_conf
-    import traceback
-    proxies, = get_conf('proxies')
-    tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
+    tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
     observe_window[0] = tb_str
     return tb_str
 return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
 def mutex_manager(window_mutex, observe_window):
     while True:
-        time.sleep(0.5)
+        time.sleep(0.25)
         if not window_mutex[-1]: break
         # 看门狗watchdog
         for i in range(n_model):
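
Registering `newbing` in `model_info` is all the rest of the pipeline needs: callers resolve a model name to its `fn_with_ui`/`fn_without_ui` pair and endpoint instead of hard-coding a backend. A simplified sketch of that dispatch (the real `predict_no_ui_long_connection` adds watchdogs and multi-model fan-out):

```python
def call_model_no_ui(llm_model, inputs, llm_kwargs, history, sys_prompt, observe_window):
    # Registry-based dispatch: supporting a new backend is just one more
    # model_info entry with the same two function slots.
    method = model_info[llm_model]["fn_without_ui"]
    return method(inputs, llm_kwargs, history, sys_prompt, observe_window)
```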


@@ -32,6 +32,7 @@ class GetGLMHandle(Process):
 return self.chatglm_model is not None
 def run(self):
+    # 子进程执行
     # 第一次运行,加载参数
     retry = 0
     while True:
@@ -53,17 +54,24 @@ class GetGLMHandle(Process):
     self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
     raise RuntimeError("不能正常加载ChatGLM的参数")
-    # 进入任务等待状态
     while True:
+        # 进入任务等待状态
         kwargs = self.child.recv()
+        # 收到消息,开始请求
         try:
             for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
                 self.child.send(response)
+                # # 中途接收可能的终止指令(如果有的话)
+                # if self.child.poll():
+                #     command = self.child.recv()
+                #     if command == '[Terminate]': break
         except:
             self.child.send('[Local Message] Call ChatGLM fail.')
+        # 请求处理结束,开始下一个循环
         self.child.send('[Finish]')
 def stream_chat(self, **kwargs):
+    # 主进程执行
     self.parent.send(kwargs)
     while True:
         res = self.parent.recv()
@@ -130,14 +138,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
 if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
 inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+# 处理历史信息
 history_feedin = []
 history_feedin.append(["What can I do?", system_prompt] )
 for i in range(len(history)//2):
     history_feedin.append([history[2*i], history[2*i+1]] )
+# 开始接收chatglm的回复
 for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
     chatbot[-1] = (inputs, response)
     yield from update_ui(chatbot=chatbot, history=history)
+# 总结输出
 history.extend([inputs, response])
 yield from update_ui(chatbot=chatbot, history=history)


@@ -21,7 +21,7 @@
 # config_private.py放自己的秘密如API和代理网址
 # 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
 proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
     get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
@@ -198,21 +198,24 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
 chunk_decoded = chunk.decode()
 error_msg = chunk_decoded
 if "reduce the length" in error_msg:
-    chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长,或历史数据过长. 历史缓存数据现已释放,您可以请再次尝试.")
-    history = [] # 清除历史
+    if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
+    history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
+                           max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
+    chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
+    # history = [] # 清除历史
 elif "does not exist" in error_msg:
-    chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在或者您没有获得体验资格.")
+    chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
 elif "Incorrect API key" in error_msg:
-    chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由拒绝服务.")
+    chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.")
 elif "exceeded your current quota" in error_msg:
-    chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由拒绝服务.")
+    chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.")
 elif "bad forward key" in error_msg:
     chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
 elif "Not enough point" in error_msg:
     chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
 else:
     from toolbox import regular_txt_to_markdown
-    tb_str = '```\n' + traceback.format_exc() + '```'
+    tb_str = '```\n' + trimmed_format_exc() + '```'
     chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
 yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
 return
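
`clip_history` replaces the old "wipe everything" recovery for context-length errors: instead of `history = []`, only the oldest turns are dropped until the conversation fits the model's token budget again. A hedged sketch of that behavior, assuming a tiktoken-style tokenizer:

```python
def clip_history_sketch(inputs, history, tokenizer, max_token_limit):
    # Assumed contract of toolbox.clip_history: trim from the oldest end,
    # preserving the newest context and the current input.
    def ntokens(txt):
        return len(tokenizer.encode(txt, disallowed_special=()))
    while history and ntokens(inputs + "".join(history)) > max_token_limit:
        history = history[1:]
    return history
```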


@@ -0,0 +1,251 @@
"""
========================================================================
第一部分来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
from .edge_gpt import NewbingChatbot
load_message = "等待NewBing响应。"
"""
========================================================================
第二部分子进程Worker调用主体
========================================================================
"""
import time
import json
import re
import asyncio
import importlib
import threading
from toolbox import update_ui, get_conf, trimmed_format_exc
from multiprocessing import Process, Pipe
def preprocess_newbing_out(s):
pattern = r'\^(\d+)\^' # 匹配^数字^
sub = lambda m: '\['+m.group(1)+'\]' # 将匹配到的数字作为替换值
result = re.sub(pattern, sub, s) # 替换操作
if '[1]' in result:
result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
return result
def preprocess_newbing_out_simple(result):
if '[1]' in result:
result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
return result
class NewBingHandle(Process):
def __init__(self):
super().__init__(daemon=True)
self.parent, self.child = Pipe()
self.newbing_model = None
self.info = ""
self.success = True
self.local_history = []
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
self.success = False
import certifi, httpx, rich
self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口有线程锁,否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
self.success = True
except:
self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
self.success = False
def ready(self):
return self.newbing_model is not None
async def async_run(self):
# 读取配置
NEWBING_STYLE, = get_conf('NEWBING_STYLE')
from request_llm.bridge_all import model_info
endpoint = model_info['newbing']['endpoint']
while True:
# 等待
kwargs = self.child.recv()
question=kwargs['query']
history=kwargs['history']
system_prompt=kwargs['system_prompt']
# 是否重置
if len(self.local_history) > 0 and len(history)==0:
await self.newbing_model.reset()
self.local_history = []
# 开始问问题
prompt = ""
if system_prompt not in self.local_history:
self.local_history.append(system_prompt)
prompt += system_prompt + '\n'
# 追加历史
for ab in history:
a, b = ab
if a not in self.local_history:
self.local_history.append(a)
prompt += a + '\n'
# if b not in self.local_history:
# self.local_history.append(b)
# prompt += b + '\n'
# 问题
prompt += question
self.local_history.append(question)
print('question:', prompt)
# 提交
async for final, response in self.newbing_model.ask_stream(
prompt=question,
conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
):
if not final:
print(response)
self.child.send(str(response))
else:
print('-------- receive final ---------')
self.child.send('[Finish]')
# self.local_history.append(response)
def run(self):
"""
这个函数运行在子进程
"""
# 第一次运行,加载参数
self.success = False
self.local_history = []
if (self.newbing_model is None) or (not self.success):
# 代理设置
proxies, = get_conf('proxies')
if proxies is None:
self.proxies_https = None
else:
self.proxies_https = proxies['https']
# cookie
NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
try:
cookies = json.loads(NEWBING_COOKIES)
except:
self.success = False
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
self.child.send('[Fail]')
self.child.send('[Finish]')
raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
try:
self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
except:
self.success = False
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
self.child.send('[Fail]')
self.child.send('[Finish]')
raise RuntimeError(f"不能加载Newbing组件。")
self.success = True
try:
# 进入任务等待状态
asyncio.run(self.async_run())
except Exception:
tb_str = '```\n' + trimmed_format_exc() + '```'
self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
self.child.send('[Fail]')
self.child.send('[Finish]')
def stream_chat(self, **kwargs):
"""
这个函数运行在主进程
"""
self.threadLock.acquire()
self.parent.send(kwargs) # 发送请求到子进程
while True:
res = self.parent.recv() # 等待newbing回复的片段
if res == '[Finish]':
break # 结束
elif res == '[Fail]':
self.success = False
break
else:
yield res # newbing回复的片段
self.threadLock.release()
"""
========================================================================
第三部分:主进程统一调用函数接口
========================================================================
"""
global newbing_handle
newbing_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
多线程方法
函数的说明请见 request_llm/bridge_all.py
"""
global newbing_handle
if (newbing_handle is None) or (not newbing_handle.success):
newbing_handle = NewBingHandle()
observe_window[0] = load_message + "\n\n" + newbing_handle.info
if not newbing_handle.success:
error = newbing_handle.info
newbing_handle = None
raise RuntimeError(error)
# 没有 sys_prompt 接口,因此把prompt加入 history
history_feedin = []
for i in range(len(history)//2):
history_feedin.append([history[2*i], history[2*i+1]] )
watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
response = ""
observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
observe_window[0] = preprocess_newbing_out_simple(response)
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return preprocess_newbing_out_simple(response)
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
函数的说明请见 request_llm/bridge_all.py
"""
chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
global newbing_handle
if (newbing_handle is None) or (not newbing_handle.success):
newbing_handle = NewBingHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not newbing_handle.success:
newbing_handle = None
return
if additional_fn is not None:
import core_functional
importlib.reload(core_functional) # 热更新prompt
core_functional = core_functional.get_core_functions()
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
history_feedin = []
for i in range(len(history)//2):
history_feedin.append([history[2*i], history[2*i+1]] )
chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
response = "[Local Message]: 等待NewBing响应中 ..."
yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, preprocess_newbing_out(response))
yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
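
`preprocess_newbing_out` above rewrites NewBing's `^n^` citation markers into escaped Markdown footnotes and, when a reference list is present, repeats the `[n]: url` lines in a fenced block; this is what the "Show URLs cited in newbing replies" commit surfaces in the chat. For example:

```python
s = "Python is popular^1^ and versatile^2^.\n[1]: https://example.org"
print(preprocess_newbing_out(s))
# Python is popular\[1\] and versatile\[2\].
# [1]: https://example.org
#
# ```
# [1]: https://example.org
# ```
```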

request_llm/edge_gpt.py (new regular file, 409 lines)

@@ -0,0 +1,409 @@
"""
========================================================================
第一部分来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
import argparse
import asyncio
import json
import os
import random
import re
import ssl
import sys
import uuid
from enum import Enum
from typing import Generator
from typing import Literal
from typing import Optional
from typing import Union
import websockets.client as websockets
DELIMITER = "\x1e"
# Generate random IP between range 13.104.0.0/14
FORWARDED_IP = (
f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
)
HEADERS = {
"accept": "application/json",
"accept-language": "en-US,en;q=0.9",
"content-type": "application/json",
"sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
"sec-ch-ua-arch": '"x86"',
"sec-ch-ua-bitness": '"64"',
"sec-ch-ua-full-version": '"109.0.1518.78"',
"sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-model": "",
"sec-ch-ua-platform": '"Windows"',
"sec-ch-ua-platform-version": '"15.0.0"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"x-ms-client-request-id": str(uuid.uuid4()),
"x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
"Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
"Referrer-Policy": "origin-when-cross-origin",
"x-forwarded-for": FORWARDED_IP,
}
HEADERS_INIT_CONVER = {
"authority": "edgeservices.bing.com",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"accept-language": "en-US,en;q=0.9",
"cache-control": "max-age=0",
"sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
"sec-ch-ua-arch": '"x86"',
"sec-ch-ua-bitness": '"64"',
"sec-ch-ua-full-version": '"110.0.1587.69"',
"sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-model": '""',
"sec-ch-ua-platform": '"Windows"',
"sec-ch-ua-platform-version": '"15.0.0"',
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-site": "none",
"sec-fetch-user": "?1",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
"x-edge-shopping-flag": "1",
"x-forwarded-for": FORWARDED_IP,
}
def get_ssl_context():
import certifi
ssl_context = ssl.create_default_context()
ssl_context.load_verify_locations(certifi.where())
return ssl_context
class NotAllowedToAccess(Exception):
pass
class ConversationStyle(Enum):
creative = "h3imaginative,clgalileo,gencontentv3"
balanced = "galileo"
precise = "h3precise,clgalileo"
CONVERSATION_STYLE_TYPE = Optional[
Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
]
def _append_identifier(msg: dict) -> str:
"""
Appends special character to end of message to identify end of message
"""
# Convert dict to json string
return json.dumps(msg) + DELIMITER
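# A worked example of this framing (an illustration, not part of the original
# EdgeGPT source): _append_identifier({"protocol": "json", "version": 1})
# returns '{"protocol": "json", "version": 1}\x1e'. The receiving side splits
# the websocket stream on DELIMITER (0x1e) to recover individual JSON objects.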
def _get_ran_hex(length: int = 32) -> str:
"""
Returns random hex string
"""
return "".join(random.choice("0123456789abcdef") for _ in range(length))
class _ChatHubRequest:
"""
Request object for ChatHub
"""
def __init__(
self,
conversation_signature: str,
client_id: str,
conversation_id: str,
invocation_id: int = 0,
) -> None:
self.struct: dict = {}
self.client_id: str = client_id
self.conversation_id: str = conversation_id
self.conversation_signature: str = conversation_signature
self.invocation_id: int = invocation_id
def update(
self,
prompt,
conversation_style,
options,
) -> None:
"""
Updates request object
"""
if options is None:
options = [
"deepleo",
"enable_debug_commands",
"disable_emoji_spoken_text",
"enablemm",
]
if conversation_style:
if not isinstance(conversation_style, ConversationStyle):
conversation_style = getattr(ConversationStyle, conversation_style)
options = [
"nlu_direct_response_filter",
"deepleo",
"disable_emoji_spoken_text",
"responsible_ai_policy_235",
"enablemm",
conversation_style.value,
"dtappid",
"cricinfo",
"cricinfov2",
"dv3sugg",
]
self.struct = {
"arguments": [
{
"source": "cib",
"optionsSets": options,
"sliceIds": [
"222dtappid",
"225cricinfo",
"224locals0",
],
"traceId": _get_ran_hex(32),
"isStartOfSession": self.invocation_id == 0,
"message": {
"author": "user",
"inputMethod": "Keyboard",
"text": prompt,
"messageType": "Chat",
},
"conversationSignature": self.conversation_signature,
"participant": {
"id": self.client_id,
},
"conversationId": self.conversation_id,
},
],
"invocationId": str(self.invocation_id),
"target": "chat",
"type": 4,
}
self.invocation_id += 1
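# A quick sketch of how the request builder behaves across turns
# (hypothetical identifiers; illustrative only):
def _demo_chathub_request():
    req = _ChatHubRequest(conversation_signature="sig", client_id="cid", conversation_id="conv")
    req.update(prompt="hello", conversation_style="balanced", options=None)
    assert req.struct["arguments"][0]["isStartOfSession"] is True
    assert req.struct["invocationId"] == "0"
    req.update(prompt="and again", conversation_style="balanced", options=None)
    # invocation_id has advanced, so the second frame is no longer a session start
    assert req.struct["arguments"][0]["isStartOfSession"] is False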
class _Conversation:
"""
Conversation API
"""
def __init__(
self,
cookies,
proxy,
) -> None:
self.struct: dict = {
"conversationId": None,
"clientId": None,
"conversationSignature": None,
"result": {"value": "Success", "message": None},
}
import httpx
self.proxy = proxy
proxy = (
proxy
or os.environ.get("all_proxy")
or os.environ.get("ALL_PROXY")
or os.environ.get("https_proxy")
or os.environ.get("HTTPS_PROXY")
or None
)
if proxy is not None and proxy.startswith("socks5h://"):
proxy = "socks5://" + proxy[len("socks5h://") :]
self.session = httpx.Client(
proxies=proxy,
timeout=30,
headers=HEADERS_INIT_CONVER,
)
for cookie in cookies:
self.session.cookies.set(cookie["name"], cookie["value"])
# Send GET request
response = self.session.get(
url=os.environ.get("BING_PROXY_URL")
or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
)
if response.status_code != 200:
response = self.session.get(
"https://edge.churchless.tech/edgesvc/turing/conversation/create",
)
if response.status_code != 200:
print(f"Status code: {response.status_code}")
print(response.text)
print(response.url)
raise Exception("Authentication failed")
try:
self.struct = response.json()
except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
raise Exception(
"Authentication failed. You have not been accepted into the beta.",
) from exc
if self.struct["result"]["value"] == "UnauthorizedRequest":
raise NotAllowedToAccess(self.struct["result"]["message"])
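# _Conversation expects cookies as a list of {"name": ..., "value": ...} dicts,
# e.g. as exported by a browser cookie editor; the authenticated Bing session
# cookie (commonly named "_U") is the important one. A minimal sketch
# (hypothetical values; illustrative only):
def _demo_conversation():
    cookies = [{"name": "_U", "value": "<your cookie value>"}]
    conv = _Conversation(cookies=cookies, proxy=None)
    print(conv.struct["conversationId"])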
class _ChatHub:
"""
Chat API
"""
def __init__(self, conversation) -> None:
self.wss = None
self.request: _ChatHubRequest
self.loop: bool
self.task: asyncio.Task
print(conversation.struct)  # debug output: the conversation handshake result
self.request = _ChatHubRequest(
conversation_signature=conversation.struct["conversationSignature"],
client_id=conversation.struct["clientId"],
conversation_id=conversation.struct["conversationId"],
)
async def ask_stream(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
raw: bool = False,
options: dict = None,
) -> Generator[tuple, None, None]:
"""
Ask the bot a question and yield (final, response) tuples as they stream in
"""
# Close any previous websocket connection before reconnecting
if self.wss and not self.wss.closed:
await self.wss.close()
self.wss = await websockets.connect(
wss_link,
extra_headers=HEADERS,
max_size=None,
ssl=get_ssl_context()
)
await self._initial_handshake()
# Construct a ChatHub request
self.request.update(
prompt=prompt,
conversation_style=conversation_style,
options=options,
)
# Send request
await self.wss.send(_append_identifier(self.request.struct))
final = False
while not final:
objects = str(await self.wss.recv()).split(DELIMITER)
for obj in objects:
if not obj:  # skip empty fragments between delimiters
continue
response = json.loads(obj)
if response.get("type") != 2 and raw:
yield False, response
elif response.get("type") == 1 and response["arguments"][0].get(
"messages",
):
resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
0
]["body"][0].get("text")
yield False, resp_txt
elif response.get("type") == 2:
final = True
yield True, response
async def _initial_handshake(self) -> None:
await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
await self.wss.recv()
async def close(self) -> None:
"""
Close the connection
"""
if self.wss and not self.wss.closed:
await self.wss.close()
class NewbingChatbot:
"""
Combines everything to make it seamless
"""
def __init__(
self,
cookies,
proxy
) -> None:
if cookies is None:
cookies = {}
self.cookies = cookies
self.proxy = proxy
self.chat_hub: _ChatHub = _ChatHub(
_Conversation(self.cookies, self.proxy),
)
async def ask(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
options: dict = None,
) -> dict:
"""
Ask a question to the bot
"""
async for final, response in self.chat_hub.ask_stream(
prompt=prompt,
conversation_style=conversation_style,
wss_link=wss_link,
options=options,
):
if final:
return response
await self.chat_hub.wss.close()
return None
async def ask_stream(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
raw: bool = False,
options: dict = None,
) -> Generator[tuple, None, None]:
"""
Ask the bot a question and stream back (final, response) tuples
"""
async for response in self.chat_hub.ask_stream(
prompt=prompt,
conversation_style=conversation_style,
wss_link=wss_link,
raw=raw,
options=options,
):
yield response
async def close(self) -> None:
"""
Close the connection
"""
await self.chat_hub.close()
async def reset(self) -> None:
"""
Reset the conversation
"""
await self.close()
self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
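Taken together, a minimal end-to-end use of NewbingChatbot might look like the sketch below (cookie values are hypothetical, and the wss endpoint shown is the commonly used default hub; both may differ in your deployment):

import asyncio

async def demo():
    cookies = [{"name": "_U", "value": "<your cookie value>"}]
    bot = NewbingChatbot(cookies=cookies, proxy=None)
    async for final, response in bot.ask_stream(
        prompt="Hello, NewBing",
        wss_link="wss://sydney.bing.com/sydney/ChatHub",
        conversation_style="balanced",
    ):
        if not final:
            print(response)  # incremental text extracted from the adaptive card
    await bot.close()

asyncio.run(demo())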


@@ -0,0 +1,8 @@
BingImageCreator
certifi
httpx
prompt_toolkit
requests
rich
websockets
httpx[socks]

theme.py
@@ -137,6 +137,16 @@ advanced_css = """
 /* Set inline code background to light gray, with rounded corners and spacing. */
 .markdown-body code {
+    display: inline;
+    white-space: break-spaces;
+    border-radius: 6px;
+    margin: 0 2px 0 2px;
+    padding: .2em .4em .1em .4em;
+    background-color: rgba(13, 17, 23, 0.95);
+    color: #c9d1d9;
+}
+.dark .markdown-body code {
     display: inline;
     white-space: break-spaces;
     border-radius: 6px;
@@ -144,8 +154,19 @@ advanced_css = """
     padding: .2em .4em .1em .4em;
     background-color: rgba(175,184,193,0.2);
 }
 /* Style for code blocks: background color, inner/outer margins, rounded corners. */
 .markdown-body pre code {
+    display: block;
+    overflow: auto;
+    white-space: pre;
+    background-color: rgba(13, 17, 23, 0.95);
+    border-radius: 10px;
+    padding: 1em;
+    margin: 1em 2em 1em 0.5em;
+}
+.dark .markdown-body pre code {
     display: block;
     overflow: auto;
     white-space: pre;
@@ -160,72 +181,162 @@ advanced_css = """
 if CODE_HIGHLIGHT:
     advanced_css += """
-.hll { background-color: #ffffcc }
-.c { color: #3D7B7B; font-style: italic } /* Comment */
-.err { border: 1px solid #FF0000 } /* Error */
-.k { color: hsl(197, 94%, 51%); font-weight: bold } /* Keyword */
-.o { color: #666666 } /* Operator */
-.ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */
-.cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */
-.cp { color: #9C6500 } /* Comment.Preproc */
-.cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */
-.c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */
-.cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
-.gd { color: #A00000 } /* Generic.Deleted */
-.ge { font-style: italic } /* Generic.Emph */
-.gr { color: #E40000 } /* Generic.Error */
-.gh { color: #000080; font-weight: bold } /* Generic.Heading */
-.gi { color: #008400 } /* Generic.Inserted */
-.go { color: #717171 } /* Generic.Output */
-.gp { color: #000080; font-weight: bold } /* Generic.Prompt */
-.gs { font-weight: bold } /* Generic.Strong */
-.gu { color: #800080; font-weight: bold } /* Generic.Subheading */
-.gt { color: #a9dd00 } /* Generic.Traceback */
-.kc { color: #008000; font-weight: bold } /* Keyword.Constant */
-.kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
-.kn { color: #008000; font-weight: bold } /* Keyword.Namespace */
-.kp { color: #008000 } /* Keyword.Pseudo */
-.kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
-.kt { color: #B00040 } /* Keyword.Type */
-.m { color: #666666 } /* Literal.Number */
-.s { color: #BA2121 } /* Literal.String */
-.na { color: #687822 } /* Name.Attribute */
-.nb { color: #e5f8c3 } /* Name.Builtin */
-.nc { color: #ffad65; font-weight: bold } /* Name.Class */
-.no { color: #880000 } /* Name.Constant */
-.nd { color: #AA22FF } /* Name.Decorator */
-.ni { color: #717171; font-weight: bold } /* Name.Entity */
-.ne { color: #CB3F38; font-weight: bold } /* Name.Exception */
-.nf { color: #f9f978 } /* Name.Function */
-.nl { color: #767600 } /* Name.Label */
-.nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
-.nt { color: #008000; font-weight: bold } /* Name.Tag */
-.nv { color: #19177C } /* Name.Variable */
-.ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
-.w { color: #bbbbbb } /* Text.Whitespace */
-.mb { color: #666666 } /* Literal.Number.Bin */
-.mf { color: #666666 } /* Literal.Number.Float */
-.mh { color: #666666 } /* Literal.Number.Hex */
-.mi { color: #666666 } /* Literal.Number.Integer */
-.mo { color: #666666 } /* Literal.Number.Oct */
-.sa { color: #BA2121 } /* Literal.String.Affix */
-.sb { color: #BA2121 } /* Literal.String.Backtick */
-.sc { color: #BA2121 } /* Literal.String.Char */
-.dl { color: #BA2121 } /* Literal.String.Delimiter */
-.sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
-.s2 { color: #2bf840 } /* Literal.String.Double */
-.se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */
-.sh { color: #BA2121 } /* Literal.String.Heredoc */
-.si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */
-.sx { color: #008000 } /* Literal.String.Other */
-.sr { color: #A45A77 } /* Literal.String.Regex */
-.s1 { color: #BA2121 } /* Literal.String.Single */
-.ss { color: #19177C } /* Literal.String.Symbol */
-.bp { color: #008000 } /* Name.Builtin.Pseudo */
-.fm { color: #0000FF } /* Name.Function.Magic */
-.vc { color: #19177C } /* Name.Variable.Class */
-.vg { color: #19177C } /* Name.Variable.Global */
-.vi { color: #19177C } /* Name.Variable.Instance */
-.vm { color: #19177C } /* Name.Variable.Magic */
-.il { color: #666666 } /* Literal.Number.Integer.Long */
+.codehilite .hll { background-color: #6e7681 }
+.codehilite .c { color: #8b949e; font-style: italic } /* Comment */
+.codehilite .err { color: #f85149 } /* Error */
+.codehilite .esc { color: #c9d1d9 } /* Escape */
+.codehilite .g { color: #c9d1d9 } /* Generic */
+.codehilite .k { color: #ff7b72 } /* Keyword */
+.codehilite .l { color: #a5d6ff } /* Literal */
+.codehilite .n { color: #c9d1d9 } /* Name */
+.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */
+.codehilite .x { color: #c9d1d9 } /* Other */
+.codehilite .p { color: #c9d1d9 } /* Punctuation */
+.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */
+.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */
+.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */
+.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */
+.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */
+.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */
+.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */
+.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */
+.codehilite .gr { color: #ffa198 } /* Generic.Error */
+.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */
+.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */
+.codehilite .go { color: #8b949e } /* Generic.Output */
+.codehilite .gp { color: #8b949e } /* Generic.Prompt */
+.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */
+.codehilite .gu { color: #79c0ff } /* Generic.Subheading */
+.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */
+.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */
+.codehilite .kc { color: #79c0ff } /* Keyword.Constant */
+.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */
+.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */
+.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */
+.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */
+.codehilite .kt { color: #ff7b72 } /* Keyword.Type */
+.codehilite .ld { color: #79c0ff } /* Literal.Date */
+.codehilite .m { color: #a5d6ff } /* Literal.Number */
+.codehilite .s { color: #a5d6ff } /* Literal.String */
+.codehilite .na { color: #c9d1d9 } /* Name.Attribute */
+.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */
+.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */
+.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */
+.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */
+.codehilite .ni { color: #ffa657 } /* Name.Entity */
+.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */
+.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */
+.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */
+.codehilite .nn { color: #ff7b72 } /* Name.Namespace */
+.codehilite .nx { color: #c9d1d9 } /* Name.Other */
+.codehilite .py { color: #79c0ff } /* Name.Property */
+.codehilite .nt { color: #7ee787 } /* Name.Tag */
+.codehilite .nv { color: #79c0ff } /* Name.Variable */
+.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */
+.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */
+.codehilite .w { color: #6e7681 } /* Text.Whitespace */
+.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */
+.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */
+.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */
+.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */
+.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */
+.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */
+.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */
+.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */
+.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */
+.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */
+.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */
+.codehilite .se { color: #79c0ff } /* Literal.String.Escape */
+.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */
+.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */
+.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */
+.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */
+.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */
+.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */
+.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */
+.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */
+.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */
+.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */
+.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */
+.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */
+.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */
+.dark .codehilite .hll { background-color: #2C3B41 }
+.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */
+.dark .codehilite .err { color: #FF5370 } /* Error */
+.dark .codehilite .esc { color: #89DDFF } /* Escape */
+.dark .codehilite .g { color: #EEFFFF } /* Generic */
+.dark .codehilite .k { color: #BB80B3 } /* Keyword */
+.dark .codehilite .l { color: #C3E88D } /* Literal */
+.dark .codehilite .n { color: #EEFFFF } /* Name */
+.dark .codehilite .o { color: #89DDFF } /* Operator */
+.dark .codehilite .p { color: #89DDFF } /* Punctuation */
+.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */
+.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */
+.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */
+.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */
+.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */
+.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */
+.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */
+.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */
+.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */
+.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */
+.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */
+.dark .codehilite .go { color: #79d618 } /* Generic.Output */
+.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */
+.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */
+.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */
+.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */
+.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */
+.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */
+.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */
+.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */
+.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */
+.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */
+.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */
+.dark .codehilite .m { color: #F78C6C } /* Literal.Number */
+.dark .codehilite .s { color: #C3E88D } /* Literal.String */
+.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */
+.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */
+.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */
+.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */
+.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */
+.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */
+.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */
+.dark .codehilite .nf { color: #82AAFF } /* Name.Function */
+.dark .codehilite .nl { color: #82AAFF } /* Name.Label */
+.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */
+.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */
+.dark .codehilite .py { color: #FFCB6B } /* Name.Property */
+.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */
+.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */
+.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */
+.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */
+.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */
+.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */
+.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */
+.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */
+.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */
+.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */
+.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */
+.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */
+.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */
+.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */
+.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */
+.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */
+.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */
+.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */
+.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */
+.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */
+.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */
+.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */
+.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */
+.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */
+.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */
+.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */
+.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */
+.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
+.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
+.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */
 """


@@ -5,7 +5,20 @@ import inspect
 import re
 from latex2mathml.converter import convert as tex2mathml
 from functools import wraps, lru_cache
-############################### 插件输入输出接驳区 #######################################
+"""
+========================================================================
+    Part One
+    Plugin input/output docking area
+    - ChatBotWithCookies:   a chatbot class that carries cookies, the basis for more powerful features
+    - ArgsGeneralWrapper:   decorator that regroups input arguments, changing their order and structure
+    - update_ui:            refresh the UI with: yield from update_ui(chatbot, history)
+    - CatchException:       show every problem raised inside a plugin on the UI
+    - HotReload:            hot-reload plugins without restarting
+    - trimmed_format_exc:   print tracebacks with absolute paths hidden for safety
+========================================================================
+"""
+
 class ChatBotWithCookies(list):
     def __init__(self, cookie):
         self._cookies = cookie
@@ -20,6 +33,7 @@ class ChatBotWithCookies(list):
     def get_cookies(self):
         return self._cookies
+
 def ArgsGeneralWrapper(f):
     """
     Decorator that regroups the input arguments, changing their order and structure.
@@ -47,6 +61,7 @@ def ArgsGeneralWrapper(f):
         yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
     return decorated
+
 def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     """
     Refresh the user interface.
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
     yield chatbot.get_cookies(), chatbot, history, msg
+
+def trimmed_format_exc():
+    # format the current traceback, replacing the working directory with "."
+    # so that absolute paths never leak into the chat window
+    import os, traceback
+    tb = traceback.format_exc()
+    current_path = os.getcwd()
+    replace_path = "."
+    return tb.replace(current_path, replace_path)
+
 def CatchException(f):
     """
     Decorator that catches any exception raised in f, wraps it into a generator, and shows it in the chat.
     """
     @wraps(f)
     def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
         try:
@@ -66,7 +89,7 @@ def CatchException(f):
             from check_proxy import check_proxy
             from toolbox import get_conf
             proxies, = get_conf('proxies')
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             if chatbot is None or len(chatbot) == 0:
                 chatbot = [["插件调度异常", "异常原因"]]
             chatbot[-1] = (chatbot[-1][0],
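For illustration, here is what the trimmed_format_exc substitution does to a traceback line (paths are hypothetical):

import os
tb_line = 'File "%s/toolbox.py", line 70, in decorated' % os.getcwd()
print(tb_line.replace(os.getcwd(), "."))  # -> File "./toolbox.py", line 70, in decorated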
@@ -93,7 +116,23 @@ def HotReload(f):
     return decorated
-####################################### 其他小工具 #####################################
+"""
+========================================================================
+    Part Two
+    Other utilities:
+    - write_results_to_file:    write results into a markdown file
+    - regular_txt_to_markdown:  convert plain text into Markdown-formatted text
+    - report_execption:         append a simple unexpected-error message to the chatbot
+    - text_divide_paragraph:    split text on paragraph separators into HTML with paragraph tags
+    - markdown_convertion:      turn markdown into good-looking html, combining several methods
+    - format_io:                take over gradio's default markdown handling
+    - on_file_uploaded:         handle file uploads (auto-extract archives)
+    - on_report_generated:      project generated reports into the file-upload area
+    - clip_history:             automatically truncate the conversation history when it grows too long
+    - get_conf:                 read settings
+    - select_api_key:           pick a usable api-key for the current model class
+========================================================================
+"""
+
 def get_reduce_token_percent(text):
     """
@@ -113,7 +152,6 @@ def get_reduce_token_percent(text):
     return 0.5, '不详'
-
 def write_results_to_file(history, file_name=None):
     """
     Write the conversation history to a file in Markdown format; if no file name is given, one is generated from the current time.
@@ -369,6 +407,9 @@ def find_recent_files(directory):
 def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+    """
+    Callback invoked when files are uploaded.
+    """
     if len(files) == 0:
         return chatbot, txt
     import shutil
@@ -388,8 +429,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
         shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
         err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                    dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
-    moved_files = [fp for fp in glob.glob(
-        'private_upload/**/*', recursive=True)]
+    moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
     if "底部输入区" in checkboxes:
         txt = ""
         txt2 = f'private_upload/{time_tag}'
@@ -508,7 +548,7 @@ def clear_line_break(txt):
 class DummyWith():
     """
     This code defines an empty context manager named DummyWith,
-    whose job is... er... use, i.e. to stand in for other context managers while keeping the code structure unchanged.
+    whose job is... er... precisely to do nothing, i.e. to stand in for other context managers while keeping the code structure unchanged.
     A context manager is a Python object meant to be used together with the with statement,
     to make sure resources are properly initialized and cleaned up while a code block runs.
     A context manager must implement two methods, __enter__() and __exit__().
@@ -522,6 +562,9 @@ class DummyWith():
         return
 def run_gradio_in_subpath(demo, auth, port, custom_path):
+    """
+    Change gradio's serving address to the given secondary path.
+    """
     def is_path_legal(path: str)->bool:
         '''
         check path for sub url
@@ -551,3 +594,52 @@ def run_gradio_in_subpath(demo, auth, port, custom_path):
     return {"message": f"Gradio is running at: {custom_path}"}
     app = gr.mount_gradio_app(app, demo, path=custom_path)
     uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth
+
+def clip_history(inputs, history, tokenizer, max_token_limit):
+    """
+    Reduce the length of the history by clipping:
+    this function repeatedly finds the longest entry and clips it,
+    little by little, until the token count of the history drops below the threshold.
+    """
+    import numpy as np
+    from request_llm.bridge_all import model_info
+    def get_token_num(txt):
+        return len(tokenizer.encode(txt, disallowed_special=()))
+    input_token_num = get_token_num(inputs)
+    if input_token_num < max_token_limit * 3 / 4:
+        # When the input takes up less than 3/4 of the limit, clip the history:
+        # 1. reserve room for the input
+        max_token_limit = max_token_limit - input_token_num
+        # 2. reserve room for the output
+        max_token_limit = max_token_limit - 128
+        # 3. if too little room is left, just clear the history
+        if max_token_limit < 128:
+            history = []
+            return history
+    else:
+        # When the input takes up more than 3/4 of the limit, clear the history directly
+        history = []
+        return history
+    everything = ['']
+    everything.extend(history)
+    n_token = get_token_num('\n'.join(everything))
+    everything_token = [get_token_num(e) for e in everything]
+    # granularity of each clipping step
+    delta = max(everything_token) // 16
+    while n_token > max_token_limit:
+        where = np.argmax(everything_token)
+        encoded = tokenizer.encode(everything[where], disallowed_special=())
+        clipped_encoded = encoded[:len(encoded)-delta]
+        everything[where] = tokenizer.decode(clipped_encoded)[:-1]    # -1 to drop the last char, which may be malformed
+        everything_token[where] = get_token_num(everything[where])
+        n_token = get_token_num('\n'.join(everything))
+    history = everything[1:]
+    return history
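To see the clipping strategy concretely, here is a toy run with a character-level stand-in tokenizer (a sketch only: the real tokenizer is supplied by the active LLM bridge, and clip_history must run inside this repo because it imports request_llm.bridge_all):

class CharTokenizer:
    # hypothetical 1-char-per-token tokenizer, for illustration
    def encode(self, txt, disallowed_special=()):
        return [ord(c) for c in txt]
    def decode(self, tokens):
        return "".join(chr(t) for t in tokens)

history = ["short entry", "x" * 500]
clipped = clip_history("user input", history, CharTokenizer(), max_token_limit=400)
# the budget is 400 - len("user input") - 128 = 262 tokens, so the 500-char
# entry is clipped in steps of 500 // 16 tokens until the joined history fits
print([len(h) for h in clipped])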


@@ -1,5 +1,5 @@
 {
-  "version": 3.2,
+  "version": 3.3,
   "show_feature": true,
-  "new_feature": "保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网Google回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D国内,可支持gpt4"
+  "new_feature": "支持NewBing <-> Markdown翻译功能支持直接输入Readme文件网址 <-> 保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网Google回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D国内,可支持gpt4"
 }