Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 14:36:48 +00:00

Compare commits: 121 commits
version3.6 ... production
Commits (SHA1):

ffb5655a23, c0a36e37be, 2f2b869efd, 2f148bada0, 916b2e8aa7, 0cb7dd5280, 892ccb14c7, 21bccf69d2, 7bac8f4bd3, d0c2923ab1,
8a6e96c369, 49f3fcf2c0, cb92ccb409, 2b96a60b76, ec60a85cac, 647d9f88db, b0c627909a, 102bf2f1eb, 26291b33d1, 4f04d810b7,
6d2f126253, e5b296d221, 7933675c12, 692ff4b59c, 4d8b535c79, 3c03f240ba, 9bfc3400f9, 95504f0bb7, 0cd3274d04, 2cef81abbe,
6f9bc5d206, 94ab41d3c0, da376068e1, 552219fd5a, 4985986243, d99b443b4c, 2aab6cb708, 1134723c80, 6126024f2c, ef12d4f754,
e8dd3c02f2, e7f4c804eb, 3d6ee5c755, d8958da8cd, a64d550045, d876a81e78, 6723eb77b2, 86891e3535, cc4df91900, 2f805db35d,
ecaf2bdf45, 89707a1c58, 22e00eb1c5, 900fad69cf, 55d807c116, 9a0ed248ca, 88802b0f72, 5720ac127c, f44642d9d2, 29775dedd8,
6417ca9dde, f417c1ce6d, e4c057f5a3, f9e9b6f4ec, c141e767c6, 17f361d63b, 8780fe29f1, d57bb8afbe, d39945c415, 688df6aa24,
d539ad809e, b24fef8a61, 8c840f3d4c, 577d3d566b, fd92766083, 02b18ff67a, 2d2e02040d, aee57364dd, 7ca37c4831, 6896b10be9,
5b06a6cae5, 5d5695cd9a, fd72894c90, c1abec2e4b, 9916f59753, e6716ccf63, e533ed6d12, 0ec5a8e5f8, 4fefbb80ac, 1253a2b0a6,
71537b570f, 203d5f7296, 7754215dad, b470af7c7b, f8c5f9045d, c7a0a5f207, 79a0b687b8, cdca36f5d2, 6ed88fe848, ea4e03b1d8,
aa341fd268, 70766cdd44, 97f33b8bea, 7280ea17fd, 535a901991, 56f42397b1, aa7c47e821, 62fb2794ec, 3121dee04a, cad541d8d7,
9023aa6732, 2d37b74a0c, fdc350cfe8, 58c6d45d84, 4cc6ff65ac, 8632413011, 46e279b5dd, 25cf86dae6, 19e202ddfd, 65dab46a28,
ecb473bc8b
README.md (164)
@@ -1,42 +1,70 @@

> **Note**
> **Caution**
>
> 2023.11.12: Some dependency packages are not yet compatible with Python 3.12; Python 3.11 is recommended.
>
> 2023.11.7: When installing dependencies, please use the versions **pinned** in `requirements.txt`, with the command `pip install -r requirements.txt`. This project is open source and free of charge; we have recently found people disregarding the open-source license and using this project to charge money illegitimately — please stay alert and beware of scams.

<br>

<div align=center>
<h1 align="center">
<img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)
</h1>

[![Github][Github-image]][Github-url]
[![License][License-image]][License-url]
[![Releases][Releases-image]][Releases-url]
[![Installation][Installation-image]][Installation-url]
[![Wiki][Wiki-image]][Wiki-url]
[![PR][PRs-image]][PRs-url]

[Github-image]: https://img.shields.io/badge/github-12100E.svg?style=flat-square
[License-image]: https://img.shields.io/github/license/binary-husky/gpt_academic?label=License&style=flat-square&color=orange
[Releases-image]: https://img.shields.io/github/release/binary-husky/gpt_academic?label=Release&style=flat-square&color=blue
[Installation-image]: https://img.shields.io/badge/dynamic/json?color=blue&url=https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version&query=$.version&label=Installation&style=flat-square
[Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
[PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square

[Github-url]: https://github.com/binary-husky/gpt_academic
[License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
[Releases-url]: https://github.com/binary-husky/gpt_academic/releases
[Installation-url]: https://github.com/binary-husky/gpt_academic#installation
[Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
[PRs-url]: https://github.com/binary-husky/gpt_academic/pulls

# <div align=center><img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)</div>
</div>
<br>

**If you like this project, please give it a Star; if you have invented a handy shortcut or plugin, pull requests are welcome!**

If you like this project, please give it a Star. We also have a README in [English|](docs/README.English.md)[日本語|](docs/README.Japanese.md)[한국어|](docs/README.Korean.md)[Русский|](docs/README.Russian.md)[Français](docs/README.French.md) translated by this project itself.
To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
If you like this project, please give it a Star.
Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
<br>

> **Note**
>
> 1. Note that only plugins (buttons) with a **highlighted** label support reading files, and some plugins live in the **drop-down menu** of the plugin area. In addition, we welcome and handle PRs for new plugins with **top priority**.
>
> 2. The function of every file in this project is documented in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告). As versions iterate, you can also regenerate the project's self-analysis report at any time by clicking the relevant function plugin to call GPT. FAQ: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
> 2. The function of every file in this project is documented in detail in the [self-analysis report](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告) `self_analysis.md`. As versions iterate, you can also regenerate the project's self-analysis report at any time by clicking the relevant function plugin to call GPT. For FAQs, see the wiki.
> [Standard installation](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) | [Wiki](https://github.com/binary-husky/gpt_academic/wiki)
>
> 3. This project is compatible with, and encourages trying, Chinese large language models such as ChatGLM. Multiple api-keys can coexist: in the config file, write e.g. `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To switch `API_KEY` temporarily, type the temporary `API_KEY` in the input area and press Enter to submit; it takes effect immediately.
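
The multi-key mechanics above can be sketched in a few lines. This is a minimal illustration using only the standard library — the helper name `pick_api_key` and the prefix-based selection rules are hypothetical, chosen only to mirror the comma-separated `API_KEY` format described in the note; the project's own key handling may differ:

```python
# Hypothetical sketch: split the comma-separated API_KEY string and pick a key
# usable for the requested model (not the project's actual implementation).
API_KEY = "openai-key1,openai-key2,azure-key3,api2d-key4"

def pick_api_key(api_key_cfg: str, llm_model: str) -> str:
    keys = [k.strip() for k in api_key_cfg.split(",") if k.strip()]
    # Assumption: api2d-* models take keys prefixed "api2d-", azure models take
    # keys prefixed "azure-", everything else falls back to the remaining keys.
    if llm_model.startswith("api2d-"):
        candidates = [k for k in keys if k.startswith("api2d-")]
    elif llm_model.startswith("azure-"):
        candidates = [k for k in keys if k.startswith("azure-")]
    else:
        candidates = [k for k in keys if not k.startswith(("api2d-", "azure-"))]
    if not candidates:
        raise ValueError(f"no API key available for model {llm_model!r}")
    return candidates[0]

print(pick_api_key(API_KEY, "gpt-3.5-turbo"))  # -> openai-key1
```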
<br><br>

<div align="center">

Feature (⭐ = recently added) | Description
--- | ---
⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFLYTEK [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, Tongyi Qianwen [Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFLYTEK [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu API](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
Polishing, translation, code explanation | One-click polishing, translation, finding grammatical errors in papers, explaining code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Custom shortcut keys are supported
Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] One-click profiling of a Python/C/C++/Java/Lua/... project tree, or [self-profiling](https://www.bilibili.com/video/BV1cj411A7VW)
Paper reading, paper [translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] One-click interpretation of a full latex/pdf paper plus abstract generation
Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or polishing of a latex paper
Batch comment generation | [Plugin] One-click batch generation of function comments
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the 5 languages above? It is this plugin's work.
Chat analysis report generation | [Plugin] Automatically generates a summary report after running
[PDF paper full-text translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extracts the title & abstract of a PDF paper and translates the full text (multithreaded)
[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
@@ -48,22 +76,22 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
Formula/image/table display | Shows both the [tex form and the rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) of formulas; supports formula and code highlighting
⭐AutoGen multi-agent plugin | [Plugin] Explore the emergent intelligence of multiple agents with Microsoft AutoGen!
Dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
[Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
⭐ChatGLM2 fine-tuned models | Supports loading ChatGLM2 fine-tuned models; provides a ChatGLM2 fine-tuning helper plugin
More LLMs, with [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Added the Newbing interface (New Bing) and Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) supporting [LLaMA](https://github.com/facebookresearch/llama) and [PanGu-α](https://openi.org.cn/pangu/)
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Call all of this project's function plugins directly from Python, without the GUI (in development)
⭐Void Terminal plugin | [Plugin] Dispatch the other plugins of this project directly in natural language
More new features (image generation, etc.) …… | See the end of this document ……
</div>

- New UI (edit the LAYOUT option in `config.py` to switch between a "left-right layout" and a "top-bottom layout")
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/d81137c3-affd-4cd1-bb5e-b15610389762" width="700" >
<img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
</div>

- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, liberating the clipboard
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
</div>

@@ -73,12 +101,12 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
</div>

- If the output contains formulas, they are shown in both tex form and rendered form at the same time, for easy copying and reading
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>

- Too lazy to read the project code? Just feed the whole project to ChatGPT
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>

@@ -88,40 +116,44 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>

<br><br>

# Installation
### Method I: Run directly (Windows, Linux or MacOS)

1. Download the project
```sh
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```

2. Configure API_KEY

In `config.py`, configure the API KEY and other settings; [click here for special network environment setup](https://github.com/binary-husky/gpt_academic/issues/1). [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).

2. Configure API_KEY and other variables

In `config.py`, configure the API KEY and other variables. [Special network environment setup](https://github.com/binary-husky/gpt_academic/issues/1), [Wiki: project configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).

「 The program will first check whether a private config file named `config_private.py` exists, and use its values to override the same-named values in `config.py`. If you understand this reading logic, we strongly recommend creating a new config file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py` (copying only the entries you have modified). 」

「 The program will first check whether a private config file named `config_private.py` exists, and use its values to override the same-named values in `config.py`. If you understand this reading logic, we strongly recommend creating a new config file named `config_private.py` in the same path as `config.py` and configuring the project through `config_private.py`, so that updates or other users cannot easily see your private configuration. 」

「 The project can also be configured via `environment variables`; see `docker-compose.yml` or our [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) for the format. Configuration read priority: `environment variables` > `config_private.py` > `config.py`. 」
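
As a minimal sketch of the read-priority rule just described — this is an illustration only, not the project's actual `toolbox` implementation; the function name `read_single_conf` and its fallback loop are assumptions that simply mirror the documented priority `environment variables` > `config_private.py` > `config.py`:

```python
import importlib
import os

def read_single_conf(name: str, default=None):
    """Illustrative only: resolve one config entry with the documented priority
    (environment variable > config_private.py > config.py)."""
    if name in os.environ:                             # 1) an environment variable wins
        return os.environ[name]
    for module_name in ("config_private", "config"):  # 2) private file, 3) tracked default
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue                                   # file absent; fall through
        if hasattr(module, name):
            return getattr(module, name)
    return default

# Example: an API_KEY set in the environment shadows both config files.
# os.environ["API_KEY"] = "sk-temporary"; print(read_single_conf("API_KEY"))
```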

3. Install dependencies
```sh
# (Option I: if familiar with python; python 3.9 ~ 3.11 recommended) Note: use the official pip source or the Aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: use Anaconda) The steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11  # create the anaconda environment
conda activate gptac_venv               # activate the anaconda environment
python -m pip install -r requirements.txt  # this step is the same as the pip installation
```

<details><summary>Click to expand if you need support for Tsinghua ChatGLM2 / Fudan MOSS / RWKV as backends</summary>
<p>

[Optional] To support Tsinghua ChatGLM2 / Fudan MOSS as backends, additional dependencies must be installed (prerequisites: familiar with Python + have used Pytorch + a sufficiently powerful machine):

```sh
# [Optional step I] Support Tsinghua ChatGLM2. Note: if you hit the "Call ChatGLM fail 不能正常加载ChatGLM的参数" error, see the following: 1: the installation above defaults to the torch+cpu build; to use cuda, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the machine is not powerful enough, change the model precision in request_llm/bridge_chatglm.py, replacing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt

@@ -135,6 +167,14 @@ git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss #

# [Optional step IV] Make sure AVAIL_LLM_MODELS in config.py includes the desired models; all currently supported models are listed below (the jittorllms series currently only supports the docker solution):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]

# [Optional step V] Support INT8/INT4 quantization of local models (the models referred to here are not pre-quantized versions; currently deepseek-coder is supported, and more quantization options will be added after testing)
pip install bitsandbytes
# windows users need the bitsandbytes-windows-webui below to install bitsandbytes
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/accelerate.git
pip install peft
```

</p>
@@ -143,62 +183,64 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-

4. Run
```sh
python main.py
```

### Method II: Use Docker

0. Deploy the project's full capability (a large image that includes cuda and latex; not recommended if your bandwidth is low or your disk is small)
0. Deploy the project's full capability (a large image that includes cuda and latex; if your bandwidth is low or your disk is small, this method is not recommended for deploying the full project)
[build-with-all-capacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)

``` sh
# Edit docker-compose.yml: keep scheme 0 and delete the other schemes. Then run:
docker-compose up
```

1. ChatGPT + Wenxin Yiyan + Spark and other online models only (recommended for most users)
[build-without-local-llms](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
[build-with-latex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
[build-with-audio-assistant](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

``` sh
# Edit docker-compose.yml: keep scheme 1 and delete the other schemes. Then run:
docker-compose up
```

P.S. If you need the Latex-dependent plugin features, see the Wiki. You can also use scheme 4 or scheme 0 directly to get the Latex capability.

2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + Tongyi Qianwen (requires familiarity with the [Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian) runtime)
[build-with-chatglm](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

``` sh
# Edit docker-compose.yml: keep scheme 2 and delete the other schemes. Then run:
docker-compose up
```

### Method III: Other deployment options
1. **One-click run script for Windows**.
Windows users who are completely unfamiliar with the python environment can download the one-click run script published on the [Release](https://github.com/binary-husky/gpt_academic/releases) page to install the version without local models. The script is contributed by [oobabooga](https://github.com/oobabooga/one-click-installers).

2. Use third-party APIs, Azure, Wenxin Yiyan, Spark, etc.: see the [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)

3. Pitfall-avoidance guide for remote deployment on cloud servers.
See the [cloud server remote deployment wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

4. Some newer deployment platforms or methods
4. Deploying on other platforms & deploying under a secondary URL
    - Use Sealos for [one-click deployment](https://github.com/binary-husky/gpt_academic/issues/993).
    - Use WSL2 (Windows Subsystem for Linux). See the [deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
    - How to run under a subpath (such as `http://localhost/subpath`): see the [FastAPI instructions](docs/WithFastapi.md)

<br><br>

# Advanced Usage
### I: Custom convenience buttons (academic shortcuts)
Open `core_functional.py` with any text editor, add an entry as follows, then restart the program. (If the button already exists, it can be edited directly — both the prefix and the suffix support hot modification — and the change takes effect without restarting.)
For example:

```python
"超级英译中": {
    # Prefix: prepended to your input, e.g. used to describe your request, such as translation, code explanation, polishing, etc.
    "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",

@@ -207,6 +249,7 @@ docker-compose up

    "Suffix": "",
},
```

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>

@@ -216,6 +259,7 @@ docker-compose up
Writing and debugging plugins for this project is easy: as long as you have some basic python knowledge, you can implement your own plugin by imitating the templates we provide.
For details, see the [function plugin guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
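
To give a feel for the template style, here is a minimal, hypothetical plugin outline. The decorator and the UI-refresh call mirror code that appears elsewhere in this comparison (`@CatchException`, `yield from update_ui(chatbot=chatbot, history=history)`), and the argument list mirrors the plugin signatures visible in the diff; the plugin name and body are invented for illustration and are not part of the repository:

```python
# Hypothetical plugin outline, modeled on signatures that appear in this comparison.
from toolbox import CatchException, update_ui

@CatchException
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # txt is the text currently in the input area; chatbot and history carry the UI state.
    chatbot.append((txt, "This demo plugin simply echoes the input."))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the interface
```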

<br><br>

# Updates
### I: Dynamics
@@ -283,6 +327,7 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h

### II: Versions:

- version 3.70 (todo): optimize the AutoGen plugin theme and design a series of derivative plugins
- version 3.60: introduce AutoGen as the cornerstone of the next generation of plugins
- version 3.57: support GLM3, Spark v3 and Wenxin Yiyan v4; fix concurrency bugs in local models
@@ -303,7 +348,7 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
- version 3.0: support for chatglm and other small llms
- version 2.6: restructured the plugin architecture, improved interactivity, added more plugins
- version 2.5: self-updating; fixed the text-too-long / token-overflow problem when summarizing large source-code projects
- version 2.4: (1) added PDF full-text translation; (2) added switching the position of the input area; (3) added a vertical layout option; (4) optimized multithreaded function plugins.
- version 2.4: added PDF full-text translation; added switching the position of the input area
- version 2.3: enhanced multithreaded interactivity
- version 2.2: function plugins support hot reloading
- version 2.1: collapsible layout
@@ -314,7 +359,7 @@ GPT Academic开发者QQ群:`610599535`

- Known issues
    - Some browser translation extensions interfere with this software's frontend
    - Official Gradio currently has many compatibility issues; please **make sure to install Gradio via `requirements.txt`**

### III: Themes
The theme can be changed by modifying the `THEME` option (config.py)
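
For instance — the value below is only an illustration; check the comments in `config.py` for the theme names your version actually ships:

```python
# In config.py (illustrative value; the available theme names are listed in config.py itself)
THEME = "Default"
```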

@@ -325,6 +370,7 @@ GPT Academic开发者QQ群:`610599535`

1. `master` branch: main branch, stable version
2. `frontier` branch: development branch, test version
3. How to connect other large models: [connect other large models](request_llms/README.md)

### V: References and learning
@@ -5,7 +5,6 @@ def check_proxy(proxies):
    try:
        response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
        data = response.json()
        # print(f'查询代理的地理位置,返回的结果是{data}')
        if 'country_name' in data:
            country = data['country_name']
            result = f"代理配置 {proxies_https}, 代理所在地:{country}"
@@ -47,8 +46,8 @@ def backup_and_download(current_version, remote_version):
    os.makedirs(new_version_dir)
    shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
    proxies = get_conf('proxies')
    r = requests.get(
        'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
    try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
    except: r = requests.get('https://public.gpt-academic.top/publish/master.zip', proxies=proxies, stream=True)
    zip_file_path = backup_dir+'/master.zip'
    with open(zip_file_path, 'wb+') as f:
        f.write(r.content)
@@ -111,11 +110,10 @@ def auto_update(raise_error=False):
    try:
        from toolbox import get_conf
        import requests
        import time
        import json
        proxies = get_conf('proxies')
        response = requests.get(
            "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
        try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
        except: response = requests.get("https://public.gpt-academic.top/publish/version", proxies=proxies, timeout=5)
        remote_json_data = json.loads(response.text)
        remote_version = remote_json_data['version']
        if remote_json_data["show_feature"]:
@@ -127,8 +125,7 @@ def auto_update(raise_error=False):
        current_version = json.loads(current_version)['version']
        if (remote_version - current_version) >= 0.01-1e-5:
            from colorful import print亮黄
            print亮黄(
                f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
            print亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
            print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
            user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
            if user_instruction in ['Y', 'y']:
@@ -154,7 +151,7 @@ def auto_update(raise_error=False):
            print(msg)

def warm_up_modules():
    print('正在执行一些模块的预热...')
    print('正在执行一些模块的预热 ...')
    from toolbox import ProxyNetworkActivate
    from request_llms.bridge_all import model_info
    with ProxyNetworkActivate("Warmup_Modules"):
@@ -162,7 +159,15 @@ def warm_up_modules():
        enc.encode("模块预热", disallowed_special=())
        enc = model_info["gpt-4"]['tokenizer']
        enc.encode("模块预热", disallowed_special=())

def warm_up_vectordb():
    print('正在执行一些模块的预热 ...')
    from toolbox import ProxyNetworkActivate
    with ProxyNetworkActivate("Warmup_Modules"):
        import nltk
        with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")


if __name__ == '__main__':
    import os
    os.environ['no_proxy'] = '*'  # avoid unexpected pollution from proxy networks
config.py (49)
@@ -15,13 +15,13 @@ API_KEY = "此处填API密钥" # 可同时填写多个API-KEY,用英文逗
USE_PROXY = False
if USE_PROXY:
    """
    代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
    填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
    <配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1
    [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
    [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
    [地址] 填localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
    [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
    """
    # Proxy network address: open your proxy software to check the protocol (socks5h / http), address (localhost) and port (11284)
    proxies = {
        # [protocol]:// [address] :[port]
        "http": "socks5h://localhost:11284",  # another example: "http": "http://127.0.0.1:7890",
@@ -87,12 +87,12 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']

# Model selection (note: LLM_MODEL is the model selected by default; it *must* be included in the AVAIL_LLM_MODELS list)
LLM_MODEL = "gpt-3.5-turbo"  # options ↓↓↓
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview",
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
                    "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                    "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
                    "chatglm3", "moss", "newbing", "claude-2"]
# P.S. other available models also include ["zhipuai", "qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
                    "chatglm3", "moss", "claude-2"]
# P.S. other available models also include ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
#                                           "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]

@@ -100,6 +100,12 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview",
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"


# Select the local model variant (only takes effect when AVAIL_LLM_MODELS contains the corresponding local model)
# If you choose a Qwen-series model, specify the concrete model in QWEN_MODEL_SELECTION below
# A concrete model path is also acceptable
QWEN_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"


# Baidu Qianfan (LLM_MODEL="qianfan")
BAIDU_CLOUD_API_KEY = ''
BAIDU_CLOUD_SECRET_KEY = ''
@@ -114,7 +120,6 @@ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b
LOCAL_MODEL_DEVICE = "cpu"  # options: "cuda"
LOCAL_MODEL_QUANT = "FP16"  # default "FP16"; "INT4" enables the INT4 quantized version, "INT8" the INT8 quantized version

# Number of parallel gradio threads (no need to modify)
CONCURRENT_COUNT = 100

@@ -232,6 +237,10 @@ WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
BLOCK_INVALID_APIKEY = False


# Enable plugin hot reloading
PLUGIN_HOT_RELOAD = False


# Maximum number of custom buttons
NUM_CUSTOM_BASIC_BTN = 4

@@ -271,11 +280,31 @@ NUM_CUSTOM_BASIC_BTN = 4
│   ├── BAIDU_CLOUD_API_KEY
│   └── BAIDU_CLOUD_SECRET_KEY
│
├── "newbing" Newbing接口不再稳定,不推荐使用
├── "zhipuai" 智谱AI大模型chatglm_turbo
│   ├── ZHIPUAI_API_KEY
│   └── ZHIPUAI_MODEL
│
└── "newbing" Newbing接口不再稳定,不推荐使用
    ├── NEWBING_STYLE
    └── NEWBING_COOKIES


本地大模型示意图
│
├── "chatglm3"
├── "chatglm"
├── "chatglm_onnx"
├── "chatglmft"
├── "internlm"
├── "moss"
├── "jittorllms_pangualpha"
├── "jittorllms_llama"
├── "deepseekcoder"
├── "qwen"
├── RWKV的支持见Wiki
└── "llama2"


用户图形界面布局依赖关系示意图
│
├── CHATBOT_HEIGHT 对话窗的高度
@@ -286,7 +315,7 @@ NUM_CUSTOM_BASIC_BTN = 4
├── THEME 色彩主题
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
├── ADD_WAIFU 加一个live2d装饰
├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性


插件在线服务配置依赖关系示意图
@@ -298,7 +327,7 @@ NUM_CUSTOM_BASIC_BTN = 4
│   ├── ALIYUN_ACCESSKEY
│   └── ALIYUN_SECRET
│
├── PDF文档精准解析
│   └── GROBID_URLS
└── PDF文档精准解析
    └── GROBID_URLS

"""
@@ -354,7 +354,7 @@ def get_crazy_functions():
        print('Load function plugin failed')

    try:
        from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3
        from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
        function_plugins.update({
            "图片生成_DALLE2 (先切换模型到openai或api2d)": {
                "Group": "对话",
@@ -372,11 +372,21 @@ def get_crazy_functions():
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": True,  # invoke the advanced-argument input area when called (default False)
                "ArgsReminder": "在这里输入分辨率, 如1024x1024(默认),支持 1024x1024, 1792x1024, 1024x1792。如需生成高清图像,请输入 1024x1024-HD, 1792x1024-HD, 1024x1792-HD。",  # hint shown in the advanced-argument input area
                "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」",  # hint shown in the advanced-argument input area
                "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
                "Function": HotReload(图片生成_DALLE3)
            },
        })
        function_plugins.update({
            "图片修改_DALLE2 (先切换模型到openai或api2d)": {
                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": False,  # invoke the advanced-argument input area when called (default False)
                # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
                "Function": HotReload(图片修改_DALLE2)
            },
        })
    except:
        print(trimmed_format_exc())
        print('Load function plugin failed')
@@ -430,7 +440,7 @@ def get_crazy_functions():
        print('Load function plugin failed')

    try:
        from crazy_functions.Langchain知识库 import 知识库问答
        from crazy_functions.知识库问答 import 知识库文件注入
        function_plugins.update({
            "构建知识库(先上传文件素材,再运行此插件)": {
                "Group": "对话",
@@ -438,7 +448,7 @@ def get_crazy_functions():
                "AsButton": False,
                "AdvancedArgs": True,
                "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
                "Function": HotReload(知识库问答)
                "Function": HotReload(知识库文件注入)
            }
        })
    except:
@@ -446,9 +456,9 @@ def get_crazy_functions():
        print('Load function plugin failed')

    try:
        from crazy_functions.Langchain知识库 import 读取知识库作答
        from crazy_functions.知识库问答 import 读取知识库作答
        function_plugins.update({
            "知识库问答(构建知识库后,再运行此插件)": {
            "知识库文件注入(构建知识库后,再运行此插件)": {
                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
@@ -489,7 +499,7 @@ def get_crazy_functions():
        })
        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
        function_plugins.update({
            "Arixv论文精细翻译(输入arxivID)[需Latex]": {
            "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
                "Group": "学术",
                "Color": "stop",
                "AsButton": False,
@@ -580,6 +590,20 @@ def get_crazy_functions():
        print(trimmed_format_exc())
        print('Load function plugin failed')

    # try:
    #     from crazy_functions.互动小游戏 import 随机小游戏
    #     function_plugins.update({
    #         "随机小游戏": {
    #             "Group": "智能体",
    #             "Color": "stop",
    #             "AsButton": True,
    #             "Function": HotReload(随机小游戏)
    #         }
    #     })
    # except:
    #     print(trimmed_format_exc())
    #     print('Load function plugin failed')

    # try:
    #     from crazy_functions.chatglm微调工具 import 微调数据集生成
    #     function_plugins.update({

@@ -73,6 +73,7 @@ def move_project(project_folder, arxiv_id=None):

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder,'*'))
    items = [item for item in items if os.path.basename(item)!='__MACOSX']
    if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

@@ -87,6 +88,9 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            target_file_compare = pj(translation_dir, 'comparison.pdf')
            if os.path.exists(target_file_compare):
                promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
            return target_file
    return False
def is_float(s):
@@ -214,7 +218,6 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
    # <-------------- we are done ------------->
    return success


# ========================================= Plugin main program 2 =====================================================

@CatchException
@@ -1,4 +1,4 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_log_folder
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
import threading
import os
import logging
@@ -92,7 +92,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
            # [handling choice] try to compute a ratio and keep as much of the text as possible
            from toolbox import get_reduce_token_percent
            p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
            MAX_TOKEN = 4096
            MAX_TOKEN = get_max_token(llm_kwargs)
            EXCEED_ALLO = 512 + 512 * exceeded_cnt
            inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
            mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
@@ -224,7 +224,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            # [handling choice] try to compute a ratio and keep as much of the text as possible
            from toolbox import get_reduce_token_percent
            p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
            MAX_TOKEN = 4096
            MAX_TOKEN = get_max_token(llm_kwargs)
            EXCEED_ALLO = 512 + 512 * exceeded_cnt
            inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
            gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
@@ -631,89 +631,6 @@ def get_files_from_everything(txt, type): # type='.md'


def Singleton(cls):
    _instance = {}

    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)
        return _instance[cls]

    return _singleton


@Singleton
class knowledge_archive_interface():
    def __init__(self) -> None:
        self.threadLock = threading.Lock()
        self.current_id = ""
        self.kai_path = None
        self.qa_handle = None
        self.text2vec_large_chinese = None

    def get_chinese_text2vec(self):
        if self.text2vec_large_chinese is None:
            # < ------------------- warm up the text-vectorization module --------------- >
            from toolbox import ProxyNetworkActivate
            print('Checking Text2vec ...')
            from langchain.embeddings.huggingface import HuggingFaceEmbeddings
            with ProxyNetworkActivate('Download_LLM'):  # temporarily activate the proxy network
                self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")

        return self.text2vec_large_chinese

    def feed_archive(self, file_manifest, id="default"):
        self.threadLock.acquire()
        # import uuid
        self.current_id = id
        from zh_langchain import construct_vector_store
        self.qa_handle, self.kai_path = construct_vector_store(
            vs_id=self.current_id,
            files=file_manifest,
            sentence_size=100,
            history=[],
            one_conent="",
            one_content_segmentation="",
            text2vec = self.get_chinese_text2vec(),
        )
        self.threadLock.release()

    def get_current_archive_id(self):
        return self.current_id

    def get_loaded_file(self):
        return self.qa_handle.get_loaded_file()

    def answer_with_archive_by_id(self, txt, id):
        self.threadLock.acquire()
        if not self.current_id == id:
            self.current_id = id
            from zh_langchain import construct_vector_store
            self.qa_handle, self.kai_path = construct_vector_store(
                vs_id=self.current_id,
                files=[],
                sentence_size=100,
                history=[],
                one_conent="",
                one_content_segmentation="",
                text2vec = self.get_chinese_text2vec(),
            )
        VECTOR_SEARCH_SCORE_THRESHOLD = 0
        VECTOR_SEARCH_TOP_K = 4
        CHUNK_SIZE = 512
        resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
            query = txt,
            vs_path = self.kai_path,
            score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
            vector_search_top_k=VECTOR_SEARCH_TOP_K,
            chunk_conent=True,
            chunk_size=CHUNK_SIZE,
            text2vec = self.get_chinese_text2vec(),
        )
        self.threadLock.release()
        return resp, prompt

@Singleton
class nougat_interface():
@@ -0,0 +1,35 @@

from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from request_llms.bridge_all import predict_no_ui_long_connection

def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```"  # regex pattern to match code blocks
    matches = re.findall(pattern, reply)  # find all code blocks in text
    if len(matches) == 1:
        return "```" + matches[0] + "```"  # code block
    raise RuntimeError("GPT is not generating proper code.")

def is_same_thing(a, b, llm_kwargs):
    from pydantic import BaseModel, Field
    class IsSameThing(BaseModel):
        is_same_thing: bool = Field(description="determine whether two objects are same thing.", default=False)

    def run_gpt_fn(inputs, sys_prompt, history=[]):
        return predict_no_ui_long_connection(
            inputs=inputs, llm_kwargs=llm_kwargs,
            history=history, sys_prompt=sys_prompt, observe_window=[]
        )

    gpt_json_io = GptJsonIO(IsSameThing)
    inputs_01 = "Identify whether the user input and the target is the same thing: \n target object: {a} \n user input object: {b} \n\n\n".format(a=a, b=b)
    inputs_01 += "\n\n\n Note that the user may describe the target object with a different language, e.g. cat and 猫 are the same thing."
    analyze_res_cot_01 = run_gpt_fn(inputs_01, "", [])

    inputs_02 = inputs_01 + gpt_json_io.format_instructions
    analyze_res = run_gpt_fn(inputs_02, "", [inputs_01, analyze_res_cot_01])

    try:
        res = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
        return res.is_same_thing
    except JsonStringError as e:
        return False

@@ -416,8 +416,11 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
            from .latex_toolbox import merge_pdfs
            concat_pdf = pj(work_folder_modified, f'comparison.pdf')
            merge_pdfs(origin_pdf, result_pdf, concat_pdf)
            if os.path.exists(pj(work_folder, '..', 'translation')):
                shutil.copyfile(concat_pdf, pj(work_folder, '..', 'translation', 'comparison.pdf'))
            promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
        except Exception as e:
            print(e)
            pass
        return True  # success
    else:
@@ -283,10 +283,10 @@ def find_tex_file_ignore_case(fp):
    dir_name = os.path.dirname(fp)
    base_name = os.path.basename(fp)
    # if the given file path is correct
    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
    # if not, try appending a .tex suffix
    if not base_name.endswith('.tex'): base_name+='.tex'
    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
    # if it still cannot be found, lift the case restriction and try again
    import glob
    for f in glob.glob(dir_name+'/*.tex'):
@@ -493,11 +493,38 @@ def compile_latex_with_timeout(command, cwd, timeout=60):
        return False
    return True


def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
    import sys
    try:
        result = func(*args, **kwargs)
        return_dict['result'] = result
    except Exception as e:
        exc_info = sys.exc_info()
        exception_dict['exception'] = exc_info

def run_in_subprocess(func):
    import multiprocessing
    def wrapper(*args, **kwargs):
        return_dict = multiprocessing.Manager().dict()
        exception_dict = multiprocessing.Manager().dict()
        process = multiprocessing.Process(target=run_in_subprocess_wrapper_func,
                                          args=(func, args, kwargs, return_dict, exception_dict))
        process.start()
        process.join()
        process.close()
        if 'exception' in exception_dict:
            # ooops, the subprocess ran into an exception
            exc_info = exception_dict['exception']
            raise exc_info[1].with_traceback(exc_info[2])
        if 'result' in return_dict.keys():
            # If the subprocess ran successfully, return the result
            return return_dict['result']
    return wrapper


def merge_pdfs(pdf1_path, pdf2_path, output_path):
    import PyPDF2
def _merge_pdfs(pdf1_path, pdf2_path, output_path):
    import PyPDF2  # PyPDF2 has a serious memory-leak problem; run it in a subprocess so the memory can be released
    Percent = 0.95
    # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
    # Open the first PDF file
    with open(pdf1_path, 'rb') as pdf1_file:
        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
@@ -531,3 +558,5 @@ def merge_pdfs(pdf1_path, pdf2_path, output_path):
    # Save the merged PDF file
    with open(output_path, 'wb') as output_file:
        output_writer.write(output_file)

merge_pdfs = run_in_subprocess(_merge_pdfs)  # PyPDF2 has a serious memory-leak problem; run it in a subprocess so the memory can be released
@@ -1,6 +1,7 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, disable_auto_promotion
from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
import time
@@ -21,11 +22,7 @@ class GptAcademicState():
    def reset(self):
        pass

    def lock_plugin(self, chatbot):
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def unlock_plugin(self, chatbot):
        self.reset()
    def dump_state(self, chatbot):
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def set_state(self, chatbot, key, value):
@@ -40,6 +37,57 @@ class GptAcademicState():
        state.chatbot = chatbot
        return state

class GatherMaterials():
    def __init__(self, materials) -> None:
        materials = ['image', 'prompt']

class GptAcademicGameBaseState():
    """
    1. first init: __init__ ->
    """
    def init_game(self, chatbot, lock_plugin):
        self.plugin_name = None
        self.callback_fn = None
        self.delete_game = False
        self.step_cnt = 0

    def lock_plugin(self, chatbot):
        if self.callback_fn is None:
            raise ValueError("callback_fn is None")
        chatbot._cookies['lock_plugin'] = self.callback_fn
        self.dump_state(chatbot)

    def get_plugin_name(self):
        if self.plugin_name is None:
            raise ValueError("plugin_name is None")
        return self.plugin_name

    def dump_state(self, chatbot):
        chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)

    def set_state(self, chatbot, key, value):
        setattr(self, key, value)
        chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)

    @staticmethod
    def sync_state(chatbot, llm_kwargs, cls, plugin_name, callback_fn, lock_plugin=True):
        state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
        if state is not None:
            state = pickle.loads(state)
        else:
            state = cls()
            state.init_game(chatbot, lock_plugin)
        state.plugin_name = plugin_name
        state.llm_kwargs = llm_kwargs
        state.chatbot = chatbot
        state.callback_fn = callback_fn
        return state

    def continue_game(self, prompt, chatbot, history):
        # the main body of the game
        yield from self.step(prompt, chatbot, history)
        self.step_cnt += 1
        # save the state and wrap up
        self.dump_state(chatbot)
        # if the game has ended, clean up
        if self.delete_game:
            chatbot._cookies['lock_plugin'] = None
            chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = None
        yield from update_ui(chatbot=chatbot, history=history)
@@ -0,0 +1,70 @@
# From project chatglm-langchain

from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import CharacterTextSplitter
import re
from typing import List

class ChineseTextSplitter(CharacterTextSplitter):
    def __init__(self, pdf: bool = False, sentence_size: int = None, **kwargs):
        super().__init__(**kwargs)
        self.pdf = pdf
        self.sentence_size = sentence_size

    def split_text1(self, text: str) -> List[str]:
        if self.pdf:
            text = re.sub(r"\n{3,}", "\n", text)
            text = re.sub('\s', ' ', text)
            text = text.replace("\n\n", "")
        sent_sep_pattern = re.compile('([﹒﹔﹖﹗.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))')  # del :;
        sent_list = []
        for ele in sent_sep_pattern.split(text):
            if sent_sep_pattern.match(ele) and sent_list:
                sent_list[-1] += ele
            elif ele:
                sent_list.append(ele)
        return sent_list

    def split_text(self, text: str) -> List[str]:  # the logic here needs further optimization
        if self.pdf:
            text = re.sub(r"\n{3,}", r"\n", text)
            text = re.sub('\s', " ", text)
            text = re.sub("\n\n", "", text)

        text = re.sub(r'([;;.!?。!?\?])([^”’])', r"\1\n\2", text)  # single-character sentence terminators
        text = re.sub(r'(\.{6})([^"’”」』])', r"\1\n\2", text)  # English ellipsis
        text = re.sub(r'(\…{2})([^"’”」』])', r"\1\n\2", text)  # Chinese ellipsis
        text = re.sub(r'([;;!?。!?\?]["’”」』]{0,2})([^;;!?,。!?\?])', r'\1\n\2', text)
        # a closing quote only ends a sentence if a terminator precedes it; place the sentence break \n after the quote (the rules above carefully preserve quotes)
        text = text.rstrip()  # remove any extra trailing \n at the end of the paragraph
        # many rule sets also handle the semicolon; it is ignored here, as are dashes and English double quotes; simple adjustments can add them if needed
        ls = [i for i in text.split("\n") if i]
        for ele in ls:
            if len(ele) > self.sentence_size:
                ele1 = re.sub(r'([,,.]["’”」』]{0,2})([^,,.])', r'\1\n\2', ele)
                ele1_ls = ele1.split("\n")
                for ele_ele1 in ele1_ls:
                    if len(ele_ele1) > self.sentence_size:
                        ele_ele2 = re.sub(r'([\n]{1,}| {2,}["’”」』]{0,2})([^\s])', r'\1\n\2', ele_ele1)
                        ele2_ls = ele_ele2.split("\n")
                        for ele_ele2 in ele2_ls:
                            if len(ele_ele2) > self.sentence_size:
                                ele_ele3 = re.sub('( ["’”」』]{0,2})([^ ])', r'\1\n\2', ele_ele2)
                                ele2_id = ele2_ls.index(ele_ele2)
                                ele2_ls = ele2_ls[:ele2_id] + [i for i in ele_ele3.split("\n") if i] + ele2_ls[ele2_id + 1:]
                        ele_id = ele1_ls.index(ele_ele1)
                        ele1_ls = ele1_ls[:ele_id] + [i for i in ele2_ls if i] + ele1_ls[ele_id + 1:]

                id = ls.index(ele)
                ls = ls[:id] + [i for i in ele1_ls if i] + ls[id + 1:]
        return ls

def load_file(filepath, sentence_size):
    loader = UnstructuredFileLoader(filepath, mode="elements")
    textsplitter = ChineseTextSplitter(pdf=False, sentence_size=sentence_size)
    docs = loader.load_and_split(text_splitter=textsplitter)
    # write_check_file(filepath, docs)
    return docs
@@ -0,0 +1,338 @@
|
||||
# From project chatglm-langchain
|
||||
|
||||
import threading
|
||||
from toolbox import Singleton
|
||||
import os
|
||||
import shutil
|
||||
import os
|
||||
import uuid
|
||||
import tqdm
|
||||
from langchain.vectorstores import FAISS
|
||||
from langchain.docstore.document import Document
|
||||
from typing import List, Tuple
|
||||
import numpy as np
|
||||
from crazy_functions.vector_fns.general_file_loader import load_file
|
||||
|
||||
embedding_model_dict = {
|
||||
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
|
||||
"ernie-base": "nghuyong/ernie-3.0-base-zh",
|
||||
"text2vec-base": "shibing624/text2vec-base-chinese",
|
||||
"text2vec": "GanymedeNil/text2vec-large-chinese",
|
||||
}
|
||||
|
||||
# Embedding model name
|
||||
EMBEDDING_MODEL = "text2vec"
|
||||
|
||||
# Embedding running device
|
||||
EMBEDDING_DEVICE = "cpu"
|
||||
|
||||
# 基于上下文的prompt模版,请务必保留"{question}"和"{context}"
|
||||
PROMPT_TEMPLATE = """已知信息:
|
||||
{context}
|
||||
|
||||
根据上述已知信息,简洁和专业的来回答用户的问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题” 或 “没有提供足够的相关信息”,不允许在答案中添加编造成分,答案请使用中文。 问题是:{question}"""
|
||||
|
||||
# 文本分句长度
|
||||
SENTENCE_SIZE = 100
|
||||
|
||||
# 匹配后单段上下文长度
|
||||
CHUNK_SIZE = 250
|
||||
|
||||
# LLM input history length
|
||||
LLM_HISTORY_LEN = 3
|
||||
|
||||
# return top-k text chunk from vector store
|
||||
VECTOR_SEARCH_TOP_K = 5
|
||||
|
||||
# 知识检索内容相关度 Score, 数值范围约为0-1100,如果为0,则不生效,经测试设置为小于500时,匹配结果更精准
|
||||
VECTOR_SEARCH_SCORE_THRESHOLD = 0
|
||||
|
||||
NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data")
|
||||
|
||||
FLAG_USER_NAME = uuid.uuid4().hex
|
||||
|
||||
# 是否开启跨域,默认为False,如果需要开启,请设置为True
|
||||
# is open cross domain
|
||||
OPEN_CROSS_DOMAIN = False
|
||||
|
||||
def similarity_search_with_score_by_vector(
|
||||
self, embedding: List[float], k: int = 4
|
||||
) -> List[Tuple[Document, float]]:
|
||||
|
||||
def seperate_list(ls: List[int]) -> List[List[int]]:
|
||||
lists = []
|
||||
ls1 = [ls[0]]
|
||||
for i in range(1, len(ls)):
|
||||
if ls[i - 1] + 1 == ls[i]:
|
||||
ls1.append(ls[i])
|
||||
else:
|
||||
lists.append(ls1)
|
||||
ls1 = [ls[i]]
|
||||
lists.append(ls1)
|
||||
return lists
|
||||
|
||||
scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k)
|
||||
docs = []
|
||||
id_set = set()
|
||||
store_len = len(self.index_to_docstore_id)
|
||||
for j, i in enumerate(indices[0]):
|
||||
if i == -1 or 0 < self.score_threshold < scores[0][j]:
|
||||
# This happens when not enough docs are returned.
|
||||
continue
|
||||
_id = self.index_to_docstore_id[i]
|
||||
doc = self.docstore.search(_id)
|
||||
if not self.chunk_conent:
|
||||
if not isinstance(doc, Document):
|
||||
raise ValueError(f"Could not find document for id {_id}, got {doc}")
|
||||
doc.metadata["score"] = int(scores[0][j])
|
||||
docs.append(doc)
|
||||
continue
|
||||
id_set.add(i)
|
||||
docs_len = len(doc.page_content)
|
||||
for k in range(1, max(i, store_len - i)):
|
||||
break_flag = False
|
||||
for l in [i + k, i - k]:
|
||||
if 0 <= l < len(self.index_to_docstore_id):
|
||||
_id0 = self.index_to_docstore_id[l]
|
||||
doc0 = self.docstore.search(_id0)
|
||||
if docs_len + len(doc0.page_content) > self.chunk_size:
|
||||
break_flag = True
|
||||
break
|
||||
elif doc0.metadata["source"] == doc.metadata["source"]:
|
||||
docs_len += len(doc0.page_content)
|
||||
id_set.add(l)
|
||||
if break_flag:
|
||||
break
|
||||
if not self.chunk_conent:
|
||||
return docs
|
||||
if len(id_set) == 0 and self.score_threshold > 0:
|
||||
return []
|
||||
id_list = sorted(list(id_set))
|
||||
id_lists = seperate_list(id_list)
|
||||
for id_seq in id_lists:
|
||||
for id in id_seq:
|
||||
if id == id_seq[0]:
|
||||
_id = self.index_to_docstore_id[id]
|
||||
doc = self.docstore.search(_id)
|
||||
else:
|
||||
_id0 = self.index_to_docstore_id[id]
|
||||
doc0 = self.docstore.search(_id0)
|
||||
doc.page_content += " " + doc0.page_content
|
||||
if not isinstance(doc, Document):
|
||||
raise ValueError(f"Could not find document for id {_id}, got {doc}")
|
||||
doc_score = min([scores[0][id] for id in [indices[0].tolist().index(i) for i in id_seq if i in indices[0]]])
|
||||
doc.metadata["score"] = int(doc_score)
|
||||
docs.append(doc)
|
||||
return docs
|
||||
|
||||
|
||||
class LocalDocQA:
|
||||
llm: object = None
|
||||
embeddings: object = None
|
||||
top_k: int = VECTOR_SEARCH_TOP_K
|
||||
chunk_size: int = CHUNK_SIZE
|
||||
chunk_conent: bool = True
|
||||
score_threshold: int = VECTOR_SEARCH_SCORE_THRESHOLD
|
||||
|
||||
def init_cfg(self,
|
||||
top_k=VECTOR_SEARCH_TOP_K,
|
||||
):
|
||||
|
||||
self.llm = None
|
||||
self.top_k = top_k
|
||||
|
||||
def init_knowledge_vector_store(self,
|
||||
filepath,
|
||||
vs_path: str or os.PathLike = None,
|
||||
sentence_size=SENTENCE_SIZE,
|
||||
text2vec=None):
|
||||
loaded_files = []
|
||||
failed_files = []
|
||||
if isinstance(filepath, str):
|
||||
if not os.path.exists(filepath):
|
||||
print("路径不存在")
|
||||
return None
|
||||
elif os.path.isfile(filepath):
|
||||
file = os.path.split(filepath)[-1]
|
||||
try:
|
||||
docs = load_file(filepath, SENTENCE_SIZE)
|
||||
print(f"{file} 已成功加载")
|
||||
loaded_files.append(filepath)
|
||||
except Exception as e:
|
||||
print(e)
|
||||
print(f"{file} 未能成功加载")
|
||||
return None
|
||||
elif os.path.isdir(filepath):
|
||||
docs = []
|
||||
for file in tqdm(os.listdir(filepath), desc="加载文件"):
|
||||
fullfilepath = os.path.join(filepath, file)
|
||||
try:
|
||||
docs += load_file(fullfilepath, SENTENCE_SIZE)
|
||||
loaded_files.append(fullfilepath)
|
||||
except Exception as e:
|
||||
print(e)
|
||||
failed_files.append(file)
|
||||
|
||||
if len(failed_files) > 0:
|
||||
print("以下文件未能成功加载:")
|
||||
for file in failed_files:
|
||||
print(f"{file}\n")
|
||||
|
||||
else:
|
||||
docs = []
|
||||
for file in filepath:
|
||||
docs += load_file(file, SENTENCE_SIZE)
|
||||
print(f"{file} 已成功加载")
|
||||
loaded_files.append(file)
|
||||
|
||||
if len(docs) > 0:
|
||||
print("文件加载完毕,正在生成向量库")
|
||||
if vs_path and os.path.isdir(vs_path):
|
||||
try:
|
||||
self.vector_store = FAISS.load_local(vs_path, text2vec)
|
||||
self.vector_store.add_documents(docs)
|
||||
except:
|
||||
self.vector_store = FAISS.from_documents(docs, text2vec)
|
||||
else:
|
||||
self.vector_store = FAISS.from_documents(docs, text2vec) # docs 为Document列表
|
||||
|
||||
self.vector_store.save_local(vs_path)
|
||||
return vs_path, loaded_files
|
||||
else:
|
||||
raise RuntimeError("文件加载失败,请检查文件格式是否正确")
|
||||
|
||||
def get_loaded_file(self, vs_path):
|
||||
ds = self.vector_store.docstore
|
||||
return set([ds._dict[k].metadata['source'].split(vs_path)[-1] for k in ds._dict])
|
||||
|
||||
|
||||
    # query                查询内容
    # vs_path              知识库路径
    # chunk_conent         是否启用上下文关联
    # score_threshold      搜索匹配score阈值
    # vector_search_top_k  搜索知识库内容条数,默认搜索5条结果
    # chunk_size           匹配单段内容时连接上下文的长度
    def get_knowledge_based_conent_test(self, query, vs_path, chunk_conent,
                                        score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
                                        vector_search_top_k=VECTOR_SEARCH_TOP_K, chunk_size=CHUNK_SIZE,
                                        text2vec=None):
        self.vector_store = FAISS.load_local(vs_path, text2vec)
        self.vector_store.chunk_conent = chunk_conent
        self.vector_store.score_threshold = score_threshold
        self.vector_store.chunk_size = chunk_size

        embedding = self.vector_store.embedding_function.embed_query(query)
        related_docs_with_score = similarity_search_with_score_by_vector(self.vector_store, embedding, k=vector_search_top_k)

        if not related_docs_with_score:
            response = {"query": query,
                        "source_documents": []}
            return response, ""
        # prompt = f"{query}. You should answer this question using information from following documents: \n\n"
        prompt = f"{query}. 你必须利用以下文档中包含的信息回答这个问题: \n\n---\n\n"
        prompt += "\n\n".join([f"({k}): " + doc.page_content for k, doc in enumerate(related_docs_with_score)])
        prompt += "\n\n---\n\n"
        prompt = prompt.encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
        # print(prompt)
        response = {"query": query, "source_documents": related_docs_with_score}
        return response, prompt
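    # ------------------------------------------------------------------
    # 用法示意(非源码的一部分;变量名均为假设的示例):先建库,再检索。
    # 其中 text2vec 需传入一个 langchain Embeddings 实例(见下文 get_chinese_text2vec)。
    #
    #   qa = LocalDocQA(); qa.init_cfg()
    #   vs_path, ok_files = qa.init_knowledge_vector_store(["a.md", "b.pdf"], vs_path="vs/demo", text2vec=embeddings)
    #   resp, prompt = qa.get_knowledge_based_conent_test("什么是FAISS?", vs_path, chunk_conent=True, text2vec=embeddings)
    #   # resp["source_documents"] 为命中的 Document 列表,prompt 可直接交给 LLM 作答
    # ------------------------------------------------------------------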


def construct_vector_store(vs_id, vs_path, files, sentence_size, history, one_conent, one_content_segmentation, text2vec):
    for file in files:
        assert os.path.exists(file), "输入文件不存在:" + file
    import nltk
    if NLTK_DATA_PATH not in nltk.data.path: nltk.data.path = [NLTK_DATA_PATH] + nltk.data.path
    local_doc_qa = LocalDocQA()
    local_doc_qa.init_cfg()
    filelist = []
    if not os.path.exists(os.path.join(vs_path, vs_id)):
        os.makedirs(os.path.join(vs_path, vs_id))
    for file in files:
        file_name = file.name if not isinstance(file, str) else file
        filename = os.path.split(file_name)[-1]
        shutil.copyfile(file_name, os.path.join(vs_path, vs_id, filename))
        filelist.append(os.path.join(vs_path, vs_id, filename))
    vs_path, loaded_files = local_doc_qa.init_knowledge_vector_store(filelist, os.path.join(vs_path, vs_id), sentence_size, text2vec)

    if len(loaded_files):
        file_status = f"已添加 {'、'.join([os.path.split(i)[-1] for i in loaded_files if i])} 内容至知识库,并已加载知识库,请开始提问"
    else:
        pass
        # file_status = "文件未成功加载,请重新上传文件"
    # print(file_status)
    return local_doc_qa, vs_path


@Singleton
class knowledge_archive_interface():
    def __init__(self) -> None:
        self.threadLock = threading.Lock()
        self.current_id = ""
        self.kai_path = None
        self.qa_handle = None
        self.text2vec_large_chinese = None

    def get_chinese_text2vec(self):
        if self.text2vec_large_chinese is None:
            # < -------------------预热文本向量化模组--------------- >
            from toolbox import ProxyNetworkActivate
            print('Checking Text2vec ...')
            from langchain.embeddings.huggingface import HuggingFaceEmbeddings
            with ProxyNetworkActivate('Download_LLM'):  # 临时地激活代理网络
                self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")

        return self.text2vec_large_chinese

    def feed_archive(self, file_manifest, vs_path, id="default"):
        self.threadLock.acquire()
        # import uuid
        self.current_id = id
        self.qa_handle, self.kai_path = construct_vector_store(
            vs_id=self.current_id,
            vs_path=vs_path,
            files=file_manifest,
            sentence_size=100,
            history=[],
            one_conent="",
            one_content_segmentation="",
            text2vec = self.get_chinese_text2vec(),
        )
        self.threadLock.release()

    def get_current_archive_id(self):
        return self.current_id

    def get_loaded_file(self, vs_path):
        return self.qa_handle.get_loaded_file(vs_path)

    def answer_with_archive_by_id(self, txt, id, vs_path):
        self.threadLock.acquire()
        if not self.current_id == id:
            self.current_id = id
            self.qa_handle, self.kai_path = construct_vector_store(
                vs_id=self.current_id,
                vs_path=vs_path,
                files=[],
                sentence_size=100,
                history=[],
                one_conent="",
                one_content_segmentation="",
                text2vec = self.get_chinese_text2vec(),
            )
        VECTOR_SEARCH_SCORE_THRESHOLD = 0
        VECTOR_SEARCH_TOP_K = 4
        CHUNK_SIZE = 512
        resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
            query = txt,
            vs_path = self.kai_path,
            score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
            vector_search_top_k=VECTOR_SEARCH_TOP_K,
            chunk_conent=True,
            chunk_size=CHUNK_SIZE,
            text2vec = self.get_chinese_text2vec(),
        )
        self.threadLock.release()
        return resp, prompt
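# ------------------------------------------------------------------
# 用法示意(非源码的一部分;路径与 id 均为假设值):该类为 @Singleton 单例,
# threadLock 保证 feed_archive / answer_with_archive_by_id 不会并发重建向量库。
#
#   kai = knowledge_archive_interface()
#   kai.feed_archive(file_manifest=["doc1.pdf"], vs_path="gpt_log/vec_store", id="default")
#   resp, prompt = kai.answer_with_archive_by_id("请总结doc1", "default", "gpt_log/vec_store")
# ------------------------------------------------------------------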

crazy_functions/互动小游戏.py · 新增 59 行 · 普通文件
@@ -0,0 +1,59 @@
from toolbox import CatchException, update_ui, update_ui_lastest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random


class MiniGame_ASCII_Art(GptAcademicGameBaseState):

    def step(self, prompt, chatbot, history):
        if self.step_cnt == 0:
            chatbot.append(["我画你猜(动物)", "请稍等..."])
        else:
            if prompt.strip() == 'exit':
                self.delete_game = True
                yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
                return
            chatbot.append([prompt, ""])
        yield from update_ui(chatbot=chatbot, history=history)

        if self.step_cnt == 0:
            self.lock_plugin(chatbot)
            self.cur_task = 'draw'

        if self.cur_task == 'draw':
            avail_obj = ["狗","猫","鸟","鱼","老鼠","蛇"]
            self.obj = random.choice(avail_obj)
            inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
            raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
            self.cur_task = 'identify user guess'
            res = get_code_block(raw_res)
            history += ['', f'the answer is {self.obj}', inputs, res]
            yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)

        elif self.cur_task == 'identify user guess':
            if is_same_thing(self.obj, prompt, self.llm_kwargs):
                self.delete_game = True
                yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
            else:
                self.cur_task = 'identify user guess'
                yield from update_ui_lastest_msg(lastmsg="猜错了,再试试,输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)


@CatchException
def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 清空历史
    history = []
    # 选择游戏
    cls = MiniGame_ASCII_Art
    # 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
    state = cls.sync_state(chatbot,
                           llm_kwargs,
                           cls,
                           plugin_name='MiniGame_ASCII_Art',
                           callback_fn='crazy_functions.互动小游戏->随机小游戏',
                           lock_plugin=True
                           )
    yield from state.continue_game(prompt, chatbot, history)
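# ------------------------------------------------------------------
# 运行机制示意(非源码的一部分,基于上文代码的推断):
#   第一次点击插件 -> step_cnt==0 -> lock_plugin 把 chatbot cookie 指向
#   'crazy_functions.互动小游戏->随机小游戏',此后用户在输入框的每次提交都会
#   回调本函数;sync_state 从 cookie 反序列化游戏实例,continue_game 驱动
#   step 前进,直到 delete_game 置位(猜对或输入 exit)后解除插件锁定。
# ------------------------------------------------------------------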

@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicState


def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None):
def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None, style=None):
    import requests, json, time, os
    from request_llms.bridge_all import model_info

@@ -25,7 +25,10 @@ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", qual
        'model': model,
        'response_format': 'url'
    }
    if quality is not None: data.update({'quality': quality})
    if quality is not None:
        data['quality'] = quality
    if style is not None:
        data['style'] = style
    response = requests.post(url, headers=headers, json=data, proxies=proxies)
    print(response.content)
    try:
@@ -54,19 +57,25 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da
    img_endpoint = chat_endpoint.replace('chat/completions','images/edits')
    # # Generate the image
    url = img_endpoint
    n = 1
    headers = {
        'Authorization': f"Bearer {api_key}",
        'Content-Type': 'application/json'
    }
    data = {
        'image': open(image_path, 'rb'),
        'prompt': prompt,
        'n': 1,
        'size': resolution,
        'model': model,
        'response_format': 'url'
    }
    response = requests.post(url, headers=headers, json=data, proxies=proxies)
    make_transparent(image_path, image_path+'.tsp.png')
    make_square_image(image_path+'.tsp.png', image_path+'.tspsq.png')
    resize_image(image_path+'.tspsq.png', image_path+'.ready.png', max_size=1024)
    image_path = image_path+'.ready.png'
    with open(image_path, 'rb') as f:
        file_content = f.read()
    files = {
        'image': (os.path.basename(image_path), file_content),
        # 'mask': ('mask.png', open('mask.png', 'rb'))
        'prompt': (None, prompt),
        "n": (None, str(n)),
        'size': (None, resolution),
    }

    response = requests.post(url, headers=headers, files=files, proxies=proxies)
    print(response.content)
    try:
        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
@@ -115,13 +124,18 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution = plugin_kwargs.get("advanced_arg", '1024x1024').lower()
    if resolution.endswith('-hd'):
        resolution = resolution.replace('-hd', '')
        quality = 'hd'
    else:
        quality = 'standard'
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality)
    resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
    parts = resolution_arg.split('-')
    resolution = parts[0]   # 解析分辨率
    quality = 'standard'    # 质量与风格默认值
    style = 'vivid'
    # 遍历检查是否有额外参数
    for part in parts[1:]:
        if part in ['hd', 'standard']:
            quality = part
        elif part in ['vivid', 'natural']:
            style = part
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
    chatbot.append([prompt,
        f'图像中转网址: <br/>`{image_url}`<br/>'+
        f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
@@ -130,6 +144,7 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    ])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新


class ImageEditState(GptAcademicState):
    # 尚未完成
    def get_image_file(self, x):
@@ -142,18 +157,27 @@ class ImageEditState(GptAcademicState):
        file = None if not confirm else file_manifest[0]
        return confirm, file

    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.图片生成->图片修改_DALLE2'
        self.dump_state(chatbot)

    def unlock_plugin(self, chatbot):
        self.reset()
        chatbot._cookies['lock_plugin'] = None
        self.dump_state(chatbot)

    def get_resolution(self, x):
        return (x in ['256x256', '512x512', '1024x1024']), x

    def get_prompt(self, x):
        confirm = (len(x)>=5) and (not self.get_resolution(x)[0]) and (not self.get_image_file(x)[0])
        return confirm, x

    def reset(self):
        self.req = [
            {'value':None, 'description': '请先上传图像(必须是.png格式), 然后再次点击本插件', 'verify_fn': self.get_image_file},
            {'value':None, 'description': '请输入分辨率,可选:256x256, 512x512 或 1024x1024', 'verify_fn': self.get_resolution},
            {'value':None, 'description': '请输入修改需求,建议您使用英文提示词', 'verify_fn': self.get_prompt},
            {'value':None, 'description': '请先上传图像(必须是.png格式), 然后再次点击本插件', 'verify_fn': self.get_image_file},
            {'value':None, 'description': '请输入分辨率,可选:256x256, 512x512 或 1024x1024, 然后再次点击本插件', 'verify_fn': self.get_resolution},
            {'value':None, 'description': '请输入修改需求,建议您使用英文提示词, 然后再次点击本插件', 'verify_fn': self.get_prompt},
        ]
        self.info = ""

@@ -163,7 +187,7 @@ class ImageEditState(GptAcademicState):
            confirm, res = r['verify_fn'](prompt)
            if confirm:
                r['value'] = res
                self.set_state(chatbot, 'dummy_key', 'dummy_value')
                self.dump_state(chatbot)
                break
        return self

@@ -182,23 +206,63 @@ def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    history = []    # 清空历史
    state = ImageEditState.get_state(chatbot, ImageEditState)
    state = state.feed(prompt, chatbot)
    state.lock_plugin(chatbot)
    if not state.already_obtained_all_materials():
        chatbot.append(["图片修改(先上传图片,再输入修改需求,最后输入分辨率)", state.next_req()])
        chatbot.append(["图片修改\n\n1. 上传图片(图片中需要修改的位置用橡皮擦擦除为纯白色,即RGB=255,255,255)\n2. 输入分辨率 \n3. 输入修改需求", state.next_req()])
        yield from update_ui(chatbot=chatbot, history=history)
        return

    image_path = state.req[0]
    resolution = state.req[1]
    prompt = state.req[2]
    image_path = state.req[0]['value']
    resolution = state.req[1]['value']
    prompt = state.req[2]['value']
    chatbot.append(["图片修改, 执行中", f"图片:`{image_path}`<br/>分辨率:`{resolution}`<br/>修改需求:`{prompt}`"])
    yield from update_ui(chatbot=chatbot, history=history)

    image_url, image_path = edit_image(llm_kwargs, prompt, image_path, resolution)
    chatbot.append([state.prompt,
    chatbot.append([prompt,
        f'图像中转网址: <br/>`{image_url}`<br/>'+
        f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
        f'本地文件地址: <br/>`{image_path}`<br/>'+
        f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
    ])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
    state.unlock_plugin(chatbot)


def make_transparent(input_image_path, output_image_path):
    from PIL import Image
    image = Image.open(input_image_path)
    image = image.convert("RGBA")
    data = image.getdata()
    new_data = []
    for item in data:
        if item[0] == 255 and item[1] == 255 and item[2] == 255:
            new_data.append((255, 255, 255, 0))
        else:
            new_data.append(item)
    image.putdata(new_data)
    image.save(output_image_path, "PNG")


def resize_image(input_path, output_path, max_size=1024):
    from PIL import Image
    with Image.open(input_path) as img:
        width, height = img.size
        if width > max_size or height > max_size:
            if width >= height:
                new_width = max_size
                new_height = int((max_size / width) * height)
            else:
                new_height = max_size
                new_width = int((max_size / height) * width)

            resized_img = img.resize(size=(new_width, new_height))
            resized_img.save(output_path)
        else:
            img.save(output_path)


def make_square_image(input_path, output_path):
    from PIL import Image
    with Image.open(input_path) as img:
        width, height = img.size
        size = max(width, height)
        new_img = Image.new("RGBA", (size, size), color="black")
        new_img.paste(img, ((size - width) // 2, (size - height) // 2))
        new_img.save(output_path)
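# ------------------------------------------------------------------
# 预处理链路示意(非源码的一部分):edit_image 在上传前依次调用上面三个辅助
# 函数,把纯白区域转为透明、黑边补成正方形、再压缩到 1024px 以内——这与
# OpenAI images/edits 接口对输入图(方形、带透明通道的 PNG)的要求对应,
# 细节以官方文档为准。
#
#   make_transparent('in.png', 'in.png.tsp.png')               # 纯白 -> 透明
#   make_square_image('in.png.tsp.png', 'in.png.tspsq.png')    # 补成正方形
#   resize_image('in.png.tspsq.png', 'in.png.ready.png', max_size=1024)
# ------------------------------------------------------------------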


@@ -1,10 +1,19 @@
from toolbox import CatchException, update_ui, ProxyNetworkActivate, update_ui_lastest_msg
from toolbox import CatchException, update_ui, ProxyNetworkActivate, update_ui_lastest_msg, get_log_folder, get_user
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_files_from_everything

install_msg ="""

1. python -m pip install torch --index-url https://download.pytorch.org/whl/cpu

2. python -m pip install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade

3. python -m pip install unstructured[all-docs] --upgrade

4. python -c 'import nltk; nltk.download("punkt")'
"""

@CatchException
def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -25,15 +34,15 @@ def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro

    # resolve deps
    try:
        from zh_langchain import construct_vector_store
        from langchain.embeddings.huggingface import HuggingFaceEmbeddings
        from .crazy_utils import knowledge_archive_interface
        # from zh_langchain import construct_vector_store
        # from langchain.embeddings.huggingface import HuggingFaceEmbeddings
        from crazy_functions.vector_fns.vector_database import knowledge_archive_interface
    except Exception as e:
        chatbot.append(["依赖不足", "导入依赖失败。正在尝试自动安装,请查看终端的输出或耐心等待..."])
        chatbot.append(["依赖不足", f"{str(e)}\n\n导入依赖失败。请用以下命令安装" + install_msg])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        from .crazy_utils import try_install_deps
        try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
        yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
        # from .crazy_utils import try_install_deps
        # try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
        # yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
        return

    # < --------------------读取文件--------------- >
@@ -42,7 +51,7 @@ def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    for sp in spl:
        _, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}')
        file_manifest += file_manifest_tmp

    if len(file_manifest) == 0:
        chatbot.append(["没有找到任何可读取文件", "当前支持的格式包括: txt, md, docx, pptx, pdf, json等"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
@@ -62,13 +71,14 @@ def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    print('Establishing knowledge archive ...')
    with ProxyNetworkActivate('Download_LLM'):  # 临时地激活代理网络
        kai = knowledge_archive_interface()
        kai.feed_archive(file_manifest=file_manifest, id=kai_id)
        kai_files = kai.get_loaded_file()
        vs_path = get_log_folder(user=get_user(chatbot), plugin_name='vec_store')
        kai.feed_archive(file_manifest=file_manifest, vs_path=vs_path, id=kai_id)
        kai_files = kai.get_loaded_file(vs_path=vs_path)
        kai_files = '<br/>'.join(kai_files)
    # chatbot.append(['知识库构建成功', "正在将知识库存储至cookie中"])
    # yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
    # chatbot._cookies['langchain_plugin_embedding'] = kai.get_current_archive_id()
    # chatbot._cookies['lock_plugin'] = 'crazy_functions.Langchain知识库->读取知识库作答'
    # chatbot._cookies['lock_plugin'] = 'crazy_functions.知识库文件注入->读取知识库作答'
    # chatbot.append(['完成', "“根据知识库作答”函数插件已经接管问答系统, 提问吧! 但注意, 您接下来不能再使用其他插件了,刷新页面即可以退出知识库问答模式。"])
    chatbot.append(['构建完成', f"当前知识库内的有效文件:\n\n---\n\n{kai_files}\n\n---\n\n请切换至“知识库问答”插件进行知识库访问, 或者使用此插件继续上传更多文件。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
@@ -77,15 +87,15 @@ def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
    # resolve deps
    try:
        from zh_langchain import construct_vector_store
        from langchain.embeddings.huggingface import HuggingFaceEmbeddings
        from .crazy_utils import knowledge_archive_interface
        # from zh_langchain import construct_vector_store
        # from langchain.embeddings.huggingface import HuggingFaceEmbeddings
        from crazy_functions.vector_fns.vector_database import knowledge_archive_interface
    except Exception as e:
        chatbot.append(["依赖不足", "导入依赖失败。正在尝试自动安装,请查看终端的输出或耐心等待..."])
        chatbot.append(["依赖不足", f"{str(e)}\n\n导入依赖失败。请用以下命令安装" + install_msg])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        from .crazy_utils import try_install_deps
        try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
        yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
        # from .crazy_utils import try_install_deps
        # try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
        # yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
        return

    # < ------------------- --------------- >
@@ -93,7 +103,8 @@ def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst

    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    kai_id = plugin_kwargs.get("advanced_arg", 'default')
    resp, prompt = kai.answer_with_archive_by_id(txt, kai_id)
    vs_path = get_log_folder(user=get_user(chatbot), plugin_name='vec_store')
    resp, prompt = kai.answer_with_archive_by_id(txt, kai_id, vs_path)

    chatbot.append((txt, f'[知识库 {kai_id}] ' + prompt))
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
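# ------------------------------------------------------------------
# 使用流程示意(非源码的一部分):先用“知识库文件注入”上传并建库,再用
# “读取知识库作答”检索提问;两者通过 advanced_arg 中的知识库 id 关联,
# 向量库统一存放在 get_log_folder(user, plugin_name='vec_store') 之下。
#   1. 输入区填入文件路径 -> 点击“知识库文件注入”(advanced_arg 可指定 id,默认 default)
#   2. 输入区填入问题     -> 点击“读取知识库作答”(advanced_arg 填同一 id)
# ------------------------------------------------------------------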

@@ -0,0 +1,26 @@
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic-nolocal-vs -f docs/GithubAction+NoLocal+Vectordb .
# 如何运行: docker run --rm -it --net=host gpt-academic-nolocal-vs
FROM python:3.11

# 指定路径
WORKDIR /gpt

# 装载项目文件
COPY . .

# 安装依赖
RUN pip3 install -r requirements.txt

# 安装知识库插件的额外依赖
RUN apt-get update && apt-get install libgl1 -y
RUN pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
RUN pip3 install unstructured[all-docs] --upgrade

# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'

# 启动
CMD ["python3", "-u", "main.py"]
@@ -923,7 +923,7 @@
    "的第": "The",
    "个片段": "fragment",
    "总结文章": "Summarize the article",
    "根据以上的对话": "According to the above dialogue",
    "根据以上的对话": "According to the conversation above",
    "的主要内容": "The main content of",
    "所有文件都总结完成了吗": "Are all files summarized?",
    "如果是.doc文件": "If it is a .doc file",
@@ -1501,7 +1501,7 @@
    "发送请求到OpenAI后": "After sending the request to OpenAI",
    "上下布局": "Vertical Layout",
    "左右布局": "Horizontal Layout",
    "对话窗的高度": "Height of the Dialogue Window",
    "对话窗的高度": "Height of the Conversation Window",
    "重试的次数限制": "Retry Limit",
    "gpt4现在只对申请成功的人开放": "GPT-4 is now only open to those who have successfully applied",
    "提高限制请查询": "Please check for higher limits",
@@ -2183,9 +2183,8 @@
    "找不到合适插件执行该任务": "Cannot find a suitable plugin to perform this task",
    "接驳VoidTerminal": "Connect to VoidTerminal",
    "**很好": "**Very good",
    "对话|编程": "Conversation|Programming",
    "对话|编程|学术": "Conversation|Programming|Academic",
    "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
    "对话|编程": "Conversation&ImageGenerating|Programming",
    "对话|编程|学术": "Conversation&ImageGenerating|Programming|Academic",
    "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
    "「请调用插件翻译PDF论文": "Please call the plugin to translate the PDF paper",
    "3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词": "3. If you use keywords such as 'call plugin xxx', 'modify configuration xxx', 'please', etc.",
    "以下是一篇学术论文的基本信息": "The following is the basic information of an academic paper",
@@ -2630,7 +2629,7 @@
    "已经被记忆": "Already memorized",
    "默认用英文的": "Default to English",
    "错误追踪": "Error tracking",
    "对话|编程|学术|智能体": "Dialogue|Programming|Academic|Intelligent agent",
    "对话&编程|编程|学术|智能体": "Conversation&ImageGenerating|Programming|Academic|Intelligent agent",
    "请检查": "Please check",
    "检测到被滞留的缓存文档": "Detected cached documents being left behind",
    "还有哪些场合允许使用代理": "What other occasions allow the use of proxies",
@@ -2903,5 +2902,107 @@
    "高优先级": "High priority",
    "请配置ZHIPUAI_API_KEY": "Please configure ZHIPUAI_API_KEY",
    "单个azure模型": "Single Azure model",
    "预留参数 context 未实现": "Reserved parameter 'context' not implemented"
}
    "预留参数 context 未实现": "Reserved parameter 'context' not implemented",
    "在输入区输入临时API_KEY后提交": "Submit after entering temporary API_KEY in the input area",
    "鸟": "Bird",
    "图片中需要修改的位置用橡皮擦擦除为纯白色": "Erase the areas in the image that need to be modified with an eraser to pure white",
    "└── PDF文档精准解析": "└── Accurate parsing of PDF documents",
    "└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置": "└── ALLOW_RESET_CONFIG Whether to allow modifying the configuration of this page through natural language description",
    "等待指令": "Waiting for instructions",
    "不存在": "Does not exist",
    "选择游戏": "Select game",
    "本地大模型示意图": "Local large model diagram",
    "无视此消息即可": "You can ignore this message",
    "即RGB=255": "That is, RGB=255",
    "如需追问": "If you have further questions",
    "也可以是具体的模型路径": "It can also be a specific model path",
    "才会起作用": "Will take effect",
    "下载失败": "Download failed",
    "网页刷新后失效": "Invalid after webpage refresh",
    "crazy_functions.互动小游戏-": "crazy_functions.Interactive mini game-",
    "右对齐": "Right alignment",
    "您可以调用下拉菜单中的“LoadConversationHistoryArchive”还原当下的对话": "You can use the 'LoadConversationHistoryArchive' in the drop-down menu to restore the current conversation",
    "左对齐": "Left alignment",
    "使用默认的 FP16": "Use default FP16",
    "一小时": "One hour",
    "从而方便内存的释放": "Thus facilitating memory release",
    "如何临时更换API_KEY": "How to temporarily change API_KEY",
    "请输入 1024x1024-HD": "Please enter 1024x1024-HD",
    "使用 INT8 量化": "Use INT8 quantization",
    "3. 输入修改需求": "3. Enter modification requirements",
    "刷新界面 由于请求gpt需要一段时间": "Refreshing the interface takes some time due to the request for gpt",
    "随机小游戏": "Random mini game",
    "那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型": "So please specify the specific model in QWEN_MODEL_SELECTION below",
    "表值": "Table value",
    "我画你猜": "I draw, you guess",
    "狗": "Dog",
    "2. 输入分辨率": "2. Enter resolution",
    "鱼": "Fish",
    "尚未完成": "Not yet completed",
    "表头": "Table header",
    "填localhost或者127.0.0.1": "Fill in localhost or 127.0.0.1",
    "请上传jpg格式的图片": "Please upload images in jpg format",
    "API_URL_REDIRECT填写格式是错误的": "The format of API_URL_REDIRECT is incorrect",
    "├── RWKV的支持见Wiki": "Support for RWKV is available in the Wiki",
    "如果中文Prompt效果不理想": "If the Chinese prompt is not effective",
    "/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix": "/SEAFILE_LOCAL/50503047/My Library/Degree/paperlatex/aaai/Fu_8368_with_appendix",
    "只有当AVAIL_LLM_MODELS包含了对应本地模型时": "Only when AVAIL_LLM_MODELS contains the corresponding local model",
    "选择本地模型变体": "Choose the local model variant",
    "如果您确信自己没填错": "If you are sure you haven't made a mistake",
    "PyPDF2这个库有严重的内存泄露问题": "PyPDF2 library has serious memory leak issues",
    "整理文件集合 输出消息": "Organize file collection and output message",
    "没有检测到任何近期上传的图像文件": "No recently uploaded image files detected",
    "游戏结束": "Game over",
    "调用结束": "Call ended",
    "猫": "Cat",
    "请及时切换模型": "Please switch models in time",
    "次中": "In the meantime",
    "如需生成高清图像": "If you need to generate high-definition images",
    "CPU 模式": "CPU mode",
    "项目目录": "Project directory",
    "动物": "Animal",
    "居中对齐": "Center alignment",
    "请注意拓展名需要小写": "Please note that the extension name needs to be lowercase",
    "重试第": "Retry",
    "实验性功能": "Experimental feature",
    "猜错了": "Wrong guess",
    "打开你的代理软件查看代理协议": "Open your proxy software to view the proxy agreement",
    "您不需要再重复强调该文件的路径了": "You don't need to emphasize the file path again",
    "请阅读": "Please read",
    "请直接输入您的问题": "Please enter your question directly",
    "API_URL_REDIRECT填错了": "API_URL_REDIRECT is filled incorrectly",
    "谜底是": "The answer is",
    "第一个模型": "The first model",
    "你猜对了!": "You guessed it right!",
    "已经接收到您上传的文件": "The file you uploaded has been received",
    "您正在调用“图像生成”插件": "You are calling the 'Image Generation' plugin",
    "刷新界面 界面更新": "Refresh the interface, interface update",
    "如果之前已经初始化了游戏实例": "If the game instance has been initialized before",
    "文件": "File",
    "老鼠": "Mouse",
    "列2": "Column 2",
    "等待图片": "Waiting for image",
    "使用 INT4 量化": "Use INT4 quantization",
    "from crazy_functions.互动小游戏 import 随机小游戏": "TranslatedText",
    "游戏主体": "TranslatedText",
    "该模型不具备上下文对话能力": "TranslatedText",
    "列3": "TranslatedText",
    "清理": "TranslatedText",
    "检查量化配置": "TranslatedText",
    "如果游戏结束": "TranslatedText",
    "蛇": "TranslatedText",
    "则继续该实例;否则重新初始化": "TranslatedText",
    "e.g. cat and 猫 are the same thing": "TranslatedText",
    "第三个模型": "TranslatedText",
    "如果你选择Qwen系列的模型": "TranslatedText",
    "列4": "TranslatedText",
    "输入“exit”获取答案": "TranslatedText",
    "把它放到子进程中运行": "TranslatedText",
    "列1": "TranslatedText",
    "使用该模型需要额外依赖": "TranslatedText",
    "再试试": "TranslatedText",
    "1. 上传图片": "TranslatedText",
    "保存状态": "TranslatedText",
    "GPT-Academic对话存档": "TranslatedText",
    "Arxiv论文精细翻译": "TranslatedText"
}

@@ -1043,9 +1043,9 @@
    "jittorllms响应异常": "jittorllms response exception",
    "在项目根目录运行这两个指令": "Run these two commands in the project root directory",
    "获取tokenizer": "Get tokenizer",
    "chatbot 为WebUI中显示的对话列表": "chatbot is the list of dialogues displayed in WebUI",
    "chatbot 为WebUI中显示的对话列表": "chatbot is the list of conversations displayed in WebUI",
    "test_解析一个Cpp项目": "test_parse a Cpp project",
    "将对话记录history以Markdown格式写入文件中": "Write the dialogue record history to a file in Markdown format",
    "将对话记录history以Markdown格式写入文件中": "Write the conversations record history to a file in Markdown format",
    "装饰器函数": "Decorator function",
    "玫瑰色": "Rose color",
    "将单空行": "刪除單行空白",
@@ -2270,4 +2270,4 @@
    "标注节点的行数范围": "標註節點的行數範圍",
    "默认 True": "默認 True",
    "将两个PDF拼接": "將兩個PDF拼接"
}
}

@@ -258,39 +258,7 @@ function loadTipsMessage(result) {
    });

    window.showWelcomeMessage = function(result) {
        var text;
        if (window.location.href == live2d_settings.homePageUrl) {
            var now = (new Date()).getHours();
            if (now > 23 || now <= 5) text = getRandText(result.waifu.hour_tips['t23-5']);
            else if (now > 5 && now <= 7) text = getRandText(result.waifu.hour_tips['t5-7']);
            else if (now > 7 && now <= 11) text = getRandText(result.waifu.hour_tips['t7-11']);
            else if (now > 11 && now <= 14) text = getRandText(result.waifu.hour_tips['t11-14']);
            else if (now > 14 && now <= 17) text = getRandText(result.waifu.hour_tips['t14-17']);
            else if (now > 17 && now <= 19) text = getRandText(result.waifu.hour_tips['t17-19']);
            else if (now > 19 && now <= 21) text = getRandText(result.waifu.hour_tips['t19-21']);
            else if (now > 21 && now <= 23) text = getRandText(result.waifu.hour_tips['t21-23']);
            else text = getRandText(result.waifu.hour_tips.default);
        } else {
            var referrer_message = result.waifu.referrer_message;
            if (document.referrer !== '') {
                var referrer = document.createElement('a');
                referrer.href = document.referrer;
                var domain = referrer.hostname.split('.')[1];
                if (window.location.hostname == referrer.hostname)
                    text = referrer_message.localhost[0] + document.title.split(referrer_message.localhost[2])[0] + referrer_message.localhost[1];
                else if (domain == 'baidu')
                    text = referrer_message.baidu[0] + referrer.search.split('&wd=')[1].split('&')[0] + referrer_message.baidu[1];
                else if (domain == 'so')
                    text = referrer_message.so[0] + referrer.search.split('&q=')[1].split('&')[0] + referrer_message.so[1];
                else if (domain == 'google')
                    text = referrer_message.google[0] + document.title.split(referrer_message.google[2])[0] + referrer_message.google[1];
                else {
                    $.each(result.waifu.referrer_hostname, function(i,val) {if (i==referrer.hostname) referrer.hostname = getRandText(val)});
                    text = referrer_message.default[0] + referrer.hostname + referrer_message.default[1];
                }
            } else text = referrer_message.none[0] + document.title.split(referrer_message.none[2])[0] + referrer_message.none[1];
        }
        showMessage(text, 6000);
        showMessage('欢迎使用GPT-Academic', 6000);
    }; if (live2d_settings.showWelcomeMessage) showWelcomeMessage(result);

    var waifu_tips = result.waifu;

@@ -83,8 +83,8 @@
        "很多强大的函数插件隐藏在下拉菜单中呢。",
        "红色的插件,使用之前需要把文件上传进去哦。",
        "想添加功能按钮吗?读读readme很容易就学会啦。",
        "敏感或机密的信息,不可以问chatGPT的哦!",
        "chatGPT究竟是划时代的创新,还是扼杀创造力的毒药呢?"
        "敏感或机密的信息,不可以问AI的哦!",
        "LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?"
    ] }
],
"click": [
@@ -92,8 +92,6 @@
    "selector": ".waifu #live2d",
    "text": [
        "是…是不小心碰到了吧",
        "萝莉控是什么呀",
        "你看到我的小熊了吗",
        "再摸的话我可要报警了!⌇●﹏●⌇",
        "110吗,这里有个变态一直在摸我(ó﹏ò。)"
    ]

main.py · 7 行变更
@@ -85,7 +85,7 @@ def main():
    with gr_L2(scale=1, elem_id="gpt-panel"):
        with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
            with gr.Row():
                txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
                txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
            with gr.Row():
                submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
            with gr.Row():
@@ -146,7 +146,7 @@ def main():
    with gr.Row():
        with gr.Tab("上传文件", elem_id="interact-panel"):
            gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
            file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple")
            file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")

        with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
            md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
@@ -178,7 +178,8 @@ def main():
    with gr.Row() as row:
        row.style(equal_height=True)
        with gr.Column(scale=10):
            txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", lines=8, label="输入区2").style(container=False)
            txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
                              elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
        with gr.Column(scale=1, min_width=40):
            submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
            resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")

@@ -182,12 +182,12 @@ cached_translation = read_map_from_json(language=LANG)
def trans(word_to_translate, language, special=False):
    if len(word_to_translate) == 0: return {}
    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from toolbox import get_conf, ChatBotWithCookies
    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
    from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies

    cookies = load_chat_cookies()
    llm_kwargs = {
        'api_key': API_KEY,
        'llm_model': LLM_MODEL,
        'api_key': cookies['api_key'],
        'llm_model': cookies['llm_model'],
        'top_p':1.0,
        'max_length': None,
        'temperature':0.4,
@@ -245,15 +245,15 @@ def trans(word_to_translate, language, special=False):
def trans_json(word_to_translate, language, special=False):
    if len(word_to_translate) == 0: return {}
    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from toolbox import get_conf, ChatBotWithCookies
    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
    from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies

    cookies = load_chat_cookies()
    llm_kwargs = {
        'api_key': API_KEY,
        'llm_model': LLM_MODEL,
        'api_key': cookies['api_key'],
        'llm_model': cookies['llm_model'],
        'top_p':1.0,
        'max_length': None,
        'temperature':0.1,
        'temperature':0.4,
    }
    import random
    N_EACH_REQ = random.randint(16, 32)

@@ -1,79 +1,35 @@
# 如何使用其他大语言模型

## ChatGLM

- 安装依赖 `pip install -r request_llms/requirements_chatglm.txt`
- 修改配置,在config.py中将LLM_MODEL的值改为"chatglm"

``` sh
LLM_MODEL = "chatglm"
```
- 运行!
``` sh
python main.py
```

## Claude-Stack

- 请参考此教程获取 https://zhuanlan.zhihu.com/p/627485689
  - 1、SLACK_CLAUDE_BOT_ID
  - 2、SLACK_CLAUDE_USER_TOKEN

- 把token加入config.py

## Newbing

- 使用cookie editor获取cookie(json)
- 把cookie(json)加入config.py (NEWBING_COOKIES)

## Moss
- 使用docker-compose

## RWKV
- 使用docker-compose

## LLAMA
- 使用docker-compose

## 盘古
- 使用docker-compose

P.S. 如果您按照以下步骤成功接入了新的大模型,欢迎发Pull Requests(如果您在自己接入新模型的过程中遇到困难,欢迎加README底部QQ群联系群主)


---
## Text-Generation-UI (TGUI,调试中,暂不可用)
# 如何接入其他本地大语言模型

### 1. 部署TGUI
``` sh
# 1 下载模型
git clone https://github.com/oobabooga/text-generation-webui.git
# 2 这个仓库的最新代码有问题,回滚到几周之前
git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d
# 3 切换路径
cd text-generation-webui
# 4 安装text-generation的额外依赖
pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
# 5 下载模型
python download-model.py facebook/galactica-1.3b
# 其他可选如 facebook/opt-1.3b
#           facebook/galactica-1.3b
#           facebook/galactica-6.7b
#           facebook/galactica-120b
#           facebook/pygmalion-1.3b 等
# 详情见 https://github.com/oobabooga/text-generation-webui
1. 复制`request_llms/bridge_llama2.py`,重命名为你喜欢的名字(整体骨架可参考本节末尾的示意)

# 6 启动text-generation
python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b
```
2. 修改`load_model_and_tokenizer`方法,加载你的模型和分词器(去该模型官网找demo,复制粘贴即可)

### 2. 修改config.py
3. 修改`llm_stream_generator`方法,定义推理模型(去该模型官网找demo,复制粘贴即可)

``` sh
# LLM_MODEL格式: tgui:[模型]@[ws地址]:[ws端口] , 端口要和上面给定的端口一致
LLM_MODEL = "tgui:galactica-1.3b@localhost:7860"
```
4. 命令行测试
  - 修改`tests/test_llms.py`(聪慧如您,只需要看一眼该文件就明白怎么修改了)
  - 运行`python tests/test_llms.py`

### 3. 运行!
``` sh
cd chatgpt-academic
python main.py
```
5. 测试通过后,在`request_llms/bridge_all.py`中做最后的修改,把你的模型完全接入到框架中(聪慧如您,只需要看一眼该文件就明白怎么修改了)

6. 修改`LLM_MODEL`配置,然后运行`python main.py`,测试最后的效果
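
下面是一个极简的骨架示意(假设性示例:`MyModelHandle`、模型名等均为占位符,方法的完整语义请以 `request_llms/bridge_llama2.py` 与 `request_llms/local_llm_class.py` 为准):

``` python
model_name = "my-local-model"      # 占位:你的模型名
cmd_to_install = "`pip install -r request_llms/requirements_xxx.txt`"  # 占位:依赖安装提示

from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns

class MyModelHandle(LocalLLMHandle):          # 占位:你喜欢的类名
    def load_model_info(self):
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 从该模型官网 demo 复制加载代码,返回 (model, tokenizer)
        ...

    def llm_stream_generator(self, **kwargs):
        # 从该模型官网 demo 复制推理代码,逐段 yield 已生成的文本
        ...

    def try_to_import_special_deps(self, **kwargs):
        pass  # import 额外依赖,缺失时抛异常以提示用户安装

predict_no_ui_long_connection, predict = get_local_llm_predict_fns(MyModelHandle, model_name)
```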


# 如何接入其他在线大语言模型

1. 复制`request_llms/bridge_zhipu.py`,重命名为你喜欢的名字(骨架示意见本节末尾)

2. 修改`predict_no_ui_long_connection`

3. 修改`predict`

4. 命令行测试
  - 修改`tests/test_llms.py`(聪慧如您,只需要看一眼该文件就明白怎么修改了)
  - 运行`python tests/test_llms.py`

5. 测试通过后,在`request_llms/bridge_all.py`中做最后的修改,把你的模型完全接入到框架中(聪慧如您,只需要看一眼该文件就明白怎么修改了)

6. 修改`LLM_MODEL`配置,然后运行`python main.py`,测试最后的效果
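
同样给出一个骨架示意(假设性示例,两个函数的完整签名与语义请以 `request_llms/bridge_zhipu.py`、`request_llms/bridge_chatgpt.py` 为准):

``` python
# request_llms/my_bridge.py(占位文件名)
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
    # 阻塞式调用:请求在线 API,把增量文本写入 observe_window[0],最终返回完整回复字符串
    ...

def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
    # 流式调用:每收到一段增量就更新 chatbot 并 yield from update_ui(...) 刷新界面
    ...
```

测试通过后,在 `request_llms/bridge_all.py` 的 `model_info` 中登记这两个函数(可参考下方本次提交中 deepseekcoder 的接入方式)。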

@@ -543,6 +543,22 @@ if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai
        })
    except:
        print(trimmed_format_exc())
if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
    try:
        from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
        from .bridge_deepseekcoder import predict as deepseekcoder_ui
        model_info.update({
            "deepseekcoder": {
                "fn_with_ui": deepseekcoder_ui,
                "fn_without_ui": deepseekcoder_noui,
                "endpoint": None,
                "max_token": 2048,
                "tokenizer": tokenizer_gpt35,
                "token_cnt": get_token_num_gpt35,
            }
        })
    except:
        print(trimmed_format_exc())

# <-- 用于定义和切换多个azure模型 -->
AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")

@@ -28,6 +28,12 @@ proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                  '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'

def report_invalid_key(key):
    if get_conf("BLOCK_INVALID_APIKEY"):
        # 实验性功能,自动检测并屏蔽失效的KEY,请勿使用
        from request_llms.key_manager import ApiKeyManager
        api_key = ApiKeyManager().add_key_to_blacklist(key)

def get_full_error(chunk, stream_response):
    """
    获取完整的从Openai返回的报错
@@ -82,7 +88,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
        用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
    """
    watch_dog_patience = 5  # 看门狗的耐心, 设置5秒即可
    headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
    headers, payload, api_key = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
    retry = 0
    while True:
        try:
@@ -112,6 +118,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
                if "reduce the length" in error_msg:
                    raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
                else:
                    if "API key has been deactivated" in error_msg: report_invalid_key(api_key)
                    elif "exceeded your current quota" in error_msg: report_invalid_key(api_key)
                    raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
            if ('data: [DONE]' in chunk): break  # api2d 正常完成
            json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
@@ -174,7 +182,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
        time.sleep(2)

    try:
        headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
        headers, payload, api_key = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
    except RuntimeError as e:
        chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
        yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求")  # 刷新界面
@@ -222,7 +230,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                    yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
                    break
                # 其他情况,直接返回报错
                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
                yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode())  # 刷新界面
                return

@@ -264,12 +272,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                chunk = get_full_error(chunk, stream_response)
                chunk_decoded = chunk.decode()
                error_msg = chunk_decoded
                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
                yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg)  # 刷新界面
                print(error_msg)
                return

def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key=""):
    from .bridge_all import model_info
    openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
    if "reduce the length" in error_msg:
@@ -280,15 +288,15 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
    elif "does not exist" in error_msg:
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
    elif "Incorrect API key" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website)
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website); report_invalid_key(api_key)
    elif "exceeded your current quota" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website)
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
    elif "account is not active" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website)
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
    elif "associated with a deactivated account" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website)
        chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
    elif "API key has been deactivated" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website)
        chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
    elif "bad forward key" in error_msg:
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
    elif "Not enough point" in error_msg:
@@ -371,6 +379,6 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
            print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
    except:
        print('输入中可能存在乱码。')
    return headers,payload
    return headers, payload, api_key



@@ -15,29 +15,16 @@ import requests
import base64
import os
import glob
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, \
    update_ui_lastest_msg, get_max_token, encode_image, have_any_recent_upload_image_files


from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, update_ui_lastest_msg, get_max_token
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
    get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')

timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                  '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'

def have_any_recent_upload_image_files(chatbot):
    _5min = 5 * 60
    if chatbot is None: return False, None  # chatbot is None
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    if not most_recent_uploaded: return False, None  # most_recent_uploaded is None
    if time.time() - most_recent_uploaded["time"] < _5min:
        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
        path = most_recent_uploaded['path']
        file_manifest = [f for f in glob.glob(f'{path}/**/*.jpg', recursive=True)]
        file_manifest += [f for f in glob.glob(f'{path}/**/*.jpeg', recursive=True)]
        file_manifest += [f for f in glob.glob(f'{path}/**/*.png', recursive=True)]
        if len(file_manifest) == 0: return False, None
        return True, file_manifest  # most_recent_uploaded is new
    else:
        return False, None  # most_recent_uploaded is too old

def report_invalid_key(key):
    if get_conf("BLOCK_INVALID_APIKEY"):
@@ -258,10 +245,6 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg,
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
    return chatbot, history

# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
    """

request_llms/bridge_deepseekcoder.py · 新增 129 行 · 普通文件
@@ -0,0 +1,129 @@
model_name = "deepseek-coder-6.7b-instruct"
cmd_to_install = "未知"  # "`pip install -r request_llms/requirements_qwen.txt`"

import os
from toolbox import ProxyNetworkActivate
from toolbox import get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
from threading import Thread
import torch

def download_huggingface_model(model_name, max_retry, local_dir):
    from huggingface_hub import snapshot_download
    for i in range(1, max_retry):
        try:
            snapshot_download(repo_id=model_name, local_dir=local_dir, resume_download=True)
            break
        except Exception as e:
            print(f'\n\n下载失败,重试第{i}次中...\n\n')
    return local_dir
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetCoderLMHandle(LocalLLMHandle):

    def load_model_info(self):
        # 🏃♂️🏃♂️🏃♂️ 子进程执行
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 🏃♂️🏃♂️🏃♂️ 子进程执行
        with ProxyNetworkActivate('Download_LLM'):
            from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
            model_name = "deepseek-ai/deepseek-coder-6.7b-instruct"
            # local_dir = f"~/.cache/{model_name}"
            # if not os.path.exists(local_dir):
            #     tokenizer = download_huggingface_model(model_name, max_retry=128, local_dir=local_dir)
            tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
            self._streamer = TextIteratorStreamer(tokenizer)
            device_map = {
                "transformer.word_embeddings": 0,
                "transformer.word_embeddings_layernorm": 0,
                "lm_head": 0,
                "transformer.h": 0,
                "transformer.ln_f": 0,
                "model.embed_tokens": 0,
                "model.layers": 0,
                "model.norm": 0,
            }

            # 检查量化配置
            quantization_type = get_conf('LOCAL_MODEL_QUANT')

            if get_conf('LOCAL_MODEL_DEVICE') != 'cpu':
                if quantization_type == "INT8":
                    from transformers import BitsAndBytesConfig
                    # 使用 INT8 量化
                    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, load_in_8bit=True,
                                                                 device_map=device_map)
                elif quantization_type == "INT4":
                    from transformers import BitsAndBytesConfig
                    # 使用 INT4 量化
                    bnb_config = BitsAndBytesConfig(
                        load_in_4bit=True,
                        bnb_4bit_use_double_quant=True,
                        bnb_4bit_quant_type="nf4",
                        bnb_4bit_compute_dtype=torch.bfloat16
                    )
                    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
                                                                 quantization_config=bnb_config, device_map=device_map)
                else:
                    # 使用默认的 FP16
                    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
                                                                 torch_dtype=torch.bfloat16, device_map=device_map)
            else:
                # CPU 模式
                model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
                                                             torch_dtype=torch.bfloat16)

        return model, tokenizer

    def llm_stream_generator(self, **kwargs):
        # 🏃♂️🏃♂️🏃♂️ 子进程执行
        def adaptor(kwargs):
            query = kwargs['query']
            max_length = kwargs['max_length']
            top_p = kwargs['top_p']
            temperature = kwargs['temperature']
            history = kwargs['history']
            return query, max_length, top_p, temperature, history

        query, max_length, top_p, temperature, history = adaptor(kwargs)
        history.append({ 'role': 'user', 'content': query})
        messages = history
        inputs = self._tokenizer.apply_chat_template(messages, return_tensors="pt")
        if inputs.shape[1] > max_length:
            inputs = inputs[:, -max_length:]
        inputs = inputs.to(self._model.device)
        generation_kwargs = dict(
            inputs=inputs,
            max_new_tokens=max_length,
            do_sample=False,
            top_p=top_p,
            streamer = self._streamer,
            top_k=50,
            temperature=temperature,
            num_return_sequences=1,
            eos_token_id=32021,
        )
        thread = Thread(target=self._model.generate, kwargs=generation_kwargs, daemon=True)
        thread.start()
        generated_text = ""
        for new_text in self._streamer:
            generated_text += new_text
            # print(generated_text)
            yield generated_text

    def try_to_import_special_deps(self, **kwargs): pass
        # import something that will raise error if the user does not install requirement_*.txt
        # 🏃♂️🏃♂️🏃♂️ 主进程执行
        # import importlib
        # importlib.import_module('modelscope')


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetCoderLMHandle, model_name, history_format='chatglm3')
|
||||
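For context, a minimal sketch of how the exported entry point above is exercised (mirroring the pattern used in tests/test_llms.py later in this diff; the `llm_kwargs` values and exact keyword names are illustrative assumptions, not project defaults):

```python
# Illustrative only: drive the deepseekcoder bridge the way tests/test_llms.py does.
from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection

llm_kwargs = {
    'llm_model': 'deepseekcoder',  # assumption: the alias under which this bridge is registered
    'max_length': 4096,
    'top_p': 0.9,
    'temperature': 0.7,
}
result = predict_no_ui_long_connection(
    inputs="Write a quicksort in Python",
    llm_kwargs=llm_kwargs,
    history=[],
    sys_prompt="",
)
print(result)
```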
@@ -12,7 +12,7 @@ from threading import Thread
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 Local Model
 # ------------------------------------------------------------------------------------------------------------------------
-class GetONNXGLMHandle(LocalLLMHandle):
+class GetLlamaHandle(LocalLLMHandle):

     def load_model_info(self):
         # 🏃♂️🏃♂️🏃♂️ Runs in the child process
@@ -87,4 +87,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 GPT-Academic Interface
 # ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetLlamaHandle, model_name)
@@ -1,13 +1,7 @@
 model_name = "Qwen"
 cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"

-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf, ProxyNetworkActivate
-from multiprocessing import Process, Pipe
+from toolbox import ProxyNetworkActivate, get_conf
 from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns


@@ -15,7 +9,7 @@ from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 Local Model
 # ------------------------------------------------------------------------------------------------------------------------
-class GetONNXGLMHandle(LocalLLMHandle):
+class GetQwenLMHandle(LocalLLMHandle):

     def load_model_info(self):
         # 🏃♂️🏃♂️🏃♂️ Runs in the child process
@@ -24,16 +18,14 @@ class GetONNXGLMHandle(LocalLLMHandle):

     def load_model_and_tokenizer(self):
         # 🏃♂️🏃♂️🏃♂️ Runs in the child process
-        import os, glob
+        import os
         import platform
-        from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
+        # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
+        from transformers import AutoModelForCausalLM, AutoTokenizer
+        from transformers.generation import GenerationConfig
         with ProxyNetworkActivate('Download_LLM'):
-            model_id = 'qwen/Qwen-7B-Chat'
-            self._tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen-7B-Chat', trust_remote_code=True, resume_download=True)
+            model_id = get_conf('QWEN_MODEL_SELECTION')
+            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
-            # use fp16
-            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, fp16=True).eval()
+            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
             model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True)  # generation length, top_p and other hyper-parameters can be customized here
             self._model = model

@@ -51,7 +43,7 @@ class GetONNXGLMHandle(LocalLLMHandle):

         query, max_length, top_p, temperature, history = adaptor(kwargs)

-        for response in self._model.chat(self._tokenizer, query, history=history, stream=True):
+        for response in self._model.chat_stream(self._tokenizer, query, history=history):
             yield response

     def try_to_import_special_deps(self, **kwargs):
@@ -64,4 +56,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 GPT-Academic Interface
 # ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
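A hedged illustration of the new `QWEN_MODEL_SELECTION` switch above: the value is read through `get_conf`, so it can be overridden in `config_private.py`. The model id shown is one plausible choice, matching the previously hardcoded default:

```python
# config_private.py (sketch): pick which Qwen checkpoint the bridge loads.
QWEN_MODEL_SELECTION = "Qwen/Qwen-7B-Chat"  # previous hardcoded value; other compatible HF model ids should also work
```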
@@ -1,6 +1,7 @@

 import time
 from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import check_packages, report_exception

 model_name = '智谱AI大模型'

@@ -37,6 +38,14 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history)

+    # Try to import the dependency; if it is missing, give installation advice
+    try:
+        check_packages(["zhipuai"])
+    except:
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return
+
     if validate_key() is False:
         yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
         return
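For completeness, a sketch of the configuration the `validate_key()` check above expects; the key value is a placeholder, and the `LLM_MODEL` alias is an assumption based on this project's model registry:

```python
# config_private.py (sketch, placeholder values)
ZHIPUAI_API_KEY = "your-zhipu-api-key"  # placeholder; obtain a real key from the Zhipu AI open platform
LLM_MODEL = "zhipuai"                   # assumption: the alias under which this bridge is registered
```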
@@ -1,4 +1,4 @@
-from toolbox import get_conf
+from toolbox import get_conf, get_pictures_list, encode_image
 import base64
 import datetime
 import hashlib
@@ -65,6 +65,7 @@ class SparkRequestInstance():
         self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
         self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
         self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
+        self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"

         self.time_to_yield_event = threading.Event()
         self.time_to_exit_event = threading.Event()
@@ -92,7 +93,11 @@ class SparkRequestInstance():
             gpt_url = self.gpt_url_v3
         else:
             gpt_url = self.gpt_url
+        file_manifest = []
+        if llm_kwargs.get('most_recent_uploaded'):
+            if llm_kwargs['most_recent_uploaded'].get('path'):
+                file_manifest = get_pictures_list(llm_kwargs['most_recent_uploaded']['path'])
+                gpt_url = self.gpt_url_img
         wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
         websocket.enableTrace(False)
         wsUrl = wsParam.create_url()
@@ -101,9 +106,8 @@ class SparkRequestInstance():
         def on_open(ws):
             import _thread as thread
             thread.start_new_thread(run, (ws,))

         def run(ws, *args):
-            data = json.dumps(gen_params(ws.appid, *ws.all_args))
+            data = json.dumps(gen_params(ws.appid, *ws.all_args, file_manifest))
             ws.send(data)

         # Handle incoming websocket messages
@@ -142,9 +146,18 @@ class SparkRequestInstance():
         ws.all_args = (inputs, llm_kwargs, history, system_prompt)
         ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})

-def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
+def generate_message_payload(inputs, llm_kwargs, history, system_prompt, file_manifest):
     conversation_cnt = len(history) // 2
-    messages = [{"role": "system", "content": system_prompt}]
+    messages = []
+    if file_manifest:
+        base64_images = []
+        for image_path in file_manifest:
+            base64_images.append(encode_image(image_path))
+        for img_s in base64_images:
+            if img_s not in str(messages):
+                messages.append({"role": "user", "content": img_s, "content_type": "image"})
+    else:
+        messages = [{"role": "system", "content": system_prompt}]
     if conversation_cnt:
         for index in range(0, 2*conversation_cnt, 2):
             what_i_have_asked = {}
@@ -167,7 +180,7 @@ def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
     return messages


-def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
+def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest):
     """
     Generate the request parameters from the appid and the user's query
     """
@@ -176,6 +189,8 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         "sparkv2": "generalv2",
         "sparkv3": "generalv3",
     }
+    domains_select = domains[llm_kwargs['llm_model']]
+    if file_manifest: domains_select = 'image'
     data = {
         "header": {
             "app_id": appid,
@@ -183,7 +198,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         },
         "parameter": {
             "chat": {
-                "domain": domains[llm_kwargs['llm_model']],
+                "domain": domains_select,
                 "temperature": llm_kwargs["temperature"],
                 "random_threshold": 0.5,
                 "max_tokens": 4096,
@@ -192,7 +207,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         },
         "payload": {
             "message": {
-                "text": generate_message_payload(inputs, llm_kwargs, history, system_prompt)
+                "text": generate_message_payload(inputs, llm_kwargs, history, system_prompt, file_manifest)
             }
         }
     }
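A hedged sketch of what `generate_message_payload` now produces for an image request under the changes above (the base64 string is truncated; exact field semantics are defined by the Spark API this module targets):

```python
# Shape of the "text" payload when an image was recently uploaded (illustrative):
[
    {"role": "user", "content": "<base64-encoded image bytes...>", "content_type": "image"},
    {"role": "user", "content": "What is in this picture?"},
]
# With no image, the list starts with {"role": "system", "content": system_prompt} instead,
# and gen_params keeps the text domain; with an image, the domain is switched to 'image'.
```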
@@ -1,4 +1,6 @@
 import random
+import os
+from toolbox import get_log_folder

 def Singleton(cls):
     _instance = {}
@@ -12,18 +14,41 @@ def Singleton(cls):


 @Singleton
-class OpenAI_ApiKeyManager():
+class ApiKeyManager():
+    """
+    Only failed keys are kept in memory
+    """
     def __init__(self, mode='blacklist') -> None:
-        # self.key_avail_list = []
         self.key_black_list = []
+        self.debug = False
+        self.log = True
+        self.remain_keys = []

     def add_key_to_blacklist(self, key):
         self.key_black_list.append(key)
+        if self.debug: print('black list key added', key)
+        if self.log:
+            with open(
+                os.path.join(get_log_folder(user='admin', plugin_name='api_key_manager'), 'invalid_key.log'), 'a+', encoding='utf8') as f:
+                summary = 'num blacklist keys:' + str(len(self.key_black_list)) + '\tnum valid keys:' + str(len(self.remain_keys))
+                f.write('\n\n' + summary + '\n')
+                f.write('---- <add blacklist key> ----\n')
+                f.write(key)
+                f.write('\n')
+                f.write('---- <all blacklist keys> ----\n')
+                f.write(str(self.key_black_list))
+                f.write('\n')
+                f.write('---- <remain keys> ----\n')
+                f.write(str(self.remain_keys))
+                f.write('\n')

     def select_avail_key(self, key_list):
         # select key from key_list, but avoid keys also in self.key_black_list, raise error if no key can be found
         available_keys = [key for key in key_list if key not in self.key_black_list]
         if not available_keys:
-            raise KeyError("No available key found.")
+            raise KeyError("所有API KEY都被OPENAI拒绝了")
         selected_key = random.choice(available_keys)
+        if self.debug: print('total keys', len(key_list), 'valid keys', len(available_keys))
+        if self.log: self.remain_keys = available_keys
         return selected_key
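A minimal usage sketch of the renamed `ApiKeyManager` (the key strings are placeholders; note that the `@Singleton` decorator above makes every call site share one instance):

```python
from request_llms.key_manager import ApiKeyManager

manager = ApiKeyManager()                       # same instance everywhere, thanks to @Singleton
manager.add_key_to_blacklist("sk-expired-key")  # placeholder key; also appended to invalid_key.log
key = manager.select_avail_key(["sk-expired-key", "sk-good-key"])
assert key == "sk-good-key"                     # blacklisted keys are never returned
```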
@@ -198,7 +198,7 @@ class LocalLLMHandle(Process):
             if res.startswith(self.std_tag):
                 new_output = res[len(self.std_tag):]
                 std_out = std_out[:std_out_clip_len]
-                # print(new_output, end='')
+                print(new_output, end='')
                 std_out = new_output + std_out
                 yield self.std_tag + '\n```\n' + std_out + '\n```\n'
             elif res == '[Finish]':
@@ -1,2 +1,4 @@
-modelscope
 transformers_stream_generator
+auto-gptq
+optimum
@@ -15,8 +15,10 @@ if __name__ == "__main__":
     # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
     # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
     # from request_llms.bridge_claude import predict_no_ui_long_connection
-    from request_llms.bridge_internlm import predict_no_ui_long_connection
-    # from request_llms.bridge_qwen import predict_no_ui_long_connection
+    # from request_llms.bridge_internlm import predict_no_ui_long_connection
+    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+    from request_llms.bridge_qwen import predict_no_ui_long_connection
     # from request_llms.bridge_spark import predict_no_ui_long_connection
     # from request_llms.bridge_zhipu import predict_no_ui_long_connection
     # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection

@@ -48,11 +48,11 @@ if __name__ == "__main__":
     # for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
    #     plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})

-    # plugin_test(plugin='crazy_functions.Langchain知识库->知识库问答', main_input="./")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->知识库文件注入', main_input="./")

-    # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="What is the installation method?")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method?")

-    # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="远程云服务器部署?")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")

     # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
@@ -56,11 +56,11 @@ vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
 vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
 vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
 vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
-vt.chat_to_markdown_str = chat_to_markdown_str
+vt.chat_to_markdown_str = (chat_to_markdown_str)
 proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
     vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')

-def plugin_test(main_input, plugin, advanced_arg=None):
+def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
     from rich.live import Live
     from rich.markdown import Markdown

@@ -72,7 +72,10 @@ def plugin_test(main_input, plugin, advanced_arg=None):
     plugin_kwargs['main_input'] = main_input
     if advanced_arg is not None:
         plugin_kwargs['plugin_kwargs'] = advanced_arg
-    my_working_plugin = silence_stdout(plugin)(**plugin_kwargs)
+    if debug:
+        my_working_plugin = (plugin)(**plugin_kwargs)
+    else:
+        my_working_plugin = silence_stdout(plugin)(**plugin_kwargs)

     with Live(Markdown(""), auto_refresh=False, vertical_overflow="visible") as live:
         for cookies, chat, hist, msg in my_working_plugin:
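A brief note on the new `debug` flag above: it defaults to `True`, so plugin tests now surface the plugin's stdout; passing `debug=False` restores the previous silenced behavior, for example:

```python
# Illustrative call: run a plugin test with stdout silenced, as before this change.
plugin_test(plugin='crazy_functions.知识库问答->知识库文件注入', main_input="./README.md", debug=False)
```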
tests/test_vector_plugins.py (new file, 17 lines)
@@ -0,0 +1,17 @@
+"""
+Tests for the plugins in this project. How to run: execute `python tests/test_plugins.py` directly.
+"""
+
+import os, sys
+def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
+validate_path()  # switch to the project root path
+
+if __name__ == "__main__":
+    from tests.test_utils import plugin_test
+
+    plugin_test(plugin='crazy_functions.知识库问答->知识库文件注入', main_input="./README.md")
+
+    plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="What is the installation method?")
+
+    plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="远程云服务器部署?")
themes/common.js (106 lines changed)
@@ -122,7 +122,7 @@ function chatbotAutoHeight(){
         chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
         }
     }
+    monitoring_input_box()
     update_height();
     setInterval(function() {
         update_height_slow()
@@ -160,4 +160,106 @@ function get_elements(consider_state_panel=false) {
     var chatbot_height = chatbot.style.height;
     var chatbot_height = parseInt(chatbot_height);
     return { panel_height_target, chatbot_height, chatbot };
 }

+function add_func_paste(input) {
+    let paste_files = [];
+    if (input) {
+        input.addEventListener("paste", async function (e) {
+            const clipboardData = e.clipboardData || window.clipboardData;
+            const items = clipboardData.items;
+            if (items) {
+                for (let i = 0; i < items.length; i++) {
+                    if (items[i].kind === "file") { // make sure the item is a file
+                        const file = items[i].getAsFile();
+                        // add each pasted file to the files array
+                        paste_files.push(file);
+                        e.preventDefault(); // avoid pasting the file name into the input box
+                    }
+                }
+                if (paste_files.length > 0) {
+                    // batch-upload the collected file list
+                    await paste_upload_files(paste_files);
+                    paste_files = []
+                }
+            }
+        });
+    }
+}
+
+async function paste_upload_files(files) {
+    const uploadInputElement = elem_upload_float.querySelector("input[type=file]");
+    let totalSizeMb = 0
+    if (files && files.length > 0) {
+        // perform the actual upload
+        if (uploadInputElement) {
+            for (let i = 0; i < files.length; i++) {
+                // convert each file size (in bytes) to MB
+                totalSizeMb += files[i].size / 1024 / 1024;
+            }
+            // check whether the total size exceeds 20MB
+            if (totalSizeMb > 20) {
+                toast_push('⚠️文件夹大于20MB 🚀上传文件中', 2000)
+                // return; // if the limit is exceeded, the upload could be aborted here
+            }
+            // listen for the change event; native Gradio supports this
+            // uploadInputElement.addEventListener('change', function(){replace_input_string()});
+            let event = new Event("change");
+            Object.defineProperty(event, "target", {value: uploadInputElement, enumerable: true});
+            Object.defineProperty(event, "currentTarget", {value: uploadInputElement, enumerable: true});
+            Object.defineProperty(uploadInputElement, "files", {value: files, enumerable: true});
+            uploadInputElement.dispatchEvent(event);
+            // toast_push('🎉上传文件成功', 2000)
+        } else {
+            toast_push('⚠️请先删除上传区中的历史文件,再尝试粘贴。', 2000)
+        }
+    }
+}
+
+// toast notification helper
+function toast_push(msg, duration) {
+    duration = isNaN(duration) ? 3000 : duration;
+    const m = document.createElement('div');
+    m.innerHTML = msg;
+    m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255);background-color: rgba(0, 0, 0, 0.6);padding: 10px 15px;margin: 0 0 0 -60px;border-radius: 4px;position: fixed; top: 50%;left: 50%;width: auto; text-align: center;";
+    document.body.appendChild(m);
+    setTimeout(function () {
+        var d = 0.5;
+        m.style.opacity = '0';
+        setTimeout(function () {
+            document.body.removeChild(m)
+        }, d * 1000);
+    }, duration);
+}
+
+var elem_upload = null;
+var elem_upload_float = null;
+var elem_input_main = null;
+var elem_input_float = null;
+
+function monitoring_input_box() {
+    elem_upload = document.getElementById('elem_upload')
+    elem_upload_float = document.getElementById('elem_upload_float')
+    elem_input_main = document.getElementById('user_input_main')
+    elem_input_float = document.getElementById('user_input_float')
+    if (elem_input_main) {
+        if (elem_input_main.querySelector("textarea")) {
+            add_func_paste(elem_input_main.querySelector("textarea"))
+        }
+    }
+    if (elem_input_float) {
+        if (elem_input_float.querySelector("textarea")){
+            add_func_paste(elem_input_float.querySelector("textarea"))
+        }
+    }
+}
+
+// watch for page re-renders
+window.addEventListener("DOMContentLoaded", function () {
+    // const ga = document.getElementsByTagName("gradio-app");
+    gradioApp().addEventListener("render", monitoring_input_box);
+});
@@ -1,6 +1,8 @@
+import os
 import gradio as gr
 from toolbox import get_conf
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)

 def adjust_theme():

@@ -57,7 +59,7 @@ def adjust_theme():
             button_cancel_text_color_dark="white",
         )

-        with open('themes/common.js', 'r', encoding='utf8') as f:
+        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
             js = f"<script>{f.read()}</script>"

         # add a cute Live2D mascot
@@ -67,7 +69,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
             """
-        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        if not hasattr(gr, 'RawTemplateResponse'):
+            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+        gradio_original_template_fn = gr.RawTemplateResponse
         def gradio_new_template_fn(*args, **kwargs):
             res = gradio_original_template_fn(*args, **kwargs)
             res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -79,7 +83,7 @@ def adjust_theme():
         print('gradio版本较旧, 不能自定义字体和颜色')
     return set_theme

-with open("themes/contrast.css", "r", encoding="utf-8") as f:
+with open(os.path.join(theme_dir, 'contrast.css'), "r", encoding="utf-8") as f:
     advanced_css = f.read()
-with open("themes/common.css", "r", encoding="utf-8") as f:
+with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
     advanced_css += f.read()
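A minimal sketch (assuming gradio's `routes.templates.TemplateResponse` API, exactly as used above) of why this change caches the original class on `gr.RawTemplateResponse`: `adjust_theme()` may run more than once, and without the cache each run would wrap the previous wrapper, injecting the `<script>` payload repeatedly:

```python
# Sketch only: idempotent monkey-patch of gradio's template response.
import gradio as gr

if not hasattr(gr, 'RawTemplateResponse'):
    gr.RawTemplateResponse = gr.routes.templates.TemplateResponse  # cache exactly once
gradio_original_template_fn = gr.RawTemplateResponse              # always the pristine original

def gradio_new_template_fn(*args, **kwargs):
    res = gradio_original_template_fn(*args, **kwargs)
    # inject a script before </html>; payload here is a placeholder
    res.body = res.body.replace(b'</html>', b'<script>/* injected */</script></html>')
    return res

gr.routes.templates.TemplateResponse = gradio_new_template_fn  # safe across repeated calls
```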
@@ -60,7 +60,7 @@ def adjust_theme():

         with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
             js = f"<script>{f.read()}</script>"

         # add a cute Live2D mascot
         if ADD_WAIFU:
             js += """
@@ -68,7 +68,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
             """
-        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        if not hasattr(gr, 'RawTemplateResponse'):
+            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+        gradio_original_template_fn = gr.RawTemplateResponse
         def gradio_new_template_fn(*args, **kwargs):
             res = gradio_original_template_fn(*args, **kwargs)
             res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))

@@ -1,7 +1,9 @@
-import gradio as gr
 import logging
 import os
+import gradio as gr
 from toolbox import get_conf, ProxyNetworkActivate
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)

 def dynamic_set_theme(THEME):
     set_theme = gr.themes.ThemeClass()
@@ -13,7 +15,6 @@ def dynamic_set_theme(THEME):
     return set_theme

 def adjust_theme():
     try:
         set_theme = gr.themes.ThemeClass()
         with ProxyNetworkActivate('Download_Gradio_Theme'):
@@ -23,7 +24,7 @@ def adjust_theme():
             if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
             set_theme = set_theme.from_hub(THEME.lower())

-        with open('themes/common.js', 'r', encoding='utf8') as f:
+        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
             js = f"<script>{f.read()}</script>"

         # add a cute Live2D mascot
@@ -33,7 +34,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
             """
-        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        if not hasattr(gr, 'RawTemplateResponse'):
+            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+        gradio_original_template_fn = gr.RawTemplateResponse
         def gradio_new_template_fn(*args, **kwargs):
             res = gradio_original_template_fn(*args, **kwargs)
             res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -46,7 +49,5 @@ def adjust_theme():
         logging.error('gradio版本较旧, 不能自定义字体和颜色:', trimmed_format_exc())
     return set_theme

-# with open("themes/default.css", "r", encoding="utf-8") as f:
-#     advanced_css = f.read()
-with open("themes/common.css", "r", encoding="utf-8") as f:
+with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
     advanced_css = f.read()
@@ -1,6 +1,8 @@
+import os
 import gradio as gr
 from toolbox import get_conf
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)

 def adjust_theme():
     try:
@@ -73,7 +75,7 @@ def adjust_theme():
             chatbot_code_background_color_dark="*neutral_950",
         )

-        with open('themes/common.js', 'r', encoding='utf8') as f:
+        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
             js = f"<script>{f.read()}</script>"

         # add a cute Live2D mascot
@@ -83,11 +85,13 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
             """

-        with open('themes/green.js', 'r', encoding='utf8') as f:
+        with open(os.path.join(theme_dir, 'green.js'), 'r', encoding='utf8') as f:
             js += f"<script>{f.read()}</script>"

-        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        if not hasattr(gr, 'RawTemplateResponse'):
+            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+        gradio_original_template_fn = gr.RawTemplateResponse
         def gradio_new_template_fn(*args, **kwargs):
             res = gradio_original_template_fn(*args, **kwargs)
             res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -99,8 +103,7 @@ def adjust_theme():
         print('gradio版本较旧, 不能自定义字体和颜色')
     return set_theme

-with open("themes/green.css", "r", encoding="utf-8") as f:
+with open(os.path.join(theme_dir, 'green.css'), "r", encoding="utf-8") as f:
     advanced_css = f.read()
-with open("themes/common.css", "r", encoding="utf-8") as f:
+with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
     advanced_css += f.read()
toolbox.py (132 lines changed)
@@ -4,6 +4,7 @@ import time
 import inspect
 import re
 import os
+import base64
 import gradio
 import shutil
 import glob
@@ -79,13 +80,14 @@ def ArgsGeneralWrapper(f):
         'max_length': max_length,
         'temperature': temperature,
         'client_ip': request.client.host,
+        'most_recent_uploaded': cookies.get('most_recent_uploaded')
     }
     plugin_kwargs = {
         "advanced_arg": plugin_advanced_arg,
     }
     chatbot_with_cookie = ChatBotWithCookies(cookies)
     chatbot_with_cookie.write_list(chatbot)

     print('[plugin is called]:', f)
     if cookies.get('lock_plugin', None) is None:
         # normal state
         if len(args) == 0:  # plugin channel
@@ -178,12 +180,15 @@ def HotReload(f):
     Finally, the reloaded function is returned with a yield from statement and executed in place of the decorated function.
     The decorator thus returns an inner function that refreshes the function's definition to the latest version before running it.
     """
-    @wraps(f)
-    def decorated(*args, **kwargs):
-        fn_name = f.__name__
-        f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
-        yield from f_hot_reload(*args, **kwargs)
-    return decorated
+    if get_conf('PLUGIN_HOT_RELOAD'):
+        @wraps(f)
+        def decorated(*args, **kwargs):
+            fn_name = f.__name__
+            f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
+            yield from f_hot_reload(*args, **kwargs)
+        return decorated
+    else:
+        return f

 """
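A hedged sketch of how `HotReload` is applied and what the new `PLUGIN_HOT_RELOAD` switch changes (the plugin body and its signature are illustrative; with the option off, the function is now returned untouched, avoiding reload overhead in production):

```python
from toolbox import HotReload, update_ui

@HotReload  # with PLUGIN_HOT_RELOAD=True, each call re-imports this module first
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, *args):
    # illustrative generator-style plugin body
    chatbot.append((txt, "edit this file and re-run: changes take effect without restarting"))
    yield from update_ui(chatbot=chatbot, history=history)
```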
@@ -561,7 +566,8 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
         user_name = get_user(chatbot)
     else:
         user_name = default_user_name

+    if not os.path.exists(file):
+        raise FileNotFoundError(f'文件{file}不存在')
     user_path = get_log_folder(user_name, plugin_name=None)
     if file_already_in_downloadzone(file, user_path):
         new_path = file
@@ -602,6 +608,64 @@ def del_outdated_uploads(outdate_time_seconds, target_path_base=None):
         except: pass
     return

+
+def html_local_file(file):
+    base_path = os.path.dirname(__file__)  # project directory
+    if os.path.exists(str(file)):
+        file = f'file={file.replace(base_path, ".")}'
+    return file
+
+
+def html_local_img(__file, layout='left', max_width=None, max_height=None, md=True):
+    style = ''
+    if max_width is not None:
+        style += f"max-width: {max_width};"
+    if max_height is not None:
+        style += f"max-height: {max_height};"
+    __file = html_local_file(__file)
+    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
+    if md:
+        a = f'![]({__file})'
+    return a
+
+def file_manifest_filter_type(file_list, filter_: list = None):
+    new_list = []
+    if not filter_: filter_ = ['png', 'jpg', 'jpeg']
+    for file in file_list:
+        if str(os.path.basename(file)).split('.')[-1] in filter_:
+            new_list.append(html_local_img(file, md=False))
+        else:
+            new_list.append(file)
+    return new_list
+
+def to_markdown_tabs(head: list, tabs: list, alignment=':---:', column=False):
+    """
+    Args:
+        head: table header: []
+        tabs: table cells: [[col1], [col2], [col3], [col4]]
+        alignment: :--- left-aligned, :---: centered, ---: right-aligned
+        column: True to keep data in columns, False to keep data in rows (default).
+    Returns:
+        A string representation of the markdown table.
+    """
+    if column:
+        transposed_tabs = list(map(list, zip(*tabs)))
+    else:
+        transposed_tabs = tabs
+    # Find the maximum length among the columns
+    max_len = max(len(column) for column in transposed_tabs)
+
+    tab_format = "| %s "
+    tabs_list = "".join([tab_format % i for i in head]) + '|\n'
+    tabs_list += "".join([tab_format % alignment for i in head]) + '|\n'
+
+    for i in range(max_len):
+        row_data = [tab[i] if i < len(tab) else '' for tab in transposed_tabs]
+        row_data = file_manifest_filter_type(row_data, filter_=None)
+        tabs_list += "".join([tab_format % i for i in row_data]) + '|\n'
+
+    return tabs_list
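A quick illustration of `to_markdown_tabs` (the file names are made up, and the image row assumes `docs/logo.png` exists under the project root; note how image paths in the cells are routed through `file_manifest_filter_type` and rendered inline):

```python
print(to_markdown_tabs(head=['文件'], tabs=[['README.md', 'docs/logo.png']]))
# | 文件 |
# | :---: |
# | README.md |
# | <div align="left"><img src="file=./docs/logo.png" style=""></div> |
```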
 def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkboxes, cookies):
     """
     Callback invoked when files are uploaded
@@ -626,16 +690,15 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
         this_file_path = pj(target_path_base, file_origin_name)
         shutil.move(file.name, this_file_path)
         upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path+'.extract')

-    # collect the file set
-    moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
     if "浮动输入区" in checkboxes:
         txt, txt2 = "", target_path_base
     else:
         txt, txt2 = target_path_base, ""

-    # output the message
-    moved_files_str = '\t\n\n'.join(moved_files)
+    # collect the file set and output the message
+    moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
+    moved_files_str = to_markdown_tabs(head=['文件'], tabs=[moved_files])
     chatbot.append(['我上传了文件,请查收',
                     f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
                     f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
@@ -760,6 +823,11 @@ def select_api_key(keys, llm_model):
         raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(右下角更换模型菜单中可切换openai,azure,claude,api2d等请求源)。")

     api_key = random.choice(avail_key_list)  # random load balancing
+
+    if get_conf("BLOCK_INVALID_APIKEY"):
+        # experimental feature: automatically detect and block failed keys; do not use
+        from request_llms.key_manager import ApiKeyManager
+        api_key = ApiKeyManager().select_avail_key(avail_key_list)
     return api_key

 def read_env_variable(arg, default_value):
@@ -856,7 +924,14 @@ def read_single_conf_with_lru_cache(arg):

 @lru_cache(maxsize=128)
 def get_conf(*args):
-    # it is recommended to copy your secrets (API keys, proxy URLs) into a config_private.py, so they are not accidentally pushed to github for others to see
+    """
+    All configuration of this project is centralized in config.py. There are three ways to modify it; pick whichever one suits you:
+    - edit config.py directly
+    - create and edit config_private.py
+    - modify environment variables (editing docker-compose.yml is equivalent to modifying the environment variables inside the container)
+
+    Note: if you deploy with docker-compose, edit docker-compose.yml (equivalent to modifying the environment variables inside the container)
+    """
     res = []
     for arg in args:
         r = read_single_conf_with_lru_cache(arg)
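A short illustration of the three override paths described in the new docstring (the values are placeholders):

```python
# 1) config.py / config_private.py (config_private.py wins when present):
#        LOCAL_MODEL_QUANT = "INT4"
# 2) environment variable (e.g. set in docker-compose.yml):
#        LOCAL_MODEL_QUANT=INT4
# 3) read anywhere in the codebase, with results cached by @lru_cache:
from toolbox import get_conf
quant, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
```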
@@ -1198,6 +1273,35 @@ def get_chat_default_kwargs():

     return default_chat_kwargs

+
+def get_pictures_list(path):
+    file_manifest = [f for f in glob.glob(f'{path}/**/*.jpg', recursive=True)]
+    file_manifest += [f for f in glob.glob(f'{path}/**/*.jpeg', recursive=True)]
+    file_manifest += [f for f in glob.glob(f'{path}/**/*.png', recursive=True)]
+    return file_manifest
+
+
+def have_any_recent_upload_image_files(chatbot):
+    _5min = 5 * 60
+    if chatbot is None: return False, None  # chatbot is None
+    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    if not most_recent_uploaded: return False, None  # most_recent_uploaded is None
+    if time.time() - most_recent_uploaded["time"] < _5min:
+        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+        path = most_recent_uploaded['path']
+        file_manifest = get_pictures_list(path)
+        if len(file_manifest) == 0: return False, None
+        return True, file_manifest  # most_recent_uploaded is new
+    else:
+        return False, None  # most_recent_uploaded is too old
+
+
+# Function to encode the image
+def encode_image(image_path):
+    with open(image_path, "rb") as image_file:
+        return base64.b64encode(image_file.read()).decode('utf-8')
+
+
+def get_max_token(llm_kwargs):
+    from request_llms.bridge_all import model_info
+    return model_info[llm_kwargs['llm_model']]['max_token']
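Tying the new helpers together, a hedged sketch of the image-upload flow that the Spark bridge above now relies on (the folder path is illustrative):

```python
from toolbox import get_pictures_list, encode_image

pictures = get_pictures_list('gpt_log/admin/uploads/2023-12-06')  # illustrative upload folder
if pictures:
    b64 = encode_image(pictures[0])  # base64 string, ready for an 'image' message
    message = {"role": "user", "content": b64, "content_type": "image"}
```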
version (4 lines changed)
@@ -1,5 +1,5 @@
 {
-    "version": 3.60,
+    "version": 3.62,
     "show_feature": true,
-    "new_feature": "修复多个BUG <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮 <-> 新汇报PDF汇总页面 <-> 重新编译Gradio优化使用体验"
+    "new_feature": "修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
 }