diff --git a/README.md b/README.md
index b1e5568e..81a82282 100644
--- a/README.md
+++ b/README.md
@@ -1,42 +1,70 @@
-> **Note**
+> **Caution**
>
> 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
>
> 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗。
+
+<div align=center>
+<h1 align="center">GPT 学术优化 (GPT Academic)</h1>
+
+[![Github][Github-image]][Github-url]
+[![License][License-image]][License-url]
+[![Releases][Releases-image]][Releases-url]
+[![Installation][Installation-image]][Installation-url]
+[![Wiki][Wiki-image]][Wiki-url]
+[![PR][PRs-image]][PRs-url]
+
+[Github-image]: https://img.shields.io/badge/github-12100E.svg?style=flat-square
+[License-image]: https://img.shields.io/github/license/binary-husky/gpt_academic?label=License&style=flat-square&color=orange
+[Releases-image]: https://img.shields.io/github/release/binary-husky/gpt_academic?label=Release&style=flat-square&color=blue
+[Installation-image]: https://img.shields.io/badge/dynamic/json?color=blue&url=https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version&query=$.version&label=Installation&style=flat-square
+[Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
+[PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square
+
+[Github-url]: https://github.com/binary-husky/gpt_academic
+[License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
+[Releases-url]: https://github.com/binary-husky/gpt_academic/releases
+[Installation-url]: https://github.com/binary-husky/gpt_academic#installation
+[Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
+[PRs-url]: https://github.com/binary-husky/gpt_academic/pulls
-# GPT 学术优化 (GPT Academic)
 
+</div>
+
**如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或插件,欢迎发pull requests!**
-If you like this project, please give it a Star. We also have a README in [English|](docs/README.English.md)[日本語|](docs/README.Japanese.md)[한국어|](docs/README.Korean.md)[Русский|](docs/README.Russian.md)[Français](docs/README.French.md) translated by this project itself.
-To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
+If you like this project, please give it a Star.
+Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project into any language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
+
+
-> **Note**
->
> 1.请注意只有 **高亮** 标识的插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
>
-> 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题[`wiki`](https://github.com/binary-husky/gpt_academic/wiki)。[常规安装方法](#installation) | [一键安装脚本](https://github.com/binary-husky/gpt_academic/releases) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
+> 2.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
+> [常规安装方法](#installation) | [一键安装脚本](https://github.com/binary-husky/gpt_academic/releases) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) | [常见问题](https://github.com/binary-husky/gpt_academic/wiki)
>
-> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
+> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。
-
-
+
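
A side note on the multi-key mechanism described in point 3: per request, one usable key is picked out of the comma-separated pool (the project does this in `toolbox.select_api_key`, imported later in this diff). Below is a minimal sketch of that idea; the key-prefix rules are assumptions for illustration, not the project's exact logic.

```python
import random

def pick_api_key(api_key_cfg: str, llm_model: str) -> str:
    """Hedged sketch: choose one key from a comma-separated API_KEY pool."""
    keys = [k.strip() for k in api_key_cfg.split(",") if k.strip()]
    if llm_model.startswith("api2d-"):
        candidates = [k for k in keys if k.startswith("fk")]   # assumed api2d prefix
    else:
        candidates = [k for k in keys if k.startswith("sk-")]  # assumed OpenAI prefix
    if not candidates:
        raise RuntimeError(f"no usable api-key for {llm_model}")
    return random.choice(candidates)

# e.g. pick_api_key("sk-key1,sk-key2,fk-api2d-key", "gpt-3.5-turbo") -> "sk-key1" or "sk-key2"
```
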
功能(⭐= 近期新增功能) | 描述
--- | ---
-⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
+⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [插件] 一键可以剖析Python/C/C++/Java/Lua/...项目树 或 [自我剖析](https://www.bilibili.com/video/BV1cj411A7VW)
+[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [插件] 一键剖析Python/C/C++/Java/Lua/...项目树 或 [自我剖析](https://www.bilibili.com/video/BV1cj411A7VW)
读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [插件] 一键解读latex/pdf论文全文并生成摘要
Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
批量注释生成 | [插件] 一键批量生成函数注释
-Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?
+Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
chat分析报告生成 | [插件] 运行后自动生成总结汇报
[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
@@ -48,22 +76,22 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
-[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
+[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
-⭐虚空终端插件 | [插件] 用自然语言,直接调度本项目其他插件
+⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
- 新界面(修改`config.py`中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换)
-

+
-- 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放粘贴板
+- 所有按钮都通过读取core_functional.py动态生成,可随意加自定义功能,解放剪贴板
@@ -73,12 +101,12 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
-- 如果输出包含公式,会同时以tex形式和渲染形式显示,方便复制和阅读
+- 如果输出包含公式,会以tex形式和渲染形式同时显示,方便复制和阅读
-- 懒得看项目代码?整个工程直接给chatgpt炫嘴里
+- 懒得看项目代码?直接把整个工程炫ChatGPT嘴里
@@ -88,6 +116,8 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
+
+
# Installation
### 安装方法I:直接运行 (Windows, Linux or MacOS)
@@ -98,13 +128,13 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
cd gpt_academic
```
-2. 配置API_KEY
+2. 配置API_KEY等变量
- 在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
+ 在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
- 「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解该读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。 」
+    「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保项目更新时您的配置不被覆盖,其他用户也无法轻易查看您的私有配置 」。
- 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py`。 」
+ 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。
3. 安装依赖
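
Before the dependency step, a quick illustration of the read priority stated above (`环境变量` > `config_private.py` > `config.py`). This is a sketch of the documented behavior only; the project's actual reader is `get_conf` in `toolbox.py`.

```python
import importlib
import os

def read_conf(name: str):
    # 1) environment variable wins
    if name in os.environ:
        return os.environ[name]
    # 2) then the private override file, if present
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            return getattr(private, name)
    except ImportError:
        pass
    # 3) finally the tracked defaults in config.py
    return getattr(importlib.import_module("config"), name)

# read_conf("API_KEY")
```
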
@@ -151,7 +181,7 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
### 安装方法II:使用Docker
-0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐使用这个)
+0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐该方法部署完整项目)
[![build-with-all-capacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)
``` sh
@@ -180,26 +210,26 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
```
-### 安装方法III:其他部署姿势
+### 安装方法III:其他部署方法
1. **Windows一键运行脚本**。
-完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。
-脚本的贡献来源是[oobabooga](https://github.com/oobabooga/one-click-installers)。
+完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。脚本贡献来源:[oobabooga](https://github.com/oobabooga/one-click-installers)。
2. 使用第三方API、Azure等、文心一言、星火等,见[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)
3. 云服务器远程部署避坑指南。
请访问[云服务器远程部署wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-4. 一些新型的部署平台或方法
+4. 在其他平台部署&二级网址部署
- 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
- 使用WSL2(Windows Subsystem for Linux 子系统)。请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
- 如何在二级网址(如`http://localhost/subpath`)下运行。请访问[FastAPI运行说明](docs/WithFastapi.md)
+
# Advanced Usage
### I:自定义新的便捷按钮(学术快捷键)
-任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序。(如按钮已存在,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
+任意文本编辑器打开`core_functional.py`,添加如下条目,然后重启程序。(如果按钮已存在,那么可以直接修改(前缀、后缀都已支持热修改),无需重启程序即可生效。)
例如
```python
@@ -221,6 +251,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
+
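
For orientation before the Updates section: a hedged sketch of the shape of a function plugin. The signature mirrors `图片生成_DALLE3` from `crazy_functions/图片生成.py` in this diff; the plugin name and body are illustrative, and the authoritative template is the 函数插件指南 linked above.

```python
from toolbox import CatchException, update_ui

@CatchException
def 示例插件(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # Show a status message first, then refresh the UI right away.
    chatbot.append((prompt, "正在处理中 ....."))
    yield from update_ui(chatbot=chatbot, history=history)
    # ... call the LLM or do the real work here ...
    chatbot[-1] = (prompt, "处理完成")
    yield from update_ui(chatbot=chatbot, history=history)
```
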
# Updates
### I:动态
@@ -320,7 +351,7 @@ GPT Academic开发者QQ群:`610599535`
- 已知问题
- 某些浏览器翻译插件干扰此软件前端的运行
- - 官方Gradio目前有很多兼容性Bug,请务必使用`requirement.txt`安装Gradio
+    - 官方Gradio目前有很多兼容性问题,请**务必使用`requirements.txt`安装Gradio**
### III:主题
可以通过修改`THEME`选项(config.py)变更主题
diff --git a/config.py b/config.py
index f170a2bb..a5117245 100644
--- a/config.py
+++ b/config.py
@@ -15,13 +15,13 @@ API_KEY = "此处填API密钥" # 可同时填写多个API-KEY,用英文逗
USE_PROXY = False
if USE_PROXY:
"""
+ 代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
    <配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1
[协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
- [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
+ [地址] 填localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
[端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
"""
- # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5h / http)、地址(localhost)和端口(11284)
proxies = {
# [协议]:// [地址] :[端口]
"http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
@@ -100,6 +100,12 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-prev
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
+# 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
+# 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
+# 也可以是具体的模型路径
+QWEN_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
+
+
# 百度千帆(LLM_MODEL="qianfan")
BAIDU_CLOUD_API_KEY = ''
BAIDU_CLOUD_SECRET_KEY = ''
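
Since the new `QWEN_MODEL_SELECTION` option goes through the same configuration chain as every other option in `config.py`, it can be overridden without editing the tracked file. Hypothetical values for illustration:

```python
# config_private.py (hypothetical overrides of the new Qwen option)
QWEN_MODEL_SELECTION = "Qwen/Qwen-7B-Chat"         # a hub model id ...
# QWEN_MODEL_SELECTION = "/data/models/qwen-1_8b"  # ... or a concrete local path
```
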
diff --git a/crazy_functional.py b/crazy_functional.py
index d2e75750..ef78b5a3 100644
--- a/crazy_functional.py
+++ b/crazy_functional.py
@@ -372,7 +372,7 @@ def get_crazy_functions():
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
- "ArgsReminder": "在这里输入分辨率, 如1024x1024(默认),支持 1024x1024, 1792x1024, 1024x1792。如需生成高清图像,请输入 1024x1024-HD, 1792x1024-HD, 1024x1792-HD。", # 高级参数输入区的显示提示
+ "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
"Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片生成_DALLE3)
},
@@ -499,7 +499,7 @@ def get_crazy_functions():
})
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
function_plugins.update({
- "Arixv论文精细翻译(输入arxivID)[需Latex]": {
+ "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
diff --git a/crazy_functions/图片生成.py b/crazy_functions/图片生成.py
index 134eb07a..f32d1367 100644
--- a/crazy_functions/图片生成.py
+++ b/crazy_functions/图片生成.py
@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicState
-def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None):
+def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None, style=None):
import requests, json, time, os
from request_llms.bridge_all import model_info
@@ -25,7 +25,10 @@ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", qual
'model': model,
'response_format': 'url'
}
- if quality is not None: data.update({'quality': quality})
+ if quality is not None:
+ data['quality'] = quality
+ if style is not None:
+ data['style'] = style
response = requests.post(url, headers=headers, json=data, proxies=proxies)
print(response.content)
try:
@@ -121,13 +124,18 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
- resolution = plugin_kwargs.get("advanced_arg", '1024x1024').lower()
- if resolution.endswith('-hd'):
- resolution = resolution.replace('-hd', '')
- quality = 'hd'
- else:
- quality = 'standard'
- image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality)
+ resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
+ parts = resolution_arg.split('-')
+ resolution = parts[0] # 解析分辨率
+ quality = 'standard' # 质量与风格默认值
+ style = 'vivid'
+ # 遍历检查是否有额外参数
+ for part in parts[1:]:
+ if part in ['hd', 'standard']:
+ quality = part
+ elif part in ['vivid', 'natural']:
+ style = part
+ image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
     chatbot.append([prompt,
         f'图像中转网址: <br/>`{image_url}`<br/>'+
         f'中转网址预览: <br/><div align="center"><img src="file={image_path}"></div>'
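
To make the new argument grammar concrete, the parsing introduced above can be restated as a standalone helper and exercised on a few sample inputs (an illustrative sketch, not project code):

```python
def parse_dalle3_arg(arg: str):
    # "resolution-quality-style"; quality and style are optional and order-insensitive
    parts = arg.lower().split('-')
    resolution, quality, style = parts[0], 'standard', 'vivid'
    for part in parts[1:]:
        if part in ('hd', 'standard'):
            quality = part
        elif part in ('vivid', 'natural'):
            style = part
    return resolution, quality, style

assert parse_dalle3_arg('1024x1024') == ('1024x1024', 'standard', 'vivid')
assert parse_dalle3_arg('1792x1024-hd') == ('1792x1024', 'hd', 'vivid')
assert parse_dalle3_arg('1024x1792-hd-natural') == ('1024x1792', 'hd', 'natural')
```
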
@@ -257,4 +265,4 @@ def make_square_image(input_path, output_path):
size = max(width, height)
new_img = Image.new("RGBA", (size, size), color="black")
new_img.paste(img, ((size - width) // 2, (size - height) // 2))
- new_img.save(output_path)
\ No newline at end of file
+ new_img.save(output_path)
diff --git a/docs/translate_english.json b/docs/translate_english.json
index 955dcaf9..cbfdafd4 100644
--- a/docs/translate_english.json
+++ b/docs/translate_english.json
@@ -923,7 +923,7 @@
"的第": "The",
"个片段": "fragment",
"总结文章": "Summarize the article",
- "根据以上的对话": "According to the above dialogue",
+ "根据以上的对话": "According to the conversation above",
"的主要内容": "The main content of",
"所有文件都总结完成了吗": "Are all files summarized?",
"如果是.doc文件": "If it is a .doc file",
@@ -1501,7 +1501,7 @@
"发送请求到OpenAI后": "After sending the request to OpenAI",
"上下布局": "Vertical Layout",
"左右布局": "Horizontal Layout",
- "对话窗的高度": "Height of the Dialogue Window",
+ "对话窗的高度": "Height of the Conversation Window",
"重试的次数限制": "Retry Limit",
"gpt4现在只对申请成功的人开放": "GPT-4 is now only open to those who have successfully applied",
"提高限制请查询": "Please check for higher limits",
@@ -2183,9 +2183,8 @@
"找不到合适插件执行该任务": "Cannot find a suitable plugin to perform this task",
"接驳VoidTerminal": "Connect to VoidTerminal",
"**很好": "**Very good",
- "对话|编程": "Conversation|Programming",
- "对话|编程|学术": "Conversation|Programming|Academic",
- "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
+    "对话|编程": "Conversation&ImageGenerating|Programming",
+    "对话|编程|学术": "Conversation&ImageGenerating|Programming|Academic",
+    "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
"「请调用插件翻译PDF论文": "Please call the plugin to translate the PDF paper",
"3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词": "3. If you use keywords such as 'call plugin xxx', 'modify configuration xxx', 'please', etc.",
"以下是一篇学术论文的基本信息": "The following is the basic information of an academic paper",
@@ -2630,7 +2629,7 @@
"已经被记忆": "Already memorized",
"默认用英文的": "Default to English",
"错误追踪": "Error tracking",
- "对话|编程|学术|智能体": "Dialogue|Programming|Academic|Intelligent agent",
+    "对话|编程|学术|智能体": "Conversation&ImageGenerating|Programming|Academic|Intelligent agent",
"请检查": "Please check",
"检测到被滞留的缓存文档": "Detected cached documents being left behind",
"还有哪些场合允许使用代理": "What other occasions allow the use of proxies",
@@ -2904,4 +2903,4 @@
"请配置ZHIPUAI_API_KEY": "Please configure ZHIPUAI_API_KEY",
"单个azure模型": "Single Azure model",
"预留参数 context 未实现": "Reserved parameter 'context' not implemented"
-}
\ No newline at end of file
+}
diff --git a/docs/translate_traditionalchinese.json b/docs/translate_traditionalchinese.json
index 9ca7cbaa..4edc65de 100644
--- a/docs/translate_traditionalchinese.json
+++ b/docs/translate_traditionalchinese.json
@@ -1043,9 +1043,9 @@
"jittorllms响应异常": "jittorllms response exception",
"在项目根目录运行这两个指令": "Run these two commands in the project root directory",
"获取tokenizer": "Get tokenizer",
- "chatbot 为WebUI中显示的对话列表": "chatbot is the list of dialogues displayed in WebUI",
+ "chatbot 为WebUI中显示的对话列表": "chatbot is the list of conversations displayed in WebUI",
"test_解析一个Cpp项目": "test_parse a Cpp project",
- "将对话记录history以Markdown格式写入文件中": "Write the dialogue record history to a file in Markdown format",
+    "将对话记录history以Markdown格式写入文件中": "Write the conversation record history to a file in Markdown format",
"装饰器函数": "Decorator function",
"玫瑰色": "Rose color",
"将单空行": "刪除單行空白",
@@ -2270,4 +2270,4 @@
"标注节点的行数范围": "標註節點的行數範圍",
"默认 True": "默認 True",
"将两个PDF拼接": "將兩個PDF拼接"
-}
\ No newline at end of file
+}
diff --git a/main.py b/main.py
index b29c94fc..53fb6889 100644
--- a/main.py
+++ b/main.py
@@ -85,7 +85,7 @@ def main():
with gr_L2(scale=1, elem_id="gpt-panel"):
with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
with gr.Row():
- txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
+ txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
with gr.Row():
submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
with gr.Row():
@@ -146,7 +146,7 @@ def main():
with gr.Row():
with gr.Tab("上传文件", elem_id="interact-panel"):
gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
- file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple")
+ file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")
with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
@@ -178,7 +178,8 @@ def main():
with gr.Row() as row:
row.style(equal_height=True)
with gr.Column(scale=10):
- txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", lines=8, label="输入区2").style(container=False)
+ txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
+ elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
with gr.Column(scale=1, min_width=40):
submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
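
The `elem_id` values added in this file are the hook for the new JavaScript in `themes/common.js` further down, which looks them up with `document.getElementById`. A self-contained toy example of the mechanism; the layout is hypothetical and only the ids match this diff:

```python
import gradio as gr

with gr.Blocks() as demo:
    # elem_id becomes the DOM id of the component wrapper, so custom JS
    # (like monitoring_input_box() below) can locate these components.
    txt = gr.Textbox(show_label=False, placeholder="Input question here.",
                     elem_id="user_input_main")
    files = gr.Files(label="任何文件", file_count="multiple",
                     elem_id="elem_upload_float")

# demo.launch()  # then try document.getElementById('user_input_main') in the browser console
```
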
diff --git a/request_llms/bridge_qwen.py b/request_llms/bridge_qwen.py
index 85a4d80c..940c41d5 100644
--- a/request_llms/bridge_qwen.py
+++ b/request_llms/bridge_qwen.py
@@ -1,13 +1,7 @@
model_name = "Qwen"
cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
-
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf, ProxyNetworkActivate
-from multiprocessing import Process, Pipe
+from toolbox import ProxyNetworkActivate, get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
@@ -24,16 +18,14 @@ class GetQwenLMHandle(LocalLLMHandle):
def load_model_and_tokenizer(self):
# 🏃♂️🏃♂️🏃♂️ 子进程执行
- import os, glob
- import os
- import platform
- from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
-
+ # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers.generation import GenerationConfig
with ProxyNetworkActivate('Download_LLM'):
- model_id = 'qwen/Qwen-7B-Chat'
- self._tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen-7B-Chat', trust_remote_code=True, resume_download=True)
+ model_id = get_conf('QWEN_MODEL_SELECTION')
+ self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
# use fp16
- model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, fp16=True).eval()
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
self._model = model
@@ -51,7 +43,7 @@ class GetQwenLMHandle(LocalLLMHandle):
query, max_length, top_p, temperature, history = adaptor(kwargs)
- for response in self._model.chat(self._tokenizer, query, history=history, stream=True):
+ for response in self._model.chat_stream(self._tokenizer, query, history=history):
yield response
def try_to_import_special_deps(self, **kwargs):
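
Context for the switch from `model.chat(..., stream=True)` to `model.chat_stream(...)`: Qwen's `trust_remote_code` checkpoints ship a `chat_stream` generator that yields the progressively growing reply. A hedged standalone sketch, using the default value of `QWEN_MODEL_SELECTION` from this diff; exact behavior depends on the checkpoint's modeling code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-1_8B-Chat-Int8"  # default QWEN_MODEL_SELECTION above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True).eval()

# each iteration yields the full response text generated so far
for partial_response in model.chat_stream(tokenizer, "你好", history=[]):
    print(partial_response)
```
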
diff --git a/request_llms/requirements_qwen.txt b/request_llms/requirements_qwen.txt
index 3d7d62a0..ea65dee7 100644
--- a/request_llms/requirements_qwen.txt
+++ b/request_llms/requirements_qwen.txt
@@ -1,2 +1,4 @@
modelscope
-transformers_stream_generator
\ No newline at end of file
+transformers_stream_generator
+auto-gptq
+optimum
\ No newline at end of file
diff --git a/tests/test_llms.py b/tests/test_llms.py
index 8b685972..bdb622b7 100644
--- a/tests/test_llms.py
+++ b/tests/test_llms.py
@@ -16,8 +16,9 @@ if __name__ == "__main__":
# from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
# from request_llms.bridge_claude import predict_no_ui_long_connection
# from request_llms.bridge_internlm import predict_no_ui_long_connection
- from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
- # from request_llms.bridge_qwen import predict_no_ui_long_connection
+ # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+ # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+ from request_llms.bridge_qwen import predict_no_ui_long_connection
# from request_llms.bridge_spark import predict_no_ui_long_connection
# from request_llms.bridge_zhipu import predict_no_ui_long_connection
# from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
diff --git a/themes/common.js b/themes/common.js
index 849cb9a5..a164a070 100644
--- a/themes/common.js
+++ b/themes/common.js
@@ -122,7 +122,7 @@ function chatbotAutoHeight(){
chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
}
}
-
+ monitoring_input_box()
update_height();
setInterval(function() {
update_height_slow()
@@ -160,4 +160,106 @@ function get_elements(consider_state_panel=false) {
var chatbot_height = chatbot.style.height;
var chatbot_height = parseInt(chatbot_height);
return { panel_height_target, chatbot_height, chatbot };
-}
\ No newline at end of file
+}
+
+
+function add_func_paste(input) {
+ let paste_files = [];
+ if (input) {
+ input.addEventListener("paste", async function (e) {
+ const clipboardData = e.clipboardData || window.clipboardData;
+ const items = clipboardData.items;
+ if (items) {
+ for (let i = 0; i < items.length; i++) {
+ if (items[i].kind === "file") { // 确保是文件类型
+ const file = items[i].getAsFile();
+ // 将每一个粘贴的文件添加到files数组中
+ paste_files.push(file);
+ e.preventDefault(); // 避免粘贴文件名到输入框
+ }
+ }
+ if (paste_files.length > 0) {
+ // 按照文件列表执行批量上传逻辑
+ await paste_upload_files(paste_files);
+ paste_files = []
+
+ }
+ }
+ });
+ }
+}
+
+
+async function paste_upload_files(files) {
+ const uploadInputElement = elem_upload_float.querySelector("input[type=file]");
+ let totalSizeMb = 0
+ if (files && files.length > 0) {
+ // 执行具体的上传逻辑
+ if (uploadInputElement) {
+ for (let i = 0; i < files.length; i++) {
+ // 将从文件数组中获取的文件大小(单位为字节)转换为MB,
+ totalSizeMb += files[i].size / 1024 / 1024;
+ }
+ // 检查文件总大小是否超过20MB
+ if (totalSizeMb > 20) {
+ toast_push('⚠️文件夹大于20MB 🚀上传文件中', 2000)
+ // return; // 如果超过了指定大小, 可以不进行后续上传操作
+ }
+ // 监听change事件, 原生Gradio可以实现
+ // uploadInputElement.addEventListener('change', function(){replace_input_string()});
+ let event = new Event("change");
+ Object.defineProperty(event, "target", {value: uploadInputElement, enumerable: true});
+ Object.defineProperty(event, "currentTarget", {value: uploadInputElement, enumerable: true});
+ Object.defineProperty(uploadInputElement, "files", {value: files, enumerable: true});
+ uploadInputElement.dispatchEvent(event);
+ // toast_push('🎉上传文件成功', 2000)
+ } else {
+ toast_push('⚠️请先删除上传区中的历史文件,再尝试粘贴。', 2000)
+ }
+ }
+}
+//提示信息 封装
+function toast_push(msg, duration) {
+ duration = isNaN(duration) ? 3000 : duration;
+ const m = document.createElement('div');
+ m.innerHTML = msg;
+ m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255);background-color: rgba(0, 0, 0, 0.6);padding: 10px 15px;margin: 0 0 0 -60px;border-radius: 4px;position: fixed; top: 50%;left: 50%;width: auto; text-align: center;";
+ document.body.appendChild(m);
+ setTimeout(function () {
+ var d = 0.5;
+ m.style.opacity = '0';
+ setTimeout(function () {
+ document.body.removeChild(m)
+ }, d * 1000);
+ }, duration);
+}
+
+var elem_upload = null;
+var elem_upload_float = null;
+var elem_input_main = null;
+var elem_input_float = null;
+
+
+function monitoring_input_box() {
+ elem_upload = document.getElementById('elem_upload')
+ elem_upload_float = document.getElementById('elem_upload_float')
+ elem_input_main = document.getElementById('user_input_main')
+ elem_input_float = document.getElementById('user_input_float')
+ if (elem_input_main) {
+ if (elem_input_main.querySelector("textarea")) {
+ add_func_paste(elem_input_main.querySelector("textarea"))
+ }
+ }
+ if (elem_input_float) {
+ if (elem_input_float.querySelector("textarea")){
+ add_func_paste(elem_input_float.querySelector("textarea"))
+ }
+ }
+}
+
+
+// 监视页面变化
+window.addEventListener("DOMContentLoaded", function () {
+ // const ga = document.getElementsByTagName("gradio-app");
+ gradioApp().addEventListener("render", monitoring_input_box);
+});
diff --git a/version b/version
index 5f6de09c..cb4df5ae 100644
--- a/version
+++ b/version
@@ -1,5 +1,5 @@
{
- "version": 3.61,
+ "version": 3.62,
"show_feature": true,
- "new_feature": "修复潜在的多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
+ "new_feature": "修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
}