Comparing commits

...

150 commits

Author SHA1 Message Commit date
binary-husky
47cedde954 fix security issue GHSA-3jrq-66fm-w7xr 2024-06-18 10:18:33 +00:00
binary-husky
12aebf9707 searxng based information gathering 2024-06-16 12:12:57 +00:00
binary-husky
0b5385e5e5 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-06-12 09:34:12 +00:00
binary-husky
2ff1a1fb0b update translation matrix 2024-06-12 09:34:05 +00:00
binary-husky
48e10fb10a Update README.md 2024-06-10 22:22:04 +08:00
Frank Lee
ca64a592f5 Update zhipu models (#1852) 2024-06-10 22:17:51 +08:00
Guoxin Sun
cb96ca132a Update common.js (#1854)
fix typo
2024-06-10 22:17:27 +08:00
binary-husky
46428b7c7a Merge branch 'master' into frontier 2024-06-01 16:22:32 +00:00
binary-husky
66a50c8019 live2d shutdown bug fix 2024-06-01 16:21:04 +00:00
Menghuan1918
814dc943ac Move the advanced parameters of the "生成多种图表" (generate multiple charts) plugin into a secondary menu (#1839)
* Improve the prompts

* Update to new menu form

* Bug fix (wrong type of plugin_kwargs)
2024-06-01 13:34:33 +08:00
binary-husky
96cd1f0b25 secondary menu main input sync bug fix 2024-05-31 04:13:27 +00:00
binary-husky
4fc17f4add Merge branch 'master' into frontier 2024-05-30 15:00:44 +00:00
binary-husky
b3665d8fec remove check 2024-05-30 14:54:50 +00:00
binary-husky
80c4281888 TTS Default Enable 2024-05-30 14:27:18 +00:00
binary-husky
beda56abb0 update dockerfile 2024-05-30 12:44:17 +00:00
binary-husky
cb16941d01 update css 2024-05-30 12:35:47 +00:00
binary-husky
5cf9ac7849 Merge branch 'master' into frontier 2024-05-29 16:06:28 +00:00
binary-husky
51ddb88ceb correct hint err 2024-05-29 16:05:23 +00:00
binary-husky
69dfe5d514 compat to old void-terminal plugin 2024-05-29 15:50:00 +00:00
binary-husky
6819f87512 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-05-23 16:35:20 +00:00
binary-husky
3d51b9d5bb compat baichuan 2024-05-23 16:35:15 +00:00
QiyuanChen
bff87ada92 Add support for the ERNIE-Speed and ERNIE-Lite models (#1821)
* feat: add ERNIE-Speed and ERNIE-Lite

Baidu's ERNIE-Speed and ERNIE-Lite models are now free to use, so the call endpoints were added. They can be accessed as ERNIE-Speed-128K, ERNIE-Speed-8K, and ERNIE-Lite-8K.

* chore: Modify supported models in config.py

Modified the list of Qianfan-supported models in config.py, adding the three free models.
2024-05-24 00:16:26 +08:00
binary-husky
a938412b6f save conversation wrap 2024-05-23 15:58:59 +00:00
binary-husky
a48acf6fec Flex Btn Bug Fix 2024-05-22 08:38:40 +00:00
binary-husky
c6b9ab5214 add document 2024-05-22 06:39:56 +00:00
binary-husky
aa3332de69 add document 2024-05-22 06:27:26 +00:00
binary-husky
d43175d46d fix type hint 2024-05-21 13:18:38 +00:00
binary-husky
8ca9232db2 Merge branch 'master' into frontier 2024-05-21 12:27:01 +00:00
binary-husky
1339aa0e1a doc2x latex conversion 2024-05-21 12:24:50 +00:00
binary-husky
f41419e767 update demo 2024-05-21 11:12:08 +00:00
binary-husky
d88c585305 improve latex plugin 2024-05-21 10:47:50 +00:00
binary-husky
0a88d18c7a secondary menu for pdf trans 2024-05-21 08:51:29 +00:00
binary-husky
0d0edc2216 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-05-19 21:54:16 +08:00
binary-husky
5e0875fcf4 from backend to front end 2024-05-19 21:54:06 +08:00
Shixian Sheng
c508b84db8 Update README.md (#1810) 2024-05-19 20:41:17 +08:00
Menghuan1918
f2b67602bb Add the FFmpeg dependency to the Docker build (#1807)
* Test: change dockerfile to install ffmpeg

* Add the ffmpeg to dockerfile (required by edge-tts)
2024-05-19 14:27:55 +08:00
binary-husky
29daba5d2f success? 2024-05-18 23:03:28 +08:00
binary-husky
9477824ac1 improve css 2024-05-18 21:54:15 +08:00
binary-husky
459c5b2d24 plugin refactor: phase 1 2024-05-18 20:23:50 +08:00
binary-husky
abf9b5aee5 Merge branch 'master' into frontier 2024-05-18 15:52:08 +08:00
binary-husky
2ce4482146 fix new ModelOverride fn bug 2024-05-18 15:47:25 +08:00
binary-husky
4282b83035 change TTS default to DISABLE 2024-05-18 15:43:35 +08:00
binary-husky
537be57c9b fix tts bugs 2024-05-17 21:07:28 +08:00
binary-husky
3aa92d6c80 change main ui hint 2024-05-17 11:34:13 +08:00
awwaawwa
b7eb9aba49 [Feature]: allow model mutex override in core_functional.py (#1708)
* allow_core_func_specify_model

* change arg name

* Model overrides support hot reload & raise an error when an override points to a nonexistent model

* allow model mutex override

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-17 11:15:23 +08:00
hongyi-zhao
881a596a30 model support (gpt4o) in project. (#1760)
* Add the environment variable: OPEN_BROWSER

* Add configurable browser launching with custom arguments

- Update `config.py` to include options for specifying the browser and its arguments for opening URLs.
- Modify `main.py` to use the configured browser settings from `config.py` to launch the web page.
- Enhance `config_loader.py` to process path-like strings by expanding and normalizing paths, which supports the configuration improvements.

* Add support for the following models:

"gpt-4o", "gpt-4o-2024-05-13"

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-14 17:01:32 +08:00
binary-husky
1b3c331d01 dos2unix 2024-05-14 12:02:40 +08:00
binary-husky
70d5f2a7df arg name err patch 2024-05-13 23:40:35 +08:00
Menghuan1918
fd2f8b9090 Provide a new, fast and simple way of accessing APIs (e.g. Yi models, DeepSeek) (#1782)
* deal with the message part

* Finish no_ui_connect

* finish predict part

* Delete old version

* An example of adding a new API

* Bug fix: cannot change in "model_info"

* Bug fix

* Error message handling

* Clear the format

* An example of adding an OpenAI-style API: DeepSeek

* For compatibility reasons

* Feature: set a different API key/endpoint for different models

* Add support for new Yi models

* Update the doc2x API key mechanism (#1766)

* Fix DOC2X API key refresh issue in PDF translation

* remove add

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Rename some files and variables

* patch err

---------

Co-authored-by: alex_xiao <113411296+Alex4210987@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-13 23:38:08 +08:00
binary-husky
225a2de011 Version 3.76 (#1752)
* version roll

* add upload progress bar
2024-05-13 22:54:38 +08:00
binary-husky
6aea6d8e2b Merge branch 'master' into frontier 2024-05-13 22:52:15 +08:00
alex_xiao
8d85616c27 Update the doc2x API key mechanism (#1766)
* Fix DOC2X API key refresh issue in PDF translation

* remove add

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-13 22:49:40 +08:00
binary-husky
e4533dd24d Merge branch 'master' into frontier 2024-05-04 17:00:09 +08:00
binary-husky
43ed8cb8a8 Fix fastapi version compat 2024-05-04 16:43:42 +08:00
binary-husky
3eff964424 Update README.md 2024-05-01 17:59:25 +08:00
OREEkE
ebde98b34b Update requirements.txt (#1753)
TTS_TYPE = "EDGE_TTS"需要的依赖
2024-05-01 14:55:04 +08:00
binary-husky
6f883031c0 Update config.py 2024-05-01 14:54:36 +08:00
binary-husky
fa15059f07 add upload progress bar 2024-05-01 01:11:35 +08:00
binary-husky
685c573619 version roll 2024-04-30 21:00:25 +08:00
binary-husky
5fcd02506c version 3.75 (#1702)
* Update version to 3.74

* Add support for Yi Model API (#1635)

* Update to support the 01.AI (Yi) models

* Remove newbing

* Modify config

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Refactor function signatures in bridge files

* fix qwen api change

* rename and ref functions

* rename and move some cookie functions

* Add the haiku model and new endpoint configuration notes (#1626)

* haiku added

* Add haiku; add endpoint configuration notes

* Haiku added

* Sync the notes with the latest endpoint

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* File access authentication under the private_upload directory (#1596)

* File access authentication under the private_upload directory

* minor fastapi adjustment

* Add logging functionality to enable saving
conversation records

* waiting to fix username retrieve

* support 2rd web path

* allow accessing default user dir

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* remove yaml deps

* fix favicon

* fix abs path auth problem

* forget to write a return

* add `dashscope` to deps

* fix GHSA-v9q9-xj86-953p

* Patch unauthorized access via overlapping usernames (#1681)

* add cohere model api access

* cohere + can_multi_thread

* fix block user access(fail)

* fix fastapi bug

* change cohere api endpoint

* explain version

* # fix com_zhipuglm.py illegal temperature problem (#1687)

* Update com_zhipuglm.py

# fix: users hit an illegal-argument error about the temperature parameter when using the zhipuai interface

* allow store lm model dropdown

* add a btn to reverse previous reset

* remove extra fns

* Add support for glm-4v model (#1700)

* Change the chatglm3 quantized-loading method (#1688)

Co-authored-by: zym9804 <ren990603@gmail.com>

* save chat stage 1

* consider null cookie situation

* Activate speech when the copy button is clicked

* miss some parts

* move all to js

* done first stage

* add edge tts

* bug fix

* bug fix

* remove console log

* bug fix

* bug fix

* bug fix

* audio switch

* update tts readme

* remove tempfile when done

* disable auto audio follow

* avoid play queue update after shut up

* feat: minimizing common.js

* improve tts functionality

* determine whether the cached model is in choices

* Add support for Ollama (#1740)

* print err when doc2x not successful

* add icon

* adjust url for doc2x key version

* prepare merge

---------

Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: Skyzayre <120616113+Skyzayre@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Yuki <903728862@qq.com>
Co-authored-by: zyren123 <91042213+zyren123@users.noreply.github.com>
Co-authored-by: zym9804 <ren990603@gmail.com>
2024-04-30 20:37:41 +08:00
binary-husky
bd5280df1b minor pdf translation adjustment 2024-04-30 00:52:36 +08:00
binary-husky
744759704d allow personal docx api access 2024-04-29 23:53:41 +08:00
WFS
81df0aa210 Fix missing chat history when using Google Gemini Pro (#1743)
* fix the issue that, when using Google Gemini Pro, there is no chat history record

just add chat_log in bridge_google_gemini.py

* Update bridge_google_gemini.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
2024-04-25 22:26:32 +08:00
Menghuan1918
cadaa81030 Fix the bug that made Nougat unusable (#1738)
* Bug fix: nougat requires a pdf

* Fixing bugs in a simpler and safer way
2024-04-24 12:13:44 +08:00
binary-husky
3b6cbbdcb0 Update README.md (#1736) 2024-04-24 11:41:56 +08:00
binary-husky
52e49c48b8 the latest zhipuai whl is broken 2024-04-23 18:20:36 +08:00
binary-husky
6ad15a6129 fix equation showing problem 2024-04-22 01:54:03 +08:00
binary-husky
09990d44d3 merge to resolve multiple pickle security issues (#1728)
* Comment out the debug if-branch

* support pdf url for latex translation

* Merge pull request from GHSA-mvrw-h7rc-22r8

* Comment out the debug if-branch

* Improve objload security

* Update README.md

* support pdf url for latex translation

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* fix import

---------

Co-authored-by: Longtaotao <longtaotao@bupt.edu.cn>
Co-authored-by: iluem <57590186+Qhaoduoyu@users.noreply.github.com>
2024-04-21 19:37:05 +08:00
binary-husky
eac5191815 Update README.md 2024-04-21 02:12:15 +08:00
owo
ae4407135d fix: add the missing `a` argument to report_exception calls (#1720)
In the definition of the report_exception function, the parameter `a` has no default value, so a value must be passed in.
2024-04-18 16:27:00 +08:00
owo
f0e15bd710 fix: 'schema_str' was referenced before assignment in the else branch (#1719)
Rearranged the conditional return statements in the method so that 'schema_str' is always defined before use.
2024-04-18 16:26:13 +08:00
jiangfy-ihep
5c5f442649 Fix: openai project API key pattern (#1721)
Co-authored-by: Fayu Jiang <jiangfayu@hotmail.com>
2024-04-18 16:24:29 +08:00
binary-husky
160552cc5f introduce doc2x 2024-04-15 01:57:31 +08:00
binary-husky
c131ec0b20 rename pdf plugin file name 2024-04-14 22:46:31 +08:00
iluem
2f3aeb7976 Merge pull request from GHSA-23cr-v6pm-j89p
* Update crazy_utils.py

Improve security

* add a white space

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
2024-04-14 21:51:03 +08:00
binary-husky
eff5b89b98 scan first, then extract 2024-04-14 21:36:57 +08:00
iluem
f77ab27bc9 Merge pull request from GHSA-rh7j-jfvq-857j
Prevent path traversal for improved security
2024-04-14 21:33:37 +08:00
awwaawwa
ba0a8b7072 integrate gpt-4-turbo-2024-04-09 (#1698)
* Integrate the gpt-4-turbo-2024-04-09 model

* add gpt-4-turbo and change to vision

* add gpt-4-turbo to avail llm models

* Temporarily wire gpt-4-turbo into the regular version
2024-04-11 22:02:40 +08:00
hmp
2406022c2a access vllm 2024-04-11 22:00:07 +08:00
OREEkE
02b6f26b05 remove logging in gradios.py (#1699)
If the initial theme is an HF community theme, using logging here stops the program from writing any further logs (including conversation records); the theme-download log output conflicts with the logging initialization at program startup.
2024-04-11 14:15:12 +08:00
OREEkE
2a003e8d49 add loadLive2D() when ADD_WAIFU = False (#1693)
With ADD_WAIFU = False the browser throws: [Error] JQuery is not defined — loadLive2D() was still being called even though the jQuery library is unavailable. A check is now added: when ADD_WAIFU = False, disabling the jQuery library also disables the loadLive2D() call, unless ADD_WAIFU = True.
2024-04-10 00:10:53 +08:00
binary-husky
21891b0f6d update translate matrix 2024-04-08 12:43:24 +08:00
Yuki
163f12c533 # fix com_zhipuglm.py illegal temperature problem (#1687)
* Update com_zhipuglm.py

# fix: users hit an illegal-argument error about the temperature parameter when using the zhipuai interface
2024-04-08 12:17:07 +08:00
binary-husky
bdd46c5dd1 Version 3.74: Merge latest updates on dev branch (frontier) (#1621)
* Update version to 3.74

* Add support for Yi Model API (#1635)

* Update to support the 01.AI (Yi) models

* Remove newbing

* Modify config

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Refactor function signatures in bridge files

* fix qwen api change

* rename and ref functions

* rename and move some cookie functions

* Add the haiku model and new endpoint configuration notes (#1626)

* haiku added

* Add haiku; add endpoint configuration notes

* Haiku added

* Sync the notes with the latest endpoint

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* File access authentication under the private_upload directory (#1596)

* File access authentication under the private_upload directory

* minor fastapi adjustment

* Add logging functionality to enable saving
conversation records

* waiting to fix username retrieve

* support 2rd web path

* allow accessing default user dir

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* remove yaml deps

* fix favicon

* fix abs path auth problem

* forget to write a return

* add `dashscope` to deps

* fix GHSA-v9q9-xj86-953p

* Patch unauthorized access via overlapping usernames (#1681)

* add cohere model api access

* cohere + can_multi_thread

* fix block user access(fail)

* fix fastapi bug

* change cohere api endpoint

* explain version

---------

Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: Skyzayre <120616113+Skyzayre@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
2024-04-08 11:49:30 +08:00
binary-husky
ae51a0e686 fix GHSA-v9q9-xj86-953p 2024-04-05 20:47:11 +08:00
binary-husky
f2582ea137 fix qwen api change 2024-04-03 12:17:41 +08:00
binary-husky
ddd2fd84da fix checkbox bugs 2024-04-02 19:42:55 +08:00
binary-husky
6c90ff80ea add prompt and temperature to cookie 2024-04-02 18:02:00 +08:00
binary-husky
cb7c0703be Update requirements.txt (#1668) 2024-04-01 11:30:50 +08:00
binary-husky
5181cd441d change pip install url due to server failure (#1667) 2024-04-01 11:20:14 +08:00
binary-husky
216d4374e7 fix color list overflow 2024-04-01 00:11:32 +08:00
iluem
8af6c0cab6 Qhaoduoyu patch 1: pickle to json to increase security (#1648)
* Update theme.py

fix bugs

* Update theme.py

fix bugs

* change var names

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-25 09:54:30 +08:00
binary-husky
67ad041372 fix issue #1640 2024-03-20 18:09:37 +08:00
binary-husky
725c72229c update docker compose 2024-03-20 17:37:03 +08:00
Menghuan1918
e42ede512b Update Claude3 api request and fix some bugs (#1641)
* Update version to 3.74

* Add support for Yi Model API (#1635)

* Update to support the 01.AI (Yi) models

* Remove newbing

* Modify config

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update Claude request to HTTP type

* Update for endpoint

* Add support for other types of pictures

* Update pip packages

* Fix console_slience issue during error handling

* revert version changes

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-20 17:22:23 +08:00
binary-husky
84ccc9e64c fix claude + oneapi error 2024-03-17 14:53:28 +08:00
binary-husky
c172847e19 add python annotations for toolbox functions 2024-03-16 22:54:33 +08:00
binary-husky
d166d25eb4 resolve invalid escape sequence warning
to support python3.12
2024-03-11 18:10:05 +08:00
binary-husky
516bbb1331 Update README.md 2024-03-11 17:40:16 +08:00
binary-husky
c3140ce344 merge frontier branch (#1620)
* Zhipu SDK update: adapt to the latest Zhipu SDK, support GLM4v (#1502)

* Adapt Google Gemini; improve extraction of files from user input

* Adapt to the latest Zhipu SDK; support glm-4v

* requirements.txt fix

* pending history check

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update "生成多种Mermaid图表" plugin: Separate out the file reading function (#1520)

* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update with read md file and update PROMPTS

* Return the PROMPTS as the test found that the initial version worked best

* Update Mermaid chart generation function

* version 3.71

* Resolve issue #1510

* Remove unnecessary text from sys_prompt in 解析历史输入 function

* Remove sys_prompt message in 解析历史输入 function

* Update bridge_all.py: supports gpt-4-turbo-preview (#1517)

* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Update config.py: supports gpt-4-turbo-preview (#1516)

* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Refactor 解析历史输入 function to handle file input

* Update Mermaid chart generation functionality

* rename files and functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Integrate the Mathpix OCR feature (#1468)

* Update Latex输出PDF结果.py

Used mathpix to implement translating a PDF into Chinese and recompiling the PDF

* Update config.py

add mathpix appid & appkey

* Add 'PDF翻译中文并重新编译PDF' feature to plugins.

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* fix zhipuai

* check picture

* remove glm-4 due to bug

* Modify config

* Check MATHPIX_APPID

* Remove unnecessary code and update
function_plugins dictionary

* capture non-standard token overflow

* bug fix #1524

* change mermaid style

* Support mermaid scroll zoom/shrink/reset, mouse wheel and drag (#1530)

* Support mermaid scroll zoom/shrink/reset, mouse wheel and drag

* Tweaks inconclusive; staging for now

* update

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* ver 3.72

* change live2d

* save the status of `clear btn` in cookie

* Persist frontend selections

* js ui bug fix

* reset btn bug fix

* update live2d tips

* fix missing get_token_num method

* fix live2d toggle switch

* fix persistent custom btn with cookie

* fix zhipuai feedback with core functionality

* Refactor button update and clean up functions

* trailing space removal

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY
configuration

* Prompt fix: improve the mind-map prompts (#1537)

* Adapt Google Gemini; improve extraction of files from user input

* Improve the mind-map prompts

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY
configuration

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Improve the "PDF翻译中文并重新编译PDF" (translate PDF into Chinese and recompile) plugin (#1602)

* Add gemini_endpoint to API_URL_REDIRECT (#1560)

* Add gemini_endpoint to API_URL_REDIRECT

* Update gemini-pro and gemini-pro-vision model_info
endpoints

* Update to support new claude models (#1606)

* Add anthropic library and update claude models

* Update bridge_claude.py, adding support for image input; fixed some bugs.

* Add the Claude_3_Models variable to limit the number of images

* Refactor code to improve readability and
maintainability

* minor claude bug fix

* more flexible one-api support

* reformat config

* fix one-api new access bug

* dummy

* compat non-standard api

* version 3.73

---------

Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
2024-03-11 17:26:09 +08:00
binary-husky
cd18663800 compat non-standard api - 2 2024-03-10 17:13:54 +08:00
binary-husky
dbf1322836 compat non-standard api 2024-03-10 17:07:59 +08:00
XIao
98dd3ae1c0 Moonshot: add available models in config.py (#1603)
* Support the Moonshot API

* Fix copy text

* Improve the no-UI return values; conversation history files continue to be uploaded to moonshot

* fix

* Add available-model entries to config

* add `can_multi_thread` model attr (#1598)

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-03-05 16:07:05 +08:00
binary-husky
3036709496 add can_multi_thread model attr (#1598) 2024-03-05 15:58:18 +08:00
XIao
8e9c07644f Support the Moonshot API and file-based conversations (#1597)
* Support the Moonshot API

* Fix copy text
2024-03-03 23:42:17 +08:00
binary-husky
90d96b77e6 handle qianfan chat error 2024-02-29 00:36:06 +08:00
binary-husky
66c876a9ca Update README.md 2024-02-26 22:56:09 +08:00
binary-husky
0665eb75ed Update README.md (#1581) 2024-02-26 22:52:00 +08:00
binary-husky
6b784035fa Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-02-25 21:13:56 +08:00
binary-husky
8bb3d84912 fix zip chinese file name error 2024-02-25 21:13:41 +08:00
binary-husky
a0193cf227 edit dep url 2024-02-23 13:28:49 +08:00
binary-husky
b72289bfb0 Fix missing MATHPIX_APPID and MATHPIX_APPKEY
configuration
2024-02-21 14:20:10 +08:00
Menghuan1918
bdfe3862eb Add partial translations (#1566) 2024-02-21 14:14:06 +08:00
binary-husky
dae180b9ea update spark v3.5, fix glm parallel problem 2024-02-18 14:08:35 +08:00
binary-husky
e359fff040 Fix response message bug in bridge_qianfan.py,
bridge_qwen.py, and bridge_skylark2.py
2024-02-15 00:02:24 +08:00
binary-husky
2e9b4a5770 Merge Frontier, Update to Version 3.72 (#1553)
* Zhipu SDK update: adapt to the latest Zhipu SDK, support GLM4v (#1502)

* Adapt Google Gemini; improve extraction of files from user input

* Adapt to the latest Zhipu SDK; support glm-4v

* requirements.txt fix

* pending history check

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update "生成多种Mermaid图表" plugin: Separate out the file reading function (#1520)

* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update with read md file and update PROMPTS

* Return the PROMPTS as the test found that the initial version worked best

* Update Mermaid chart generation function

* version 3.71

* Resolve issue #1510

* Remove unnecessary text from sys_prompt in 解析历史输入 function

* Remove sys_prompt message in 解析历史输入 function

* Update bridge_all.py: supports gpt-4-turbo-preview (#1517)

* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Update config.py: supports gpt-4-turbo-preview (#1516)

* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Refactor 解析历史输入 function to handle file input

* Update Mermaid chart generation functionality

* rename files and functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Integrate the Mathpix OCR feature (#1468)

* Update Latex输出PDF结果.py

Used mathpix to implement translating a PDF into Chinese and recompiling the PDF

* Update config.py

add mathpix appid & appkey

* Add 'PDF翻译中文并重新编译PDF' feature to plugins.

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* fix zhipuai

* check picture

* remove glm-4 due to bug

* Modify config

* Check MATHPIX_APPID

* Remove unnecessary code and update
function_plugins dictionary

* capture non-standard token overflow

* bug fix #1524

* change mermaid style

* Support mermaid scroll zoom/shrink/reset, mouse wheel and drag (#1530)

* Support mermaid scroll zoom/shrink/reset, mouse wheel and drag

* Tweaks inconclusive; staging for now

* update

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* ver 3.72

* change live2d

* save the status of `clear btn` in cookie

* Persist frontend selections

* js ui bug fix

* reset btn bug fix

* update live2d tips

* fix missing get_token_num method

* fix live2d toggle switch

* fix persistent custom btn with cookie

* fix zhipuai feedback with core functionality

* Refactor button update and clean up functions

---------

Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
2024-02-14 18:35:09 +08:00
binary-husky
e0c5859cf9 update Column min_width parameter 2024-02-12 23:37:31 +08:00
binary-husky
b9b1e12dc9 fix missing get_token_num method 2024-02-12 15:58:55 +08:00
binary-husky
8814026ec3 fix gradio-client version (#1548) 2024-02-09 13:25:01 +08:00
binary-husky
3025d5be45 remove jsdelivr (#1547) 2024-02-09 13:17:14 +08:00
binary-husky
6c13bb7b46 patch issue #1538 2024-02-06 17:59:09 +08:00
binary-husky
c27e559f10 match sess-* key 2024-02-06 17:51:47 +08:00
binary-husky
cdb5288f49 fix issue #1532 2024-02-02 17:47:35 +08:00
hongyi-zhao
49c6fcfe97 Update config.py: supports gpt-4-turbo-preview (#1516)
* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
2024-01-26 16:44:32 +08:00
hongyi-zhao
45fa0404eb Update bridge_all.py: supports gpt-4-turbo-preview (#1517)
* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
2024-01-26 16:36:23 +08:00
binary-husky
f889ef7625 Resolve issue #1510 2024-01-25 22:42:08 +08:00
binary-husky
a93bf4410d version 3.71 2024-01-25 22:18:43 +08:00
binary-husky
1c0764753a Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-01-25 22:05:13 +08:00
Menghuan1918
c847209ac9 Update "Generate multiple Mermaid charts" plugin with md file read (#1506)
* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update with read md file and update PROMPTS

* Return the PROMPTS as the test found that the initial version worked best

* Update Mermaid chart generation function
2024-01-24 17:44:54 +08:00
binary-husky
4f9d40c14f Remove redundant code 2024-01-24 01:42:31 +08:00
binary-husky
91926d24b7 Handle a mermaid-rendering edge case in core_functional.py 2024-01-24 01:38:06 +08:00
binary-husky
ef311c4859 localize mjs scripts 2024-01-24 01:06:58 +08:00
binary-husky
82795d3817 remove mask string feature for now (still buggy) 2024-01-24 00:44:27 +08:00
binary-husky
49e28a5a00 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-01-23 15:48:49 +08:00
binary-husky
01def2e329 Merge branch 'master' into frontier 2024-01-23 15:48:06 +08:00
Menghuan1918
2291be2b28 Update "Generate multiple Mermaid charts" plugin (#1503)
* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs
2024-01-23 15:45:34 +08:00
binary-husky
c89ec7969f fix test import err 2024-01-23 09:52:58 +08:00
Menghuan1918
1506c19834 Update crazy_functional.py with new functionality deal with PDF (#1500) 2024-01-22 14:55:39 +08:00
binary-husky
a6fdc493b7 autogen plugin bug fix 2024-01-22 00:08:04 +08:00
binary-husky
113067c6ab Merge branch 'master' into frontier 2024-01-21 23:49:20 +08:00
Menghuan1918
7b6828ab07 Plugin that generates Mermaid charts from the current conversation history (#1497)
* Add functionality to generate multiple types of Mermaid charts

* Update conditional statement in 解析历史输入 function
2024-01-21 23:41:39 +08:00
binary-husky
d818c38dfe better theme 2024-01-21 19:41:18 +08:00
binary-husky
08b4e9796e Update README.md (#1499)
* Update README.md

* Update README.md
2024-01-21 19:08:48 +08:00
binary-husky
b55d573819 auto prompt lang 2024-01-21 13:47:11 +08:00
binary-husky
06b0e800a2 Fix a small rendering bug 2024-01-21 12:19:04 +08:00
binary-husky
7bbaf05961 Merge branch 'master' into frontier 2024-01-20 22:33:41 +08:00
binary-husky
dd2a97e7a9 draw project struct with mermaid 2024-01-20 21:23:56 +08:00
binary-husky
e579006c4a add set_multi_conf 2024-01-20 18:33:35 +08:00
binary-husky
031f19b6dd Replace incorrect variable names 2024-01-20 18:28:54 +08:00
binary-husky
142b516749 gpt_academic text mask imp 2024-01-20 18:00:06 +08:00
179 files changed, with 9,872 insertions and 4,840 deletions

.gitignore (3 changes)

@@ -153,3 +153,6 @@ media
 flagged
 request_llms/ChatGLM-6b-onnx-u8s8
 .pre-commit-config.yaml
+themes/common.js.min.*.js
+test*
+objdump*

Dockerfile

@@ -12,11 +12,16 @@ RUN echo '[global]' > /etc/pip.conf && \
 echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
+# Voice-output support: of the two lines below, the first switches to the Aliyun mirror and the second installs ffmpeg; both can be removed
+RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
+RUN apt-get install ffmpeg -y
 # Enter the working directory (required)
 WORKDIR /gpt
 # Install most dependencies first, leveraging the Docker layer cache to speed up later builds (the lines below can be removed)
 COPY requirements.txt ./
 RUN pip3 install -r requirements.txt

README.md

@@ -1,7 +1,7 @@
 > [!IMPORTANT]
-> 2024.1.18: Version 3.70 released, supporting the Mermaid charting library (letting the LLM draw mind maps)
-> 2024.1.17: Welcoming GLM4; full support for Qwen, GLM, DeepseekCoder and other Chinese foundation LLMs
-> 2024.1.17: Some dependency packages are not yet compatible with Python 3.12; Python 3.11 is recommended.
+> 2024.6.1: Version 3.80 adds a secondary plugin menu (see the wiki for details)
+> 2024.5.1: Doc2x PDF-paper translation added, [see details](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
+> 2024.3.11: Full support for Qwen, GLM, DeepseekCoder and other Chinese LLMs! SoVits voice-cloning module, [see details](https://www.bilibili.com/video/BV1Rp421S7tF/)
 > 2024.1.17: When installing dependencies, please use the **pinned versions** in `requirements.txt`. Install command: `pip install -r requirements.txt`. This project is fully open source and free; subscribing to the [online service](https://github.com/binary-husky/gpt_academic/wiki/online) is a way to encourage its development.
 <br>
@@ -55,6 +55,11 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 Feature (⭐ = recently added) | Description
 --- | ---
 ⭐[Integrate new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, Tongyi Qianwen [Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu GLM4](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
+⭐Mermaid rendering support | Let GPT generate [flowcharts](https://www.bilibili.com/video/BV18c41147H9/), state diagrams, Gantt charts, pie charts, GitGraphs and more (version 3.7)
+⭐Fine-grained Arxiv paper translation ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One-click [high-quality arxiv paper translation](https://www.bilibili.com/video/BV1dz4y1v77A/), currently the best paper-translation tool
+⭐[Real-time voice input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listen to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segment sentences automatically, and find the right moment to answer
+⭐AutoGen multi-agent plugin | [Plugin] Explore emergent multi-agent intelligence with Microsoft AutoGen
+⭐Void Terminal plugin | [Plugin] Dispatch this project's other plugins directly in natural language
 Polishing, translation, code explanation | One-click polishing, translation, paper grammar checking, code explanation
 [Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Custom shortcut keys supported
 Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot reload](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -62,22 +67,17 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 Read papers and [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Plugin] One-click interpretation of a full latex/pdf paper with abstract generation
 Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or polishing of a latex paper
 Batch comment generation | [Plugin] One-click batch generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages above? That's its handiwork
+Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README.English.md) in the five languages above? That's its handiwork
-⭐Mermaid rendering support | Let GPT generate [flowcharts](https://www.bilibili.com/video/BV18c41147H9/), state diagrams, Gantt charts, pie charts, GitGraphs and more (version 3.7)
 [PDF full-text translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extract the title & abstract of a PDF paper and translate the full text (multithreaded)
 [Arxiv helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter an arxiv article URL to translate the abstract and download the PDF in one click
 One-click Latex proofreading | [Plugin] Grammarly-style grammar and spelling correction of Latex papers, with a side-by-side PDF
 [Google Scholar integration helper](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin] Given any Google Scholar search page URL, let gpt [write the related works](https://www.bilibili.com/video/BV1GP411U7Az/) for you
 Internet information aggregation + GPT | [Plugin] One click to [let GPT fetch information from the internet](https://www.bilibili.com/video/BV1om4y127ck) before answering, so information never goes stale
-⭐Fine-grained Arxiv paper translation ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One-click [high-quality arxiv paper translation](https://www.bilibili.com/video/BV1dz4y1v77A/), currently the best paper-translation tool
-⭐[Real-time voice input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listen to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segment sentences automatically, and find the right moment to answer
 Formula/image/table display | Show formulas in both [tex and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); formula and code highlighting
-⭐AutoGen multi-agent plugin | [Plugin] Explore emergent multi-agent intelligence with Microsoft AutoGen
 Dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
 [Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
 More LLM integrations, [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) supported | Newbing interface (new Bing) added; Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) introduced, supporting [LLaMA](https://github.com/facebookresearch/llama) and [PanGu-α](https://openi.org.cn/pangu/)
 ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Call all of this project's function plugins directly from Python, without the GUI (in development)
-⭐Void Terminal plugin | [Plugin] Dispatch this project's other plugins directly in natural language
 More new features (image generation, etc.) ... | See the end of this document ...
 </div>
@@ -87,6 +87,10 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
 </div>
+<div align="center">
+<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/70ff1ec5-e589-4561-a29e-b831079b37fb.gif" width="700" >
+</div>
 - All buttons are dynamically generated by reading functional.py; custom functions can be added freely, freeing the clipboard
 <div align="center">
@@ -116,6 +120,25 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <br><br>
 # Installation
+```mermaid
+flowchart TD
+    A{"Installation method"} --> W1("I. 🔑Run directly (Windows, Linux or MacOS)")
+    W1 --> W11["1. Manage dependencies with Python pip"]
+    W1 --> W12["2. Manage dependencies with Anaconda (recommended⭐)"]
+    A --> W2["II. 🐳Use Docker (Windows, Linux or MacOS)"]
+    W2 --> k1["1. Large image with all of the project's capabilities (recommended⭐)"]
+    W2 --> k2["2. Online-models-only image (GPT, GLM4, etc.)"]
+    W2 --> k3["3. Large image with online models + Latex"]
+    A --> W4["IV. 🚀Other deployment methods"]
+    W4 --> C1["1. One-click install-and-run scripts for Windows/MacOS (recommended⭐)"]
+    W4 --> C2["2. Huggingface, Sealos remote deployment"]
+    W4 --> C4["3. ... others ..."]
+```
 ### Installation method I: Run directly (Windows, Linux or MacOS)
 1. Download the project
@@ -129,7 +152,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 Configure the API KEY and other settings in `config.py`. [Notes for special network environments](https://github.com/binary-husky/gpt_academic/issues/1), [Wiki: project configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
-「 The program first checks for a private configuration file named `config_private.py` and uses its entries to override the same-name entries in `config.py`. If you understand this reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and configuring the project through it, to ensure that updates or other users cannot easily see your private configuration 」.
+「 The program first checks for a private configuration file named `config_private.py` and uses its entries to override the same-name entries in `config.py`. If you understand this reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and configuring the project through it, so that your configuration is not lost during automatic updates 」.
 「 The project can also be configured via `environment variables`; see `docker-compose.yml` or our [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) for the format. Configuration reading priority: `environment variables` > `config_private.py` > `config.py` 」.
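To make the priority chain concrete, here is a minimal sketch of the read logic described above. The helper name `read_conf` is illustrative, not the project's actual implementation (the real reader also type-converts environment-variable strings):

```python
import importlib
import os

def read_conf(name: str, default=None):
    """Sketch of the priority chain: environment variable > config_private.py > config.py."""
    if name in os.environ:
        # Highest priority. Note os.environ values are strings; the real
        # reader also converts them to the expected type.
        return os.environ[name]
    try:
        private_cfg = importlib.import_module("config_private")
        if hasattr(private_cfg, name):
            return getattr(private_cfg, name)   # private override, if present
    except ModuleNotFoundError:
        pass                                    # no config_private.py: fall through
    base_cfg = importlib.import_module("config")
    return getattr(base_cfg, name, default)     # baseline default
```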
@@ -234,8 +257,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
 # Advanced Usage
 ### I: Define new convenience buttons (academic shortcuts)
-Open `core_functional.py` in any text editor, add an entry like the following, and restart the program. (If the button already exists, it can be modified directly; prefix and suffix both support hot-editing and take effect without restarting.)
+New convenience buttons can now be added via the `custom menu` under the `appearance` menu in the UI. To define them in code instead, open `core_functional.py` in any text editor and add an entry like the following:
-For example:
 ```python
 "超级英译中": {
@@ -358,6 +380,32 @@ GPT Academic开发者QQ群`610599535`
 - Some browser translation extensions interfere with this software's frontend
 - Official Gradio currently has many compatibility issues; please **make sure to install Gradio via `requirement.txt`**
+```mermaid
+timeline LR
+    title GPT-Academic project milestones
+    section 2.x
+    1.0~2.2: Basic features: Modular function plugins introduced: Collapsible layout: Hot reload for function plugins
+    2.3~2.5: Better multithreaded interactivity: PDF full-text translation added: Movable input area: Self-update
+    2.6: Refactored plugin structure: Better interactivity: More plugins
+    section 3.x
+    3.0~3.1: chatglm support: Support for other small LLMs: Query multiple GPT models at once: Load balancing across multiple API keys
+    3.2~3.3: Richer parameter interfaces for function plugins: Conversation saving: Interpret code in any language: Query any combination of LLMs at once: Internet information aggregation
+    3.4: Arxiv paper translation added: Latex paper proofreading added
+    3.44: Official Azure support: Improved UI usability
+    3.46: Custom fine-tuned ChatGLM2 models: Real-time voice conversation
+    3.49: Alibaba DAMO Academy Tongyi Qianwen (Qwen): Shanghai AI-Lab InternLM: iFlytek Spark: Baidu Qianfan platform & Wenxin Yiyan
+    3.50: Void Terminal: Plugin categories: Improved UI: New themes
+    3.53: Dynamically selectable UI themes: Better stability: Multi-user conflicts resolved
+    3.55: Dynamic code interpreter: Rebuilt frontend: Floating windows and menu bar introduced
+    3.56: Dynamically added basic-function buttons: New PDF report summary page
+    3.57: GLM3, Spark v3: Wenxin Yiyan v4 support: Local-model concurrency bug fixed
+    3.60: AutoGen introduced
+    3.70: Mermaid rendering introduced: GPT can draw mind maps and more
+    3.80 (TODO): Improved AutoGen plugin themes: Derivative plugin design
+```
 ### III: Themes
 The theme can be changed by modifying the `THEME` option (config.py)
 1. `Chuanhu-Small-and-Beautiful` [URL](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)

check_proxy.py

@@ -47,7 +47,7 @@ def backup_and_download(current_version, remote_version):
     shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
     proxies = get_conf('proxies')
     try:    r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
-    except: r = requests.get('https://public.gpt-academic.top/publish/master.zip', proxies=proxies, stream=True)
+    except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
     zip_file_path = backup_dir+'/master.zip'
     with open(zip_file_path, 'wb+') as f:
         f.write(r.content)
@@ -71,7 +71,7 @@ def patch_and_restart(path):
     import sys
     import time
     import glob
-    from colorful import print亮黄, print亮绿, print亮红
+    from shared_utils.colorful import print亮黄, print亮绿, print亮红
     # if not using config_private, move origin config.py as config_private.py
     if not os.path.exists('config_private.py'):
         print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
@@ -81,7 +81,7 @@ def patch_and_restart(path):
     dir_util.copy_tree(path_new_version, './')
     print亮绿('代码已经更新,即将更新pip包依赖……')
     for i in reversed(range(5)): time.sleep(1); print(i)
     try:
         import subprocess
         subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
     except:
@@ -113,7 +113,7 @@ def auto_update(raise_error=False):
     import json
     proxies = get_conf('proxies')
     try:    response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
-    except: response = requests.get("https://public.gpt-academic.top/publish/version", proxies=proxies, timeout=5)
+    except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
     remote_json_data = json.loads(response.text)
     remote_version = remote_json_data['version']
     if remote_json_data["show_feature"]:
@@ -124,7 +124,7 @@ def auto_update(raise_error=False):
         current_version = f.read()
     current_version = json.loads(current_version)['version']
     if (remote_version - current_version) >= 0.01-1e-5:
-        from colorful import print亮黄
+        from shared_utils.colorful import print亮黄
         print亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
         print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
         user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
@@ -159,7 +159,7 @@ def warm_up_modules():
     enc.encode("模块预热", disallowed_special=())
     enc = model_info["gpt-4"]['tokenizer']
     enc.encode("模块预热", disallowed_special=())

 def warm_up_vectordb():
     print('正在执行一些模块的预热 ...')
     from toolbox import ProxyNetworkActivate
@@ -167,7 +167,7 @@ def warm_up_vectordb():
     import nltk
     with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")

 if __name__ == '__main__':
     import os
     os.environ['no_proxy'] = '*'  # avoid unexpected contamination from proxy networks
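One detail worth noting in the hunks above is the float-based version check in auto_update. Stripped of the network fetch, it behaves like this self-contained sketch (the version numbers are made up):

```python
import json

# Stand-ins for the network response and the local version file read by auto_update.
remote_json_data = json.loads('{"version": 3.80, "show_feature": false}')
remote_version = remote_json_data['version']    # e.g. 3.80
current_version = 3.76                          # parsed from the local version file in the real code

# "0.01 - 1e-5" means: at least one 0.01 version step ahead, with a small
# epsilon so that float rounding cannot mask a genuine bump.
if (remote_version - current_version) >= 0.01 - 1e-5:
    print(f'New version available: {remote_version} (current: {current_version})')
```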

config.py (144 changes)

@@ -2,8 +2,8 @@
 以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
 读取优先级:环境变量 > config_private.py > config.py
 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
 All the following configurations also support using environment variables to override,
 and the environment variable configuration format can be seen in docker-compose.yml.
 Configuration reading priority: environment variable > config_private.py > config.py
 """
@@ -30,11 +30,40 @@ if USE_PROXY:
 else:
     proxies = None
-# ------------------------------------ The settings below can improve the experience, but in most cases do not need to be modified ------------------------------------
+# [step 3]>> Model selection (note: LLM_MODEL is the default selected model; it *must* be included in the AVAIL_LLM_MODELS list)
+LLM_MODEL = "gpt-3.5-turbo-16k" # options ↓↓↓
+AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
+                    "gpt-4o", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
+                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
+                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
+                    "gemini-pro", "chatglm3"
+                    ]
+# --- --- --- ---
+# P.S. other available models include:
+# AVAIL_LLM_MODELS = [
+#   "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
+#   "qianfan", "deepseekcoder",
+#   "spark", "sparkv2", "sparkv3", "sparkv3.5",
+#   "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
+#   "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
+#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
+#   "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
+#   "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
+#   "deepseek-chat" ,"deepseek-coder",
+#   "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
+# ]
+# --- --- --- ---
+# In addition, when connecting via one-api/vllm/ollama,
+# you can use the prefixes "one-api-*", "vllm-*", "ollama-*" to directly use models integrated in a non-standard way, for example:
+# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-phi3(max_token=4096)"]
+# --- --- --- ---
+# --------------- The settings below can improve the experience ---------------
 # Redirect URLs to swap out the API_URL (high-risk setting! Do not modify under normal circumstances! By changing it, you fully expose your API-KEY and conversation privacy to the middleman you designate!)
 # Format: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "fill in the redirected api.openai.com URL here"}
-# Example: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions"}
+# Example: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions", "http://localhost:11434/api/chat": "fill in your ollama URL here"}
 API_URL_REDIRECT = {}
@@ -66,7 +95,7 @@ LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下
 # Dark mode / light mode
 DARK_MODE = True
 # After sending a request to OpenAI, how long to wait before deciding it has timed out
@@ -77,6 +106,10 @@ TIMEOUT_SECONDS = 30
 WEB_PORT = -1
+# Whether to open a browser page automatically
+AUTO_OPEN_BROWSER = True
 # Retry limit when OpenAI does not respond (network lag, proxy failure, expired key)
 MAX_RETRY = 2
@@ -85,20 +118,6 @@ MAX_RETRY = 2
 DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
-# Model selection (note: LLM_MODEL is the default selected model; it *must* be included in the AVAIL_LLM_MODELS list)
-LLM_MODEL = "gpt-3.5-turbo" # options ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
-                    "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
-                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]
-# P.S. other available models include [
-# "moss", "qwen-turbo", "qwen-plus", "qwen-max"
-# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
-# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
-# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
-# ]
 # Define which models the "query multiple GPT models" plugin should use; choose from AVAIL_LLM_MODELS and separate models with `&`, e.g. "gpt-3.5-turbo&chatglm3&azure-gpt-4"
 MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
@@ -116,7 +135,7 @@ DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
 # Baidu Qianfan (LLM_MODEL="qianfan")
 BAIDU_CLOUD_API_KEY = ''
 BAIDU_CLOUD_SECRET_KEY = ''
-BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'  # options: "ERNIE-Bot-4" (Wenxin 4.0), "ERNIE-Bot" (Wenxin Yiyan), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat"
+BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'  # options: "ERNIE-Bot-4" (Wenxin 4.0), "ERNIE-Bot" (Wenxin Yiyan), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
 # To use a fine-tuned ChatGLM2 model, set LLM_MODEL="chatglmft" and specify the model path here
@@ -127,6 +146,7 @@ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b
 LOCAL_MODEL_DEVICE = "cpu"  # options: "cuda"
 LOCAL_MODEL_QUANT = "FP16"  # default "FP16"; "INT4" enables INT4 quantization, "INT8" enables INT8 quantization
 # Number of parallel gradio threads (no need to modify)
 CONCURRENT_COUNT = 100
@@ -144,7 +164,8 @@ ADD_WAIFU = False
 AUTHENTICATION = []
-# To run under a secondary path (normally, do not modify!!) (requires matching changes to main.py to take effect!)
+# To run under a secondary path (normally, do not modify!!)
+# (Example: CUSTOM_PATH = "/gpt_academic" lets the software run at http://ip:port/gpt_academic/.)
 CUSTOM_PATH = "/"
@@ -158,7 +179,7 @@ API_ORG = ""
 # To use Slack Claude, see request_llms/README.md for a detailed tutorial
 SLACK_CLAUDE_BOT_ID = ''
 SLACK_CLAUDE_USER_TOKEN = ''
@@ -172,14 +193,8 @@ AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.
 AZURE_CFG_ARRAY = {}
-# Use Newbing (not recommended; will be removed in the future)
-NEWBING_STYLE = "creative"  # ["creative", "balanced", "precise"]
-NEWBING_COOKIES = """
-put your new bing cookies here
-"""
-# Aliyun real-time speech recognition; fairly difficult to configure and recommended for advanced users only; see https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
+# Aliyun real-time speech recognition; fairly difficult to configure
+# See https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
 ENABLE_AUDIO = False
 ALIYUN_TOKEN=""     # e.g. f37f30e0f9934c34a992f6f64f7eba4f
 ALIYUN_APPKEY=""    # e.g. RoPlZrM88DnAFkZK
@@ -187,6 +202,12 @@ ALIYUN_ACCESSKEY="" # (无需填写)
 ALIYUN_SECRET=""    # (no need to fill in)
+# Address of the GPT-SOVITS text-to-speech service (reads the language model's output aloud)
+TTS_TYPE = "EDGE_TTS" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
+GPT_SOVITS_URL = ""
+EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
 # Connect the iFlytek Spark LLM https://console.xfyun.cn/services/iat
 XFYUN_APPID = "00000000"
 XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
@@ -195,19 +216,32 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
 # Connect the Zhipu LLM
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "glm-4" # options: "glm-3-turbo" "glm-4"
+ZHIPUAI_MODEL = "" # This option is deprecated and no longer needs to be filled in
-# # Volcano Engine YUNQUE LLM
-# YUNQUE_SECRET_KEY = ""
-# YUNQUE_ACCESS_KEY = ""
-# YUNQUE_MODEL = ""
 # Claude API KEY
 ANTHROPIC_API_KEY = ""
+# Moonshot API KEY
+MOONSHOT_API_KEY = ""
+# 01.AI (Yi Model) API KEY
+YIMODEL_API_KEY = ""
+# DeepSeek API KEY; the default request address is "https://api.deepseek.com/v1/chat/completions"
+DEEPSEEK_API_KEY = ""
+# Mathpix provides PDF OCR, but requires an account
+MATHPIX_APPID = ""
+MATHPIX_APPKEY = ""
+# DOC2X PDF parsing service; register an account and get an API KEY: https://doc2x.noedgeai.com/login
+DOC2X_API_KEY = ""
 # Custom API KEY format
 CUSTOM_API_KEY_PATTERN = ""
@@ -224,8 +258,8 @@ HUGGINGFACE_ACCESS_TOKEN = "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"
 # How to get one: duplicate the space https://huggingface.co/spaces/qingxu98/grobid, set it to public, then GROBID_URL = "https://(your-hf-username e.g. qingxu98)-(your-space-name e.g. grobid).hf.space"
 GROBID_URLS = [
     "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
     "https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
     "https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
 ]
@@ -246,7 +280,7 @@ PATH_LOGGING = "gpt_log"
 # Besides connecting to OpenAI, the other scenarios in which the proxy may be used; do not modify
 WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
                      "Warmup_Modules", "Nougat_Download", "AutoGen"]
@@ -261,7 +295,11 @@ PLUGIN_HOT_RELOAD = False
 # Maximum number of custom buttons
 NUM_CUSTOM_BASIC_BTN = 4
 """
+--------------- Configuration dependency notes ---------------
 Diagram of online LLM configuration dependencies
 ├── OpenAI models such as "gpt-3.5-turbo"
@@ -285,7 +323,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── XFYUN_API_SECRET
 │   └── XFYUN_API_KEY
-├── Claude models such as "claude-1-100k"
+├── Claude models such as "claude-3-opus-20240229"
 │   └── ANTHROPIC_API_KEY
 ├── "stack-claude"
@@ -297,9 +335,11 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── BAIDU_CLOUD_API_KEY
 │   └── BAIDU_CLOUD_SECRET_KEY
-├── "zhipuai" Zhipu AI LLM (chatglm_turbo)
-│   ├── ZHIPUAI_API_KEY
-│   └── ZHIPUAI_MODEL
+├── "glm-4", "glm-3-turbo", "zhipuai" Zhipu AI LLMs
+│   └── ZHIPUAI_API_KEY
+├── 01.AI (Yi Model) LLMs such as "yi-34b-chat-0205", "yi-34b-chat-200k"
+│   └── YIMODEL_API_KEY
 ├── Tongyi Qianwen (Qwen) LLMs such as "qwen-turbo"
 │   └── DASHSCOPE_API_KEY
@@ -307,11 +347,12 @@ NUM_CUSTOM_BASIC_BTN = 4
├── "Gemini" ├── "Gemini"
│ └── GEMINI_API_KEY │ └── GEMINI_API_KEY
└── "newbing" Newbing接口不再稳定,不推荐使用 └── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
├── NEWBING_STYLE ├── AVAIL_LLM_MODELS
── NEWBING_COOKIES ── API_KEY
└── API_URL_REDIRECT
本地大模型示意图 本地大模型示意图
├── "chatglm3" ├── "chatglm3"
@@ -351,6 +392,9 @@ NUM_CUSTOM_BASIC_BTN = 4
│ └── ALIYUN_SECRET
└── PDF文档精准解析
-    └── GROBID_URLS
+    ├── GROBID_URLS
+    ├── MATHPIX_APPID
+    └── MATHPIX_APPKEY
"""

查看文件

@@ -3,18 +3,27 @@
# 'stop' 颜色对应 theme.py 中的 color_er
import importlib
from toolbox import clear_line_break
from toolbox import apply_gpt_academic_string_mask_langbased
from toolbox import build_gpt_academic_masked_string_langbased
from textwrap import dedent
def get_core_functions():
return {
"英语学术润色": { "学术语料润色": {
# [1*] 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等 # [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
"Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " # 这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " "Prefix": build_gpt_academic_masked_string_langbased(
r"Firstly, you should provide the polished paragraph. " text_show_english=
r"Secondly, you should list all your modification and explain the reasons to do so in markdown table." + "\n\n", r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
# [2*] 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来 r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
r"Firstly, you should provide the polished paragraph. "
r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
text_show_chinese=
r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
) + "\n\n",
# [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
"Suffix": r"", "Suffix": r"",
# [3] 按钮颜色 (可选参数,默认 secondary)
"Color": r"secondary",
@@ -24,16 +33,20 @@ def get_core_functions():
"AutoClearHistory": False, "AutoClearHistory": False,
# [6] 文本预处理 (可选参数,默认 None,举例写个函数移除所有的换行符 # [6] 文本预处理 (可选参数,默认 None,举例写个函数移除所有的换行符
"PreProcess": None, "PreProcess": None,
# [7] 模型选择 (可选参数。如不设置,则使用当前全局模型;如设置,则用指定模型覆盖全局模型。)
# "ModelOverride": "gpt-3.5-turbo", # 主要用途:强制点击此基础功能按钮时,使用指定的模型。
},
"总结绘制脑图": { "总结绘制脑图": {
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等 # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
"Prefix": r"", "Prefix": '''"""\n\n''',
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来 # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
"Suffix": "Suffix":
dedent("\n"+r''' # dedent() 函数用于去除多行字符串的缩进
============================== dedent("\n\n"+r'''
"""
使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如 使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如
以下是对以上文本的总结,以mermaid flowchart的形式展示 以下是对以上文本的总结,以mermaid flowchart的形式展示
@@ -46,15 +59,15 @@ def get_core_functions():
C --> |"箭头名2"| F["节点名6"]
```
-警告:
+注意:
(1)使用中文
(2)节点名字使用引号包裹,如["Laptop"]
(3)`|` 和 `"`之间不要存在空格
(4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
'''),
},
"查找语法错误": {
"Prefix": r"Help me ensure that the grammar and the spelling is correct. "
r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
@@ -74,48 +87,56 @@ def get_core_functions():
"Suffix": r"", "Suffix": r"",
"PreProcess": clear_line_break, # 预处理:清除换行符 "PreProcess": clear_line_break, # 预处理:清除换行符
}, },
"中译英": { "中译英": {
"Prefix": r"Please translate following sentence to English:" + "\n\n", "Prefix": r"Please translate following sentence to English:" + "\n\n",
"Suffix": r"", "Suffix": r"",
}, },
"学术英中互译": { "学术英中互译": {
"Prefix": r"I want you to act as a scientific English-Chinese translator, " + "Prefix": build_gpt_academic_masked_string_langbased(
r"I will provide you with some paragraphs in one language " + text_show_chinese=
r"and your task is to accurately and academically translate the paragraphs only into the other language. " + r"I want you to act as a scientific English-Chinese translator, "
r"Do not repeat the original provided paragraphs after translation. " + r"I will provide you with some paragraphs in one language "
r"You should use artificial intelligence tools, " + r"and your task is to accurately and academically translate the paragraphs only into the other language. "
r"such as natural language processing, and rhetorical knowledge " + r"Do not repeat the original provided paragraphs after translation. "
r"and experience about effective writing techniques to reply. " + r"You should use artificial intelligence tools, "
r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", r"such as natural language processing, and rhetorical knowledge "
r"and experience about effective writing techniques to reply. "
r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
text_show_english=
r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
r"你需要翻译的文本如下:"
) + "\n\n",
"Suffix": r"", "Suffix": r"",
}, },
"英译中": { "英译中": {
"Prefix": r"翻译成地道的中文:" + "\n\n", "Prefix": r"翻译成地道的中文:" + "\n\n",
"Suffix": r"", "Suffix": r"",
"Visible": False, "Visible": False,
}, },
"找图片": { "找图片": {
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片" + "\n\n", r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片" + "\n\n",
"Suffix": r"", "Suffix": r"",
"Visible": False, "Visible": False,
}, },
"解释代码": { "解释代码": {
"Prefix": r"请解释以下代码:" + "\n```\n", "Prefix": r"请解释以下代码:" + "\n```\n",
"Suffix": "\n```\n", "Suffix": "\n```\n",
}, },
"参考文献转Bib": { "参考文献转Bib": {
"Prefix": r"Here are some bibliography items, please transform them into bibtex style." "Prefix": r"Here are some bibliography items, please transform them into bibtex style."
r"Note that, reference styles maybe more than one kind, you should transform each item correctly." r"Note that, reference styles maybe more than one kind, you should transform each item correctly."
@@ -140,7 +161,11 @@ def handle_core_functionality(additional_fn, inputs, history, chatbot):
if "PreProcess" in core_functional[additional_fn]: if "PreProcess" in core_functional[additional_fn]:
if core_functional[additional_fn]["PreProcess"] is not None: if core_functional[additional_fn]["PreProcess"] is not None:
inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
-inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+# 为字符串加上上面定义的前缀和后缀。
+inputs = apply_gpt_academic_string_mask_langbased(
+string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
+lang_reference = inputs,
+)
if core_functional[additional_fn].get("AutoClearHistory", False):
history = []
return inputs, history
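Judging from the call sites above, `build_gpt_academic_masked_string_langbased` packs an English and a Chinese variant of a prompt into one string, and `apply_gpt_academic_string_mask_langbased` later keeps only the variant matching the language of `lang_reference` (the user's raw input). The real helpers live in toolbox.py and are not part of this diff; the sentinel format below is purely an assumption to illustrate the contract. Note the deliberate swap in 学术英中互译: a Chinese input should receive the into-English prompt, so the English-facing text sits in `text_show_chinese`.

```python
# Toy re-implementation of the inferred contract (assumption: the real toolbox
# helpers use some opaque marker scheme; the <EN>/<ZH> sentinels are made up here).
import re

def build_masked(text_show_english: str, text_show_chinese: str) -> str:
    # pack both language variants into a single string
    return f"<EN>{text_show_english}</EN><ZH>{text_show_chinese}</ZH>"

def apply_mask(string: str, lang_reference: str) -> str:
    # keep the variant matching the dominant language of lang_reference
    is_chinese = re.search(r'[\u4e00-\u9fff]', lang_reference) is not None
    keep, drop = ('ZH', 'EN') if is_chinese else ('EN', 'ZH')
    string = re.sub(rf"<{drop}>.*?</{drop}>", "", string, flags=re.S)
    return string.replace(f"<{keep}>", "").replace(f"</{keep}>", "")

prefix = build_masked(text_show_english="Polish this paragraph:",
                      text_show_chinese="请润色以下段落:")
print(apply_mask(prefix + "你好,世界", lang_reference="你好,世界"))
# -> 请润色以下段落:你好,世界
```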

查看文件

@@ -15,27 +15,35 @@ def get_crazy_functions():
from crazy_functions.解析项目源代码 import 解析一个Java项目
from crazy_functions.解析项目源代码 import 解析一个前端项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.高级功能函数模板 import Demo_Wrap
from crazy_functions.Latex全文润色 import Latex英文润色
from crazy_functions.询问多个大语言模型 import 同时问询
from crazy_functions.解析项目源代码 import 解析一个Lua项目
from crazy_functions.解析项目源代码 import 解析一个CSharp项目
from crazy_functions.总结word文档 import 总结word文档
from crazy_functions.解析JupyterNotebook import 解析ipynb文件
-from crazy_functions.对话历史存档 import 对话历史存档
-from crazy_functions.对话历史存档 import 载入对话历史存档
-from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
+from crazy_functions.Conversation_To_File import 载入对话历史存档
+from crazy_functions.Conversation_To_File import 对话历史存档
+from crazy_functions.Conversation_To_File import Conversation_To_File_Wrap
+from crazy_functions.Conversation_To_File import 删除所有本地对话历史记录
from crazy_functions.辅助功能 import 清除缓存
-from crazy_functions.批量Markdown翻译 import Markdown英译中
+from crazy_functions.Markdown_Translate import Markdown英译中
from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
-from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
+from crazy_functions.PDF_Translate import 批量翻译PDF文档
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
from crazy_functions.Latex全文润色 import Latex中文润色
from crazy_functions.Latex全文润色 import Latex英文纠错
-from crazy_functions.Latex全文翻译 import Latex中译英
-from crazy_functions.Latex全文翻译 import Latex英译中
-from crazy_functions.批量Markdown翻译 import Markdown中译英
+from crazy_functions.Markdown_Translate import Markdown中译英
from crazy_functions.虚空终端 import 虚空终端
from crazy_functions.生成多种Mermaid图表 import Mermaid_Gen
from crazy_functions.PDF_Translate_Wrap import PDF_Tran
from crazy_functions.Latex_Function import Latex英文纠错加PDF对比
from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF
from crazy_functions.Latex_Function import PDF翻译中文并重新编译PDF
from crazy_functions.Latex_Function_Wrap import Arxiv_Localize
from crazy_functions.Latex_Function_Wrap import PDF_Localize
function_plugins = {
"虚空终端": {
@@ -71,6 +79,14 @@ def get_crazy_functions():
"Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数", "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
"Function": HotReload(清除缓存), "Function": HotReload(清除缓存),
}, },
"生成多种Mermaid图表(从当前对话或路径(.pdf/.md/.docx)中生产图表)": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
"Function": None,
"Class": Mermaid_Gen
},
"批量总结Word文档": { "批量总结Word文档": {
"Group": "学术", "Group": "学术",
"Color": "stop", "Color": "stop",
@@ -182,7 +198,8 @@ def get_crazy_functions():
"Group": "对话", "Group": "对话",
"AsButton": True, "AsButton": True,
"Info": "保存当前的对话 | 不需要输入参数", "Info": "保存当前的对话 | 不需要输入参数",
"Function": HotReload(对话历史存档), "Function": HotReload(对话历史存档), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
"Class": Conversation_To_File_Wrap # 新一代插件需要注册Class
},
"[多线程Demo]解析此项目本身(源码自译解)": {
"Group": "对话|编程",
@@ -194,14 +211,16 @@ def get_crazy_functions():
"Group": "对话", "Group": "对话",
"AsButton": True, "AsButton": True,
"Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数", "Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
"Function": HotReload(高阶功能模板函数), "Function": None,
"Class": Demo_Wrap, # 新一代插件需要注册Class
}, },
"精准翻译PDF论文": { "精准翻译PDF论文": {
"Group": "学术", "Group": "学术",
"Color": "stop", "Color": "stop",
"AsButton": True, "AsButton": True,
"Info": "精准翻译PDF论文为中文 | 输入参数为路径", "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
"Function": HotReload(批量翻译PDF文档), "Function": HotReload(批量翻译PDF文档), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
"Class": PDF_Tran, # 新一代插件需要注册Class
},
"询问多个GPT模型": {
"Group": "对话",
@@ -237,13 +256,7 @@ def get_crazy_functions():
"Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包", "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
"Function": HotReload(Latex英文润色), "Function": HotReload(Latex英文润色),
}, },
"英文Latex项目全文纠错输入路径或上传压缩包": {
"Group": "学术",
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
"Function": HotReload(Latex英文纠错),
},
"中文Latex项目全文润色输入路径或上传压缩包": { "中文Latex项目全文润色输入路径或上传压缩包": {
"Group": "学术", "Group": "学术",
"Color": "stop", "Color": "stop",
@@ -252,6 +265,14 @@ def get_crazy_functions():
"Function": HotReload(Latex中文润色), "Function": HotReload(Latex中文润色),
}, },
# 已经被新插件取代
# "英文Latex项目全文纠错输入路径或上传压缩包": {
# "Group": "学术",
# "Color": "stop",
# "AsButton": False, # 加入下拉菜单中
# "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
# "Function": HotReload(Latex英文纠错),
# },
# 已经被新插件取代
# "Latex项目全文中译英输入路径或上传压缩包": { # "Latex项目全文中译英输入路径或上传压缩包": {
# "Group": "学术", # "Group": "学术",
# "Color": "stop", # "Color": "stop",
@@ -274,8 +295,52 @@ def get_crazy_functions():
"Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包", "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
"Function": HotReload(Markdown中译英), "Function": HotReload(Markdown中译英),
}, },
"Latex英文纠错+高亮修正位置 [需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
"Function": HotReload(Latex英文纠错加PDF对比),
},
"Arxiv论文精细翻译输入arxivID[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
"Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
"Class": Arxiv_Localize, # 新一代插件需要注册Class
},
"本地Latex论文精细翻译上传Latex项目[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "本地Latex论文精细翻译 | 输入参数是路径",
"Function": HotReload(Latex翻译中文并重新编译PDF),
},
"PDF翻译中文并重新编译PDF上传PDF[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
"Function": HotReload(PDF翻译中文并重新编译PDF), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
"Class": PDF_Localize # 新一代插件需要注册Class
}
}
# -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
try:
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
@@ -448,7 +513,7 @@ def get_crazy_functions():
print("Load function plugin failed") print("Load function plugin failed")
try: try:
from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言 from crazy_functions.Markdown_Translate import Markdown翻译指定语言
function_plugins.update( function_plugins.update(
{ {
@@ -521,56 +586,6 @@ def get_crazy_functions():
print(trimmed_format_exc())
print("Load function plugin failed")
try:
from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
function_plugins.update(
{
"Latex英文纠错+高亮修正位置 [需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
"Function": HotReload(Latex英文纠错加PDF对比),
}
}
)
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
function_plugins.update(
{
"Arxiv论文精细翻译输入arxivID[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
"Function": HotReload(Latex翻译中文并重新编译PDF),
}
}
)
function_plugins.update(
{
"本地Latex论文精细翻译上传Latex项目[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "本地Latex论文精细翻译 | 输入参数是路径",
"Function": HotReload(Latex翻译中文并重新编译PDF),
}
}
)
except:
print(trimmed_format_exc())
print("Load function plugin failed")
try:
from toolbox import get_conf from toolbox import get_conf
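The recurring pattern in this file: each migrated plugin keeps its legacy `"Function"` entry (which, per the comments above, is only used by 虚空终端 once a Class is registered) and adds a `"Class"` pointing at a `GptAcademicPluginTemplate` subclass. A hedged sketch of a minimal new-generation registration, with illustrative names, mirroring `Conversation_To_File_Wrap` and `PDF_Tran` above:

```python
# Sketch of a new-generation plugin registration (names are illustrative).
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty

class Example_Wrap(GptAcademicPluginTemplate):
    def define_arg_selection_menu(self):
        # one text box, auto-synced from the main input field
        return {"main_input": ArgProperty(title="输入", description="插件的主要输入",
                                          default_value="", type="string").model_dump_json()}
    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # delegate to a hypothetical legacy generator-style plugin function
        yield from legacy_generator_plugin(txt, llm_kwargs, plugin_kwargs, chatbot,
                                           history, system_prompt, user_request)

function_plugins.update({
    "示例插件": {
        "Group": "对话",
        "AsButton": False,
        "Function": None,        # legacy interface; only 虚空终端 still uses it
        "Class": Example_Wrap,   # new-generation plugins register a Class
    }
})
```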

查看文件

@@ -1,232 +0,0 @@
from collections.abc import Callable, Iterable, Mapping
from typing import Any
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc
from toolbox import promote_file_to_downloadzone, get_log_folder
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping, try_install_deps
from multiprocessing import Process, Pipe
import os
import time
templete = """
```python
import ... # Put dependencies here, e.g. import numpy as np
class TerminalFunction(object): # Do not change the name of the class, The name of the class must be `TerminalFunction`
def run(self, path): # The name of the function must be `run`, it takes only a positional argument.
# rewrite the function you have just written here
...
return generated_file_path
```
"""
def inspect_dependency(chatbot, history):
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return True
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return matches[0].strip('python') # code block
for match in matches:
if 'class TerminalFunction' in match:
return match.strip('python') # code block
raise RuntimeError("GPT is not generating proper code.")
def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
# 输入
prompt_compose = [
f'Your job:\n'
f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
f"2. You should write this function to perform following task: " + txt + "\n",
f"3. Wrap the output python function with markdown codeblock."
]
i_say = "".join(prompt_compose)
demo = []
# 第一步
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
sys_prompt= r"You are a programmer."
)
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# 第二步
prompt_compose = [
"If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
templete
]
i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt= r"You are a programmer."
)
code_to_return = gpt_say
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# # 第三步
# i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
# i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=inputs_show_user,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
# # # 第三步
# i_say = "Show me how to use `pip` to install packages to run the code above. "
# i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=i_say,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
installation_advance = ""
return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
def make_module(code):
module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
with open(f'{get_log_folder()}/{module_file}.py', 'w', encoding='utf8') as f:
f.write(code)
def get_class_name(class_string):
import re
# Use regex to extract the class name
class_name = re.search(r'class (\w+)\(', class_string).group(1)
return class_name
class_name = get_class_name(code)
return f"{get_log_folder().replace('/', '.')}.{module_file}->{class_name}"
def init_module_instance(module):
import importlib
module_, class_ = module.split('->')
init_f = getattr(importlib.import_module(module_), class_)
return init_f()
def for_immediate_show_off_when_possible(file_type, fp, chatbot):
if file_type in ['png', 'jpg']:
image_path = os.path.abspath(fp)
chatbot.append(['这是一张图片, 展示如下:',
f'本地文件地址: <br/>`{image_path}`<br/>'+
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
])
return chatbot
def subprocess_worker(instance, file_path, return_dict):
return_dict['result'] = instance.run(file_path)
def have_any_recent_upload_files(chatbot):
_5min = 5 * 60
if not chatbot: return False # chatbot is None
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
if not most_recent_uploaded: return False # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
else: return False # most_recent_uploaded is too old
def get_recent_file_prompt_support(chatbot):
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded['path']
return path
@CatchException
def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
raise NotImplementedError
# 清空历史,以免输入溢出
history = []; clear_file_downloadzone(chatbot)
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if have_any_recent_upload_files(chatbot):
file_path = get_recent_file_prompt_support(chatbot)
else:
chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 读取文件
if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
file_path = recently_uploaded_files[-1]
file_type = file_path.split('.')[-1]
# 粗心检查
if is_the_upload_folder(txt):
chatbot.append([
"...",
f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始干正事
for j in range(5): # 最多重试5次
try:
code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
code = get_code_block(code)
res = make_module(code)
instance = init_module_instance(res)
break
except Exception as e:
chatbot.append([f"{j}次代码生成尝试,失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 代码生成结束, 开始执行
try:
import multiprocessing
manager = multiprocessing.Manager()
return_dict = manager.dict()
p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
# only has 10 seconds to run
p.start(); p.join(timeout=10)
if p.is_alive(): p.terminate(); p.join()
p.close()
res = return_dict['result']
# res = instance.run(file_path)
except Exception as e:
chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
# chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 顺利完成,收尾
res = str(res)
if os.path.exists(res):
chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
else:
chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
"""
测试:
裁剪图像,保留下半部分
交换图像的蓝色通道和红色通道
将图像转为灰度图像
将csv文件转excel表格
"""

查看文件

@@ -0,0 +1,222 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
import re
f_prefix = 'GPT-Academic对话存档'
def write_chat_to_file(chatbot, history=None, file_name=None):
"""
将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
"""
import os
import time
from themes.theme import advanced_css
if file_name is None:
file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)
with open(fp, 'w', encoding='utf8') as f:
from textwrap import dedent
form = dedent("""
<!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
<body>
<div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
<div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
<div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
{CHAT_PREVIEW}
<div></div>
<div></div>
<div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话(原始数据)</div>
{HISTORY_PREVIEW}
</div>
</div>
<div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
</body>
""")
qa_from = dedent("""
<div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
<div class="Question" style="border-radius: 2px;">{QUESTION}</div>
<hr color="blue" style="border-top: dotted 2px #ccc;">
<div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
</div>
""")
history_from = dedent("""
<div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
<div class="entry" style="border-radius: 2px;">{ENTRY}</div>
</div>
""")
CHAT_PREVIEW_BUF = ""
for i, contents in enumerate(chatbot):
question, answer = contents[0], contents[1]
if question is None: question = ""
try: question = str(question)
except: question = ""
if answer is None: answer = ""
try: answer = str(answer)
except: answer = ""
CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)
HISTORY_PREVIEW_BUF = ""
for h in history:
HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)
html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)
f.write(html_content)
promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
return '对话历史写入:' + fp
def gen_file_preview(file_name):
try:
with open(file_name, 'r', encoding='utf8') as f:
file_content = f.read()
# pattern to match the text between <head> and </head>
pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
file_content = re.sub(pattern, '', file_content)
html, history = file_content.split('<hr color="blue"> \n\n 对话数据 (无渲染):\n')
history = history.strip('<code>')
history = history.strip('</code>')
history = history.split("\n>>>")
return list(filter(lambda x:x!="", history))[0][:100]
except:
return ""
def read_file_to_chat(chatbot, history, file_name):
with open(file_name, 'r', encoding='utf8') as f:
file_content = f.read()
from bs4 import BeautifulSoup
soup = BeautifulSoup(file_content, 'lxml')
# 提取QaBox信息
chatbot.clear()
qa_box_list = []
qa_boxes = soup.find_all("div", class_="QaBox")
for box in qa_boxes:
question = box.find("div", class_="Question").get_text(strip=False)
answer = box.find("div", class_="Answer").get_text(strip=False)
qa_box_list.append({"Question": question, "Answer": answer})
chatbot.append([question, answer])
# 提取historyBox信息
history_box_list = []
history_boxes = soup.find_all("div", class_="historyBox")
for box in history_boxes:
entry = box.find("div", class_="entry").get_text(strip=False)
history_box_list.append(entry)
history = history_box_list
chatbot.append([None, f"[Local Message] 载入对话{len(qa_box_list)}条,上下文{len(history)}条。"])
return chatbot, history
@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
file_name = plugin_kwargs.get("file_name", None)
if (file_name is not None) and (file_name != "") and (not file_name.endswith('.html')): file_name += '.html'
else: file_name = None
chatbot.append((None, f"[Local Message] {write_chat_to_file(chatbot, history, file_name)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
class Conversation_To_File_Wrap(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`file_name`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
"""
gui_definition = {
"file_name": ArgProperty(title="保存文件名", description="输入对话存档文件名,留空则使用时间作为文件名", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
yield from 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
def hide_cwd(str):
import os
current_path = os.getcwd()
replace_path = "."
return str.replace(current_path, replace_path)
@CatchException
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
from .crazy_utils import get_files_from_everything
success, file_manifest, _ = get_files_from_everything(txt, type='.html')
if not success:
if txt == "": txt = '空空如也的输入栏'
import glob
local_history = "<br/>".join([
"`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
recursive=True
)])
chatbot.append([f"正在查找对话历史文件html格式: {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
try:
chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
except:
chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
@CatchException
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
import glob, os
local_history = "<br/>".join([
"`"+hide_cwd(f)+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
)])
for f in glob.glob(f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True):
os.remove(f)
chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
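A design note on this new module: the HTML archive doubles as both the human-readable export and the persistence format, so `read_file_to_chat` recovers state purely by parsing the `QaBox`/`historyBox` div classes written by `write_chat_to_file`; renaming those classes would silently break old archives. A standalone sketch of that round-trip (requires beautifulsoup4 and lxml, the same dependencies as above):

```python
# Standalone sketch of the QaBox/historyBox parsing that read_file_to_chat relies on.
from bs4 import BeautifulSoup

html = '''
<div class="QaBox"><div class="Question">你好</div><hr><div class="Answer">Hello</div></div>
<div class="historyBox"><div class="entry">你好</div></div>
'''
soup = BeautifulSoup(html, 'lxml')
chat = [[box.find("div", class_="Question").get_text(), box.find("div", class_="Answer").get_text()]
        for box in soup.find_all("div", class_="QaBox")]
history = [box.find("div", class_="entry").get_text() for box in soup.find_all("div", class_="historyBox")]
print(chat, history)  # [['你好', 'Hello']] ['你好']
```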

查看文件

@@ -0,0 +1,122 @@
from toolbox import CatchException, update_ui
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
import requests
from bs4 import BeautifulSoup
from request_llms.bridge_all import model_info
import urllib.request
from functools import lru_cache
@lru_cache
def get_auth_ip():
try:
external_ip = urllib.request.urlopen('https://v4.ident.me/').read().decode('utf8')
return external_ip
except:
return '114.114.114.114'
def searxng_request(query, proxies):
url = 'https://cloud-1.agent-matrix.com/' # 请替换为实际的API URL
params = {
'q': query, # 搜索查询
'format': 'json', # 输出格式为JSON
'language': 'zh', # 搜索语言
}
headers = {
'Accept-Language': 'zh-CN,zh;q=0.9',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
'X-Forwarded-For': get_auth_ip(),
'X-Real-IP': get_auth_ip()
}
results = []
response = requests.post(url, params=params, headers=headers, proxies=proxies)
if response.status_code == 200:
json_result = response.json()
for result in json_result['results']:
item = {
"title": result["title"],
"content": result["content"],
"link": result["url"],
}
results.append(item)
return results
else:
raise ValueError("搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))
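# Usage sketch (assumes the hosted searxng endpoint above is reachable; self-hosters
# would point `url` at their own instance):
#     proxies = None  # or get_conf('proxies') when running behind a proxy
#     for item in searxng_request("mermaid flowchart syntax", proxies)[:3]:
#         print(item["title"], item["link"])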
def scrape_text(url, proxies) -> str:
"""Scrape text from a webpage
Args:
url (str): The URL to scrape text from
Returns:
str: The scraped text
"""
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
'Content-Type': 'text/plain',
}
try:
response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
except:
return "无法连接到该网页"
soup = BeautifulSoup(response.text, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = "\n".join(chunk for chunk in chunks if chunk)
return text
@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
history = [] # 清空历史,以免输入溢出
chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
"[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
# ------------- < 第1步爬取搜索引擎的结果 > -------------
from toolbox import get_conf
proxies = get_conf('proxies')
urls = searxng_request(txt, proxies)
history = []
if len(urls) == 0:
chatbot.append((f"结论:{txt}",
"[Local Message] 受到google限制,无法从google获取信息"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
return
# ------------- < 第2步依次访问网页 > -------------
max_search_result = 5 # 最多收纳多少个网页的结果
for index, url in enumerate(urls[:max_search_result]):
res = scrape_text(url['link'], proxies)
history.extend([f"{index}份搜索结果:", res])
chatbot.append([f"{index}份搜索结果:", res[:500]+"......"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
# ------------- < 第3步ChatGPT综合 > -------------
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
inputs=i_say,
history=history,
max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
)
chatbot[-1] = (i_say, gpt_say)
history.append(i_say);history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
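Condensing the three stages of this plugin into plain calls, outside the chatbot/UI machinery (illustrative; the 3/4 clipping budget deliberately leaves the remaining quarter of the context window for the model's answer):

```python
# Illustrative end-to-end sketch of the plugin's three stages.
proxies = None                                          # or get_conf('proxies')
results = searxng_request("what is mermaid", proxies)   # stage 1: search
history = []
for i, item in enumerate(results[:5]):                  # stage 2: scrape the top hits
    history.extend([f"第{i}份搜索结果:", scrape_text(item['link'], proxies)])
# stage 3: clip history to ~3/4 of the model's context window with input_clipping,
# then ask the LLM via request_gpt_model_in_new_thread_with_ui_alive, as done above.
```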

查看文件

@@ -0,0 +1,548 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_upload, map_file_to_sha256
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, json, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
"""
Generate prompts and system prompts based on the mode for proofreading or translating.
Args:
- pfg: Proofreader or Translator instance.
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
Returns:
- inputs_array: A list of strings containing prompts for users to respond to.
- sys_prompt_array: A list of strings containing prompts for system prompts.
"""
n_split = len(pfg.sp_file_contents)
if mode == 'proofread_en':
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
r"Answer me only with the revised text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
elif mode == 'translate_zh':
inputs_array = [
r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
r"Answer me only with the translated text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
else:
assert False, "未知指令"
return inputs_array, sys_prompt_array
def desend_to_extracted_folder_if_exist(project_folder):
"""
Descend into the extracted folder if it exists, otherwise return the original folder.
Args:
- project_folder: A string specifying the folder path.
Returns:
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
"""
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
if len(maybe_dir) == 0: return project_folder
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
return project_folder
def move_project(project_folder, arxiv_id=None):
"""
Create a new work folder and copy the project folder to it.
Args:
- project_folder: A string specifying the folder path of the project.
Returns:
- A string specifying the path to the new work folder.
"""
import shutil, time
time.sleep(2) # avoid time string conflict
if arxiv_id is not None:
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
else:
new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
try:
shutil.rmtree(new_workfolder)
except:
pass
# align subfolder if there is a folder wrapper
items = glob.glob(pj(project_folder, '*'))
items = [item for item in items if os.path.basename(item) != '__MACOSX']
if len(glob.glob(pj(project_folder, '*.tex'))) == 0 and len(items) == 1:
if os.path.isdir(items[0]): project_folder = items[0]
shutil.copytree(src=project_folder, dst=new_workfolder)
return new_workfolder
def arxiv_download(chatbot, history, txt, allow_cache=True):
def check_cached_translation_pdf(arxiv_id):
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
if not os.path.exists(translation_dir):
os.makedirs(translation_dir)
target_file = pj(translation_dir, 'translate_zh.pdf')
if os.path.exists(target_file):
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
target_file_compare = pj(translation_dir, 'comparison.pdf')
if os.path.exists(target_file_compare):
promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
return target_file
return False
def is_float(s):
try:
float(s)
return True
except ValueError:
return False
if txt.startswith('https://arxiv.org/pdf/'):
arxiv_id = txt.split('/')[-1] # 2402.14207v2.pdf
txt = arxiv_id.split('v')[0] # 2402.14207
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt.strip()
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt[:10]
if not txt.startswith('https://arxiv.org'):
return txt, None # 是本地文件,跳过下载
# <-------------- inspect format ------------->
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
url_ = txt # https://arxiv.org/abs/1707.06690
if not txt.startswith('https://arxiv.org/abs/'):
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
return msg, None
# <-------------- set format ------------->
arxiv_id = url_.split('/abs/')[-1]
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
url_tar = url_.replace('/abs/', '/e-print/')
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
dst = pj(translation_dir, arxiv_id + '.tar')
if os.path.exists(dst):
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
else:
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
with open(dst, 'wb+') as f:
f.write(r.content)
# <-------------- extract file ------------->
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
from toolbox import extract_archive
extract_archive(file_path=dst, dest_dir=extract_dst)
return extract_dst, arxiv_id
def pdf2tex_project(pdf_file_path, plugin_kwargs):
if plugin_kwargs["method"] == "MATHPIX":
# Mathpix API credentials
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
headers = {"app_id": app_id, "app_key": app_key}
# Step 1: Send PDF file for processing
options = {
"conversion_formats": {"tex.zip": True},
"math_inline_delimiters": ["$", "$"],
"rm_spaces": True
}
response = requests.post(url="https://api.mathpix.com/v3/pdf",
headers=headers,
data={"options_json": json.dumps(options)},
files={"file": open(pdf_file_path, "rb")})
if response.ok:
pdf_id = response.json()["pdf_id"]
print(f"PDF processing initiated. PDF ID: {pdf_id}")
# Step 2: Check processing status
while True:
conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
conversion_data = conversion_response.json()
if conversion_data["status"] == "completed":
print("PDF processing completed.")
break
elif conversion_data["status"] == "error":
print("Error occurred during processing.")
else:
print(f"Processing status: {conversion_data['status']}")
time.sleep(5) # wait for a few seconds before checking again
# Step 3: Save results to local files
output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
if not os.path.exists(output_dir):
os.makedirs(output_dir)
url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
response = requests.get(url, headers=headers)
file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
output_name = f"{file_name_wo_dot}.tex.zip"
output_path = os.path.join(output_dir, output_name)
with open(output_path, "wb") as output_file:
output_file.write(response.content)
print(f"tex.zip file saved at: {output_path}")
import zipfile
unzip_dir = os.path.join(output_dir, file_name_wo_dot)
with zipfile.ZipFile(output_path, 'r') as zip_ref:
zip_ref.extractall(unzip_dir)
return unzip_dir
else:
print(f"Error sending PDF for processing. Status code: {response.status_code}")
return None
else:
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_DOC2X_转Latex
unzip_dir = 解析PDF_DOC2X_转Latex(pdf_file_path)
return unzip_dir
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append(["函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder, arxiv_id=None)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='proofread_en',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_proofread_en',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history);
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+Conversation_To_File进行反馈 ...'))
yield from update_ui(chatbot=chatbot, history=history);
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
if no_cache: more_req = more_req[len("--no-cache"):].strip() # 去掉"--no-cache"前缀,剩余部分作为额外翻译要求
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
try:
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
except tarfile.ReadError as e:
yield from update_ui_lastest_msg(
"无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
chatbot=chatbot, history=history)
return
if txt.endswith('.pdf'):
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现已经存在翻译好的PDF文档")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder, arxiv_id)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history);
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
yield from update_ui(chatbot=chatbot, history=history);
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
if no_cache: more_req = more_req[len("--no-cache"):].strip() # 去掉"--no-cache"前缀,剩余部分作为额外翻译要求
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if len(file_manifest) != 1:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if plugin_kwargs.get("method", "") == 'MATHPIX':
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
if len(app_id) == 0 or len(app_key) == 0:
report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if plugin_kwargs.get("method", "") == 'DOC2X':
app_id, app_key = "", ""
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
if len(DOC2X_API_KEY) == 0:
report_exception(chatbot, history, a="缺失 DOC2X_API_KEY。", b=f"请配置 DOC2X_API_KEY")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
hash_tag = map_file_to_sha256(file_manifest[0])
# # <-------------- check repeated pdf ------------->
# chatbot.append([f"检查PDF是否被重复上传", "正在检查..."])
# yield from update_ui(chatbot=chatbot, history=history)
# repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)
# if repeat:
# yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
# try:
# translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
# promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
# comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
# promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)
# zip_res = zip_result(project_folder)
# promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# return
# except:
# report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现重复上传,但是无法找到相关文件")
# yield from update_ui(chatbot=chatbot, history=history)
# else:
# yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)
# <-------------- convert pdf into tex ------------->
chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
yield from update_ui(chatbot=chatbot, history=history)
project_folder = pdf2tex_project(file_manifest[0], plugin_kwargs)
if project_folder is None:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"PDF转换为tex项目失败")
yield from update_ui(chatbot=chatbot, history=history)
return False
# <-------------- translate latex file into Chinese ------------->
yield from update_ui_lastest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder)
# <-------------- set a hash tag for repeat-checking ------------->
    with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
        f.write(hash_tag)  # with语句退出时自动关闭文件,无需手动close
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
yield from update_ui_lastest_msg("正在将翻译好的项目tex项目编译为PDF...", chatbot=chatbot, history=history)
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux, 请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
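
The commented-out block and the `.tag` file written above implement duplicate-upload detection: the uploaded PDF is hashed (via `map_file_to_sha256` in the source), the hash is dropped into the work folder as a tag file, and a later upload can search for that tag to reuse earlier results. A minimal self-contained sketch of the same idea using only the standard library (the helper names below are illustrative, not this repo's API):

```python
import glob, hashlib, os

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    # stream the file so large PDFs are not loaded into memory at once
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def find_previous_workfolder(upload_path: str, log_root: str):
    # a repeated upload is one whose hash tag already exists in some work folder
    tag = file_sha256(upload_path)
    hits = glob.glob(os.path.join(log_root, '**', tag + '.tag'), recursive=True)
    return os.path.dirname(hits[0]) if hits else None
```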

查看文件

@@ -0,0 +1,78 @@
from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF, PDF翻译中文并重新编译PDF
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
class Arxiv_Localize(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会在不同的线程中执行,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="ArxivID", description="输入Arxiv的ID或者网址", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"advanced_arg":
ArgProperty(title="额外的翻译提示词",
description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"allow_cache":
ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
allow_cache = plugin_kwargs["allow_cache"]
advanced_arg = plugin_kwargs["advanced_arg"]
if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
class PDF_Localize(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会在不同的线程中执行,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
"""
gui_definition = {
"main_input":
ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"advanced_arg":
ArgProperty(title="额外的翻译提示词",
description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"method":
ArgProperty(title="采用哪种方法执行转换", options=["MATHPIX", "DOC2X"], default_value="DOC2X", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
yield from PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
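
For readers unfamiliar with the plugin template: `define_arg_selection_menu` returns one JSON-serialized property per widget, and the frontend merges the user's selections into `plugin_kwargs` before calling `execute`. A toy stand-in showing that round trip (simplified; the real `ArgProperty` is a pydantic model serialized with `model_dump_json`, and the real `execute` receives the full llm/chatbot context):

```python
import json

class ToyPlugin:
    def define_arg_selection_menu(self):
        # each widget is serialized to JSON, as with ArgProperty(...).model_dump_json()
        return {"method": json.dumps({"title": "方法", "options": ["MATHPIX", "DOC2X"],
                                      "default_value": "DOC2X", "type": "dropdown"})}

    def execute(self, txt, plugin_kwargs):
        yield f"method={plugin_kwargs['method']} on {txt!r}"

def run(plugin, txt):
    # the frontend renders the menu, then merges user selections into plugin_kwargs
    kwargs = {k: json.loads(v)["default_value"]
              for k, v in plugin.define_arg_selection_menu().items()}
    yield from plugin.execute(txt, plugin_kwargs=kwargs)

print(list(run(ToyPlugin(), "2303.08774")))  # ["method=DOC2X on '2303.08774'"]
```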

查看文件

@@ -46,7 +46,7 @@ class PaperFileGroup():
            manifest.append(path + '.polish.tex')
            f.write(res)
        return manifest

    def zip_result(self):
        import os, time
        folder = os.path.dirname(self.file_paths[0])
@@ -59,7 +59,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    pfg = PaperFileGroup()
    for index, fp in enumerate(file_manifest):
@@ -73,31 +73,31 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.file_paths.append(fp)
        pfg.file_contents.append(clean_tex_content)

    # <-------- 拆分过长的latex文件 ---------->
    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- 多线程润色开始 ---------->
    if language == 'en':
        if mode == 'polish':
-           inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " +
-                           "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
+           inputs_array = [r"Below is a section from an academic paper, polish this section to meet the academic standard, " +
+                           r"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        else:
            inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                            r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                            r"Answer me only with the revised text:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif language == 'zh':
        if mode == 'polish':
-           inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式" +
+           inputs_array = [r"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        else:
-           inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式" +
+           inputs_array = [r"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
@@ -113,7 +113,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        scroller_max_len = 80
    )

    # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ---------->
    try:
        pfg.sp_file_result = []
        for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
@@ -124,7 +124,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
    except:
        print(trimmed_format_exc())

    # <-------- 整理结果,退出 ---------->
    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)
@@ -135,11 +135,11 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
@CatchException
- def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
-       "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。注意,此插件不调用Latex,如果有Latex环境,请使用Latex英文纠错+高亮插件"])
+       "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。注意,此插件不调用Latex,如果有Latex环境,请使用Latex英文纠错+高亮修正位置(需Latex)插件"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
@@ -173,7 +173,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
@CatchException
- def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -209,7 +209,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
@CatchException
- def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",

查看文件

@@ -39,7 +39,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
    import time, os, re
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    pfg = PaperFileGroup()
    for index, fp in enumerate(file_manifest):
@@ -53,11 +53,11 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.file_paths.append(fp)
        pfg.file_contents.append(clean_tex_content)

    # <-------- 拆分过长的latex文件 ---------->
    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- 抽取摘要 ---------->
    # if language == 'en':
    #     abs_extract_inputs = f"Please write an abstract for this paper"
@@ -70,14 +70,14 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
    #     sys_prompt="Your job is to collect information from materials。",
    # )

    # <-------- 多线程润色开始 ---------->
    if language == 'en->zh':
        inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
    elif language == 'zh->en':
        inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
@@ -93,7 +93,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        scroller_max_len = 80
    )

    # <-------- 整理结果,退出 ---------->
    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
    res = write_history_to_file(gpt_response_collection, create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)
@@ -106,7 +106,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
@CatchException
- def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -143,7 +143,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
@CatchException
- def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",

查看文件

@@ -1,306 +0,0 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
"""
Generate prompts and system prompts based on the mode for proofreading or translating.
Args:
- pfg: Proofreader or Translator instance.
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
Returns:
- inputs_array: A list of strings containing prompts for users to respond to.
- sys_prompt_array: A list of strings containing prompts for system prompts.
"""
n_split = len(pfg.sp_file_contents)
if mode == 'proofread_en':
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
r"Answer me only with the revised text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
elif mode == 'translate_zh':
inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
r"Answer me only with the translated text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
else:
assert False, "未知指令"
return inputs_array, sys_prompt_array
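A quick sketch of how `switch_prompt` was consumed: the plugin bound `more_requirement` with `functools.partial` (as in the main programs below) and called the result once per decomposition step. `_FakePFG` here is a stand-in for the real `PaperFileGroup`-like object, used only to make the snippet self-contained:

```python
from functools import partial

class _FakePFG:
    sp_file_contents = ["\\section{Intro} Hello.", "\\cite{x} World."]

_switch = partial(switch_prompt, more_requirement='If "agent" appears, translate it to "智能体". ')
inputs, sys_prompts = _switch(_FakePFG(), mode='translate_zh')
assert len(inputs) == len(sys_prompts) == 2
```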
def desend_to_extracted_folder_if_exist(project_folder):
"""
Descend into the extracted folder if it exists, otherwise return the original folder.
Args:
- project_folder: A string specifying the folder path.
Returns:
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
"""
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
if len(maybe_dir) == 0: return project_folder
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
return project_folder
def move_project(project_folder, arxiv_id=None):
"""
Create a new work folder and copy the project folder to it.
Args:
- project_folder: A string specifying the folder path of the project.
Returns:
- A string specifying the path to the new work folder.
"""
import shutil, time
time.sleep(2) # avoid time string conflict
if arxiv_id is not None:
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
else:
new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
try:
shutil.rmtree(new_workfolder)
except:
pass
# align subfolder if there is a folder wrapper
items = glob.glob(pj(project_folder,'*'))
items = [item for item in items if os.path.basename(item)!='__MACOSX']
if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
if os.path.isdir(items[0]): project_folder = items[0]
shutil.copytree(src=project_folder, dst=new_workfolder)
return new_workfolder
def arxiv_download(chatbot, history, txt, allow_cache=True):
def check_cached_translation_pdf(arxiv_id):
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
if not os.path.exists(translation_dir):
os.makedirs(translation_dir)
target_file = pj(translation_dir, 'translate_zh.pdf')
if os.path.exists(target_file):
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
target_file_compare = pj(translation_dir, 'comparison.pdf')
if os.path.exists(target_file_compare):
promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
return target_file
return False
def is_float(s):
try:
float(s)
return True
except ValueError:
return False
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt.strip()
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt[:10]
if not txt.startswith('https://arxiv.org'):
return txt, None
# <-------------- inspect format ------------->
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
url_ = txt # https://arxiv.org/abs/1707.06690
if not txt.startswith('https://arxiv.org/abs/'):
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
return msg, None
# <-------------- set format ------------->
arxiv_id = url_.split('/abs/')[-1]
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
url_tar = url_.replace('/abs/', '/e-print/')
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
dst = pj(translation_dir, arxiv_id+'.tar')
if os.path.exists(dst):
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
else:
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
with open(dst, 'wb+') as f:
f.write(r.content)
# <-------------- extract file ------------->
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
from toolbox import extract_archive
extract_archive(file_path=dst, dest_dir=extract_dst)
return extract_dst, arxiv_id
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
# <-------------- information about this plugin ------------->
chatbot.append([ "函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([ f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id=None)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].lstrip()  # 去除--no-cache前缀,保留其余自定义提示词
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([ f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
if txt.endswith('.pdf'):
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success

查看文件

@@ -1,5 +1,5 @@
- import glob, time, os, re, logging
+ import glob, shutil, os, re, logging
- from toolbox import update_ui, trimmed_format_exc, gen_time_str, disable_auto_promotion
+ from toolbox import update_ui, trimmed_format_exc, gen_time_str
from toolbox import CatchException, report_exception, get_log_folder
from toolbox import write_history_to_file, promote_file_to_downloadzone
fast_debug = False
@@ -18,7 +18,7 @@ class PaperFileGroup():
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num

-   def run_file_split(self, max_token_limit=1900):
+   def run_file_split(self, max_token_limit=2048):
        """
        将长文本分离开来
        """
@@ -53,7 +53,7 @@ class PaperFileGroup():
def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Markdown文件,删除其中的所有注释 ---------->
    pfg = PaperFileGroup()
    for index, fp in enumerate(file_manifest):
@@ -63,26 +63,26 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.file_paths.append(fp)
        pfg.file_contents.append(file_content)

    # <-------- 拆分过长的Markdown文件 ---------->
-   pfg.run_file_split(max_token_limit=1500)
+   pfg.run_file_split(max_token_limit=2048)
    n_split = len(pfg.sp_file_contents)

    # <-------- 多线程翻译开始 ---------->
    if language == 'en->zh':
-       inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" +
+       inputs_array = ["This is a Markdown file, translate it into Chinese, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-       sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+       sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
    elif language == 'zh->en':
-       inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" +
+       inputs_array = [f"This is a Markdown file, translate it into English, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-       sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+       sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
    else:
-       inputs_array = [f"This is a Markdown file, translate it into {language}, do not modify any existing Markdown commands, only answer me with translated results:" +
+       inputs_array = [f"This is a Markdown file, translate it into {language}, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-       sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+       sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]

    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        inputs_array=inputs_array,
@@ -99,11 +99,16 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
            pfg.sp_file_result.append(gpt_say)
        pfg.merge_result()
-       pfg.write_result(language)
+       output_file_arr = pfg.write_result(language)
+       for output_file in output_file_arr:
+           promote_file_to_downloadzone(output_file, chatbot=chatbot)
+           if 'markdown_expected_output_path' in plugin_kwargs:
+               expected_f_name = plugin_kwargs['markdown_expected_output_path']
+               shutil.copyfile(output_file, expected_f_name)
    except:
        logging.error(trimmed_format_exc())

    # <-------- 整理结果,退出 ---------->
    create_report_file_name = gen_time_str() + f"-chatgpt.md"
    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)
@@ -153,13 +158,12 @@ def get_files_from_everything(txt, preference=''):
@CatchException
- def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-   disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
@@ -193,13 +197,12 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
@CatchException
- def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-   disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
@@ -226,13 +229,12 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
@CatchException
- def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-   disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
@@ -255,7 +257,7 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history,
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    language = plugin_kwargs.get("advanced_arg", 'Chinese')
    yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language=language)

查看文件

@@ -0,0 +1,83 @@
from toolbox import CatchException, check_packages, get_conf
from toolbox import update_ui, update_ui_lastest_msg, disable_auto_promotion
from toolbox import trimmed_format_exc_markdown
from crazy_functions.crazy_utils import get_files_from_everything
from crazy_functions.pdf_fns.parse_pdf import get_avail_grobid_url
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_基于DOC2X
from crazy_functions.pdf_fns.parse_pdf_legacy import 解析PDF_简单拆解
from crazy_functions.pdf_fns.parse_pdf_grobid import 解析PDF_基于GROBID
from shared_utils.colorful import *
@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
disable_auto_promotion(chatbot)
# 基本信息:功能、贡献者
chatbot.append([None, "插件功能批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
check_packages(["fitz", "tiktoken", "scipdf"])
except:
chatbot.append([None, f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 清空历史,以免输入溢出
history = []
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
# 检测输入参数,如没有给定输入参数,直接退出
if (not success) and txt == "": txt = '空空如也的输入栏。提示请先上传文件把PDF文件拖入对话'
# 如果没找到任何文件
if len(file_manifest) == 0:
chatbot.append([None, f"找不到任何.pdf拓展名的文件: {txt}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始正式执行任务
method = plugin_kwargs.get("pdf_parse_method", None)
if method == "DOC2X":
# ------- 第一种方法,效果最好,但是需要DOC2X服务 -------
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
if len(DOC2X_API_KEY) != 0:
try:
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,现在将执行效果稍差的旧版代码。{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
if method == "GROBID":
# ------- 第二种方法,效果次优 -------
grobid_url = get_avail_grobid_url()
if grobid_url is not None:
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
return
if method == "ClASSIC":
# ------- 第三种方法,早期代码,效果不理想 -------
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
return
if method is None:
# ------- 以上三种方法都试一遍 -------
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
if len(DOC2X_API_KEY) != 0:
try:
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,正在尝试GROBID。{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
grobid_url = get_avail_grobid_url()
if grobid_url is not None:
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
return
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
return
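
The dispatch above is a try-in-order fallback: DOC2X if configured, then GROBID, then the legacy splitter. The generic pattern, as a hedged sketch (names below are hypothetical, not this repo's API):

```python
def parse_with_fallback(pdf_path, parsers):
    """parsers: list of (name, callable) tried in order; first success wins."""
    errors = []
    for name, fn in parsers:
        try:
            return name, fn(pdf_path)
        except Exception as e:  # record the failure and fall through to the next parser
            errors.append(f"{name}: {e}")
    raise RuntimeError("all parsers failed:\n" + "\n".join(errors))
```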

查看文件

@@ -0,0 +1,33 @@
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from .PDF_Translate import 批量翻译PDF文档
class PDF_Tran(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会在不同的线程中执行,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
"""
gui_definition = {
"main_input":
ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"additional_prompt":
ArgProperty(title="额外提示词", description="例如:对专有名词、翻译语气等方面的要求", default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"pdf_parse_method":
ArgProperty(title="PDF解析方法", options=["DOC2X", "GROBID", "ClASSIC"], description="", default_value="GROBID", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
main_input = plugin_kwargs["main_input"]
additional_prompt = plugin_kwargs["additional_prompt"]
pdf_parse_method = plugin_kwargs["pdf_parse_method"]
yield from 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

查看文件

@@ -35,7 +35,11 @@ def gpt_academic_generate_oai_reply(
class AutoGenGeneral(PluginMultiprocessManager):
    def gpt_academic_print_override(self, user_proxy, message, sender):
        # ⭐⭐ run in subprocess
-       self.child_conn.send(PipeCom("show", sender.name + "\n\n---\n\n" + message["content"]))
+       try:
+           print_msg = sender.name + "\n\n---\n\n" + message["content"]
+       except:
+           print_msg = sender.name + "\n\n---\n\n" + message
+       self.child_conn.send(PipeCom("show", print_msg))

    def gpt_academic_get_human_input(self, user_proxy, message):
        # ⭐⭐ run in subprocess
@@ -62,33 +66,33 @@ class AutoGenGeneral(PluginMultiprocessManager):
    def exe_autogen(self, input):
        # ⭐⭐ run in subprocess
        input = input.content
-       with ProxyNetworkActivate("AutoGen"):
        code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
        agents = self.define_agents()
        user_proxy = None
        assistant = None
        for agent_kwargs in agents:
            agent_cls = agent_kwargs.pop('cls')
            kwargs = {
                'llm_config':self.llm_kwargs,
                'code_execution_config':code_execution_config
            }
            kwargs.update(agent_kwargs)
            agent_handle = agent_cls(**kwargs)
            agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
            for d in agent_handle._reply_func_list:
                if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
                    d['reply_func'] = gpt_academic_generate_oai_reply
            if agent_kwargs['name'] == 'user_proxy':
                agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
                user_proxy = agent_handle
            if agent_kwargs['name'] == 'assistant': assistant = agent_handle
        try:
            if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
+           with ProxyNetworkActivate("AutoGen"):
                user_proxy.initiate_chat(assistant, message=input)
        except Exception as e:
            tb_str = '```\n' + trimmed_format_exc() + '```'
            self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))

    def subprocess_worker(self, child_conn):
        # ⭐⭐ run in subprocess
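
The hunk above leans on per-instance attribute override: a bound method is shadowed by a lambda so that agent output and reply generation are redirected into the plugin's pipe. A minimal stand-alone illustration of that Python pattern (toy class, not autogen):

```python
class Agent:
    def _print_received_message(self, message, sender):
        print(f"{sender}: {message}")

captured = []
agent = Agent()
# shadow the bound method on this instance only; other Agent objects are unaffected
agent._print_received_message = lambda message, sender: captured.append((sender, message))
agent._print_received_message("hi", "assistant")  # intercepted, nothing is printed
assert captured == [("assistant", "hi")]
```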

查看文件

@@ -9,7 +9,7 @@ class PipeCom:
class PluginMultiprocessManager:
-   def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+   def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # ⭐ run in main process
        self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
        self.previous_work_dir_files = {}
@@ -18,7 +18,7 @@ class PluginMultiprocessManager:
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
-       # self.web_port = web_port
+       # self.user_request = user_request
        self.alive = True
        self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
        self.last_user_input = ""
@@ -72,7 +72,7 @@ class PluginMultiprocessManager:
        if file_type.lower() in ['png', 'jpg']:
            image_path = os.path.abspath(fp)
            self.chatbot.append([
                '检测到新生图像:',
                f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
            ])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
@@ -114,21 +114,21 @@ class PluginMultiprocessManager:
        self.cnt = 1
        self.parent_conn = self.launch_subprocess_with_pipe()  # ⭐⭐⭐
        repeated, cmd_to_autogen = self.send_command(txt)
        if txt == 'exit':
            self.chatbot.append([f"结束", "结束信号已明确,终止AutoGen程序。"])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
            self.terminate()
            return "terminate"

        # patience = 10
        while True:
            time.sleep(0.5)
            if not self.alive:
                # the heartbeat watchdog might have it killed
                self.terminate()
                return "terminate"
            if self.parent_conn.poll():
                self.feed_heartbeat_watchdog()
                if "[GPT-Academic] 等待中" in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1)  # remove the last line
@@ -152,8 +152,8 @@ class PluginMultiprocessManager:
                yield from update_ui(chatbot=self.chatbot, history=self.history)
            if msg.cmd == "interact":
                yield from self.overwatch_workdir_file_change()
                self.chatbot.append([f"程序抵达用户反馈节点.", msg.content +
                                     "\n\n等待您的进一步指令." +
                                     "\n\n(1) 一般情况下您不需要说什么, 清空输入区, 然后直接点击“提交”以继续. " +
                                     "\n\n(2) 如果您需要补充些什么, 输入要反馈的内容, 直接点击“提交”以继续. " +
                                     "\n\n(3) 如果您想终止程序, 输入exit, 直接点击“提交”以终止AutoGen并解锁. "

查看文件

@@ -8,7 +8,7 @@ class WatchDog():
        self.interval = interval
        self.msg = msg
        self.kill_dog = False

    def watch(self):
        while True:
            if self.kill_dog: break

查看文件

@@ -32,7 +32,7 @@ def string_to_options(arguments):
    return args

@CatchException
- def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,13 +40,13 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
-   web_port        当前软件运行的端口号
+   user_request    当前用户的请求信息(IP地址等)
    """
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    args = plugin_kwargs.get("advanced_arg", None)
    if args is None:
        chatbot.append(("没给定指令", "退出"))
        yield from update_ui(chatbot=chatbot, history=history); return
    else:
@@ -69,7 +69,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
        sys_prompt_array=[arguments.system_prompt for _ in (batch)],
        max_workers=10  # OpenAI所允许的最大并行过载
    )
    with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
        for b, r in zip(batch, res[1::2]):
            f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')
@@ -80,7 +80,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
@CatchException
- def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -88,19 +88,19 @@ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
-   web_port        当前软件运行的端口号
+   user_request    当前用户的请求信息(IP地址等)
    """
    import subprocess
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    args = plugin_kwargs.get("advanced_arg", None)
    if args is None:
        chatbot.append(("没给定指令", "退出"))
        yield from update_ui(chatbot=chatbot, history=history); return
    else:
        arguments = string_to_options(arguments=args)
        pre_seq_len = arguments.pre_seq_len  # 128

查看文件

@@ -12,7 +12,7 @@ def input_clipping(inputs, history, max_token_limit):
        mode = 'input-and-history'
        # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史
        input_token_num = get_token_num(inputs)
        if input_token_num < max_token_limit//2:
            mode = 'only-history'
            max_token_limit = max_token_limit - input_token_num
@@ -21,7 +21,7 @@ def input_clipping(inputs, history, max_token_limit):
    n_token = get_token_num('\n'.join(everything))
    everything_token = [get_token_num(e) for e in everything]
    delta = max(everything_token) // 16  # 截断时的颗粒度
    while n_token > max_token_limit:
        where = np.argmax(everything_token)
        encoded = enc.encode(everything[where], disallowed_special=())
@@ -38,9 +38,9 @@ def input_clipping(inputs, history, max_token_limit):
    return inputs, history
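
`input_clipping` works by repeatedly shrinking whichever fragment currently holds the most tokens, in steps of 1/16 of the largest fragment, until the whole prompt fits the budget. A simplified sketch that counts characters instead of tiktoken tokens (an assumption made for brevity; the real code encodes and trims with `enc`):

```python
import numpy as np

def clip_to_budget(chunks, max_units):
    sizes = [len(c) for c in chunks]            # stand-in for per-chunk token counts
    delta = max(max(sizes) // 16, 1)            # truncation granularity
    while sum(sizes) > max_units:
        where = int(np.argmax(sizes))           # always shrink the biggest chunk
        chunks[where] = chunks[where][:-delta]  # the real code trims at token level
        sizes[where] = len(chunks[where])
    return chunks

print(clip_to_budget(["a" * 100, "b" * 30], max_units=50))  # biggest chunk shrinks first
```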
def request_gpt_model_in_new_thread_with_ui_alive(
        inputs, inputs_show_user, llm_kwargs,
        chatbot, history, sys_prompt, refresh_interval=0.2,
        handle_token_exceed=True,
        retry_times_at_unknown_error=2,
        ):
    """
@@ -77,7 +77,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
    exceeded_cnt = 0
    while True:
        # watchdog error
        if len(mutable) >= 2 and (time.time()-mutable[1]) > watch_dog_patience:
            raise RuntimeError("检测到程序终止。")
        try:
            # 【第一种情况】:顺利完成
@@ -135,17 +135,29 @@ def request_gpt_model_in_new_thread_with_ui_alive(
    yield from update_ui(chatbot=chatbot, history=[])  # 如果最后成功了,则删除报错信息
    return final_result

- def can_multi_process(llm):
-     if llm.startswith('gpt-'): return True
-     if llm.startswith('api2d-'): return True
-     if llm.startswith('azure-'): return True
-     if llm.startswith('spark'): return True
-     if llm.startswith('zhipuai'): return True
-     return False
+ def can_multi_process(llm) -> bool:
+     from request_llms.bridge_all import model_info
+
+     def default_condition(llm) -> bool:
+         # legacy condition
+         if llm.startswith('gpt-'): return True
+         if llm.startswith('api2d-'): return True
+         if llm.startswith('azure-'): return True
+         if llm.startswith('spark'): return True
+         if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
+         return False
+
+     if llm in model_info:
+         if 'can_multi_thread' in model_info[llm]:
+             return model_info[llm]['can_multi_thread']
+         else:
+             return default_condition(llm)
+     else:
+         return default_condition(llm)
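
After this rewrite, whether a model may be called concurrently is data-driven: `model_info[llm]['can_multi_thread']` wins when present, and the old prefix heuristic is only a fallback. The same lookup pattern, with a local stand-in registry (since `request_llms.bridge_all` is repo-specific):

```python
model_info = {  # stand-in registry; the real one lives in request_llms.bridge_all
    "chatglm": {"can_multi_thread": False},
    "some-new-api": {"can_multi_thread": True},
}

def can_multi_thread(llm: str) -> bool:
    entry = model_info.get(llm, {})
    if "can_multi_thread" in entry:
        return entry["can_multi_thread"]      # explicit capability flag wins
    # legacy prefix heuristic as fallback
    return llm.startswith(("gpt-", "api2d-", "azure-", "spark", "zhipuai", "glm-"))

assert can_multi_thread("chatglm") is False   # flagged single-threaded
assert can_multi_thread("gpt-4") is True      # falls back to the prefix rule
```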
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_show_user_array, llm_kwargs, inputs_array, inputs_show_user_array, llm_kwargs,
chatbot, history_array, sys_prompt_array, chatbot, history_array, sys_prompt_array,
refresh_interval=0.2, max_workers=-1, scroller_max_len=30, refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
handle_token_exceed=True, show_user_at_complete=False, handle_token_exceed=True, show_user_at_complete=False,
retry_times_at_unknown_error=2, retry_times_at_unknown_error=2,
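For context, the new capability lookup in miniature. The `model_info` dict below is a stand-in for `request_llms.bridge_all.model_info`, and the model entries are hypothetical; only the lookup logic mirrors the diff above.

```python
# Stand-in for request_llms.bridge_all.model_info (hypothetical entries):
model_info = {
    "glm-4": {"can_multi_thread": True},
    "some-local-model": {"can_multi_thread": False},
}

def default_condition(llm: str) -> bool:
    # the legacy prefix whitelist, kept as a fallback
    return llm.startswith(("gpt-", "api2d-", "azure-", "spark", "zhipuai", "glm-"))

def can_multi_process(llm: str) -> bool:
    # an explicit per-model flag wins; otherwise fall back to the legacy rule
    entry = model_info.get(llm, {})
    if "can_multi_thread" in entry:
        return entry["can_multi_thread"]
    return default_condition(llm)

assert can_multi_process("some-local-model") is False  # explicit flag overrides the prefix rule
assert can_multi_process("gpt-4") is True              # legacy fallback still applies
```

The point of the change: a model's registry entry can now opt in or out of the multi-threaded path explicitly instead of relying on name prefixes alone.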
@@ -189,7 +201,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
    # Disable multi-threading for chatglm, which may otherwise cause severe lag
    if not can_multi_process(llm_kwargs['llm_model']):
        max_workers = 1

    executor = ThreadPoolExecutor(max_workers=max_workers)
    n_frag = len(inputs_array)
    # user feedback
@@ -214,7 +226,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        try:
            # [Case 1]: completed smoothly
            gpt_say = predict_no_ui_long_connection(
                inputs=inputs, llm_kwargs=llm_kwargs, history=history,
                sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
            )
            mutable[index][2] = "已成功"
@@ -246,7 +258,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            print(tb_str)
            gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
            if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
            if retry_op > 0:
                retry_op -= 1
                wait = random.randint(5, 20)
                if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
@@ -284,12 +296,11 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        # print something fun on the front end
        for thread_index, _ in enumerate(worker_done):
            print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                replace('\n', '').replace('`', '.').replace(
-                ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
+                replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
            observe_win.append(print_something_really_funny)
        # print something fun on the front end
        stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
                            if not done else f'`{mutable[thread_index][2]}`\n\n'
                            for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
        # print something fun on the front end
        chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
@@ -303,7 +314,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
    for inputs_show_user, f in zip(inputs_show_user_array, futures):
        gpt_res = f.result()
        gpt_response_collection.extend([inputs_show_user, gpt_res])

    # whether to show the results in the UI on completion
    if show_user_at_complete:
        for inputs_show_user, f in zip(inputs_show_user_array, futures):
@@ -338,7 +349,7 @@ def read_and_clean_pdf_text(fp):
    import fitz, copy
    import re
    import numpy as np
-    from colorful import print亮黄, print亮绿
+    from shared_utils.colorful import print亮黄, print亮绿
    fc = 0  # Index 0: text
    fs = 1  # Index 1: font
    fb = 2  # Index 2: box
@@ -353,7 +364,7 @@ def read_and_clean_pdf_text(fp):
            if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
            fsize_statiscs[wtf['size']] += len(wtf['text'])
        return max(fsize_statiscs, key=fsize_statiscs.get)

    def ffsize_same(a,b):
        """
        Whether two extracted font sizes are approximately equal
        """
@@ -389,7 +400,7 @@ def read_and_clean_pdf_text(fp):
        if index == 0:
            page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
                '- ', '') for t in text_areas['blocks'] if 'lines' in t]

    ############################## <Step 2: find the main body font> ##################################
    try:
        fsize_statiscs = {}
@@ -405,7 +416,7 @@ def read_and_clean_pdf_text(fp):
    mega_sec = []
    sec = []
    for index, line in enumerate(meta_line):
        if index == 0:
            sec.append(line[fc])
            continue
        if REMOVE_FOOT_NOTE:
@@ -502,12 +513,12 @@ def get_files_from_everything(txt, type): # type='.md'
    """
    This function collects every file of a given type (e.g. .md) under a path; for a file on the web, it can fetch it too.
    Parameters
    - txt: a path or URL, i.e. the file or folder to search, or a file on the web.
    - type: a string giving the file extension to search for; the default is .md.
    Returns
    - success: a boolean indicating whether the function executed successfully.
    - file_manifest: a list of absolute paths of all files whose names end with the given extension.
    - project_folder: the folder containing the files; for a web file, the path of a temporary folder.
    Detailed comments have been added to this function; please check whether they meet your needs.
    """
@@ -557,7 +568,7 @@ class nougat_interface():
        from toolbox import ProxyNetworkActivate
        logging.info(f'正在执行命令 {command}')
        with ProxyNetworkActivate("Nougat_Download"):
-            process = subprocess.Popen(command, shell=True, cwd=cwd, env=os.environ)
+            process = subprocess.Popen(command, shell=False, cwd=cwd, env=os.environ)
        try:
            stdout, stderr = process.communicate(timeout=timeout)
        except subprocess.TimeoutExpired:
@@ -571,7 +582,7 @@ class nougat_interface():
    def NOUGAT_parse_pdf(self, fp, chatbot, history):
        from toolbox import update_ui_lastest_msg
        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
                                         chatbot=chatbot, history=history, delay=0)
        self.threadLock.acquire()
        import glob, threading, os
@@ -579,9 +590,10 @@ class nougat_interface():
        dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
        os.makedirs(dst)
        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度正在加载NOUGAT... 提示首次运行需要花费较长时间下载NOUGAT参数",
                                         chatbot=chatbot, history=history, delay=0)
-        self.nougat_with_timeout(f'nougat --out "{os.path.abspath(dst)}" "{os.path.abspath(fp)}"', os.getcwd(), timeout=3600)
+        command = ['nougat', '--out', os.path.abspath(dst), os.path.abspath(fp)]
+        self.nougat_with_timeout(command, cwd=os.getcwd(), timeout=3600)
        res = glob.glob(os.path.join(dst,'*.mmd'))
        if len(res) == 0:
            self.threadLock.release()
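The two changes above (shell=False plus an argument list instead of an f-string command) are the standard defense against shell injection through attacker-controlled file names. A minimal self-contained sketch of the pattern, with a harmless POSIX command standing in for the nougat CLI:

```python
import subprocess

def run_with_timeout(command, cwd, timeout):
    # Argument-list + shell=False: each element reaches the program as one
    # literal argv entry, so a name like 'x"; rm -rf ~; "' can never spawn a shell.
    process = subprocess.Popen(command, shell=False, cwd=cwd)
    try:
        process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()         # stop the runaway child
        process.communicate()  # reap it
        raise
    return process.returncode

# hypothetical usage echoing the diff (echo stands in for nougat):
print(run_with_timeout(['echo', 'malicious"; rm -rf ~; ".pdf'], cwd='.', timeout=10))
```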

View File

@@ -0,0 +1,122 @@
import os
from textwrap import indent

class FileNode:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.is_leaf = False
        self.level = 0
        self.parenting_ship = []
        self.comment = ""
        self.comment_maxlen_show = 50

    @staticmethod
    def add_linebreaks_at_spaces(string, interval=10):
        return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))

    def sanitize_comment(self, comment):
        if len(comment) > self.comment_maxlen_show: suf = '...'
        else: suf = ''
        comment = comment[:self.comment_maxlen_show]
        comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('`', '').replace('$', '')
        comment = self.add_linebreaks_at_spaces(comment, 10)
        return '`' + comment + suf + '`'

    def add_file(self, file_path, file_comment):
        directory_names, file_name = os.path.split(file_path)
        current_node = self
        level = 1
        if directory_names == "":
            new_node = FileNode(file_name)
            current_node.children.append(new_node)
            new_node.is_leaf = True
            new_node.comment = self.sanitize_comment(file_comment)
            new_node.level = level
            current_node = new_node
        else:
            dnamesplit = directory_names.split(os.sep)
            for i, directory_name in enumerate(dnamesplit):
                found_child = False
                level += 1
                for child in current_node.children:
                    if child.name == directory_name:
                        current_node = child
                        found_child = True
                        break
                if not found_child:
                    new_node = FileNode(directory_name)
                    current_node.children.append(new_node)
                    new_node.level = level - 1
                    current_node = new_node
            term = FileNode(file_name)
            term.level = level
            term.comment = self.sanitize_comment(file_comment)
            term.is_leaf = True
            current_node.children.append(term)

    def print_files_recursively(self, level=0, code="R0"):
        print(' '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
        for j, child in enumerate(self.children):
            child.print_files_recursively(level=level+1, code=code+str(j))
            self.parenting_ship.extend(child.parenting_ship)
            p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            p2 = """ --> """
            p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
            edge_code = p1 + p2 + p3
            if edge_code in self.parenting_ship:
                continue
            self.parenting_ship.append(edge_code)
        if self.comment != "":
            pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            pc2 = f""" -.-x """
            pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
            edge_code = pc1 + pc2 + pc3
            self.parenting_ship.append(edge_code)


MERMAID_TEMPLATE = r"""
```mermaid
flowchart LR
    %% <gpt_academic_hide_mermaid_code> a special marker used to hide this code block when the mermaid diagram is rendered
    classDef Comment stroke-dasharray: 5 5
    subgraph {graph_name}
{relationship}
    end
```
"""

def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
    # Create the root node
    file_tree_struct = FileNode("root")
    # Build the tree structure
    for file_path, file_comment in zip(file_manifest, file_comments):
        file_tree_struct.add_file(file_path, file_comment)
    file_tree_struct.print_files_recursively()
    cc = "\n".join(file_tree_struct.parenting_ship)
    ccc = indent(cc, prefix=" "*8)
    return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)

if __name__ == "__main__":
    # File manifest
    file_manifest = [
        "cradle_void_terminal.ipynb",
        "tests/test_utils.py",
        "tests/test_plugins.py",
        "tests/test_llms.py",
        "config.py",
        "build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
        "crazy_functions/latex_fns/latex_actions.py",
        "crazy_functions/latex_fns/latex_toolbox.py"
    ]
    file_comments = [
        "根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
        "包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
        "用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
        "包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
        "用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
        "是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
        "用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
        "包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
    ]
    print(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))
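A small usage sketch of the module above. The file names are illustrative, and the exact node codes depend on insertion order; only the shape of the generated edges is taken from the logic above.

```python
# Hypothetical two-file manifest run through the builder above:
manifest = ["config.py", "tests/test_utils.py"]
comments = ["main config", "test helpers"]
print(build_file_tree_mermaid_diagram(manifest, comments, "demo"))
# The flowchart body then contains edges shaped like (not verbatim):
#   R0[["📁root"]] --> R00["🗎config.py"]
#   R0[["📁root"]] --> R01[["📁tests"]]
#   R01[["📁tests"]] --> R010["🗎test_utils.py"]
#   R00["🗎config.py"] -.-x CR00["`main config`"]:::Comment
```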

View File

@@ -8,7 +8,7 @@ import random
class MiniGame_ASCII_Art(GptAcademicGameBaseState):
    def step(self, prompt, chatbot, history):
        if self.step_cnt == 0:
            chatbot.append(["我画你猜(动物)", "请稍等..."])
        else:
            if prompt.strip() == 'exit':

View File

@@ -88,8 +88,8 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
        self.story = []
        chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
        self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动,一起写故事,因此你每次写的故事段落应少于300字结局除外'

    def generate_story_image(self, story_paragraph):
        try:
            from crazy_functions.图片生成 import gen_image
@@ -98,13 +98,13 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            return f'<br/><div align="center"><img src="file={image_path}"></div>'
        except:
            return ''

    def step(self, prompt, chatbot, history):
        """
        First, handle special cases such as game initialization
        """
        if self.step_cnt == 0:
            self.begin_game_step_0(prompt, chatbot, history)
            self.lock_plugin(chatbot)
            self.cur_task = 'head_start'
@@ -132,7 +132,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            inputs_ = prompts_hs.format(headstart=self.headstart)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, '故事开头', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
            self.story.append(story_paragraph)
@@ -147,7 +147,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
            history_ = []
            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
                chatbot,
                history_,
                self.sys_prompt_
@@ -166,7 +166,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
            self.story.append(story_paragraph)
@@ -181,10 +181,10 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
            history_ = []
            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_,
                '请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
                chatbot,
                history_,
                self.sys_prompt_
            )
            self.cur_task = 'user_choice'
@@ -200,7 +200,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
            inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
        # # illustration

View File

@@ -5,7 +5,7 @@ def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```"  # regex pattern to match code blocks
    matches = re.findall(pattern, reply)  # find all code blocks in text
    if len(matches) == 1:
        return "```" + matches[0] + "```"  # code block
    raise RuntimeError("GPT is not generating proper code.")
@@ -13,10 +13,10 @@ def is_same_thing(a, b, llm_kwargs):
    from pydantic import BaseModel, Field
    class IsSameThing(BaseModel):
        is_same_thing: bool = Field(description="determine whether two objects are same thing.", default=False)

    def run_gpt_fn(inputs, sys_prompt, history=[]):
        return predict_no_ui_long_connection(
            inputs=inputs, llm_kwargs=llm_kwargs,
            history=history, sys_prompt=sys_prompt, observe_window=[]
        )
@@ -24,7 +24,7 @@ def is_same_thing(a, b, llm_kwargs):
    inputs_01 = "Identity whether the user input and the target is the same thing: \n target object: {a} \n user input object: {b} \n\n\n".format(a=a, b=b)
    inputs_01 += "\n\n\n Note that the user may describe the target object with a different language, e.g. cat and 猫 are the same thing."
    analyze_res_cot_01 = run_gpt_fn(inputs_01, "", [])

    inputs_02 = inputs_01 + gpt_json_io.format_instructions
    analyze_res = run_gpt_fn(inputs_02, "", [inputs_01, analyze_res_cot_01])

View File

@@ -41,11 +41,11 @@ def is_function_successfully_generated(fn_path, class_name, return_dict):
        # Now you can create an instance of the class
        instance = some_class()
        return_dict['success'] = True
        return
    except:
        return_dict['traceback'] = trimmed_format_exc()
        return

def subprocess_worker(code, file_path, return_dict):
    return_dict['result'] = None
    return_dict['success'] = False

View File

@@ -1,4 +1,4 @@
import platform
import pickle
import multiprocessing

View File

@@ -62,8 +62,8 @@ class GptJsonIO():
        if "type" in reduced_schema:
            del reduced_schema["type"]
        # Ensure json in context is well-formed with double quotes.
+        schema_str = json.dumps(reduced_schema)
        if self.example_instruction:
-            schema_str = json.dumps(reduced_schema)
            return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
        else:
            return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)
@@ -89,7 +89,7 @@ class GptJsonIO():
            error + "\n\n" + \
            "Now, fix this json string. \n\n"
        return prompt

    def generate_output_auto_repair(self, response, gpt_gen_fn):
        """
        response: string containing candidate json

View File

@@ -1,10 +1,11 @@
from toolbox import update_ui, update_ui_lastest_msg, get_log_folder
-from toolbox import get_conf, objdump, objload, promote_file_to_downloadzone
+from toolbox import get_conf, promote_file_to_downloadzone
from .latex_toolbox import PRESERVE, TRANSFORM
from .latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
from .latex_toolbox import reverse_forbidden_text_careful_brace, reverse_forbidden_text, convert_to_linklist, post_process
from .latex_toolbox import fix_content, find_main_tex_file, merge_tex_files, compile_latex_with_timeout
from .latex_toolbox import find_title_and_abs
+from .latex_pickle_io import objdump, objload

import os, shutil
import re
@@ -90,16 +91,16 @@ class LatexPaperSplit():
            "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
            "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
        # Please do not remove or alter this warning line unless you are the original author of the paper (original authors are welcome to contact the developers via the QQ group in the README)
        self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
        self.title = "unknown"
        self.abstract = "unknown"

    def read_title_and_abstract(self, txt):
        try:
            title, abstract = find_title_and_abs(txt)
            if title is not None:
                self.title = title.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
            if abstract is not None:
                self.abstract = abstract.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
        except:
            pass
@@ -111,7 +112,7 @@ class LatexPaperSplit():
        result_string = ""
        node_cnt = 0
        line_cnt = 0

        for node in self.nodes:
            if node.preserve:
                line_cnt += node.string.count('\n')
@@ -144,7 +145,7 @@ class LatexPaperSplit():
        return result_string

    def split(self, txt, project_folder, opts):
        """
        break down latex file to a linked list,
        each node use a preserve flag to indicate whether it should
@@ -155,7 +156,7 @@ class LatexPaperSplit():
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        p = multiprocessing.Process(
            target=split_subprocess,
            args=(txt, project_folder, return_dict, opts))
        p.start()
        p.join()
@@ -217,13 +218,13 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
    from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from .latex_actions import LatexPaperFileGroup, LatexPaperSplit

    # <-------- find the main tex file ---------->
    maintex = find_main_tex_file(file_manifest, mode)
    chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    time.sleep(3)

    # <-------- read the Latex files, merging a multi-file tex project into one giant tex ---------->
    main_tex_basename = os.path.basename(maintex)
    assert main_tex_basename.endswith('.tex')
    main_tex_basename_bare = main_tex_basename[:-4]
@@ -240,13 +241,13 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
    with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
        f.write(merged_content)

    # <-------- finely split the latex file ---------->
    chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    lps = LatexPaperSplit()
    lps.read_title_and_abstract(merged_content)
    res = lps.split(merged_content, project_folder, opts)  # time-consuming function

    # <-------- split overly long latex fragments ---------->
    pfg = LatexPaperFileGroup()
    for index, r in enumerate(res):
        pfg.file_paths.append('segment-' + str(index))
@@ -255,17 +256,17 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- switch prompts as needed ---------->
    inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
    inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]

    if os.path.exists(pj(project_folder,'temp.pkl')):
        # <-------- [debug only] if a debug cache file exists, skip the GPT request step ---------->
        pfg = objload(file=pj(project_folder,'temp.pkl'))
    else:
        # <-------- multi-threaded gpt requests ---------->
        history_array = [[""] for _ in range(n_split)]
        # LATEX_EXPERIMENTAL, = get_conf('LATEX_EXPERIMENTAL')
        # if LATEX_EXPERIMENTAL:
@@ -284,32 +285,32 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
            scroller_max_len = 40
        )

        # <-------- reassemble the text fragments into complete tex pieces ---------->
        pfg.sp_file_result = []
        for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
            pfg.sp_file_result.append(gpt_say)
        pfg.merge_result()

        # <-------- temporary storage for debugging ---------->
        pfg.get_token_num = None
        objdump(pfg, file=pj(project_folder,'temp.pkl'))

    write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)

    # <-------- write out the files ---------->
    msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}"
    final_tex = lps.merge_result(pfg.file_result, mode, msg)
    objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))

    with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
        if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)

    # <-------- tidy up the results and exit ---------->
    chatbot.append((f"完成了吗?", 'GPT结果已输出, 即将编译PDF'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # <-------- return ---------->
    return project_folder + f'/merge_{mode}.tex'
@@ -362,7 +363,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
        yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history)   # refresh the Gradio front end
        ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)

        if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
            # the following steps may continue only if the second step succeeded
            yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history)    # refresh the Gradio front end
@@ -393,9 +394,9 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
        original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
        modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
        diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
        results_ += f"原始PDF编译是否成功: {original_pdf_success};"
        results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
        results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
        yield from update_ui_lastest_msg(f'{n_fix}编译结束:<br/>{results_}...', chatbot, history)  # refresh the Gradio front end

        if diff_pdf_success:
@@ -409,7 +410,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
            shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
            promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
            # concatenate the two PDFs
            if original_pdf_success:
                try:
                    from .latex_toolbox import merge_pdfs
                    concat_pdf = pj(work_folder_modified, f'comparison.pdf')
@@ -425,7 +426,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
        if n_fix>=max_try: break
        n_fix += 1
        can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
            file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
            log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
            tex_name=f'{main_file_modified}.tex',
            tex_name_pure=f'{main_file_modified}',
@@ -445,14 +446,14 @@ def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
    import shutil
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    from toolbox import gen_time_str
    ch = construct_html()
    orig = ""
    trans = ""
    final = []
    for c,r in zip(sp_file_contents, sp_file_result):
        final.append(c)
        final.append(r)
    for i, k in enumerate(final):
        if i%2==0:
            orig = k
        if i%2==1:

View File

@@ -0,0 +1,38 @@
import pickle

class SafeUnpickler(pickle.Unpickler):

    def get_safe_classes(self):
        from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
        # define the classes that are safe to deserialize
        safe_classes = {
            # add other safe classes here
            'LatexPaperFileGroup': LatexPaperFileGroup,
            'LatexPaperSplit': LatexPaperSplit,
        }
        return safe_classes

    def find_class(self, module, name):
        # only allow the whitelisted classes to be deserialized
        self.safe_classes = self.get_safe_classes()
        if name in self.safe_classes:
            return self.safe_classes[name]
        # raise on any attempt to load an unauthorized class
        raise pickle.UnpicklingError(f"Attempted to deserialize unauthorized class '{name}' from module '{module}'")

def objdump(obj, file="objdump.tmp"):
    with open(file, "wb+") as f:
        pickle.dump(obj, f)
    return

def objload(file="objdump.tmp"):
    import os
    if not os.path.exists(file):
        return
    with open(file, "rb") as f:
        unpickler = SafeUnpickler(f)
        return unpickler.load()
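What the whitelist buys, in miniature: a classic pickle RCE gadget is refused at find_class time. The `Evil` class below is a hypothetical attacker payload; only SafeUnpickler's behavior comes from the file above, and the sketch assumes it runs inside the plugin package so the relative import in get_safe_classes resolves.

```python
import io, pickle

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))  # executes on a plain pickle.loads

payload = pickle.dumps(Evil())
try:
    SafeUnpickler(io.BytesIO(payload)).load()  # the class defined above
except pickle.UnpicklingError as e:
    print(e)  # Attempted to deserialize unauthorized class 'system' ...
```

Plain `pickle.loads(payload)` would run the command; the whitelist narrows deserialization to the two latex classes the plugin actually persists.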

View File

@@ -85,8 +85,8 @@ def write_numpy_to_wave(filename, rate, data, add_header=False):

def is_speaker_speaking(vad, data, sample_rate):
    # Function to detect if the speaker is speaking
    # The WebRTC VAD only accepts 16-bit mono PCM audio,
    # sampled at 8000, 16000, 32000 or 48000 Hz.
    # A frame must be either 10, 20, or 30 ms in duration:
    frame_duration = 30
    n_bit_each = int(sample_rate * frame_duration / 1000)*2   # x2 because audio is 16 bit (2 bytes)
@@ -94,7 +94,7 @@ def is_speaker_speaking(vad, data, sample_rate):
    for t in range(len(data)):
        if t!=0 and t % n_bit_each == 0:
            res_list.append(vad.is_speech(data[t-n_bit_each:t], sample_rate))

    info = ''.join(['^' if r else '.' for r in res_list])
    info = info[:10]
    if any(res_list):
@@ -186,10 +186,10 @@ class AliyunASR():
            keep_alive_last_send_time = time.time()
            while not self.stop:
                # time.sleep(self.capture_interval)
                audio = rad.read(uuid.hex)
                if audio is not None:
                    # convert to pcm file
                    temp_file = f'{temp_folder}/{uuid.hex}.pcm'  #
                    dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE)  # 48000 --> 16000
                    write_numpy_to_wave(temp_file, NEW_SAMPLERATE, dsdata)
                    # read pcm binary
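The byte arithmetic in the VAD code above, spelled out as a minimal check (no assumptions beyond the comments already in the code):

```python
# 16-bit mono PCM at 16 kHz, 30 ms VAD frames:
sample_rate = 16000          # Hz; WebRTC VAD accepts 8000/16000/32000/48000
frame_duration = 30          # ms; frames must be 10, 20 or 30 ms long
samples_per_frame = int(sample_rate * frame_duration / 1000)   # 480 samples
bytes_per_frame = samples_per_frame * 2                        # x2: 2 bytes per 16-bit sample
assert (samples_per_frame, bytes_per_frame) == (480, 960)
```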

View File

@@ -3,12 +3,12 @@ from scipy import interpolate

def Singleton(cls):
    _instance = {}

    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)
        return _instance[cls]

    return _singleton
@@ -39,7 +39,7 @@ class RealtimeAudioDistribution():
        else:
            res = None
        return res

def change_sample_rate(audio, old_sr, new_sr):
    duration = audio.shape[0] / old_sr

View File

@@ -40,7 +40,7 @@ class GptAcademicState():

class GptAcademicGameBaseState():
    """
    1. first init: __init__ ->
    """
    def init_game(self, chatbot, lock_plugin):
        self.plugin_name = None
@@ -53,7 +53,7 @@ class GptAcademicGameBaseState():
            raise ValueError("callback_fn is None")
        chatbot._cookies['lock_plugin'] = self.callback_fn
        self.dump_state(chatbot)

    def get_plugin_name(self):
        if self.plugin_name is None:
            raise ValueError("plugin_name is None")
@@ -71,7 +71,7 @@ class GptAcademicGameBaseState():
        state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
        if state is not None:
            state = pickle.loads(state)
        else:
            state = cls()
            state.init_game(chatbot, lock_plugin)
        state.plugin_name = plugin_name
@@ -79,7 +79,7 @@ class GptAcademicGameBaseState():
        state.chatbot = chatbot
        state.callback_fn = callback_fn
        return state

    def continue_game(self, prompt, chatbot, history):
        # main game body
        yield from self.step(prompt, chatbot, history)

View File

@@ -35,7 +35,7 @@ def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=F
    remain_txt_to_cut_storage = ""
    # To speed things up, we use a special trick: whenever remain_txt_to_cut grows beyond `_max`, the text after `_max` is parked in remain_txt_to_cut_storage
    remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)

    while True:
        if get_token_fn(remain_txt_to_cut) <= limit:
            # the remaining text is already within the token limit, so no more cutting is needed

View File

@@ -4,7 +4,7 @@ from toolbox import promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from toolbox import get_conf
from toolbox import ProxyNetworkActivate
-from colorful import *
+from shared_utils.colorful import *
import requests
import random
import copy
@@ -64,15 +64,15 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat
            # one more small tweak: rewrite the current part's heading, defaulting to the English one
            cur_value += value
            translated_res_array.append(cur_value)
    res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + translated_res_array,
                                     file_basename = f"{gen_time_str()}-translated_only.md",
                                     file_fullname = None,
                                     auto_caption = False)
    promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
    generated_conclusion_files.append(res_path)
    return res_path

-def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG):
+def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs={}):
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@@ -138,17 +138,17 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
        chatbot=chatbot,
        history_array=[meta for _ in inputs_array],
        sys_prompt_array=[
-            "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
+            "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "") for _ in inputs_array],
    )

    # -=-=-=-=-=-=-=-= write out the Markdown file -=-=-=-=-=-=-=-=
    produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files)

    # -=-=-=-=-=-=-=-= write out the HTML file -=-=-=-=-=-=-=-=
    ch = construct_html()
    orig = ""
    trans = ""
    gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
    for i,k in enumerate(gpt_response_collection_html):
        if i%2==0:
            gpt_response_collection_html[i] = inputs_show_user_array[i//2]
        else:
@@ -159,7 +159,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
    final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
    final.extend(gpt_response_collection_html)
    for i, k in enumerate(final):
        if i%2==0:
            orig = k
        if i%2==1:
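The new `plugin_kwargs` parameter threads a user-supplied suffix into every fragment's system prompt. In miniature (the extra instruction below is a hypothetical secondary-menu setting, not part of the project):

```python
# hypothetical secondary-menu input:
plugin_kwargs = {"additional_prompt": "保留所有专业术语的英文原文。"}
base = "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。"
sys_prompt = base + plugin_kwargs.get("additional_prompt", "")
print(sys_prompt)
# with no secondary-menu input, plugin_kwargs={} leaves the prompt unchanged
```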

View File

@@ -0,0 +1,26 @@
import os
from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
from crazy_functions.pdf_fns.parse_pdf import parse_pdf, translate_pdf

def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
    import copy, json
    TOKEN_LIMIT_PER_FRAGMENT = 1024
    generated_conclusion_files = []
    generated_html_files = []
    DST_LANG = "中文"
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    for index, fp in enumerate(file_manifest):
        chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        article_dict = parse_pdf(fp, grobid_url)
        grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
        with open(grobid_json_res, 'w+', encoding='utf8') as f:
            f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
        promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)

        if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
        yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs=plugin_kwargs)
    chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

View File

@@ -1,83 +1,15 @@
-from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
-from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
+from toolbox import get_log_folder
+from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-from .crazy_utils import read_and_clean_pdf_text
-from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url, translate_pdf
-from colorful import *
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+from crazy_functions.crazy_utils import read_and_clean_pdf_text
+from shared_utils.colorful import *
import os

-@CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    disable_auto_promotion(chatbot)
-    # basic info: feature, contributors
-    chatbot.append([
-        "函数插件功能?",
-        "批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-    # try to import the dependencies; if any are missing, suggest how to install them
-    try:
-        check_packages(["fitz", "tiktoken", "scipdf"])
-    except:
-        report_exception(chatbot, history,
-                         a=f"解析项目: {txt}",
-                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。")
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-        return
-
-    # clear the history to avoid input overflow
-    history = []
-
-    from .crazy_utils import get_files_from_everything
-    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
-    # check the input; if no input was given, exit directly
-    if not success:
-        if txt == "": txt = '空空如也的输入栏'
-
-    # if no file was found
-    if len(file_manifest) == 0:
-        report_exception(chatbot, history,
-                         a=f"解析项目: {txt}", b=f"找不到任何.pdf拓展名的文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-        return
-
-    # start the actual task
-    grobid_url = get_avail_grobid_url()
-    if grobid_url is not None:
-        yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
-    else:
-        yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
-        yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
-    import copy, json
-    TOKEN_LIMIT_PER_FRAGMENT = 1024
-    generated_conclusion_files = []
-    generated_html_files = []
-    DST_LANG = "中文"
-    from crazy_functions.pdf_fns.report_gen_html import construct_html
-    for index, fp in enumerate(file_manifest):
-        chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-        article_dict = parse_pdf(fp, grobid_url)
-        grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
-        with open(grobid_json_res, 'w+', encoding='utf8') as f:
-            f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
-        promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
-        if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
-        yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG)
-    chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
+def 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
    """
-    此函数已经弃用
+    注意,此函数已经弃用,新函数位于crazy_functions/pdf_fns/parse_pdf.py
    """
    import copy
    TOKEN_LIMIT_PER_FRAGMENT = 1024
@@ -97,7 +29,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
    # for better results, strip everything after the Introduction (if present)
    paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]

    # single thread: fetch the paper's meta information
    paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
@@ -116,12 +48,13 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
        chatbot=chatbot,
        history_array=[[paper_meta] for _ in paper_fragments],
        sys_prompt_array=[
-            "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
+            "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "")
+            for _ in paper_fragments],
        # max_workers=5  # the maximum parallelism OpenAI allows
    )
    gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
    # format the report
    for i,k in enumerate(gpt_response_collection_md):
        if i%2==0:
            gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}] \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]\n "
        else:
@@ -139,18 +72,18 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
    # write html
    try:
        ch = construct_html()
        orig = ""
        trans = ""
        gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
        for i,k in enumerate(gpt_response_collection_html):
            if i%2==0:
                gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
            else:
                gpt_response_collection_html[i] = gpt_response_collection_html[i]
        final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
        final.extend(gpt_response_collection_html)
        for i, k in enumerate(final):
            if i%2==0:
                orig = k
            if i%2==1:

View File

@@ -0,0 +1,211 @@
from toolbox import get_log_folder, gen_time_str, get_conf
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import promote_file_to_downloadzone, extract_archive
from toolbox import generate_file_link, zip_folder
from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
import os
def refresh_key(doc2x_api_key):
import requests, json
url = "https://api.doc2x.noedgeai.com/api/token/refresh"
res = requests.post(
url,
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
res_json = json.loads(decoded)
doc2x_api_key = res_json['data']['token']
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
return doc2x_api_key
def 解析PDF_DOC2X_转Latex(pdf_file_path):
import requests, json, os
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
res = requests.post(
url,
files={"file": open(pdf_file_path, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
to = "latex" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
latex_zip_path = os.path.join(latex_dir, gen_time_str() + '.zip')
latex_unzip_path = os.path.join(latex_dir, gen_time_str())
if res.status_code == 200:
with open(latex_zip_path, "wb") as f: f.write(res.content)
else:
        raise RuntimeError("[ERROR] status code: %d, body: %s" % (res.status_code, res.text))
import zipfile
with zipfile.ZipFile(latex_zip_path, 'r') as zip_ref:
zip_ref.extractall(latex_unzip_path)
return latex_unzip_path
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def pdf2markdown(filepath):
import requests, json, os
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
chatbot.append((None, "加载PDF文件,发送至DOC2X解析..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.post(
url,
files={"file": open(filepath, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
else:
            raise RuntimeError("[ERROR] status code: %d, body: %s" % (res.status_code, res.text))
uuid = res_json[0]['uuid']
to = "md" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
chatbot.append((None, f"读取解析: {url} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
md_zip_path = os.path.join(markdown_dir, gen_time_str() + '.zip')
if res.status_code == 200:
with open(md_zip_path, "wb") as f: f.write(res.content)
else:
            raise RuntimeError("[ERROR] status code: %d, body: %s" % (res.status_code, res.text))
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
from crazy_functions.Markdown_Translate import Markdown英译中
import shutil, re
time_tag = gen_time_str()
target_path_base = get_log_folder(chatbot.get_user())
file_origin_name = os.path.basename(md_zip_path)
this_file_path = os.path.join(target_path_base, file_origin_name)
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
for generated_fp in file_manifest:
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f:
content = f.read()
# 将公式中的\[ \]替换成$$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
# 将公式中的\( \)替换成$
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 生成在线预览html
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
md = re.sub(r'^<table>', r'😃<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
if os.path.exists(generated_fp):
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
content = re.sub(r'^<table>', r'😃<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# 生成包含图片的压缩包
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_基于DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return
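
The math-delimiter normalization in `deliver_to_markdown_plugin` above can be checked in isolation. A minimal sketch (the sample string is illustrative, not from the repository):

```python
# Doc2X emits \[ \] and \( \) math delimiters; the plugin rewrites them to $$ and $.
s = r"Einstein: \(E=mc^2\) and \[a^2+b^2=c^2\]"
s = s.replace(r'\[', r'$$').replace(r'\]', r'$$')  # display math
s = s.replace(r'\(', r'$').replace(r'\)', r'$')    # inline math
assert s == "Einstein: $E=mc^2$ and $$a^2+b^2=c^2$$"
```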

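A minimal usage sketch for the LaTeX path (the file name is hypothetical; assumes DOC2X_API_KEY is configured in config.py and this module is importable):

```python
# Hypothetical standalone call; in the plugin flow this is driven from the UI.
unzip_dir = 解析PDF_DOC2X_转Latex("paper.pdf")  # upload for OCR, export as LaTeX, unzip
print("LaTeX sources extracted to:", unzip_dir)
```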

@@ -0,0 +1,85 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re
def extract_text_from_files(txt, chatbot, history):
    """
    Find pdf/md/word files in the input, extract their text, and return a status flag plus the content.
    Args:
        chatbot: chatbot inputs and outputs (UI dialog-window handle, used for data-flow visualization)
        history (list): list of chat history
    Returns:
        found (bool): whether any file was found
        final_result (list): text content of each file
        page_one (list): first page / abstract of each file
        file_manifest (list): file paths
        exception (str): information that requires manual handling by the user; empty if nothing went wrong
    """
    final_result = []
    page_one = []
    file_manifest = []
    exception = ""

    if txt == "":
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, exception  # the input area holds no file; return its content as-is

    # look for files referenced by the input area
    file_pdf, pdf_manifest, folder_pdf = get_files_from_everything(txt, '.pdf')
    file_md, md_manifest, folder_md = get_files_from_everything(txt, '.md')
    file_word, word_manifest, folder_word = get_files_from_everything(txt, '.docx')
    file_doc, doc_manifest, folder_doc = get_files_from_everything(txt, '.doc')

    if file_doc:
        exception = "word"  # legacy .doc is not supported; the user must convert it to .docx
        return False, final_result, page_one, file_manifest, exception

    file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
    if file_num == 0:
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, exception  # the input area holds no file; return its content as-is

    if file_pdf:
        try:  # probe the dependency; if it is missing, report back so the caller can suggest installation
            import fitz
        except ImportError:
            exception = "pdf"
            return False, final_result, page_one, file_manifest, exception
        for index, fp in enumerate(pdf_manifest):
            file_content, pdf_one = read_and_clean_pdf_text(fp)  # try to split the PDF by section
            file_content = file_content.encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
            pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
            final_result.append(file_content)
            page_one.append(pdf_one)
            file_manifest.append(os.path.relpath(fp, folder_pdf))

    if file_md:
        for index, fp in enumerate(md_manifest):
            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
                file_content = f.read()
            file_content = file_content.encode('utf-8', 'ignore').decode()
            headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE)  # use the top-level headings as the abstract
            if len(headers) > 0:
                page_one.append("\n".join(headers))  # join all headings with newlines
            else:
                page_one.append("")
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_md))

    if file_word:
        try:  # probe the dependency; if it is missing, report back so the caller can suggest installation
            from docx import Document
        except ImportError:
            exception = "word_pip"
            return False, final_result, page_one, file_manifest, exception
        for index, fp in enumerate(word_manifest):
            doc = Document(fp)
            file_content = '\n'.join([p.text for p in doc.paragraphs])
            file_content = file_content.encode('utf-8', 'ignore').decode()
            page_one.append(file_content[:200])
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_word))

    return True, final_result, page_one, file_manifest, exception

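A sketch of how a caller might consume the five return values (the call site below is hypothetical):

```python
found, texts, first_pages, manifest, exception = extract_text_from_files(txt, chatbot, history)
if not found:
    if exception == "word":        # legacy .doc: ask the user to convert it to .docx first
        pass
    elif exception == "pdf":       # missing dependency: pip install pymupdf
        pass
    elif exception == "word_pip":  # missing dependency: pip install python-docx
        pass
else:
    for path, body, abstract in zip(manifest, texts, first_pages):
        print(path, len(body), abstract[:50])
```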

@@ -0,0 +1,73 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>GPT-Academic 翻译报告书</title>
<style>
.centered-a {
color: red;
text-align: center;
margin-bottom: 2%;
font-size: 1.5em;
}
.centered-b {
color: red;
text-align: center;
margin-top: 10%;
margin-bottom: 20%;
font-size: 1.5em;
}
.centered-c {
color: rgba(255, 0, 0, 0);
text-align: center;
margin-top: 2%;
margin-bottom: 20%;
font-size: 7em;
}
</style>
<script>
// Configure MathJax settings
MathJax = {
tex: {
inlineMath: [
['$', '$'],
        ['\\(', '\\)']
]
}
}
addEventListener('zero-md-rendered', () => {MathJax.typeset(); console.log('MathJax typeset!');})
</script>
<!-- Load MathJax library -->
<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
<script
type="module"
src="https://cdn.jsdelivr.net/gh/zerodevx/zero-md@2/dist/zero-md.min.js"
></script>
</head>
<body>
<div class="test_temp1" style="width:10%; height: 500px; float:left;">
</div>
<div class="test_temp2" style="width:80%; height: 500px; float:left;">
<!-- Simply set the `src` attribute to your MD file and win -->
<div class="centered-a">
请按Ctrl+S保存此页面,否则该页面可能在几分钟后失效。
</div>
<zero-md src="translated_markdown.md" no-shadow>
</zero-md>
<div class="centered-b">
本报告由GPT-Academic开源项目生成,地址https://github.com/binary-husky/gpt_academic。
</div>
<div class="centered-c">
本报告由GPT-Academic开源项目生成,地址https://github.com/binary-husky/gpt_academic。
</div>
</div>
<div class="test_temp3" style="width:10%; height: 500px; float:left;">
</div>
</body>
</html>


@@ -0,0 +1,52 @@
import os, json, base64
from pydantic import BaseModel, Field
from textwrap import dedent
from typing import List
class ArgProperty(BaseModel): # PLUGIN_ARG_MENU
title: str = Field(description="The title", default="")
description: str = Field(description="The description", default="")
default_value: str = Field(description="The default value", default="")
type: str = Field(description="The type", default="") # currently we support ['string', 'dropdown']
options: List[str] = Field(default=[], description="List of options available for the argument") # only used when type is 'dropdown'
class GptAcademicPluginTemplate():
def __init__(self):
        # Note: the `execute` method may run in different threads,
        # so do not store any state in the plugin instance;
        # it may be accessed by multiple threads concurrently.
pass
def define_arg_selection_menu(self):
"""
An example as below:
```
def define_arg_selection_menu(self):
gui_definition = {
"main_input":
ArgProperty(title="main input", description="description", default_value="default_value", type="string").model_dump_json(),
"advanced_arg":
ArgProperty(title="advanced arguments", description="description", default_value="default_value", type="string").model_dump_json(),
"additional_arg_01":
ArgProperty(title="additional", description="description", default_value="default_value", type="string").model_dump_json(),
}
return gui_definition
```
"""
raise NotImplementedError("You need to implement this method in your plugin class")
def get_js_code_for_generating_menu(self, btnName):
define_arg_selection = self.define_arg_selection_menu()
if len(define_arg_selection.keys()) > 8:
raise ValueError("You can only have up to 8 arguments in the define_arg_selection")
# if "main_input" not in define_arg_selection:
# raise ValueError("You must have a 'main_input' in the define_arg_selection")
DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')
    def execute(self, txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
raise NotImplementedError("You need to implement this method in your plugin class")

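A minimal subclass sketch against the template above (class and argument names are illustrative, not from the repository; assumes the frontend feeds the selected menu values back through plugin_kwargs):

```python
class DemoZipPlugin(GptAcademicPluginTemplate):
    def define_arg_selection_menu(self):
        # Up to 8 arguments are allowed; each value is a serialized ArgProperty.
        return {
            "main_input":
                ArgProperty(title="source path", description="folder to compress",
                            default_value="", type="string").model_dump_json(),
            "level":
                ArgProperty(title="compression level", description="zip compression level",
                            default_value="fast", type="dropdown",
                            options=["fast", "best"]).model_dump_json(),
        }

    def execute(self, txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # plugin_kwargs carrying the menu selection is an assumption of this sketch
        level = plugin_kwargs.get("level", "fast")
        chatbot.append((txt, f"compressing {txt} with level={level}"))
```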

@@ -28,7 +28,7 @@ EMBEDDING_DEVICE = "cpu"
# 基于上下文的prompt模版,请务必保留"{question}"和"{context}"
PROMPT_TEMPLATE = """已知信息:
{context}
根据上述已知信息,简洁和专业的来回答用户的问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题” 或 “没有提供足够的相关信息”,不允许在答案中添加编造成分,答案请使用中文。 问题是:{question}"""
@@ -58,7 +58,7 @@ OPEN_CROSS_DOMAIN = False
def similarity_search_with_score_by_vector(
    self, embedding: List[float], k: int = 4
) -> List[Tuple[Document, float]]:
    def seperate_list(ls: List[int]) -> List[List[int]]:
        lists = []
        ls1 = [ls[0]]
@@ -200,7 +200,7 @@ class LocalDocQA:
            return vs_path, loaded_files
        else:
            raise RuntimeError("文件加载失败,请检查文件格式是否正确")
    def get_loaded_file(self, vs_path):
        ds = self.vector_store.docstore
        return set([ds._dict[k].metadata['source'].split(vs_path)[-1] for k in ds._dict])
@@ -290,10 +290,10 @@ class knowledge_archive_interface():
        self.threadLock.acquire()
        # import uuid
        self.current_id = id
        self.qa_handle, self.kai_path = construct_vector_store(
            vs_id=self.current_id,
            vs_path=vs_path,
            files=file_manifest,
            sentence_size=100,
            history=[],
            one_conent="",
@@ -304,7 +304,7 @@ class knowledge_archive_interface():
    def get_current_archive_id(self):
        return self.current_id
    def get_loaded_file(self, vs_path):
        return self.qa_handle.get_loaded_file(vs_path)
@@ -312,10 +312,10 @@ class knowledge_archive_interface():
        self.threadLock.acquire()
        if not self.current_id == id:
            self.current_id = id
            self.qa_handle, self.kai_path = construct_vector_store(
                vs_id=self.current_id,
                vs_path=vs_path,
                files=[],
                sentence_size=100,
                history=[],
                one_conent="",
@@ -329,7 +329,7 @@ class knowledge_archive_interface():
            query = txt,
            vs_path = self.kai_path,
            score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
            vector_search_top_k=VECTOR_SEARCH_TOP_K,
            chunk_conent=True,
            chunk_size=CHUNK_SIZE,
            text2vec = self.get_chinese_text2vec(),


@@ -10,7 +10,7 @@ def read_avail_plugin_enum():
    from crazy_functional import get_crazy_functions
    plugin_arr = get_crazy_functions()
    # remove plugins without explanation
-   plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
+   plugin_arr = {k:v for k, v in plugin_arr.items() if ('Info' in v) and ('Function' in v)}
    plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
    plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
    plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
@@ -35,9 +35,9 @@ def get_recent_file_prompt_support(chatbot):
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    path = most_recent_uploaded['path']
    prompt = "\nAdditional Information:\n"
    prompt = "In case that this plugin requires a path or a file as argument,"
    prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`"
    prompt += f"Only use it when necessary, otherwise, you can ignore this file."
    return prompt
def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
@@ -82,7 +82,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
        msg += "\n但您可以尝试再试一次\n"
        yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
        return
    # ⭐ ⭐ ⭐ 确认插件参数
    if not have_any_recent_upload_files(chatbot):
        appendix_info = ""
@@ -99,7 +99,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
    inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
        "you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
        ">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
        gpt_json_io.format_instructions
    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
        inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)


@@ -10,7 +10,7 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
    if not ALLOW_RESET_CONFIG:
        yield from update_ui_lastest_msg(
            lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
            chatbot=chatbot, history=history, delay=2
        )
        return
@@ -35,7 +35,7 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
        ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
        gpt_json_io.format_instructions
    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
        inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
@@ -45,11 +45,11 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    ok = (explicit_conf in txt)
    if ok:
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
            chatbot=chatbot, history=history, delay=1
        )
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
            chatbot=chatbot, history=history, delay=2
        )
@@ -69,7 +69,7 @@ def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history
    ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
    if not ALLOW_RESET_CONFIG:
        yield from update_ui_lastest_msg(
            lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
            chatbot=chatbot, history=history, delay=2
        )
        return


@@ -6,7 +6,7 @@ class VoidTerminalState():
    def reset_state(self):
        self.has_provided_explaination = False
    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
        chatbot._cookies['plugin_state'] = pickle.dumps(self)


@@ -130,7 +130,7 @@ def get_name(_url_):
@CatchException
- def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
    import glob
@@ -144,8 +144,8 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
    try:
        import bs4
    except:
        report_exception(chatbot, history,
            a = f"解析项目: {txt}",
            b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
@@ -157,12 +157,12 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
    try:
        pdf_path, info = download_arxiv_(txt)
    except:
        report_exception(chatbot, history,
            a = f"解析项目: {txt}",
            b = f"下载pdf文件未成功")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    # 翻译摘要等
    i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
    i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'


@@ -5,16 +5,16 @@ from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
@CatchException
- def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    from crazy_functions.game_fns.game_interactive_story import MiniGame_ResumeStory
    # 清空历史
    history = []
    # 选择游戏
    cls = MiniGame_ResumeStory
    # 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
    state = cls.sync_state(chatbot,
        llm_kwargs,
        cls,
        plugin_name='MiniGame_ResumeStory',
        callback_fn='crazy_functions.互动小游戏->随机小游戏',
        lock_plugin=True
@@ -23,16 +23,16 @@ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_
@CatchException
- def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    from crazy_functions.game_fns.game_ascii_art import MiniGame_ASCII_Art
    # 清空历史
    history = []
    # 选择游戏
    cls = MiniGame_ASCII_Art
    # 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
    state = cls.sync_state(chatbot,
        llm_kwargs,
        cls,
        plugin_name='MiniGame_ASCII_Art',
        callback_fn='crazy_functions.互动小游戏->随机小游戏1',
        lock_plugin=True


@@ -3,7 +3,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@CatchException
- def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -11,7 +11,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
+ user_request 当前用户的请求信息(IP地址等)
    """
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
@@ -38,7 +38,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=inputs, inputs_show_user=inputs_show_user,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="When you want to show an image, use markdown format. e.g. ![image_description](image_url). If there are no image url provided, answer 'no image url provided'"
    )
    chatbot[-1] = [chatbot[-1][0], gpt_say]


@@ -6,10 +6,10 @@
    - 将图像转为灰度图像
    - 将csv文件转excel表格
Testing:
    - Crop the image, keeping the bottom half.
    - Swap the blue channel and red channel of the image.
    - Convert the image to grayscale.
    - Convert the CSV file to an Excel spreadsheet.
"""
@@ -29,12 +29,12 @@ import multiprocessing
templete = """
```python
import ... # Put dependencies here, e.g. import numpy as np.
class TerminalFunction(object): # Do not change the name of the class, The name of the class must be `TerminalFunction`
    def run(self, path): # The name of the function must be `run`, it takes only a positional argument.
        # rewrite the function you have just written here
        ...
        return generated_file_path
```
@@ -48,7 +48,7 @@ def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
    matches = re.findall(pattern, reply) # find all code blocks in text
    if len(matches) == 1:
        return matches[0].strip('python') # code block
    for match in matches:
        if 'class TerminalFunction' in match:
@@ -68,8 +68,8 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
    # 第一步
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
        sys_prompt= r"You are a world-class programmer."
    )
    history.extend([i_say, gpt_say])
@@ -82,33 +82,33 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
    ]
    i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=inputs_show_user,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt= r"You are a programmer. You need to replace `...` with valid packages, do not give `...` in your answer!"
    )
    code_to_return = gpt_say
    history.extend([i_say, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    # # 第三步
    # i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
    # i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
    #     inputs=i_say, inputs_show_user=inputs_show_user,
    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
    #     sys_prompt= r"You are a programmer."
    # )
    # # # 第三步
    # i_say = "Show me how to use `pip` to install packages to run the code above. "
    # i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
    #     inputs=i_say, inputs_show_user=i_say,
    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
    #     sys_prompt= r"You are a programmer."
    # )
    installation_advance = ""
    return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
@@ -117,7 +117,7 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
def for_immediate_show_off_when_possible(file_type, fp, chatbot):
    if file_type in ['png', 'jpg']:
        image_path = os.path.abspath(fp)
        chatbot.append(['这是一张图片, 展示如下:',
            f'本地文件地址: <br/>`{image_path}`<br/>'+
            f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
        ])
@@ -139,7 +139,7 @@ def get_recent_file_prompt_support(chatbot):
    return path
@CatchException
- def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -147,7 +147,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
+ user_request 当前用户的请求信息(IP地址等)
    """
    # 清空历史
@@ -177,7 +177,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
        chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
        yield from update_ui_lastest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
        return # 2. 如果没有文件
    # 读取文件
    file_type = file_list[0].split('.')[-1]
@@ -185,7 +185,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
    if is_the_upload_folder(txt):
        yield from update_ui_lastest_msg(f"请在输入框内填写需求, 然后再次点击该插件! 至于您的文件,不用担心, 文件路径 {txt} 已经被记忆. ", chatbot, history, 1)
        return
    # 开始干正事
    MAX_TRY = 3
    for j in range(MAX_TRY): # 最多重试3次
@@ -238,7 +238,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
            # chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
            yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
            return
    # 顺利完成,收尾
    res = str(res)
    if os.path.exists(res):
@@ -248,5 +248,5 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
    else:
        chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新


@@ -4,7 +4,7 @@ from .crazy_utils import input_clipping
import copy, json
@CatchException
- def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -12,7 +12,7 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    chatbot 聊天显示框的句柄, 用于显示给用户
    history 聊天历史, 前情提要
    system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
+ user_request 当前用户的请求信息(IP地址等)
    """
    # 清空历史, 以免输入溢出
    history = []
@@ -21,8 +21,8 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    i_say = "请写bash命令实现以下功能" + txt
    # 开始
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
    )
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新


@@ -7,7 +7,7 @@ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", qual
    from request_llms.bridge_all import model_info
    proxies = get_conf('proxies')
    # Set up OpenAI API key and model
    api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
    chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
    # 'https://api.openai.com/v1/chat/completions'
@@ -93,7 +93,7 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da
@CatchException
- def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -101,7 +101,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
+ user_request 当前用户的请求信息(IP地址等)
    """
    history = [] # 清空历史,以免输入溢出
    if prompt.strip() == "":
@@ -113,7 +113,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution)
    chatbot.append([prompt,
        f'图像中转网址: <br/>`{image_url}`<br/>'+
        f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
        f'本地文件地址: <br/>`{image_path}`<br/>'+
@@ -123,7 +123,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
@CatchException
- def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    if prompt.strip() == "":
        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
@@ -144,7 +144,7 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
        elif part in ['vivid', 'natural']:
            style = part
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
    chatbot.append([prompt,
        f'图像中转网址: <br/>`{image_url}`<br/>'+
        f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
        f'本地文件地址: <br/>`{image_path}`<br/>'+
@@ -164,7 +164,7 @@ class ImageEditState(GptAcademicState):
        confirm = (len(file_manifest) >= 1 and file_manifest[0].endswith('.png') and os.path.exists(file_manifest[0]))
        file = None if not confirm else file_manifest[0]
        return confirm, file
    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.图片生成->图片修改_DALLE2'
        self.dump_state(chatbot)
@@ -209,7 +209,7 @@ class ImageEditState(GptAcademicState):
        return all([x['value'] is not None for x in self.req])
@CatchException
- def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 尚未完成
    history = [] # 清空历史
    state = ImageEditState.get_state(chatbot, ImageEditState)


@@ -21,7 +21,7 @@ def remove_model_prefix(llm):
@CatchException
- def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -29,7 +29,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
+ user_request 当前用户的请求信息(IP地址等)
    """
    # 检查当前的模型是否符合要求
    supported_llms = [
@@ -50,25 +50,18 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
        return
    if model_info[llm_kwargs['llm_model']]["endpoint"] is not None: # 如果不是本地模型,加载API_KEY
        llm_kwargs['api_key'] = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
-   # 检查当前的模型是否符合要求
-   API_URL_REDIRECT = get_conf('API_URL_REDIRECT')
-   if len(API_URL_REDIRECT) > 0:
-       chatbot.append([f"处理任务: {txt}", f"暂不支持中转."])
-       yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-       return
    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import autogen
        if get_conf("AUTOGEN_USE_DOCKER"):
            import docker
    except:
        chatbot.append([ f"处理任务: {txt}",
            f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pyautogen docker```。"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import autogen
@@ -79,7 +72,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
        chatbot.append([f"处理任务: {txt}", f"缺少docker运行环境"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    # 解锁插件
    chatbot.get_cookies()['lock_plugin'] = None
    persistent_class_multi_user_manager = GradioMultiuserManagerForPersistentClasses()
@@ -96,7 +89,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
        history = []
        chatbot.append(["正在启动: 多智能体终端", "插件动态生成, 执行开始, 作者 Microsoft & Binary-Husky."])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-       executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+       executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
        persistent_class_multi_user_manager.set(persistent_key, executor)
        exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="create")


@@ -1,152 +0,0 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
import re
f_prefix = 'GPT-Academic对话存档'
def write_chat_to_file(chatbot, history=None, file_name=None):
"""
将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
"""
import os
import time
if file_name is None:
file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)
with open(fp, 'w', encoding='utf8') as f:
from themes.theme import advanced_css
f.write(f'<!DOCTYPE html><head><meta charset="utf-8"><title>对话历史</title><style>{advanced_css}</style></head>')
for i, contents in enumerate(chatbot):
for j, content in enumerate(contents):
try: # 这个bug没找到触发条件,暂时先这样顶一下
if type(content) != str: content = str(content)
except:
continue
f.write(content)
if j == 0:
f.write('<hr style="border-top: dotted 3px #ccc;">')
f.write('<hr color="red"> \n\n')
f.write('<hr color="blue"> \n\n raw chat context:\n')
f.write('<code>')
for h in history:
f.write("\n>>>" + h)
f.write('</code>')
promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
return '对话历史写入:' + fp
def gen_file_preview(file_name):
try:
with open(file_name, 'r', encoding='utf8') as f:
file_content = f.read()
# pattern to match the text between <head> and </head>
pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
file_content = re.sub(pattern, '', file_content)
html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
history = history.strip('<code>')
history = history.strip('</code>')
history = history.split("\n>>>")
return list(filter(lambda x:x!="", history))[0][:100]
except:
return ""
def read_file_to_chat(chatbot, history, file_name):
with open(file_name, 'r', encoding='utf8') as f:
file_content = f.read()
# pattern to match the text between <head> and </head>
pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
file_content = re.sub(pattern, '', file_content)
html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
history = history.strip('<code>')
history = history.strip('</code>')
history = history.split("\n>>>")
history = list(filter(lambda x:x!="", history))
html = html.split('<hr color="red"> \n\n')
html = list(filter(lambda x:x!="", html))
chatbot.clear()
for i, h in enumerate(html):
i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
chatbot.append([i_say, gpt_say])
chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
return chatbot, history
@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
chatbot.append(("保存当前对话",
f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
def hide_cwd(str):
import os
current_path = os.getcwd()
replace_path = "."
return str.replace(current_path, replace_path)
@CatchException
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
from .crazy_utils import get_files_from_everything
success, file_manifest, _ = get_files_from_everything(txt, type='.html')
if not success:
if txt == "": txt = '空空如也的输入栏'
import glob
local_history = "<br/>".join([
"`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
recursive=True
)])
chatbot.append([f"正在查找对话历史文件html格式: {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
try:
chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
except:
chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
@CatchException
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
import glob, os
local_history = "<br/>".join([
"`"+hide_cwd(f)+"`"
for f in glob.glob(
f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
)])
for f in glob.glob(f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True):
os.remove(f)
chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return


@@ -40,10 +40,10 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
            i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
            i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。'
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs,
                chatbot=chatbot,
                history=[],
                sys_prompt="总结文章。"
            )
@@ -56,10 +56,10 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
        if len(paper_fragments) > 1:
            i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。"
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say,
                llm_kwargs=llm_kwargs,
                chatbot=chatbot,
                history=this_paper_history,
                sys_prompt="总结文章。"
            )
@@ -79,7 +79,7 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
@CatchException
- def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+ def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    import glob, os
    # 基本信息:功能、贡献者


@@ -17,7 +17,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
    file_content, page_one = read_and_clean_pdf_text(file_name) # 尝试按照章节切割PDF
    file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
    page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
@@ -25,7 +25,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
    page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
    # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
    paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
    ############################## <第 1 步,从摘要中提取高价值信息,放到history中> ##################################
    final_results = []
    final_results.append(paper_meta)
@@ -44,10 +44,10 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}" i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}"
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}" i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问 gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
llm_kwargs, chatbot, llm_kwargs, chatbot,
history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果 history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
sys_prompt="Extract the main idea of this section with Chinese." # 提示 sys_prompt="Extract the main idea of this section with Chinese." # 提示
) )
iteration_results.append(gpt_say) iteration_results.append(gpt_say)
last_iteration_result = gpt_say last_iteration_result = gpt_say
@@ -67,15 +67,15 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
- (2):What are the past methods? What are the problems with them? Is the approach well motivated? - (2):What are the past methods? What are the problems with them? Is the approach well motivated?
- (3):What is the research methodology proposed in this paper? - (3):What is the research methodology proposed in this paper?
- (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals? - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
Follow the format of the output that follows: Follow the format of the output that follows:
1. Title: xxx\n\n 1. Title: xxx\n\n
2. Authors: xxx\n\n 2. Authors: xxx\n\n
3. Affiliation: xxx\n\n 3. Affiliation: xxx\n\n
4. Keywords: xxx\n\n 4. Keywords: xxx\n\n
5. Urls: xxx or xxx , xxx \n\n 5. Urls: xxx or xxx , xxx \n\n
6. Summary: \n\n 6. Summary: \n\n
- (1):xxx;\n - (1):xxx;\n
- (2):xxx;\n - (2):xxx;\n
- (3):xxx;\n - (3):xxx;\n
- (4):xxx.\n\n - (4):xxx.\n\n
Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible, Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible,
@@ -85,8 +85,8 @@ do not have too much repetitive information, numerical values using the original
file_write_buffer.extend(final_results) file_write_buffer.extend(final_results)
i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000) i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user='开始最终总结', inputs=i_say, inputs_show_user='开始最终总结',
llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results, llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results,
sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters" sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters"
) )
final_results.append(gpt_say) final_results.append(gpt_say)
@@ -101,7 +101,7 @@ do not have too much repetitive information, numerical values using the original
@CatchException @CatchException
def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
import glob, os import glob, os
# 基本信息:功能、贡献者 # 基本信息:功能、贡献者
@@ -114,8 +114,8 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
try: try:
import fitz import fitz
except: except:
report_exception(chatbot, history, report_exception(chatbot, history,
a = f"解析项目: {txt}", a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return return
@@ -134,7 +134,7 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# 搜索需要处理的文件清单 # 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
# 如果没找到任何文件 # 如果没找到任何文件
if len(file_manifest) == 0: if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}") report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
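
Both summarizers split the extracted text with `breakdown_text_to_satisfy_token_limit` before any model call. A sketch of the call pattern as used above, assuming `file_content`, `page_one` and `llm_kwargs` are in scope:

```python
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit

TOKEN_LIMIT_PER_FRAGMENT = 2500
# body text is cut into model-sized fragments of at most 2500 tokens each...
paper_fragments = breakdown_text_to_satisfy_token_limit(
    txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
# ...while the first page gets a quarter of that budget, since only the
# metadata/abstract portion before "Introduction" is kept from it
page_one_fragments = breakdown_text_to_satisfy_token_limit(
    txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
```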


@@ -85,10 +85,10 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
        msg = '正常'
        # ** gpt request **
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=i_say,
            inputs_show_user=i_say_show_user,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history=[],
            sys_prompt="总结文章。"
        ) # 带超时倒计时
@@ -106,10 +106,10 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
        msg = '正常'
        # ** gpt request **
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=i_say,
            inputs_show_user=i_say,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history=history,
            sys_prompt="总结文章。"
        ) # 带超时倒计时
@@ -124,7 +124,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
@CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
@@ -138,8 +138,8 @@ def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, histo
    try:
        import pdfminer, bs4
    except:
        report_exception(chatbot, history,
                         a = f"解析项目: {txt}",
                         b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
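
The optional-dependency guard seen in both PDF plugins (try the import, report the exact pip command on failure) generalizes naturally. A hypothetical helper distilled from that pattern, assuming `report_exception` from `toolbox` as in the files above:

```python
def ensure_dependency(chatbot, history, module_name, pip_name):
    # Hypothetical helper, not in the diff: probe the import and, on failure,
    # surface the install command in the UI instead of raising a traceback.
    try:
        __import__(module_name)
        return True
    except ImportError:
        report_exception(chatbot, history,
                         a="解析项目",
                         b=f"导入软件依赖失败。安装方法```pip install --upgrade {pip_name}```。")
        return False
```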


@@ -5,7 +5,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .crazy_utils import read_and_clean_pdf_text
from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url, translate_pdf
-from colorful import *
+from shared_utils.colorful import *
import copy
import os
import math
@@ -48,7 +48,7 @@ def markdown_to_dict(article_content):
@CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    disable_auto_promotion(chatbot)
    # 基本信息:功能、贡献者
@@ -76,8 +76,8 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd')
    success = success or success_mmd
    file_manifest += file_manifest_mmd
    chatbot.append(["文件列表:", ", ".join([e.split('/')[-1] for e in file_manifest])]);
    yield from update_ui( chatbot=chatbot, history=history)
    # 检测输入参数,如没有给定输入参数,直接退出
    if not success:
        if txt == "": txt = '空空如也的输入栏'


@@ -27,7 +27,7 @@ def eval_manim(code):
    class_name = get_class_name(code)
    try:
        time_str = gen_time_str()
        subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
        shutil.move(f'media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{time_str}.mp4')
@@ -36,7 +36,7 @@ def eval_manim(code):
        output = e.output.decode()
        print(f"Command returned non-zero exit status {e.returncode}: {output}.")
        return f"Evaluating python script failed: {e.output}."
    except:
        print('generating mp4 failed')
        return "Generating mp4 failed."
@@ -45,12 +45,12 @@ def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
    matches = re.findall(pattern, reply) # find all code blocks in text
    if len(matches) != 1:
        raise RuntimeError("GPT is not generating proper code.")
    return matches[0].strip('python') # code block
@CatchException
-def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
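
One quirk worth noting in `get_code_block` above: `str.strip('python')` strips any of the characters p/y/t/h/o/n from both ends of the block, not the literal language tag. That happens to work for typical replies but could also eat a leading or trailing letter from that set. A more defensive variant (a sketch, not what the diff ships):

```python
import re

def get_code_block_strict(reply):
    # Match one fenced block with an optional "python" tag and return only the
    # body, so the tag is consumed by the regex rather than via str.strip().
    matches = re.findall(r"```(?:[Pp]ython)?\s*([\s\S]*?)```", reply)
    if len(matches) != 1:
        raise RuntimeError("GPT is not generating proper code.")
    return matches[0].strip()
```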
@@ -58,10 +58,10 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
    """
    # 清空历史,以免输入溢出
    history = []
    # 基本信息:功能、贡献者
    chatbot.append([
@@ -73,24 +73,24 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    # 尝试导入依赖, 如果缺少依赖, 则给出安装建议
    dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面
    if not dep_ok: return
    # 输入
    i_say = f'Generate a animation to show: ' + txt
    demo = ["Here is some examples of manim", examples_of_manim()]
    _, demo = input_clipping(inputs="", history=demo, max_token_limit=2560)
    # 开始
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
        sys_prompt=
        r"Write a animation script with 3blue1brown's manim. "+
        r"Please begin with `from manim import *`. " +
        r"Answer me with a code block wrapped by ```."
    )
    chatbot.append(["开始生成动画", "..."])
    history.extend([i_say, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
    # 将代码转为动画
    code = get_code_block(gpt_say)
    res = eval_manim(code)
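
End to end, 动画生成 prompts for a manim script, extracts the fenced code block, and hands it to `eval_manim`, which renders in a subprocess so a faulty generated script cannot crash the host process. A condensed sketch of the render step, using the paths and class name exactly as they appear in the diff (error handling elided):

```python
import shutil
import subprocess
import sys

def render_generated_scene(class_name, time_str):
    # Run the generated Scene in a fresh interpreter; manim writes the video
    # under media/videos/1080p60/, from where it is moved into gpt_log/.
    subprocess.check_output(
        [sys.executable, '-c',
         f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
    shutil.move(f'media/videos/1080p60/{class_name}.mp4',
                f'gpt_log/{class_name}-{time_str}.mp4')
```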


@@ -15,7 +15,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    file_content, page_one = read_and_clean_pdf_text(file_name) # 尝试按照章节切割PDF
    file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
    page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
@@ -23,7 +23,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
    # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
    paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
    ############################## <第 1 步,从摘要中提取高价值信息,放到history中> ##################################
    final_results = []
    final_results.append(paper_meta)
@@ -42,10 +42,10 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
    i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]} ...."
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                      llm_kwargs, chatbot,
                                                                      history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
                                                                      sys_prompt="Extract the main idea of this section, answer me with Chinese." # 提示
                                                                      )
    iteration_results.append(gpt_say)
    last_iteration_result = gpt_say
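
The fragment loop above is a rolling summary: each request carries only the previous fragment's digest, injected as a synthetic Q/A turn, so the model keeps continuity without the whole paper being resent. Schematically (names as in the diff; `MAX_WORD_TOTAL` and the prompt strings are assumed to be defined just before the loop):

```python
last_iteration_result = ""
iteration_results = []
for i in range(n_fragment):
    NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment  # share the word budget evenly
    # i_say / i_say_show_user built from paper_fragments[i] as shown above
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        i_say, i_say_show_user,
        llm_kwargs, chatbot,
        # a fabricated exchange that carries only the rolling digest forward
        history=["The main idea of the previous section is?", last_iteration_result],
        sys_prompt="Extract the main idea of this section, answer me with Chinese.")
    iteration_results.append(gpt_say)
    last_iteration_result = gpt_say  # the next fragment sees this digest
```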
@@ -63,7 +63,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
@CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    import glob, os
    # 基本信息:功能、贡献者
@@ -76,8 +76,8 @@ def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chat
    try:
        import fitz
    except:
        report_exception(chatbot, history,
                         a = f"解析项目: {txt}",
                         b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


@@ -16,7 +16,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
    chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    if not fast_debug:
        msg = '正常'
        # ** gpt request **
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
@@ -27,7 +27,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
        yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
        if not fast_debug: time.sleep(2)
    if not fast_debug:
        res = write_history_to_file(history)
        promote_file_to_downloadzone(res, chatbot=chatbot)
        chatbot.append(("完成了吗?", res))
@@ -36,7 +36,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
@CatchException
-def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):


@@ -0,0 +1,438 @@
from toolbox import CatchException, update_ui, report_exception
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.plugin_template.plugin_class_template import (
    GptAcademicPluginTemplate,
)
from crazy_functions.plugin_template.plugin_class_template import ArgProperty
# 以下是每类图表的PROMPT
SELECT_PROMPT = """
{subject}
=============
以上是从文章中提取的摘要,将会使用这些摘要绘制图表。请你选择一个合适的图表类型:
1 流程图
2 序列图
3 类图
4 饼图
5 甘特图
6 状态图
7 实体关系图
8 象限提示图
不需要解释原因,仅需要输出单个不带任何标点符号的数字。
"""
# 没有思维导图!!!测试发现模型始终会优先选择思维导图
# 流程图
PROMPT_1 = """
请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
graph TD
P("编程") --> L1("Python")
P("编程") --> L2("C")
P("编程") --> L3("C++")
P("编程") --> L4("Javascipt")
P("编程") --> L5("PHP")
```
"""
# 序列图
PROMPT_2 = """
请你给出围绕“{subject}”的序列图,使用mermaid语法。
mermaid语法举例
```mermaid
sequenceDiagram
participant A as 用户
participant B as 系统
A->>B: 登录请求
B->>A: 登录成功
A->>B: 获取数据
B->>A: 返回数据
```
"""
# 类图
PROMPT_3 = """
请你给出围绕“{subject}”的类图,使用mermaid语法。
mermaid语法举例
```mermaid
classDiagram
Class01 <|-- AveryLongClass : Cool
Class03 *-- Class04
Class05 o-- Class06
Class07 .. Class08
Class09 --> C2 : Where am i?
Class09 --* C3
Class09 --|> Class07
Class07 : equals()
Class07 : Object[] elementData
Class01 : size()
Class01 : int chimp
Class01 : int gorilla
Class08 <--> C2: Cool label
```
"""
# 饼图
PROMPT_4 = """
请你给出围绕“{subject}”的饼图,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
pie title Pets adopted by volunteers
"" : 386
"" : 85
"兔子" : 15
```
"""
# 甘特图
PROMPT_5 = """
请你给出围绕“{subject}”的甘特图,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
gantt
title "项目开发流程"
dateFormat YYYY-MM-DD
section "设计"
"需求分析" :done, des1, 2024-01-06,2024-01-08
"原型设计" :active, des2, 2024-01-09, 3d
"UI设计" : des3, after des2, 5d
section "开发"
"前端开发" :2024-01-20, 10d
"后端开发" :2024-01-20, 10d
```
"""
# 状态图
PROMPT_6 = """
请你给出围绕“{subject}”的状态图,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
stateDiagram-v2
[*] --> "Still"
"Still" --> [*]
"Still" --> "Moving"
"Moving" --> "Still"
"Moving" --> "Crash"
"Crash" --> [*]
```
"""
# 实体关系图
PROMPT_7 = """
请你给出围绕“{subject}”的实体关系图,使用mermaid语法。
mermaid语法举例
```mermaid
erDiagram
CUSTOMER ||--o{ ORDER : places
ORDER ||--|{ LINE-ITEM : contains
CUSTOMER {
string name
string id
}
ORDER {
string orderNumber
date orderDate
string customerID
}
LINE-ITEM {
number quantity
string productID
}
```
"""
# 象限提示图
PROMPT_8 = """
请你给出围绕“{subject}”的象限图,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
graph LR
A["Hard skill"] --> B("Programming")
A["Hard skill"] --> C("Design")
D["Soft skill"] --> E("Coordination")
D["Soft skill"] --> F("Communication")
```
"""
# 思维导图
PROMPT_9 = """
{subject}
==========
请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,注意需要使用双引号将内容括起来。
mermaid语法举例
```mermaid
mindmap
root((mindmap))
("Origins")
("Long history")
::icon(fa fa-book)
("Popularisation")
("British popular psychology author Tony Buzan")
::icon(fa fa-user)
("Research")
("On effectiveness<br/>and features")
::icon(fa fa-search)
("On Automatic creation")
::icon(fa fa-robot)
("Uses")
("Creative techniques")
::icon(fa fa-lightbulb-o)
("Strategic planning")
::icon(fa fa-flag)
("Argument mapping")
::icon(fa fa-comments)
("Tools")
("Pen and paper")
::icon(fa fa-pencil)
("Mermaid")
::icon(fa fa-code)
```
"""
def 解析历史输入(history, llm_kwargs, file_manifest, chatbot, plugin_kwargs):
    ############################## <第 0 步,切割输入> ##################################
    # 借用PDF切割中的函数对文本进行切割
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    txt = str(history).encode("utf-8", "ignore").decode()  # avoid reading non-utf8 chars
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    txt = breakdown_text_to_satisfy_token_limit(
        txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs["llm_model"]
    )
    ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
    results = []
    MAX_WORD_TOTAL = 4096
    n_txt = len(txt)
    last_iteration_result = "从以下文本中提取摘要。"
    if n_txt >= 20:
        print("文章极长,不能达到预期效果")
    for i in range(n_txt):
        NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
        i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            i_say,
            i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
            llm_kwargs,
            chatbot,
            history=["The main content of the previous section is?", last_iteration_result],  # 迭代上一次的结果
            sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese.",  # 提示
        )
        results.append(gpt_say)
        last_iteration_result = gpt_say
    ############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
    gpt_say = str(plugin_kwargs)  # 将图表类型参数赋值为插件参数
    results_txt = "\n".join(results)  # 合并摘要
    if gpt_say not in ["1", "2", "3", "4", "5", "6", "7", "8", "9"]:  # 如插件参数不正确则使用对话模型判断
        i_say_show_user = "接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制"
        gpt_say = "[Local Message] 收到。"  # 用户提示
        chatbot.append([i_say_show_user, gpt_say])
        yield from update_ui(chatbot=chatbot, history=[])  # 更新UI
        i_say = SELECT_PROMPT.format(subject=results_txt)
        i_say_show_user = f'请判断适合使用的流程图类型,其中数字对应关系为:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图。由于不管提供文本是什么,模型大概率认为"思维导图"最合适,因此思维导图仅能通过参数调用。'
        for i in range(3):
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs,
                chatbot=chatbot,
                history=[],
                sys_prompt="",
            )
            if gpt_say in ["1", "2", "3", "4", "5", "6", "7", "8", "9"]:  # 判断返回是否正确
                break
        if gpt_say not in ["1", "2", "3", "4", "5", "6", "7", "8", "9"]:
            gpt_say = "1"
    ############################## <第 3 步,根据选择的图表类型绘制图表> ##################################
    if gpt_say == "1":
        i_say = PROMPT_1.format(subject=results_txt)
    elif gpt_say == "2":
        i_say = PROMPT_2.format(subject=results_txt)
    elif gpt_say == "3":
        i_say = PROMPT_3.format(subject=results_txt)
    elif gpt_say == "4":
        i_say = PROMPT_4.format(subject=results_txt)
    elif gpt_say == "5":
        i_say = PROMPT_5.format(subject=results_txt)
    elif gpt_say == "6":
        i_say = PROMPT_6.format(subject=results_txt)
    elif gpt_say == "7":
        i_say = PROMPT_7.replace("{subject}", results_txt)  # 由于实体关系图用到了{}符号
    elif gpt_say == "8":
        i_say = PROMPT_8.format(subject=results_txt)
    elif gpt_say == "9":
        i_say = PROMPT_9.format(subject=results_txt)
    i_say_show_user = "请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。"
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say,
        inputs_show_user=i_say_show_user,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot,
        history=[],
        sys_prompt="",
    )
    history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新
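
The if/elif ladder inside 解析历史输入 maps the chosen digit onto a template. An equivalent table-driven form, shown here only as a design sketch (the diff keeps the ladder), makes the one special `.replace` case explicit:

```python
# Hypothetical alternative to the elif ladder; not part of the diff.
PROMPT_TABLE = {"1": PROMPT_1, "2": PROMPT_2, "3": PROMPT_3,
                "4": PROMPT_4, "5": PROMPT_5, "6": PROMPT_6,
                "7": PROMPT_7, "8": PROMPT_8, "9": PROMPT_9}

def fill_prompt(choice, results_txt):
    template = PROMPT_TABLE[choice]
    if choice == "7":  # erDiagram templates contain literal braces, see above
        return template.replace("{subject}", results_txt)
    return template.format(subject=results_txt)
```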
@CatchException
def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    import os

    # 基本信息:功能、贡献者
    chatbot.append(
        [
            "函数插件功能?",
            "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
            \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918",
        ]
    )
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
    if os.path.exists(txt):  # 如输入区无内容则直接解析历史记录
        from crazy_functions.pdf_fns.parse_word import extract_text_from_files

        file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
    else:
        file_exist = False
        excption = ""
        file_manifest = []

    if excption != "":
        if excption == "word":
            report_exception(
                chatbot,
                history,
                a=f"解析项目: {txt}",
                b="找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。",
            )
        elif excption == "pdf":
            report_exception(
                chatbot,
                history,
                a=f"解析项目: {txt}",
                b="导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。",
            )
        elif excption == "word_pip":
            report_exception(
                chatbot,
                history,
                a=f"解析项目: {txt}",
                b="导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。",
            )
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
    else:
        if not file_exist:
            history.append(txt)  # 如输入区不是文件则将输入区内容加入历史记录
            i_say_show_user = "首先你从历史记录中提取摘要。"
            gpt_say = "[Local Message] 收到。"  # 用户提示
            chatbot.append([i_say_show_user, gpt_say])
            yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
            yield from 解析历史输入(history, llm_kwargs, file_manifest, chatbot, plugin_kwargs)
        else:
            file_num = len(file_manifest)
            for i in range(file_num):  # 依次处理文件
                i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"
                gpt_say = "[Local Message] 收到。"  # 用户提示
                chatbot.append([i_say_show_user, gpt_say])
                yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
                history = []  # 如输入区内容为文件则清空历史记录
                history.append(final_result[i])
                yield from 解析历史输入(history, llm_kwargs, file_manifest, chatbot, plugin_kwargs)
class Mermaid_Gen(GptAcademicPluginTemplate):
    def __init__(self):
        pass

    def define_arg_selection_menu(self):
        gui_definition = {
            "Type_of_Mermaid": ArgProperty(
                title="绘制的Mermaid图表类型",
                options=[
                    "由LLM决定", "流程图", "序列图", "类图", "饼图",
                    "甘特图", "状态图", "实体关系图", "象限提示图", "思维导图",
                ],
                default_value="由LLM决定",
                description="选择'由LLM决定'时将由对话模型判断适合的图表类型(不包括思维导图),选择其他类型时将直接绘制指定的图表类型。",
                type="dropdown",
            ).model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        options = [
            "由LLM决定", "流程图", "序列图", "类图", "饼图",
            "甘特图", "状态图", "实体关系图", "象限提示图", "思维导图",
        ]
        plugin_kwargs = options.index(plugin_kwargs['Type_of_Mermaid'])
        yield from 生成多种Mermaid图表(
            txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request
        )
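
`execute` bridges the dropdown back onto the numeric protocol that 解析历史输入 expects: the selected label's index is later stringified via `str(plugin_kwargs)` inside that function, so index 0 ("由LLM决定") yields "0", which falls outside "1"–"9" and triggers model-based selection, while "思维导图" maps to "9", the one type the model is never allowed to pick on its own. The mapping, spelled out with values taken from the code above:

```python
options = ["由LLM决定", "流程图", "序列图", "类图", "饼图",
           "甘特图", "状态图", "实体关系图", "象限提示图", "思维导图"]
assert options.index("由LLM决定") == 0  # "0": not in "1".."9" -> the LLM picks the type
assert options.index("流程图") == 1     # "1": PROMPT_1 (flow chart)
assert options.index("思维导图") == 9   # "9": PROMPT_9, reachable only via this menu
```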


@@ -9,11 +9,11 @@ install_msg ="""
3. python -m pip install unstructured[all-docs] --upgrade
4. python -c 'import nltk; nltk.download("punkt")'
"""
@CatchException
-def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -21,7 +21,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
    """
    history = [] # 清空历史,以免输入溢出
@@ -56,7 +56,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
        chatbot.append(["没有找到任何可读取文件", "当前支持的格式包括: txt, md, docx, pptx, pdf, json等"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    # < -------------------预热文本向量化模组--------------- >
    chatbot.append(['<br/>'.join(file_manifest), "正在预热文本向量化模组, 如果是第一次运行, 将消耗较长时间下载中文向量化模型..."])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -84,7 +84,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
@CatchException
-def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
+def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request=-1):
    # resolve deps
    try:
        # from zh_langchain import construct_vector_store
@@ -109,8 +109,8 @@ def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    chatbot.append((txt, f'[知识库 {kai_id}] ' + prompt))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=prompt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt=system_prompt
    )
    history.extend((prompt, gpt_say))


@@ -40,10 +40,10 @@ def scrape_text(url, proxies) -> str:
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
        'Content-Type': 'text/plain',
    }
    try:
        response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
        if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
    except:
        return "无法连接到该网页"
    soup = BeautifulSoup(response.text, "html.parser")
    for script in soup(["script", "style"]):
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
    return text
@CatchException
-def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,10 +63,10 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
    """
    history = [] # 清空历史,以免输入溢出
    chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
                    "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR"))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
@@ -91,13 +91,13 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    # ------------- < 第3步ChatGPT综合 > -------------
    i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
    i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
        inputs=i_say,
        history=history,
        max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
    )
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
    )
    chatbot[-1] = (i_say, gpt_say)
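
Before the synthesis call, the prompt plus the scraped search results are clipped to three quarters of the model's context window, trimming the longest history entries first (per the inline comment), which leaves the remaining quarter for the answer. The budget arithmetic, restated with `model_info` as imported in this file:

```python
max_token = model_info[llm_kwargs['llm_model']]['max_token']
# keep prompt + history within 3/4 of the window; the rest is for the reply
i_say, history = input_clipping(inputs=i_say, history=history,
                                max_token_limit=max_token * 3 // 4)
```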


@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
    return text
@CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,7 +63,7 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
    """
    history = [] # 清空历史,以免输入溢出
    chatbot.append((f"请结合互联网信息回答以下问题:{txt}",


@@ -33,7 +33,7 @@ explain_msg = """
- 「请调用插件,解析python源代码项目,代码我刚刚打包拖到上传区了」
- 「请问Transformer网络的结构是怎样的?」
2. 您可以打开插件下拉菜单以了解本项目的各种能力。
3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词,您的意图可以被识别的更准确。
@@ -67,7 +67,7 @@ class UserIntention(BaseModel):
def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=txt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt=system_prompt
    )
    chatbot[-1] = [txt, gpt_say]
@@ -104,7 +104,7 @@ def analyze_intention_with_simple_rules(txt):
@CatchException
-def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    disable_auto_promotion(chatbot=chatbot)
    # 获取当前虚空终端状态
    state = VoidTerminalState.get_state(chatbot)
@@ -115,13 +115,13 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    if is_the_upload_folder(txt):
        state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
        appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
    if is_certain or (state.has_provided_explaination):
        # 如果意图明确,跳过提示环节
        state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
        state.unlock_plugin(chatbot=chatbot)
        yield from update_ui(chatbot=chatbot, history=history)
-        yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+        yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
        return
    else:
        # 如果意图模糊,提示
@@ -133,7 +133,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
-def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = []
    chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -152,7 +152,7 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    analyze_res = run_gpt_fn(inputs, "")
    try:
        user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
        lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
    except JsonStringError as e:
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
@@ -161,7 +161,7 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
        pass
    yield from update_ui_lastest_msg(
        lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
        chatbot=chatbot, history=history, delay=0)
    # 用户意图: 修改本项目的配置


@@ -12,6 +12,12 @@ class PaperFileGroup():
        self.sp_file_index = []
        self.sp_file_tag = []
+        # count_token
+        from request_llms.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+        self.get_token_num = get_token_num
    def run_file_split(self, max_token_limit=1900):
        """
        将长文本分离开来
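
The added `count_token` block equips `PaperFileGroup` with a tokenizer-backed counter, so `run_file_split` can measure fragments in gpt-3.5-turbo tokens rather than characters. A usage sketch, assuming the class as patched above and a default constructor:

```python
group = PaperFileGroup()
n = group.get_token_num("print('hello world')")  # counts tokens, not characters
group.run_file_split(max_token_limit=1900)       # split any file exceeding the limit
```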
@@ -54,7 +60,7 @@ def parseNotebook(filename, enable_markdown=1):
        Code += f"This is {idx+1}th code block: \n"
        Code += code+"\n"
    return Code
def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
@@ -109,7 +115,7 @@ def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@CatchException
-def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    chatbot.append([
        "函数插件功能?",
        "对IPynb文件进行解析。Contributor: codycjy."])


@@ -1,6 +1,7 @@
from toolbox import update_ui, promote_file_to_downloadzone, disable_auto_promotion
from toolbox import CatchException, report_exception, write_history_to_file
-from .crazy_utils import input_clipping
+from shared_utils.fastapi_server import validate_path_safety
+from crazy_functions.crazy_utils import input_clipping
def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
    import os, copy
@@ -82,12 +83,13 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
        inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
        history=this_iteration_history_feed, # 迭代之前的分析
        sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
-    summary = "请用一句话概括这些文件的整体功能"
+    diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
+    summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
    summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=summary,
        inputs_show_user=summary,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot,
        history=[i_say, result], # 迭代之前的分析
        sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
@@ -104,9 +106,12 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
    chatbot.append(("完成了吗?", res))
    yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面
+def make_diagram(this_iteration_files, result, this_iteration_history_feed):
+    from crazy_functions.diagram_fns.file_tree import build_file_tree_mermaid_diagram
+    return build_file_tree_mermaid_diagram(this_iteration_history_feed[0::2], this_iteration_history_feed[1::2], "项目示意图")
@CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob
    file_manifest = [f for f in glob.glob('./*.py')] + \
@@ -119,11 +124,12 @@ def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
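
Every project-parsing entry point now inserts `validate_path_safety` between accepting the user path and globbing it, binding filesystem access to the requesting user via `chatbot.get_user()`; presumably an out-of-sandbox path raises, and the `@CatchException` wrapper turns that into a UI report. The guard, as it repeats below for each language variant:

```python
if os.path.exists(txt):
    project_folder = txt
    # reject paths outside this user's allowed area before any glob/read happens
    validate_path_safety(project_folder, chatbot.get_user())
else:
    if txt == "": txt = '空空如也的输入栏'
    report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
```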
@@ -137,11 +143,12 @@ def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析Matlab项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -155,11 +162,12 @@ def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -175,11 +183,12 @@ def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, his
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -197,11 +206,12 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system
@CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
@@ -219,11 +229,12 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
@CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
@@ -248,11 +259,12 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
@CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
@@ -269,11 +281,12 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
@@ -289,11 +302,12 @@ def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -311,11 +325,12 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
@CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = [] # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -331,7 +346,7 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
@CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    txt_pattern = plugin_kwargs.get("advanced_arg")
    txt_pattern = txt_pattern.replace(",", ",")  # 将中文逗号替换为英文逗号
    # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
@@ -341,15 +356,19 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
    pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
    pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
    # 将要忽略匹配的文件名(例如: ^README.md)
-    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.")  # 移除左边通配符,移除右侧逗号,转义点号
+                           for _ in txt_pattern.split(" ")  # 以空格分割
+                           if (_ != "" and _.strip().startswith("^") and not _.strip().startswith("^*."))  # ^开始,但不是^*.开始
+                           ]
    # 生成正则表达式
-    pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+    pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
    pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
    history.clear()
    import glob, os, re
    if os.path.exists(txt):
        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")


@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
import datetime
@CatchException
-def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -10,7 +10,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
    """
    history = [] # 清空历史,以免输入溢出
    MULTI_QUERY_LLM_MODELS = get_conf('MULTI_QUERY_LLM_MODELS')
@@ -20,8 +20,8 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
    llm_kwargs['llm_model'] = MULTI_QUERY_LLM_MODELS # 支持任意数量的llm接口,用&符号分隔
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=txt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt=system_prompt,
        retry_times_at_unknown_error=0
    )
@@ -32,7 +32,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
 @CatchException
-def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
+    user_request    当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
@@ -52,8 +52,8 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
         inputs=txt, inputs_show_user=txt,
         llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
         sys_prompt=system_prompt,
         retry_times_at_unknown_error=0
     )

查看文件

@@ -39,7 +39,7 @@ class AsyncGptTask():
             try:
                 MAX_TOKEN_ALLO = 2560
                 i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
                 gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
                                                                 observe_window=observe_window[index], console_slience=True)
             except ConnectionAbortedError as token_exceed_err:
                 print('至少一个线程任务Token溢出而失败', e)
@@ -120,7 +120,7 @@ class InterviewAssistant(AliyunASR):
             yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
             self.plugin_wd.feed()
             if self.event_on_result_chg.is_set():
                 # called when some words have finished
                 self.event_on_result_chg.clear()
                 chatbot[-1] = list(chatbot[-1])
@@ -151,7 +151,7 @@ class InterviewAssistant(AliyunASR):
                 # add gpt task 创建子线程请求gpt,避免线程阻塞
                 history = chatbot2history(chatbot)
                 self.agt.add_async_gpt_task(self.buffered_sentence, len(chatbot)-1, llm_kwargs, history, system_prompt)
                 self.buffered_sentence = ""
             chatbot.append(["[ 请讲话 ]", "[ 正在等您说完问题 ]"])
             yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -166,7 +166,7 @@ class InterviewAssistant(AliyunASR):
 @CatchException
-def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # pip install -U openai-whisper
     chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

查看文件

@@ -44,7 +44,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
 @CatchException
-def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):

查看文件

@@ -20,10 +20,10 @@ def get_meta_information(url, chatbot, history):
     proxies = get_conf('proxies')
     headers = {
         'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
         'Accept-Encoding': 'gzip, deflate, br',
         'Accept-Language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7',
         'Cache-Control':'max-age=0',
         'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
         'Connection': 'keep-alive'
     }
     try:
@@ -95,7 +95,7 @@ def get_meta_information(url, chatbot, history):
             )
             try: paper = next(search.results())
             except: paper = None
             is_match = paper is not None and string_similar(title, paper.title) > 0.90
             # 如果在Arxiv上匹配失败,检索文章的历史版本的题目
@@ -132,7 +132,7 @@ def get_meta_information(url, chatbot, history):
     return profile
 @CatchException
-def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     disable_auto_promotion(chatbot=chatbot)
     # 基本信息:功能、贡献者
     chatbot.append([
@@ -146,8 +146,8 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         import math
         from bs4 import BeautifulSoup
     except:
         report_exception(chatbot, history,
                          a = f"解析项目: {txt}",
                          b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4 arxiv```。")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
@@ -163,7 +163,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     if len(meta_paper_info_list[:batchsize]) > 0:
         i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
                 "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
                 f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
         inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
@@ -175,11 +175,11 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         history.extend([ f"第{batch+1}批", gpt_say ])
         meta_paper_info_list = meta_paper_info_list[batchsize:]
     chatbot.append(["状态?",
                     "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
     msg = '正常'
     yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
     path = write_history_to_file(history)
     promote_file_to_downloadzone(path, chatbot=chatbot)
     chatbot.append(("完成了吗?", path));
     yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面

查看文件

@@ -11,7 +11,7 @@ import os
 @CatchException
-def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     if txt:
         show_say = txt
         prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
@@ -32,7 +32,7 @@ def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
 @CatchException
-def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     chatbot.append(['清除本地缓存数据', '执行中. 删除数据'])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

查看文件

@@ -1,27 +1,59 @@
 from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime
+
+####################################################################################################################
+# Demo 1: 一个非常简单的插件 #########################################################################################
+####################################################################################################################
+
+高阶功能模板函数示意图 = f"""
+```mermaid
+flowchart TD
+    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
+    subgraph 函数调用["函数调用过程"]
+        AA["输入栏用户输入的文本(txt)"] --> BB["gpt模型参数(llm_kwargs)"]
+        BB --> CC["插件模型参数(plugin_kwargs)"]
+        CC --> DD["对话显示框的句柄(chatbot)"]
+        DD --> EE["对话历史(history)"]
+        EE --> FF["系统提示词(system_prompt)"]
+        FF --> GG["当前用户信息(web_port)"]
+
+        A["开始(查询5天历史事件)"]
+        A --> B["获取当前月份和日期"]
+        B --> C["生成历史事件查询提示词"]
+        C --> D["调用大模型"]
+        D --> E["更新界面"]
+        E --> F["记录历史"]
+        F --> |"下一天"| B
+    end
+```
+"""
+
 @CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, num_day=5):
     """
-    # 高阶功能模板函数示意图:https://mermaid.live/edit#pako:eNptk1tvEkEYhv8KmattQpvlvOyFCcdeeaVXuoYssBwie8gyhCIlqVoLhrbbtAWNUpEGUkyMEDW2Fmn_DDOL_8LZHdOwxrnamX3f7_3mmZk6yKhZCfAgV1KrmYKoQ9fDuKC4yChX0nld1Aou1JzjznQ5fWmejh8LYHW6vG2a47YAnlCLNSIRolnenKBXI_zRIBrcuqRT890u7jZx7zMDt-AaMbnW1--5olGiz2sQjwfoQxsZL0hxplSSU0-rop4vrzmKR6O2JxYjHmwcL2Y_HDatVMkXlf86YzHbGY9bO5j8XE7O8Nsbc3iNB3ukL2SMcH-XIQBgWoVOZzxuOxOJOyc63EPGV6ZQLENVrznViYStTiaJ2vw2M2d9bByRnOXkgCnXylCSU5quyto_IcmkbdvctELmJ-j1ASW3uB3g5xOmKqVTmqr_Na3AtuS_dtBFm8H90XJyHkDDT7S9xXWb4HGmRChx64AOL5HRpUm411rM5uh4H78Z4V7fCZzytjZz2seto9XaNPFue07clLaVZF8UNLygJ-VES8lah_n-O-5Ozc7-77NzJ0-K0yr0ZYrmHdqAk50t2RbA4qq9uNohBASw7YpSgaRkLWCCAtxAlnRZLGbJba9bPwUAC5IsCYAnn1kpJ1ZKUACC0iBSsQLVBzUlA3ioVyQ3qGhZEUrxokiehAz4nFgqk1VNVABfB1uAD_g2_AGPl-W8nMcbCvsDblADfNCz4feyobDPy3rYEMtxwYYbPFNVUoHdCPmDHBv2cP4AMfrCbiBli-Q-3afv0X6WdsIjW2-10fgDy1SAig
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
     plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
+    user_request    当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
-    chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!"))
+    chatbot.append((
+        "您正在调用插件:历史上的今天",
+        "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!" + 高阶功能模板函数示意图))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
-    for i in range(5):
+    for i in range(int(num_day)):
         currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
         currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
         i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。'
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
             inputs=i_say, inputs_show_user=i_say,
             llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
             sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。"
         )
         chatbot[-1] = (i_say, gpt_say)
@@ -31,6 +63,56 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
+
+####################################################################################################################
+# Demo 2: 一个带二级菜单的插件 #######################################################################################
+####################################################################################################################
+
+from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
+
+class Demo_Wrap(GptAcademicPluginTemplate):
+    def __init__(self):
+        """
+        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
+        """
+        pass
+
+    def define_arg_selection_menu(self):
+        """
+        定义插件的二级选项菜单
+        """
+        gui_definition = {
+            "num_day":
+                ArgProperty(title="日期选择", options=["仅今天", "未来3天", "未来5天"], default_value="未来3天", description="", type="dropdown").model_dump_json(),
+        }
+        return gui_definition
+
+    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+        """
+        执行插件
+        """
+        num_day = plugin_kwargs["num_day"]
+        if num_day == "仅今天": num_day = 1
+        if num_day == "未来3天": num_day = 3
+        if num_day == "未来5天": num_day = 5
+        yield from 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, num_day=num_day)
+
+
+####################################################################################################################
+# Demo 3: 绘制脑图的Demo ############################################################################################
+####################################################################################################################
+
 PROMPT = """
 请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
 ```mermaid
@@ -43,7 +125,7 @@ graph TD
 ```
 """
 @CatchException
-def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -51,20 +133,20 @@ def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    web_port        当前软件运行的端口号
+    user_request    当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     if txt == "": txt = "空白的输入栏" # 调皮一下
     i_say_show_user = f'请绘制有关“{txt}”的逻辑关系图。'
     i_say = PROMPT.format(subject=txt)
     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
         inputs=i_say,
         inputs_show_user=i_say_show_user,
         llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
         sys_prompt=""
     )
     history.append(i_say); history.append(gpt_say)

查看文件

@@ -1,12 +1,12 @@
 ## ===================================================
 # docker-compose.yml
 ## ===================================================
 # 1. 请在以下方案中选择任意一种,然后删除其他的方案
 # 2. 修改你选择的方案中的environment环境变量,详情请见github wiki或者config.py
 # 3. 选择一种暴露服务端口的方法,并对相应的配置做出修改:
 #    方法1: 适用于Linux,很方便,可惜windows不支持(与宿主的网络融合为一体,这个是默认配置)
 #       network_mode: "host"
 #    方法2: 适用于所有系统包括Windows和MacOS(端口映射,把容器的端口映射到宿主的端口。注意您需要先删除network_mode: "host",再追加以下内容)
 #       ports:
 #         - "12345:12345"  # 注意!12345必须与WEB_PORT环境变量相互对应
 # 4. 最后`docker-compose up`运行
@@ -25,7 +25,7 @@
 ## ===================================================
 ## ===================================================
 ## 方案零 部署项目的全部能力(这个是包含cuda和latex的大型镜像。如果您网速慢、硬盘小或没有显卡,则不推荐使用这个)
 ## ===================================================
 version: '3'
 services:
@@ -63,10 +63,10 @@ services:
       #           count: 1
       #           capabilities: [gpu]
     # WEB_PORT暴露方法1: 适用于Linux(与宿主的网络融合)
     network_mode: "host"
     # WEB_PORT暴露方法2: 适用于所有系统(端口映射)
     #   ports:
     #     - "12345:12345"  # 12345必须与WEB_PORT相互对应
@@ -75,10 +75,8 @@ services:
       bash -c "python3 -u main.py"
 ## ===================================================
 ## 方案一 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
 ## ===================================================
 version: '3'
 services:
@@ -97,16 +95,16 @@ services:
       # DEFAULT_WORKER_NUM: ' 10 '
       # AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"
-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"
 ### ===================================================
 ### 方案二 如果需要运行ChatGLM + Qwen + MOSS等本地模型
 ### ===================================================
 version: '3'
 services:
@@ -130,8 +128,10 @@ services:
     devices:
       - /dev/nvidia0:/dev/nvidia0
-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"
@@ -139,8 +139,9 @@ services:
     # command: >
     #   bash -c "pip install -r request_llms/requirements_qwen.txt && python3 -u main.py"
 ### ===================================================
 ### 方案三 如果需要运行ChatGPT + LLAMA + 盘古 + RWKV本地模型
 ### ===================================================
 version: '3'
 services:
@@ -164,16 +165,16 @@ services:
     devices:
       - /dev/nvidia0:/dev/nvidia0
-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"
-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       python3 -u main.py
 ## ===================================================
 ## 方案四 ChatGPT + Latex
 ## ===================================================
 version: '3'
 services:
@@ -190,16 +191,16 @@ services:
       DEFAULT_WORKER_NUM: ' 10 '
       WEB_PORT: ' 12303 '
-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"
-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"
 ## ===================================================
 ## 方案五 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
 ## ===================================================
 version: '3'
 services:
@@ -223,9 +224,9 @@ services:
       # (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
      # (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '
-    # 与宿主的网络融合
+    # 「WEB_PORT暴露方法1: 适用于Linux」与宿主的网络融合
     network_mode: "host"
-    # 不使用代理网络拉取最新代码
+    # 启动命令
     command: >
       bash -c "python3 -u main.py"

查看文件

@@ -28,6 +28,8 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
 RUN python3 -m pip install nougat-ocr
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -36,6 +36,9 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
 RUN python3 -m pip install nougat-ocr
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -21,7 +21,8 @@ RUN python3 -m pip install -r request_llms/requirements_qwen.txt
 RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -23,6 +23,9 @@ RUN python3 -m pip install -r request_llms/requirements_jittorllms.txt -i https:
 # 下载JittorLLMs
 RUN git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llms/jittorllms
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 禁用缓存,确保更新代码
 ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
 RUN git pull

查看文件

@@ -12,6 +12,8 @@ COPY . .
 # 安装依赖
 RUN pip3 install -r requirements.txt
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -13,7 +13,10 @@ COPY . .
 RUN pip3 install -r requirements.txt
 # 安装语音插件的额外依赖
-RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+RUN pip3 install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -25,6 +25,9 @@ COPY . .
 # 安装依赖
 RUN pip3 install -r requirements.txt
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -19,6 +19,9 @@ RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cp
 RUN pip3 install unstructured[all-docs] --upgrade
 RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

查看文件

@@ -0,0 +1,189 @@
# 实现带二级菜单的插件
## 一、如何写带有二级菜单的插件
1. 声明一个 `Class`,继承父类 `GptAcademicPluginTemplate`
```python
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate
from crazy_functions.plugin_template.plugin_class_template import ArgProperty
class Demo_Wrap(GptAcademicPluginTemplate):
def __init__(self): ...
```
2. 声明二级菜单中需要的变量,覆盖父类的`define_arg_selection_menu`函数。
```python
class Demo_Wrap(GptAcademicPluginTemplate):
...
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="ArxivID", description="输入Arxiv的ID或者网址", default_value="", type="string").model_dump_json(),
"advanced_arg":
ArgProperty(title="额外的翻译提示词",
description=r"如果有必要, 请在此处给出自定义翻译命令",
default_value="", type="string").model_dump_json(),
"allow_cache":
ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="无", type="dropdown").model_dump_json(),
}
return gui_definition
...
```
> [!IMPORTANT]
>
> ArgProperty 中每个条目对应一个参数:`type == "string"` 时使用文本框,`type == "dropdown"` 时使用下拉菜单。
>
> 注意:`main_input` 和 `advanced_arg`是两个特殊的参数。`main_input`会自动与界面右上角的`输入区`进行同步,而`advanced_arg`会自动与界面右下角的`高级参数输入区`同步。除此之外,参数名称可以任意选取。其他细节详见`crazy_functions/plugin_template/plugin_class_template.py`。
3. 编写插件程序,覆盖父类的`execute`函数。
例如:
```python
class Demo_Wrap(GptAcademicPluginTemplate):
...
...
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
plugin_kwargs字典中会包含用户的选择,与上述 `define_arg_selection_menu` 一一对应
"""
allow_cache = plugin_kwargs["allow_cache"]
advanced_arg = plugin_kwargs["advanced_arg"]
if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
```
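为了直观,这里补充一个示意性的 `plugin_kwargs` 取值(键名与上文 `define_arg_selection_menu` 的定义一一对应,具体取值为虚构示例):
```python
# 示意:用户在二级菜单中点击执行后,execute 收到的 plugin_kwargs 大致形如(取值均为虚构举例)
plugin_kwargs = {
    "main_input": "2402.00001",   # 与界面右上角“输入区”自动同步(此处的ArxivID为虚构)
    "advanced_arg": "",           # 与界面右下角“高级参数输入区”自动同步
    "allow_cache": "允许缓存",    # 下拉菜单的当前选项
}
```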
4. 注册插件
将以下条目插入`crazy_functional.py`即可。注意,与旧插件不同的是,`Function`键值应该为None,而`Class`键值为上述插件的类名称(`Demo_Wrap`)。
```
"新插件": {
"Group": "学术",
"Color": "stop",
"AsButton": True,
"Info": "插件说明",
"Function": None,
"Class": Demo_Wrap,
},
```
5. 已经结束了,启动程序测试吧~
## 二、背后的原理(需要JavaScript的前置知识)
### (I) 首先介绍三个Gradio官方没有的重要前端函数
主javascript程序`common.js`中有三个Gradio官方没有的重要API
1. `get_data_from_gradio_component`
这个函数可以获取任意gradio组件的当前值,例如textbox中的字符,dropdown中的当前选项,chatbot当前的对话等等。调用方法举例
```javascript
// 获取当前的对话
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
```
2. `get_gradio_component`
有时候我们不仅需要gradio组件的当前值,还需要它的label值、是否隐藏、下拉菜单其他可选选项等等,而通过这个函数可以直接获取这个组件的句柄。举例
```javascript
// 获取下拉菜单组件的句柄
var model_sel = await get_gradio_component("elem_model_sel");
// 获取它的所有属性,包括其所有可选选项
console.log(model_sel.props)
```
3. `push_data_to_gradio_component`
这个函数可以将数据推回gradio组件,例如textbox中的字符,dropdown中的当前选项等等。调用方法举例
```javascript
// 修改一个按钮上面的文本
push_data_to_gradio_component("btnName", "gradio_element_id", "string");
// 隐藏一个组件
push_data_to_gradio_component({ visible: false, __type__: 'update' }, "plugin_arg_menu", "obj");
// 修改组件label
push_data_to_gradio_component({ label: '新label的值', __type__: 'update' }, "gpt-chatbot", "obj")
// 第一个参数是value,
// - 可以是字符串调整textbox的文本,按钮的文本
// - 还可以是 { visible: false, __type__: 'update' } 这样的字典调整visible, label, choices
// 第二个参数是elem_id
// 第三个参数是"string" 或者 "obj"
```
### (II) 从点击插件到执行插件的逻辑过程
简述:程序启动时,把每个插件的二级菜单编码为BASE64,存储在用户的浏览器前端;用户调用对应功能时,会按照插件的BASE64编码,将平时隐藏的菜单有选择性地显示出来。这个“JSON → BASE64 → 前端还原”的往返过程可参考下面的小例子。
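下面用一段极简的 Python 演示这种编码/解码的原理(菜单内容为虚构示例,仅示意机制本身,并非项目源码):
```python
import base64, json

# 后端:把二级菜单定义序列化为JSON,再编码为BASE64(对应下文步骤1)
menu = {"num_day": {"title": "日期选择", "type": "dropdown", "options": ["仅今天", "未来3天", "未来5天"]}}
encoded = base64.b64encode(json.dumps(menu).encode('utf-8')).decode('utf-8')

# 前端:解码即可无损还原出完整的菜单定义(实际项目中由common.js的generate_menu完成,此处仅示意)
decoded = json.loads(base64.b64decode(encoded).decode('utf-8'))
assert decoded == menu
```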
1. 启动阶段(主函数 `main.py` 中),遍历每个插件,生成二级菜单的BASE64编码,存入变量`register_advanced_plugin_init_code_arr`。
```python
def get_js_code_for_generating_menu(self, btnName):
define_arg_selection = self.define_arg_selection_menu()
DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')
```
2. 用户加载阶段(主javascript程序`common.js`中),浏览器加载`register_advanced_plugin_init_code_arr`,存入本地的字典`advanced_plugin_init_code_lib`:
```javascript
advanced_plugin_init_code_lib = {}
function register_advanced_plugin_init_code(key, code){
advanced_plugin_init_code_lib[key] = code;
}
```
3. 用户点击插件按钮(主函数 `main.py` 中)时,仅执行以下javascript代码,唤醒隐藏的二级菜单(生成菜单的代码在`common.js`中的`generate_menu`函数上):
```javascript
// 生成高级插件的选择菜单
function run_advanced_plugin_launch_code(key){
generate_menu(advanced_plugin_init_code_lib[key], key);
}
function on_flex_button_click(key){
run_advanced_plugin_launch_code(key);
}
```
```python
click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_advanced_plugin_launch_code("{k}")""")
```
4. 当用户点击二级菜单的执行键时,通过javascript脚本模拟点击一个隐藏按钮,触发后续程序(`common.js`中的`execute_current_pop_up_plugin`会把二级菜单中的参数缓存到`invisible_current_pop_up_plugin_arg_final`,然后模拟点击`invisible_callback_btn_for_plugin_exe`按钮)。隐藏按钮的定义在主函数 `main.py` 中,该隐藏按钮会最终触发`route_switchy_bt_with_arg`函数(定义于`themes/gui_advanced_plugin_class.py`)。
```python
click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg, [
gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order),
new_plugin_callback, usr_confirmed_arg, *input_combo
], output_combo)
```
5. 最后,`route_switchy_bt_with_arg`会搜集所有用户参数,统一集中到`plugin_kwargs`中,并执行对应插件的`execute`函数(分发过程见下方示意)。
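这一分发过程可以用几行示意性的 Python 勾勒出来(仅为原理草图:函数名带 `_sketch` 后缀以示区分,参数的打包格式在此假设为 JSON 字符串,真实实现细节以 `themes/gui_advanced_plugin_class.py` 为准):
```python
import json

# 原理草图:还原二级菜单中缓存的用户选择,合并为 plugin_kwargs,再交给插件的 execute 生成器
def dispatch_plugin_sketch(plugin_execute, usr_confirmed_arg, txt, llm_kwargs, chatbot, history, system_prompt, user_request):
    plugin_kwargs = json.loads(usr_confirmed_arg)  # 假设:用户选择以JSON字符串形式缓存
    yield from plugin_execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
```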

查看文件

@@ -22,13 +22,13 @@
 | crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 |
 | crazy_functions\代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 |
 | crazy_functions\图片生成.py | 根据激励文本使用GPT模型生成相应的图像 |
-| crazy_functions\对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 |
+| crazy_functions\Conversation_To_File.py | 将每次对话记录写入Markdown格式的文件中 |
 | crazy_functions\总结word文档.py | 对输入的word文档进行摘要生成 |
 | crazy_functions\总结音视频.py | 对输入的音视频文件进行摘要生成 |
-| crazy_functions\批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 |
+| crazy_functions\Markdown_Translate.py | 将指定目录下的Markdown文件进行中英文翻译 |
 | crazy_functions\批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 |
 | crazy_functions\批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 |
-| crazy_functions\批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 |
+| crazy_functions\PDF_Translate.py | 将指定目录下的PDF文件进行中英文翻译 |
 | crazy_functions\理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 |
 | crazy_functions\生成函数注释.py | 自动生成Python函数的注释 |
 | crazy_functions\联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 |
@@ -155,9 +155,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件提供了一个用于生成图像的函数`图片生成`。函数实现的过程中,会调用`gen_image`函数来生成图像,并返回图像生成的网址和本地文件地址。函数有多个参数,包括`prompt`(激励文本)、`llm_kwargs`(GPT模型的参数)、`plugin_kwargs`(插件模型的参数)等。函数核心代码使用了`requests`库向OpenAI API请求图像,并做了简单的处理和保存。函数还更新了交互界面,清空聊天历史并显示正在生成图像的消息和最终的图像网址和预览。
-## [18/48] 请对下面的程序文件做一个概述: crazy_functions\对话历史存档.py
+## [18/48] 请对下面的程序文件做一个概述: crazy_functions\Conversation_To_File.py
-这个文件是名为crazy_functions\对话历史存档.py的Python程序文件,包含了4个函数:
+这个文件是名为crazy_functions\Conversation_To_File.py的Python程序文件,包含了4个函数:
 1. write_chat_to_file(chatbot, history=None, file_name=None):用来将对话记录以Markdown格式写入文件中,并且生成文件名,如果没指定文件名则用当前时间。写入完成后将文件路径打印出来。
@@ -165,7 +165,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。
-4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
+4. Conversation_To_File(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
 ## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py
@@ -175,9 +175,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件包括两个函数split_audio_file()和AnalyAudio(),并且导入了一些必要的库并定义了一些工具函数。split_audio_file用于将音频文件分割成多个时长相等的片段,返回一个包含所有切割音频片段文件路径的列表,而AnalyAudio用来分析音频文件,通过调用whisper模型进行音频转文字并使用GPT模型对音频内容进行概述,最终将所有总结结果写入结果文件中。
-## [21/48] 请对下面的程序文件做一个概述: crazy_functions\批量Markdown翻译.py
+## [21/48] 请对下面的程序文件做一个概述: crazy_functions\Markdown_Translate.py
-该程序文件名为`批量Markdown翻译.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。
+该程序文件名为`Markdown_Translate.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。
 ## [22/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档.py
@@ -187,9 +187,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件是一个用于批量总结PDF文档的函数插件,使用了pdfminer插件和BeautifulSoup库来提取PDF文档的文本内容,对每个PDF文件分别进行处理并生成中英文摘要。同时,该程序文件还包括一些辅助工具函数和处理异常的装饰器。
-## [24/48] 请对下面的程序文件做一个概述: crazy_functions\批量翻译PDF文档_多线程.py
+## [24/48] 请对下面的程序文件做一个概述: crazy_functions\PDF_Translate.py
-这个程序文件是一个Python脚本,文件名为“批量翻译PDF文档_多线程.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。
+这个程序文件是一个Python脚本,文件名为“PDF_Translate.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。
 ## [25/48] 请对下面的程序文件做一个概述: crazy_functions\理解PDF文档内容.py
@@ -331,19 +331,19 @@ check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, c
 这些程序源文件提供了基础的文本和语言处理功能、工具函数和高级插件,使 Chatbot 能够处理各种复杂的学术文本问题,包括润色、翻译、搜索、下载、解析等。
 ## 用一张Markdown表格简要描述以下文件的功能
-crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\对话历史存档.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。
+crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\Conversation_To_File.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\Markdown_Translate.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\PDF_Translate.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。
 | 文件名 | 功能简述 |
 | --- | --- |
 | 代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 |
 | 图片生成.py | 根据激励文本使用GPT模型生成相应的图像 |
-| 对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 |
+| Conversation_To_File.py | 将每次对话记录写入Markdown格式的文件中 |
 | 总结word文档.py | 对输入的word文档进行摘要生成 |
 | 总结音视频.py | 对输入的音视频文件进行摘要生成 |
-| 批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 |
+| Markdown_Translate.py | 将指定目录下的Markdown文件进行中英文翻译 |
 | 批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 |
 | 批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 |
-| 批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 |
+| PDF_Translate.py | 将指定目录下的PDF文件进行中英文翻译 |
 | 理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 |
 | 生成函数注释.py | 自动生成Python函数的注释 |
 | 联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 |

(文件差异内容过多而无法显示)

查看文件

@@ -36,15 +36,15 @@
"总结word文档": "SummarizeWordDocument", "总结word文档": "SummarizeWordDocument",
"解析ipynb文件": "ParseIpynbFile", "解析ipynb文件": "ParseIpynbFile",
"解析JupyterNotebook": "ParseJupyterNotebook", "解析JupyterNotebook": "ParseJupyterNotebook",
"对话历史存档": "ConversationHistoryArchive", "Conversation_To_File": "ConversationHistoryArchive",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"删除所有本地对话历史记录": "DeleteAllLocalChatHistory", "删除所有本地对话历史记录": "DeleteAllLocalChatHistory",
"Markdown英译中": "MarkdownTranslateFromEngToChi", "Markdown英译中": "MarkdownTranslateFromEngToChi",
"批量Markdown翻译": "BatchTranslateMarkdown", "Markdown_Translate": "BatchTranslateMarkdown",
"批量总结PDF文档": "BatchSummarizePDFDocuments", "批量总结PDF文档": "BatchSummarizePDFDocuments",
"批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsUsingPDFMiner", "批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsUsingPDFMiner",
"批量翻译PDF文档": "BatchTranslatePDFDocuments", "批量翻译PDF文档": "BatchTranslatePDFDocuments",
"批量翻译PDF文档_多线程": "BatchTranslatePDFDocumentsUsingMultiThreading", "PDF_Translate": "BatchTranslatePDFDocumentsUsingMultiThreading",
"谷歌检索小助手": "GoogleSearchAssistant", "谷歌检索小助手": "GoogleSearchAssistant",
"理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPDFDocumentContent", "理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPDFDocumentContent",
"理解PDF文档内容": "UnderstandingPDFDocumentContent", "理解PDF文档内容": "UnderstandingPDFDocumentContent",
@@ -1492,7 +1492,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunction", "交互功能模板函数": "InteractiveFunctionTemplateFunction",
"交互功能函数模板": "InteractiveFunctionFunctionTemplate", "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
"Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
"Latex输出PDF结果": "LatexOutputPDFResult", "Latex_Function": "LatexOutputPDFResult",
"Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",

查看文件

@@ -6,17 +6,14 @@
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison", "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
"下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract", "下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract",
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage", "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
"批量翻译PDF文档_多线程": "BatchTranslatePDFDocuments_MultiThreaded",
"下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract", "下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract",
"解析一个Python项目": "ParsePythonProject", "解析一个Python项目": "ParsePythonProject",
"解析一个Golang项目": "ParseGolangProject", "解析一个Golang项目": "ParseGolangProject",
"代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded", "代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded",
"解析一个CSharp项目": "ParsingCSharpProject", "解析一个CSharp项目": "ParsingCSharpProject",
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords", "删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
"批量Markdown翻译": "BatchTranslateMarkdown",
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion", "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
"Langchain知识库": "LangchainKnowledgeBase", "Langchain知识库": "LangchainKnowledgeBase",
"Latex输出PDF结果": "OutputPDFFromLatex",
"把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline", "把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
"Latex精细分解与转化": "DecomposeAndConvertLatex", "Latex精细分解与转化": "DecomposeAndConvertLatex",
"解析一个C项目的头文件": "ParseCProjectHeaderFiles", "解析一个C项目的头文件": "ParseCProjectHeaderFiles",
@@ -46,7 +43,7 @@
"高阶功能模板函数": "HighOrderFunctionTemplateFunctions", "高阶功能模板函数": "HighOrderFunctionTemplateFunctions",
"高级功能函数模板": "AdvancedFunctionTemplate", "高级功能函数模板": "AdvancedFunctionTemplate",
"总结word文档": "SummarizingWordDocuments", "总结word文档": "SummarizingWordDocuments",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"Latex中译英": "LatexChineseToEnglish", "Latex中译英": "LatexChineseToEnglish",
"Latex英译中": "LatexEnglishToChinese", "Latex英译中": "LatexEnglishToChinese",
"连接网络回答问题": "ConnectToNetworkToAnswerQuestions", "连接网络回答问题": "ConnectToNetworkToAnswerQuestions",
@@ -70,7 +67,6 @@
"读文章写摘要": "ReadArticleWriteSummary", "读文章写摘要": "ReadArticleWriteSummary",
"生成函数注释": "GenerateFunctionComments", "生成函数注释": "GenerateFunctionComments",
"解析项目本身": "ParseProjectItself", "解析项目本身": "ParseProjectItself",
"对话历史存档": "ConversationHistoryArchive",
"专业词汇声明": "ProfessionalTerminologyDeclaration", "专业词汇声明": "ProfessionalTerminologyDeclaration",
"解析docx": "ParseDocx", "解析docx": "ParseDocx",
"解析源代码新": "ParsingSourceCodeNew", "解析源代码新": "ParsingSourceCodeNew",
@@ -97,5 +93,18 @@
"多智能体": "MultiAgent", "多智能体": "MultiAgent",
"图片生成_DALLE2": "ImageGeneration_DALLE2", "图片生成_DALLE2": "ImageGeneration_DALLE2",
"图片生成_DALLE3": "ImageGeneration_DALLE3", "图片生成_DALLE3": "ImageGeneration_DALLE3",
"图片修改_DALLE2": "ImageModification_DALLE2" "图片修改_DALLE2": "ImageModification_DALLE2",
} "生成多种Mermaid图表": "GenerateMultipleMermaidCharts",
"知识库文件注入": "InjectKnowledgeBaseFiles",
"PDF翻译中文并重新编译PDF": "TranslatePDFToChineseAndRecompilePDF",
"随机小游戏": "RandomMiniGame",
"互动小游戏": "InteractiveMiniGame",
"解析历史输入": "ParseHistoricalInput",
"高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram",
"载入对话历史存档": "LoadChatHistoryArchive",
"对话历史存档": "ChatHistoryArchive",
"解析PDF_DOC2X_转Latex": "ParsePDF_DOC2X_toLatex",
"解析PDF_基于DOC2X": "ParsePDF_basedDOC2X",
"解析PDF_简单拆解": "ParsePDF_simpleDecomposition",
"解析PDF_DOC2X_单文件": "ParsePDF_DOC2X_singleFile"
}

查看文件

@@ -35,15 +35,15 @@
"总结word文档": "SummarizeWordDocument", "总结word文档": "SummarizeWordDocument",
"解析ipynb文件": "ParseIpynbFile", "解析ipynb文件": "ParseIpynbFile",
"解析JupyterNotebook": "ParseJupyterNotebook", "解析JupyterNotebook": "ParseJupyterNotebook",
"对话历史存档": "ConversationHistoryArchive", "Conversation_To_File": "ConversationHistoryArchive",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords", "删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
"Markdown英译中": "MarkdownEnglishToChinese", "Markdown英译中": "MarkdownEnglishToChinese",
"批量Markdown翻译": "BatchMarkdownTranslation", "Markdown_Translate": "BatchMarkdownTranslation",
"批量总结PDF文档": "BatchSummarizePDFDocuments", "批量总结PDF文档": "BatchSummarizePDFDocuments",
"批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsPdfminer", "批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsPdfminer",
"批量翻译PDF文档": "BatchTranslatePDFDocuments", "批量翻译PDF文档": "BatchTranslatePDFDocuments",
"批量翻译PDF文档_多线程": "BatchTranslatePdfDocumentsMultithreaded", "PDF_Translate": "BatchTranslatePdfDocumentsMultithreaded",
"谷歌检索小助手": "GoogleSearchAssistant", "谷歌检索小助手": "GoogleSearchAssistant",
"理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPdfDocumentContent", "理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPdfDocumentContent",
"理解PDF文档内容": "UnderstandingPdfDocumentContent", "理解PDF文档内容": "UnderstandingPdfDocumentContent",
@@ -1468,7 +1468,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunctions", "交互功能模板函数": "InteractiveFunctionTemplateFunctions",
"交互功能函数模板": "InteractiveFunctionFunctionTemplates", "交互功能函数模板": "InteractiveFunctionFunctionTemplates",
"Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
"Latex输出PDF结果": "OutputPDFFromLatex", "Latex_Function": "OutputPDFFromLatex",
"Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",

查看文件

@@ -3,7 +3,7 @@
 ## 1. 安装额外依赖
 ```
-pip install --upgrade pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+pip install --upgrade pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
 ```
 如果因为特色网络问题导致上述命令无法执行:

docs/use_tts.md(新文件,58 行)
查看文件

@@ -0,0 +1,58 @@
# 使用TTS文字转语音
## 1. 使用EDGE-TTS(简单)
将本项目配置项修改如下即可
```
TTS_TYPE = "EDGE_TTS"
EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
```
## 2. 使用SoVITS(需要有显卡)
使用以下docker-compose.yml文件,先启动SoVITS服务API
1. 创建以下文件夹结构
```shell
.
├── docker-compose.yml
└── reference
├── clone_target_txt.txt
└── clone_target_wave.mp3
```
2. 其中`docker-compose.yml`为
```yaml
version: '3.8'
services:
gpt-sovits:
image: fuqingxu/sovits_gptac_trim:latest
container_name: sovits_gptac_container
working_dir: /workspace/gpt_sovits_demo
environment:
- is_half=False
- is_share=False
volumes:
- ./reference:/reference
ports:
- "19880:9880" # 19880 为 sovits api 的暴露端口,记住它
shm_size: 16G
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: "all"
capabilities: [gpu]
command: bash -c "python3 api.py"
```
3. 其中`clone_target_wave.mp3`为需要克隆的角色音频,`clone_target_txt.txt`为该音频对应的文字文本( https://wiki.biligame.com/ys/%E8%A7%92%E8%89%B2%E8%AF%AD%E9%9F%B3 )
4. 运行`docker-compose up`
5. 将本项目配置项修改如下即可
(19880 为 sovits api 的暴露端口,与docker-compose.yml中的端口对应)
```
TTS_TYPE = "LOCAL_SOVITS_API"
GPT_SOVITS_URL = "http://127.0.0.1:19880"
```
6. 启动本项目
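在启动本项目之前,可以先用一小段 Python 确认 SoVITS 容器的端口已经就绪(以下只是连通性自检的辅助示例,不涉及具体的 API 参数;端口 19880 与上文 docker-compose.yml 中的映射对应):
```python
import socket

# 检查 sovits api 端口是否可连接(纯TCP连通性测试)
def port_is_open(host="127.0.0.1", port=19880, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("SoVITS API 端口已就绪" if port_is_open() else "端口未开放,请检查 docker-compose 是否已启动")
```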

docs/use_vllm.md(新文件,46 行)
查看文件

@@ -0,0 +1,46 @@
# 使用VLLM
## 1. 首先启动 VLLM,自行选择模型
```
python -m vllm.entrypoints.openai.api_server --model /home/hmp/llm/cache/Qwen1___5-32B-Chat --tensor-parallel-size 2 --dtype=half
```
这里使用了存储在 `/home/hmp/llm/cache/Qwen1___5-32B-Chat` 的本地模型,可以根据自己的需求更改。
## 2. 测试 VLLM
```
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "/home/hmp/llm/cache/Qwen1___5-32B-Chat",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "怎么实现一个去中心化的控制器?"}
]
}'
```
## 3. 配置本项目
```
API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"
LLM_MODEL = "vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=4096)"
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "http://localhost:8000/v1/chat/completions"}
```
```
"vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=4096)"
其中
"vllm-" 是前缀(必要)
"/home/hmp/llm/cache/Qwen1___5-32B-Chat" 是模型名(必要)
"(max_token=6666)" 是配置(非必要)
```
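作为参考,下面的示意代码演示了如何按上述约定拆出模型名与可选配置(正则与函数名均为本文虚构示例,并非项目源码):
```python
import re

def parse_vllm_model_tag(tag: str):
    # 形如 "vllm-<模型路径>(max_token=4096)",括号内的配置可省略
    m = re.fullmatch(r"vllm-(?P<model>.+?)(?:\((?P<cfg>[^)]*)\))?", tag)
    if m is None:
        raise ValueError("不是合法的 vllm 模型标签: " + tag)
    cfg = dict(kv.split("=", 1) for kv in m.group("cfg").split(",")) if m.group("cfg") else {}
    return m.group("model"), cfg

print(parse_vllm_model_tag("vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=4096)"))
# 输出: ('/home/hmp/llm/cache/Qwen1___5-32B-Chat', {'max_token': '4096'})
```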
## 4. 启动!
```
python main.py
```

查看文件

@@ -1,30 +0,0 @@
try {
$("<link>").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head');
$('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
$.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() {
$.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() {
/* 可直接修改部分参数 */
live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
live2d_settings['modelId'] = 5; // 默认模型 ID
live2d_settings['modelTexturesId'] = 1; // 默认材质 ID
live2d_settings['modelStorage'] = false; // 不储存模型 ID
live2d_settings['waifuSize'] = '210x187';
live2d_settings['waifuTipsSize'] = '187x52';
live2d_settings['canSwitchModel'] = true;
live2d_settings['canSwitchTextures'] = true;
live2d_settings['canSwitchHitokoto'] = false;
live2d_settings['canTakeScreenshot'] = false;
live2d_settings['canTurnToHomePage'] = false;
live2d_settings['canTurnToAboutPage'] = false;
live2d_settings['showHitokoto'] = false; // 显示一言
live2d_settings['showF12Status'] = false; // 显示加载状态
live2d_settings['showF12Message'] = false; // 显示看板娘消息
live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
/* 在 initModel 前添加 */
initModel("file=docs/waifu_plugin/waifu-tips.json");
}});
}});
} catch(err) { console.log("[Error] JQuery is not defined.") }

main.py(差异 750 行)
查看文件

@@ -1,406 +1,344 @@
-import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
+import os, json; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
 help_menu_description = \
 """Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
 感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors).
 </br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki),
 如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues).
 </br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交
 </br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮
 </br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮
 </br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端
 </br></br>如何保存对话: 点击保存当前的对话按钮
 </br></br>如何语音对话: 请阅读Wiki
 </br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交网页刷新后失效"""
 
+def enable_log(PATH_LOGGING):
+    import logging
+    admin_log_path = os.path.join(PATH_LOGGING, "admin")
+    os.makedirs(admin_log_path, exist_ok=True)
+    log_dir = os.path.join(admin_log_path, "chat_secrets.log")
+    try:logging.basicConfig(filename=log_dir, level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    except:logging.basicConfig(filename=log_dir, level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    # Disable logging output from the 'httpx' logger
+    logging.getLogger("httpx").setLevel(logging.WARNING)
+    print(f"所有对话记录将自动保存在本地目录{log_dir}, 请注意自我隐私保护哦!")
+
 def main():
     import gradio as gr
-    if gr.__version__ not in ['3.32.6', '3.32.7']:
+    if gr.__version__ not in ['3.32.9', '3.32.10']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
-    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
+    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
     # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
-    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
-    DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
-    INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
+    NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
+    DARK_MODE, INIT_SYS_PROMPT, ADD_WAIFU, TTS_TYPE = get_conf('DARK_MODE', 'INIT_SYS_PROMPT', 'ADD_WAIFU', 'TTS_TYPE')
+    if LLM_MODEL not in AVAIL_LLM_MODELS: AVAIL_LLM_MODELS += [LLM_MODEL]
 
     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     from check_proxy import get_current_version
-    from themes.theme import adjust_theme, advanced_css, theme_declaration
-    from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
-    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
+    from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
+    from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
+    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
     title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
 
-    # 问询记录, python 版本建议3.9+(越新越好)
-    import logging, uuid
-    os.makedirs(PATH_LOGGING, exist_ok=True)
-    try:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
-    except:logging.basicConfig(filename=f"{PATH_LOGGING}/chat_secrets.log", level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
-    # Disable logging output from the 'httpx' logger
-    logging.getLogger("httpx").setLevel(logging.WARNING)
-    print(f"所有问询记录将自动保存在本地目录./{PATH_LOGGING}/chat_secrets.log, 请注意自我隐私保护哦!")
+    # 对话、日志记录
+    enable_log(PATH_LOGGING)
 
     # 一些普通功能模块
     from core_functional import get_core_functions
     functional = get_core_functions()
 
     # 高级函数插件
     from crazy_functional import get_crazy_functions
     DEFAULT_FN_GROUPS = get_conf('DEFAULT_FN_GROUPS')
     plugins = get_crazy_functions()
     all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
     match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])
 
     # 处理markdown文本格式的转变
     gr.Chatbot.postprocess = format_io
 
     # 做一些外观色彩上的调整
     set_theme = adjust_theme()
 
     # 代理与自动更新
     from check_proxy import check_proxy, auto_update, warm_up_modules
     proxy_info = check_proxy(proxies)
 
     gr_L1 = lambda: gr.Row().style()
-    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id)
+    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id, min_width=400)
     if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
         gr_L2 = lambda scale, elem_id: gr.Row()
         CHATBOT_HEIGHT /= 2
 
     cancel_handles = []
     customize_btns = {}
     predefined_btns = {}
-    with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
+    from shared_utils.cookie_manager import make_cookie_cache, make_history_cache
+    with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block:
         gr.HTML(title_html)
-        secret_css, dark_mode, persistent_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
-        cookies = gr.State(load_chat_cookies())
+        secret_css = gr.Textbox(visible=False, elem_id="secret_css")
+        register_advanced_plugin_init_code_arr = ""
+
+        cookies, web_cookie_cache = make_cookie_cache() # 定义 后端state(cookies)、前端(web_cookie_cache)两兄弟
         with gr_L1():
             with gr_L2(scale=2, elem_id="gpt-chat"):
                 chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}", elem_id="gpt-chatbot")
                 if LAYOUT == "TOP-DOWN": chatbot.style(height=CHATBOT_HEIGHT)
-                history = gr.State([])
+                history, history_cache, history_cache_update = make_history_cache() # 定义 后端state(history)、前端(history_cache)、后端setter(history_cache_update)三兄弟
            with gr_L2(scale=1, elem_id="gpt-panel"):
                 with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
                     with gr.Row():
                         txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
                     with gr.Row():
                         submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
                     with gr.Row():
                         resetBtn = gr.Button("重置", elem_id="elem_reset", variant="secondary"); resetBtn.style(size="sm")
                         stopBtn = gr.Button("停止", elem_id="elem_stop", variant="secondary"); stopBtn.style(size="sm")
                         clearBtn = gr.Button("清除", elem_id="elem_clear", variant="secondary", visible=False); clearBtn.style(size="sm")
                     if ENABLE_AUDIO:
                         with gr.Row():
                             audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
                     with gr.Row():
                         status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
                 with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
                     with gr.Row():
                         for k in range(NUM_CUSTOM_BASIC_BTN):
                             customize_btn = gr.Button("自定义按钮" + str(k+1), visible=False, variant="secondary", info_str=f'基础功能区: 自定义按钮')
                             customize_btn.style(size="sm")
                             customize_btns.update({"自定义按钮" + str(k+1): customize_btn})
                         for k in functional:
                             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
                             variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
                             functional[k]["Button"] = gr.Button(k, variant=variant, info_str=f'基础功能区: {k}')
                             functional[k]["Button"].style(size="sm")
                             predefined_btns.update({k: functional[k]["Button"]})
                 with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
                     with gr.Row():
-                        gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
+                        gr.Markdown("<small>插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)</small>")
                     with gr.Row(elem_id="input-plugin-group"):
                         plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
                                                        multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
                     with gr.Row():
                         for k, plugin in plugins.items():
                             if not plugin.get("AsButton", True): continue
                             visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
                             variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
                             info = plugins[k].get("Info", k)
                             plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant,
                                 visible=visible, info_str=f'函数插件区: {info}').style(size="sm")
                     with gr.Row():
                         with gr.Accordion("更多函数插件", open=True):
                             dropdown_fn_list = []
                             for k, plugin in plugins.items():
                                 if not match_group(plugin['Group'], DEFAULT_FN_GROUPS): continue
                                 if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件
                                 elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
                             with gr.Row():
-                                dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
+                                dropdown = gr.Dropdown(dropdown_fn_list, value=r"点击这里搜索插件列表", label="", show_label=False).style(container=False)
                             with gr.Row():
-                                plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
+                                plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, elem_id="advance_arg_input_legacy",
                                                                  placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
                             with gr.Row():
                                 switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
                     with gr.Row():
                         with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
                             file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
 
-        with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
-            with gr.Row():
-                with gr.Tab("上传文件", elem_id="interact-panel"):
-                    gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
-                    file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")
-
-                with gr.Tab("更换模型", elem_id="interact-panel"):
-                    md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
-                    top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
-                    temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
-                    max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
-                    system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT)
-                with gr.Tab("界面外观", elem_id="interact-panel"):
-                    theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
-                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
-                                                  value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
-                    checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
-                                                    value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
-                    dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
-                    dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
-                with gr.Tab("帮助", elem_id="interact-panel"):
-                    gr.Markdown(help_menu_description)
-
-        with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
-            with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
-                with gr.Row() as row:
-                    row.style(equal_height=True)
-                    with gr.Column(scale=10):
-                        txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
-                                          elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
-                    with gr.Column(scale=1, min_width=40):
-                        submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
-                        resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
-                        stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
-                        clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
-
-        with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
-            with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
-                with gr.Row() as row:
-                    with gr.Column(scale=10):
-                        AVAIL_BTN = [btn for btn in customize_btns.keys()] + [k for k in functional]
-                        basic_btn_dropdown = gr.Dropdown(AVAIL_BTN, value="自定义按钮1", label="选择一个需要自定义基础功能区按钮").style(container=False)
-                        basic_fn_title = gr.Textbox(show_label=False, placeholder="输入新按钮名称", lines=1).style(container=False)
-                        basic_fn_prefix = gr.Textbox(show_label=False, placeholder="输入新提示前缀", lines=4).style(container=False)
-                        basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
-                    with gr.Column(scale=1, min_width=70):
-                        basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
-                        basic_fn_load = gr.Button("加载已保存", variant="primary"); basic_fn_load.style(size="sm")
-        def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix):
-            ret = {}
-            customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
-            customize_fn_overwrite_.update({
-                basic_btn_dropdown_:
-                    {
-                        "Title":basic_fn_title,
-                        "Prefix":basic_fn_prefix,
-                        "Suffix":basic_fn_suffix,
-                    }
-                }
-            )
-            cookies_.update(customize_fn_overwrite_)
-            if basic_btn_dropdown_ in customize_btns:
-                ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
-            else:
-                ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
-            ret.update({cookies: cookies_})
-            try: persistent_cookie_ = from_cookie_str(persistent_cookie_)   # persistent cookie to dict
-            except: persistent_cookie_ = {}
-            persistent_cookie_["custom_bnt"] = customize_fn_overwrite_     # dict update new value
-            persistent_cookie_ = to_cookie_str(persistent_cookie_)         # persistent cookie to dict
-            ret.update({persistent_cookie: persistent_cookie_})            # write persistent cookie
-            return ret
-
-        def reflesh_btn(persistent_cookie_, cookies_):
-            ret = {}
-            for k in customize_btns:
-                ret.update({customize_btns[k]: gr.update(visible=False, value="")})
-            try: persistent_cookie_ = from_cookie_str(persistent_cookie_)   # persistent cookie to dict
-            except: return ret
-            customize_fn_overwrite_ = persistent_cookie_.get("custom_bnt", {})
-            cookies_['customize_fn_overwrite'] = customize_fn_overwrite_
-            ret.update({cookies: cookies_})
-            for k,v in persistent_cookie_["custom_bnt"].items():
-                if v['Title'] == "": continue
-                if k in customize_btns: ret.update({customize_btns[k]: gr.update(visible=True, value=v['Title'])})
-                else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
-            return ret
-
-        basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
-        h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
-                                   [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
-        # save persistent cookie
-        h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
+        from themes.gui_toolbar import define_gui_toolbar
+        checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
+            define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
+        from themes.gui_floating_menu import define_gui_floating_menu
+        area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2 = \
+            define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache)
+
+        from themes.gui_advanced_plugin_class import define_gui_advanced_plugin_class
+        new_plugin_callback, route_switchy_bt_with_arg, usr_confirmed_arg = \
+            define_gui_advanced_plugin_class(plugins)
 
         # 功能区显示开关与功能区的互动
         def fn_area_visibility(a):
             ret = {}
-            ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
-            ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))})
             ret.update({area_input_primary: gr.update(visible=("浮动输入区" not in a))})
             ret.update({area_input_secondary: gr.update(visible=("浮动输入区" in a))})
-            ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
-            ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
             ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "浮动输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, plugin_advanced_arg] )
+        checkboxes.select(None, [checkboxes], None, _js=js_code_show_or_hide)
 
         # 功能区显示开关与功能区的互动
         def fn_area_visibility_2(a):
             ret = {}
             ret.update({area_customize: gr.update(visible=("自定义菜单" in a))})
             return ret
         checkboxes_2.select(fn_area_visibility_2, [checkboxes_2], [area_customize] )
+        checkboxes_2.select(None, [checkboxes_2], None, _js=js_code_show_or_hide_group2)
 
         # 整理反复出现的控件句柄组合
         input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
+        input_combo_order = ["cookies", "max_length_sl", "md_dropdown", "txt", "txt2", "top_p", "temperature", "chatbot", "history", "system_prompt", "plugin_advanced_arg"]
         output_combo = [cookies, chatbot, history, status]
         predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True)], outputs=output_combo)
         # 提交按钮、重置按钮
         cancel_handles.append(txt.submit(**predict_args))
         cancel_handles.append(txt2.submit(**predict_args))
         cancel_handles.append(submitBtn.click(**predict_args))
         cancel_handles.append(submitBtn2.click(**predict_args))
-        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        clearBtn.click(lambda: ("",""), None, [txt, txt2])
-        clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset)   # 先在前端快速清除chatbot&status
+        resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset)  # 先在前端快速清除chatbot&status
+        reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)), [history], [chatbot, history, status, history_cache])
+        resetBtn.click(*reset_server_side_args)    # 再在后端清除history,把history转存history_cache备用
+        resetBtn2.click(*reset_server_side_args)   # 再在后端清除history,把history转存history_cache备用
+        clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+        clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
         if AUTO_CLEAR_TXT:
-            submitBtn.click(lambda: ("",""), None, [txt, txt2])
-            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
-            txt.submit(lambda: ("",""), None, [txt, txt2])
-            txt2.submit(lambda: ("",""), None, [txt, txt2])
+            submitBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+            submitBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
+            txt.submit(None, None, [txt, txt2], _js=js_code_clear)
+            txt2.submit(None, None, [txt, txt2], _js=js_code_clear)
         # 基础功能区的回调函数注册
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
             click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo)
             cancel_handles.append(click_handle)
         for btn in customize_btns.values():
             click_handle = btn.click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(btn.value)], outputs=output_combo)
             cancel_handles.append(click_handle)
         # 文件上传区,接收文件后与chatbot的互动
         file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
         file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
         # 函数插件-固定按钮区
         for k in plugins:
+            if plugins[k].get("Class", None):
+                plugins[k]["JsMenu"] = plugins[k]["Class"]().get_js_code_for_generating_menu(k)
+                register_advanced_plugin_init_code_arr += """register_advanced_plugin_init_code("{k}","{gui_js}");""".format(k=k, gui_js=plugins[k]["JsMenu"])
             if not plugins[k].get("AsButton", True): continue
-            click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo], output_combo)
-            click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
-            cancel_handles.append(click_handle)
+            if plugins[k].get("Class", None) is None:
+                assert plugins[k].get("Function", None) is not None
+                click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo], output_combo)
+                click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [plugins[k]["Button"]], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
+                cancel_handles.append(click_handle)
+            else:
+                click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_advanced_plugin_launch_code("{k}")""")
         # 函数插件-下拉菜单与随变按钮的互动
         def on_dropdown_changed(k):
             variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
             info = plugins[k].get("Info", k)
             ret = {switchy_bt: gr.update(value=k, variant=variant, info_str=f'函数插件区: {info}')}
             if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
                 ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
             else:
                 ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
             return ret
         dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )

         def on_md_dropdown_changed(k):
             return {chatbot: gr.update(label="当前模型:"+k)}
         md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
 
         def on_theme_dropdown_changed(theme, secret_css):
             adjust_theme, css_part1, _, adjust_dynamic_theme = load_dynamic_theme(theme)
             if adjust_dynamic_theme:
                 css_part2 = adjust_dynamic_theme._get_theme_css()
             else:
                 css_part2 = adjust_theme()._get_theme_css()
             return css_part2 + css_part1
 
         theme_handle = theme_dropdown.select(on_theme_dropdown_changed, [theme_dropdown, secret_css], [secret_css])
         theme_handle.then(
             None,
             [secret_css],
             None,
             _js=js_code_for_css_changing
         )
 
+        switchy_bt.click(None, [switchy_bt], None, _js="(switchy_bt)=>on_flex_button_click(switchy_bt)")
         # 随变按钮的回调函数注册
         def route(request: gr.Request, k, *args, **kwargs):
-            if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
-            yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
-        click_handle = switchy_bt.click(route,[switchy_bt, *input_combo], output_combo)
-        click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
-        cancel_handles.append(click_handle)
+            if k not in [r"点击这里搜索插件列表", r"请先从插件列表中选择"]:
+                if plugins[k].get("Class", None) is None:
+                    assert plugins[k].get("Function", None) is not None
+                yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
+        # 旧插件的高级参数区确认按钮(隐藏)
+        old_plugin_callback = gr.Button(r"未选定任何插件", variant="secondary", visible=False, elem_id="old_callback_btn_for_plugin_exe")
+        click_handle_ng = old_plugin_callback.click(route, [switchy_bt, *input_combo], output_combo)
+        click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
+        cancel_handles.append(click_handle_ng)
+        # 新一代插件的高级参数区确认按钮(隐藏)
+        click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg, [
+            gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order),
+            new_plugin_callback, usr_confirmed_arg, *input_combo
+        ], output_combo)
+        click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
+        cancel_handles.append(click_handle_ng)
         # 终止按钮的回调函数注册
         stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
         stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
         plugins_as_btn = {name:plugin for name, plugin in plugins.items() if plugin.get('Button', None)}
         def on_group_change(group_list):
             btn_list = []
             fns_list = []
             if not group_list: # 处理特殊情况:没有选择任何插件组
                 return [*[plugin['Button'].update(visible=False) for _, plugin in plugins_as_btn.items()], gr.Dropdown.update(choices=[])]
             for k, plugin in plugins.items():
                 if plugin.get("AsButton", True):
                     btn_list.append(plugin['Button'].update(visible=match_group(plugin['Group'], group_list))) # 刷新按钮
                     if plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
                 elif match_group(plugin['Group'], group_list): fns_list.append(k) # 刷新下拉列表
             return [*btn_list, gr.Dropdown.update(choices=fns_list)]
         plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
         if ENABLE_AUDIO:
             from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
             rad = RealtimeAudioDistribution()
             def deal_audio(audio, cookies):
                 rad.feed(cookies['uuid'].hex, audio)
             audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
 
-        demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-        darkmode_js = js_code_for_darkmode_init
-        demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
-        demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js)  # 配置暗色主题或亮色主题
-        demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
+        app_block.load(assign_user_uuid, inputs=[cookies], outputs=[cookies])
+
+        from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
+        load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
+        app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
+            outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
+
+        app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}","{TTS_TYPE}")""")  # 配置暗色主题或亮色主题
+        app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_code_arr))
 
     # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
     def run_delayed_tasks():
         import threading, webbrowser, time
         print(f"如果浏览器没有自动打开,请复制并转到以下URL")
         if DARK_MODE:   print(f"\t「暗色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
         else:           print(f"\t「亮色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
 
         def auto_updates(): time.sleep(0); auto_update()
         def open_browser(): time.sleep(2); webbrowser.open_new_tab(f"http://localhost:{PORT}")
         def warm_up_mods(): time.sleep(6); warm_up_modules()
 
         threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
-        threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
         threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start() # 预热tiktoken模块
+        if get_conf('AUTO_OPEN_BROWSER'):
+            threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
 
+    # 运行一些异步任务自动更新、打开浏览器页面、预热tiktoken模块
     run_delayed_tasks()
-    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
-        quiet=True,
-        server_name="0.0.0.0",
-        ssl_keyfile=None if SSL_KEYFILE == "" else SSL_KEYFILE,
-        ssl_certfile=None if SSL_CERTFILE == "" else SSL_CERTFILE,
-        ssl_verify=False,
-        server_port=PORT,
-        favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
-        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
-        blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
-
-    # 如果需要在二级路径下运行
-    # CUSTOM_PATH = get_conf('CUSTOM_PATH')
-    # if CUSTOM_PATH != "/":
-    #     from toolbox import run_gradio_in_subpath
-    #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
-    # else:
-    #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
-    #                 blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
+
+    # 最后,正式开始服务
+    from shared_utils.fastapi_server import start_app
+    start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SSL_CERTFILE)
 
 if __name__ == "__main__":
     main()

multi_language.py

@@ -1,7 +1,7 @@
 """
     Translate this project to other languages (experimental, please open an issue if there is any bug)
 
     Usage:
         1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider)
@@ -11,20 +11,20 @@
         3. modify TransPrompt (below ↓)
            TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
 
         4. Run `python multi_language.py`.
            Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
            (You can also run `CACHE_ONLY=True python multi_language.py` to use cached translation mapping)
 
         5. Find the translated program in `multi-language\English\*`
 
     P.S.
 
         - The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there.
 
         - If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request
 
         - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request
 
         - Welcome any Pull Request, regardless of language
 """
@@ -58,7 +58,7 @@ if not os.path.exists(CACHE_FOLDER):
 def lru_file_cache(maxsize=128, ttl=None, filename=None):
     """
     Decorator that caches a function's return value after being called with given arguments.
     It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache.
     maxsize: Maximum size of the cache. Defaults to 128.
     ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache.
@@ -151,7 +151,7 @@ def map_to_json(map, language):
 def read_map_from_json(language):
     if os.path.exists(f'docs/translate_{language.lower()}.json'):
         with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f:
             res = json.load(f)
             res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)}
             return res
@@ -168,7 +168,7 @@ def advanced_split(splitted_string, spliter, include_spliter=False):
                 splitted[i] += spliter
             splitted[i] = splitted[i].strip()
         for i in reversed(range(len(splitted))):
             if not contains_chinese(splitted[i]):
                 splitted.pop(i)
         splitted_string_tmp.extend(splitted)
     else:
@@ -183,12 +183,12 @@ def trans(word_to_translate, language, special=False):
     if len(word_to_translate) == 0: return {}
     from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
     from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
     cookies = load_chat_cookies()
     llm_kwargs = {
         'api_key': cookies['api_key'],
         'llm_model': cookies['llm_model'],
         'top_p':1.0,
         'max_length': None,
         'temperature':0.4,
     }
@@ -204,12 +204,12 @@ def trans(word_to_translate, language, special=False):
     sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" for _ in inputs_array]
     chatbot = ChatBotWithCookies(llm_kwargs)
     gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array,
         inputs_show_user_array,
         llm_kwargs,
         chatbot,
         history_array,
         sys_prompt_array,
     )
     while True:
         try:
@@ -224,7 +224,7 @@ def trans(word_to_translate, language, special=False):
             try:
                 res_before_trans = eval(result[i-1])
                 res_after_trans = eval(result[i])
                 if len(res_before_trans) != len(res_after_trans):
                     raise RuntimeError
                 for a,b in zip(res_before_trans, res_after_trans):
                     translated_result[a] = b
@@ -246,12 +246,12 @@ def trans_json(word_to_translate, language, special=False):
     if len(word_to_translate) == 0: return {}
     from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
     from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
     cookies = load_chat_cookies()
     llm_kwargs = {
         'api_key': cookies['api_key'],
         'llm_model': cookies['llm_model'],
         'top_p':1.0,
         'max_length': None,
         'temperature':0.4,
     }
@@ -261,18 +261,18 @@ def trans_json(word_to_translate, language, special=False):
     word_to_translate_split = split_list(word_to_translate, N_EACH_REQ)
     inputs_array = [{k:"#" for k in s} for s in word_to_translate_split]
     inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array]
     inputs_show_user_array = inputs_array
     history_array = [[] for _ in inputs_array]
     sys_prompt_array = [TransPrompt for _ in inputs_array]
     chatbot = ChatBotWithCookies(llm_kwargs)
     gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array,
         inputs_show_user_array,
         llm_kwargs,
         chatbot,
         history_array,
         sys_prompt_array,
     )
     while True:
         try:
@@ -336,7 +336,7 @@ def step_1_core_key_translate():
     cached_translation = read_map_from_json(language=LANG_STD)
     cached_translation_keys = list(cached_translation.keys())
     for d in chinese_core_keys_norepeat:
         if d not in cached_translation_keys:
             need_translate.append(d)
     if CACHE_ONLY:
@@ -379,7 +379,7 @@ def step_1_core_key_translate():
         # read again
         with open(file_path, 'r', encoding='utf-8') as f:
             content = f.read()
         for k, v in chinese_core_keys_norepeat_mapping.items():
             content = content.replace(k, v)
@@ -390,7 +390,7 @@ def step_1_core_key_translate():
 def step_2_core_key_translate():
     # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
     # step2
     # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
     def load_string(strings, string_input):
@@ -423,7 +423,7 @@ def step_2_core_key_translate():
             splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False)
             splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False)
             splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False)
             # --------------------------------------
             for j, s in enumerate(splitted_string):   # .com
                 if '.com' in s: continue
@@ -457,7 +457,7 @@ def step_2_core_key_translate():
         comments_arr = []
         for code_sp in content.splitlines():
             comments = re.findall(r'#.*$', code_sp)
             for comment in comments:
                 load_string(strings=comments_arr, string_input=comment)
         string_literals.extend(comments_arr)
@@ -479,7 +479,7 @@ def step_2_core_key_translate():
     cached_translation = read_map_from_json(language=LANG)
     cached_translation_keys = list(cached_translation.keys())
     for d in chinese_literal_names_norepeat:
         if d not in cached_translation_keys:
             need_translate.append(d)
     if CACHE_ONLY:
@@ -504,18 +504,18 @@ def step_2_core_key_translate():
         # read again
         with open(file_path, 'r', encoding='utf-8') as f:
             content = f.read()
         for k, v in cached_translation.items():
             if v is None: continue
             if '"' in v:
                 v = v.replace('"', "`")
             if '\'' in v:
                 v = v.replace('\'', "`")
             content = content.replace(k, v)
         with open(file_path, 'w', encoding='utf-8') as f:
             f.write(content)
         if file.strip('.py') in cached_translation:
             file_new = cached_translation[file.strip('.py')] + '.py'
             file_path_new = os.path.join(root, file_new)

request_llms/bridge_all.py

@@ -8,10 +8,10 @@
具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁 具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
2. predict_no_ui_long_connection(...) 2. predict_no_ui_long_connection(...)
""" """
import tiktoken, copy import tiktoken, copy, re
from functools import lru_cache from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
from toolbox import get_conf, trimmed_format_exc from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask, read_one_api_model_name
from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui from .bridge_chatgpt import predict as chatgpt_ui
@@ -31,6 +31,14 @@ from .bridge_qianfan import predict as qianfan_ui
from .bridge_google_gemini import predict as genai_ui from .bridge_google_gemini import predict as genai_ui
from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
from .bridge_zhipu import predict as zhipu_ui
from .bridge_cohere import predict as cohere_ui
from .bridge_cohere import predict_no_ui_long_connection as cohere_noui
from .oai_std_model_template import get_predict_function
colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044'] colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
class LazyloadTiktoken(object): class LazyloadTiktoken(object):
@@ -44,13 +52,13 @@ class LazyloadTiktoken(object):
tmp = tiktoken.encoding_for_model(model) tmp = tiktoken.encoding_for_model(model)
print('加载tokenizer完毕') print('加载tokenizer完毕')
return tmp return tmp
def encode(self, *args, **kwargs): def encode(self, *args, **kwargs):
encoder = self.get_encoder(self.model) encoder = self.get_encoder(self.model)
return encoder.encode(*args, **kwargs) return encoder.encode(*args, **kwargs)
def decode(self, *args, **kwargs): def decode(self, *args, **kwargs):
encoder = self.get_encoder(self.model) encoder = self.get_encoder(self.model)
return encoder.decode(*args, **kwargs) return encoder.decode(*args, **kwargs)
# Endpoint 重定向 # Endpoint 重定向
@@ -58,12 +66,19 @@ API_URL_REDIRECT, AZURE_ENDPOINT, AZURE_ENGINE = get_conf("API_URL_REDIRECT", "A
openai_endpoint = "https://api.openai.com/v1/chat/completions" openai_endpoint = "https://api.openai.com/v1/chat/completions"
api2d_endpoint = "https://openai.api2d.net/v1/chat/completions" api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub" newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
claude_endpoint = "https://api.anthropic.com/v1/messages"
cohere_endpoint = "https://api.cohere.ai/v1/chat"
ollama_endpoint = "http://localhost:11434/api/chat"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
deepseekapi_endpoint = "https://api.deepseek.com/v1/chat/completions"
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/' if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15' azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
# 兼容旧版的配置 # 兼容旧版的配置
try: try:
API_URL = get_conf("API_URL") API_URL = get_conf("API_URL")
if API_URL != "https://api.openai.com/v1/chat/completions": if API_URL != "https://api.openai.com/v1/chat/completions":
openai_endpoint = API_URL openai_endpoint = API_URL
print("警告API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置") print("警告API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
except: except:
@@ -72,7 +87,12 @@ except:
if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint] if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint] if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint] if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
if gemini_endpoint in API_URL_REDIRECT: gemini_endpoint = API_URL_REDIRECT[gemini_endpoint]
if claude_endpoint in API_URL_REDIRECT: claude_endpoint = API_URL_REDIRECT[claude_endpoint]
if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[cohere_endpoint]
if ollama_endpoint in API_URL_REDIRECT: ollama_endpoint = API_URL_REDIRECT[ollama_endpoint]
if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
if deepseekapi_endpoint in API_URL_REDIRECT: deepseekapi_endpoint = API_URL_REDIRECT[deepseekapi_endpoint]
 # Fetch the tokenizers
 tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
@@ -91,11 +111,11 @@ model_info = {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
         "endpoint": openai_endpoint,
-        "max_token": 4096,
+        "max_token": 16385,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
     "gpt-3.5-turbo-16k": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -123,7 +143,16 @@ model_info = {
         "token_cnt": get_token_num_gpt35,
     },
-    "gpt-3.5-turbo-1106": {#16k
+    "gpt-3.5-turbo-1106": { #16k
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 16385,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "gpt-3.5-turbo-0125": { #16k
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
         "endpoint": openai_endpoint,
@@ -150,6 +179,33 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },
+    "gpt-4o": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+    "gpt-4o-2024-05-13": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+    "gpt-4-turbo-preview": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
     "gpt-4-1106-preview": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -159,6 +215,34 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },
+    "gpt-4-0125-preview": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+    "gpt-4-turbo": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
+    "gpt-4-turbo-2024-04-09": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 128000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+    },
     "gpt-3.5-random": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -167,7 +251,7 @@ model_info = {
         "tokenizer": tokenizer_gpt4,
         "token_cnt": get_token_num_gpt4,
     },
     "gpt-4-vision-preview": {
         "fn_with_ui": chatgpt_vision_ui,
         "fn_without_ui": chatgpt_vision_noui,
@@ -197,16 +281,65 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },
-    # api_2d (no need to add api2d endpoints here any more; the code below adds them automatically)
-    "api2d-gpt-3.5-turbo": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": api2d_endpoint,
-        "max_token": 4096,
+    # Zhipu AI
+    "glm-4": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-4-0520": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-4-air": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-4-airx": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-4-flash": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-4v": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 1000,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-3-turbo": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 4,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
+    # api_2d (no need to add api2d endpoints here any more; the code below adds them automatically)
     "api2d-gpt-4": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -252,7 +385,7 @@ model_info = {
     "gemini-pro": {
         "fn_with_ui": genai_ui,
         "fn_without_ui": genai_noui,
-        "endpoint": None,
+        "endpoint": gemini_endpoint,
         "max_token": 1024 * 32,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
@@ -260,13 +393,56 @@ model_info = {
     "gemini-pro-vision": {
         "fn_with_ui": genai_ui,
         "fn_without_ui": genai_noui,
+        "endpoint": gemini_endpoint,
+        "max_token": 1024 * 32,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    # cohere
+    "cohere-command-r-plus": {
+        "fn_with_ui": cohere_ui,
+        "fn_without_ui": cohere_noui,
+        "can_multi_thread": True,
+        "endpoint": cohere_endpoint,
+        "max_token": 1024 * 4,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+}
+
+# -=-=-=-=-=-=- Moonshot AI -=-=-=-=-=-=-
+from request_llms.bridge_moonshot import predict as moonshot_ui
+from request_llms.bridge_moonshot import predict_no_ui_long_connection as moonshot_no_ui
+model_info.update({
+    "moonshot-v1-8k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
+        "endpoint": None,
+        "max_token": 1024 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "moonshot-v1-32k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
         "endpoint": None,
         "max_token": 1024 * 32,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
-}
+    "moonshot-v1-128k": {
+        "fn_with_ui": moonshot_ui,
+        "fn_without_ui": moonshot_no_ui,
+        "can_multi_thread": True,
+        "endpoint": None,
+        "max_token": 1024 * 128,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    }
+})

 # -=-=-=-=-=-=- api2d alignment support -=-=-=-=-=-=-
 for model in AVAIL_LLM_MODELS:
     if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()):
@@ -282,25 +458,67 @@ for model in AVAIL_LLM_MODELS:
     model_info.update({model: mi})
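The loop whose head and tail are visible above derives every api2d-* alias from its base model automatically, which is why api2d models no longer need to be listed by hand. A sketch of the elided loop body, reconstructed from the visible context lines (the copy-then-patch detail is an assumption):

for model in AVAIL_LLM_MODELS:
    if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()):
        mi = model_info[model.replace('api2d-','')].copy()   # clone the base model's entry
        mi.update({"endpoint": api2d_endpoint})              # repoint it at the api2d gateway
        model_info.update({model: mi})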
 # -=-=-=-=-=-=- The models below are newly added and may carry extra dependencies -=-=-=-=-=-=-
-if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
+# The claude family
+claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229"]
+if any(item in claude_models for item in AVAIL_LLM_MODELS):
     from .bridge_claude import predict_no_ui_long_connection as claude_noui
     from .bridge_claude import predict as claude_ui
     model_info.update({
-        "claude-1-100k": {
+        "claude-instant-1.2": {
             "fn_with_ui": claude_ui,
             "fn_without_ui": claude_noui,
-            "endpoint": None,
-            "max_token": 8196,
+            "endpoint": claude_endpoint,
+            "max_token": 100000,
             "tokenizer": tokenizer_gpt35,
             "token_cnt": get_token_num_gpt35,
         },
     })
     model_info.update({
-        "claude-2": {
+        "claude-2.0": {
             "fn_with_ui": claude_ui,
             "fn_without_ui": claude_noui,
-            "endpoint": None,
-            "max_token": 8196,
+            "endpoint": claude_endpoint,
+            "max_token": 100000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-2.1": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-haiku-20240307": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-sonnet-20240229": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+    model_info.update({
+        "claude-3-opus-20240229": {
+            "fn_with_ui": claude_ui,
+            "fn_without_ui": claude_noui,
+            "endpoint": claude_endpoint,
+            "max_token": 200000,
             "tokenizer": tokenizer_gpt35,
             "token_cnt": get_token_num_gpt35,
         },
@@ -370,22 +588,6 @@ if "stack-claude" in AVAIL_LLM_MODELS:
             "token_cnt": get_token_num_gpt35,
         }
     })
-if "newbing-free" in AVAIL_LLM_MODELS:
-    try:
-        from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
-        from .bridge_newbingfree import predict as newbingfree_ui
-        model_info.update({
-            "newbing-free": {
-                "fn_with_ui": newbingfree_ui,
-                "fn_without_ui": newbingfree_noui,
-                "endpoint": newbing_endpoint,
-                "max_token": 4096,
-                "tokenizer": tokenizer_gpt35,
-                "token_cnt": get_token_num_gpt35,
-            }
-        })
-    except:
-        print(trimmed_format_exc())
 if "newbing" in AVAIL_LLM_MODELS:    # same with newbing-free
     try:
         from .bridge_newbingfree import predict_no_ui_long_connection as newbingfree_noui
@@ -418,6 +620,7 @@ if "chatglmft" in AVAIL_LLM_MODELS:   # same with newbing-free
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Shanghai AI-Lab InternLM -=-=-=-=-=-=-
 if "internlm" in AVAIL_LLM_MODELS:
     try:
         from .bridge_internlm import predict_no_ui_long_connection as internlm_noui
@@ -450,6 +653,7 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Tongyi Qwen, local model -=-=-=-=-=-=-
 if "qwen-local" in AVAIL_LLM_MODELS:
     try:
         from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
@@ -458,6 +662,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
             "qwen-local": {
                 "fn_with_ui": qwen_local_ui,
                 "fn_without_ui": qwen_local_noui,
+                "can_multi_thread": False,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -466,6 +671,7 @@ if "qwen-local" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- Tongyi Qwen, online model -=-=-=-=-=-=-
 if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:   # zhipuai
     try:
         from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
@@ -474,6 +680,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:
             "qwen-turbo": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 6144,
                 "tokenizer": tokenizer_gpt35,
@@ -482,6 +689,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:
             "qwen-plus": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 30720,
                 "tokenizer": tokenizer_gpt35,
@@ -490,6 +698,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:
             "qwen-max": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 28672,
                 "tokenizer": tokenizer_gpt35,
@@ -498,7 +707,88 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "spark" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
+# -=-=-=-=-=-=- 01.AI Yi models -=-=-=-=-=-=-
+yi_models = ["yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview"]
+if any(item in yi_models for item in AVAIL_LLM_MODELS):
+    try:
+        yimodel_4k_noui, yimodel_4k_ui = get_predict_function(
+            api_key_conf_name="YIMODEL_API_KEY", max_output_token=600, disable_proxy=False
+        )
+        yimodel_16k_noui, yimodel_16k_ui = get_predict_function(
+            api_key_conf_name="YIMODEL_API_KEY", max_output_token=4000, disable_proxy=False
+        )
+        yimodel_200k_noui, yimodel_200k_ui = get_predict_function(
+            api_key_conf_name="YIMODEL_API_KEY", max_output_token=4096, disable_proxy=False
+        )
+        model_info.update({
+            "yi-34b-chat-0205": {
+                "fn_with_ui": yimodel_4k_ui,
+                "fn_without_ui": yimodel_4k_noui,
+                "can_multi_thread": False,  # for now the default concurrency quota is very low, so this stays disabled
+                "endpoint": yimodel_endpoint,
+                "max_token": 4000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-34b-chat-200k": {
+                "fn_with_ui": yimodel_200k_ui,
+                "fn_without_ui": yimodel_200k_noui,
+                "can_multi_thread": False,  # for now the default concurrency quota is very low, so this stays disabled
+                "endpoint": yimodel_endpoint,
+                "max_token": 200000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-large": {
+                "fn_with_ui": yimodel_16k_ui,
+                "fn_without_ui": yimodel_16k_noui,
+                "can_multi_thread": False,  # for now the default concurrency quota is very low, so this stays disabled
+                "endpoint": yimodel_endpoint,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-medium": {
+                "fn_with_ui": yimodel_16k_ui,
+                "fn_without_ui": yimodel_16k_noui,
+                "can_multi_thread": True,  # this one tolerates a bit more concurrency
+                "endpoint": yimodel_endpoint,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-spark": {
+                "fn_with_ui": yimodel_16k_ui,
+                "fn_without_ui": yimodel_16k_noui,
+                "can_multi_thread": True,  # this one tolerates a bit more concurrency
+                "endpoint": yimodel_endpoint,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-large-turbo": {
+                "fn_with_ui": yimodel_16k_ui,
+                "fn_without_ui": yimodel_16k_noui,
+                "can_multi_thread": False,  # for now the default concurrency quota is very low, so this stays disabled
+                "endpoint": yimodel_endpoint,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "yi-large-preview": {
+                "fn_with_ui": yimodel_16k_ui,
+                "fn_without_ui": yimodel_16k_noui,
+                "can_multi_thread": False,  # for now the default concurrency quota is very low, so this stays disabled
+                "endpoint": yimodel_endpoint,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+        })
+    except:
+        print(trimmed_format_exc())
+# -=-=-=-=-=-=- iFlytek Spark LLM -=-=-=-=-=-=-
+if "spark" in AVAIL_LLM_MODELS:
     try:
         from .bridge_spark import predict_no_ui_long_connection as spark_noui
         from .bridge_spark import predict as spark_ui
@@ -506,6 +796,7 @@ if "spark" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
             "spark": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -522,6 +813,7 @@ if "sparkv2" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
             "sparkv2": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -530,7 +822,7 @@ if "sparkv2" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
         })
     except:
         print(trimmed_format_exc())
-if "sparkv3" in AVAIL_LLM_MODELS:    # iFlytek Spark LLM
+if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS:    # iFlytek Spark LLM
     try:
         from .bridge_spark import predict_no_ui_long_connection as spark_noui
         from .bridge_spark import predict as spark_ui
@@ -538,6 +830,16 @@ if "sparkv3" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
             "sparkv3": {
                 "fn_with_ui": spark_ui,
                 "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "sparkv3.5": {
+                "fn_with_ui": spark_ui,
+                "fn_without_ui": spark_noui,
+                "can_multi_thread": True,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -562,22 +864,22 @@ if "llama2" in AVAIL_LLM_MODELS:   # llama2
         })
     except:
         print(trimmed_format_exc())
-if "zhipuai" in AVAIL_LLM_MODELS:    # zhipuai
+# -=-=-=-=-=-=- Zhipu -=-=-=-=-=-=-
+if "zhipuai" in AVAIL_LLM_MODELS:    # zhipuai is an alias of glm-4, kept for backward-compatible configs
     try:
-        from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
-        from .bridge_zhipu import predict as zhipu_ui
         model_info.update({
             "zhipuai": {
                 "fn_with_ui": zhipu_ui,
                 "fn_without_ui": zhipu_noui,
                 "endpoint": None,
-                "max_token": 4096,
+                "max_token": 10124 * 8,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
-            }
+            },
         })
     except:
         print(trimmed_format_exc())
+# -=-=-=-=-=-=- HighFlyer DeepSeek LLM -=-=-=-=-=-=-
 if "deepseekcoder" in AVAIL_LLM_MODELS:    # deepseekcoder
     try:
         from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
@@ -594,30 +896,113 @@ if "deepseekcoder" in AVAIL_LLM_MODELS:   # deepseekcoder
         })
     except:
         print(trimmed_format_exc())
-# if "skylark" in AVAIL_LLM_MODELS:
-#     try:
-#         from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
-#         from .bridge_skylark2 import predict as skylark_ui
-#         model_info.update({
-#             "skylark": {
-#                 "fn_with_ui": skylark_ui,
-#                 "fn_without_ui": skylark_noui,
-#                 "endpoint": None,
-#                 "max_token": 4096,
-#                 "tokenizer": tokenizer_gpt35,
-#                 "token_cnt": get_token_num_gpt35,
-#             }
-#         })
-#     except:
-#         print(trimmed_format_exc())
+# -=-=-=-=-=-=- HighFlyer DeepSeek LLM, online API -=-=-=-=-=-=-
+if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
+    try:
+        deepseekapi_noui, deepseekapi_ui = get_predict_function(
+            api_key_conf_name="DEEPSEEK_API_KEY", max_output_token=4096, disable_proxy=False
+        )
+        model_info.update({
+            "deepseek-chat":{
+                "fn_with_ui": deepseekapi_ui,
+                "fn_without_ui": deepseekapi_noui,
+                "endpoint": deepseekapi_endpoint,
+                "can_multi_thread": True,
+                "max_token": 32000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "deepseek-coder":{
+                "fn_with_ui": deepseekapi_ui,
+                "fn_without_ui": deepseekapi_noui,
+                "endpoint": deepseekapi_endpoint,
+                "can_multi_thread": True,
+                "max_token": 16000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+        })
+    except:
+        print(trimmed_format_exc())
+# -=-=-=-=-=-=- one-api alignment support -=-=-=-=-=-=-
+for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
+    # This hook allows flexible access to a one-api multi-model gateway, e.g. AVAIL_LLM_MODELS = ["one-api-mixtral-8x7b(max_token=6666)"]
+    # where
+    #   "one-api-"          is the prefix (required)
+    #   "mixtral-8x7b"      is the model name (required)
+    #   "(max_token=6666)"  is the optional configuration
+    try:
+        _, max_token_tmp = read_one_api_model_name(model)
+    except:
+        print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
+        continue
+    model_info.update({
+        model: {
+            "fn_with_ui": chatgpt_ui,
+            "fn_without_ui": chatgpt_noui,
+            "can_multi_thread": True,
+            "endpoint": openai_endpoint,
+            "max_token": max_token_tmp,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
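The helper read_one_api_model_name used here and in the vllm/ollama hooks below is defined elsewhere in the repo. A hypothetical sketch of its contract, inferred from the comments and the except branch above (the default max_token is an assumption):

def read_one_api_model_name(model: str):
    # sketch: "one-api-mixtral-8x7b(max_token=6666)" -> ("one-api-mixtral-8x7b", 6666)
    if "(max_token=" in model:
        name, _, cfg = model.partition("(max_token=")
        return name, int(cfg.rstrip(")"))   # raises ValueError on a non-integer, caught above
    return model, 4096                      # assumed default when no suffix is given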
+# -=-=-=-=-=-=- vllm alignment support -=-=-=-=-=-=-
+for model in [m for m in AVAIL_LLM_MODELS if m.startswith("vllm-")]:
+    # This hook allows flexible access to a vllm multi-model gateway, e.g. AVAIL_LLM_MODELS = ["vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=6666)"]
+    # where
+    #   "vllm-"             is the prefix (required)
+    #   "mixtral-8x7b"      is the model name (required)
+    #   "(max_token=6666)"  is the optional configuration
+    try:
+        _, max_token_tmp = read_one_api_model_name(model)
+    except:
+        print(f"vllm模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
+        continue
+    model_info.update({
+        model: {
+            "fn_with_ui": chatgpt_ui,
+            "fn_without_ui": chatgpt_noui,
+            "can_multi_thread": True,
+            "endpoint": openai_endpoint,
+            "max_token": max_token_tmp,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
+
+# -=-=-=-=-=-=- ollama alignment support -=-=-=-=-=-=-
+for model in [m for m in AVAIL_LLM_MODELS if m.startswith("ollama-")]:
+    # import the ollama bridge once if any ollama- model is configured
+    from .bridge_ollama import predict_no_ui_long_connection as ollama_noui
+    from .bridge_ollama import predict as ollama_ui
+    break
+for model in [m for m in AVAIL_LLM_MODELS if m.startswith("ollama-")]:
+    # This hook allows flexible access to an ollama multi-model gateway, e.g. AVAIL_LLM_MODELS = ["ollama-phi3(max_token=6666)"]
+    # where
+    #   "ollama-"           is the prefix (required)
+    #   "phi3"              is the model name (required)
+    #   "(max_token=6666)"  is the optional configuration
+    try:
+        _, max_token_tmp = read_one_api_model_name(model)
+    except:
+        print(f"ollama模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
+        continue
+    model_info.update({
+        model: {
+            "fn_with_ui": ollama_ui,
+            "fn_without_ui": ollama_noui,
+            "endpoint": ollama_endpoint,
+            "max_token": max_token_tmp,
+            "tokenizer": tokenizer_gpt35,
+            "token_cnt": get_token_num_gpt35,
+        },
+    })
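Taken together, the three prefix hooks let the config list gateway-hosted models directly. An example AVAIL_LLM_MODELS value exercising them, assembled from the sample names quoted in the comments above:

AVAIL_LLM_MODELS = [
    "gpt-4o",
    "one-api-mixtral-8x7b(max_token=6666)",
    "vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=6666)",
    "ollama-phi3(max_token=6666)",
]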
+# -=-=-=-=-=-=- Azure model alignment support -=-=-=-=-=-=-
-# <-- used to define and switch between multiple azure models -->
-AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
+AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")   # <-- used to define and switch between multiple azure models -->
 if len(AZURE_CFG_ARRAY) > 0:
     for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
         # this may override an earlier entry, which is intended
         if not azure_model_name.startswith('azure'):
             raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
         endpoint_ = azure_cfg_dict["AZURE_ENDPOINT"] + \
             f'openai/deployments/{azure_cfg_dict["AZURE_ENGINE"]}/chat/completions?api-version=2023-05-15'
@@ -636,13 +1021,20 @@ if len(AZURE_CFG_ARRAY) > 0:
     AVAIL_LLM_MODELS += [azure_model_name]
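For reference, a hypothetical AZURE_CFG_ARRAY entry showing only the two keys read in this loop; the real config may carry more fields (such as an API key), and both values below are placeholders:

AZURE_CFG_ARRAY = {
    "azure-gpt-4": {   # the key must start with "azure"
        "AZURE_ENDPOINT": "https://your-resource.openai.azure.com/",
        "AZURE_ENGINE": "your-deployment-name",
    },
}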
+# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
+# -=-=-=-=-=-=-=-=-=- ☝️ model routing above -=-=-=-=-=-=-=-=-=
+# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
+# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
+# -=-=-=-=-=-=-= 👇 multi-model routing and switching functions below -=-=-=-=-=-=-=
+# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=

 def LLM_CATCH_EXCEPTION(f):
     """
     Decorator that surfaces errors raised by the wrapped call
     """
-    def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
+    def decorated(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list, console_slience:bool):
         try:
             return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
         except Exception as e:
@@ -652,9 +1044,9 @@ def LLM_CATCH_EXCEPTION(f):
     return decorated

-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
     """
-    Send to the LLM and wait for the complete reply in one go, without showing intermediate progress; internally, streaming is used to avoid the connection being cut off midway.
+    Send to the LLM and wait for the complete reply in one go, without showing intermediate progress; internally, streaming is used (where possible) to avoid the connection being cut off midway.
     inputs
         the input of this query
     sys_prompt:
@@ -668,21 +1060,19 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
     """
     import threading, time, copy

+    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
     model = llm_kwargs['llm_model']
     n_model = 1
     if '&' not in model:
-        assert not model.startswith("tgui"), "TGUI不支持函数插件的实现"
-        # if only one LLM is queried
+        # if only "one" LLM is queried (the common case):
         method = model_info[model]["fn_without_ui"]
         return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
     else:
-        # if several LLMs are queried at once: slightly verbose, but the idea is the same; you do not need to read this else branch
+        # if "several" LLMs are queried at once: slightly verbose, but the idea is the same; you do not need to read this else branch
         executor = ThreadPoolExecutor(max_workers=4)
         models = model.split('&')
         n_model = len(models)

         window_len = len(observe_window)
         assert window_len==3
         window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
@@ -701,12 +1091,13 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
             time.sleep(0.25)
             if not window_mutex[-1]: break
             # watchdog
             for i in range(n_model):
                 window_mutex[i][1] = observe_window[1]
             # observation window
             chat_string = []
             for i in range(n_model):
-                chat_string.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {window_mutex[i][0]} </font>" )
+                color = colors[i%len(colors)]
+                chat_string.append( f"【{str(models[i])} 说】: <font color=\"{color}\"> {window_mutex[i][0]} </font>" )
             res = '<br/><br/>\n\n---\n\n'.join(chat_string)
             # # # # # # # # # # #
             observe_window[0] = res
@@ -723,24 +1114,56 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False):
         time.sleep(1)
     for i, future in enumerate(futures):  # wait and get
-        return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {future.result()} </font>" )
+        color = colors[i%len(colors)]
+        return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{color}\"> {future.result()} </font>" )
     window_mutex[-1] = False  # stop mutex thread
     res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
     return res
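Usage sketch for the '&' branch above: joining model names queries them concurrently, and the partial replies are interleaved into observe_window[0]. Values are illustrative; observe_window must have exactly three slots, as the assert requires:

import time
llm_kwargs['llm_model'] = "gpt-3.5-turbo&glm-4"   # llm_kwargs: the usual tuning dict, built elsewhere
res = predict_no_ui_long_connection(
    inputs="hello", llm_kwargs=llm_kwargs, history=[], sys_prompt="",
    observe_window=["", time.time(), ""],   # exactly three slots
)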
+# Adjust the model according to the ModelOverride parameter of a core-functionality button; used inside `predict`
+import importlib
+import core_functional
+def execute_model_override(llm_kwargs, additional_fn, method):
+    functional = core_functional.get_core_functions()
+    if (additional_fn in functional) and 'ModelOverride' in functional[additional_fn]:
+        # hot-reload the prompt & ModelOverride
+        importlib.reload(core_functional)
+        functional = core_functional.get_core_functions()
+        model_override = functional[additional_fn]['ModelOverride']
+        if model_override not in model_info:
+            raise ValueError(f"模型覆盖参数 '{model_override}' 指向一个暂不支持的模型,请检查配置文件。")
+        method = model_info[model_override]["fn_with_ui"]
+        llm_kwargs['llm_model'] = model_override
+        return llm_kwargs, additional_fn, method
+    # otherwise return the original arguments unchanged
+    return llm_kwargs, additional_fn, method
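A hypothetical core_functional entry illustrating the new ModelOverride field; the button name and the other fields are placeholders, and only ModelOverride is what execute_model_override actually reads:

"Polish academic writing": {
    "Prefix": "Please polish the following paragraph: ",
    "Suffix": "",
    "ModelOverride": "gpt-4o",   # clicking this button silently switches the request to gpt-4o
},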
-def predict(inputs, llm_kwargs, *args, **kwargs):
+def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,
+            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
     """
     Send to the LLM and stream the output.
     Used for the basic chat functionality.
-    inputs is the input of this query
-    top_p, temperature are the LLM's internal tuning parameters
-    history is the list of previous turns; note that if inputs or history grow too long, a token-overflow error is triggered
-    chatbot is the conversation list shown in the WebUI; mutate it and yield to update the chat interface directly
-    additional_fn indicates which button was clicked; the buttons live in functional.py
+    Full parameter list:
+        predict(
+            inputs:str,                  # the input of this query
+            llm_kwargs:dict,             # the LLM's internal tuning parameters
+            plugin_kwargs:dict,          # the plugin's internal parameters
+            chatbot:ChatBotWithCookies,  # passed through as-is; renders the conversation to the frontend and tracks frontend state
+            history:list=[],             # the list of previous turns
+            system_prompt:str='',        # the silent system prompt
+            stream:bool=True,            # whether to stream the output (deprecated)
+            additional_fn:str=None       # the extra behaviour attached to a core-functionality button
+        )
     """
-    method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]  # if this line errors, check the AVAIL_LLM_MODELS option in config
-    yield from method(inputs, llm_kwargs, *args, **kwargs)
+    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
+    method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]  # if this line errors, check the AVAIL_LLM_MODELS option in config
+    if additional_fn:  # adjust the model according to the ModelOverride parameter of a core-functionality button
+        llm_kwargs, additional_fn, method = execute_model_override(llm_kwargs, additional_fn, method)
+    yield from method(inputs, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, stream, additional_fn)
@@ -56,15 +56,15 @@ class GetGLM2Handle(LocalLLMHandle):
         query, max_length, top_p, temperature, history = adaptor(kwargs)

         for response, history in self._model.stream_chat(self._tokenizer,
                                                          query,
                                                          history,
                                                          max_length=max_length,
                                                          top_p=top_p,
                                                          temperature=temperature,
                                                          ):
             yield response

     def try_to_import_special_deps(self, **kwargs):
         # import something that will raise error if the user does not install requirement_*.txt
         # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the main process
@@ -6,7 +6,6 @@ from toolbox import get_conf, ProxyNetworkActivate
 from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns

 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 Local Model
 # ------------------------------------------------------------------------------------------------------------------------
@@ -23,20 +22,45 @@ class GetGLM3Handle(LocalLLMHandle):
-        import os, glob
+        import os
         import platform
-        LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
-        if LOCAL_MODEL_QUANT == "INT4": # INT4
-            _model_name_ = "THUDM/chatglm3-6b-int4"
-        elif LOCAL_MODEL_QUANT == "INT8": # INT8
-            _model_name_ = "THUDM/chatglm3-6b-int8"
-        else:
-            _model_name_ = "THUDM/chatglm3-6b"  # FP16
-        with ProxyNetworkActivate('Download_LLM'):
-            chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True)
-            if device=='cpu':
-                chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True, device='cpu').float()
-            else:
-                chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True, device='cuda')
+        LOCAL_MODEL_QUANT, device = get_conf("LOCAL_MODEL_QUANT", "LOCAL_MODEL_DEVICE")
+        _model_name_ = "THUDM/chatglm3-6b"
+        # if LOCAL_MODEL_QUANT == "INT4": # INT4
+        #     _model_name_ = "THUDM/chatglm3-6b-int4"
+        # elif LOCAL_MODEL_QUANT == "INT8": # INT8
+        #     _model_name_ = "THUDM/chatglm3-6b-int8"
+        # else:
+        #     _model_name_ = "THUDM/chatglm3-6b" # FP16
+        with ProxyNetworkActivate("Download_LLM"):
+            chatglm_tokenizer = AutoTokenizer.from_pretrained(
+                _model_name_, trust_remote_code=True
+            )
+            if device == "cpu":
+                chatglm_model = AutoModel.from_pretrained(
+                    _model_name_,
+                    trust_remote_code=True,
+                    device="cpu",
+                ).float()
+            elif LOCAL_MODEL_QUANT == "INT4":  # INT4
+                chatglm_model = AutoModel.from_pretrained(
+                    pretrained_model_name_or_path=_model_name_,
+                    trust_remote_code=True,
+                    device="cuda",
+                    load_in_4bit=True,
+                )
+            elif LOCAL_MODEL_QUANT == "INT8":  # INT8
+                chatglm_model = AutoModel.from_pretrained(
+                    pretrained_model_name_or_path=_model_name_,
+                    trust_remote_code=True,
+                    device="cuda",
+                    load_in_8bit=True,
+                )
+            else:
+                chatglm_model = AutoModel.from_pretrained(
+                    pretrained_model_name_or_path=_model_name_,
+                    trust_remote_code=True,
+                    device="cuda",
+                )
         chatglm_model = chatglm_model.eval()

         self._model = chatglm_model
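Note that load_in_4bit / load_in_8bit go through the bitsandbytes integration in transformers (CUDA only), so that dependency must be installed for the quantized branches above. A sketch of the more explicit modern form, which passes a quantization config object instead; whether chatglm3's remote code accepts it is untested here:

from transformers import AutoModel, BitsAndBytesConfig
chatglm_model = AutoModel.from_pretrained(
    "THUDM/chatglm3-6b",
    trust_remote_code=True,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # or load_in_8bit=True
)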
@@ -46,32 +70,36 @@ class GetGLM3Handle(LocalLLMHandle):
     def llm_stream_generator(self, **kwargs):
         # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the subprocess
         def adaptor(kwargs):
-            query = kwargs['query']
-            max_length = kwargs['max_length']
-            top_p = kwargs['top_p']
-            temperature = kwargs['temperature']
-            history = kwargs['history']
+            query = kwargs["query"]
+            max_length = kwargs["max_length"]
+            top_p = kwargs["top_p"]
+            temperature = kwargs["temperature"]
+            history = kwargs["history"]
             return query, max_length, top_p, temperature, history

         query, max_length, top_p, temperature, history = adaptor(kwargs)

-        for response, history in self._model.stream_chat(self._tokenizer,
-                                                         query,
-                                                         history,
-                                                         max_length=max_length,
-                                                         top_p=top_p,
-                                                         temperature=temperature,
-                                                         ):
+        for response, history in self._model.stream_chat(
+            self._tokenizer,
+            query,
+            history,
+            max_length=max_length,
+            top_p=top_p,
+            temperature=temperature,
+        ):
             yield response

     def try_to_import_special_deps(self, **kwargs):
         # import something that will raise error if the user does not install requirement_*.txt
         # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the main process
         import importlib
         # importlib.import_module('modelscope')

 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 GPT-Academic Interface
 # ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetGLM3Handle, model_name, history_format='chatglm3')
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(
+    GetGLM3Handle, model_name, history_format="chatglm3"
+)
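For context, get_local_llm_predict_fns wraps the handle class into the same two entry points that every bridge exposes, so bridge_all.py can import them uniformly. A usage sketch under that assumption (the module path follows the repo layout; the llm_kwargs values are illustrative):

from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
llm_kwargs = {"max_length": 4096, "top_p": 1.0, "temperature": 1.0}   # illustrative
reply = predict_no_ui_long_connection("你好", llm_kwargs, history=[], sys_prompt="")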
@@ -37,7 +37,7 @@ class GetGLMFTHandle(Process):
         self.check_dependency()
         self.start()
         self.threadLock = threading.Lock()

     def check_dependency(self):
         try:
             import sentencepiece
@@ -101,7 +101,7 @@ class GetGLMFTHandle(Process):
                     break
                 except Exception as e:
                     retry += 1
                     if retry > 3:
                         self.child.send('[Local Message] Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数。')
                         raise RuntimeError("不能正常加载ChatGLMFT的参数")
@@ -113,7 +113,7 @@ class GetGLMFTHandle(Process):
                 for response, history in self.chatglmft_model.stream_chat(self.chatglmft_tokenizer, **kwargs):
                     self.child.send(response)
                     # # receive a possible termination command mid-stream (if any)
                     # if self.child.poll():
                     #     command = self.child.recv()
                     #     if command == '[Terminate]': break
             except:
@@ -133,11 +133,12 @@ class GetGLMFTHandle(Process):
             else:
                 break
         self.threadLock.release()

 global glmft_handle
 glmft_handle = None
 #################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
+                                  observe_window:list=[], console_slience:bool=False):
     """
     Multithreaded method; see request_llms/bridge_all.py for this function's documentation
     """
@@ -146,7 +147,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
     if glmft_handle is None:
         glmft_handle = GetGLMFTHandle()
         if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glmft_handle.info
         if not glmft_handle.success:
             error = glmft_handle.info
             glmft_handle = None
             raise RuntimeError(error)
@@ -161,7 +162,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
     response = ""
     for response in glmft_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         if len(observe_window) >= 1: observe_window[0] = response
         if len(observe_window) >= 2:
             if (time.time()-observe_window[1]) > watch_dog_patience:
                 raise RuntimeError("程序终止。")
     return response
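The observe_window checks above implement the watchdog contract used across the bridges: slot 0 carries the partial reply, slot 1 a caller-maintained heartbeat. A sketch from the caller's side (names illustrative):

import time
observe_window = ["", time.time()]   # [partial reply, last-alive heartbeat]
# a supervising thread keeps slot 1 fresh; if it goes stale for longer than
# watch_dog_patience, the loop above raises RuntimeError("程序终止。")
observe_window[1] = time.time()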
@@ -180,7 +181,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         glmft_handle = GetGLMFTHandle()
         chatbot[-1] = (inputs, load_message + "\n\n" + glmft_handle.info)
         yield from update_ui(chatbot=chatbot, history=[])
         if not glmft_handle.success:
             glmft_handle = None
             return
Some files were not shown because too many files changed in this diff.