Comparing commits

..

92 commits

Author SHA1 Message Commit date
binary-husky
f945a7bd19 preserve theme selection 2024-07-04 14:11:51 +00:00
binary-husky
379dcb2fa7 minor gui bug fix 2024-07-04 13:31:21 +00:00
binary-husky
30c905917a unify plugin calling 2024-07-02 15:32:40 +00:00
binary-husky
0c6c357e9c revise qwen 2024-07-02 14:22:45 +00:00
Menghuan1918
6cd2d80dfd Bug fix: Some non-standard forms of error return are not caught (#1877) 2024-07-01 20:35:49 +08:00
binary-husky
18d3245fc9 ready next gradio version 2024-06-29 15:29:48 +00:00
hcy2206
194e665a3b Add support for the iFlytek Spark 4.0 (讯飞星火 Spark4.0) LLM (#1875) 2024-06-29 23:20:04 +08:00
binary-husky
7e201c5028 move test file to correct position 2024-06-28 08:23:40 +00:00
binary-husky
00e5a31b50 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-06-27 06:50:06 +00:00
binary-husky
d8b9686eeb fix latex auto correct 2024-06-27 06:49:36 +00:00
Menghuan1918
25e06de1b6 Docker build bug fix (#1870) 2024-06-26 14:31:31 +08:00
binary-husky
0ad571e6b5 prevent further stream when reset is clicked 2024-06-25 11:43:14 +00:00
binary-husky
ddad5247fc upgrade searxng 2024-06-25 11:12:51 +00:00
binary-husky
ececfb9b6e test new dropdown js code 2024-06-25 08:34:50 +00:00
binary-husky
9f13c5cedf update default value of scroller_max_len 2024-06-25 05:34:55 +00:00
binary-husky
68b36042ce re-locate plugin 2024-06-25 05:32:20 +00:00
binary-husky
cac6c50d2f roll version 2024-06-19 12:56:23 +00:00
binary-husky
f884eb43cf Merge branch 'master' into frontier 2024-06-19 12:56:04 +00:00
binary-husky
d37383dd4e change arxiv cache dir path 2024-06-19 12:49:34 +00:00
binary-husky
dfae4e8081 optimize scolling visual effect 2024-06-19 12:42:11 +00:00
binary-husky
15cc08505f resolve safe pickle err 2024-06-19 11:59:47 +00:00
iluem
c5a82f6ab7 Merge pull request from GHSA-3jrq-66fm-w7xr 2024-06-19 14:29:21 +08:00
binary-husky
768ed4514a minor formatting issue 2024-06-18 14:51:53 +00:00
binary-husky
9dfbff7fd0 Merge branch 'GHSA-3jrq-66fm-w7xr' into frontier 2024-06-18 10:19:10 +00:00
binary-husky
47cedde954 fix security issue GHSA-3jrq-66fm-w7xr 2024-06-18 10:18:33 +00:00
binary-husky
1e16485087 internet gpt minor bug fix 2024-06-16 15:16:24 +00:00
binary-husky
f3660d669f internet GPT upgrade 2024-06-16 14:10:38 +00:00
binary-husky
e6d1cb09cb Merge branch 'master' into frontier 2024-06-16 13:47:15 +00:00
binary-husky
12aebf9707 searxng based information gathering 2024-06-16 12:12:57 +00:00
binary-husky
0b5385e5e5 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-06-12 09:34:12 +00:00
binary-husky
2ff1a1fb0b update translation matrix 2024-06-12 09:34:05 +00:00
Yuki
cdadd38cf7 feat: block access to openapi references while running under fastapi (#1849)
- block fastapi openapi reference(swagger and redoc) routes
2024-06-10 22:26:46 +08:00
binary-husky
48e10fb10a Update README.md 2024-06-10 22:22:04 +08:00
binary-husky
ba484c55a0 Merge branch 'master' into frontier 2024-06-10 14:19:26 +00:00
Frank Lee
ca64a592f5 Update zhipu models (#1852) 2024-06-10 22:17:51 +08:00
Guoxin Sun
cb96ca132a Update common.js (#1854)
fix typo
2024-06-10 22:17:27 +08:00
binary-husky
737101b81d remove debug msg 2024-06-07 17:00:05 +00:00
binary-husky
612caa2f5f revise 2024-06-07 16:50:27 +00:00
binary-husky
85dbe4a4bf pdf processing improvement 2024-06-07 15:53:08 +00:00
binary-husky
2262a4d80a taichu model fix 2024-06-06 09:35:05 +00:00
binary-husky
b456ff02ab add note 2024-06-06 09:14:32 +00:00
binary-husky
24a21ae320 support the 紫东太初 (Zidong Taichu) LLM 2024-06-06 09:05:06 +00:00
binary-husky
3d5790cc2c resolve fallback to non-multimodal problem 2024-06-06 08:00:30 +00:00
binary-husky
7de6015800 multimodal support for gpt-4o etc 2024-06-06 07:36:37 +00:00
binary-husky
46428b7c7a Merge branch 'master' into frontier 2024-06-01 16:22:32 +00:00
binary-husky
66a50c8019 live2d shutdown bug fix 2024-06-01 16:21:04 +00:00
Menghuan1918
814dc943ac Move the "生成多种图表" (generate multiple charts) plugin's advanced args into a secondary menu (#1839)
* Improve the prompts

* Update to new meun form

* Bug fix (wrong type of plugin_kwargs)
2024-06-01 13:34:33 +08:00
binary-husky
96cd1f0b25 secondary menu main input sync bug fix 2024-05-31 04:13:27 +00:00
binary-husky
4fc17f4add Merge branch 'master' into frontier 2024-05-30 15:00:44 +00:00
binary-husky
b3665d8fec remove check 2024-05-30 14:54:50 +00:00
binary-husky
80c4281888 TTS Default Enable 2024-05-30 14:27:18 +00:00
binary-husky
beda56abb0 update dockerfile 2024-05-30 12:44:17 +00:00
binary-husky
cb16941d01 update css 2024-05-30 12:35:47 +00:00
binary-husky
5cf9ac7849 Merge branch 'master' into frontier 2024-05-29 16:06:28 +00:00
binary-husky
51ddb88ceb correct hint err 2024-05-29 16:05:23 +00:00
binary-husky
69dfe5d514 compat to old void-terminal plugin 2024-05-29 15:50:00 +00:00
binary-husky
6819f87512 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-05-23 16:35:20 +00:00
binary-husky
3d51b9d5bb compat baichuan 2024-05-23 16:35:15 +00:00
QiyuanChen
bff87ada92 Add support for the ERNIE-Speed and ERNIE-Lite models (#1821)
* feat: add ERNIE-Speed and ERNIE-Lite

Baidu's ERNIE-Speed and ERNIE-Lite models are now free to use, so their endpoints were added; they can be accessed as ERNIE-Speed-128K, ERNIE-Speed-8K, and ERNIE-Lite-8K

* chore: Modify supported models in config.py

Updated the list of Qianfan-supported models in config.py, adding the three free models
2024-05-24 00:16:26 +08:00
binary-husky
a938412b6f save conversation wrap 2024-05-23 15:58:59 +00:00
binary-husky
a48acf6fec Flex Btn Bug Fix 2024-05-22 08:38:40 +00:00
binary-husky
c6b9ab5214 add document 2024-05-22 06:39:56 +00:00
binary-husky
aa3332de69 add document 2024-05-22 06:27:26 +00:00
binary-husky
d43175d46d fix type hint 2024-05-21 13:18:38 +00:00
binary-husky
8ca9232db2 Merge branch 'master' into frontier 2024-05-21 12:27:01 +00:00
binary-husky
1339aa0e1a doc2x latex convertion 2024-05-21 12:24:50 +00:00
binary-husky
f41419e767 update demo 2024-05-21 11:12:08 +00:00
binary-husky
d88c585305 improve latex plugin 2024-05-21 10:47:50 +00:00
binary-husky
0a88d18c7a secondary menu for pdf trans 2024-05-21 08:51:29 +00:00
binary-husky
0d0edc2216 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-05-19 21:54:16 +08:00
binary-husky
5e0875fcf4 from backend to front end 2024-05-19 21:54:06 +08:00
Shixian Sheng
c508b84db8 Update README.md (#1810) 2024-05-19 20:41:17 +08:00
Menghuan1918
f2b67602bb Add the FFmpeg dependency to the Docker build (#1807)
* Test: change dockerfile to install ffmpeg

* Add the ffmpeg to dockerfile (required by edge-tts)
2024-05-19 14:27:55 +08:00
binary-husky
29daba5d2f success? 2024-05-18 23:03:28 +08:00
binary-husky
9477824ac1 improve css 2024-05-18 21:54:15 +08:00
binary-husky
459c5b2d24 plugin refactor: phase 1 2024-05-18 20:23:50 +08:00
binary-husky
abf9b5aee5 Merge branch 'master' into frontier 2024-05-18 15:52:08 +08:00
binary-husky
2ce4482146 fix new ModelOverride fn bug 2024-05-18 15:47:25 +08:00
binary-husky
4282b83035 change TTS default to DISABLE 2024-05-18 15:43:35 +08:00
binary-husky
537be57c9b fix tts bugs 2024-05-17 21:07:28 +08:00
binary-husky
3aa92d6c80 change main ui hint 2024-05-17 11:34:13 +08:00
awwaawwa
b7eb9aba49 [Feature]: allow model mutex override in core_functional.py (#1708)
* allow_core_func_specify_model

* change arg name

* model override supports hot reload & raise an error when the override points to a nonexistent model

* allow model mutex override

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-17 11:15:23 +08:00
hongyi-zhao
881a596a30 model support (gpt4o) in project. (#1760)
* Add the environment variable: OPEN_BROWSER

* Add configurable browser launching with custom arguments

- Update `config.py` to include options for specifying the browser and its arguments for opening URLs.
- Modify `main.py` to use the configured browser settings from `config.py` to launch the web page.
- Enhance `config_loader.py` to process path-like strings by expanding and normalizing paths, which supports the configuration improvements.

* Add support for the following models:

"gpt-4o", "gpt-4o-2024-05-13"

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-14 17:01:32 +08:00
binary-husky
1b3c331d01 dos2unix 2024-05-14 12:02:40 +08:00
binary-husky
70d5f2a7df arg name err patch 2024-05-13 23:40:35 +08:00
Menghuan1918
fd2f8b9090 Provide a new fast and simple way of accessing APIs (As example: Yi-models,Deepseek) (#1782)
* deal with the message part

* Finish no_ui_connect

* finish predict part

* Delete old version

* An example of add new api

* Bug fix:can not change in "model_info"

* Bug fix

* Error message handling

* Clear the format

* An example of add a openai form API:Deepseek

* For compatibility reasons

* Feture: set different API/Endpoint to diferent models

* Add support for YI new models

* Update doc2x's API key mechanism (#1766)

* Fix DOC2X API key refresh issue in PDF translation

* remove add

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Rename some files and variables

* patch err

---------

Co-authored-by: alex_xiao <113411296+Alex4210987@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-13 23:38:08 +08:00
binary-husky
225a2de011 Version 3.76 (#1752)
* version roll

* add upload processbar
2024-05-13 22:54:38 +08:00
binary-husky
6aea6d8e2b Merge branch 'master' into frontier 2024-05-13 22:52:15 +08:00
alex_xiao
8d85616c27 Update doc2x's API key mechanism (#1766)
* Fix DOC2X API key refresh issue in PDF translation

* remove add

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-05-13 22:49:40 +08:00
binary-husky
e4533dd24d Merge branch 'master' into frontier 2024-05-04 17:00:09 +08:00
binary-husky
fa15059f07 add upload processbar 2024-05-01 01:11:35 +08:00
binary-husky
685c573619 version roll 2024-04-30 21:00:25 +08:00
Showing 84 changed files, with 4,632 additions and 1,972 deletions

.gitignore vendored (4 changes)

@@ -153,4 +153,6 @@ media
 flagged
 request_llms/ChatGLM-6b-onnx-u8s8
 .pre-commit-config.yaml
-themes/common.js.min.*.js
+test.html
+objdump*
+*.min.*.js

Dockerfile

@@ -12,11 +12,16 @@ RUN echo '[global]' > /etc/pip.conf && \
     echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
+
+# 语音输出功能(以下两行,第一行更换阿里源,第二行安装ffmpeg,都可以删除)
+RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
+RUN apt-get install ffmpeg -y
+
 # 进入工作路径(必要)
 WORKDIR /gpt

 # 安装大部分依赖,利用Docker缓存加速以后的构建 (以下行,可以删除)
 COPY requirements.txt ./
 RUN pip3 install -r requirements.txt

README.md

@@ -1,7 +1,7 @@
 > [!IMPORTANT]
+> 2024.6.1: 版本3.80加入插件二级菜单功能(详见wiki)
 > 2024.5.1: 加入Doc2x翻译PDF论文的功能,[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
-> 2024.4.30: 3.75版本引入Edge-TTS和SoVits语音克隆模块,[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)
-> 2024.3.11: 恭迎Claude3和Moonshot,全力支持Qwen、GLM、DeepseekCoder等中文大语言模型
+> 2024.3.11: 全力支持Qwen、GLM、DeepseekCoder等中文大语言模型!SoVits语音克隆模块,[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)
 > 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
 <br>
@@ -67,7 +67,7 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
 读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [插件] 一键解读latex/pdf论文全文并生成摘要
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
 批量注释生成 | [插件] 一键批量生成函数注释
-Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
+Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README.English.md)了吗?就是出自他的手笔
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
 [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF

check_proxy.py

@@ -1,33 +1,44 @@
-def check_proxy(proxies):
+def check_proxy(proxies, return_ip=False):
     import requests
     proxies_https = proxies['https'] if proxies is not None else ''
+    ip = None
     try:
         response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
         data = response.json()
         if 'country_name' in data:
             country = data['country_name']
             result = f"代理配置 {proxies_https}, 代理所在地:{country}"
+            if 'ip' in data: ip = data['ip']
         elif 'error' in data:
-            alternative = _check_with_backup_source(proxies)
+            alternative, ip = _check_with_backup_source(proxies)
             if alternative is None:
                 result = f"代理配置 {proxies_https}, 代理所在地未知,IP查询频率受限"
             else:
                 result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
         else:
             result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
-        print(result)
-        return result
+        if not return_ip:
+            print(result)
+            return result
+        else:
+            return ip
     except:
         result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
-        print(result)
-        return result
+        if not return_ip:
+            print(result)
+            return result
+        else:
+            return ip


 def _check_with_backup_source(proxies):
     import random, string, requests
     random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
-    try: return requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()['dns']['geo']
-    except: return None
+    try:
+        res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()
+        return res_json['dns']['geo'], res_json['dns']['ip']
+    except:
+        return None, None


 def backup_and_download(current_version, remote_version):
     """
@@ -71,7 +82,7 @@ def patch_and_restart(path):
     import sys
     import time
     import glob
-    from colorful import print亮黄, print亮绿, print亮红
+    from shared_utils.colorful import print亮黄, print亮绿, print亮红
     # if not using config_private, move origin config.py as config_private.py
     if not os.path.exists('config_private.py'):
         print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
@@ -124,7 +135,7 @@ def auto_update(raise_error=False):
             current_version = f.read()
         current_version = json.loads(current_version)['version']
         if (remote_version - current_version) >= 0.01-1e-5:
-            from colorful import print亮黄
+            from shared_utils.colorful import print亮黄
             print亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}{new_feature}')
             print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
             user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
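The new return_ip flag turns check_proxy into a dual-purpose helper. A minimal sketch of both call styles (the proxies value here is hypothetical; the Internet_GPT.py file further below uses the IP form to fill client-IP headers):

from check_proxy import check_proxy

proxies = {"https": "http://127.0.0.1:7890"}  # hypothetical proxy config

# Default mode: prints and returns a human-readable location string.
location = check_proxy(proxies)

# New mode: silently returns only the outbound IP, or None if both
# the primary and the backup lookup fail.
ip = check_proxy(proxies, return_ip=True)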

config.py

@@ -32,7 +32,8 @@ else:
 # [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
 LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
+AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
+                    "gpt-4o", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
                     "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
                     "gemini-pro", "chatglm3"
@@ -40,14 +41,16 @@ AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-p
 # --- --- --- ---
 # P.S. 其他可用的模型还包括
 # AVAIL_LLM_MODELS = [
+#   "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
 #   "qianfan", "deepseekcoder",
-#   "spark", "sparkv2", "sparkv3", "sparkv3.5",
+#   "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
 #   "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
 #   "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
-#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125"
+#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
 #   "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
 #   "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
-#   "yi-34b-chat-0205", "yi-34b-chat-200k"
+#   "deepseek-chat" ,"deepseek-coder",
+#   "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
 # ]
 # --- --- --- ---
 # 此外,您还可以在接入one-api/vllm/ollama时,
@@ -103,6 +106,10 @@ TIMEOUT_SECONDS = 30
 WEB_PORT = -1

+# 是否自动打开浏览器页面
+AUTO_OPEN_BROWSER = True
+
 # 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
 MAX_RETRY = 2
@@ -128,7 +135,7 @@ DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
 # 百度千帆(LLM_MODEL="qianfan")
 BAIDU_CLOUD_API_KEY = ''
 BAIDU_CLOUD_SECRET_KEY = ''
-BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'  # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat"
+BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'  # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"

 # 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
@@ -196,7 +203,7 @@ ALIYUN_SECRET="" # (无需填写)
 # GPT-SOVITS 文本转语音服务的运行地址(将语言模型的生成文本朗读出来)
-TTS_TYPE = "DISABLE" # LOCAL / LOCAL_SOVITS_API / DISABLE
+TTS_TYPE = "EDGE_TTS" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
 GPT_SOVITS_URL = ""
 EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
@@ -224,6 +231,14 @@ MOONSHOT_API_KEY = ""
 YIMODEL_API_KEY = ""

+# 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
+DEEPSEEK_API_KEY = ""
+
+# 紫东太初大模型 https://ai-maas.wair.ac.cn
+TAICHU_API_KEY = ""
+
 # Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
 MATHPIX_APPID = ""
 MATHPIX_APPKEY = ""
@@ -254,6 +269,10 @@ GROBID_URLS = [
 ]

+# Searxng互联网检索服务
+SEARXNG_URL = "https://cloud-1.agent-matrix.com/"
+
 # 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
 ALLOW_RESET_CONFIG = False
@@ -262,23 +281,23 @@ ALLOW_RESET_CONFIG = False
 AUTOGEN_USE_DOCKER = False

-# 临时的上传文件夹位置,请勿修改
+# 临时的上传文件夹位置,请尽量不要修改
 PATH_PRIVATE_UPLOAD = "private_upload"

-# 日志文件夹的位置,请勿修改
+# 日志文件夹的位置,请尽量不要修改
 PATH_LOGGING = "gpt_log"

+# 存储翻译好的arxiv论文的路径,请尽量不要修改
+ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
+
-# 除了连接OpenAI之外,还有哪些场合允许使用代理,请勿修改
+# 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
 WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
                      "Warmup_Modules", "Nougat_Download", "AutoGen"]

-# *实验性功能*: 自动检测并屏蔽失效的KEY,请勿使用
-BLOCK_INVALID_APIKEY = False
-
 # 启用插件热加载
 PLUGIN_HOT_RELOAD = False
@@ -375,6 +394,9 @@ NUM_CUSTOM_BASIC_BTN = 4
     插件在线服务配置依赖关系示意图

+    ├── 互联网检索
+    │   └── SEARXNG_URL
+
     ├── 语音功能
     │   ├── ENABLE_AUDIO
     │   ├── ALIYUN_TOKEN
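All of the keys added above are read through the project's usual get_conf accessor; a small sketch (hypothetical call site) of how a plugin would pick them up:

from toolbox import get_conf

searxng_url = get_conf("SEARXNG_URL")        # new: Searxng search service
arxiv_cache = get_conf("ARXIV_CACHE_DIR")    # new: arxiv translation cache path
# get_conf also accepts several keys at once and returns a tuple:
deepseek_key, taichu_key = get_conf("DEEPSEEK_API_KEY", "TAICHU_API_KEY")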

core_functional.py

@@ -33,6 +33,8 @@ def get_core_functions():
         "AutoClearHistory": False,
         # [6] 文本预处理 (可选参数,默认 None,举例:写个函数移除所有的换行符)
         "PreProcess": None,
+        # [7] 模型选择 (可选参数。如不设置,则使用当前全局模型;如设置,则用指定模型覆盖全局模型。)
+        # "ModelOverride": "gpt-3.5-turbo", # 主要用途:强制点击此基础功能按钮时,使用指定的模型。
     },
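A sketch of the shape of a core-function entry using the new [7] field (the button name and prompt below are illustrative, not taken from this diff):

"英语学术润色": {
    "Prefix": "Below is a paragraph from an academic paper. Polish the writing:\n\n",
    "Suffix": "",
    # Force this basic-function button to use a specific model instead of
    # the globally selected one:
    "ModelOverride": "gpt-4o",
},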

crazy_functional.py

@@ -15,32 +15,43 @@ def get_crazy_functions():
     from crazy_functions.解析项目源代码 import 解析一个Java项目
     from crazy_functions.解析项目源代码 import 解析一个前端项目
     from crazy_functions.高级功能函数模板 import 高阶功能模板函数
+    from crazy_functions.高级功能函数模板 import Demo_Wrap
     from crazy_functions.Latex全文润色 import Latex英文润色
     from crazy_functions.询问多个大语言模型 import 同时问询
     from crazy_functions.解析项目源代码 import 解析一个Lua项目
     from crazy_functions.解析项目源代码 import 解析一个CSharp项目
     from crazy_functions.总结word文档 import 总结word文档
     from crazy_functions.解析JupyterNotebook import 解析ipynb文件
-    from crazy_functions.对话历史存档 import 对话历史存档
-    from crazy_functions.对话历史存档 import 载入对话历史存档
-    from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
+    from crazy_functions.Conversation_To_File import 载入对话历史存档
+    from crazy_functions.Conversation_To_File import 对话历史存档
+    from crazy_functions.Conversation_To_File import Conversation_To_File_Wrap
+    from crazy_functions.Conversation_To_File import 删除所有本地对话历史记录
     from crazy_functions.辅助功能 import 清除缓存
-    from crazy_functions.批量Markdown翻译 import Markdown英译中
+    from crazy_functions.Markdown_Translate import Markdown英译中
     from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
-    from crazy_functions.PDF批量翻译 import 批量翻译PDF文档
+    from crazy_functions.PDF_Translate import 批量翻译PDF文档
     from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
     from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
     from crazy_functions.Latex全文润色 import Latex中文润色
     from crazy_functions.Latex全文润色 import Latex英文纠错
-    from crazy_functions.批量Markdown翻译 import Markdown中译英
+    from crazy_functions.Markdown_Translate import Markdown中译英
     from crazy_functions.虚空终端 import 虚空终端
-    from crazy_functions.生成多种Mermaid图表 import 生成多种Mermaid图表
+    from crazy_functions.生成多种Mermaid图表 import Mermaid_Gen
+    from crazy_functions.PDF_Translate_Wrap import PDF_Tran
+    from crazy_functions.Latex_Function import Latex英文纠错加PDF对比
+    from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF
+    from crazy_functions.Latex_Function import PDF翻译中文并重新编译PDF
+    from crazy_functions.Latex_Function_Wrap import Arxiv_Localize
+    from crazy_functions.Latex_Function_Wrap import PDF_Localize
+    from crazy_functions.Internet_GPT import 连接网络回答问题
+    from crazy_functions.Internet_GPT_Wrap import NetworkGPT_Wrap

     function_plugins = {
         "虚空终端": {
             "Group": "对话|编程|学术|智能体",
             "Color": "stop",
             "AsButton": True,
+            "Info": "使用自然语言实现您的想法",
             "Function": HotReload(虚空终端),
         },
         "解析整个Python项目": {
@@ -75,14 +86,21 @@ def get_crazy_functions():
             "Color": "stop",
             "AsButton": False,
             "Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
-            "Function": HotReload(生成多种Mermaid图表),
-            "AdvancedArgs": True,
-            "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
+            "Function": None,
+            "Class": Mermaid_Gen
         },
+        "Arxiv论文翻译": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": True,
+            "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
+            "Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
+            "Class": Arxiv_Localize, # 新一代插件需要注册Class
+        },
         "批量总结Word文档": {
             "Group": "学术",
             "Color": "stop",
-            "AsButton": True,
+            "AsButton": False,
             "Info": "批量总结word文档 | 输入参数为路径",
             "Function": HotReload(总结word文档),
         },
@@ -188,28 +206,42 @@ def get_crazy_functions():
         },
         "保存当前的对话": {
             "Group": "对话",
+            "Color": "stop",
             "AsButton": True,
             "Info": "保存当前的对话 | 不需要输入参数",
-            "Function": HotReload(对话历史存档),
+            "Function": HotReload(对话历史存档), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
+            "Class": Conversation_To_File_Wrap # 新一代插件需要注册Class
         },
         "[多线程Demo]解析此项目本身(源码自译解)": {
             "Group": "对话|编程",
+            "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
             "Function": HotReload(解析项目本身),
         },
+        "查互联网后回答": {
+            "Group": "对话",
+            "Color": "stop",
+            "AsButton": True,  # 加入下拉菜单中
+            # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
+            "Function": HotReload(连接网络回答问题),
+            "Class": NetworkGPT_Wrap # 新一代插件需要注册Class
+        },
         "历史上的今天": {
             "Group": "对话",
-            "AsButton": True,
+            "Color": "stop",
+            "AsButton": False,
             "Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
-            "Function": HotReload(高阶功能模板函数),
+            "Function": None,
+            "Class": Demo_Wrap, # 新一代插件需要注册Class
         },
         "精准翻译PDF论文": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
-            "Function": HotReload(批量翻译PDF文档),
+            "Function": HotReload(批量翻译PDF文档), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
+            "Class": PDF_Tran, # 新一代插件需要注册Class
         },
         "询问多个GPT模型": {
             "Group": "对话",
@@ -284,8 +316,52 @@ def get_crazy_functions():
             "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Markdown中译英),
         },
+        "Latex英文纠错+高亮修正位置 [需Latex]": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True,
+            "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
+            "Function": HotReload(Latex英文纠错加PDF对比),
+        },
+        "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True,
+            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
+            "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
+            "Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
+            "Class": Arxiv_Localize, # 新一代插件需要注册Class
+        },
+        "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True,
+            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
+            "Info": "本地Latex论文精细翻译 | 输入参数是路径",
+            "Function": HotReload(Latex翻译中文并重新编译PDF),
+        },
+        "PDF翻译中文并重新编译PDF(上传PDF)[需Latex]": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True,
+            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
+            "Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
+            "Function": HotReload(PDF翻译中文并重新编译PDF), # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
+            "Class": PDF_Localize # 新一代插件需要注册Class
+        }
     }

     # -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
     try:
         from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
@@ -305,36 +381,36 @@ def get_crazy_functions():
         print(trimmed_format_exc())
         print("Load function plugin failed")

-    try:
-        from crazy_functions.联网的ChatGPT import 连接网络回答问题
-        function_plugins.update(
-            {
-                "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
-                    "Group": "对话",
-                    "Color": "stop",
-                    "AsButton": False,  # 加入下拉菜单中
-                    # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
-                    "Function": HotReload(连接网络回答问题),
-                }
-            }
-        )
-        from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
-        function_plugins.update(
-            {
-                "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
-                    "Group": "对话",
-                    "Color": "stop",
-                    "AsButton": False,  # 加入下拉菜单中
-                    "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
-                    "Function": HotReload(连接bing搜索回答问题),
-                }
-            }
-        )
-    except:
-        print(trimmed_format_exc())
-        print("Load function plugin failed")
+    # try:
+    #     from crazy_functions.联网的ChatGPT import 连接网络回答问题
+    #     function_plugins.update(
+    #         {
+    #             "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
+    #                 "Group": "对话",
+    #                 "Color": "stop",
+    #                 "AsButton": False,  # 加入下拉菜单中
+    #                 # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
+    #                 "Function": HotReload(连接网络回答问题),
+    #             }
+    #         }
+    #     )
+    #     from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
+    #     function_plugins.update(
+    #         {
+    #             "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
+    #                 "Group": "对话",
+    #                 "Color": "stop",
+    #                 "AsButton": False,  # 加入下拉菜单中
+    #                 "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
+    #                 "Function": HotReload(连接bing搜索回答问题),
+    #             }
+    #         }
+    #     )
+    # except:
+    #     print(trimmed_format_exc())
+    #     print("Load function plugin failed")

     try:
         from crazy_functions.解析项目源代码 import 解析任意code项目
@@ -363,7 +439,7 @@ def get_crazy_functions():
             "询问多个GPT模型(手动指定询问哪些模型)": {
                 "Group": "对话",
                 "Color": "stop",
-                "AsButton": False,
+                "AsButton": True,
                 "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
                 "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4",  # 高级参数输入区的显示提示
                 "Function": HotReload(同时问询_指定模型),
@@ -458,7 +534,7 @@ def get_crazy_functions():
         print("Load function plugin failed")

     try:
-        from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
+        from crazy_functions.Markdown_Translate import Markdown翻译指定语言
         function_plugins.update(
             {
@@ -531,59 +607,6 @@ def get_crazy_functions():
         print(trimmed_format_exc())
         print("Load function plugin failed")

-    try:
-        from crazy_functions.Latex输出PDF import Latex英文纠错加PDF对比
-        from crazy_functions.Latex输出PDF import Latex翻译中文并重新编译PDF
-        from crazy_functions.Latex输出PDF import PDF翻译中文并重新编译PDF
-        function_plugins.update(
-            {
-                "Latex英文纠错+高亮修正位置 [需Latex]": {
-                    "Group": "学术",
-                    "Color": "stop",
-                    "AsButton": False,
-                    "AdvancedArgs": True,
-                    "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
-                    "Function": HotReload(Latex英文纠错加PDF对比),
-                },
-                "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
-                    "Group": "学术",
-                    "Color": "stop",
-                    "AsButton": False,
-                    "AdvancedArgs": True,
-                    "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                                    r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                                    r'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                    "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
-                    "Function": HotReload(Latex翻译中文并重新编译PDF),
-                },
-                "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
-                    "Group": "学术",
-                    "Color": "stop",
-                    "AsButton": False,
-                    "AdvancedArgs": True,
-                    "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                                    r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                                    r'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                    "Info": "本地Latex论文精细翻译 | 输入参数是路径",
-                    "Function": HotReload(Latex翻译中文并重新编译PDF),
-                },
-                "PDF翻译中文并重新编译PDF(上传PDF)[需Latex]": {
-                    "Group": "学术",
-                    "Color": "stop",
-                    "AsButton": False,
-                    "AdvancedArgs": True,
-                    "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                                    r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                                    r'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                    "Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
-                    "Function": HotReload(PDF翻译中文并重新编译PDF)
-                }
-            }
-        )
-    except:
-        print(trimmed_format_exc())
-        print("Load function plugin failed")

     try:
         from toolbox import get_conf
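The pattern running through these hunks: plugins gain a registered Class (a GptAcademicPluginTemplate subclass) that drives the new secondary-menu UI, while the old Function entry is kept only so the 虚空终端 can still invoke them. Schematically (the names here are placeholders, not real plugins):

function_plugins["示例插件"] = {
    "Group": "对话",
    "Color": "stop",
    "AsButton": False,
    "Function": HotReload(示例插件_旧入口),  # legacy entry point, used by 虚空终端 once a Class exists
    "Class": 示例插件_Wrap,                 # new-generation entry point with a secondary options menu
}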

crazy_functions/Conversation_To_File.py

@@ -1,4 +1,5 @@
 from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
+from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
 import re

 f_prefix = 'GPT-Academic对话存档'
@@ -9,27 +10,61 @@ def write_chat_to_file(chatbot, history=None, file_name=None):
     """
     import os
     import time
+    from themes.theme import advanced_css
     if file_name is None:
         file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
     fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)
     with open(fp, 'w', encoding='utf8') as f:
-        from themes.theme import advanced_css
-        f.write(f'<!DOCTYPE html><head><meta charset="utf-8"><title>对话历史</title><style>{advanced_css}</style></head>')
+        from textwrap import dedent
+        form = dedent("""
+            <!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
+            <body>
+            <div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
+            <div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
+                <div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
+                {CHAT_PREVIEW}
+                <div></div>
+                <div></div>
+                <div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话原始数据</div>
+                {HISTORY_PREVIEW}
+                </div>
+            </div>
+            <div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
+            </body>
+        """)
+        qa_from = dedent("""
+            <div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
+                <div class="Question" style="border-radius: 2px;">{QUESTION}</div>
+                <hr color="blue" style="border-top: dotted 2px #ccc;">
+                <div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
+            </div>
+        """)
+        history_from = dedent("""
+            <div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
+                <div class="entry" style="border-radius: 2px;">{ENTRY}</div>
+            </div>
+        """)
+        CHAT_PREVIEW_BUF = ""
         for i, contents in enumerate(chatbot):
-            for j, content in enumerate(contents):
-                try:    # 这个bug没找到触发条件,暂时先这样顶一下
-                    if type(content) != str: content = str(content)
-                except:
-                    continue
-                f.write(content)
-                if j == 0:
-                    f.write('<hr style="border-top: dotted 3px #ccc;">')
-            f.write('<hr color="red"> \n\n')
-        f.write('<hr color="blue"> \n\n raw chat context:\n')
-        f.write('<code>')
+            question, answer = contents[0], contents[1]
+            if question is None: question = ""
+            try: question = str(question)
+            except: question = ""
+            if answer is None: answer = ""
+            try: answer = str(answer)
+            except: answer = ""
+            CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)
+
+        HISTORY_PREVIEW_BUF = ""
         for h in history:
-            f.write("\n>>>" + h)
-        f.write('</code>')
+            HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)
+
+        html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)
+        f.write(html_content)
     promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
     return '对话历史写入:' + fp
@@ -40,7 +75,7 @@ def gen_file_preview(file_name):
         # pattern to match the text between <head> and </head>
         pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
         file_content = re.sub(pattern, '', file_content)
-        html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
+        html, history = file_content.split('<hr color="blue"> \n\n 对话数据 (无渲染):\n')
         history = history.strip('<code>')
         history = history.strip('</code>')
         history = history.split("\n>>>")
@@ -51,21 +86,25 @@ def gen_file_preview(file_name):
 def read_file_to_chat(chatbot, history, file_name):
     with open(file_name, 'r', encoding='utf8') as f:
         file_content = f.read()
-    # pattern to match the text between <head> and </head>
-    pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
-    file_content = re.sub(pattern, '', file_content)
-    html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
-    history = history.strip('<code>')
-    history = history.strip('</code>')
-    history = history.split("\n>>>")
-    history = list(filter(lambda x:x!="", history))
-    html = html.split('<hr color="red"> \n\n')
-    html = list(filter(lambda x:x!="", html))
+    from bs4 import BeautifulSoup
+    soup = BeautifulSoup(file_content, 'lxml')
+    # 提取QaBox信息
     chatbot.clear()
-    for i, h in enumerate(html):
-        i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
-        chatbot.append([i_say, gpt_say])
-    chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
+    qa_box_list = []
+    qa_boxes = soup.find_all("div", class_="QaBox")
+    for box in qa_boxes:
+        question = box.find("div", class_="Question").get_text(strip=False)
+        answer = box.find("div", class_="Answer").get_text(strip=False)
+        qa_box_list.append({"Question": question, "Answer": answer})
+        chatbot.append([question, answer])
+    # 提取historyBox信息
+    history_box_list = []
+    history_boxes = soup.find_all("div", class_="historyBox")
+    for box in history_boxes:
+        entry = box.find("div", class_="entry").get_text(strip=False)
+        history_box_list.append(entry)
+    history = history_box_list
+    chatbot.append([None, f"[Local Message] 载入对话{len(qa_box_list)}条,上下文{len(history)}条。"])
     return chatbot, history

 @CatchException
@@ -79,11 +118,42 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     system_prompt 给gpt的静默提醒
     user_request 当前用户的请求信息(IP地址等)
     """
-    chatbot.append(("保存当前对话",
-                    f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
+    file_name = plugin_kwargs.get("file_name", None)
+    if (file_name is not None) and (file_name != "") and (not file_name.endswith('.html')): file_name += '.html'
+    else: file_name = None
+    chatbot.append((None, f"[Local Message] {write_chat_to_file(chatbot, history, file_name)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话"))
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
+
+
+class Conversation_To_File_Wrap(GptAcademicPluginTemplate):
+    def __init__(self):
+        """
+        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
+        """
+        pass
+
+    def define_arg_selection_menu(self):
+        """
+        定义插件的二级选项菜单
+        第一个参数,名称`file_name`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
+        """
+        gui_definition = {
+            "file_name": ArgProperty(title="保存文件名", description="输入对话存档文件名,留空则使用时间作为文件名", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
+        }
+        return gui_definition
+
+    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+        """
+        执行插件
+        """
+        yield from 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

 def hide_cwd(str):
     import os
     current_path = os.getcwd()
@@ -148,5 +218,3 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
     chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
     return
-
-
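For orientation: values entered in the secondary menu defined by define_arg_selection_menu arrive in the plugin body as plugin_kwargs keys, so the rewritten archiver sees, for example (hypothetical user input):

# Hypothetical plugin_kwargs produced by the secondary menu:
plugin_kwargs = {"file_name": "my_archive"}

# Inside 对话历史存档, the normalization shown above then runs:
file_name = plugin_kwargs.get("file_name", None)
if (file_name is not None) and (file_name != "") and (not file_name.endswith('.html')):
    file_name += '.html'   # -> "my_archive.html"
else:
    file_name = None       # anything else falls back to a timestamped name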

crazy_functions/Internet_GPT.py (new file)

@@ -0,0 +1,142 @@
+from toolbox import CatchException, update_ui, get_conf
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
+import requests
+from bs4 import BeautifulSoup
+from request_llms.bridge_all import model_info
+import urllib.request
+import random
+from functools import lru_cache
+from check_proxy import check_proxy
+
+@lru_cache
+def get_auth_ip():
+    ip = check_proxy(None, return_ip=True)
+    if ip is None:
+        return '114.114.114.' + str(random.randint(1, 10))
+    return ip
+
+def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
+    if searxng_url is None:
+        url = get_conf("SEARXNG_URL")
+    else:
+        url = searxng_url
+
+    if engines is None:
+        engines = 'bing'
+
+    if categories == 'general':
+        params = {
+            'q': query,         # 搜索查询
+            'format': 'json',   # 输出格式为JSON
+            'language': 'zh',   # 搜索语言
+            'engines': engines,
+        }
+    elif categories == 'science':
+        params = {
+            'q': query,         # 搜索查询
+            'format': 'json',   # 输出格式为JSON
+            'language': 'zh',   # 搜索语言
+            'categories': 'science'
+        }
+    else:
+        raise ValueError('不支持的检索类型')
+
+    headers = {
+        'Accept-Language': 'zh-CN,zh;q=0.9',
+        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
+        'X-Forwarded-For': get_auth_ip(),
+        'X-Real-IP': get_auth_ip()
+    }
+    results = []
+    response = requests.post(url, params=params, headers=headers, proxies=proxies, timeout=30)
+    if response.status_code == 200:
+        json_result = response.json()
+        for result in json_result['results']:
+            item = {
+                "title": result.get("title", ""),
+                "source": result.get("engines", "unknown"),
+                "content": result.get("content", ""),
+                "link": result["url"],
+            }
+            results.append(item)
+        return results
+    else:
+        if response.status_code == 429:
+            raise ValueError("Searxng(在线搜索服务)当前使用人数太多,请稍后。")
+        else:
+            raise ValueError("在线搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))
+
+def scrape_text(url, proxies) -> str:
+    """Scrape text from a webpage
+
+    Args:
+        url (str): The URL to scrape text from
+
+    Returns:
+        str: The scraped text
+    """
+    headers = {
+        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
+        'Content-Type': 'text/plain',
+    }
+    try:
+        response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
+        if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
+    except:
+        return "无法连接到该网页"
+    soup = BeautifulSoup(response.text, "html.parser")
+    for script in soup(["script", "style"]):
+        script.extract()
+    text = soup.get_text()
+    lines = (line.strip() for line in text.splitlines())
+    chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
+    text = "\n".join(chunk for chunk in chunks if chunk)
+    return text
+
+@CatchException
+def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+    history = []    # 清空历史,以免输入溢出
+    chatbot.append((f"请结合互联网信息回答以下问题:{txt}", "检索中..."))
+    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+
+    # ------------- < 第1步:爬取搜索引擎的结果 > -------------
+    from toolbox import get_conf
+    proxies = get_conf('proxies')
+    categories = plugin_kwargs.get('categories', 'general')
+    searxng_url = plugin_kwargs.get('searxng_url', None)
+    engines = plugin_kwargs.get('engine', None)
+    urls = searxng_request(txt, proxies, categories, searxng_url, engines=engines)
+    history = []
+    if len(urls) == 0:
+        chatbot.append((f"结论:{txt}",
+                        "[Local Message] 受到限制,无法从searxng获取信息!请尝试更换搜索引擎。"))
+        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+        return
+
+    # ------------- < 第2步:依次访问网页 > -------------
+    max_search_result = 5   # 最多收纳多少个网页的结果
+    chatbot.append([f"联网检索中 ...", None])
+    for index, url in enumerate(urls[:max_search_result]):
+        res = scrape_text(url['link'], proxies)
+        prefix = f"第{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
+        history.extend([prefix, res])
+        res_squeeze = res.replace('\n', '...')
+        chatbot[-1] = [prefix + "\n\n" + res_squeeze[:500] + "......", None]
+        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+
+    # ------------- < 第3步:ChatGPT综合 > -------------
+    i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
+    i_say, history = input_clipping(    # 裁剪输入,从最长的条目开始裁剪,防止爆token
+        inputs=i_say,
+        history=history,
+        max_token_limit=min(model_info[llm_kwargs['llm_model']]['max_token']*3//4, 8192)
+    )
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=i_say, inputs_show_user=i_say,
+        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+        sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
+    )
+    chatbot[-1] = (i_say, gpt_say)
+    history.append(i_say); history.append(gpt_say)
+    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新
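searxng_request can also be exercised on its own; a standalone sketch, assuming a reachable Searxng instance at the configured URL:

from crazy_functions.Internet_GPT import searxng_request

results = searxng_request(
    query="gpt_academic 插件开发",
    proxies=None,                                      # direct connection
    categories="general",
    searxng_url="https://cloud-1.agent-matrix.com/",   # the default from config.py
    engines="bing",
)
for item in results[:3]:
    print(item["title"], item["link"])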

crazy_functions/Internet_GPT_Wrap.py (new file)

@@ -0,0 +1,44 @@
+from toolbox import get_conf
+from crazy_functions.Internet_GPT import 连接网络回答问题
+from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
+
+
+class NetworkGPT_Wrap(GptAcademicPluginTemplate):
+    def __init__(self):
+        """
+        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
+        """
+        pass
+
+    def define_arg_selection_menu(self):
+        """
+        定义插件的二级选项菜单
+
+        第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
+        第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
+        第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
+        """
+        gui_definition = {
+            "main_input":
+                ArgProperty(title="输入问题", description="待通过互联网检索的问题", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
+            "categories":
+                ArgProperty(title="搜索分类", options=["网页", "学术论文"], default_value="网页", description="", type="dropdown").model_dump_json(),
+            "engine":
+                ArgProperty(title="选择搜索引擎", options=["bing", "google", "duckduckgo"], default_value="bing", description="", type="dropdown").model_dump_json(),
+            "searxng_url":
+                ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=get_conf("SEARXNG_URL"), type="string").model_dump_json(), # 主输入,自动从输入框同步
+        }
+        return gui_definition
+
+    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+        """
+        执行插件
+        """
+        if plugin_kwargs["categories"] == "网页": plugin_kwargs["categories"] = "general"
+        if plugin_kwargs["categories"] == "学术论文": plugin_kwargs["categories"] = "science"
+        yield from 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
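Concretely, with the defaults above the framework would hand execute a plugin_kwargs shaped like the following (illustrative values), which the two mapping lines then rewrite into the categories expected by 连接网络回答问题:

plugin_kwargs = {
    "main_input": "什么是SearxNG?",
    "categories": "网页",      # rewritten to "general" inside execute
    "engine": "bing",
    "searxng_url": "https://cloud-1.agent-matrix.com/",
}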

查看文件

@@ -4,7 +4,7 @@ from functools import partial
import glob, os, requests, time, json, tarfile import glob, os, requests, time, json, tarfile
pj = os.path.join pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/") ARXIV_CACHE_DIR = get_conf("ARXIV_CACHE_DIR")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=- # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
@@ -158,65 +158,72 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
return extract_dst, arxiv_id return extract_dst, arxiv_id
def pdf2tex_project(pdf_file_path): def pdf2tex_project(pdf_file_path, plugin_kwargs):
# Mathpix API credentials if plugin_kwargs["method"] == "MATHPIX":
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY') # Mathpix API credentials
headers = {"app_id": app_id, "app_key": app_key} app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
headers = {"app_id": app_id, "app_key": app_key}
# Step 1: Send PDF file for processing # Step 1: Send PDF file for processing
options = { options = {
"conversion_formats": {"tex.zip": True}, "conversion_formats": {"tex.zip": True},
"math_inline_delimiters": ["$", "$"], "math_inline_delimiters": ["$", "$"],
"rm_spaces": True "rm_spaces": True
} }
response = requests.post(url="https://api.mathpix.com/v3/pdf", response = requests.post(url="https://api.mathpix.com/v3/pdf",
headers=headers, headers=headers,
data={"options_json": json.dumps(options)}, data={"options_json": json.dumps(options)},
files={"file": open(pdf_file_path, "rb")}) files={"file": open(pdf_file_path, "rb")})
if response.ok: if response.ok:
pdf_id = response.json()["pdf_id"] pdf_id = response.json()["pdf_id"]
print(f"PDF processing initiated. PDF ID: {pdf_id}") print(f"PDF processing initiated. PDF ID: {pdf_id}")
# Step 2: Check processing status # Step 2: Check processing status
while True: while True:
conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers) conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
conversion_data = conversion_response.json() conversion_data = conversion_response.json()
if conversion_data["status"] == "completed": if conversion_data["status"] == "completed":
print("PDF processing completed.") print("PDF processing completed.")
break break
elif conversion_data["status"] == "error": elif conversion_data["status"] == "error":
print("Error occurred during processing.") print("Error occurred during processing.")
else: else:
print(f"Processing status: {conversion_data['status']}") print(f"Processing status: {conversion_data['status']}")
time.sleep(5) # wait for a few seconds before checking again time.sleep(5) # wait for a few seconds before checking again
# Step 3: Save results to local files # Step 3: Save results to local files
output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output') output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
if not os.path.exists(output_dir): if not os.path.exists(output_dir):
os.makedirs(output_dir) os.makedirs(output_dir)
url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex" url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
response = requests.get(url, headers=headers) response = requests.get(url, headers=headers)
file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1]) file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
output_name = f"{file_name_wo_dot}.tex.zip" output_name = f"{file_name_wo_dot}.tex.zip"
output_path = os.path.join(output_dir, output_name) output_path = os.path.join(output_dir, output_name)
with open(output_path, "wb") as output_file: with open(output_path, "wb") as output_file:
output_file.write(response.content) output_file.write(response.content)
print(f"tex.zip file saved at: {output_path}") print(f"tex.zip file saved at: {output_path}")
import zipfile import zipfile
unzip_dir = os.path.join(output_dir, file_name_wo_dot) unzip_dir = os.path.join(output_dir, file_name_wo_dot)
with zipfile.ZipFile(output_path, 'r') as zip_ref: with zipfile.ZipFile(output_path, 'r') as zip_ref:
zip_ref.extractall(unzip_dir) zip_ref.extractall(unzip_dir)
return unzip_dir
else:
print(f"Error sending PDF for processing. Status code: {response.status_code}")
return None
else:
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_DOC2X_转Latex
unzip_dir = 解析PDF_DOC2X_转Latex(pdf_file_path)
return unzip_dir return unzip_dir
else:
print(f"Error sending PDF for processing. Status code: {response.status_code}")
return None
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@@ -226,7 +233,7 @@ def pdf2tex_project(pdf_file_path):
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request): def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin -------------> # <-------------- information about this plugin ------------->
chatbot.append(["函数插件功能?", chatbot.append(["函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"]) "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements -------------> # <-------------- more requirements ------------->
@@ -264,6 +271,8 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
project_folder = desend_to_extracted_folder_if_exist(project_folder) project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder -------------> # <-------------- move latex project away from temp folder ------------->
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder, arxiv_id=None) project_folder = move_project(project_folder, arxiv_id=None)
# <-------------- if merge_translate_zh is already generated, skip gpt req -------------> # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
@@ -287,7 +296,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot) promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else: else:
chatbot.append((f"失败了", chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...')) '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+Conversation_To_File进行反馈 ...'))
yield from update_ui(chatbot=chatbot, history=history); yield from update_ui(chatbot=chatbot, history=history);
time.sleep(1) # 刷新界面 time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot) promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
@@ -303,7 +312,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
# <-------------- information about this plugin -------------> # <-------------- information about this plugin ------------->
chatbot.append([ chatbot.append([
"函数插件功能?", "函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"]) "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements -------------> # <-------------- more requirements ------------->
@@ -358,6 +367,8 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
+from shared_utils.fastapi_server import validate_path_safety
+validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder, arxiv_id)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
@@ -397,7 +408,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
-"将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
+"将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
@@ -437,107 +448,101 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}") report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return return
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
if len(app_id) == 0 or len(app_key) == 0:
report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
hash_tag = map_file_to_sha256(file_manifest[0]) if plugin_kwargs.get("method", "") == 'MATHPIX':
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
# <-------------- check repeated pdf -------------> if len(app_id) == 0 or len(app_key) == 0:
chatbot.append([f"检查PDF是否被重复上传", "正在检查..."]) report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
yield from update_ui(chatbot=chatbot, history=history) yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag) return
if plugin_kwargs.get("method", "") == 'DOC2X':
except_flag = False app_id, app_key = "", ""
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
if repeat: if len(DOC2X_API_KEY) == 0:
yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history) report_exception(chatbot, history, a="缺失 DOC2X_API_KEY。", b=f"请配置 DOC2X_API_KEY")
try:
trans_html_file = [f for f in glob.glob(f'{project_folder}/**/*.trans.html', recursive=True)][0]
promote_file_to_downloadzone(trans_html_file, rename_file=None, chatbot=chatbot)
translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)
zip_res = zip_result(project_folder)
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
return True
except:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现重复上传,但是无法找到相关文件")
yield from update_ui(chatbot=chatbot, history=history)
chatbot.append([f"没有相关文件", '尝试重新翻译PDF...'])
yield from update_ui(chatbot=chatbot, history=history)
except_flag = True
elif not repeat or except_flag:
yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)
# <-------------- convert pdf into tex ------------->
chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
yield from update_ui(chatbot=chatbot, history=history)
project_folder = pdf2tex_project(file_manifest[0])
if project_folder is None:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"PDF转换为tex项目失败")
yield from update_ui(chatbot=chatbot, history=history)
return False
# <-------------- translate latex file into Chinese ------------->
yield from update_ui_lastest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return return
# <-------------- if is a zip/tar file -------------> hash_tag = map_file_to_sha256(file_manifest[0])
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder -------------> # # <-------------- check repeated pdf ------------->
project_folder = move_project(project_folder) # chatbot.append([f"检查PDF是否被重复上传", "正在检查..."])
# yield from update_ui(chatbot=chatbot, history=history)
# repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)
# <-------------- set a hash tag for repeat-checking -------------> # if repeat:
with open(pj(project_folder, hash_tag + '.tag'), 'w') as f: # yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
f.write(hash_tag) # try:
f.close() # translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
# promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
# comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
# promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)
# zip_res = zip_result(project_folder)
# promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# return
# except:
# report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现重复上传,但是无法找到相关文件")
# yield from update_ui(chatbot=chatbot, history=history)
# else:
# yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)
# <-------------- convert pdf into tex ------------->
chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
yield from update_ui(chatbot=chatbot, history=history)
project_folder = pdf2tex_project(file_manifest[0], plugin_kwargs)
if project_folder is None:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"PDF转换为tex项目失败")
yield from update_ui(chatbot=chatbot, history=history)
return False
# <-------------- translate latex file into Chinese ------------->
yield from update_ui_lastest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
project_folder = move_project(project_folder)
# <-------------- set a hash tag for repeat-checking ------------->
with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
f.write(hash_tag)
f.close()
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
yield from update_ui_lastest_msg("正在将翻译好的tex项目编译为PDF...", chatbot=chatbot, history=history)
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
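The branch added at the top of PDF翻译中文并重新编译PDF keys off `plugin_kwargs["method"]`. A minimal sketch of the two configurations this diff accepts (values illustrative; config names taken from the code above):

```python
# Illustrative only -- these keys are read by PDF翻译中文并重新编译PDF in this diff.
plugin_kwargs = {"method": "DOC2X"}    # requires DOC2X_API_KEY to be configured
plugin_kwargs = {"method": "MATHPIX"}  # requires MATHPIX_APPID and MATHPIX_APPKEY
```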


@@ -0,0 +1,78 @@
from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF, PDF翻译中文并重新编译PDF
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
class Arxiv_Localize(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="ArxivID", description="输入Arxiv的ID或者网址", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"advanced_arg":
ArgProperty(title="额外的翻译提示词",
description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"allow_cache":
ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
allow_cache = plugin_kwargs["allow_cache"]
advanced_arg = plugin_kwargs["advanced_arg"]
if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
class PDF_Localize(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
"""
gui_definition = {
"main_input":
ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"advanced_arg":
ArgProperty(title="额外的翻译提示词",
description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"method":
ArgProperty(title="采用哪种方法执行转换", options=["MATHPIX", "DOC2X"], default_value="DOC2X", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
yield from PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
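A sketch of what the GUI hands to `Arxiv_Localize.execute` after the user confirms the menu; the keys match `define_arg_selection_menu` above, the values are illustrative:

```python
# Hypothetical runtime values (the framework fills these in from the menu):
plugin_kwargs = {"main_input": "2303.08774", "advanced_arg": "", "allow_cache": "从头执行"}
# execute() then folds the cache choice into the advanced argument:
if plugin_kwargs["allow_cache"] == "从头执行":
    plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
assert plugin_kwargs["advanced_arg"].startswith("--no-cache")
```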


@@ -72,17 +72,17 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
inputs_array = ["This is a Markdown file, translate it into Chinese, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" + inputs_array = ["This is a Markdown file, translate it into Chinese, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents] f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
elif language == 'zh->en': elif language == 'zh->en':
inputs_array = [f"This is a Markdown file, translate it into English, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" + inputs_array = [f"This is a Markdown file, translate it into English, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents] f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
else: else:
inputs_array = [f"This is a Markdown file, translate it into {language}, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" + inputs_array = [f"This is a Markdown file, translate it into {language}, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents] f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=inputs_array, inputs_array=inputs_array,


@@ -0,0 +1,83 @@
from toolbox import CatchException, check_packages, get_conf
from toolbox import update_ui, update_ui_lastest_msg, disable_auto_promotion
from toolbox import trimmed_format_exc_markdown
from crazy_functions.crazy_utils import get_files_from_everything
from crazy_functions.pdf_fns.parse_pdf import get_avail_grobid_url
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_基于DOC2X
from crazy_functions.pdf_fns.parse_pdf_legacy import 解析PDF_简单拆解
from crazy_functions.pdf_fns.parse_pdf_grobid import 解析PDF_基于GROBID
from shared_utils.colorful import *
@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
disable_auto_promotion(chatbot)
# 基本信息:功能、贡献者
chatbot.append([None, "插件功能批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
check_packages(["fitz", "tiktoken", "scipdf"])
except:
chatbot.append([None, f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 清空历史,以免输入溢出
history = []
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
# 检测输入参数,如没有给定输入参数,直接退出
if (not success) and txt == "": txt = '空空如也的输入栏。提示:请先上传文件,把PDF文件拖入对话'
# 如果没找到任何文件
if len(file_manifest) == 0:
chatbot.append([None, f"找不到任何.pdf拓展名的文件: {txt}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始正式执行任务
method = plugin_kwargs.get("pdf_parse_method", None)
if method == "DOC2X":
# ------- 第一种方法,效果最好,但是需要DOC2X服务 -------
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
if len(DOC2X_API_KEY) != 0:
try:
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,现在将执行效果稍差的旧版代码。{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
if method == "GROBID":
# ------- 第二种方法,效果次优 -------
grobid_url = get_avail_grobid_url()
if grobid_url is not None:
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
return
if method == "ClASSIC":
# ------- 第三种方法,早期代码,效果不理想 -------
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
return
if method is None:
# ------- 以上三种方法都试一遍 -------
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
if len(DOC2X_API_KEY) != 0:
try:
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,正在尝试GROBID。{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
grobid_url = get_avail_grobid_url()
if grobid_url is not None:
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
return
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
return
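The dispatch above tries DOC2X, then GROBID, then the legacy splitter when no method is given; a sketch of the selector (names from this file, value illustrative):

```python
# One explicit backend, or None to fall through DOC2X -> GROBID -> legacy code:
plugin_kwargs = {"pdf_parse_method": "GROBID"}  # "DOC2X" | "GROBID" | "ClASSIC" | None
```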


@@ -0,0 +1,33 @@
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from .PDF_Translate import 批量翻译PDF文档
class PDF_Tran(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
"""
gui_definition = {
"main_input":
ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"additional_prompt":
ArgProperty(title="额外提示词", description="例如:对专有名词、翻译语气等方面的要求", default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"pdf_parse_method":
ArgProperty(title="PDF解析方法", options=["DOC2X", "GROBID", "ClASSIC"], description="", default_value="GROBID", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
main_input = plugin_kwargs["main_input"]
additional_prompt = plugin_kwargs["additional_prompt"]
pdf_parse_method = plugin_kwargs["pdf_parse_method"]
yield from 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)


@@ -1,324 +0,0 @@
from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
from toolbox import generate_file_link, zip_folder, trimmed_format_exc, trimmed_format_exc_markdown
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .crazy_utils import read_and_clean_pdf_text
from .crazy_utils import get_files_from_everything
from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url, translate_pdf
from colorful import *
import os
@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
disable_auto_promotion(chatbot)
# 基本信息:功能、贡献者
chatbot.append([None, "插件功能批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
check_packages(["fitz", "tiktoken", "scipdf"])
except:
report_exception(chatbot, history,
a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 清空历史,以免输入溢出
history = []
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
# 检测输入参数,如没有给定输入参数,直接退出
if not success:
if txt == "": txt = '空空如也的输入栏'
# 如果没找到任何文件
if len(file_manifest) == 0:
report_exception(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到任何.pdf拓展名的文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始正式执行任务
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
# ------- 第一种方法,效果最好,但是需要DOC2X服务 -------
if len(DOC2X_API_KEY) != 0:
try:
yield from 解析PDF_DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,现在将执行效果稍差的旧版代码。{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
# ------- 第二种方法,效果次优 -------
grobid_url = get_avail_grobid_url()
if grobid_url is not None:
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
return
# ------- 第三种方法,早期代码,效果不理想 -------
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
return
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def pdf2markdown(filepath):
import requests, json, os
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
chatbot.append((None, "加载PDF文件,发送至DOC2X解析..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.post(
url,
files={"file": open(filepath, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
to = "md" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
chatbot.append((None, f"读取解析: {url} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
md_zip_path = os.path.join(markdown_dir, gen_time_str() + '.zip')
if res.status_code == 200:
with open(md_zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
from crazy_functions.批量Markdown翻译 import Markdown英译中
import shutil, re
time_tag = gen_time_str()
target_path_base = get_log_folder(chatbot.get_user())
file_origin_name = os.path.basename(md_zip_path)
this_file_path = os.path.join(target_path_base, file_origin_name)
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
for generated_fp in file_manifest:
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f:
content = f.read()
# 将公式中的\[ \]替换成$$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
# 将公式中的\( \)替换成$
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 生成在线预览html
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
md = re.sub(r'^<table>', r'😃<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
if os.path.exists(generated_fp):
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
content = re.sub(r'^<table>', r'😃<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# 生成包含图片的压缩包
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
import copy, json
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
DST_LANG = "中文"
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
article_dict = parse_pdf(fp, grobid_url)
grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
with open(grobid_json_res, 'w+', encoding='utf8') as f:
f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
"""
此函数已经弃用
"""
import copy
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
# 读取PDF文件
file_content, page_one = read_and_clean_pdf_text(fp)
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
# 递归地切割PDF文件
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果,我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
# 单线,获取文章meta信息
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
llm_kwargs=llm_kwargs,
chatbot=chatbot, history=[],
sys_prompt="Your job is to collect information from materials。",
)
# 多线,翻译
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[
f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[paper_meta] for _ in paper_fragments],
sys_prompt_array=[
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
# max_workers=5 # OpenAI所允许的最大并行过载
)
gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
# 整理报告的格式
for i,k in enumerate(gpt_response_collection_md):
if i%2==0:
gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}] \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]\n "
else:
gpt_response_collection_md[i] = gpt_response_collection_md[i]
final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
final.extend(gpt_response_collection_md)
create_report_file_name = f"{os.path.basename(fp)}.trans.md"
res = write_history_to_file(final, create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)
# 更新UI
generated_conclusion_files.append(f'{get_log_folder()}/{create_report_file_name}')
chatbot.append((f"{fp}完成了吗?", res))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# write html
try:
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
else:
gpt_response_collection_html[i] = gpt_response_collection_html[i]
final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
generated_html_files.append(ch.save_file(create_report_file_name))
except:
from toolbox import trimmed_format_exc
print('writing html result failed:', trimmed_format_exc())
# 准备文件的下载
for pdf_path in generated_conclusion_files:
# 重命名文件
rename_file = f'翻译-{os.path.basename(pdf_path)}'
promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
for html_path in generated_html_files:
# 重命名文件
rename_file = f'翻译-{os.path.basename(html_path)}'
promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -1,9 +1,20 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
+from shared_utils.char_visual_effect import scolling_visual_effect
import threading
import os
import logging
def input_clipping(inputs, history, max_token_limit):
+"""
+当输入文本 + 历史文本超出最大限制时,采取措施丢弃一部分文本。
+输入:
+- inputs 本次请求
+- history 历史上下文
+- max_token_limit 最大token限制
+输出:
+- inputs 本次请求经过clip
+- history 历史上下文经过clip
+"""
import numpy as np
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
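Based on the docstring added above, a minimal usage sketch of `input_clipping` (the limit and texts are illustrative):

```python
# Returns the request and history trimmed to fit within max_token_limit:
clipped_inputs, clipped_history = input_clipping(
    inputs="一段可能超长的请求文本 ...",
    history=["earlier turn 1", "earlier turn 2"],
    max_token_limit=4096,
)
```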
@@ -158,7 +169,7 @@ def can_multi_process(llm) -> bool:
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_show_user_array, llm_kwargs,
chatbot, history_array, sys_prompt_array,
-refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
+refresh_interval=0.2, max_workers=-1, scroller_max_len=75,
handle_token_exceed=True, show_user_at_complete=False,
retry_times_at_unknown_error=2,
):
@@ -283,6 +294,8 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
cnt = 0
while True:
# yield一次以刷新前端页面
time.sleep(refresh_interval)
@@ -295,8 +308,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
mutable[thread_index][1] = time.time()
# 在前端打印些好玩的东西
for thread_index, _ in enumerate(worker_done):
-print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
+print_something_really_funny = f"[ ...`{scolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
observe_win.append(print_something_really_funny)
# 在前端打印些好玩的东西
stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
@@ -349,7 +361,7 @@ def read_and_clean_pdf_text(fp):
import fitz, copy
import re
import numpy as np
-from colorful import print亮黄, print亮绿
+from shared_utils.colorful import print亮黄, print亮绿
fc = 0 # Index 0 文本
fs = 1 # Index 1 字体
fb = 2 # Index 2 框框
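The old inline `.replace(...)` chain is now delegated to `scolling_visual_effect`; a sketch of how the one-line preview is built (mirrors the changed line above; the sample text is invented):

```python
from shared_utils.char_visual_effect import scolling_visual_effect

latest_output = "第1行\n第2行 with `backticks` and $math$ ..."
preview = f"[ ...`{scolling_visual_effect(latest_output, 75)}`... ]"  # 75 = new scroller_max_len default
print(preview)
```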


@@ -4,20 +4,28 @@ import pickle
class SafeUnpickler(pickle.Unpickler):
def get_safe_classes(self):
-from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
+from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
+from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
# 定义允许的安全类
safe_classes = {
# 在这里添加其他安全的类
'LatexPaperFileGroup': LatexPaperFileGroup,
-'LatexPaperSplit' : LatexPaperSplit,
+'LatexPaperSplit': LatexPaperSplit,
+'LinkedListNode': LinkedListNode,
}
return safe_classes
def find_class(self, module, name):
# 只允许特定的类进行反序列化
self.safe_classes = self.get_safe_classes()
-if f'{module}.{name}' in self.safe_classes:
-return self.safe_classes[f'{module}.{name}']
+match_class_name = None
+for class_name in self.safe_classes.keys():
+if (class_name in f'{module}.{name}'):
+match_class_name = class_name
+if module == 'numpy' or module.startswith('numpy.'):
+return super().find_class(module, name)
+if match_class_name is not None:
+return self.safe_classes[match_class_name]
# 如果尝试加载未授权的类,则抛出异常
raise pickle.UnpicklingError(f"Attempted to deserialize unauthorized class '{name}' from module '{module}'")
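A minimal sketch of how this unpickler is meant to be used (the `safe_load` helper is hypothetical, not part of this diff):

```python
import pickle

def safe_load(path):
    # Only the whitelisted classes (and, after this change, numpy internals)
    # deserialize; anything else raises pickle.UnpicklingError.
    with open(path, "rb") as f:
        return SafeUnpickler(f).load()
```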


@@ -4,7 +4,7 @@ from toolbox import promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from toolbox import get_conf
from toolbox import ProxyNetworkActivate
-from colorful import *
+from shared_utils.colorful import *
import requests
import random
import copy
@@ -72,7 +72,7 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat
generated_conclusion_files.append(res_path)
return res_path
-def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG):
+def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs={}):
from crazy_functions.pdf_fns.report_gen_html import construct_html
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@@ -138,7 +138,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
chatbot=chatbot,
history_array=[meta for _ in inputs_array],
sys_prompt_array=[
-"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
+"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "") for _ in inputs_array],
)
# -=-=-=-=-=-=-=-= 写出Markdown文件 -=-=-=-=-=-=-=-=
produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files)


@@ -0,0 +1,26 @@
import os
from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
from crazy_functions.pdf_fns.parse_pdf import parse_pdf, translate_pdf
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
import copy, json
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
DST_LANG = "中文"
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
article_dict = parse_pdf(fp, grobid_url)
grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
with open(grobid_json_res, 'w+', encoding='utf8') as f:
f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs=plugin_kwargs)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -0,0 +1,110 @@
from toolbox import get_log_folder
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.crazy_utils import read_and_clean_pdf_text
from shared_utils.colorful import *
import os
def 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
"""
注意,此函数已经弃用,新函数位于crazy_functions/pdf_fns/parse_pdf.py
"""
import copy
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
# 读取PDF文件
file_content, page_one = read_and_clean_pdf_text(fp)
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
# 递归地切割PDF文件
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果,我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
# 单线,获取文章meta信息
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
llm_kwargs=llm_kwargs,
chatbot=chatbot, history=[],
sys_prompt="Your job is to collect information from materials。",
)
# 多线,翻译
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[
f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[paper_meta] for _ in paper_fragments],
sys_prompt_array=[
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "")
for _ in paper_fragments],
# max_workers=5 # OpenAI所允许的最大并行过载
)
gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
# 整理报告的格式
for i,k in enumerate(gpt_response_collection_md):
if i%2==0:
gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}] \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]\n "
else:
gpt_response_collection_md[i] = gpt_response_collection_md[i]
final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
final.extend(gpt_response_collection_md)
create_report_file_name = f"{os.path.basename(fp)}.trans.md"
res = write_history_to_file(final, create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)
# 更新UI
generated_conclusion_files.append(f'{get_log_folder()}/{create_report_file_name}')
chatbot.append((f"{fp}完成了吗?", res))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# write html
try:
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
else:
gpt_response_collection_html[i] = gpt_response_collection_html[i]
final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
generated_html_files.append(ch.save_file(create_report_file_name))
except:
from toolbox import trimmed_format_exc
print('writing html result failed:', trimmed_format_exc())
# 准备文件的下载
for pdf_path in generated_conclusion_files:
# 重命名文件
rename_file = f'翻译-{os.path.basename(pdf_path)}'
promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
for html_path in generated_html_files:
# 重命名文件
rename_file = f'翻译-{os.path.basename(html_path)}'
promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -0,0 +1,213 @@
from toolbox import get_log_folder, gen_time_str, get_conf
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import promote_file_to_downloadzone, extract_archive
from toolbox import generate_file_link, zip_folder
from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
import os
def refresh_key(doc2x_api_key):
import requests, json
url = "https://api.doc2x.noedgeai.com/api/token/refresh"
res = requests.post(
url,
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
res_json = json.loads(decoded)
doc2x_api_key = res_json['data']['token']
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
return doc2x_api_key
def 解析PDF_DOC2X_转Latex(pdf_file_path):
import requests, json, os
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
res = requests.post(
url,
files={"file": open(pdf_file_path, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
to = "latex" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
latex_zip_path = os.path.join(latex_dir, gen_time_str() + '.zip')
latex_unzip_path = os.path.join(latex_dir, gen_time_str())
if res.status_code == 200:
with open(latex_zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
import zipfile
with zipfile.ZipFile(latex_zip_path, 'r') as zip_ref:
zip_ref.extractall(latex_unzip_path)
return latex_unzip_path
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def pdf2markdown(filepath):
import requests, json, os
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
chatbot.append((None, "加载PDF文件,发送至DOC2X解析..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.post(
url,
files={"file": open(filepath, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
if 'limit exceeded' in decoded_json.get('status', ''):
raise RuntimeError("Doc2x API 页数受限,请联系 Doc2x 方面,并更换新的 API 秘钥。")
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
to = "md" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
chatbot.append((None, f"读取解析: {url} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
md_zip_path = os.path.join(markdown_dir, gen_time_str() + '.zip')
if res.status_code == 200:
with open(md_zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
from crazy_functions.Markdown_Translate import Markdown英译中
import shutil, re
time_tag = gen_time_str()
target_path_base = get_log_folder(chatbot.get_user())
file_origin_name = os.path.basename(md_zip_path)
this_file_path = os.path.join(target_path_base, file_origin_name)
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
for generated_fp in file_manifest:
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f:
content = f.read()
# 将公式中的\[ \]替换成$$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
# 将公式中的\( \)替换成$
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 生成在线预览html
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# # Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
# md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
if os.path.exists(generated_fp):
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
# content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# 生成包含图片的压缩包
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_基于DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return
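Both entry points above pick the upload endpoint from the key format; a condensed sketch of that routing (helper name hypothetical, URLs copied from the code):

```python
def doc2x_upload_endpoint(doc2x_api_key):
    # 'sk-' keys call the v1 API directly; platform keys are refreshed first.
    if doc2x_api_key.startswith('sk-'):
        return doc2x_api_key, "https://api.doc2x.noedgeai.com/api/v1/pdf"
    return refresh_key(doc2x_api_key), "https://api.doc2x.noedgeai.com/api/platform/pdf"
```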


@@ -0,0 +1,52 @@
import os, json, base64
from pydantic import BaseModel, Field
from textwrap import dedent
from typing import List


class ArgProperty(BaseModel): # PLUGIN_ARG_MENU
    title: str = Field(description="The title", default="")
    description: str = Field(description="The description", default="")
    default_value: str = Field(description="The default value", default="")
    type: str = Field(description="The type", default="") # currently we support ['string', 'dropdown']
    options: List[str] = Field(default=[], description="List of options available for the argument") # only used when type is 'dropdown'


class GptAcademicPluginTemplate():
    def __init__(self):
        # please note that `execute` method may run in different threads,
        # thus you should not store any state in the plugin instance,
        # which may be accessed by multiple threads
        pass

    def define_arg_selection_menu(self):
        """
        An example as below:
        ```
        def define_arg_selection_menu(self):
            gui_definition = {
                "main_input":
                    ArgProperty(title="main input", description="description", default_value="default_value", type="string").model_dump_json(),
                "advanced_arg":
                    ArgProperty(title="advanced arguments", description="description", default_value="default_value", type="string").model_dump_json(),
                "additional_arg_01":
                    ArgProperty(title="additional", description="description", default_value="default_value", type="string").model_dump_json(),
            }
            return gui_definition
        ```
        """
        raise NotImplementedError("You need to implement this method in your plugin class")

    def get_js_code_for_generating_menu(self, btnName):
        define_arg_selection = self.define_arg_selection_menu()
        if len(define_arg_selection.keys()) > 8:
            raise ValueError("You can only have up to 8 arguments in the define_arg_selection")
        # if "main_input" not in define_arg_selection:
        #     raise ValueError("You must have a 'main_input' in the define_arg_selection")
        DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
        return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        raise NotImplementedError("You need to implement this method in your plugin class")
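
For orientation, here is a minimal, hypothetical subclass (not part of this commit) that satisfies the contract above; it also checks that the generated menu payload is plain BASE64-encoded JSON:

```python
import base64, json
from crazy_functions.plugin_template.plugin_class_template import (
    GptAcademicPluginTemplate, ArgProperty,
)

class EchoDemo(GptAcademicPluginTemplate):  # hypothetical example, for illustration only
    def define_arg_selection_menu(self):
        # a single text box; "main_input" auto-syncs with the main input area
        return {
            "main_input": ArgProperty(title="text", description="text to echo",
                                      default_value="", type="string").model_dump_json(),
        }

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # note: like the template, `execute` is written without `self`
        chatbot.append([txt, plugin_kwargs["main_input"]])
        yield  # real plugins yield UI updates (e.g. via update_ui)

# the menu payload round-trips: BASE64 -> JSON -> the dict returned above
payload = EchoDemo().get_js_code_for_generating_menu("EchoDemo")
assert json.loads(base64.b64decode(payload)) == EchoDemo().define_arg_selection_menu()
```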


@@ -10,7 +10,7 @@ def read_avail_plugin_enum():
     from crazy_functional import get_crazy_functions
     plugin_arr = get_crazy_functions()
     # remove plugins without explanation
-    plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
+    plugin_arr = {k:v for k, v in plugin_arr.items() if ('Info' in v) and ('Function' in v)}
     plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
     plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
     plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
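
Note that `('Function' in v)` tests key presence, not truthiness, so an entry that carries an explicit `"Function": None` (the new class-based registration style shown later in this compare) still passes the filter; only entries lacking the key are dropped. A small illustration with hypothetical entries:

```python
# hypothetical plugin entries, for illustration only
plugin_arr = {
    "plugin_a": {"Info": "legacy plugin", "Function": print},  # kept
    "plugin_b": {"Info": "class-based", "Function": None},     # kept: the key exists
    "plugin_c": {"Info": "no Function key at all"},            # dropped by the new filter
}
plugin_arr = {k: v for k, v in plugin_arr.items() if ('Info' in v) and ('Function' in v)}
assert sorted(plugin_arr) == ["plugin_a", "plugin_b"]
```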


@@ -5,7 +5,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
 from .crazy_utils import read_and_clean_pdf_text
 from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url, translate_pdf
-from colorful import *
+from shared_utils.colorful import *
 import copy
 import os
 import math


@@ -1,8 +1,11 @@
 from toolbox import CatchException, update_ui, report_exception
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-import datetime
+from crazy_functions.plugin_template.plugin_class_template import (
+    GptAcademicPluginTemplate,
+)
+from crazy_functions.plugin_template.plugin_class_template import ArgProperty

-#以下是每类图表的PROMPT
+# 以下是每类图表的PROMPT
 SELECT_PROMPT = """
 {subject}
 =============
@@ -17,22 +20,24 @@ SELECT_PROMPT = """
 8 象限提示图
 不需要解释原因,仅需要输出单个不带任何标点符号的数字。
 """
-#没有思维导图!!!测试发现模型始终会优先选择思维导图
-#流程图
+# 没有思维导图!!!测试发现模型始终会优先选择思维导图
+# 流程图
 PROMPT_1 = """
-请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 graph TD
-    P(编程) --> L1(Python)
-    P(编程) --> L2(C)
-    P(编程) --> L3(C++)
-    P(编程) --> L4(Javascipt)
-    P(编程) --> L5(PHP)
+    P("编程") --> L1("Python")
+    P("编程") --> L2("C")
+    P("编程") --> L3("C++")
+    P("编程") --> L4("Javascipt")
+    P("编程") --> L5("PHP")
 ```
 """
-#序列图
+# 序列图
 PROMPT_2 = """
-请你给出围绕“{subject}”的序列图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的序列图,使用mermaid语法
+mermaid语法举例:
 ```mermaid
 sequenceDiagram
     participant A as 用户
@@ -43,9 +48,10 @@ sequenceDiagram
     B->>A: 返回数据
 ```
 """
-#类图
+# 类图
 PROMPT_3 = """
-请你给出围绕“{subject}”的类图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的类图,使用mermaid语法
+mermaid语法举例:
 ```mermaid
 classDiagram
     Class01 <|-- AveryLongClass : Cool
@@ -63,9 +69,10 @@ classDiagram
     Class08 <--> C2: Cool label
 ```
 """
-#饼图
+# 饼图
 PROMPT_4 = """
-请你给出围绕“{subject}”的饼图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的饼图,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 pie title Pets adopted by volunteers
     "狗" : 386
@@ -73,38 +80,41 @@ pie title Pets adopted by volunteers
     "兔子" : 15
 ```
 """
-#甘特图
+# 甘特图
 PROMPT_5 = """
-请你给出围绕“{subject}”的甘特图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的甘特图,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 gantt
-    title 项目开发流程
+    title "项目开发流程"
     dateFormat YYYY-MM-DD
-    section 设计
-    需求分析 :done, des1, 2024-01-06,2024-01-08
-    原型设计 :active, des2, 2024-01-09, 3d
-    UI设计 : des3, after des2, 5d
-    section 开发
-    前端开发 :2024-01-20, 10d
-    后端开发 :2024-01-20, 10d
+    section "设计"
+    "需求分析" :done, des1, 2024-01-06,2024-01-08
+    "原型设计" :active, des2, 2024-01-09, 3d
+    "UI设计" : des3, after des2, 5d
+    section "开发"
+    "前端开发" :2024-01-20, 10d
+    "后端开发" :2024-01-20, 10d
 ```
 """
-#状态图
+# 状态图
 PROMPT_6 = """
-请你给出围绕“{subject}”的状态图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的状态图,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 stateDiagram-v2
-    [*] --> Still
-    Still --> [*]
-    Still --> Moving
-    Moving --> Still
-    Moving --> Crash
-    Crash --> [*]
+    [*] --> "Still"
+    "Still" --> [*]
+    "Still" --> "Moving"
+    "Moving" --> "Still"
+    "Moving" --> "Crash"
+    "Crash" --> [*]
 ```
 """
-#实体关系图
+# 实体关系图
 PROMPT_7 = """
-请你给出围绕“{subject}”的实体关系图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的实体关系图,使用mermaid语法
+mermaid语法举例:
 ```mermaid
 erDiagram
     CUSTOMER ||--o{ ORDER : places
@@ -124,118 +134,173 @@ erDiagram
 }
 ```
 """
-#象限提示图
+# 象限提示图
 PROMPT_8 = """
-请你给出围绕“{subject}”的象限图,使用mermaid语法,mermaid语法举例:
+请你给出围绕“{subject}”的象限图,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 graph LR
-    A[Hard skill] --> B(Programming)
-    A[Hard skill] --> C(Design)
-    D[Soft skill] --> E(Coordination)
-    D[Soft skill] --> F(Communication)
+    A["Hard skill"] --> B("Programming")
+    A["Hard skill"] --> C("Design")
+    D["Soft skill"] --> E("Coordination")
+    D["Soft skill"] --> F("Communication")
 ```
 """
-#思维导图
+# 思维导图
 PROMPT_9 = """
 {subject}
 ==========
-请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,mermaid语法举例:
+请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,注意需要使用双引号将内容括起来。
+mermaid语法举例:
 ```mermaid
 mindmap
   root((mindmap))
-    Origins
-      Long history
+    ("Origins")
+      ("Long history")
       ::icon(fa fa-book)
-      Popularisation
-        British popular psychology author Tony Buzan
-    Research
-      On effectiveness<br/>and features
-      On Automatic creation
-    Uses
-      Creative techniques
-      Strategic planning
-      Argument mapping
-    Tools
-      Pen and paper
-      Mermaid
+      ("Popularisation")
+        ("British popular psychology author Tony Buzan")
+        ::icon(fa fa-user)
+    ("Research")
+      ("On effectiveness<br/>and features")
+      ::icon(fa fa-search)
+      ("On Automatic creation")
+      ::icon(fa fa-robot)
+    ("Uses")
+      ("Creative techniques")
+      ::icon(fa fa-lightbulb-o)
+      ("Strategic planning")
+      ::icon(fa fa-flag)
+      ("Argument mapping")
+      ::icon(fa fa-comments)
+    ("Tools")
+      ("Pen and paper")
+      ::icon(fa fa-pencil)
+      ("Mermaid")
+      ::icon(fa fa-code)
 ```
 """
-def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
+def 解析历史输入(history, llm_kwargs, file_manifest, chatbot, plugin_kwargs):
     ############################## <第 0 步,切割输入> ##################################
     # 借用PDF切割中的函数对文本进行切割
     TOKEN_LIMIT_PER_FRAGMENT = 2500
-    txt = str(history).encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-    txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
+    txt = (
+        str(history).encode("utf-8", "ignore").decode()
+    )  # avoid reading non-utf8 chars
+    from crazy_functions.pdf_fns.breakdown_txt import (
+        breakdown_text_to_satisfy_token_limit,
+    )
+    txt = breakdown_text_to_satisfy_token_limit(
+        txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs["llm_model"]
+    )
     ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
     results = []
     MAX_WORD_TOTAL = 4096
     n_txt = len(txt)
     last_iteration_result = "从以下文本中提取摘要。"
-    if n_txt >= 20: print('文章极长,不能达到预期效果')
+    if n_txt >= 20:
+        print("文章极长,不能达到预期效果")
     for i in range(n_txt):
         NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
         i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
         i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
-            llm_kwargs, chatbot,
-            history=["The main content of the previous section is?", last_iteration_result],  # 迭代上一次的结果
-            sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese."  # 提示
-        )
+        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+            i_say,
+            i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
+            llm_kwargs,
+            chatbot,
+            history=[
+                "The main content of the previous section is?",
+                last_iteration_result,
+            ],  # 迭代上一次的结果
+            sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese.",  # 提示
+        )
         results.append(gpt_say)
         last_iteration_result = gpt_say
     ############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
-    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
-    gpt_say = plugin_kwargs.get("advanced_arg", "")  # 将图表类型参数赋值为插件参数
-    results_txt = '\n'.join(results)  # 合并摘要
-    if gpt_say not in ['1','2','3','4','5','6','7','8','9']:  # 如插件参数不正确则使用对话模型判断
-        i_say_show_user = f'接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制'; gpt_say = "[Local Message] 收到。"  # 用户提示
-        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[])  # 更新UI
+    gpt_say = str(plugin_kwargs)  # 将图表类型参数赋值为插件参数
+    results_txt = "\n".join(results)  # 合并摘要
+    if gpt_say not in [
+        "1",
+        "2",
+        "3",
+        "4",
+        "5",
+        "6",
+        "7",
+        "8",
+        "9",
+    ]:  # 如插件参数不正确则使用对话模型判断
+        i_say_show_user = (
+            f"接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制"
+        )
+        gpt_say = "[Local Message] 收到。"  # 用户提示
+        chatbot.append([i_say_show_user, gpt_say])
+        yield from update_ui(chatbot=chatbot, history=[])  # 更新UI
         i_say = SELECT_PROMPT.format(subject=results_txt)
         i_say_show_user = f'请判断适合使用的流程图类型,其中数字对应关系为:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图。由于不管提供文本是什么,模型大概率认为"思维导图"最合适,因此思维导图仅能通过参数调用。'
         for i in range(3):
             gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                 inputs=i_say,
                 inputs_show_user=i_say_show_user,
-                llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-                sys_prompt=""
-            )
+                llm_kwargs=llm_kwargs,
+                chatbot=chatbot,
+                history=[],
+                sys_prompt="",
+            )
-            if gpt_say in ['1','2','3','4','5','6','7','8','9']:  # 判断返回是否正确
+            if gpt_say in [
+                "1",
+                "2",
+                "3",
+                "4",
+                "5",
+                "6",
+                "7",
+                "8",
+                "9",
+            ]:  # 判断返回是否正确
                 break
-        if gpt_say not in ['1','2','3','4','5','6','7','8','9']:
-            gpt_say = '1'
+        if gpt_say not in ["1", "2", "3", "4", "5", "6", "7", "8", "9"]:
+            gpt_say = "1"
     ############################## <第 3 步,根据选择的图表类型绘制图表> ##################################
-    if gpt_say == '1':
+    if gpt_say == "1":
         i_say = PROMPT_1.format(subject=results_txt)
-    elif gpt_say == '2':
+    elif gpt_say == "2":
         i_say = PROMPT_2.format(subject=results_txt)
-    elif gpt_say == '3':
+    elif gpt_say == "3":
         i_say = PROMPT_3.format(subject=results_txt)
-    elif gpt_say == '4':
+    elif gpt_say == "4":
         i_say = PROMPT_4.format(subject=results_txt)
-    elif gpt_say == '5':
+    elif gpt_say == "5":
         i_say = PROMPT_5.format(subject=results_txt)
-    elif gpt_say == '6':
+    elif gpt_say == "6":
         i_say = PROMPT_6.format(subject=results_txt)
-    elif gpt_say == '7':
-        i_say = PROMPT_7.replace("{subject}", results_txt)  #由于实体关系图用到了{}符号
+    elif gpt_say == "7":
+        i_say = PROMPT_7.replace("{subject}", results_txt)  # 由于实体关系图用到了{}符号
-    elif gpt_say == '8':
+    elif gpt_say == "8":
         i_say = PROMPT_8.format(subject=results_txt)
-    elif gpt_say == '9':
+    elif gpt_say == "9":
         i_say = PROMPT_9.format(subject=results_txt)
-    i_say_show_user = f'请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。'
+    i_say_show_user = f"请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。"
     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
         inputs=i_say,
         inputs_show_user=i_say_show_user,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-        sys_prompt=""
-    )
+        llm_kwargs=llm_kwargs,
+        chatbot=chatbot,
+        history=[],
+        sys_prompt="",
+    )
     history.append(gpt_say)
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新

 @CatchException
-def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 生成多种Mermaid图表(
+    txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
+):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -248,15 +313,21 @@ def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     import os

     # 基本信息:功能、贡献者
-    chatbot.append([
-        "函数插件功能?",
-        "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
-        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
+    chatbot.append(
+        [
+            "函数插件功能?",
+            "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
+        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918",
+        ]
+    )
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

     if os.path.exists(txt):  # 如输入区无内容则直接解析历史记录
         from crazy_functions.pdf_fns.parse_word import extract_text_from_files
-        file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
+
+        file_exist, final_result, page_one, file_manifest, excption = (
+            extract_text_from_files(txt, chatbot, history)
+        )
     else:
         file_exist = False
         excption = ""
@@ -264,33 +335,104 @@ def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     if excption != "":
         if excption == "word":
-            report_exception(chatbot, history,
-                             a = f"解析项目: {txt}",
-                             b = f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。")
+            report_exception(
+                chatbot,
+                history,
+                a=f"解析项目: {txt}",
+                b=f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。",
+            )
         elif excption == "pdf":
-            report_exception(chatbot, history,
-                             a = f"解析项目: {txt}",
-                             b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
+            report_exception(
+                chatbot,
+                history,
+                a=f"解析项目: {txt}",
+                b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。",
+            )
         elif excption == "word_pip":
-            report_exception(chatbot, history,
-                             a=f"解析项目: {txt}",
-                             b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
+            report_exception(
+                chatbot,
+                history,
+                a=f"解析项目: {txt}",
+                b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。",
+            )
         yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
     else:
         if not file_exist:
             history.append(txt)  # 如输入区不是文件则将输入区内容加入历史记录
-            i_say_show_user = f'首先你从历史记录中提取摘要。'; gpt_say = "[Local Message] 收到。"  # 用户提示
-            chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
-            yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
+            i_say_show_user = f"首先你从历史记录中提取摘要。"
+            gpt_say = "[Local Message] 收到。"  # 用户提示
+            chatbot.append([i_say_show_user, gpt_say])
+            yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
+            yield from 解析历史输入(
+                history, llm_kwargs, file_manifest, chatbot, plugin_kwargs
+            )
         else:
             file_num = len(file_manifest)
             for i in range(file_num):  # 依次处理文件
-                i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"; gpt_say = "[Local Message] 收到。"  # 用户提示
-                chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
-                history = []  # 如输入区内容为文件则清空历史记录
+                i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"
+                gpt_say = "[Local Message] 收到。"  # 用户提示
+                chatbot.append([i_say_show_user, gpt_say])
+                yield from update_ui(chatbot=chatbot, history=history)  # 更新UI
+                history = []  # 如输入区内容为文件则清空历史记录
                 history.append(final_result[i])
-                yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
+                yield from 解析历史输入(
+                    history, llm_kwargs, file_manifest, chatbot, plugin_kwargs
+                )
+
+
+class Mermaid_Gen(GptAcademicPluginTemplate):
+    def __init__(self):
+        pass
+
+    def define_arg_selection_menu(self):
+        gui_definition = {
+            "Type_of_Mermaid": ArgProperty(
+                title="绘制的Mermaid图表类型",
+                options=[
+                    "由LLM决定",
+                    "流程图",
+                    "序列图",
+                    "类图",
+                    "饼图",
+                    "甘特图",
+                    "状态图",
+                    "实体关系图",
+                    "象限提示图",
+                    "思维导图",
+                ],
+                default_value="由LLM决定",
+                description="选择'由LLM决定'时将由对话模型判断适合的图表类型(不包括思维导图),选择其他类型时将直接绘制指定的图表类型。",
+                type="dropdown",
+            ).model_dump_json(),
+        }
+        return gui_definition
+
+    def execute(
+        txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request
+    ):
+        options = [
+            "由LLM决定",
+            "流程图",
+            "序列图",
+            "类图",
+            "饼图",
+            "甘特图",
+            "状态图",
+            "实体关系图",
+            "象限提示图",
+            "思维导图",
+        ]
+        plugin_kwargs = options.index(plugin_kwargs["Type_of_Mermaid"])
+        yield from 生成多种Mermaid图表(
+            txt,
+            llm_kwargs,
+            plugin_kwargs,
+            chatbot,
+            history,
+            system_prompt,
+            user_request,
+        )
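
For reference, the path from the dropdown to the prompts in this diff: `Mermaid_Gen.execute` replaces `plugin_kwargs` with the chosen option's index, and `解析历史输入` stringifies it via `gpt_say = str(plugin_kwargs)`, so indices 1–9 select `PROMPT_1`–`PROMPT_9` directly, while index 0 ("由LLM决定") is outside "1"–"9" and falls through to the model-based choice. A small sketch of that mapping:

```python
options = ["由LLM决定", "流程图", "序列图", "类图", "饼图",
           "甘特图", "状态图", "实体关系图", "象限提示图", "思维导图"]
# "流程图" -> index 1 -> "1" -> PROMPT_1; "由LLM决定" -> index 0 -> "0",
# which is outside "1".."9", so the chat model is asked to pick the chart type.
assert str(options.index("流程图")) == "1"
assert str(options.index("由LLM决定")) not in [str(i) for i in range(1, 10)]
```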


@@ -1,6 +1,7 @@
 from toolbox import update_ui, promote_file_to_downloadzone, disable_auto_promotion
 from toolbox import CatchException, report_exception, write_history_to_file
-from .crazy_utils import input_clipping
+from shared_utils.fastapi_server import validate_path_safety
+from crazy_functions.crazy_utils import input_clipping

 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy

@@ -128,6 +129,7 @@ def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -146,6 +148,7 @@ def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析Matlab项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -164,6 +167,7 @@ def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, his
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -184,6 +188,7 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -206,6 +211,7 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

@@ -228,6 +234,7 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

@@ -257,6 +264,7 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

@@ -278,6 +286,7 @@ def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

@@ -298,6 +307,7 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -320,6 +330,7 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

@@ -357,6 +368,7 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     import glob, os, re
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
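
Each entry point now validates the user-supplied path before globbing the project folder. As a rough mental model only — the actual checks live in `shared_utils.fastapi_server` and may differ — the guard can be pictured like this (the allowed roots below are assumptions, not from this diff):

```python
import os

def validate_path_safety_sketch(path, user):
    # Conceptual sketch, NOT the real implementation: resolve the path and
    # require it to sit under a directory this user may read.
    real = os.path.realpath(path)
    allowed = [os.path.realpath(p) for p in (f"private_upload/{user}", "gpt_log")]  # assumed roots
    if not any(real == root or real.startswith(root + os.sep) for root in allowed):
        raise RuntimeError(f"无权访问: {path}")
```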


@@ -2,6 +2,10 @@ from toolbox import CatchException, update_ui
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime

+####################################################################################################################
+# Demo 1: 一个非常简单的插件 #########################################################################################
+####################################################################################################################
+
 高阶功能模板函数示意图 = f"""
 ```mermaid
 flowchart TD
@@ -26,7 +30,7 @@ flowchart TD
 """

 @CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, num_day=5):
     """
     # 高阶功能模板函数示意图:https://mermaid.live/edit#pako:eNptk1tvEkEYhv8KmattQpvlvOyFCcdeeaVXuoYssBwie8gyhCIlqVoLhrbbtAWNUpEGUkyMEDW2Fmn_DDOL_8LZHdOwxrnamX3f7_3mmZk6yKhZCfAgV1KrmYKoQ9fDuKC4yChX0nld1Aou1JzjznQ5fWmejh8LYHW6vG2a47YAnlCLNSIRolnenKBXI_zRIBrcuqRT890u7jZx7zMDt-AaMbnW1--5olGiz2sQjwfoQxsZL0hxplSSU0-rop4vrzmKR6O2JxYjHmwcL2Y_HDatVMkXlf86YzHbGY9bO5j8XE7O8Nsbc3iNB3ukL2SMcH-XIQBgWoVOZzxuOxOJOyc63EPGV6ZQLENVrznViYStTiaJ2vw2M2d9bByRnOXkgCnXylCSU5quyto_IcmkbdvctELmJ-j1ASW3uB3g5xOmKqVTmqr_Na3AtuS_dtBFm8H90XJyHkDDT7S9xXWb4HGmRChx64AOL5HRpUm411rM5uh4H78Z4V7fCZzytjZz2seto9XaNPFue07clLaVZF8UNLygJ-VES8lah_n-O-5Ozc7-77NzJ0-K0yr0ZYrmHdqAk50t2RbA4qq9uNohBASw7YpSgaRkLWCCAtxAlnRZLGbJba9bPwUAC5IsCYAnn1kpJ1ZKUACC0iBSsQLVBzUlA3ioVyQ3qGhZEUrxokiehAz4nFgqk1VNVABfB1uAD_g2_AGPl-W8nMcbCvsDblADfNCz4feyobDPy3rYEMtxwYYbPFNVUoHdCPmDHBv2cP4AMfrCbiBli-Q-3afv0X6WdsIjW2-10fgDy1SAig
@@ -43,7 +47,7 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         "您正在调用插件:历史上的今天",
         "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!" + 高阶功能模板函数示意图))
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
-    for i in range(5):
+    for i in range(int(num_day)):
         currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
         currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
         i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。'
@@ -59,6 +63,56 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s

+####################################################################################################################
+# Demo 2: 一个带二级菜单的插件 #######################################################################################
+####################################################################################################################
+
+from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
+
+class Demo_Wrap(GptAcademicPluginTemplate):
+    def __init__(self):
+        """
+        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
+        """
+        pass
+
+    def define_arg_selection_menu(self):
+        """
+        定义插件的二级选项菜单
+        """
+        gui_definition = {
+            "num_day":
+                ArgProperty(title="日期选择", options=["仅今天", "未来3天", "未来5天"], default_value="未来3天", description="", type="dropdown").model_dump_json(),
+        }
+        return gui_definition
+
+    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+        """
+        执行插件
+        """
+        num_day = plugin_kwargs["num_day"]
+        if num_day == "仅今天": num_day = 1
+        if num_day == "未来3天": num_day = 3
+        if num_day == "未来5天": num_day = 5
+        yield from 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, num_day=num_day)
+
+####################################################################################################################
+# Demo 3: 绘制脑图的Demo ############################################################################################
+####################################################################################################################
+
 PROMPT = """
 请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
 ```mermaid


@@ -3,6 +3,9 @@
 # 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
 FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest

+# edge-tts需要的依赖,某些pip包所需的依赖
+RUN apt update && apt install ffmpeg build-essential -y
+
 # use python3 as the system default python
 WORKDIR /gpt
 RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8


@@ -5,6 +5,9 @@
 # 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
 FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest

+# edge-tts需要的依赖,某些pip包所需的依赖
+RUN apt update && apt install ffmpeg build-essential -y
+
 # use python3 as the system default python
 WORKDIR /gpt
 RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8

@@ -36,6 +39,7 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
 RUN python3 -m pip install nougat-ocr

 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -5,6 +5,8 @@ RUN apt-get update
 RUN apt-get install -y curl proxychains curl gcc
 RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing

+# edge-tts需要的依赖,某些pip包所需的依赖
+RUN apt update && apt install ffmpeg build-essential -y

 # use python3 as the system default python
 RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8

@@ -22,7 +24,6 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt

 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -23,6 +23,9 @@ RUN python3 -m pip install -r request_llms/requirements_jittorllms.txt -i https:
 # 下载JittorLLMs
 RUN git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llms/jittorllms

+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
+
 # 禁用缓存,确保更新代码
 ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
 RUN git pull


@@ -12,6 +12,8 @@ COPY . .
 # 安装依赖
 RUN pip3 install -r requirements.txt

+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y

 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -15,6 +15,9 @@ RUN pip3 install -r requirements.txt
 # 安装语音插件的额外依赖
 RUN pip3 install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git

+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
+
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -25,6 +25,9 @@ COPY . .
 # 安装依赖
 RUN pip3 install -r requirements.txt

+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
+
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -19,6 +19,9 @@ RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cp
 RUN pip3 install unstructured[all-docs] --upgrade
 RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'

+# edge-tts需要的依赖
+RUN apt update && apt install ffmpeg -y
+
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'


@@ -0,0 +1,189 @@
# 实现带二级菜单的插件
## 一、如何写带有二级菜单的插件
1. 声明一个 `Class`,继承父类 `GptAcademicPluginTemplate`
```python
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate
from crazy_functions.plugin_template.plugin_class_template import ArgProperty
class Demo_Wrap(GptAcademicPluginTemplate):
    def __init__(self): ...
```
2. 声明二级菜单中需要的变量,覆盖父类的`define_arg_selection_menu`函数。
```python
class Demo_Wrap(GptAcademicPluginTemplate):
    ...
    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单:
        第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="ArxivID", description="输入Arxiv的ID或者网址", default_value="", type="string").model_dump_json(),
            "advanced_arg":
                ArgProperty(title="额外的翻译提示词",
                            description=r"如果有必要, 请在此处给出自定义翻译命令",
                            default_value="", type="string").model_dump_json(),
            "allow_cache":
                ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="无", type="dropdown").model_dump_json(),
        }
        return gui_definition
    ...
```
> [!IMPORTANT]
>
> ArgProperty 中每个条目对应一个参数:`type == "string"`时,使用文本框;`type == "dropdown"`时,使用下拉菜单。
>
> 注意:`main_input` 和 `advanced_arg`是两个特殊的参数。`main_input`会自动与界面右上角的`输入区`进行同步,而`advanced_arg`会自动与界面右下角的`高级参数输入区`同步。除此之外,参数名称可以任意选取。其他细节详见`crazy_functions/plugin_template/plugin_class_template.py`。
3. 编写插件程序,覆盖父类的`execute`函数。
例如:
```python
class Demo_Wrap(GptAcademicPluginTemplate):
    ...
    ...
    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        执行插件
        plugin_kwargs字典中会包含用户的选择,与上述 `define_arg_selection_menu` 一一对应
        """
        allow_cache = plugin_kwargs["allow_cache"]
        advanced_arg = plugin_kwargs["advanced_arg"]
        if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
        yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
```
4. 注册插件
将以下条目插入`crazy_functional.py`即可。注意,与旧插件不同的是,`Function`键值应该为None,而`Class`键值为上述插件的类名称(`Demo_Wrap`)。
```
"新插件": {
"Group": "学术",
"Color": "stop",
"AsButton": True,
"Info": "插件说明",
"Function": None,
"Class": Demo_Wrap,
},
```
5. 已经结束了,启动程序测试吧~
## 二、背后的原理(需要JavaScript的前置知识)
### (I) 首先介绍三个Gradio官方没有的重要前端函数
主javascript程序`common.js`中有三个Gradio官方没有的重要API
1. `get_data_from_gradio_component`
这个函数可以获取任意gradio组件的当前值,例如textbox中的字符,dropdown中的当前选项,chatbot当前的对话等等。调用方法举例
```javascript
// 获取当前的对话
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
```
2. `get_gradio_component`
有时候我们不仅需要gradio组件的当前值,还需要它的label值、是否隐藏、下拉菜单其他可选选项等等,而通过这个函数可以直接获取这个组件的句柄。举例
```javascript
// 获取下拉菜单组件的句柄
var model_sel = await get_gradio_component("elem_model_sel");
// 获取它的所有属性,包括其所有可选选项
console.log(model_sel.props)
```
3. `push_data_to_gradio_component`
这个函数可以将数据推回gradio组件,例如textbox中的字符,dropdown中的当前选项等等。调用方法举例
```javascript
// 修改一个按钮上面的文本
push_data_to_gradio_component("btnName", "gradio_element_id", "string");
// 隐藏一个组件
push_data_to_gradio_component({ visible: false, __type__: 'update' }, "plugin_arg_menu", "obj");
// 修改组件label
push_data_to_gradio_component({ label: '新label的值', __type__: 'update' }, "gpt-chatbot", "obj")
// 第一个参数是value,
// - 可以是字符串调整textbox的文本,按钮的文本
// - 还可以是 { visible: false, __type__: 'update' } 这样的字典调整visible, label, choices
// 第二个参数是elem_id
// 第三个参数是"string" 或者 "obj"
```
### (II) 从点击插件到执行插件的逻辑过程
简述:程序启动时把每个插件的二级菜单编码为BASE64,存储在用户的浏览器前端;用户调用对应功能时,会按照插件的BASE64编码,将平时隐藏的菜单有选择性地显示出来。
1. 启动阶段(主函数 `main.py` 中),遍历每个插件,生成二级菜单的BASE64编码,存入变量`register_advanced_plugin_init_code_arr`。
```python
def get_js_code_for_generating_menu(self, btnName):
    define_arg_selection = self.define_arg_selection_menu()
    DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
    return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')
```
2. 用户加载阶段主javascript程序`common.js`中),浏览器加载`register_advanced_plugin_init_code_arr`,存入本地的字典`advanced_plugin_init_code_lib`
```javascript
advanced_plugin_init_code_lib = {}
function register_advanced_plugin_init_code(key, code){
    advanced_plugin_init_code_lib[key] = code;
}
```
3. 用户点击插件按钮(主函数 `main.py` 中)时,仅执行以下javascript代码,唤醒隐藏的二级菜单(生成菜单的代码在`common.js`中的`generate_menu`函数上):
```javascript
// 生成高级插件的选择菜单
function run_advanced_plugin_launch_code(key){
    generate_menu(advanced_plugin_init_code_lib[key], key);
}
function on_flex_button_click(key){
    run_advanced_plugin_launch_code(key);
}
```
```python
click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_advanced_plugin_launch_code("{k}")""")
```
4. 当用户点击二级菜单的执行键时,通过javascript脚本模拟点击一个隐藏按钮,触发后续程序(`common.js`中的`execute_current_pop_up_plugin`会把二级菜单中的参数缓存到`invisible_current_pop_up_plugin_arg_final`,然后模拟点击`invisible_callback_btn_for_plugin_exe`按钮)。隐藏按钮的定义在主函数 `main.py` 中,该隐藏按钮最终会触发`route_switchy_bt_with_arg`函数(定义于`themes/gui_advanced_plugin_class.py`):
```python
click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg, [
    gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order),
    new_plugin_callback, usr_confirmed_arg, *input_combo
], output_combo)
```
5. 最后,`route_switchy_bt_with_arg`中,会搜集所有用户参数,统一集中到`plugin_kwargs`参数中,并执行对应插件的`execute`函数。
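
作为补充,下面给出一个极简示意(并非 `themes/gui_advanced_plugin_class.py` 的真实实现,其中 `PLUGIN_REGISTRY`、`lookup_plugin` 均为假设的辅助定义),用来说明第5步"搜集参数 → 集中到 `plugin_kwargs` → 调用 `execute`"的思路:

```python
import json

PLUGIN_REGISTRY = {}  # 假设:插件名 -> 插件类(其 execute 不含 self,见上文模板)

def lookup_plugin(name):  # 假设的辅助函数
    return PLUGIN_REGISTRY[name]

def route_switchy_bt_with_arg_sketch(input_order, *args):
    # 按声明顺序将输入名与收到的值配对
    arg_dict = dict(zip(input_order, args))
    # 二级菜单缓存的用户选择(JSON字符串)
    usr_confirmed_arg = json.loads(arg_dict["usr_confirmed_arg"])
    plugin = lookup_plugin(usr_confirmed_arg["plugin_name"])
    plugin_kwargs = usr_confirmed_arg["args"]  # 统一集中的用户参数
    yield from plugin.execute(
        arg_dict["txt"], arg_dict["llm_kwargs"], plugin_kwargs,
        arg_dict["chatbot"], arg_dict["history"],
        arg_dict["system_prompt"], arg_dict["user_request"],
    )
```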


@@ -22,13 +22,13 @@
 | crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 |
 | crazy_functions\代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 |
 | crazy_functions\图片生成.py | 根据激励文本使用GPT模型生成相应的图像 |
-| crazy_functions\对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 |
+| crazy_functions\Conversation_To_File.py | 将每次对话记录写入Markdown格式的文件中 |
 | crazy_functions\总结word文档.py | 对输入的word文档进行摘要生成 |
 | crazy_functions\总结音视频.py | 对输入的音视频文件进行摘要生成 |
-| crazy_functions\批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 |
+| crazy_functions\Markdown_Translate.py | 将指定目录下的Markdown文件进行中英文翻译 |
 | crazy_functions\批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 |
 | crazy_functions\批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 |
-| crazy_functions\PDF批量翻译.py | 将指定目录下的PDF文件进行中英文翻译 |
+| crazy_functions\PDF_Translate.py | 将指定目录下的PDF文件进行中英文翻译 |
 | crazy_functions\理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 |
 | crazy_functions\生成函数注释.py | 自动生成Python函数的注释 |
 | crazy_functions\联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 |

@@ -155,9 +155,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件提供了一个用于生成图像的函数`图片生成`。函数实现的过程中,会调用`gen_image`函数来生成图像,并返回图像生成的网址和本地文件地址。函数有多个参数,包括`prompt`(激励文本)、`llm_kwargs`(GPT模型的参数)、`plugin_kwargs`(插件模型的参数)等。函数核心代码使用了`requests`库向OpenAI API请求图像,并做了简单的处理和保存。函数还更新了交互界面,清空聊天历史并显示正在生成图像的消息和最终的图像网址和预览。

-## [18/48] 请对下面的程序文件做一个概述: crazy_functions\对话历史存档.py
+## [18/48] 请对下面的程序文件做一个概述: crazy_functions\Conversation_To_File.py

-这个文件是名为crazy_functions\对话历史存档.py的Python程序文件,包含了4个函数:
+这个文件是名为crazy_functions\Conversation_To_File.py的Python程序文件,包含了4个函数:

 1. write_chat_to_file(chatbot, history=None, file_name=None):用来将对话记录以Markdown格式写入文件中,并且生成文件名,如果没指定文件名则用当前时间。写入完成后将文件路径打印出来。

@@ -165,7 +165,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。

-4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
+4. Conversation_To_File(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。

 ## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py

@@ -175,9 +175,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件包括两个函数split_audio_file()和AnalyAudio(),并且导入了一些必要的库并定义了一些工具函数。split_audio_file用于将音频文件分割成多个时长相等的片段,返回一个包含所有切割音频片段文件路径的列表,而AnalyAudio用来分析音频文件,通过调用whisper模型进行音频转文字并使用GPT模型对音频内容进行概述,最终将所有总结结果写入结果文件中。

-## [21/48] 请对下面的程序文件做一个概述: crazy_functions\批量Markdown翻译.py
+## [21/48] 请对下面的程序文件做一个概述: crazy_functions\Markdown_Translate.py

-该程序文件名为`批量Markdown翻译.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。
+该程序文件名为`Markdown_Translate.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。

 ## [22/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档.py

@@ -187,9 +187,9 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
 该程序文件是一个用于批量总结PDF文档的函数插件,使用了pdfminer插件和BeautifulSoup库来提取PDF文档的文本内容,对每个PDF文件分别进行处理并生成中英文摘要。同时,该程序文件还包括一些辅助工具函数和处理异常的装饰器。

-## [24/48] 请对下面的程序文件做一个概述: crazy_functions\PDF批量翻译.py
+## [24/48] 请对下面的程序文件做一个概述: crazy_functions\PDF_Translate.py

-这个程序文件是一个Python脚本,文件名为“PDF批量翻译.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。
+这个程序文件是一个Python脚本,文件名为“PDF_Translate.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。

 ## [25/48] 请对下面的程序文件做一个概述: crazy_functions\理解PDF文档内容.py

@@ -331,19 +331,19 @@ check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, c
 这些程序源文件提供了基础的文本和语言处理功能、工具函数和高级插件,使 Chatbot 能够处理各种复杂的学术文本问题,包括润色、翻译、搜索、下载、解析等。

 ## 用一张Markdown表格简要描述以下文件的功能

-crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\对话历史存档.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\PDF批量翻译.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。
+crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\Conversation_To_File.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\Markdown_Translate.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\PDF_Translate.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。

 | 文件名 | 功能简述 |
 | --- | --- |
 | 代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 |
 | 图片生成.py | 根据激励文本使用GPT模型生成相应的图像 |
-| 对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 |
+| Conversation_To_File.py | 将每次对话记录写入Markdown格式的文件中 |
 | 总结word文档.py | 对输入的word文档进行摘要生成 |
 | 总结音视频.py | 对输入的音视频文件进行摘要生成 |
-| 批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 |
+| Markdown_Translate.py | 将指定目录下的Markdown文件进行中英文翻译 |
 | 批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 |
 | 批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 |
-| PDF批量翻译.py | 将指定目录下的PDF文件进行中英文翻译 |
+| PDF_Translate.py | 将指定目录下的PDF文件进行中英文翻译 |
 | 理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 |
 | 生成函数注释.py | 自动生成Python函数的注释 |
 | 联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 |


@@ -36,15 +36,12 @@
     "总结word文档": "SummarizingWordDocuments",
     "解析ipynb文件": "ParsingIpynbFiles",
     "解析JupyterNotebook": "ParsingJupyterNotebook",
-    "对话历史存档": "ConversationHistoryArchive",
-    "载入对话历史存档": "LoadConversationHistoryArchive",
+    "载入Conversation_To_File": "LoadConversationHistoryArchive",
     "删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
     "Markdown英译中": "TranslateMarkdownFromEnglishToChinese",
-    "批量Markdown翻译": "BatchTranslateMarkdown",
     "批量总结PDF文档": "BatchSummarizePDFDocuments",
     "批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsUsingPdfminer",
     "批量翻译PDF文档": "BatchTranslatePDFDocuments",
-    "PDF批量翻译": "BatchTranslatePDFDocuments_MultiThreaded",
     "谷歌检索小助手": "GoogleSearchAssistant",
     "理解PDF文档内容标准文件输入": "UnderstandPdfDocumentContentStandardFileInput",
     "理解PDF文档内容": "UnderstandPdfDocumentContent",
@@ -313,7 +310,7 @@
     "注意": "Attention",
     "以下“红颜色”标识的函数插件需从输入区读取路径作为参数": "The function plugins marked in 'red' below need to read the path from the input area as a parameter",
     "更多函数插件": "More function plugins",
-    "打开插件列表": "Open plugin list",
+    "点击这里搜索插件列表": "Click Here to Search the Plugin List",
     "高级参数输入区": "Advanced parameter input area",
     "这里是特殊函数插件的高级参数输入区": "Here is the advanced parameter input area for special function plugins",
     "请先从插件列表中选择": "Please select from the plugin list first",
@@ -1668,7 +1665,6 @@
     "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
     "Langchain知识库": "LangchainKnowledgeBase",
     "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
-    "Latex输出PDF": "OutputPDFFromLatex",
     "Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
     "sprint亮靛": "SprintIndigo",
     "寻找Latex主文件": "FindLatexMainFile",
@@ -3748,5 +3744,237 @@
     "中文省略号": "Chinese Ellipsis",
     "则不生效": "Will not take effect",
     "目前是两位小数": "Currently is two decimal places",
-    "Incorrect API key. Cohere以提供了不正确的API_KEY为由": "Incorrect API key. Cohere reports an incorrect API_KEY."
+    "Incorrect API key. Cohere以提供了不正确的API_KEY为由": "Incorrect API key. Cohere reports an incorrect API_KEY.",
"应当慎之又慎!": "Should be extremely cautious!",
"、后端setter": "backend setter",
"对于 Run1 的数据": "data for Run1",
"另一种更简单的setter方法": "another simpler setter method",
"完成解析": "complete parsing",
"自动同步": "automatic synchronization",
"**表8**": "**Table 8**",
"安装方法见": "Installation method see",
"通过更严格的 PID 选择对π介子和 K 介子进行过滤以减少主要鉴别为 π 介子的 K 介子等峰背景的污染": "Filtering π mesons and K mesons with a stricter PID to reduce contamination of K mesons mainly identified as π mesons",
"并且占据高质量边带的候选体会被拒绝": "And candidates occupying high-quality sidebands are rejected",
"GPT-SOVITS 文本转语音服务的运行地址": "Operating address of GPT-SOVITS text-to-speech service",
"PDF文件路径": "PDF file path",
"注意图片大约占用1": "Note that the image takes up about 1",
"以便可以研究BDT输入": "So that BDT inputs can be studied",
"是否自动打开浏览器页面": "Whether to automatically open the browser page",
"中此模型的APIKEY的名字": "The name of the APIKEY for this model",
"{0.8} $ 和 $ \\operatorname{ProbNNk}\\left": "{0.8} $ and $ \\operatorname{ProbNNk}\\left",
"请检测终端输出": "Please check the terminal output",
"注册账号并获取API KEY": "Register an account and get an API KEY",
"-=-=-=-=-=-=-= 👇 以下是多模型路由切换函数 -=-=-=-=-=-=-=": "-=-=-=-=-=-=-= 👇 The following is a multi-model route switching function -=-=-=-=-=-=-=",
"如不设置": "If not set",
"如果只询问“一个”大语言模型": "If only asking about 'one' large language model",
"并非为了计算权重而专门施加了附加选择": "Not specifically applying additional selection for weight calculation",
"DOC2X的PDF解析服务": "PDF parsing service of DOC2X",
"两兄弟": "Two brothers",
"相同的切割也用于Run2和Run1数据": "The same segmentation is also used for Run2 and Run1 data",
"返回的数据流第一次为空": "The returned data stream is empty for the first time",
"对于光子 PID": "For photon PID",
"例如chatglm&gpt-3.5-turbo&gpt-4": "For example chatglm&gpt-3.5-turbo&gpt-4",
"第二种方法": "The second method",
"BDT 模型的系统性误差使用通过拟合通过和未通过所选 BDT 截断值的 $ B $ 候选体质量分布的异构同位旋对称模式进行评估": "The systematic error of the BDT model is evaluated using the heterogeneous isospin symmetry mode of the candidate body mass distribution of $ B $ selected by fitting through and not through the selected BDT truncation value",
"通过比较模拟和真实的 $ {B}^{ + } \\rightarrow J/\\psi {K}^{* + } $ 衰变样本来计算权重": "Calculate weights by comparing simulated and real $ {B}^{ + } \\rightarrow J/\\psi {K}^{* + } $ decay samples",
"上下文长度超过glm-4v上限2000tokens": "The context length exceeds the upper limit of 2000 tokens for glm-4v",
"通过为每个模拟信号候选分配权重来校正模拟和碰撞数据之间的一些差异": "Correct some differences between simulated and collision data by assigning weights to each simulated signal candidate",
"2016 年上磁场数据集中通过松散选择": "Loose selection in the 2016 upper magnetic field data set",
"定义history的一个孪生的前端存储区": "Define a twin front-end storage area for history",
"为默认值;": "For the default value;",
"一个带二级菜单的插件": "A plugin with a secondary menu",
"用于": "Used for",
"每次请求的最大token数量": "Maximum token count for each request",
"输入Arxiv的ID或者网址": "Enter the Arxiv ID or URL",
"采用哪种方法执行转换": "Which method to use for transformation",
"定义history_cache-": "Define history_cache-",
"再点击该插件": "Click the plugin again",
"隐藏": "Hide",
"第三个参数": "The third parameter",
"声明这是一个文本框": "Declare this as a text box",
"其准则为拒绝已知 $ {B}^{ + } $ 质量内 $ \\pm {50}\\mathrm{{MeV}}/{c}^{2} $ 范围内的候选体": "Its criterion is to reject candidates within $ \\pm {50}\\mathrm{{MeV}}/{c}^{2} $ of the known $ {B}^{ + } $ mass",
"第一种方法": "The first method",
"正在尝试GROBID": "Trying GROBID",
"定义新一代插件的高级参数区": "Define the advanced parameter area for the new generation of plugins",
"047个tokens": "47 tokens",
"PDF解析方法": "PDF parsing method",
"缺失 DOC2X_API_KEY": "Missing DOC2X_API_KEY",
"第二个参数": "The second parameter",
"将只取第一张图片进行处理": "Only the first image will be processed",
"请检查配置文件的": "Please check the configuration file",
"此函数已经弃用!!新函数位于": "This function has been deprecated!! The new function is located at",
"同样地": "Similarly",
"的 $ J/\\psi {K}^{ + }{\\pi }^{0} $ 和 $ J/\\psi {K}^{ + } $ 质量的分布": "The distribution of the masses of $ J/\\psi {K}^{ + }{\\pi }^{0} $ and $ J/\\psi {K}^{ + } $",
"取消": "Cancel",
"3.8 对 BDT 系统误差的严格 PID 选择": "Strict PID selection for BDT system errors at 3.8",
"发送至DOC2X解析": "Send to DOC2X for parsing",
"在触发这个按钮时": "When triggering this button",
"例如对于01万物的yi-34b-chat-200k": "For example, for 010,000 items yi-34b-chat-200k",
"继续等待": "Continue waiting",
"留空则使用时间作为文件名": "Leave blank to use time as the file name",
"获得以下报错信息": "Get the following error message",
"ollama模型": "Ollama model",
"要求如下": "Requirements are as follows",
"不包括思维导图": "Excluding mind maps",
"则用指定模型覆盖全局模型": "Then override the global model with the specified model",
"DOC2X服务不可用": "DOC2X service is not available",
"则抛出异常": "Then throw an exception",
"幻方-深度求索大模型在线API -=-=-=-=-=-=-": "Magic Square - Deep Quest Large Model Online API -=-=-=-=-=-=-",
"详见 themes/common.js": "See themes/common.js",
"如果尝试加载未授权的类": "If trying to load unauthorized class",
"因此真实样本包含一定比例的背景": "Therefore, real samples contain a certain proportion of background",
"热更新Prompt & ModelOverride": "Hot update Prompt & ModelOverride",
"可能的原因是": "Possible reasons are",
"因此仅BDT进入相应的选择": "So only BDT enters the corresponding selection",
"⚠请不要与模型的最大token数量相混淆": "⚠️ Do not confuse with the maximum token number of the model",
"为openai格式的API生成响应函数": "Generate response function for OpenAI format API",
"API异常": "API exception",
"调用Markdown插件": "Call Markdown plugin",
"报告已经添加到右侧“文件下载区”": "The report has been added to the right 'File Download Area'",
"把PDF文件拖入对话": "Drag the PDF file into the dialogue",
"根据基础功能区 ModelOverride 参数调整模型类型": "Adjust the model type according to the ModelOverride parameter in the basic function area",
"vllm 对齐支持 -=-=-=-=-=-=-": "VLLM alignment support -=-=-=-=-=-=-",
"强制点击此基础功能按钮时": "When forcing to click this basic function button",
"请上传文件后": "Please upload the file first",
"解析错误": "Parsing error",
"APIKEY为空": "APIKEY is empty",
"效果最好": "Best effect",
"未来5天": "Next 5 days",
"会先执行js代码更新history_cache": "Will first execute js code to update history_cache",
"下拉菜单的选项为": "The options in the dropdown menu are",
"额外的翻译提示词": "Additional translation prompts",
"这三个切割也用于选择 $ {B}^{ + } \\rightarrow J/\\psi {K}^{* + } $ 衰变": "These three cuts are also used to select $ {B}^{ + } \\rightarrow J/\\psi {K}^{* + } $ decay",
"借鉴自同目录下的bridge_chatgpt.py": "Inspired by bridge_chatgpt.py in the same directory",
"其中质量从 DTF 四维向量重新计算以改善测量的线形": "Recalculate the mass from the DTF four-vector to improve the linearity of the measurement",
"移除任何不安全的元素": "Remove any unsafe elements",
"默认返回原参数": "Return the original parameters by default",
"三兄弟": "Three brothers",
"为下拉菜单默认值;": "As the default value for the dropdown menu;",
"翻译后的带图文档.zip": "Translated document with images.zip",
"是否使用代理": "Whether to use a proxy",
"新一代插件的高级参数区确认按钮": "Confirmation button for the advanced parameter area of the new generation plugin",
"声明这是一个下拉菜单": "Declare that this is a dropdown menu",
"ffmpeg未安装": "FFmpeg not installed",
"围绕 $ {K}^{* + } $ 的质量窗口从 $ \\pm {100} $ 缩小至 $ \\pm {75}\\mathrm{{MeV}}/{c}^{2} $": "Narrow the mass window around $ {K}^{* + } $ from $ \\pm {100} $ to $ \\pm {75}\\mathrm{{MeV}}/{c}^{2} $",
"保存文件名": "Save file name",
"第三种方法": "The third method",
"$ 缩减到 $ \\left\\lbrack {{75}": "$ Reduced to $ \\left\\lbrack {{75}",
"清理提取路径": "Clean up the extraction path",
"history的更新方法": "Method to update the history",
"定义history的后端state": "Define the backend state of the history",
"生成包含图片的压缩包": "Generate a compressed package containing images",
"执行插件": "Execute the plugin",
"使用指定的模型": "Use the specified model",
"只允许特定的类进行反序列化": "Only allow specific classes to be deserialized",
"是否允许从缓存中调取结果": "Whether to allow fetching results from the cache",
"效果不理想": "The effect is not ideal",
"这计算是在不需要BDT要求的情况下进行的": "This calculation is done without the need for BDT requirements",
"生成在线预览": "Generate online preview",
"主输入": "Primary input",
"定义允许的安全类": "Define allowed security classes",
"其最大请求数为4096": "Its maximum request number is 4096",
"在线预览翻译": "Online preview translation",
"其中传入参数": "Among the incoming parameters",
"下载Gradio主题时出现异常": "An exception occurred when downloading the Gradio theme",
"修正一些公式问题": "Correcting some formula issues",
"对专有名词、翻译语气等方面的要求": "Requirements for proper nouns, translation tone, etc.",
"替换成$$": "Replace with $$",
"主要用途": "Main purpose",
"允许 $ {\\pi }^{0} $ 候选体的质量范围从 $ \\left\\lbrack {0": "Allow the mass range of the $ {\\pi }^{0} $ candidate from $ \\left\\lbrack {0",
"$ {B}^{ + } $ 衰变到 $ J/\\psi {K}^{ + } $": "$ {B}^{ + } $ decays to $ J/\\psi {K}^{ + } $",
"未指定路径": "Path not specified",
"True为不使用": "True means not in use",
"尝试获取完整的错误信息": "Attempt to get the complete error message",
"仅今天": "Only today",
"图 12": "Figure 12",
"效果次优": "Effect is suboptimal",
"绘制的Mermaid图表类型": "Types of Mermaid charts drawn",
"vllm模型": "VLLM model",
"文本框上方显示": "Displayed above the text box",
"未来3天": "Next 3 days",
"在这里添加其他安全的类": "Add other secure classes here",
"额外提示词": "Additional prompt words",
"由于在等离子体共轭模式中没有光子": "Due to no photons in the plasma conjugate mode",
"将公式中的\\": "Escape the backslash in the formula",
"插件功能": "Plugin function",
"设置5秒不准咬人": "Disallow biting for 5 seconds",
"定义cookies的后端state": "Define the backend state of cookies",
"选择其他类型时将直接绘制指定的图表类型": "Directly draw the specified chart type when selecting another type",
"替换成$": "Replace with $",
"自动从输入框同步": "Automatically sync from the input box",
"第一个参数": "The first parameter",
"注意需要使用双引号将内容括起来": "Note that you need to enclose the content in double quotes",
"下拉菜单上方显示": "Display above the dropdown menu",
"把history转存history_cache备用": "Transfer history to history_cache for backup",
"从头执行": "Execute from the beginning",
"选择插件参数": "Select plugin parameters",
"您还可以在接入one-api/vllm/ollama时": "You can also access one-api/vllm/ollama",
"输入对话存档文件名": "Enter the dialogue archive file name",
"但是需要DOC2X服务": "But DOC2X service is required",
"相反": "On the contrary",
"你好👋": "Hello👋",
"生成在线预览html": "Generate online preview HTML",
"为简化拟合模型": "To simplify the fitting model",
"、前端": "Front end",
"定义插件的二级选项菜单": "Define the secondary option menu of the plugin",
"未选定任何插件": "No plugin selected",
"以上三种方法都试一遍": "Try all three methods above once",
"一个非常简单的插件": "A very simple plugin",
"为了更灵活地接入ollama多模型管理界面": "In order to more flexibly access the ollama multi-model management interface",
"文本框内部显示": "Text box internal display",
"☝️ 以上是模型路由 -=-=-=-=-=-=-=-=-=": "☝️ The above is the model route -=-=-=-=-=-=-=-=-=",
"则使用当前全局模型;如设置": "Then use the current global model; if set",
"由LLM决定": "Decided by LLM",
"4 对模拟的修正": "4 corrections to the simulation",
"glm-4v只支持一张图片": "glm-4v only supports one image",
"这个并发量稍微大一点": "This concurrency is slightly larger",
"无法处理EdgeTTS音频": "Unable to handle EdgeTTS audio",
"早期代码": "Early code",
"您可以调用下拉菜单中的“LoadChatHistoryArchive”还原当下的对话": "You can use the 'LoadChatHistoryArchive' in the drop-down menu to restore the current conversation",
"因此您在定义和使用类变量时": "So when you define and use class variables",
"这将通过sPlot方法进行减除": "This will be subtracted through the sPlot method",
"然后再执行python代码更新history": "Then execute python code to update history",
"新一代插件需要注册Class": "The new generation plugin needs to register Class",
"请选择": "Please select",
"旧插件的高级参数区确认按钮": "Confirm button in the advanced parameter area of the old plugin",
"多数情况": "In most cases",
"ollama 对齐支持 -=-=-=-=-=-=-": "ollama alignment support -=-=-=-=-=-=-",
"用该压缩包+Conversation_To_File进行反馈": "Use this compressed package + Conversation_To_File for feedback",
"名称": "Name",
"错误处理部分": "Error handling section",
"False为使用": "False for use",
"详细方法见第4节": "See Section 4 for detailed methods",
"在应用元组裁剪后": "After applying tuple clipping",
"深度求索": "Deep Search",
"绘制脑图的Demo": "Demo for Drawing Mind Maps",
"需要在表格前加上一个emoji": "Need to add an emoji in front of the table",
"批量Markdown翻译": "Batch Markdown Translation",
"将语言模型的生成文本朗读出来": "Read aloud the generated text of the language model",
"Function旧接口仅会在“VoidTerminal”中起作用": "The old interface of Function only works in 'VoidTerminal'",
"请配置 DOC2X_API_KEY": "Please configure DOC2X_API_KEY",
"如果同时询问“多个”大语言模型": "If inquiring about 'multiple' large language models at the same time",
"3.7 用于MC校正的宽松选择": "3.7 Loose selection for MC correction",
"咬的也不是人": "Not biting humans either",
"定义 后端state": "Define backend state",
"这个隐藏textbox负责装入当前弹出插件的属性": "This hidden textbox is responsible for loading the properties of the current pop-up plugin",
"会执行在不同的线程中": "Will be executed in different threads",
"定义cookies的一个孪生的前端存储区": "Define a twin front-end storage area for cookies",
"模型选择": "Model selection",
"应用于信号、标准化和等离子体共轭模式的最终切割": "Final cutting applied to signal, normalization, and plasma conjugate modes",
"确认参数并执行": "Confirm parameters and execute",
"请先上传文件": "Please upload the file first",
"以便公式渲染": "For formula rendering",
"加载PDF文件": "Load PDF file",
"LoadChatHistoryArchive | 输入参数为路径": "Load Chat History Archive | Input parameter is the path",
"日期选择": "Date selection",
"除 $ {B}^{ + } \\rightarrow J/\\psi {K}^{ + } $ 否决": "Veto except for $ {B}^{ + } \\rightarrow J/\\psi {K}^{ + } $",
"使用 0.2 的截断值会获得类似的效率": "Using a truncation value of 0.2 will achieve similar efficiency",
"请输入": "Please enter",
"当注册Class后": "After registering the Class",
"Markdown中使用不标准的表格": "Using non-standard tables in Markdown",
"采用非常宽松的截断值": "Using very loose truncation values",
"为了更灵活地接入vllm多模型管理界面": "To more flexibly access the vllm multi-model management interface",
"读取解析": "Read and parse",
"允许缓存": "Allow caching",
"Run2 中对 Kaon 鉴别的要求被收紧为 $ \\operatorname{ProbNNk}\\left": "The requirement for Kaon discrimination in Run2 has been tightened to $ \\operatorname{ProbNNk}\\left"
}


@@ -36,15 +36,15 @@
"总结word文档": "SummarizeWordDocument", "总结word文档": "SummarizeWordDocument",
"解析ipynb文件": "ParseIpynbFile", "解析ipynb文件": "ParseIpynbFile",
"解析JupyterNotebook": "ParseJupyterNotebook", "解析JupyterNotebook": "ParseJupyterNotebook",
"对话历史存档": "ConversationHistoryArchive", "Conversation_To_File": "ConversationHistoryArchive",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"删除所有本地对话历史记录": "DeleteAllLocalChatHistory", "删除所有本地对话历史记录": "DeleteAllLocalChatHistory",
"Markdown英译中": "MarkdownTranslateFromEngToChi", "Markdown英译中": "MarkdownTranslateFromEngToChi",
"批量Markdown翻译": "BatchTranslateMarkdown", "Markdown_Translate": "BatchTranslateMarkdown",
"批量总结PDF文档": "BatchSummarizePDFDocuments", "批量总结PDF文档": "BatchSummarizePDFDocuments",
"批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsUsingPDFMiner", "批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsUsingPDFMiner",
"批量翻译PDF文档": "BatchTranslatePDFDocuments", "批量翻译PDF文档": "BatchTranslatePDFDocuments",
"PDF批量翻译": "BatchTranslatePDFDocumentsUsingMultiThreading", "PDF_Translate": "BatchTranslatePDFDocumentsUsingMultiThreading",
"谷歌检索小助手": "GoogleSearchAssistant", "谷歌检索小助手": "GoogleSearchAssistant",
"理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPDFDocumentContent", "理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPDFDocumentContent",
"理解PDF文档内容": "UnderstandingPDFDocumentContent", "理解PDF文档内容": "UnderstandingPDFDocumentContent",
@@ -1492,7 +1492,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunction", "交互功能模板函数": "InteractiveFunctionTemplateFunction",
"交互功能函数模板": "InteractiveFunctionFunctionTemplate", "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
"Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
"Latex输出PDF": "LatexOutputPDFResult", "Latex_Function": "LatexOutputPDFResult",
"Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",


@@ -6,17 +6,14 @@
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison", "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
"下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract", "下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract",
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage", "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
"PDF批量翻译": "BatchTranslatePDFDocuments_MultiThreaded",
"下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract", "下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract",
"解析一个Python项目": "ParsePythonProject", "解析一个Python项目": "ParsePythonProject",
"解析一个Golang项目": "ParseGolangProject", "解析一个Golang项目": "ParseGolangProject",
"代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded", "代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded",
"解析一个CSharp项目": "ParsingCSharpProject", "解析一个CSharp项目": "ParsingCSharpProject",
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords", "删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
"批量Markdown翻译": "BatchTranslateMarkdown",
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion", "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
"Langchain知识库": "LangchainKnowledgeBase", "Langchain知识库": "LangchainKnowledgeBase",
"Latex输出PDF": "OutputPDFFromLatex",
"把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline", "把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
"Latex精细分解与转化": "DecomposeAndConvertLatex", "Latex精细分解与转化": "DecomposeAndConvertLatex",
"解析一个C项目的头文件": "ParseCProjectHeaderFiles", "解析一个C项目的头文件": "ParseCProjectHeaderFiles",
@@ -46,7 +43,7 @@
"高阶功能模板函数": "HighOrderFunctionTemplateFunctions", "高阶功能模板函数": "HighOrderFunctionTemplateFunctions",
"高级功能函数模板": "AdvancedFunctionTemplate", "高级功能函数模板": "AdvancedFunctionTemplate",
"总结word文档": "SummarizingWordDocuments", "总结word文档": "SummarizingWordDocuments",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"Latex中译英": "LatexChineseToEnglish", "Latex中译英": "LatexChineseToEnglish",
"Latex英译中": "LatexEnglishToChinese", "Latex英译中": "LatexEnglishToChinese",
"连接网络回答问题": "ConnectToNetworkToAnswerQuestions", "连接网络回答问题": "ConnectToNetworkToAnswerQuestions",
@@ -70,7 +67,6 @@
"读文章写摘要": "ReadArticleWriteSummary", "读文章写摘要": "ReadArticleWriteSummary",
"生成函数注释": "GenerateFunctionComments", "生成函数注释": "GenerateFunctionComments",
"解析项目本身": "ParseProjectItself", "解析项目本身": "ParseProjectItself",
"对话历史存档": "ConversationHistoryArchive",
"专业词汇声明": "ProfessionalTerminologyDeclaration", "专业词汇声明": "ProfessionalTerminologyDeclaration",
"解析docx": "ParseDocx", "解析docx": "ParseDocx",
"解析源代码新": "ParsingSourceCodeNew", "解析源代码新": "ParsingSourceCodeNew",
@@ -104,5 +100,11 @@
"随机小游戏": "RandomMiniGame", "随机小游戏": "RandomMiniGame",
"互动小游戏": "InteractiveMiniGame", "互动小游戏": "InteractiveMiniGame",
"解析历史输入": "ParseHistoricalInput", "解析历史输入": "ParseHistoricalInput",
"高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram" "高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram",
"载入对话历史存档": "LoadChatHistoryArchive",
"对话历史存档": "ChatHistoryArchive",
"解析PDF_DOC2X_转Latex": "ParsePDF_DOC2X_toLatex",
"解析PDF_基于DOC2X": "ParsePDF_basedDOC2X",
"解析PDF_简单拆解": "ParsePDF_simpleDecomposition",
"解析PDF_DOC2X_单文件": "ParsePDF_DOC2X_singleFile"
} }


@@ -35,15 +35,15 @@
"总结word文档": "SummarizeWordDocument", "总结word文档": "SummarizeWordDocument",
"解析ipynb文件": "ParseIpynbFile", "解析ipynb文件": "ParseIpynbFile",
"解析JupyterNotebook": "ParseJupyterNotebook", "解析JupyterNotebook": "ParseJupyterNotebook",
"对话历史存档": "ConversationHistoryArchive", "Conversation_To_File": "ConversationHistoryArchive",
"载入对话历史存档": "LoadConversationHistoryArchive", "载入Conversation_To_File": "LoadConversationHistoryArchive",
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords", "删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
"Markdown英译中": "MarkdownEnglishToChinese", "Markdown英译中": "MarkdownEnglishToChinese",
"批量Markdown翻译": "BatchMarkdownTranslation", "Markdown_Translate": "BatchMarkdownTranslation",
"批量总结PDF文档": "BatchSummarizePDFDocuments", "批量总结PDF文档": "BatchSummarizePDFDocuments",
"批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsPdfminer", "批量总结PDF文档pdfminer": "BatchSummarizePDFDocumentsPdfminer",
"批量翻译PDF文档": "BatchTranslatePDFDocuments", "批量翻译PDF文档": "BatchTranslatePDFDocuments",
"PDF批量翻译": "BatchTranslatePdfDocumentsMultithreaded", "PDF_Translate": "BatchTranslatePdfDocumentsMultithreaded",
"谷歌检索小助手": "GoogleSearchAssistant", "谷歌检索小助手": "GoogleSearchAssistant",
"理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPdfDocumentContent", "理解PDF文档内容标准文件输入": "StandardFileInputForUnderstandingPdfDocumentContent",
"理解PDF文档内容": "UnderstandingPdfDocumentContent", "理解PDF文档内容": "UnderstandingPdfDocumentContent",
@@ -1468,7 +1468,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunctions", "交互功能模板函数": "InteractiveFunctionTemplateFunctions",
"交互功能函数模板": "InteractiveFunctionFunctionTemplates", "交互功能函数模板": "InteractiveFunctionFunctionTemplates",
"Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
"Latex输出PDF": "OutputPDFFromLatex", "Latex_Function": "OutputPDFFromLatex",
"Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",

main.py

@@ -26,11 +26,12 @@ def enable_log(PATH_LOGGING):
def main():
    import gradio as gr
-   if gr.__version__ not in ['3.32.9']:
+   if gr.__version__ not in ['3.32.9', '3.32.10', '3.32.11']:
        raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
    from request_llms.bridge_all import predict
    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
-   # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
+   # 读取配置
    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
    CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
@@ -42,7 +43,7 @@ def main():
    PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
    from check_proxy import get_current_version
    from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
-   from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
+   from themes.theme import js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
    title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
@@ -70,6 +71,7 @@ def main():
    from check_proxy import check_proxy, auto_update, warm_up_modules
    proxy_info = check_proxy(proxies)
+   # 切换布局
    gr_L1 = lambda: gr.Row().style()
    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id, min_width=400)
    if LAYOUT == "TOP-DOWN":
@@ -84,7 +86,7 @@ def main():
with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block: with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block:
gr.HTML(title_html) gr.HTML(title_html)
secret_css = gr.Textbox(visible=False, elem_id="secret_css") secret_css = gr.Textbox(visible=False, elem_id="secret_css")
register_advanced_plugin_init_arr = ""
cookies, web_cookie_cache = make_cookie_cache() # 定义 后端statecookies、前端web_cookie_cache两兄弟 cookies, web_cookie_cache = make_cookie_cache() # 定义 后端statecookies、前端web_cookie_cache两兄弟
with gr_L1(): with gr_L1():
@@ -106,7 +108,7 @@ def main():
        with gr.Row():
            audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
        with gr.Row():
-           status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
+           status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。支持将文件直接粘贴到输入区。", elem_id="state-panel")
        with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
            with gr.Row():
@@ -122,18 +124,20 @@ def main():
predefined_btns.update({k: functional[k]["Button"]}) predefined_btns.update({k: functional[k]["Button"]})
with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn: with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
with gr.Row(): with gr.Row():
gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)") gr.Markdown("<small>插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)</small>")
with gr.Row(elem_id="input-plugin-group"): with gr.Row(elem_id="input-plugin-group"):
plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS, plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False) multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
with gr.Row(): with gr.Row():
for k, plugin in plugins.items(): for index, (k, plugin) in enumerate(plugins.items()):
if not plugin.get("AsButton", True): continue if not plugin.get("AsButton", True): continue
visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
variant = plugins[k]["Color"] if "Color" in plugin else "secondary" variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
info = plugins[k].get("Info", k) info = plugins[k].get("Info", k)
btn_elem_id = f"plugin_btn_{index}"
plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant, plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant,
visible=visible, info_str=f'函数插件区: {info}').style(size="sm") visible=visible, info_str=f'函数插件区: {info}', elem_id=btn_elem_id).style(size="sm")
plugin['ButtonElemId'] = btn_elem_id
with gr.Row(): with gr.Row():
with gr.Accordion("更多函数插件", open=True): with gr.Accordion("更多函数插件", open=True):
dropdown_fn_list = [] dropdown_fn_list = []
@@ -142,86 +146,30 @@ def main():
if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件 if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件
elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示 elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
with gr.Row(): with gr.Row():
dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False) dropdown = gr.Dropdown(dropdown_fn_list, value=r"点击这里搜索插件列表", label="", show_label=False).style(container=False)
with gr.Row(): with gr.Row():
plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, elem_id="advance_arg_input_legacy",
placeholder="这里是特殊函数插件的高级参数输入区").style(container=False) placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
with gr.Row(): with gr.Row():
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm") switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary", elem_id="elem_switchy_bt").style(size="sm")
with gr.Row(): with gr.Row():
with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up: with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload") file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"): # 左上角工具栏定义
with gr.Row(): from themes.gui_toolbar import define_gui_toolbar
with gr.Tab("上传文件", elem_id="interact-panel"): checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。") define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")
with gr.Tab("更换模型", elem_id="interact-panel"):
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, elem_id="elem_model_sel", label="更换LLM模型/请求源").style(container=False)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")
temperature.change(None, inputs=[temperature], outputs=None,
_js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
system_prompt.change(None, inputs=[system_prompt], outputs=None,
_js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")
md_dropdown.change(None, inputs=[md_dropdown], outputs=None,
_js="""(md_dropdown)=>gpt_academic_gradio_saveload("save", "elem_model_sel", "js_md_dropdown_cookie", md_dropdown)""")
with gr.Tab("界面外观", elem_id="interact-panel"):
theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
opt = ["自定义菜单"]
value=[]
if ADD_WAIFU: opt += ["添加Live2D形象"]; value += ["添加Live2D形象"]
checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
with gr.Tab("帮助", elem_id="interact-panel"):
gr.Markdown(help_menu_description)
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
with gr.Row() as row:
row.style(equal_height=True)
with gr.Column(scale=10):
txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
with gr.Column(scale=1, min_width=40):
submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
with gr.Row() as row:
with gr.Column(scale=10):
AVAIL_BTN = [btn for btn in customize_btns.keys()] + [k for k in functional]
basic_btn_dropdown = gr.Dropdown(AVAIL_BTN, value="自定义按钮1", label="选择一个需要自定义基础功能区按钮").style(container=False)
basic_fn_title = gr.Textbox(show_label=False, placeholder="输入新按钮名称", lines=1).style(container=False)
basic_fn_prefix = gr.Textbox(show_label=False, placeholder="输入新提示前缀", lines=4).style(container=False)
basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
with gr.Column(scale=1, min_width=70):
basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
from shared_utils.cookie_manager import assign_btn__fn_builder
assign_btn = assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache)
# update btn
h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
# clean up btn
h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
# 浮动菜单定义
from themes.gui_floating_menu import define_gui_floating_menu
area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2 = \
define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache)
# 插件二级菜单的实现
from themes.gui_advanced_plugin_class import define_gui_advanced_plugin_class
new_plugin_callback, route_switchy_bt_with_arg, usr_confirmed_arg = \
define_gui_advanced_plugin_class(plugins)
        # 功能区显示开关与功能区的互动
        def fn_area_visibility(a):
@@ -244,6 +192,7 @@ def main():
        # 整理反复出现的控件句柄组合
        input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
+       input_combo_order = ["cookies", "max_length_sl", "md_dropdown", "txt", "txt2", "top_p", "temperature", "chatbot", "history", "system_prompt", "plugin_advanced_arg"]
        output_combo = [cookies, chatbot, history, status]
        predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True)], outputs=output_combo)
        # 提交按钮、重置按钮
@@ -253,8 +202,7 @@ def main():
        cancel_handles.append(submitBtn2.click(**predict_args))
        resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset)   # 先在前端快速清除chatbot&status
        resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset)  # 先在前端快速清除chatbot&status
-       reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)),
-           [history], [chatbot, history, status, history_cache])
+       reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)), [history], [chatbot, history, status, history_cache])
        resetBtn.click(*reset_server_side_args)    # 再在后端清除history,把history转存history_cache备用
        resetBtn2.click(*reset_server_side_args)   # 再在后端清除history,把history转存history_cache备用
        clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
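
The reset wiring above is deliberately two-stage: the front-end JS wipes the chat instantly, then the server-side lambda empties history while stashing a JSON copy into history_cache, which is what lets LoadChatHistoryArchive restore the conversation later. A minimal sketch of that pattern (handler names hypothetical, not from the repo):

    import json

    def reset_server_side(history):
        # Outputs map to [chatbot, history, status, history_cache]:
        # wipe chatbot and history, report the reset, keep a serialized backup.
        return [], [], "已重置", json.dumps(history)

    def restore_from_cache(history_cache):
        # A later action (e.g. LoadChatHistoryArchive) can deserialize the backup.
        return json.loads(history_cache) if history_cache else []
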
@@ -276,27 +224,42 @@ def main():
        file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
        file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
        # 函数插件-固定按钮区
-       for k in plugins:
-           if not plugins[k].get("AsButton", True): continue
-           click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo], output_combo)
-           click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [plugins[k]["Button"]], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
-           cancel_handles.append(click_handle)
-       # 函数插件-下拉菜单与随变按钮的互动
-       def on_dropdown_changed(k):
-           variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
-           info = plugins[k].get("Info", k)
-           ret = {switchy_bt: gr.update(value=k, variant=variant, info_str=f'函数插件区: {info}')}
-           if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
-               ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
-           else:
-               ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
-           return ret
-       dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )
+       def encode_plugin_info(k, plugin)->str:
+           import copy
+           from themes.theme import to_cookie_str
+           plugin_ = copy.copy(plugin)
+           plugin_.pop("Function", None)
+           plugin_.pop("Class", None)
+           plugin_.pop("Button", None)
+           plugin_["Info"] = plugin.get("Info", k)
+           if plugin.get("AdvancedArgs", False):
+               plugin_["Label"] = f"插件[{k}]的高级参数说明:" + plugin.get("ArgsReminder", f"没有提供高级参数功能说明")
+           else:
+               plugin_["Label"] = f"插件[{k}]不需要高级参数。"
+           return to_cookie_str(plugin_)
+       # 插件的注册(前端代码注册)
+       for k in plugins:
+           register_advanced_plugin_init_arr += f"""register_plugin_init("{k}","{encode_plugin_info(k, plugins[k])}");"""
+           if plugins[k].get("Class", None):
+               plugins[k]["JsMenu"] = plugins[k]["Class"]().get_js_code_for_generating_menu(k)
+               register_advanced_plugin_init_arr += """register_advanced_plugin_init_code("{k}","{gui_js}");""".format(k=k, gui_js=plugins[k]["JsMenu"])
+           if not plugins[k].get("AsButton", True): continue
+           if plugins[k].get("Class", None) is None:
+               assert plugins[k].get("Function", None) is not None
+               click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_classic_plugin_via_id("{plugins[k]["ButtonElemId"]}")""")
+           else:
+               click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_advanced_plugin_launch_code("{k}")""")
+       # 函数插件-下拉菜单与随变按钮的互动(新版-更流畅)
+       dropdown.select(None, [dropdown], None, _js=f"""(dropdown)=>run_dropdown_shift(dropdown)""")
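
For orientation: the registration loop above distinguishes classic plugins (no Class, a bare Function triggered through the button's elem_id) from new-generation plugins (a registered Class that generates its own JS menu). A condensed sketch of that dispatch decision, reusing the two JS hooks named in the code above:

    def pick_launch_js(k: str, plugin: dict) -> str:
        # Classic plugin: no Class registered, so a Function must exist;
        # the front end launches it through the button's elem_id.
        if plugin.get("Class") is None:
            assert plugin.get("Function") is not None
            return f'()=>run_classic_plugin_via_id("{plugin["ButtonElemId"]}")'
        # New-generation plugin: launch the Class-generated secondary menu.
        return f'()=>run_advanced_plugin_launch_code("{k}")'
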
+       # 模型切换时的回调
        def on_md_dropdown_changed(k):
            return {chatbot: gr.update(label="当前模型:"+k)}
-       md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
+       md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot])
+       # 主题修改
        def on_theme_dropdown_changed(theme, secret_css):
            adjust_theme, css_part1, _, adjust_dynamic_theme = load_dynamic_theme(theme)
            if adjust_dynamic_theme:
@@ -304,21 +267,29 @@ def main():
            else:
                css_part2 = adjust_theme()._get_theme_css()
            return css_part2 + css_part1
-       theme_handle = theme_dropdown.select(on_theme_dropdown_changed, [theme_dropdown, secret_css], [secret_css])
-       theme_handle.then(
-           None,
-           [secret_css],
-           None,
-           _js=js_code_for_css_changing
-       )
+       theme_handle = theme_dropdown.select(on_theme_dropdown_changed, [theme_dropdown, secret_css], [secret_css]) # , _js="""change_theme_prepare""")
+       theme_handle.then(None, [theme_dropdown, secret_css], None, _js="""change_theme""")
+       switchy_bt.click(None, [switchy_bt], None, _js="(switchy_bt)=>on_flex_button_click(switchy_bt)")
        # 随变按钮的回调函数注册
        def route(request: gr.Request, k, *args, **kwargs):
-           if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
-           yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
-       click_handle = switchy_bt.click(route,[switchy_bt, *input_combo], output_combo)
-       click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
-       cancel_handles.append(click_handle)
+           if k not in [r"点击这里搜索插件列表", r"请先从插件列表中选择"]:
+               if plugins[k].get("Class", None) is None:
+                   assert plugins[k].get("Function", None) is not None
+                   yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
+       # 旧插件的高级参数区确认按钮(隐藏)
+       old_plugin_callback = gr.Button(r"未选定任何插件", variant="secondary", visible=False, elem_id="old_callback_btn_for_plugin_exe")
+       click_handle_ng = old_plugin_callback.click(route, [switchy_bt, *input_combo], output_combo)
+       click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
+       cancel_handles.append(click_handle_ng)
+       # 新一代插件的高级参数区确认按钮(隐藏)
+       click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg,
+           [
+               gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order), # 第一个参数: 指定了后续参数的名称
+               new_plugin_callback, usr_confirmed_arg, *input_combo # 后续参数: 真正的参数
+           ], output_combo)
+       click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
+       cancel_handles.append(click_handle_ng)
        # 终止按钮的回调函数注册
        stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
        stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
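
The gr.State list passed as the first input above acts as a name manifest: it tells route_switchy_bt_with_arg what each of the following positional inputs means, so the server side can rebuild them as keyword arguments. A minimal sketch of that convention (function name hypothetical):

    def rebuild_named_args(arg_names, *arg_values):
        # arg_names arrives first (from gr.State); the remaining inputs arrive
        # positionally in the same order, so zipping recovers a kwargs dict.
        assert len(arg_names) == len(arg_values)
        return dict(zip(arg_names, arg_values))
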
@@ -335,6 +306,8 @@ def main():
            elif match_group(plugin['Group'], group_list): fns_list.append(k) # 刷新下拉列表
            return [*btn_list, gr.Dropdown.update(choices=fns_list)]
        plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
+       # 是否启动语音输入功能
        if ENABLE_AUDIO:
            from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
            rad = RealtimeAudioDistribution()
@@ -342,17 +315,18 @@ def main():
                rad.feed(cookies['uuid'].hex, audio)
            audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
+       # 生成当前浏览器窗口的uuid(刷新失效)
        app_block.load(assign_user_uuid, inputs=[cookies], outputs=[cookies])
+       # 初始化(前端)
        from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
        load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
        app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
            outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
        app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}","{TTS_TYPE}")""") # 配置暗色主题或亮色主题
+       app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_arr))
-   # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
+   # Gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
    def run_delayed_tasks():
        import threading, webbrowser, time
        print(f"如果浏览器没有自动打开,请复制并转到以下URL")
@@ -364,8 +338,9 @@ def main():
        def warm_up_mods(): time.sleep(6); warm_up_modules()
        threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
-       threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
        threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start() # 预热tiktoken模块
+       if get_conf('AUTO_OPEN_BROWSER'):
+           threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
    # 运行一些异步任务(自动更新、打开浏览器页面、预热tiktoken模块)
    run_delayed_tasks()


@@ -34,9 +34,14 @@ from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
from .bridge_zhipu import predict as zhipu_ui
+from .bridge_taichu import predict_no_ui_long_connection as taichu_noui
+from .bridge_taichu import predict as taichu_ui
from .bridge_cohere import predict as cohere_ui
from .bridge_cohere import predict_no_ui_long_connection as cohere_noui
+from .oai_std_model_template import get_predict_function
colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
class LazyloadTiktoken(object):
@@ -66,9 +71,10 @@ api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub" newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models" gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
claude_endpoint = "https://api.anthropic.com/v1/messages" claude_endpoint = "https://api.anthropic.com/v1/messages"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
cohere_endpoint = "https://api.cohere.ai/v1/chat" cohere_endpoint = "https://api.cohere.ai/v1/chat"
ollama_endpoint = "http://localhost:11434/api/chat" ollama_endpoint = "http://localhost:11434/api/chat"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
deepseekapi_endpoint = "https://api.deepseek.com/v1/chat/completions"
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/' if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15' azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
@@ -86,9 +92,10 @@ if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_e
if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
if gemini_endpoint in API_URL_REDIRECT: gemini_endpoint = API_URL_REDIRECT[gemini_endpoint]
if claude_endpoint in API_URL_REDIRECT: claude_endpoint = API_URL_REDIRECT[claude_endpoint]
-if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[cohere_endpoint]
if ollama_endpoint in API_URL_REDIRECT: ollama_endpoint = API_URL_REDIRECT[ollama_endpoint]
+if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
+if deepseekapi_endpoint in API_URL_REDIRECT: deepseekapi_endpoint = API_URL_REDIRECT[deepseekapi_endpoint]
# 获取tokenizer
tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
@@ -112,6 +119,15 @@ model_info = {
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
"taichu": {
"fn_with_ui": taichu_ui,
"fn_without_ui": taichu_noui,
"endpoint": openai_endpoint,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"gpt-3.5-turbo-16k": { "gpt-3.5-turbo-16k": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
@@ -175,6 +191,26 @@ model_info = {
"token_cnt": get_token_num_gpt4, "token_cnt": get_token_num_gpt4,
}, },
"gpt-4o": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"has_multimodal_capacity": True,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-4o-2024-05-13": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-4-turbo-preview": { "gpt-4-turbo-preview": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
@@ -205,6 +241,7 @@ model_info = {
"gpt-4-turbo": { "gpt-4-turbo": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint, "endpoint": openai_endpoint,
"max_token": 128000, "max_token": 128000,
"tokenizer": tokenizer_gpt4, "tokenizer": tokenizer_gpt4,
@@ -214,6 +251,7 @@ model_info = {
"gpt-4-turbo-2024-04-09": { "gpt-4-turbo-2024-04-09": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint, "endpoint": openai_endpoint,
"max_token": 128000, "max_token": 128000,
"tokenizer": tokenizer_gpt4, "tokenizer": tokenizer_gpt4,
@@ -268,6 +306,38 @@ model_info = {
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
"glm-4-0520": {
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4-air": {
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4-airx": {
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4-flash": {
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4v": { "glm-4v": {
"fn_with_ui": zhipu_ui, "fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui, "fn_without_ui": zhipu_noui,
@@ -654,14 +724,22 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
except:
print(trimmed_format_exc())
# -=-=-=-=-=-=- 零一万物模型 -=-=-=-=-=-=-
-if "yi-34b-chat-0205" in AVAIL_LLM_MODELS or "yi-34b-chat-200k" in AVAIL_LLM_MODELS: # zhipuai
+yi_models = ["yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview"]
+if any(item in yi_models for item in AVAIL_LLM_MODELS):
try:
-from .bridge_yimodel import predict_no_ui_long_connection as yimodel_noui
-from .bridge_yimodel import predict as yimodel_ui
+yimodel_4k_noui, yimodel_4k_ui = get_predict_function(
+api_key_conf_name="YIMODEL_API_KEY", max_output_token=600, disable_proxy=False
+)
+yimodel_16k_noui, yimodel_16k_ui = get_predict_function(
+api_key_conf_name="YIMODEL_API_KEY", max_output_token=4000, disable_proxy=False
+)
+yimodel_200k_noui, yimodel_200k_ui = get_predict_function(
+api_key_conf_name="YIMODEL_API_KEY", max_output_token=4096, disable_proxy=False
+)
model_info.update({
"yi-34b-chat-0205": {
-"fn_with_ui": yimodel_ui,
-"fn_without_ui": yimodel_noui,
+"fn_with_ui": yimodel_4k_ui,
+"fn_without_ui": yimodel_4k_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 4000,
@@ -669,14 +747,59 @@ if "yi-34b-chat-0205" in AVAIL_LLM_MODELS or "yi-34b-chat-200k" in AVAIL_LLM_MOD
"token_cnt": get_token_num_gpt35,
},
"yi-34b-chat-200k": {
-"fn_with_ui": yimodel_ui,
-"fn_without_ui": yimodel_noui,
+"fn_with_ui": yimodel_200k_ui,
+"fn_without_ui": yimodel_200k_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-large": {
"fn_with_ui": yimodel_16k_ui,
"fn_without_ui": yimodel_16k_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-medium": {
"fn_with_ui": yimodel_16k_ui,
"fn_without_ui": yimodel_16k_noui,
"can_multi_thread": True, # 这个并发量稍微大一点
"endpoint": yimodel_endpoint,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-spark": {
"fn_with_ui": yimodel_16k_ui,
"fn_without_ui": yimodel_16k_noui,
"can_multi_thread": True, # 这个并发量稍微大一点
"endpoint": yimodel_endpoint,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-large-turbo": {
"fn_with_ui": yimodel_16k_ui,
"fn_without_ui": yimodel_16k_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"yi-large-preview": {
"fn_with_ui": yimodel_16k_ui,
"fn_without_ui": yimodel_16k_noui,
"can_multi_thread": False, # 目前来说,默认情况下并发量极低,因此禁用
"endpoint": yimodel_endpoint,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
except:
print(trimmed_format_exc())
@@ -737,6 +860,15 @@ if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS: # 讯飞
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
},
"sparkv4":{
"fn_with_ui": spark_ui,
"fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
})
except:
@@ -789,8 +921,34 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
})
except:
print(trimmed_format_exc())
# -=-=-=-=-=-=- 幻方-深度求索大模型在线API -=-=-=-=-=-=-
if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
try:
deepseekapi_noui, deepseekapi_ui = get_predict_function(
api_key_conf_name="DEEPSEEK_API_KEY", max_output_token=4096, disable_proxy=False
)
model_info.update({
"deepseek-chat":{
"fn_with_ui": deepseekapi_ui,
"fn_without_ui": deepseekapi_noui,
"endpoint": deepseekapi_endpoint,
"can_multi_thread": True,
"max_token": 32000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"deepseek-coder":{
"fn_with_ui": deepseekapi_ui,
"fn_without_ui": deepseekapi_noui,
"endpoint": deepseekapi_endpoint,
"can_multi_thread": True,
"max_token": 16000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
except:
print(trimmed_format_exc())
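
The DeepSeek block above illustrates the general recipe for any OpenAI-format provider: mint a UI/no-UI pair via get_predict_function from oai_std_model_template, then register the pair in model_info. A hedged sketch for a hypothetical provider, written as if inside this module (the conf key, endpoint, and token limits below are placeholders, not part of the repository):

    example_noui, example_ui = get_predict_function(
        api_key_conf_name="EXAMPLE_API_KEY", max_output_token=4096, disable_proxy=False
    )
    model_info.update({
        "example-chat": {
            "fn_with_ui": example_ui,
            "fn_without_ui": example_noui,
            "endpoint": "https://api.example.com/v1/chat/completions",  # placeholder
            "can_multi_thread": True,
            "max_token": 32000,
            "tokenizer": tokenizer_gpt35,
            "token_cnt": get_token_num_gpt35,
        },
    })
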
# -=-=-=-=-=-=- one-api 对齐支持 -=-=-=-=-=-=-
for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
    # 为了更灵活地接入one-api多模型管理界面,设计了此接口,例子:AVAIL_LLM_MODELS = ["one-api-mixtral-8x7b(max_token=6666)"]
@@ -799,21 +957,31 @@ for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
# "mixtral-8x7b" 是模型名(必要) # "mixtral-8x7b" 是模型名(必要)
# "(max_token=6666)" 是配置(非必要) # "(max_token=6666)" 是配置(非必要)
try: try:
_, max_token_tmp = read_one_api_model_name(model) origin_model_name, max_token_tmp = read_one_api_model_name(model)
# 如果是已知模型,则尝试获取其信息
original_model_info = model_info.get(origin_model_name.replace("one-api-", "", 1), None)
except: except:
print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。") print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
continue continue
model_info.update({ this_model_info = {
model: { "fn_with_ui": chatgpt_ui,
"fn_with_ui": chatgpt_ui, "fn_without_ui": chatgpt_noui,
"fn_without_ui": chatgpt_noui, "can_multi_thread": True,
"can_multi_thread": True, "endpoint": openai_endpoint,
"endpoint": openai_endpoint, "max_token": max_token_tmp,
"max_token": max_token_tmp, "tokenizer": tokenizer_gpt35,
"tokenizer": tokenizer_gpt35, "token_cnt": get_token_num_gpt35,
"token_cnt": get_token_num_gpt35, }
},
}) # 同步已知模型的其他信息
attribute = "has_multimodal_capacity"
if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
# attribute = "attribute2"
# if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
# attribute = "attribute3"
# if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
model_info.update({model: this_model_info})
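`read_one_api_model_name` itself lives in `toolbox` and is not part of this diff; a plausible re-implementation of the parsing it performs on names such as `one-api-mixtral-8x7b(max_token=6666)` (a sketch of the assumed behavior, not the project's actual code):

```python
import re

def read_one_api_model_name_sketch(model: str):
    # Split "one-api-<name>(max_token=<n>)" into the prefixed name and the token limit.
    match = re.search(r"\(max_token=(\d+)\)", model)
    max_token = int(match.group(1)) if match else 4096  # assumed default
    origin_model_name = re.sub(r"\(max_token=\d+\)", "", model)
    return origin_model_name, max_token

# The "one-api-" prefix is kept, matching the .replace("one-api-", "", 1) lookup above.
assert read_one_api_model_name_sketch("one-api-mixtral-8x7b(max_token=6666)") == ("one-api-mixtral-8x7b", 6666)
```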
# -=-=-=-=-=-=- vllm 对齐支持 -=-=-=-=-=-=-
for model in [m for m in AVAIL_LLM_MODELS if m.startswith("vllm-")]:
    # 为了更灵活地接入vllm多模型管理界面,设计了此接口,例子:AVAIL_LLM_MODELS = ["vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=6666)"]
@@ -888,6 +1056,13 @@ if len(AZURE_CFG_ARRAY) > 0:
AVAIL_LLM_MODELS += [azure_model_name]
# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
# -=-=-=-=-=-=-=-=-=- ☝️ 以上是模型路由 -=-=-=-=-=-=-=-=-=
# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
# -=-=-=-=-=-=-= 👇 以下是多模型路由切换函数 -=-=-=-=-=-=-=
# -=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=--=-=-=-=-=-=-=-=
def LLM_CATCH_EXCEPTION(f):
@@ -924,13 +1099,11 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys
model = llm_kwargs['llm_model']
n_model = 1
if '&' not in model:
    # 如果只询问“一个”大语言模型(多数情况):
    method = model_info[model]["fn_without_ui"]
    return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
else:
    # 如果同时询问“多个”大语言模型,这个稍微啰嗦一点,但思路相同,您不必读这个else分支
    executor = ThreadPoolExecutor(max_workers=4)
    models = model.split('&')
    n_model = len(models)
@@ -983,8 +1156,26 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys
res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
return res
# 根据基础功能区 ModelOverride 参数调整模型类型,用于 `predict` 中
import importlib
import core_functional
def execute_model_override(llm_kwargs, additional_fn, method):
functional = core_functional.get_core_functions()
if (additional_fn in functional) and 'ModelOverride' in functional[additional_fn]:
# 热更新Prompt & ModelOverride
importlib.reload(core_functional)
functional = core_functional.get_core_functions()
model_override = functional[additional_fn]['ModelOverride']
if model_override not in model_info:
raise ValueError(f"模型覆盖参数 '{model_override}' 指向一个暂不支持的模型,请检查配置文件。")
method = model_info[model_override]["fn_with_ui"]
llm_kwargs['llm_model'] = model_override
return llm_kwargs, additional_fn, method
# 默认返回原参数
return llm_kwargs, additional_fn, method
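For this override to fire, the clicked button's entry in `core_functional.py` must carry a `ModelOverride` field. A hypothetical entry (the button name, prompts, and target model are examples; only the `ModelOverride` key is prescribed by the code above):

```python
# Hypothetical core_functional.py excerpt, for illustration only.
# 'ModelOverride' must name a key present in model_info,
# otherwise execute_model_override raises ValueError.
def get_core_functions():
    return {
        "学术语料润色": {
            "Prefix": "请对下面的文字进行学术润色:\n\n",
            "Suffix": "",
            "ModelOverride": "gpt-4o",  # assumed model key
        },
    }
```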
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,
            history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
    """
    发送至LLM,流式获取输出。
    用于基础的对话功能。
@@ -1003,6 +1194,11 @@ def predict(inputs:str, llm_kwargs:dict, *args, **kwargs):
""" """
inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm") inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] # 如果这里报错,检查config中的AVAIL_LLM_MODELS选项
yield from method(inputs, llm_kwargs, *args, **kwargs) method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] # 如果这里报错,检查config中的AVAIL_LLM_MODELS选项
if additional_fn: # 根据基础功能区 ModelOverride 参数调整模型类型
llm_kwargs, additional_fn, method = execute_model_override(llm_kwargs, additional_fn, method)
yield from method(inputs, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, stream, additional_fn)
@@ -1,5 +1,3 @@
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
""" """
该文件中主要包含三个函数 该文件中主要包含三个函数
@@ -11,19 +9,19 @@
""" """
import json import json
import os
import re
import time import time
import gradio as gr
import logging import logging
import traceback import traceback
import requests import requests
import importlib
import random import random
# config_private.py放自己的秘密如API和代理网址 # config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件 # 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
from toolbox import ChatBotWithCookies from toolbox import ChatBotWithCookies, have_any_recent_upload_image_files, encode_image
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \ proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY') get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
@@ -41,6 +39,57 @@ def get_full_error(chunk, stream_response):
break
return chunk
def make_multimodal_input(inputs, image_paths):
image_base64_array = []
for image_path in image_paths:
path = os.path.abspath(image_path)
base64 = encode_image(path)
inputs = inputs + f'<br/><br/><div align="center"><img src="file={path}" base64="{base64}"></div>'
image_base64_array.append(base64)
return inputs, image_base64_array
def reverse_base64_from_input(inputs):
# 定义一个正则表达式来匹配 Base64 字符串(假设格式为 base64="<Base64编码>")
# pattern = re.compile(r'base64="([^"]+)"></div>')
pattern = re.compile(r'<br/><br/><div align="center"><img[^<>]+base64="([^"]+)"></div>')
# 使用 findall 方法查找所有匹配的 Base64 字符串
base64_strings = pattern.findall(inputs)
# 返回反转后的 Base64 字符串列表
return base64_strings
def contain_base64(inputs):
base64_strings = reverse_base64_from_input(inputs)
return len(base64_strings) > 0
def append_image_if_contain_base64(inputs):
if not contain_base64(inputs):
return inputs
else:
image_base64_array = reverse_base64_from_input(inputs)
pattern = re.compile(r'<br/><br/><div align="center"><img[^><]+></div>')
inputs = re.sub(pattern, '', inputs)
res = []
res.append({
"type": "text",
"text": inputs
})
for image_base64 in image_base64_array:
res.append({
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{image_base64}"
}
})
return res
def remove_image_if_contain_base64(inputs):
if not contain_base64(inputs):
return inputs
else:
pattern = re.compile(r'<br/><br/><div align="center"><img[^><]+></div>')
inputs = re.sub(pattern, '', inputs)
return inputs
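These helpers form a round trip: `make_multimodal_input` embeds each upload as an `<img ... base64="...">` tag inside the visible chat text, and the extract/strip pair later recovers or removes those embeds when rebuilding the request payload. A self-contained check of the two regexes, with a fake one-line payload standing in for `encode_image`:

```python
import re

fake_b64 = "iVBORw0KGgo="  # stand-in for encode_image(path)
text = 'describe this image' + \
    f'<br/><br/><div align="center"><img src="file=/tmp/x.png" base64="{fake_b64}"></div>'

extract = re.compile(r'<br/><br/><div align="center"><img[^<>]+base64="([^"]+)"></div>')
strip = re.compile(r'<br/><br/><div align="center"><img[^><]+></div>')
assert extract.findall(text) == [fake_b64]            # reverse_base64_from_input
assert strip.sub('', text) == 'describe this image'   # remove_image_if_contain_base64
```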
def decode_chunk(chunk):
    # 提前读取一些信息 (用于判断异常)
    chunk_decoded = chunk.decode()
@@ -159,6 +208,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
chatbot 为WebUI中显示的对话列表,修改它,然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
from .bridge_all import model_info
if is_any_api_key(inputs):
    chatbot._cookies['api_key'] = inputs
    chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
@@ -174,9 +224,17 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
# 多模态模型
has_multimodal_capacity = model_info[llm_kwargs['llm_model']].get('has_multimodal_capacity', False)
if has_multimodal_capacity:
    has_recent_image_upload, image_paths = have_any_recent_upload_image_files(chatbot, pop=True)
else:
    has_recent_image_upload, image_paths = False, []
if has_recent_image_upload:
    _inputs, image_base64_array = make_multimodal_input(inputs, image_paths)
else:
    _inputs, image_base64_array = inputs, []
chatbot.append((_inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
# check mis-behavior
@@ -186,7 +244,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
time.sleep(2)
try:
    headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, image_base64_array, has_multimodal_capacity, stream)
except RuntimeError as e:
    chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
    yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
@@ -194,7 +252,6 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
# 检查endpoint是否合法
try:
    endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
except:
    tb_str = '```\n' + trimmed_format_exc() + '```'
@@ -202,7 +259,11 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面 yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
return return
history.append(inputs); history.append("") # 加入历史
if has_recent_image_upload:
history.extend([_inputs, ""])
else:
history.extend([inputs, ""])
retry = 0 retry = 0
while True: while True:
@@ -316,7 +377,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
return chatbot, history
def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_prompt:str, image_base64_array:list=[], has_multimodal_capacity:bool=False, stream:bool=True):
    """
    整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
    """
@@ -339,29 +400,74 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
headers.update({"api-key": azure_api_key_unshared})
if has_multimodal_capacity:
    # 当以下条件满足时,启用多模态能力:
    # 1. 模型本身是多模态模型(has_multimodal_capacity)
    # 2. 输入包含图像(len(image_base64_array) > 0)
    # 3. 历史输入包含图像(any([contain_base64(h) for h in history]))
    enable_multimodal_capacity = (len(image_base64_array) > 0) or any([contain_base64(h) for h in history])
else:
    enable_multimodal_capacity = False
if not enable_multimodal_capacity:
    # 不使用多模态能力
    conversation_cnt = len(history) // 2
    messages = [{"role": "system", "content": system_prompt}]
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
            what_i_have_asked["content"] = remove_image_if_contain_base64(history[index])
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
            what_gpt_answer["content"] = remove_image_if_contain_base64(history[index+1])
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "": continue
                if what_gpt_answer["content"] == timeout_bot_msg: continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['content'] = what_gpt_answer['content']
    what_i_ask_now = {}
    what_i_ask_now["role"] = "user"
    what_i_ask_now["content"] = inputs
    messages.append(what_i_ask_now)
else:
    # 多模态能力
    conversation_cnt = len(history) // 2
    messages = [{"role": "system", "content": system_prompt}]
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
            what_i_have_asked["content"] = append_image_if_contain_base64(history[index])
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
            what_gpt_answer["content"] = append_image_if_contain_base64(history[index+1])
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "": continue
                if what_gpt_answer["content"] == timeout_bot_msg: continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['content'] = what_gpt_answer['content']
    what_i_ask_now = {}
    what_i_ask_now["role"] = "user"
    what_i_ask_now["content"] = []
    what_i_ask_now["content"].append({
        "type": "text",
        "text": inputs
    })
    for image_base64 in image_base64_array:
        what_i_ask_now["content"].append({
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{image_base64}"
            }
        })
    messages.append(what_i_ask_now)
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('api2d-'):
    model = llm_kwargs['llm_model'][len('api2d-'):]
@@ -389,8 +495,6 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"top_p": llm_kwargs['top_p'], # 1.0, "top_p": llm_kwargs['top_p'], # 1.0,
"n": 1, "n": 1,
"stream": stream, "stream": stream,
"presence_penalty": 0,
"frequency_penalty": 0,
} }
try: try:
print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
@@ -27,10 +27,8 @@ timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check
def report_invalid_key(key):
    # 弃用功能
    return
def get_full_error(chunk, stream_response):
    """
@@ -82,6 +82,9 @@ def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
"ERNIE-Bot": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions", "ERNIE-Bot": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions",
"ERNIE-Bot-turbo": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant", "ERNIE-Bot-turbo": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant",
"BLOOMZ-7B": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/bloomz_7b1", "BLOOMZ-7B": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/bloomz_7b1",
"ERNIE-Speed-128K": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-speed-128k",
"ERNIE-Speed-8K": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie_speed",
"ERNIE-Lite-8K": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-lite-8k",
"Llama-2-70B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_70b", "Llama-2-70B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_70b",
"Llama-2-13B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_13b", "Llama-2-13B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_13b",
@@ -1,7 +1,7 @@
import time
import os
from toolbox import update_ui, get_conf, update_ui_lastest_msg
from toolbox import check_packages, report_exception, log_chat
model_name = 'Qwen'
@@ -59,6 +59,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
# 总结输出
if response == f"[Local Message] 等待{model_name}响应中 ...":
    response = f"[Local Message] {model_name}响应异常 ..."
@@ -0,0 +1,72 @@
import time
import os
from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
from toolbox import ChatBotWithCookies
# model_name = 'Taichu-2.0'
# taichu_default_model = 'taichu_llm'
def validate_key():
TAICHU_API_KEY = get_conf("TAICHU_API_KEY")
if TAICHU_API_KEY == '': return False
return True
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
"""
watch_dog_patience = 5
response = ""
# if llm_kwargs["llm_model"] == "taichu":
# llm_kwargs["llm_model"] = "taichu"
if validate_key() is False:
raise RuntimeError('请配置 TAICHU_API_KEY')
# 开始接收回复
from .com_taichu import TaichuChatInit
zhipu_bro_init = TaichuChatInit()
for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, sys_prompt):
if len(observe_window) >= 1:
observe_window[0] = response
if len(observe_window) >= 2:
if (time.time() - observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
⭐单线程方法
函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append([inputs, ""])
yield from update_ui(chatbot=chatbot, history=history)
if validate_key() is False:
yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置TAICHU_API_KEY", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
chatbot[-1] = [inputs, ""]
yield from update_ui(chatbot=chatbot, history=history)
# if llm_kwargs["llm_model"] == "taichu":
# llm_kwargs["llm_model"] = taichu_default_model
# 开始接收回复
from .com_taichu import TaichuChatInit
zhipu_bro_init = TaichuChatInit()
for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, system_prompt):
chatbot[-1] = [inputs, response]
yield from update_ui(chatbot=chatbot, history=history)
history.extend([inputs, response])
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
yield from update_ui(chatbot=chatbot, history=history)
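As with the other bridges, the threaded entry point can be smoke-tested directly. A hedged sketch (assumes a valid `TAICHU_API_KEY` is configured and the script runs from the project root):

```python
# Hypothetical smoke test for the Taichu bridge, run from the project root.
from request_llms.bridge_taichu import predict_no_ui_long_connection

llm_kwargs = {"llm_model": "taichu", "temperature": 0.95}  # assumed minimal kwargs
result = predict_no_ui_long_connection("你好", llm_kwargs, history=[], sys_prompt="你是学术助手")
print(result)
```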
@@ -1,283 +0,0 @@
# 借鉴自同目录下的bridge_chatgpt.py
"""
该文件中主要包含三个函数
不具备多线程能力的函数:
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
2. predict_no_ui_long_connection支持多线程
"""
import json
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
import random
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, trimmed_format_exc, is_the_upload_folder, read_one_api_model_name
proxies, TIMEOUT_SECONDS, MAX_RETRY, YIMODEL_API_KEY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'YIMODEL_API_KEY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
def get_full_error(chunk, stream_response):
"""
获取完整的从Openai返回的报错
"""
while True:
try:
chunk += next(stream_response)
except:
break
return chunk
def decode_chunk(chunk):
# 提前读取一些信息(用于判断异常)
chunk_decoded = chunk.decode()
chunkjson = None
is_last_chunk = False
try:
chunkjson = json.loads(chunk_decoded[6:])
is_last_chunk = chunkjson.get("lastOne", False)
except:
pass
return chunk_decoded, chunkjson, is_last_chunk
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
是本次问询的输入
sys_prompt:
系统静默prompt
llm_kwargs
chatGPT的内部调优参数
history
是之前的对话列表
observe_window = None
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
if inputs == "": inputs = "空空如也的输入栏"
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
stream_response = response.iter_lines()
result = ''
is_head_of_the_stream = True
while True:
try: chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if is_last_chunk:
# 判定为数据流的结束,gpt_replying_buffer也写完了
logging.info(f'[response] {result}')
break
result += chunkjson['choices'][0]["delta"]["content"]
if not console_slience: print(chunkjson['choices'][0]["delta"]["content"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
observe_window[0] += chunkjson['choices'][0]["delta"]["content"]
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
except Exception as e:
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
print(error_msg)
raise RuntimeError("Json解析不合常规")
return result
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
发送至chatGPT,流式获取输出。
用于基础的对话功能。
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if len(YIMODEL_API_KEY) == 0:
raise RuntimeError("没有设置YIMODEL_API_KEY选项")
if inputs == "": inputs = "空空如也的输入栏"
user_input = inputs
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
# check mis-behavior
if is_the_upload_folder(user_input):
chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
time.sleep(2)
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
history.append(inputs); history.append("")
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
while True:
try:
chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, is_last_chunk = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r'"role":"assistant"' in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if is_last_chunk:
# 判定为数据流的结束,gpt_replying_buffer也写完了
logging.info(f'[response] {gpt_replying_buffer}')
break
# 处理数据流的主体
status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
# 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
except Exception as e:
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
print(error_msg)
return
def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
from .bridge_all import model_info
if "bad_request" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 已经超过了模型的最大上下文或是模型格式错误,请尝试削减单次输入的文本量。")
elif "authentication_error" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. 请确保API key有效。")
elif "not_found" in error_msg:
chatbot[-1] = (chatbot[-1][0], f"[Local Message] {llm_kwargs['llm_model']} 无效,请确保使用小写的模型名称。")
elif "rate_limit" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 遇到了控制请求速率限制,请一分钟后重试。")
elif "system_busy" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] 系统繁忙,请一分钟后重试。")
else:
from toolbox import regular_txt_to_markdown
tb_str = '```\n' + trimmed_format_exc() + '```'
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
return chatbot, history
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
api_key = f"Bearer {YIMODEL_API_KEY}"
headers = {
"Content-Type": "application/json",
"Authorization": api_key
}
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('one-api-'):
model = llm_kwargs['llm_model'][len('one-api-'):]
model, _ = read_one_api_model_name(model)
tokens = 600 if llm_kwargs['llm_model'] == 'yi-34b-chat-0205' else 4096 #yi-34b-chat-0205只有4k上下文...
payload = {
"model": model,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"stream": stream,
"max_tokens": tokens
}
try:
print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
except:
print('输入中可能存在乱码。')
return headers,payload
@@ -65,8 +65,12 @@ class QwenRequestInstance():
self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}" self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}"
yield self.result_buf yield self.result_buf
break break
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {self.result_buf}') # 耗尽generator避免报错
while True:
try: next(responses)
except: break
return self.result_buf return self.result_buf
@@ -67,6 +67,7 @@ class SparkRequestInstance():
self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat" self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
self.gpt_url_v35 = "wss://spark-api.xf-yun.com/v3.5/chat" self.gpt_url_v35 = "wss://spark-api.xf-yun.com/v3.5/chat"
self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image" self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"
self.gpt_url_v4 = "wss://spark-api.xf-yun.com/v4.0/chat"
self.time_to_yield_event = threading.Event()
self.time_to_exit_event = threading.Event()
@@ -94,6 +95,8 @@ class SparkRequestInstance():
gpt_url = self.gpt_url_v3
elif llm_kwargs['llm_model'] == 'sparkv3.5':
    gpt_url = self.gpt_url_v35
elif llm_kwargs['llm_model'] == 'sparkv4':
    gpt_url = self.gpt_url_v4
else:
    gpt_url = self.gpt_url
file_manifest = []
@@ -194,6 +197,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest)
"sparkv2": "generalv2", "sparkv2": "generalv2",
"sparkv3": "generalv3", "sparkv3": "generalv3",
"sparkv3.5": "generalv3.5", "sparkv3.5": "generalv3.5",
"sparkv4": "4.0Ultra"
} }
domains_select = domains[llm_kwargs['llm_model']] domains_select = domains[llm_kwargs['llm_model']]
if file_manifest: domains_select = 'image' if file_manifest: domains_select = 'image'
request_llms/com_taichu.py
@@ -0,0 +1,55 @@
# encoding: utf-8
# @Time : 2024/1/22
# @Author : Kilig947 & binary husky
# @Descr : 兼容紫东太初大模型
from toolbox import get_conf, encode_image, get_pictures_list
import logging, os, requests
import json
class TaichuChatInit:
def __init__(self): ...
def __conversation_user(self, user_input: str, llm_kwargs:dict):
return {"role": "user", "content": user_input}
def __conversation_history(self, history:list, llm_kwargs:dict):
messages = []
conversation_cnt = len(history) // 2
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
what_gpt_answer = {
"role": "assistant",
"content": history[index + 1]
}
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
return messages
def generate_chat(self, inputs:str, llm_kwargs:dict, history:list, system_prompt:str):
TAICHU_API_KEY = get_conf("TAICHU_API_KEY")
params = {
'api_key': TAICHU_API_KEY,
'model_code': 'taichu_llm',
'question': '\n\n'.join(history) + inputs,
'prefix': system_prompt,
'temperature': llm_kwargs.get('temperature', 0.95),
'stream_format': 'json'
}
api = 'https://ai-maas.wair.ac.cn/maas/v1/model_api/invoke'
response = requests.post(api, json=params, stream=True)
results = ""
if response.status_code == 200:
response.encoding = 'utf-8'
for line in response.iter_lines(decode_unicode=True):
delta = json.loads(line)['choices'][0]['text']
results += delta
yield delta, results
else:
raise ValueError
if __name__ == '__main__':
zhipu = TaichuChatInit()
zhipu.generate_chat('你好', {'llm_model': 'glm-4'}, [], '你是WPSAi')

查看文件

@@ -0,0 +1,402 @@
import json
import time
import logging
import traceback
import requests
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控,如果有,则覆盖原config文件
from toolbox import (
get_conf,
update_ui,
is_the_upload_folder,
)
proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf(
"proxies", "TIMEOUT_SECONDS", "MAX_RETRY"
)
timeout_bot_msg = (
"[Local Message] Request timeout. Network error. Please check proxy settings in config.py."
+ "网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。"
)
def get_full_error(chunk, stream_response):
"""
尝试获取完整的错误信息
"""
while True:
try:
chunk += next(stream_response)
except:
break
return chunk
def decode_chunk(chunk):
"""
用于解读"content""finish_reason"的内容
"""
chunk = chunk.decode()
respose = ""
finish_reason = "False"
try:
chunk = json.loads(chunk[6:])
except:
respose = "API_ERROR"
finish_reason = chunk
# 错误处理部分
if "error" in chunk:
respose = "API_ERROR"
try:
chunk = json.loads(chunk)
finish_reason = chunk["error"]["code"]
except:
finish_reason = "API_ERROR"
return respose, finish_reason
try:
respose = chunk["choices"][0]["delta"]["content"]
except:
pass
try:
finish_reason = chunk["choices"][0]["finish_reason"]
except:
pass
return respose, finish_reason
def generate_message(input, model, key, history, max_output_token, system_prompt, temperature):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
api_key = f"Bearer {key}"
headers = {"Content-Type": "application/json", "Authorization": api_key}
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index + 1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "":
continue
if what_gpt_answer["content"] == timeout_bot_msg:
continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]["content"] = what_gpt_answer["content"]
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = input
messages.append(what_i_ask_now)
playload = {
"model": model,
"messages": messages,
"temperature": temperature,
"stream": True,
"max_tokens": max_output_token,
}
try:
print(f" {model} : {conversation_cnt} : {input[:100]} ..........")
except:
print("输入中可能存在乱码。")
return headers, playload
def get_predict_function(
api_key_conf_name,
max_output_token,
disable_proxy = False
):
"""
为openai格式的API生成响应函数,其中传入参数
api_key_conf_name
`config.py`中此模型的APIKEY的名字,例如"YIMODEL_API_KEY"
max_output_token
每次请求的最大token数量,例如对于01万物的yi-34b-chat-200k,其最大请求数为4096
请不要与模型的最大token数量相混淆。
disable_proxy
是否使用代理,True为不使用,False为使用。
"""
APIKEY = get_conf(api_key_conf_name)
def predict_no_ui_long_connection(
inputs,
llm_kwargs,
history=[],
sys_prompt="",
observe_window=None,
console_slience=False,
):
"""
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
是本次问询的输入
sys_prompt:
系统静默prompt
llm_kwargs
chatGPT的内部调优参数
history
是之前的对话列表
observe_window = None
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
watch_dog_patience = 5 # 看门狗的耐心,设置5秒不准咬人(咬的也不是人
if len(APIKEY) == 0:
raise RuntimeError(f"APIKEY为空,请检查配置文件的{APIKEY}")
if inputs == "":
inputs = "你好👋"
headers, playload = generate_message(
input=inputs,
model=llm_kwargs["llm_model"],
key=APIKEY,
history=history,
max_output_token=max_output_token,
system_prompt=sys_prompt,
temperature=llm_kwargs["temperature"],
)
retry = 0
while True:
try:
from .bridge_all import model_info
endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
if not disable_proxy:
response = requests.post(
endpoint,
headers=headers,
proxies=proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
else:
response = requests.post(
endpoint,
headers=headers,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
break
except:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY:
raise TimeoutError
if MAX_RETRY != 0:
print(f"请求超时,正在重试 ({retry}/{MAX_RETRY}) ……")
stream_response = response.iter_lines()
result = ""
while True:
try:
chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
response_text, finish_reason = decode_chunk(chunk)
# 返回的数据流第一次为空,继续等待
if response_text == "" and finish_reason != "False":
continue
if response_text == "API_ERROR" and (
finish_reason != "False" or finish_reason != "stop"
):
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
print(chunk_decoded)
raise RuntimeError(
f"API异常,请检测终端输出。可能的原因是:{finish_reason}"
)
if chunk:
try:
if finish_reason == "stop":
logging.info(f"[response] {result}")
break
result += response_text
if not console_slience:
print(response_text, end="")
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
observe_window[0] += response_text
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if (time.time() - observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
except Exception as e:
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
print(error_msg)
raise RuntimeError("Json解析不合常规")
return result
def predict(
inputs,
llm_kwargs,
plugin_kwargs,
chatbot,
history=[],
system_prompt="",
stream=True,
additional_fn=None,
):
"""
发送至chatGPT,流式获取输出。
用于基础的对话功能。
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表,修改它,然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if len(APIKEY) == 0:
raise RuntimeError(f"APIKEY为空,请检查配置文件的{APIKEY}")
if inputs == "":
inputs = "你好👋"
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(
additional_fn, inputs, history, chatbot
)
logging.info(f"[raw_input] {inputs}")
chatbot.append((inputs, ""))
yield from update_ui(
chatbot=chatbot, history=history, msg="等待响应"
) # 刷新界面
# check mis-behavior
if is_the_upload_folder(inputs):
chatbot[-1] = (
inputs,
f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。",
)
yield from update_ui(
chatbot=chatbot, history=history, msg="正常"
) # 刷新界面
time.sleep(2)
headers, playload = generate_message(
input=inputs,
model=llm_kwargs["llm_model"],
key=APIKEY,
history=history,
max_output_token=max_output_token,
system_prompt=system_prompt,
temperature=llm_kwargs["temperature"],
)
history.append(inputs)
history.append("")
retry = 0
while True:
try:
from .bridge_all import model_info
endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
if not disable_proxy:
response = requests.post(
endpoint,
headers=headers,
proxies=proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
else:
response = requests.post(
endpoint,
headers=headers,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
break
except:
retry += 1
chatbot[-1] = (chatbot[-1][0], timeout_bot_msg)
retry_msg = (
f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
)
yield from update_ui(
chatbot=chatbot, history=history, msg="请求超时" + retry_msg
) # 刷新界面
if retry > MAX_RETRY:
raise TimeoutError
gpt_replying_buffer = ""
stream_response = response.iter_lines()
while True:
try:
chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
response_text, finish_reason = decode_chunk(chunk)
# 返回的数据流第一次为空,继续等待
if response_text == "" and finish_reason != "False":
continue
if chunk:
try:
if response_text == "API_ERROR" and (
finish_reason != "False" or finish_reason != "stop"
):
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
chatbot[-1] = (
chatbot[-1][0],
"[Local Message] {finish_reason},获得以下报错信息:\n"
+ chunk_decoded,
)
yield from update_ui(
chatbot=chatbot,
history=history,
msg="API异常:" + chunk_decoded,
) # 刷新界面
print(chunk_decoded)
return
if finish_reason == "stop":
logging.info(f"[response] {gpt_replying_buffer}")
break
status_text = f"finish_reason: {finish_reason}"
gpt_replying_buffer += response_text
# 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(
chatbot=chatbot, history=history, msg=status_text
) # 刷新界面
except Exception as e:
yield from update_ui(
chatbot=chatbot, history=history, msg="Json解析不合常规"
) # 刷新界面
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
chatbot[-1] = (
chatbot[-1][0],
"[Local Message] 解析错误,获得以下报错信息:\n" + chunk_decoded,
)
yield from update_ui(
chatbot=chatbot, history=history, msg="Json异常" + chunk_decoded
) # 刷新界面
print(chunk_decoded)
return
return predict_no_ui_long_connection, predict
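This factory is exactly what the DeepSeek block earlier in the diff consumes. A hedged sketch of wiring another OpenAI-compatible provider through it inside `request_llms/bridge_all.py` (the module path of this template file, the key name, and the endpoint are assumptions/placeholders):

```python
# Hypothetical registration inside request_llms/bridge_all.py.
from request_llms.oai_std_model_template import get_predict_function  # assumed module path

example_noui, example_ui = get_predict_function(
    api_key_conf_name="EXAMPLE_API_KEY",  # must exist in config.py / config_private.py
    max_output_token=4096,
    disable_proxy=False,
)
model_info.update({
    "example-chat": {
        "fn_with_ui": example_ui,
        "fn_without_ui": example_noui,
        "endpoint": "https://api.example.com/v1/chat/completions",  # placeholder
        "can_multi_thread": True,
        "max_token": 32000,
        "tokenizer": tokenizer_gpt35,
        "token_cnt": get_token_num_gpt35,
    },
})
```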
@@ -1,4 +1,4 @@
https://public.agent-matrix.com/publish/gradio-3.32.10-py3-none-any.whl
fastapi==0.110
gradio-client==0.8
pypdf2==2.12.1
@@ -46,6 +46,16 @@ code_highlight_configs_block_mermaid = {
},
}
mathpatterns = {
r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False}, #  $...$
r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True}, # $$...$$
r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False}, # \[...\]
r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
}
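A quick self-check of what these patterns classify; note that `re.DOTALL` is only added when `allow_multi_lines` is true, mirroring how `is_equation` below applies them:

```python
import re

samples = {
    "inline $E=mc^2$ math": r"(?<!\\|\$)(\$)([^\$]+)(\$)",
    "$$\nE=mc^2\n$$": r"(?<!\\)(\$\$)([^\$]+)(\$\$)",
    r"\(a+b\)": r"(?<!\\)(\\\()(.+?)(\\\))",
}
for text, pattern in samples.items():
    allow_multi = pattern.startswith(r"(?<!\\)(\$\$)")  # only $$...$$ may span lines here
    flags = re.ASCII | re.DOTALL if allow_multi else re.ASCII
    print(bool(re.search(pattern, text, flags)))  # True, True, True
```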
def tex2mathml_catch_exception(content, *args, **kwargs):
    try:
        content = tex2mathml(content, *args, **kwargs)
@@ -96,14 +106,7 @@ def is_equation(txt):
return False
if "$" not in txt and "\\[" not in txt:
    return False
matches = []
for pattern, property in mathpatterns.items():
    flags = re.ASCII | re.DOTALL if property["allow_multi_lines"] else re.ASCII
@@ -207,13 +210,68 @@ def fix_code_segment_indent(txt):
return txt
def fix_dollar_sticking_bug(txt):
"""
修复不标准的dollar公式符号的问题
"""
txt_result = ""
single_stack_height = 0
double_stack_height = 0
while True:
while True:
index = txt.find('$')
if index == -1:
txt_result += txt
return txt_result
if single_stack_height > 0:
if txt[:(index+1)].find('\n') > 0 or txt[:(index+1)].find('<td>') > 0 or txt[:(index+1)].find('</td>') > 0:
print('公式之中出现了异常 (Unexpect element in equation)')
single_stack_height = 0
txt_result += ' $'
continue
if double_stack_height > 0:
if txt[:(index+1)].find('\n\n') > 0:
print('公式之中出现了异常 (Unexpect element in equation)')
double_stack_height = 0
txt_result += '$$'
continue
is_double = (txt[index+1] == '$')
if is_double:
if single_stack_height != 0:
# add a padding
txt = txt[:(index+1)] + " " + txt[(index+1):]
continue
if double_stack_height == 0:
double_stack_height = 1
else:
double_stack_height = 0
txt_result += txt[:(index+2)]
txt = txt[(index+2):]
else:
if double_stack_height != 0:
# print(txt[:(index)])
print('发现异常嵌套公式')
if single_stack_height == 0:
single_stack_height = 1
else:
single_stack_height = 0
# print(txt[:(index)])
txt_result += txt[:(index+1)]
txt = txt[(index+1):]
break
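A small illustration of the repair: when a `$` opener runs into a newline (or a table cell) before it is closed, the function force-closes it with `' $'` instead of letting the formula swallow the rest of the text. Traced by hand from the loop above, so treat the exact output as indicative:

```python
# Demo of fix_dollar_sticking_bug; assumes the function above is in scope.
broken = "a $x\nb$ c"
print(fix_dollar_sticking_bug(broken))
# -> 'a $ $x\nb$ c' (plus a console warning about the abnormal equation)
```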
def markdown_convertion_for_file(txt):
    """
    将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
    """
    from themes.theme import advanced_css
    pre = f"""
    <!DOCTYPE html><head><meta charset="utf-8"><title>PDF文档翻译</title><style>{advanced_css}</style></head>
    <body>
    <div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
    <div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
@@ -232,10 +290,10 @@ def markdown_convertion_for_file(txt):
find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>' find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
txt = fix_markdown_indent(txt) txt = fix_markdown_indent(txt)
convert_stage_1 = fix_dollar_sticking_bug(txt)
# convert everything to html format # convert everything to html format
split = markdown.markdown(text="---") convert_stage_2 = markdown.markdown(
convert_stage_1 = markdown.markdown( text=convert_stage_1,
text=txt,
extensions=[ extensions=[
"sane_lists", "sane_lists",
"tables", "tables",
@@ -245,14 +303,24 @@ def markdown_convertion_for_file(txt):
    ],
    extension_configs={**markdown_extension_configs, **code_highlight_configs},
)
def repl_fn(match):
    content = match.group(2)
    return f'<script type="math/tex">{content}</script>'
pattern = "|".join([pattern for pattern, property in mathpatterns.items() if not property["allow_multi_lines"]])
pattern = re.compile(pattern, flags=re.ASCII)
convert_stage_3 = pattern.sub(repl_fn, convert_stage_2)
convert_stage_4 = markdown_bug_hunt(convert_stage_3)
# 2. convert to rendered equation
convert_stage_5, n = re.subn(
    find_equation_pattern, replace_math_render, convert_stage_4, flags=re.DOTALL
)
# cat them together
return pre + convert_stage_5 + suf
@lru_cache(maxsize=128) # 使用 lru缓存 加快转换速度
def markdown_convertion(txt):
@@ -0,0 +1,25 @@
def is_full_width_char(ch):
"""判断给定的单个字符是否是全角字符"""
if '\u4e00' <= ch <= '\u9fff':
return True # 中文字符
if '\uff01' <= ch <= '\uff5e':
return True # 全角符号
if '\u3000' <= ch <= '\u303f':
return True # CJK标点符号
return False
def scolling_visual_effect(text, scroller_max_len):
text = text.\
replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')
place_take_cnt = 0
pointer = len(text) - 1
if len(text) < scroller_max_len:
return text
while place_take_cnt < scroller_max_len and pointer > 0:
if is_full_width_char(text[pointer]): place_take_cnt += 2
else: place_take_cnt += 1
pointer -= 1
return text[pointer:]
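A short usage sketch: the scroller keeps only the tail of the streamed text, counting CJK and other full-width characters as two display columns (newlines, backticks, and spaces are flattened to dots first):

```python
# Tail of the text, roughly scroller_max_len display columns wide.
print(scolling_visual_effect("streaming output 你好世界", scroller_max_len=10))
# e.g. 'ut.你好世界' — the four CJK characters already account for 8 columns
```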
@@ -2,7 +2,7 @@ import importlib
import time
import os
from functools import lru_cache
from shared_utils.colorful import print亮红, print亮绿, print亮蓝
pj = os.path.join
default_user_name = 'default_user'
@@ -15,13 +15,13 @@ import os
def get_plugin_handle(plugin_name):
    """
    e.g. plugin_name = 'crazy_functions.Markdown_Translate->Markdown翻译指定语言'
    """
    import importlib
    assert (
        "->" in plugin_name
    ), "Example of plugin_name: crazy_functions.Markdown_Translate->Markdown翻译指定语言"
    module, fn_name = plugin_name.split("->")
    f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
    return f_hot_reload
@@ -1,4 +1,5 @@
import json
import base64
from typing import Callable
def load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)->Callable:
@@ -86,3 +87,58 @@ def make_history_cache():
history_cache_update = gr.Button("", elem_id="elem_update_history", visible=False).click(
    process_history_cache, inputs=[history_cache], outputs=[history])
return history, history_cache, history_cache_update
# """
# with gr.Row():
# txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
# txtx = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
# with gr.Row():
# btn_value = "Test"
# elem_id = "TestCase"
# variant = "primary"
# input_list = [txt, txtx]
# output_list = [txt, txtx]
# input_name_list = ["txt(input)", "txtx(input)"]
# output_name_list = ["txt", "txtx"]
# js_callback = """(txt, txtx)=>{console.log(txt); console.log(txtx);}"""
# def function(txt, txtx):
# return "booo", "goooo"
# create_button_with_javascript_callback(btn_value, elem_id, variant, js_callback, input_list, output_list, function, input_name_list, output_name_list)
# """
def create_button_with_javascript_callback(btn_value, elem_id, variant, js_callback, input_list, output_list, function, input_name_list, output_name_list):
import gradio as gr
middle_ware_component = gr.Textbox(visible=False, elem_id=elem_id+'_buffer')
def get_fn_wrap():
def fn_wrap(*args):
summary_dict = {}
for name, value in zip(input_name_list, args):
summary_dict.update({name: value})
res = function(*args)
for name, value in zip(output_name_list, res):
summary_dict.update({name: value})
summary = base64.b64encode(json.dumps(summary_dict).encode('utf8')).decode("utf-8")
return (*res, summary)
return fn_wrap
btn = gr.Button(btn_value, elem_id=elem_id, variant=variant)
call_args = ""
for name in output_name_list:
call_args += f"""Data["{name}"],"""
call_args = call_args.rstrip(",")
_js_callback = """
(base64MiddleString)=>{
console.log('hello')
const stringData = atob(base64MiddleString);
let Data = JSON.parse(stringData);
call = JS_CALLBACK_GEN;
call(CALL_ARGS);
}
""".replace("JS_CALLBACK_GEN", js_callback).replace("CALL_ARGS", call_args)
btn.click(get_fn_wrap(), input_list, output_list+[middle_ware_component]).then(None, [middle_ware_component], None, _js=_js_callback)
return btn
@@ -47,6 +47,28 @@ queue cocurrent effectiveness
import os, requests, threading, time
import uvicorn
def validate_path_safety(path_or_url, user):
from toolbox import get_conf, default_user_name
from toolbox import FriendlyException
PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
sensitive_path = None
path_or_url = os.path.relpath(path_or_url)
if path_or_url.startswith(PATH_LOGGING): # 日志文件(按用户划分)
sensitive_path = PATH_LOGGING
elif path_or_url.startswith(PATH_PRIVATE_UPLOAD): # 用户的上传目录(按用户划分)
sensitive_path = PATH_PRIVATE_UPLOAD
elif path_or_url.startswith('tests'): # 一个常用的测试目录
return True
else:
raise FriendlyException(f"输入文件的路径 ({path_or_url}) 存在,但位置非法。请将文件上传后再执行该任务。") # return False
if sensitive_path:
allowed_users = [user, 'autogen', 'arxiv_cache', default_user_name] # three user path that can be accessed
for user_allowed in allowed_users:
if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
return True
raise FriendlyException(f"输入文件的路径 ({path_or_url}) 存在,但属于其他用户。请将文件上传后再执行该任务。") # return False
return True
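A sketch of how a plugin would guard a user-supplied path with this check (the module path is assumed; `private_upload` is the usual default of `PATH_PRIVATE_UPLOAD`; the file paths are hypothetical):

```python
# Hypothetical call site; validate_path_safety raises FriendlyException on failure.
from shared_utils.fastapi_server import validate_path_safety  # assumed module path
from toolbox import FriendlyException

try:
    validate_path_safety("private_upload/alice/2024-07-04/paper.pdf", user="alice")  # passes
    validate_path_safety("private_upload/bob/2024-07-04/paper.pdf", user="alice")    # raises
except FriendlyException as e:
    print(e)  # 输入文件的路径 (...) 存在,但属于其他用户。...
```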
def _authorize_user(path_or_url, request, gradio_app):
    from toolbox import get_conf, default_user_name
    PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
@@ -59,7 +81,7 @@ def _authorize_user(path_or_url, request, gradio_app):
if sensitive_path:
    token = request.cookies.get("access-token") or request.cookies.get("access-token-unsecure")
    user = gradio_app.tokens.get(token) # get user
    allowed_users = [user, 'autogen', 'arxiv_cache', default_user_name] # user paths that can be accessed
    for user_allowed in allowed_users:
        # exact match
        if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
@@ -77,7 +99,7 @@ class Server(uvicorn.Server):
self.thread = threading.Thread(target=self.run, daemon=True)
self.thread.start()
while not self.started:
    time.sleep(5e-2)
def close(self):
    self.should_exit = True
@@ -137,6 +159,16 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
return "越权访问!" return "越权访问!"
return await endpoint(path_or_url, request) return await endpoint(path_or_url, request)
from fastapi import Request, status
from fastapi.responses import FileResponse, RedirectResponse
@gradio_app.get("/academic_logout")
async def logout():
response = RedirectResponse(url=CUSTOM_PATH, status_code=status.HTTP_302_FOUND)
response.delete_cookie('access-token')
response.delete_cookie('access-token-unsecure')
return response
# --- --- enable TTS (text-to-speech) functionality --- ---
TTS_TYPE = get_conf("TTS_TYPE") TTS_TYPE = get_conf("TTS_TYPE")
if TTS_TYPE != "DISABLE": if TTS_TYPE != "DISABLE":
# audio generation functionality # audio generation functionality
@@ -160,11 +192,14 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
temp_file_name = str(uuid.uuid4().hex) temp_file_name = str(uuid.uuid4().hex)
temp_file = os.path.join(temp_folder, f'{temp_file_name}.mp3') temp_file = os.path.join(temp_folder, f'{temp_file_name}.mp3')
await tts.save(temp_file) await tts.save(temp_file)
mp3_audio = AudioSegment.from_file(temp_file, format="mp3") try:
mp3_audio.export(temp_file, format="wav") mp3_audio = AudioSegment.from_file(temp_file, format="mp3")
with open(temp_file, 'rb') as wav_file: t = wav_file.read() mp3_audio.export(temp_file, format="wav")
os.remove(temp_file) with open(temp_file, 'rb') as wav_file: t = wav_file.read()
return Response(content=t) os.remove(temp_file)
return Response(content=t)
except:
raise RuntimeError("ffmpeg未安装,无法处理EdgeTTS音频。安装方法见`https://github.com/jiaaro/pydub#getting-ffmpeg-set-up`")
if TTS_TYPE == "LOCAL_SOVITS_API": if TTS_TYPE == "LOCAL_SOVITS_API":
# Forward the request to the target service # Forward the request to the target service
TARGET_URL = get_conf("GPT_SOVITS_URL") TARGET_URL = get_conf("GPT_SOVITS_URL")
@@ -195,13 +230,22 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
fastapi_app = FastAPI(lifespan=app_lifespan) fastapi_app = FastAPI(lifespan=app_lifespan)
fastapi_app.mount(CUSTOM_PATH, gradio_app) fastapi_app.mount(CUSTOM_PATH, gradio_app)
# --- --- favicon --- --- # --- --- favicon and block fastapi api reference routes --- ---
from starlette.responses import JSONResponse
if CUSTOM_PATH != '/': if CUSTOM_PATH != '/':
from fastapi.responses import FileResponse from fastapi.responses import FileResponse
@fastapi_app.get("/favicon.ico") @fastapi_app.get("/favicon.ico")
async def favicon(): async def favicon():
return FileResponse(app_block.favicon_path) return FileResponse(app_block.favicon_path)
@fastapi_app.middleware("http")
async def middleware(request: Request, call_next):
if request.scope['path'] in ["/docs", "/redoc", "/openapi.json"]:
return JSONResponse(status_code=404, content={"message": "Not Found"})
response = await call_next(request)
return response
# --- --- uvicorn.Config --- --- # --- --- uvicorn.Config --- ---
ssl_keyfile = None if SSL_KEYFILE == "" else SSL_KEYFILE ssl_keyfile = None if SSL_KEYFILE == "" else SSL_KEYFILE
ssl_certfile = None if SSL_CERTFILE == "" else SSL_CERTFILE ssl_certfile = None if SSL_CERTFILE == "" else SSL_CERTFILE
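Editor's note: the new `validate_path_safety` helper enforces one invariant: after normalization, a user-supplied path may only live directly under `<sensitive_dir>/<allowed_user>`. A minimal sketch of that two-segment check follows (not project source; the directory and user names are made up, and POSIX separators are assumed):

    # Sketch of the ownership check: normalize, take the first two path segments,
    # and require an exact match against "<sensitive_dir>/<user>".
    import os

    def first_two_segments(path_or_url: str) -> str:
        # os.path.relpath normalizes away "..", so traversal attempts lose the prefix
        normalized = os.path.relpath(path_or_url)
        return f"{os.sep}".join(normalized.split(os.sep)[:2])

    # A file inside the caller's own upload directory passes the exact-match test:
    assert first_two_segments("private_upload/alice/2024/paper.pdf") == os.path.join("private_upload", "alice")
    # A traversal attempt no longer starts with the sensitive prefix after normalization:
    assert not first_two_segments("private_upload/../../etc/passwd").startswith("private_upload")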


@@ -0,0 +1,22 @@
"""
对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py
"""
import os, sys, importlib
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
if __name__ == "__main__":
plugin_test = importlib.import_module('test_utils').plugin_test
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")
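The `plugin='module->callable'` routing strings presumably resolve through importlib; a hedged sketch of the idea (the real `tests/test_utils.py` may differ in details such as error handling):

    # Assumption: illustrates how a "module->callable" route string could be resolved.
    import importlib

    def resolve_plugin(route: str):
        module_name, fn_name = route.split('->')
        return getattr(importlib.import_module(module_name), fn_name)

    # e.g. resolve_plugin('crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF')
    # returns the plugin callable that plugin_test() then drives with main_input.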


@@ -14,12 +14,13 @@ validate_path()  # validate path so you can run from base directory
 if "在线模型":
     if __name__ == "__main__":
-        from request_llms.bridge_cohere import predict_no_ui_long_connection
+        from request_llms.bridge_taichu import predict_no_ui_long_connection
+        # from request_llms.bridge_cohere import predict_no_ui_long_connection
         # from request_llms.bridge_spark import predict_no_ui_long_connection
         # from request_llms.bridge_zhipu import predict_no_ui_long_connection
         # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
         llm_kwargs = {
-            "llm_model": "command-r-plus",
+            "llm_model": "taichu",
             "max_length": 4096,
             "top_p": 1,
             "temperature": 1,


@@ -18,14 +18,16 @@ validate_path()  # 返回项目根路径
 if __name__ == "__main__":
     from tests.test_utils import plugin_test
+    plugin_test(plugin='crazy_functions.Internet_GPT->连接网络回答问题', main_input="谁是应急食品?")
     # plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
-    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2307.07522")
+    # plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2307.07522")
-    plugin_test(plugin='crazy_functions.PDF批量翻译->批量翻译PDF文档', main_input='build/pdf/t1.pdf')
+    # plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='build/pdf/t1.pdf')
     # plugin_test(
-    #     plugin="crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF",
+    #     plugin="crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF",
     #     main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
     # )
@@ -43,9 +45,9 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
-    # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md")
+    # plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown中译英', main_input="README.md")
-    # plugin_test(plugin='crazy_functions.PDF批量翻译->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
+    # plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
     # plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
@@ -60,7 +62,7 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")
     # for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
-    #     plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})
+    #     plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})
     # plugin_test(plugin='crazy_functions.知识库文件注入->知识库文件注入', main_input="./")
@@ -68,7 +70,7 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")
-    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2210.03629")
+    # plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2210.03629")
     # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内,用第二人称。' --system_prompt=''" }
     # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)

tests/test_safe_pickle.py (new file, +17 lines)

@@ -0,0 +1,17 @@
def validate_path():
    import os, sys
    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)

validate_path()  # validate path so you can run from base directory

from crazy_functions.latex_fns.latex_pickle_io import objdump, objload
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit

pfg = LatexPaperFileGroup()
pfg.get_token_num = None
pfg.target = "target_elem"

x = objdump(pfg)
t = objload()
print(t.target)
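Editor's note: `objdump`/`objload` come from `latex_pickle_io`, added alongside the "resolve safe pickle err" commit. The standard way to make `pickle.load` safe against arbitrary-class instantiation is a whitelisting `Unpickler`; a sketch of the technique (an assumption for illustration, not the exact contents of `crazy_functions/latex_fns/latex_pickle_io.py`):

    # Whitelisting unpickler: only explicitly allowed classes may be reconstructed.
    import io
    import pickle

    ALLOWED = {
        ("crazy_functions.latex_fns.latex_actions", "LatexPaperFileGroup"),
        ("crazy_functions.latex_fns.latex_actions", "LatexPaperSplit"),
    }

    class SafeUnpickler(pickle.Unpickler):
        def find_class(self, module, name):
            if (module, name) in ALLOWED:
                return super().find_class(module, name)
            raise pickle.UnpicklingError(f"forbidden class: {module}.{name}")

    def safe_load(data: bytes):
        return SafeUnpickler(io.BytesIO(data)).load()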


@@ -0,0 +1,102 @@
def validate_path():
    import os, sys
    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)

validate_path()  # validate path so you can run from base directory

def write_chat_to_file(chatbot, history=None, file_name=None):
    """
    将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
    """
    import os
    import time
    from themes.theme import advanced_css

    # debug
    import pickle
    # def objdump(obj, file="objdump.tmp"):
    #     with open(file, "wb+") as f:
    #         pickle.dump(obj, f)
    #     return

    def objload(file="objdump.tmp"):
        import os
        if not os.path.exists(file):
            return
        with open(file, "rb") as f:
            return pickle.load(f)

    # objdump((chatbot, history))
    chatbot, history = objload()

    with open("test.html", 'w', encoding='utf8') as f:
        from textwrap import dedent
        form = dedent("""
            <!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
            <body>
                <div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
                <div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
                    <div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
                        {CHAT_PREVIEW}
                        <div></div>
                        <div></div>
                        <div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话(原始数据)</div>
                        {HISTORY_PREVIEW}
                    </div>
                </div>
                <div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
            </body>
        """)
        qa_from = dedent("""
            <div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
                <div class="Question" style="border-radius: 2px;">{QUESTION}</div>
                <hr color="blue" style="border-top: dotted 2px #ccc;">
                <div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
            </div>
        """)
        history_from = dedent("""
            <div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
                <div class="entry" style="border-radius: 2px;">{ENTRY}</div>
            </div>
        """)
        CHAT_PREVIEW_BUF = ""
        for i, contents in enumerate(chatbot):
            question, answer = contents[0], contents[1]
            if question is None: question = ""
            try: question = str(question)
            except: question = ""
            if answer is None: answer = ""
            try: answer = str(answer)
            except: answer = ""
            CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)

        HISTORY_PREVIEW_BUF = ""
        for h in history:
            HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)

        html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)

    from bs4 import BeautifulSoup
    soup = BeautifulSoup(html_content, 'lxml')
    # 提取QaBox信息
    qa_box_list = []
    qa_boxes = soup.find_all("div", class_="QaBox")
    for box in qa_boxes:
        question = box.find("div", class_="Question").get_text(strip=False)
        answer = box.find("div", class_="Answer").get_text(strip=False)
        qa_box_list.append({"Question": question, "Answer": answer})
    # 提取historyBox信息
    history_box_list = []
    history_boxes = soup.find_all("div", class_="historyBox")
    for box in history_boxes:
        entry = box.find("div", class_="entry").get_text(strip=False)
        history_box_list.append(entry)
    print('')

write_chat_to_file(None, None, None)

tests/test_searxng.py (new file, +58 lines)

@@ -0,0 +1,58 @@
def validate_path():
    import os, sys
    os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)

validate_path()  # validate path so you can run from base directory

from toolbox import get_conf
import requests

def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
    url = 'http://localhost:50001/'
    if engines is None:
        engine = 'bing,'
    if categories == 'general':
        params = {
            'q': query,        # 搜索查询
            'format': 'json',  # 输出格式为JSON
            'language': 'zh',  # 搜索语言
            'engines': engine,
        }
    elif categories == 'science':
        params = {
            'q': query,        # 搜索查询
            'format': 'json',  # 输出格式为JSON
            'language': 'zh',  # 搜索语言
            'categories': 'science'
        }
    else:
        raise ValueError('不支持的检索类型')
    headers = {
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'X-Forwarded-For': '112.112.112.112',
        'X-Real-IP': '112.112.112.112'
    }
    results = []
    response = requests.post(url, params=params, headers=headers, proxies=proxies, timeout=30)
    if response.status_code == 200:
        json_result = response.json()
        for result in json_result['results']:
            item = {
                "title": result.get("title", ""),
                "content": result.get("content", ""),
                "link": result["url"],
            }
            print(result['engines'])
            results.append(item)
        return results
    else:
        if response.status_code == 429:
            raise ValueError("Searxng在线搜索服务当前使用人数太多,请稍后。")
        else:
            raise ValueError("在线搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))

res = searxng_request("vr environment", None, categories='science', searxng_url=None, engines=None)
print(res)
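Two operational notes on this test: the hardcoded `url` assumes a local SearXNG instance on port 50001, and `format=json` only works if the instance has the JSON output format enabled in its settings (otherwise SearXNG typically answers 403). A usage sketch under those assumptions:

    # Assumes a local SearXNG on :50001 with JSON output enabled in settings.yml.
    results = searxng_request("transformer positional encoding", proxies=None, categories='general')
    for item in results[:3]:
        # each item has the shape {"title": ..., "content": ..., "link": ...}
        print(item["title"], item["link"])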


@@ -1,3 +1,9 @@
+#plugin_arg_menu {
+    transform: translate(-50%, -50%);
+    border: dashed;
+}
+
 /* hide remove all button */
 .remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
     visibility: hidden;
@@ -109,6 +115,7 @@
     border-width: thin;
     user-select: none;
     padding-left: 2%;
+    text-align: center;
 }

 .floating-component #input-panel2 {
@@ -118,3 +125,20 @@
     border-width: thin;
     border-top-width: 0;
 }
+
+.floating-component #plugin_arg_panel {
+    border-top-left-radius: 0px;
+    border-top-right-radius: 0px;
+    border: solid;
+    border-width: thin;
+    border-top-width: 0;
+}
+
+.floating-component #edit-panel {
+    border-top-left-radius: 0px;
+    border-top-right-radius: 0px;
+    border: solid;
+    border-width: thin;
+    border-top-width: 0;
+}


@@ -1,3 +1,6 @@
+// 标志位
+enable_tts = false;
+
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 // 第 1 部分: 工具函数
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@@ -331,6 +334,9 @@ function addCopyButton(botElement, index, is_last_in_arr) {
         toast_push('正在合成语音 & 自动朗读已开启 (再次点击此按钮可禁用自动朗读)。', 3000);
         // toast_push('正在合成语音', 3000);
         const readText = botElement.innerText;
+        prev_chatbot_index = index;
+        prev_text = readText;
+        prev_text_already_pushed = readText;
         push_text_to_audio(readText);
         setCookie("js_auto_read_cookie", "True", 365);
     }
@@ -828,39 +834,44 @@ function limit_scroll_position() {
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 function loadLive2D() {
-    try {
-        $("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
-        $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
-        $.ajax({
-            url: "file=themes/waifu_plugin/waifu-tips.js", dataType: "script", cache: true, success: function () {
-                $.ajax({
-                    url: "file=themes/waifu_plugin/live2d.js", dataType: "script", cache: true, success: function () {
-                        /* 可直接修改部分参数 */
-                        live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
-                        live2d_settings['modelId'] = 3; // 默认模型 ID
-                        live2d_settings['modelTexturesId'] = 44; // 默认材质 ID
-                        live2d_settings['modelStorage'] = false; // 不储存模型 ID
-                        live2d_settings['waifuSize'] = '210x187';
-                        live2d_settings['waifuTipsSize'] = '187x52';
-                        live2d_settings['canSwitchModel'] = true;
-                        live2d_settings['canSwitchTextures'] = true;
-                        live2d_settings['canSwitchHitokoto'] = false;
-                        live2d_settings['canTakeScreenshot'] = false;
-                        live2d_settings['canTurnToHomePage'] = false;
-                        live2d_settings['canTurnToAboutPage'] = false;
-                        live2d_settings['showHitokoto'] = false; // 显示一言
-                        live2d_settings['showF12Status'] = false; // 显示加载状态
-                        live2d_settings['showF12Message'] = false; // 显示看板娘消息
-                        live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
-                        live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
-                        live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
-                        /* 在 initModel 前添加 */
-                        initModel("file=themes/waifu_plugin/waifu-tips.json");
-                    }
-                });
-            }
-        });
-    } catch (err) { console.log("[Error] JQuery is not defined.") }
+    if (document.querySelector(".waifu")) {
+        $('.waifu').show();
+    } else {
+        try {
+            $("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
+            $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
+            $.ajax({
+                url: "file=themes/waifu_plugin/waifu-tips.js", dataType: "script", cache: true, success: function () {
+                    $.ajax({
+                        url: "file=themes/waifu_plugin/live2d.js", dataType: "script", cache: true, success: function () {
+                            /* 可直接修改部分参数 */
+                            live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
+                            live2d_settings['modelId'] = 3; // 默认模型 ID
+                            live2d_settings['modelTexturesId'] = 44; // 默认材质 ID
+                            live2d_settings['modelStorage'] = false; // 不储存模型 ID
+                            live2d_settings['waifuSize'] = '210x187';
+                            live2d_settings['waifuTipsSize'] = '187x52';
+                            live2d_settings['canSwitchModel'] = true;
+                            live2d_settings['canSwitchTextures'] = true;
+                            live2d_settings['canSwitchHitokoto'] = false;
+                            live2d_settings['canTakeScreenshot'] = false;
+                            live2d_settings['canTurnToHomePage'] = false;
+                            live2d_settings['canTurnToAboutPage'] = false;
+                            live2d_settings['showHitokoto'] = false; // 显示一言
+                            live2d_settings['showF12Status'] = false; // 显示加载状态
+                            live2d_settings['showF12Message'] = false; // 显示看板娘消息
+                            live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
+                            live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
+                            live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
+                            /* 在 initModel 前添加 */
+                            initModel("file=themes/waifu_plugin/waifu-tips.json");
+                        }
+                    });
+                }
+            });
+        } catch (err) { console.log("[Error] JQuery is not defined.") }
+    }
 }
@@ -906,134 +917,9 @@ function gpt_academic_gradio_saveload(
     }
 }

-enable_tts = false;
-async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
-    // 第一部分,布局初始化
-    audio_fn_init();
-    minor_ui_adjustment();
-    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
-    var chatbotObserver = new MutationObserver(() => {
-        chatbotContentChanged(1);
-    });
-    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
-    if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
-    if (layout === "LEFT-RIGHT") { limit_scroll_position(); }
-    // 第二部分,读取Cookie,初始话界面
-    let searchString = "";
-    let bool_value = "";
-    // darkmode 深色模式
-    if (getCookie("js_darkmode_cookie")) {
-        dark = getCookie("js_darkmode_cookie")
-    }
-    dark = dark == "True";
-    if (document.querySelectorAll('.dark').length) {
-        if (!dark) {
-            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
-        }
-    } else {
-        if (dark) {
-            document.querySelector('body').classList.add('dark');
-        }
-    }
-    // 自动朗读
-    if (tts != "DISABLE") {
-        enable_tts = true;
-        if (getCookie("js_auto_read_cookie")) {
-            auto_read_tts = getCookie("js_auto_read_cookie")
-            auto_read_tts = auto_read_tts == "True";
-            if (auto_read_tts) {
-                allow_auto_read_tts_flag = true;
-            }
-        }
-    }
-    // SysPrompt 系统静默提示词
-    gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
-    // Temperature 大模型温度参数
-    gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");
-    // md_dropdown 大模型类型选择
-    if (getCookie("js_md_dropdown_cookie")) {
-        const cached_model = getCookie("js_md_dropdown_cookie");
-        var model_sel = await get_gradio_component("elem_model_sel");
-        // deterine whether the cached model is in the choices
-        if (model_sel.props.choices.includes(cached_model)) {
-            // change dropdown
-            gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
-            // 连锁修改chatbot的label
-            push_data_to_gradio_component({
-                label: '当前模型:' + getCookie("js_md_dropdown_cookie"),
-                __type__: 'update'
-            }, "gpt-chatbot", "obj")
-        }
-    }
-    // clearButton 自动清除按钮
-    if (getCookie("js_clearbtn_show_cookie")) {
-        // have cookie
-        bool_value = getCookie("js_clearbtn_show_cookie")
-        bool_value = bool_value == "True";
-        searchString = "输入清除键";
-        if (bool_value) {
-            // make btns appear
-            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
-            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
-            // deal with checkboxes
-            let arr_with_clear_btn = update_array(
-                await get_data_from_gradio_component('cbs'), "输入清除键", "add"
-            )
-            push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
-        } else {
-            // make btns disappear
-            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
-            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
-            // deal with checkboxes
-            let arr_without_clear_btn = update_array(
-                await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
-            )
-            push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
-        }
-    }
-    // live2d 显示
-    if (getCookie("js_live2d_show_cookie")) {
-        // have cookie
-        searchString = "添加Live2D形象";
-        bool_value = getCookie("js_live2d_show_cookie");
-        bool_value = bool_value == "True";
-        if (bool_value) {
-            loadLive2D();
-            let arr_with_live2d = update_array(
-                await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
-            )
-            push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
-        } else {
-            try {
-                $('.waifu').hide();
-                let arr_without_live2d = update_array(
-                    await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
-                )
-                push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
-            } catch (error) {
-            }
-        }
-    } else {
-        // do not have cookie
-        if (live2d) {
-            loadLive2D();
-        } else {
-        }
-    }
-}

 function reset_conversation(a, b) {
-    console.log("js_code_reset");
+    // console.log("js_code_reset");
     a = btoa(unescape(encodeURIComponent(JSON.stringify(a))));
     setCookie("js_previous_chat_cookie", a, 1);
     gen_restore_btn();
@@ -1173,7 +1059,7 @@ async function on_plugin_exe_complete(fn_name) {
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 // 第 8 部分: TTS语音生成函数
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+audio_debug = false;
 class AudioPlayer {
     constructor() {
         this.audioCtx = new (window.AudioContext || window.webkitAudioContext)();
@@ -1321,14 +1207,14 @@ function trigger(T, fire) {
 }

-prev_text = "";
-prev_text_already_pushed = "";
+prev_text = "";                 // previous text, this is used to check chat changes
+prev_text_already_pushed = "";  // previous text already pushed to audio, this is used to check where we should continue to play audio
 prev_chatbot_index = -1;
 const delay_live_text_update = trigger(3000, on_live_stream_terminate);
 function on_live_stream_terminate(latest_text) {
     // remove `prev_text_already_pushed` from `latest_text`
-    console.log("on_live_stream_terminate", latest_text)
+    if (audio_debug) console.log("on_live_stream_terminate", latest_text);
     remaining_text = latest_text.slice(prev_text_already_pushed.length);
     if ((!isEmptyOrWhitespaceOnly(remaining_text)) && remaining_text.length != 0) {
         prev_text_already_pushed = latest_text;
@@ -1393,19 +1279,19 @@ function process_latest_text_output(text, chatbot_index) {
         delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
     }
     else if (chatbot_index == prev_chatbot_index && !is_continue) {
-        console.log('---------------------')
-        console.log('text twisting!')
-        console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed)
-        console.log('---------------------')
+        if (audio_debug) console.log('---------------------');
+        if (audio_debug) console.log('text twisting!');
+        if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
+        if (audio_debug) console.log('---------------------');
         prev_text_already_pushed = "";
         delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
     }
     else {
         // on_new_message_begin, we have to clear `prev_text_already_pushed`
-        console.log('---------------------')
-        console.log('new message begin!')
-        console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed)
-        console.log('---------------------')
+        if (audio_debug) console.log('---------------------');
+        if (audio_debug) console.log('new message begin!');
+        if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
+        if (audio_debug) console.log('---------------------');
         prev_text_already_pushed = "";
         process_increased_text(text);
         delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
@@ -1433,7 +1319,7 @@ async function push_text_to_audio(text) {
     // Call the async postData function and log the response
     post_text(url, payload, send_index);
     send_index = send_index + 1;
-    console.log(send_index, audio_buf_text)
+    if (audio_debug) console.log(send_index, audio_buf_text);
     // sleep 2 seconds
     if (allow_auto_read_tts_flag) {
         await delay(3000);
@@ -1450,10 +1336,10 @@ to_be_processed = [];
 async function UpdatePlayQueue(cnt, audio_buf_wave) {
     if (cnt != recv_index) {
         to_be_processed.push([cnt, audio_buf_wave]);
-        console.log('cache', cnt);
+        if (audio_debug) console.log('cache', cnt);
     }
     else {
-        console.log('processing', cnt);
+        if (audio_debug) console.log('processing', cnt);
         recv_index = recv_index + 1;
         if (audio_buf_wave) {
             audioPlayer.enqueueAudio(audio_buf_wave);
@@ -1463,7 +1349,7 @@ async function UpdatePlayQueue(cnt, audio_buf_wave) {
     find_any = false;
     for (i = to_be_processed.length - 1; i >= 0; i--) {
         if (to_be_processed[i][0] == recv_index) {
-            console.log('processing cached', recv_index);
+            if (audio_debug) console.log('processing cached', recv_index);
             if (to_be_processed[i][1]) {
                 audioPlayer.enqueueAudio(to_be_processed[i][1]);
             }
@@ -1522,17 +1408,259 @@ async function postData(url = '', data = {}) {
     }
 }

+async function generate_menu(guiBase64String, btnName) {
+    // assign the button and menu data
+    push_data_to_gradio_component(guiBase64String, "invisible_current_pop_up_plugin_arg", "string");
+    push_data_to_gradio_component(btnName, "invisible_callback_btn_for_plugin_exe", "string");
+    // Base64 to dict
+    const stringData = atob(guiBase64String);
+    let guiJsonData = JSON.parse(stringData);
+    let menu = document.getElementById("plugin_arg_menu");
+    gui_args = {}
+    for (const key in guiJsonData) {
+        if (guiJsonData.hasOwnProperty(key)) {
+            const innerJSONString = guiJsonData[key];
+            const decodedObject = JSON.parse(innerJSONString);
+            gui_args[key] = decodedObject;
+        }
+    }
+    // 使参数菜单显现
+    push_data_to_gradio_component({
+        visible: true,
+        __type__: 'update'
+    }, "plugin_arg_menu", "obj");
+    hide_all_elem();
+    // 根据 gui_args, 使得对应参数项显现
+    let text_cnt = 0;
+    let dropdown_cnt = 0;
+    // PLUGIN_ARG_MENU
+    for (const key in gui_args) {
+        if (gui_args.hasOwnProperty(key)) {
+            ///////////////////////////////////////////////////////////////////////////////////////////
+            //////////////////////////////////// Textbox ////////////////////////////////////
+            ///////////////////////////////////////////////////////////////////////////////////////////
+            if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
+                const component_name = "plugin_arg_txt_" + text_cnt;
+                push_data_to_gradio_component({
+                    visible: true,
+                    label: gui_args[key].title + "(" + gui_args[key].description + ")",
+                    // label: gui_args[key].title,
+                    placeholder: gui_args[key].description,
+                    __type__: 'update'
+                }, component_name, "obj");
+                if (key === "main_input") {
+                    // 为了与旧插件兼容,生成菜单时,自动加载输入栏的值
+                    let current_main_input = await get_data_from_gradio_component('user_input_main');
+                    let current_main_input_2 = await get_data_from_gradio_component('user_input_float');
+                    push_data_to_gradio_component(current_main_input + current_main_input_2, component_name, "obj");
+                }
+                else if (key === "advanced_arg") {
+                    // 为了与旧插件兼容,生成菜单时,自动加载旧高级参数输入区的值
+                    let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
+                    push_data_to_gradio_component(advance_arg_input_legacy, component_name, "obj");
+                }
+                else {
+                    push_data_to_gradio_component(gui_args[key].default_value, component_name, "obj");
+                }
+                document.getElementById(component_name).parentNode.parentNode.style.display = '';
+                text_cnt += 1;
+            }
+            ///////////////////////////////////////////////////////////////////////////////////////////
+            //////////////////////////////////// Dropdown ////////////////////////////////////
+            ///////////////////////////////////////////////////////////////////////////////////////////
+            if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
+                const component_name = "plugin_arg_drop_" + dropdown_cnt;
+                push_data_to_gradio_component({
+                    visible: true,
+                    choices: gui_args[key].options,
+                    label: gui_args[key].title + "(" + gui_args[key].description + ")",
+                    // label: gui_args[key].title,
+                    placeholder: gui_args[key].description,
+                    __type__: 'update'
+                }, component_name, "obj");
+                push_data_to_gradio_component(gui_args[key].default_value, component_name, "obj");
+                document.getElementById(component_name).parentNode.style.display = '';
+                dropdown_cnt += 1;
+            }
+        }
+    }
+}
+
+async function execute_current_pop_up_plugin() {
+    let guiBase64String = await get_data_from_gradio_component('invisible_current_pop_up_plugin_arg');
+    const stringData = atob(guiBase64String);
+    let guiJsonData = JSON.parse(stringData);
+    gui_args = {}
+    for (const key in guiJsonData) {
+        if (guiJsonData.hasOwnProperty(key)) {
+            const innerJSONString = guiJsonData[key];
+            const decodedObject = JSON.parse(innerJSONString);
+            gui_args[key] = decodedObject;
+        }
+    }
+    // read user confirmed value
+    let text_cnt = 0;
+    for (const key in gui_args) {
+        if (gui_args.hasOwnProperty(key)) {
+            if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
+                corrisponding_elem_id = "plugin_arg_txt_" + text_cnt
+                gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
+                text_cnt += 1;
+            }
+        }
+    }
+    let dropdown_cnt = 0;
+    for (const key in gui_args) {
+        if (gui_args.hasOwnProperty(key)) {
+            if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
+                corrisponding_elem_id = "plugin_arg_drop_" + dropdown_cnt
+                gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
+                dropdown_cnt += 1;
+            }
+        }
+    }
+    // close menu
+    push_data_to_gradio_component({
+        visible: false,
+        __type__: 'update'
+    }, "plugin_arg_menu", "obj");
+    hide_all_elem();
+    // execute the plugin
+    push_data_to_gradio_component(JSON.stringify(gui_args), "invisible_current_pop_up_plugin_arg_final", "string");
+    document.getElementById("invisible_callback_btn_for_plugin_exe").click();
+}
+
+function hide_all_elem() {
+    // PLUGIN_ARG_MENU
+    for (text_cnt = 0; text_cnt < 8; text_cnt++) {
+        push_data_to_gradio_component({
+            visible: false,
+            label: "",
+            __type__: 'update'
+        }, "plugin_arg_txt_" + text_cnt, "obj");
+        document.getElementById("plugin_arg_txt_" + text_cnt).parentNode.parentNode.style.display = 'none';
+    }
+    for (dropdown_cnt = 0; dropdown_cnt < 8; dropdown_cnt++) {
+        push_data_to_gradio_component({
+            visible: false,
+            choices: [],
+            label: "",
+            __type__: 'update'
+        }, "plugin_arg_drop_" + dropdown_cnt, "obj");
+        document.getElementById("plugin_arg_drop_" + dropdown_cnt).parentNode.style.display = 'none';
+    }
+}
+
+function close_current_pop_up_plugin() {
+    // PLUGIN_ARG_MENU
+    push_data_to_gradio_component({
+        visible: false,
+        __type__: 'update'
+    }, "plugin_arg_menu", "obj");
+    hide_all_elem();
+}
+
+// 生成高级插件的选择菜单
+plugin_init_info_lib = {}
+function register_plugin_init(key, base64String) {
+    // console.log('x')
+    const stringData = atob(base64String);
+    let guiJsonData = JSON.parse(stringData);
+    if (key in plugin_init_info_lib) {
+    } else {
+        plugin_init_info_lib[key] = {};
+    }
+    plugin_init_info_lib[key].info = guiJsonData.Info;
+    plugin_init_info_lib[key].color = guiJsonData.Color;
+    plugin_init_info_lib[key].elem_id = guiJsonData.ButtonElemId;
+    plugin_init_info_lib[key].label = guiJsonData.Label
+    plugin_init_info_lib[key].enable_advanced_arg = guiJsonData.AdvancedArgs;
+    plugin_init_info_lib[key].arg_reminder = guiJsonData.ArgsReminder;
+}
+
+function register_advanced_plugin_init_code(key, code) {
+    if (key in plugin_init_info_lib) {
+    } else {
+        plugin_init_info_lib[key] = {};
+    }
+    plugin_init_info_lib[key].secondary_menu_code = code;
+}
+
+function run_advanced_plugin_launch_code(key) {
+    // convert js code string to function
+    generate_menu(plugin_init_info_lib[key].secondary_menu_code, key);
+}
+
+function on_flex_button_click(key) {
+    if (plugin_init_info_lib.hasOwnProperty(key) && plugin_init_info_lib[key].hasOwnProperty('secondary_menu_code')) {
+        run_advanced_plugin_launch_code(key);
+    } else {
+        document.getElementById("old_callback_btn_for_plugin_exe").click();
+    }
+}
+
+async function run_dropdown_shift(dropdown) {
+    let key = dropdown;
+    push_data_to_gradio_component({
+        value: key,
+        variant: plugin_init_info_lib[key].color,
+        info_str: plugin_init_info_lib[key].info,
+        __type__: 'update'
+    }, "elem_switchy_bt", "obj");
+    if (plugin_init_info_lib[key].enable_advanced_arg) {
+        push_data_to_gradio_component({
+            visible: true,
+            label: plugin_init_info_lib[key].label,
+            __type__: 'update'
+        }, "advance_arg_input_legacy", "obj");
+    } else {
+        push_data_to_gradio_component({
+            visible: false,
+            label: plugin_init_info_lib[key].label,
+            __type__: 'update'
+        }, "advance_arg_input_legacy", "obj");
+    }
+}
+
+async function run_classic_plugin_via_id(plugin_elem_id) {
+    // find elementid
+    for (key in plugin_init_info_lib) {
+        if (plugin_init_info_lib[key].elem_id == plugin_elem_id) {
+            let current_btn_name = await get_data_from_gradio_component(plugin_elem_id);
+            console.log(current_btn_name);
+            gui_args = {}
+            // 关闭菜单 (如果处于开启状态)
+            push_data_to_gradio_component({
+                visible: false,
+                __type__: 'update'
+            }, "plugin_arg_menu", "obj");
+            hide_all_elem();
+            // 为了与旧插件兼容,生成菜单时,自动加载旧高级参数输入区的值
+            let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
+            if (advance_arg_input_legacy.length != 0) {
+                gui_args["advanced_arg"] = {};
+                gui_args["advanced_arg"].user_confirmed_value = advance_arg_input_legacy;
+            }
+            // execute the plugin
+            push_data_to_gradio_component(JSON.stringify(gui_args), "invisible_current_pop_up_plugin_arg_final", "string");
+            push_data_to_gradio_component(current_btn_name, "invisible_callback_btn_for_plugin_exe", "string");
+            document.getElementById("invisible_callback_btn_for_plugin_exe").click();
+            return;
+        }
+    }
+    // console.log('unable to find function');
+    return;
+}
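The handoff protocol here is an outer JSON object whose values are themselves JSON strings, base64-encoded end to end: `generate_menu` does `atob` → `JSON.parse` → `JSON.parse` per key. The producer side would therefore encode roughly like this (a Python sketch under that assumption, not the repository's actual encoder; field names mirror what the JS reads):

    # Double-JSON + base64 encoding that generate_menu() can decode.
    import base64
    import json

    def encode_plugin_menu(arg_defs: dict) -> str:
        outer = {key: json.dumps(value) for key, value in arg_defs.items()}  # inner JSON per key
        return base64.b64encode(json.dumps(outer).encode('utf-8')).decode('ascii')

    payload = encode_plugin_menu({
        "main_input": {"type": "string", "title": "ArXiv ID", "description": "例如 2203.01927", "default_value": ""},
        "mode": {"type": "dropdown", "title": "模式", "description": "选择模式", "options": ["fast", "full"], "default_value": "fast"},
    })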


@@ -1,3 +1,4 @@
+from functools import cache
 from toolbox import get_conf
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
@@ -23,22 +24,30 @@ def minimize_js(common_js_path):
     except:
         return common_js_path

+@cache
 def get_common_html_javascript_code():
     js = "\n"
-    common_js_path = "themes/common.js"
-    minimized_js_path = minimize_js(common_js_path)
-    for jsf in [
-        f"file={minimized_js_path}",
-    ]:
-        js += f"""<script src="{jsf}"></script>\n"""
-    # 添加Live2D
-    if ADD_WAIFU:
+    common_js_path_list = [
+        "themes/common.js",
+        "themes/theme.js",
+        "themes/init.js",
+    ]
+    if ADD_WAIFU:  # 添加Live2D
+        common_js_path_list += [
+            "themes/waifu_plugin/jquery.min.js",
+            "themes/waifu_plugin/jquery-ui.min.js",
+        ]
+    for common_js_path in common_js_path_list:
+        if '.min.' not in common_js_path:
+            minimized_js_path = minimize_js(common_js_path)
         for jsf in [
-            "file=themes/waifu_plugin/jquery.min.js",
-            "file=themes/waifu_plugin/jquery-ui.min.js",
+            f"file={minimized_js_path}",
         ]:
             js += f"""<script src="{jsf}"></script>\n"""
-    else:
+    if not ADD_WAIFU:
         js += """<script>window.loadLive2D = function(){};</script>\n"""
     return js
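The new `@cache` decorator (functools, Python ≥ 3.9) memoizes this zero-argument function, so the script-tag HTML is assembled and minified once per process rather than on every page render. A small demonstration of that semantics:

    # functools.cache on a zero-argument function: the body runs once, later calls hit the cache.
    from functools import cache

    calls = 0

    @cache
    def build_header() -> str:
        global calls
        calls += 1
        return "<script src='file=themes/common.js'></script>"  # illustrative value

    build_header(); build_header()
    assert calls == 1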


@@ -24,8 +24,8 @@
 /* 小按钮 */
 #basic-panel .sm {
     font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
-    --button-small-text-weight: 600;
-    --button-small-text-size: 16px;
+    --button-small-text-weight: 400;
+    --button-small-text-size: 14px;
     border-bottom-right-radius: 6px;
     border-bottom-left-radius: 6px;
     border-top-right-radius: 6px;


@@ -0,0 +1,57 @@
import gradio as gr
import json
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith

def define_gui_advanced_plugin_class(plugins):
    # 定义新一代插件的高级参数区
    with gr.Floating(init_x="50%", init_y="50%", visible=False, width="30%", drag="top", elem_id="plugin_arg_menu"):
        with gr.Accordion("选择插件参数", open=True, elem_id="plugin_arg_panel"):
            for u in range(8):
                with gr.Row():
                    gr.Textbox(show_label=True, label="T1", placeholder="请输入", lines=1, visible=False, elem_id=f"plugin_arg_txt_{u}").style(container=False)
            for u in range(8):
                with gr.Row():  # PLUGIN_ARG_MENU
                    gr.Dropdown(label="T1", value="请选择", choices=[], visible=True, elem_id=f"plugin_arg_drop_{u}", interactive=True)
            with gr.Row():
                # 这个隐藏textbox负责装入当前弹出插件的属性
                gr.Textbox(show_label=False, placeholder="请输入", lines=1, visible=False,
                           elem_id=f"invisible_current_pop_up_plugin_arg").style(container=False)
                usr_confirmed_arg = gr.Textbox(show_label=False, placeholder="请输入", lines=1, visible=False,
                                               elem_id=f"invisible_current_pop_up_plugin_arg_final").style(container=False)
            arg_confirm_btn = gr.Button("确认参数并执行", variant="stop")
            arg_confirm_btn.style(size="sm")
            arg_cancel_btn = gr.Button("取消", variant="stop")
            arg_cancel_btn.click(None, None, None, _js="""()=>close_current_pop_up_plugin()""")
            arg_cancel_btn.style(size="sm")
            arg_confirm_btn.click(None, None, None, _js="""()=>execute_current_pop_up_plugin()""")
    invisible_callback_btn_for_plugin_exe = gr.Button(r"未选定任何插件", variant="secondary", visible=False, elem_id="invisible_callback_btn_for_plugin_exe").style(size="sm")

    # 随变按钮的回调函数注册
    def route_switchy_bt_with_arg(request: gr.Request, input_order, *arg):
        arguments = {k: v for k, v in zip(input_order, arg)}        # 重新梳理输入参数,转化为kwargs字典
        which_plugin = arguments.pop('new_plugin_callback')         # 获取需要执行的插件名称
        if which_plugin in [r"未选定任何插件"]: return
        usr_confirmed_arg = arguments.pop('usr_confirmed_arg')      # 获取插件参数
        arg_confirm: dict = {}
        usr_confirmed_arg_dict = json.loads(usr_confirmed_arg)      # 读取插件参数
        for arg_name in usr_confirmed_arg_dict:
            arg_confirm.update({arg_name: str(usr_confirmed_arg_dict[arg_name]['user_confirmed_value'])})
        if plugins[which_plugin].get("Class", None) is not None:    # 获取插件执行函数
            plugin_obj = plugins[which_plugin]["Class"]
            plugin_exe = plugin_obj.execute
        else:
            plugin_exe = plugins[which_plugin]["Function"]
        arguments['plugin_advanced_arg'] = arg_confirm              # 更新高级参数输入区的参数
        if arg_confirm.get('main_input', None) is not None:         # 更新主输入区的参数
            arguments['txt'] = arg_confirm['main_input']
        # 万事俱备,开始执行
        yield from ArgsGeneralWrapper(plugin_exe)(request, *arguments.values())

    return invisible_callback_btn_for_plugin_exe, route_switchy_bt_with_arg, usr_confirmed_arg
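`route_switchy_bt_with_arg` expects `usr_confirmed_arg` to be a JSON object keyed by argument name, each value carrying a `user_confirmed_value` field, which is exactly what the JS side writes into `invisible_current_pop_up_plugin_arg_final`. For instance (values are illustrative):

    # Example of the payload shape consumed above.
    import json

    usr_confirmed_arg = json.dumps({
        "main_input": {"type": "string", "user_confirmed_value": "2203.01927"},
        "advanced_arg": {"type": "string", "user_confirmed_value": "--compress"},
    })
    # json.loads(usr_confirmed_arg)[name]['user_confirmed_value'] is what arg_confirm collects.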

themes/gui_floating_menu.py (new file, +41 lines)

@@ -0,0 +1,41 @@
import gradio as gr

def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache):
    with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
        with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
            with gr.Row() as row:
                row.style(equal_height=True)
                with gr.Column(scale=10):
                    txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
                                      elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
                with gr.Column(scale=1, min_width=40):
                    submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
                    resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                    stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
                    clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")

    with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
        with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
            with gr.Row() as row:
                with gr.Column(scale=10):
                    AVAIL_BTN = [btn for btn in customize_btns.keys()] + [k for k in functional]
                    basic_btn_dropdown = gr.Dropdown(AVAIL_BTN, value="自定义按钮1", label="选择一个需要自定义基础功能区按钮").style(container=False)
                    basic_fn_title = gr.Textbox(show_label=False, placeholder="输入新按钮名称", lines=1).style(container=False)
                    basic_fn_prefix = gr.Textbox(show_label=False, placeholder="输入新提示前缀", lines=4).style(container=False)
                    basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
                with gr.Column(scale=1, min_width=70):
                    basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
                    basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
                from shared_utils.cookie_manager import assign_btn__fn_builder
                assign_btn = assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web_cookie_cache)
                # update btn
                h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
                                           [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
                h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
                # clean up btn
                h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
                                          [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
                h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")

    return area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2

themes/gui_toolbar.py (new file, +34 lines)

@@ -0,0 +1,34 @@
import gradio as gr

def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode):
    with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
        with gr.Row():
            with gr.Tab("上传文件", elem_id="interact-panel"):
                gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
                file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")

            with gr.Tab("更换模型", elem_id="interact-panel"):
                md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, elem_id="elem_model_sel", label="更换LLM模型/请求源").style(container=False)
                top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01, interactive=True, label="Top-p (nucleus sampling)",)
                temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
                max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
                system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")
                temperature.change(None, inputs=[temperature], outputs=None,
                                   _js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
                system_prompt.change(None, inputs=[system_prompt], outputs=None,
                                     _js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")
                md_dropdown.change(None, inputs=[md_dropdown], outputs=None,
                                   _js="""(md_dropdown)=>gpt_academic_gradio_saveload("save", "elem_model_sel", "js_md_dropdown_cookie", md_dropdown)""")

            with gr.Tab("界面外观", elem_id="interact-panel"):
                theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
                checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
                opt = ["自定义菜单"]
                value = []
                if ADD_WAIFU: opt += ["添加Live2D形象"]; value += ["添加Live2D形象"]
                checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
                dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
                dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)

            with gr.Tab("帮助", elem_id="interact-panel"):
                gr.Markdown(help_menu_description)

    return checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature

themes/init.js (new file, +125 lines)

@@ -0,0 +1,125 @@
async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
    // 第一部分,布局初始化
    audio_fn_init();
    minor_ui_adjustment();
    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
    var chatbotObserver = new MutationObserver(() => {
        chatbotContentChanged(1);
    });
    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
    if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
    if (layout === "LEFT-RIGHT") { limit_scroll_position(); }

    // 第二部分,读取Cookie,初始化界面
    let searchString = "";
    let bool_value = "";

    // darkmode 深色模式
    if (getCookie("js_darkmode_cookie")) {
        dark = getCookie("js_darkmode_cookie")
    }
    dark = dark == "True";
    if (document.querySelectorAll('.dark').length) {
        if (!dark) {
            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
        }
    } else {
        if (dark) {
            document.querySelector('body').classList.add('dark');
        }
    }

    // 自动朗读
    if (tts != "DISABLE") {
        enable_tts = true;
        if (getCookie("js_auto_read_cookie")) {
            auto_read_tts = getCookie("js_auto_read_cookie")
            auto_read_tts = auto_read_tts == "True";
            if (auto_read_tts) {
                allow_auto_read_tts_flag = true;
            }
        }
    }

    // SysPrompt 系统静默提示词
    gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
    // Temperature 大模型温度参数
    gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");

    // md_dropdown 大模型类型选择
    if (getCookie("js_md_dropdown_cookie")) {
        const cached_model = getCookie("js_md_dropdown_cookie");
        var model_sel = await get_gradio_component("elem_model_sel");
        // determine whether the cached model is in the choices
        if (model_sel.props.choices.includes(cached_model)) {
            // change dropdown
            gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
            // 连锁修改chatbot的label
            push_data_to_gradio_component({
                label: '当前模型:' + getCookie("js_md_dropdown_cookie"),
                __type__: 'update'
            }, "gpt-chatbot", "obj")
        }
    }

    // clearButton 自动清除按钮
    if (getCookie("js_clearbtn_show_cookie")) {
        // have cookie
        bool_value = getCookie("js_clearbtn_show_cookie")
        bool_value = bool_value == "True";
        searchString = "输入清除键";
        if (bool_value) {
            // make btns appear
            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
            // deal with checkboxes
            let arr_with_clear_btn = update_array(
                await get_data_from_gradio_component('cbs'), "输入清除键", "add"
            )
            push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
        } else {
            // make btns disappear
            let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
            let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
            // deal with checkboxes
            let arr_without_clear_btn = update_array(
                await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
            )
            push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
        }
    }

    // live2d 显示
    if (getCookie("js_live2d_show_cookie")) {
        // have cookie
        searchString = "添加Live2D形象";
        bool_value = getCookie("js_live2d_show_cookie");
        bool_value = bool_value == "True";
        if (bool_value) {
            loadLive2D();
            let arr_with_live2d = update_array(
                await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
            )
            push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
        } else {
            try {
                $('.waifu').hide();
                let arr_without_live2d = update_array(
                    await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
                )
                push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
            } catch (error) {
            }
        }
    } else {
        // do not have cookie
        if (live2d) {
            loadLive2D();
        } else {
        }
    }

    // 主题加载(恢复到上次)
    change_theme("", "")
}

themes/theme.js (new file, +41 lines)

@@ -0,0 +1,41 @@
async function try_load_previous_theme() {
    if (getCookie("js_theme_selection_cookie")) {
        theme_selection = getCookie("js_theme_selection_cookie");
        let css = localStorage.getItem('theme-' + theme_selection);
        if (css) {
            change_theme(theme_selection, css);
        }
    }
}

async function change_theme(theme_selection, css) {
    if (theme_selection.length == 0) {
        try_load_previous_theme();
        return;
    }
    var existingStyles = document.querySelectorAll("body > gradio-app > div > style")
    for (var i = 0; i < existingStyles.length; i++) {
        var style = existingStyles[i];
        style.parentNode.removeChild(style);
    }
    var existingStyles = document.querySelectorAll("style[data-loaded-css]");
    for (var i = 0; i < existingStyles.length; i++) {
        var style = existingStyles[i];
        style.parentNode.removeChild(style);
    }
    setCookie("js_theme_selection_cookie", theme_selection, 3);
    localStorage.setItem('theme-' + theme_selection, css);
    var styleElement = document.createElement('style');
    styleElement.setAttribute('data-loaded-css', 'placeholder');
    styleElement.innerHTML = css;
    document.body.appendChild(styleElement);
}

// // 记录本次的主题切换
// async function change_theme_prepare(theme_selection, secret_css) {
//     setCookie("js_theme_selection_cookie", theme_selection, 3);
// }


@@ -71,29 +71,10 @@ def from_cookie_str(c):
 """
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 第 3 部分
-内嵌的javascript代码
+内嵌的javascript代码(这部分代码会逐渐移动到common.js中)
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """

-js_code_for_css_changing = """(css) => {
-    var existingStyles = document.querySelectorAll("body > gradio-app > div > style")
-    for (var i = 0; i < existingStyles.length; i++) {
-        var style = existingStyles[i];
-        style.parentNode.removeChild(style);
-    }
-    var existingStyles = document.querySelectorAll("style[data-loaded-css]");
-    for (var i = 0; i < existingStyles.length; i++) {
-        var style = existingStyles[i];
-        style.parentNode.removeChild(style);
-    }
-    var styleElement = document.createElement('style');
-    styleElement.setAttribute('data-loaded-css', 'placeholder');
-    styleElement.innerHTML = css;
-    document.body.appendChild(styleElement);
-}
-"""
 js_code_for_toggle_darkmode = """() => {
     if (document.querySelectorAll('.dark').length) {
         setCookie("js_darkmode_cookie", "False", 365);
@@ -114,6 +95,8 @@ js_code_for_persistent_cookie_init = """(web_cookie_cache, cookie) => {
 # 详见 themes/common.js
 js_code_reset = """
 (a,b,c)=>{
+    let stopButton = document.getElementById("elem_stop");
+    stopButton.click();
     return reset_conversation(a,b);
 }
 """


@@ -142,7 +142,13 @@ function initModel(waifuPath, type) {
     if (live2d_settings.waifuEdgeSide[0] == 'left') $(".waifu").css("left", live2d_settings.waifuEdgeSide[1]+'px');
     else if (live2d_settings.waifuEdgeSide[0] == 'right') $(".waifu").css("right", live2d_settings.waifuEdgeSide[1]+'px');

-    window.waifuResize = function() { $(window).width() <= Number(live2d_settings.waifuMinWidth.replace('px','')) ? $(".waifu").hide() : $(".waifu").show(); };
+    window.waifuResize = function() {
+        console.log('resize');
+        if ($('.waifu')[0].style.display === "none") {
+        } else {
+            $(window).width() <= Number(live2d_settings.waifuMinWidth.replace('px','')) ? $(".waifu").hide() : $(".waifu").show();
+        }
+    };
     if (live2d_settings.waifuMinWidth != 'disable') { waifuResize(); $(window).resize(function() {waifuResize()}); }
     try {


@@ -1,3 +1,4 @@
 import importlib
 import time
 import inspect
@@ -10,6 +11,7 @@ import glob
 import logging
 import uuid
 from functools import wraps
+from textwrap import dedent
 from shared_utils.config_loader import get_conf
 from shared_utils.config_loader import set_conf
 from shared_utils.config_loader import set_multi_conf
@@ -90,7 +92,7 @@ def ArgsGeneralWrapper(f):
     """
     def decorated(request: gradio.Request, cookies:dict, max_length:int, llm_model:str,
                   txt:str, txt2:str, top_p:float, temperature:float, chatbot:list,
-                  history:list, system_prompt:str, plugin_advanced_arg:str, *args):
+                  history:list, system_prompt:str, plugin_advanced_arg:dict, *args):
         txt_passon = txt
         if txt == "" and txt2 != "": txt_passon = txt2
         # Introduce a chatbot that carries cookies
@@ -114,9 +116,10 @@ def ArgsGeneralWrapper(f):
             'client_ip': request.client.host,
             'most_recent_uploaded': cookies.get('most_recent_uploaded')
         }
-        plugin_kwargs = {
-            "advanced_arg": plugin_advanced_arg,
-        }
+        if isinstance(plugin_advanced_arg, str):
+            plugin_kwargs = {"advanced_arg": plugin_advanced_arg}
+        else:
+            plugin_kwargs = plugin_advanced_arg
         chatbot_with_cookie = ChatBotWithCookies(cookies)
         chatbot_with_cookie.write_list(chatbot)
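
The hunk above widens the plugin interface while staying backward compatible: older front-end components still submit the advanced-arg textbox as a plain string, whereas the new plugin framework passes a fully-formed kwargs dict. A minimal sketch of the same normalization, assuming only these two input shapes occur (the function name `normalize_plugin_kwargs` and the sample values are hypothetical):

```python
# Sketch of the str-vs-dict normalization performed in ArgsGeneralWrapper.
def normalize_plugin_kwargs(plugin_advanced_arg):
    if isinstance(plugin_advanced_arg, str):
        # Legacy path: wrap the raw textbox content under "advanced_arg"
        return {"advanced_arg": plugin_advanced_arg}
    # New path: the caller already built the kwargs dict
    return plugin_advanced_arg

# Both call styles yield a dict the plugin can consume uniformly:
assert normalize_plugin_kwargs("--fast") == {"advanced_arg": "--fast"}
assert normalize_plugin_kwargs({"advanced_arg": "--fast", "mode": "pdf"}) \
       == {"advanced_arg": "--fast", "mode": "pdf"}
```

Keeping this normalization at the wrapper boundary means individual plugins never need to know which front end invoked them.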
@@ -192,9 +195,20 @@ def trimmed_format_exc():
     replace_path = "."
     return str.replace(current_path, replace_path)

 def trimmed_format_exc_markdown():
     return '\n\n```\n' + trimmed_format_exc() + '```'

+class FriendlyException(Exception):
+    def generate_error_html(self):
+        return dedent(f"""
+            <div class="center-div" style="color: crimson;text-align: center;">
+                {"<br>".join(self.args)}
+            </div>
+            """)
+
 def CatchException(f):
     """
     Decorator: catch any exception raised inside f, wrap it into a generator, and display it in the chat.
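
`FriendlyException`, added above, lets a plugin fail with a curated message instead of a raw stack trace: every positional argument becomes one `<br>`-separated line of crimson, centered HTML. A self-contained usage sketch (the error strings are invented for illustration):

```python
from textwrap import dedent

class FriendlyException(Exception):
    def generate_error_html(self):
        # Join the constructor arguments into <br>-separated lines of HTML
        return dedent(f"""
            <div class="center-div" style="color: crimson;text-align: center;">
                {"<br>".join(self.args)}
            </div>
        """)

try:
    # Hypothetical plugin failure, raised with user-facing strings
    raise FriendlyException("No PDF file found", "Please upload a file first")
except FriendlyException as e:
    print(e.generate_error_html())  # HTML the chat area can render directly
```

The `except FriendlyException` branch added to `CatchException` in the next hunk is what routes this HTML into the chatbot.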
@@ -205,13 +219,18 @@ def CatchException(f):
                   chatbot_with_cookie:ChatBotWithCookies, history:list, *args, **kwargs):
         try:
             yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)
+        except FriendlyException as e:
+            if len(chatbot_with_cookie) == 0:
+                chatbot_with_cookie.clear()
+                chatbot_with_cookie.append(["插件调度异常", None])
+            chatbot_with_cookie[-1] = [chatbot_with_cookie[-1][0], e.generate_error_html()]
+            yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常') # Refresh the UI
         except Exception as e:
-            from toolbox import get_conf
             tb_str = '```\n' + trimmed_format_exc() + '```'
             if len(chatbot_with_cookie) == 0:
                 chatbot_with_cookie.clear()
                 chatbot_with_cookie.append(["插件调度异常", "异常原因"])
-            chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0], f"[Local Message] 插件调用出错: \n\n{tb_str} \n")
+            chatbot_with_cookie[-1] = [chatbot_with_cookie[-1][0], f"[Local Message] 插件调用出错: \n\n{tb_str} \n"]
             yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # Refresh the UI
     return decorated
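
With that class in place, `CatchException` becomes a two-tier handler: a `FriendlyException` renders its prepared HTML, while any other exception falls back to the trimmed traceback. A simplified sketch of the dispatch, reusing the `FriendlyException` sketch above (`catch_exception` is a stand-in name; `update_ui` and the cookie-carrying chatbot are replaced by a plain list):

```python
import traceback

def catch_exception(f):
    # Simplified stand-in for the CatchException decorator in the hunk above
    def decorated(chatbot, *args, **kwargs):
        try:
            yield from f(chatbot, *args, **kwargs)
        except FriendlyException as e:
            # Tier 1: curated, user-facing HTML -- no traceback leaks out
            chatbot.append(["plugin error", e.generate_error_html()])
            yield chatbot
        except Exception:
            # Tier 2: generic failure, surface the full traceback text
            chatbot.append(["plugin error", "[Local Message] plugin failed:\n" + traceback.format_exc()])
            yield chatbot
    return decorated
```

Because `decorated` is itself a generator, callers keep receiving UI updates even when the plugin dies mid-run.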
@@ -562,7 +581,7 @@ def on_report_generated(cookies:dict, files:List[str], chatbot:ChatBotWithCookie
         file_links += (
             f'<br/><a href="file={os.path.abspath(f)}" target="_blank">{f}</a>'
         )
-        chatbot.append(["报告如何远程获取?", f"报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。{file_links}"])
+        chatbot.append(["报告如何远程获取?", f"报告已经添加到右侧“文件下载区”(可能处于折叠状态),请查收。{file_links}"])
         return cookies, report_files, chatbot
@@ -902,15 +921,18 @@ def get_pictures_list(path):
     return file_manifest

-def have_any_recent_upload_image_files(chatbot:ChatBotWithCookies):
+def have_any_recent_upload_image_files(chatbot:ChatBotWithCookies, pop:bool=False):
     _5min = 5 * 60
     if chatbot is None:
         return False, None # chatbot is None
-    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    if pop:
+        most_recent_uploaded = chatbot._cookies.pop("most_recent_uploaded", None)
+    else:
+        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    # most_recent_uploaded holds the path of the most recently uploaded images
     if not most_recent_uploaded:
         return False, None # most_recent_uploaded is None
     if time.time() - most_recent_uploaded["time"] < _5min:
-        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
         path = most_recent_uploaded["path"]
         file_manifest = get_pictures_list(path)
         if len(file_manifest) == 0:
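
The new `pop` flag lets a caller consume the `most_recent_uploaded` cookie, so a freshly uploaded image batch is claimed exactly once rather than silently re-attached to every request within the five-minute window. A rough illustration of get-versus-pop over a plain dict (the helper name `claim_recent_upload` and the path are hypothetical):

```python
import time

def claim_recent_upload(cookies, pop=False):
    # pop=True consumes the record; pop=False only peeks at it
    if pop:
        record = cookies.pop("most_recent_uploaded", None)
    else:
        record = cookies.get("most_recent_uploaded", None)
    if not record or time.time() - record["time"] >= 5 * 60:
        return None  # nothing uploaded within the last five minutes
    return record["path"]

cookies = {"most_recent_uploaded": {"path": "private_upload/demo", "time": time.time()}}
print(claim_recent_upload(cookies, pop=True))  # -> private_upload/demo
print(claim_recent_upload(cookies, pop=True))  # -> None: already consumed
```

Consuming the record on first use keeps a stale image from leaking into an unrelated follow-up question.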

View File

@@ -1,5 +1,5 @@
 {
-    "version": 3.75,
+    "version": 3.81,
     "show_feature": true,
-    "new_feature": "添加TTS语音输出EdgeTTS和SoVits语音克隆 <-> Doc2x PDF翻译 <-> 添加回溯对话按钮"
+    "new_feature": "支持更复杂的插件框架 <-> 上传文件时显示进度条 <-> 添加TTS语音输出EdgeTTS和SoVits语音克隆 <-> Doc2x PDF翻译 <-> 添加回溯对话按钮"
 }