Compare commits

..

57 commits

Author SHA1 Message Commit date
binary-husky
7894e2f02d renew pip url 2024-02-25 22:15:24 +08:00
binary-husky
9e9bd7aa27 Merge branch 'master' into huggingfacelocal 2024-02-25 22:14:50 +08:00
binary-husky
6b784035fa Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-02-25 21:13:56 +08:00
binary-husky
8bb3d84912 fix zip chinese file name error 2024-02-25 21:13:41 +08:00
binary-husky
a0193cf227 edit dep url 2024-02-23 13:28:49 +08:00
binary-husky
b72289bfb0 Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration 2024-02-21 14:20:10 +08:00
Menghuan1918
bdfe3862eb Add partial translations (#1566) 2024-02-21 14:14:06 +08:00
binary-husky
dae180b9ea update spark v3.5, fix glm parallel problem 2024-02-18 14:08:35 +08:00
binary-husky
e359fff040 Fix response message bug in bridge_qianfan.py, bridge_qwen.py, and bridge_skylark2.py 2024-02-15 00:02:24 +08:00
binary-husky
2e9b4a5770 Merge Frontier, Update to Version 3.72 (#1553)
* Zhipu sdk update: adapt to the latest Zhipu SDK, support GLM4v (#1502)

* Adapt google gemini: optimize to extract files from user input

* Adapt to the latest Zhipu SDK, support glm-4v

* requirements.txt fix

* pending history check

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update "生成多种Mermaid图表" plugin: Separate out the file reading function (#1520)

* Update crazy_functional.py with new functionality deal with PDF

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update with read md file and update PROMPTS

* Return the PROMPTS as the test found that the initial version worked best

* Update Mermaid chart generation function

* version 3.71

* Resolve issue #1510

* Remove unnecessary text from sys_prompt in 解析历史输入 function

* Remove sys_prompt message in 解析历史输入 function

* Update bridge_all.py: supports gpt-4-turbo-preview (#1517)

* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Update config.py: supports gpt-4-turbo-preview (#1516)

* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Refactor 解析历史输入 function to handle file input

* Update Mermaid chart generation functionality

* rename files and functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Integrate the mathpix OCR feature (#1468)

* Update Latex输出PDF结果.py

Use mathpix to translate a PDF into Chinese and recompile the PDF

* Update config.py

add mathpix appid & appkey

* Add 'PDF翻译中文并重新编译PDF' feature to plugins.

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* fix zhipuai

* check picture

* remove glm-4 due to bug

* Update config

* Check MATHPIX_APPID

* Remove unnecessary code and update function_plugins dictionary

* capture non-standard token overflow

* bug fix #1524

* change mermaid style

* Support mermaid zoom in/out and reset, with mouse wheel and drag (#1530)

* Support mermaid zoom in/out and reset, with mouse wheel and drag

* Fine-tuning inconclusive; staging for now

* update

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* ver 3.72

* change live2d

* save the status of `clear btn` in cookie

* Persist front-end selections

* js ui bug fix

* reset btn bug fix

* update live2d tips

* fix missing get_token_num method

* fix live2d toggle switch

* fix persistent custom btn with cookie

* fix zhipuai feedback with core functionality

* Refactor button update and clean up functions

---------

Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
2024-02-14 18:35:09 +08:00
binary-husky
e0c5859cf9 update Column min_width parameter 2024-02-12 23:37:31 +08:00
binary-husky
b9b1e12dc9 fix missing get_token_num method 2024-02-12 15:58:55 +08:00
binary-husky
8814026ec3 fix gradio-client version (#1548) 2024-02-09 13:25:01 +08:00
binary-husky
c534814363 Merge branch 'master' into huggingfacelocal 2024-01-23 15:52:51 +08:00
binary-husky
8a525a4560 ADD_WAIFU 2023-12-27 00:04:53 +08:00
binary-husky
22f58d0953 Toggle dark mode in config.py 2023-12-26 23:58:53 +08:00
binary-husky
59b4345945 Merge branch 'master' into huggingfacelocal 2023-12-26 23:57:23 +08:00
binary-husky
72ba7e9738 Merge branch 'master' into huggingfacelocal 2023-11-29 00:35:20 +08:00
binary-husky
d7f4b07fe4 Merge branch 'master' into huggingfacelocal 2023-11-20 01:17:04 +08:00
binary-husky
df27843e51 Merge branch 'master' into huggingfacelocal 2023-10-06 11:59:18 +08:00
binary-husky
e2fefec2e3 Reduce the Latex container size 2023-10-06 11:47:27 +08:00
binary-husky
2bf71bd1a8 Chuanhu-Small-and-Beautiful 2023-09-15 17:30:31 +08:00
binary-husky
6a56fb7477 Merge branch 'master' into huggingfacelocal 2023-09-15 17:19:59 +08:00
binary-husky
a2e7ea748c up 2023-09-09 19:02:51 +08:00
binary-husky
8153c1b49d Merge branch 'master' into huggingfacelocal 2023-09-09 18:54:53 +08:00
binary-husky
2552126744 up 2023-08-28 01:43:45 +08:00
binary-husky
2475163337 Merge branch 'master' into huggingfacelocal 2023-08-28 01:39:20 +08:00
binary-husky
a3bdb69e30 sync 2023-08-16 13:28:44 +08:00
binary-husky
81874b380f qw 2023-08-16 13:01:41 +08:00
binary-husky
96c1852abc Merge branch 'master' into huggingface 2023-06-30 12:09:25 +08:00
binary-husky
cd145c0794 1 2023-06-29 15:04:03 +08:00
binary-husky
7a4d4ad956 Merge branch 'huggingface' of github.com:binary-husky/chatgpt_academic into huggingface 2023-06-29 12:54:24 +08:00
binary-husky
9f9848c6e9 again 2023-06-29 12:54:19 +08:00
binary-husky
94425c49fd again 2023-05-28 21:34:50 +08:00
binary-husky
e874a16050 try again 2023-05-28 21:33:28 +08:00
binary-husky
c28388c5fe load version 2023-05-28 21:32:10 +08:00
binary-husky
b4a56d391b Merge branch 'huggingface' of github.com:binary-husky/chatgpt_academic into huggingface 2023-05-28 21:30:34 +08:00
binary-husky
7075092f86 fix app 2023-05-28 21:30:29 +08:00
binary-husky
1086ff8092 Merge branch 'huggingface' of github.com:binary-husky/chatgpt_academic into huggingface 2023-05-28 21:27:31 +08:00
binary-husky
3a22446b47 try4 2023-05-28 21:27:25 +08:00
binary-husky
7842cf03cc Merge branch 'master' into huggingface 2023-05-28 21:27:20 +08:00
binary-husky
54f55c32f2 213 2023-05-28 21:25:45 +08:00
binary-husky
94318ff0a2 try3 2023-05-28 21:24:46 +08:00
binary-husky
5be6b83762 try2 2023-05-28 21:24:02 +08:00
binary-husky
6f18d1716e Merge branch 'master' into huggingface 2023-05-28 21:21:12 +08:00
binary-husky
90944bd744 up 2023-05-25 15:04:53 +08:00
binary-husky
752937cb70 Merge branch 'master' into huggingface 2023-05-25 15:01:30 +08:00
binary-husky
c584cbac5b fix ver 2023-05-19 14:08:47 +08:00
binary-husky
309d12b404 Merge branch 'master' into huggingface 2023-05-19 14:05:23 +08:00
binary-husky
52ea0acd61 Merge branch 'master' into huggingface 2023-05-06 23:06:53 +08:00
binary-husky
9f5e3e0fd5 Merge branch 'master' into huggingface 2023-05-05 18:24:36 +08:00
binary-husky
315e78e5d9 Merge branch 'master' into huggingface 2023-04-29 03:53:32 +08:00
binary-husky
b6b4ba684a Merge branch 'master' into huggingface 2023-04-24 18:32:56 +08:00
binary-husky
2281a5ca7f Update prompts 2023-04-24 12:55:53 +08:00
binary-husky
49558686f2 Merge branch 'master' into huggingface 2023-04-24 12:30:59 +08:00
Your Name
b050ccedb5 Merge branch 'master' into huggingface 2023-04-21 18:48:00 +08:00
Your Name
ae56cab6f4 huggingface 2023-04-19 18:07:32 +08:00
56 changed files, with 1,458 additions and 10,040 deletions

View file

@@ -1,8 +1,20 @@
-> [!IMPORTANT]
-> 2024.1.18: 更新3.70版本,支持Mermaid绘图库让大模型绘制脑图
-> 2024.1.17: 恭迎GLM4,全力支持Qwen、GLM、DeepseekCoder等国内中文大语言基座模型
-> 2024.1.17: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
-> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
+---
+title: GPT-Academic
+emoji: 😻
+colorFrom: blue
+colorTo: blue
+sdk: gradio
+sdk_version: 3.32.0
+app_file: app.py
+pinned: false
+---
+# ChatGPT 学术优化
+> **Note**
+>
+> 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
+>
+> 2023.12.26: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
 <br>

View file

@@ -14,23 +14,25 @@ help_menu_description = \
 </br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"""

 def main():
+    import subprocess, sys
+    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'https://public.agent-matrix.com/publish/gradio-3.32.8-py3-none-any.whl'])
     import gradio as gr
-    if gr.__version__ not in ['3.32.6', '3.32.7', '3.32.8']:
+    if gr.__version__ not in ['3.32.8']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
     # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
-    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
     DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
     INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     from check_proxy import get_current_version
-    from themes.theme import adjust_theme, advanced_css, theme_declaration
-    from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
+    from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
+    from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
     from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
     title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
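The two added startup lines pin the app to a forked Gradio wheel, and the tightened version check fails fast if anything else gets imported, a useful guard on Hugging Face Spaces where dependencies drift. A minimal standalone sketch of the same pin-then-verify pattern (the wheel URL is taken from the diff; the surrounding names are illustrative):

```python
# Sketch only: install a known-good wheel, then verify before trusting the import.
import subprocess, sys

WHEEL_URL = "https://public.agent-matrix.com/publish/gradio-3.32.8-py3-none-any.whl"
ACCEPTED_VERSIONS = {"3.32.8"}  # the fork this codebase is tested against

subprocess.check_call([sys.executable, "-m", "pip", "install", WHEEL_URL])

import gradio as gr  # import only after the pinned wheel is in place
if gr.__version__ not in ACCEPTED_VERSIONS:
    raise ModuleNotFoundError("Unexpected Gradio version; run `pip install -r requirements.txt`.")
```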
@@ -65,7 +67,7 @@ def main():
     proxy_info = check_proxy(proxies)

     gr_L1 = lambda: gr.Row().style()
-    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id)
+    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id, min_width=400)
     if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
         gr_L2 = lambda scale, elem_id: gr.Row()
@@ -76,7 +78,8 @@ def main():
     predefined_btns = {}
     with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
         gr.HTML(title_html)
-        secret_css, dark_mode, persistent_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
+        gr.HTML('''<center><a href="https://huggingface.co/spaces/qingxu98/gpt-academic?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>请您打开此页面后务必点击上方的“复制空间”(Duplicate Space)按钮<font color="#FF00FF">使用时,先在输入框填入API-KEY,然后回车。</font><br/>切忌在“复制空间”(Duplicate Space)之前填入API_KEY或进行提问,否则您的API_KEY将极可能被空间所有者攫取<br/>支持任意数量的OpenAI的密钥和API2D的密钥共存,例如输入"OpenAI密钥1,API2D密钥2",然后提交,即可同时使用两种模型接口。</center>''')
+        secret_css, dark_mode, py_pickle_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
         cookies = gr.State(load_chat_cookies())
         with gr_L1():
             with gr_L2(scale=2, elem_id="gpt-chat"):
@@ -86,7 +89,7 @@ def main():
             with gr_L2(scale=1, elem_id="gpt-panel"):
                 with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
                     with gr.Row():
-                        txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
+                        txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持多个OpenAI密钥共存。").style(container=False)
                     with gr.Row():
                         submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
                     with gr.Row():
@@ -98,6 +101,7 @@ def main():
                         audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
                     with gr.Row():
                         status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
+
                 with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
                     with gr.Row():
                         for k in range(NUM_CUSTOM_BASIC_BTN):
@@ -142,7 +146,6 @@ def main():
with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up: with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload") file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"): with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
with gr.Row(): with gr.Row():
with gr.Tab("上传文件", elem_id="interact-panel"): with gr.Tab("上传文件", elem_id="interact-panel"):
@@ -158,10 +161,11 @@ def main():
with gr.Tab("界面外观", elem_id="interact-panel"): with gr.Tab("界面外观", elem_id="interact-panel"):
theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False) theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False) opt = ["自定义菜单"]
checkboxes_2 = gr.CheckboxGroup(["自定义菜单"], value=[]
value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False) if ADD_WAIFU: opt += ["添加Live2D形象"]; value += ["添加Live2D形象"]
checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm") dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode) dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
with gr.Tab("帮助", elem_id="interact-panel"): with gr.Tab("帮助", elem_id="interact-panel"):
@@ -178,7 +182,7 @@ def main():
                     submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
                     resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                     stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
-                    clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
+                    clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")
         with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
@@ -192,10 +196,12 @@ def main():
                     basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
                 with gr.Column(scale=1, min_width=70):
                     basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
-                    basic_fn_load = gr.Button("加载已保存", variant="primary"); basic_fn_load.style(size="sm")
-        def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix):
+                    basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
+        def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix, clean_up=False):
             ret = {}
+            # 读取之前的自定义按钮
             customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
+            # 更新新的自定义按钮
             customize_fn_overwrite_.update({
                 basic_btn_dropdown_:
                     {
@@ -205,20 +211,34 @@ def main():
                     }
                 }
             )
-            cookies_.update(customize_fn_overwrite_)
+            if clean_up:
+                customize_fn_overwrite_ = {}
+            cookies_.update(customize_fn_overwrite_) # 更新cookie
+            visible = (not clean_up) and (basic_fn_title != "")
             if basic_btn_dropdown_ in customize_btns:
-                ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
+                # 是自定义按钮,不是预定义按钮
+                ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
             else:
-                ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
+                # 是预定义按钮
+                ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
             ret.update({cookies: cookies_})
             try: persistent_cookie_ = from_cookie_str(persistent_cookie_)   # persistent cookie to dict
             except: persistent_cookie_ = {}
             persistent_cookie_["custom_bnt"] = customize_fn_overwrite_     # dict update new value
             persistent_cookie_ = to_cookie_str(persistent_cookie_)         # persistent cookie to dict
-            ret.update({persistent_cookie: persistent_cookie_})            # write persistent cookie
+            ret.update({py_pickle_cookie: persistent_cookie_})             # write persistent cookie
             return ret

-        def reflesh_btn(persistent_cookie_, cookies_): # update btn
+        h = basic_fn_confirm.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
+                                   [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
+        h.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+        # clean up btn
+        h2 = basic_fn_clean.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
+                                   [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
+        h2.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+        def persistent_cookie_reload(persistent_cookie_, cookies_):
             ret = {}
             for k in customize_btns:
                 ret.update({customize_btns[k]: gr.update(visible=False, value="")})
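Both handlers above round-trip the custom-button dict through a single cookie string via from_cookie_str / to_cookie_str, then push that string to the browser with the setCookie JS call. A hypothetical sketch of such a round-trip, assuming a JSON-plus-Base64 encoding (the real helpers in themes.theme may encode differently; this only illustrates how a dict survives inside one cookie value):

```python
# Hypothetical stand-ins for themes.theme.to_cookie_str / from_cookie_str.
import base64, json

def to_cookie_str(d: dict) -> str:
    # Serialize the dict and make it cookie-safe (single ASCII token).
    return base64.b64encode(json.dumps(d).encode("utf-8")).decode("ascii")

def from_cookie_str(s: str) -> dict:
    # Reverse the encoding back into a Python dict.
    return json.loads(base64.b64decode(s.encode("ascii")).decode("utf-8"))

cookie_value = to_cookie_str({"custom_bnt": {"自定义按钮1": {"Title": "翻译"}}})
restored = from_cookie_str(cookie_value)
assert restored["custom_bnt"]["自定义按钮1"]["Title"] == "翻译"
```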
@@ -236,25 +256,16 @@ def main():
             else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
             return ret

-        basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
-        h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
-                                   [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
-        # save persistent cookie
-        h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")

         # 功能区显示开关与功能区的互动
         def fn_area_visibility(a):
             ret = {}
-            ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
-            ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))})
             ret.update({area_input_primary: gr.update(visible=("浮动输入区" not in a))})
             ret.update({area_input_secondary: gr.update(visible=("浮动输入区" in a))})
-            ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
-            ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
             ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "浮动输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, plugin_advanced_arg] )
+        checkboxes.select(None, [checkboxes], None, _js=js_code_show_or_hide)

         # 功能区显示开关与功能区的互动
         def fn_area_visibility_2(a):
@@ -262,6 +273,7 @@ def main():
             ret.update({area_customize: gr.update(visible=("自定义菜单" in a))})
             return ret
         checkboxes_2.select(fn_area_visibility_2, [checkboxes_2], [area_customize] )
+        checkboxes_2.select(None, [checkboxes_2], None, _js=js_code_show_or_hide_group2)

         # 整理反复出现的控件句柄组合
         input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
@@ -272,15 +284,17 @@ def main():
         cancel_handles.append(txt2.submit(**predict_args))
         cancel_handles.append(submitBtn.click(**predict_args))
         cancel_handles.append(submitBtn2.click(**predict_args))
-        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        clearBtn.click(lambda: ("",""), None, [txt, txt2])
-        clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset)   # 先在前端快速清除chatbot&status
+        resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset)  # 先在前端快速清除chatbot&status
+        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+        clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
         if AUTO_CLEAR_TXT:
-            submitBtn.click(lambda: ("",""), None, [txt, txt2])
-            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
-            txt.submit(lambda: ("",""), None, [txt, txt2])
-            txt2.submit(lambda: ("",""), None, [txt, txt2])
+            submitBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+            submitBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
+            txt.submit(None, None, [txt, txt2], _js=js_code_clear)
+            txt2.submit(None, None, [txt, txt2], _js=js_code_clear)
         # 基础功能区的回调函数注册
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
@@ -360,10 +374,10 @@ def main():
         audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])

-    demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-    darkmode_js = js_code_for_darkmode_init
-    demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
-    demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js)  # 配置暗色主题或亮色主题
+    demo.load(init_cookie, inputs=[cookies], outputs=[cookies])
+    demo.load(persistent_cookie_reload, inputs = [py_pickle_cookie, cookies],
+              outputs = [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
+    demo.load(None, inputs=[dark_mode], outputs=None, _js="""(dark_mode)=>{apply_cookie_for_checkbox(dark_mode);}""")  # 配置暗色主题或亮色主题
     demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
     # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
@@ -382,16 +396,8 @@ def main():
     threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start()    # 预热tiktoken模块
     run_delayed_tasks()
-    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
-        quiet=True,
-        server_name="0.0.0.0",
-        ssl_keyfile=None if SSL_KEYFILE == "" else SSL_KEYFILE,
-        ssl_certfile=None if SSL_CERTFILE == "" else SSL_CERTFILE,
-        ssl_verify=False,
-        server_port=PORT,
-        favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
-        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
-        blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
+    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])

     # 如果需要在二级路径下运行
     # CUSTOM_PATH = get_conf('CUSTOM_PATH')

View file

@@ -2,8 +2,8 @@
 以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
 读取优先级:环境变量 > config_private.py > config.py
 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
 All the following configurations also support using environment variables to override,
 and the environment variable configuration format can be seen in docker-compose.yml.
 Configuration reading priority: environment variable > config_private.py > config.py
 """
@@ -11,6 +11,10 @@
 API_KEY = "此处填API密钥"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
+
+# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno)的,请向下翻,找 API_ORG 设置项
+API_KEY = "此处填API密钥"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
+
 # [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改;如果使用本地或无地域限制的大模型时,此处也不需要修改
 USE_PROXY = False
 if USE_PROXY:
@@ -33,7 +37,7 @@ else:
 # ------------------------------------ 以下配置可以优化体验, 但大部分场合下并不需要修改 ------------------------------------
 # 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人)
 # 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
 # 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions"}
 API_URL_REDIRECT = {}
@@ -45,7 +49,7 @@ DEFAULT_WORKER_NUM = 3
 # 色彩主题, 可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
 # 更多主题, 请查阅Gradio主题商店: https://huggingface.co/spaces/gradio/theme-gallery 可选 ["Gstaff/Xkcd", "NoCrypt/Miku", ...]
-THEME = "Default"
+THEME = "Chuanhu-Small-and-Beautiful"
 AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
@@ -66,7 +70,7 @@ LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下
 # 暗色模式 / 亮色模式
-DARK_MODE = True
+DARK_MODE = False
 # 发送请求到OpenAI后,等待多久判定为超时
@@ -80,6 +84,9 @@ WEB_PORT = -1
 # 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
 MAX_RETRY = 2

+# OpenAI模型选择是(gpt4现在只对申请成功的人开放)
+LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm"
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo", "spark", "azure-gpt-3.5"]
+
 # 插件分类默认选项
 DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
@@ -89,11 +96,11 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓ LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview", AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4", "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-3-turbo",
"gemini-pro", "chatglm3", "claude-2", "zhipuai"] "gemini-pro", "chatglm3", "claude-2"]
# P.S. 其他可用的模型还包括 [ # P.S. 其他可用的模型还包括 [
# "moss", "qwen-turbo", "qwen-plus", "qwen-max" # "moss", "qwen-turbo", "qwen-plus", "qwen-max"
# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613", # "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k', # "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama" # "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
# ] # ]
@@ -136,7 +143,7 @@ AUTO_CLEAR_TXT = False
 # 加一个live2d装饰
-ADD_WAIFU = False
+ADD_WAIFU = True
 # 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
@@ -158,7 +165,7 @@ API_ORG = ""
 # 如果需要使用Slack Claude,使用教程详情见 request_llms/README.md
 SLACK_CLAUDE_BOT_ID = ''
 SLACK_CLAUDE_USER_TOKEN = ''
@@ -195,7 +202,7 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
 # 接入智谱大模型
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
+ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
 # # 火山引擎YUNQUE大模型
@@ -208,6 +215,11 @@ ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
 ANTHROPIC_API_KEY = ""

+# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
+MATHPIX_APPID = ""
+MATHPIX_APPKEY = ""
+
 # 自定义API KEY格式
 CUSTOM_API_KEY_PATTERN = ""
@@ -224,8 +236,8 @@ HUGGINGFACE_ACCESS_TOKEN = "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"
 # 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
 GROBID_URLS = [
     "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
     "https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
     "https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
 ]
@@ -246,7 +258,7 @@ PATH_LOGGING = "gpt_log"
 # 除了连接OpenAI之外,还有哪些场合允许使用代理,请勿修改
 WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
                      "Warmup_Modules", "Nougat_Download", "AutoGen"]
@@ -297,9 +309,8 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── BAIDU_CLOUD_API_KEY
 │   └── BAIDU_CLOUD_SECRET_KEY

-├── "zhipuai" 智谱AI大模型(chatglm_turbo)
-│   ├── ZHIPUAI_API_KEY
-│   └── ZHIPUAI_MODEL
+├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
+│   └── ZHIPUAI_API_KEY

 ├── "qwen-turbo" 等通义千问大模型
 │   └── DASHSCOPE_API_KEY
@@ -311,7 +322,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 ├── NEWBING_STYLE
 └── NEWBING_COOKIES

 本地大模型示意图
 ├── "chatglm3"
@@ -351,6 +362,9 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   └── ALIYUN_SECRET

 └── PDF文档精准解析
-    └── GROBID_URLS
+    ├── GROBID_URLS
+    ├── MATHPIX_APPID
+    └── MATHPIX_APPKEY

 """

View file

@@ -70,11 +70,11 @@ def get_crazy_functions():
"Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数", "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
"Function": HotReload(清除缓存), "Function": HotReload(清除缓存),
}, },
"生成多种Mermaid图表(从当前对话或文件(.pdf/.md)中生产图表)": { "生成多种Mermaid图表(从当前对话或路径(.pdf/.md/.docx)中生产图表)": {
"Group": "对话", "Group": "对话",
"Color": "stop", "Color": "stop",
"AsButton": False, "AsButton": False,
"Info" : "基于当前对话或PDF生成多种Mermaid图表,图表类型由模型判断", "Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
"Function": HotReload(生成多种Mermaid图表), "Function": HotReload(生成多种Mermaid图表),
"AdvancedArgs": True, "AdvancedArgs": True,
"ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图", "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
@@ -532,8 +532,9 @@ def get_crazy_functions():
print("Load function plugin failed") print("Load function plugin failed")
try: try:
from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比 from crazy_functions.Latex输出PDF import Latex英文纠错加PDF对比
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF from crazy_functions.Latex输出PDF import Latex翻译中文并重新编译PDF
from crazy_functions.Latex输出PDF import PDF翻译中文并重新编译PDF
function_plugins.update( function_plugins.update(
{ {
@@ -550,9 +551,9 @@ def get_crazy_functions():
"Color": "stop", "Color": "stop",
"AsButton": False, "AsButton": False,
"AdvancedArgs": True, "AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ', r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695", "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
"Function": HotReload(Latex翻译中文并重新编译PDF), "Function": HotReload(Latex翻译中文并重新编译PDF),
}, },
@@ -561,11 +562,22 @@ def get_crazy_functions():
"Color": "stop", "Color": "stop",
"AsButton": False, "AsButton": False,
"AdvancedArgs": True, "AdvancedArgs": True,
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ', r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "本地Latex论文精细翻译 | 输入参数是路径", "Info": "本地Latex论文精细翻译 | 输入参数是路径",
"Function": HotReload(Latex翻译中文并重新编译PDF), "Function": HotReload(Latex翻译中文并重新编译PDF),
},
"PDF翻译中文并重新编译PDF上传PDF[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True,
"ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
r'If the term "agent" is used in this section, it should be translated to "智能体". ',
"Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
"Function": HotReload(PDF翻译中文并重新编译PDF)
} }
} }
) )
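Each entry registered above is a plain dict consumed by the UI layer: Group sets the category filter, AsButton decides whether it gets a dedicated button, AdvancedArgs whether the free-text 高级参数区 is forwarded, and Function holds the HotReload-wrapped generator that does the work. A hypothetical dispatch loop (not project code; only the argument order, which matches the plugin signatures visible later in this diff, is taken from the source):

```python
# Hypothetical driver, for illustration only. The real dispatch lives in the
# project's UI wiring; this sketch just shows how an entry's fields are used.
def run_plugin(function_plugins, name, txt, llm_kwargs, chatbot, history,
               system_prompt, user_request, advanced_arg=""):
    entry = function_plugins[name]
    # Only plugins that declare AdvancedArgs receive the advanced-argument box.
    plugin_kwargs = {"advanced_arg": advanced_arg} if entry.get("AdvancedArgs") else {}
    # "Function" is a HotReload-wrapped generator; drive it the way the UI does.
    yield from entry["Function"](txt, llm_kwargs, plugin_kwargs, chatbot,
                                 history, system_prompt, user_request)
```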

View file

@@ -1,232 +0,0 @@
from collections.abc import Callable, Iterable, Mapping
from typing import Any
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc
from toolbox import promote_file_to_downloadzone, get_log_folder
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping, try_install_deps
from multiprocessing import Process, Pipe
import os
import time
templete = """
```python
import ... # Put dependencies here, e.g. import numpy as np
class TerminalFunction(object): # Do not change the name of the class, The name of the class must be `TerminalFunction`
def run(self, path): # The name of the function must be `run`, it takes only a positional argument.
# rewrite the function you have just written here
...
return generated_file_path
```
"""
def inspect_dependency(chatbot, history):
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return True
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return matches[0].strip('python') # code block
for match in matches:
if 'class TerminalFunction' in match:
return match.strip('python') # code block
raise RuntimeError("GPT is not generating proper code.")
def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
# 输入
prompt_compose = [
f'Your job:\n'
f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
f"2. You should write this function to perform following task: " + txt + "\n",
f"3. Wrap the output python function with markdown codeblock."
]
i_say = "".join(prompt_compose)
demo = []
# 第一步
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
sys_prompt= r"You are a programmer."
)
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# 第二步
prompt_compose = [
"If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
templete
]
i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt= r"You are a programmer."
)
code_to_return = gpt_say
history.extend([i_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# # 第三步
# i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
# i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=inputs_show_user,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
# # # 第三步
# i_say = "Show me how to use `pip` to install packages to run the code above. "
# i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
# installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
# inputs=i_say, inputs_show_user=i_say,
# llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
# sys_prompt= r"You are a programmer."
# )
installation_advance = ""
return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
def make_module(code):
module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
with open(f'{get_log_folder()}/{module_file}.py', 'w', encoding='utf8') as f:
f.write(code)
def get_class_name(class_string):
import re
# Use regex to extract the class name
class_name = re.search(r'class (\w+)\(', class_string).group(1)
return class_name
class_name = get_class_name(code)
return f"{get_log_folder().replace('/', '.')}.{module_file}->{class_name}"
def init_module_instance(module):
import importlib
module_, class_ = module.split('->')
init_f = getattr(importlib.import_module(module_), class_)
return init_f()
def for_immediate_show_off_when_possible(file_type, fp, chatbot):
if file_type in ['png', 'jpg']:
image_path = os.path.abspath(fp)
chatbot.append(['这是一张图片, 展示如下:',
f'本地文件地址: <br/>`{image_path}`<br/>'+
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
])
return chatbot
def subprocess_worker(instance, file_path, return_dict):
return_dict['result'] = instance.run(file_path)
def have_any_recent_upload_files(chatbot):
_5min = 5 * 60
if not chatbot: return False # chatbot is None
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
if not most_recent_uploaded: return False # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
else: return False # most_recent_uploaded is too old
def get_recent_file_prompt_support(chatbot):
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded['path']
return path
@CatchException
def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
raise NotImplementedError
# 清空历史,以免输入溢出
history = []; clear_file_downloadzone(chatbot)
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if have_any_recent_upload_files(chatbot):
file_path = get_recent_file_prompt_support(chatbot)
else:
chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 读取文件
if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
file_path = recently_uploaded_files[-1]
file_type = file_path.split('.')[-1]
# 粗心检查
if is_the_upload_folder(txt):
chatbot.append([
"...",
f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始干正事
for j in range(5): # 最多重试5次
try:
code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
code = get_code_block(code)
res = make_module(code)
instance = init_module_instance(res)
break
except Exception as e:
chatbot.append([f"{j}次代码生成尝试,失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 代码生成结束, 开始执行
try:
import multiprocessing
manager = multiprocessing.Manager()
return_dict = manager.dict()
p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
# only has 10 seconds to run
p.start(); p.join(timeout=10)
if p.is_alive(): p.terminate(); p.join()
p.close()
res = return_dict['result']
# res = instance.run(file_path)
except Exception as e:
chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
# chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 顺利完成,收尾
res = str(res)
if os.path.exists(res):
chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
else:
chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
"""
测试:
裁剪图像,保留下半部分
交换图像的蓝色通道和红色通道
将图像转为灰度图像
将csv文件转excel表格
"""

View file

@@ -0,0 +1,484 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, json, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
"""
Generate prompts and system prompts based on the mode for proofreading or translating.
Args:
- pfg: Proofreader or Translator instance.
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
Returns:
- inputs_array: A list of strings containing prompts for users to respond to.
- sys_prompt_array: A list of strings containing prompts for system prompts.
"""
n_split = len(pfg.sp_file_contents)
if mode == 'proofread_en':
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
r"Answer me only with the revised text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
elif mode == 'translate_zh':
inputs_array = [
r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
r"Answer me only with the translated text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
else:
assert False, "未知指令"
return inputs_array, sys_prompt_array
def desend_to_extracted_folder_if_exist(project_folder):
"""
Descend into the extracted folder if it exists, otherwise return the original folder.
Args:
- project_folder: A string specifying the folder path.
Returns:
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
"""
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
if len(maybe_dir) == 0: return project_folder
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
return project_folder
def move_project(project_folder, arxiv_id=None):
"""
Create a new work folder and copy the project folder to it.
Args:
- project_folder: A string specifying the folder path of the project.
Returns:
- A string specifying the path to the new work folder.
"""
import shutil, time
time.sleep(2) # avoid time string conflict
if arxiv_id is not None:
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
else:
new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
try:
shutil.rmtree(new_workfolder)
except:
pass
# align subfolder if there is a folder wrapper
items = glob.glob(pj(project_folder, '*'))
items = [item for item in items if os.path.basename(item) != '__MACOSX']
if len(glob.glob(pj(project_folder, '*.tex'))) == 0 and len(items) == 1:
if os.path.isdir(items[0]): project_folder = items[0]
shutil.copytree(src=project_folder, dst=new_workfolder)
return new_workfolder
def arxiv_download(chatbot, history, txt, allow_cache=True):
def check_cached_translation_pdf(arxiv_id):
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
if not os.path.exists(translation_dir):
os.makedirs(translation_dir)
target_file = pj(translation_dir, 'translate_zh.pdf')
if os.path.exists(target_file):
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
target_file_compare = pj(translation_dir, 'comparison.pdf')
if os.path.exists(target_file_compare):
promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
return target_file
return False
def is_float(s):
try:
float(s)
return True
except ValueError:
return False
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt.strip()
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt[:10]
if not txt.startswith('https://arxiv.org'):
return txt, None # 是本地文件,跳过下载
# <-------------- inspect format ------------->
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
url_ = txt # https://arxiv.org/abs/1707.06690
if not txt.startswith('https://arxiv.org/abs/'):
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
return msg, None
# <-------------- set format ------------->
arxiv_id = url_.split('/abs/')[-1]
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
url_tar = url_.replace('/abs/', '/e-print/')
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
dst = pj(translation_dir, arxiv_id + '.tar')
if os.path.exists(dst):
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
else:
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
with open(dst, 'wb+') as f:
f.write(r.content)
# <-------------- extract file ------------->
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
from toolbox import extract_archive
extract_archive(file_path=dst, dest_dir=extract_dst)
return extract_dst, arxiv_id
def pdf2tex_project(pdf_file_path):
# Mathpix API credentials
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
headers = {"app_id": app_id, "app_key": app_key}
# Step 1: Send PDF file for processing
options = {
"conversion_formats": {"tex.zip": True},
"math_inline_delimiters": ["$", "$"],
"rm_spaces": True
}
response = requests.post(url="https://api.mathpix.com/v3/pdf",
headers=headers,
data={"options_json": json.dumps(options)},
files={"file": open(pdf_file_path, "rb")})
if response.ok:
pdf_id = response.json()["pdf_id"]
print(f"PDF processing initiated. PDF ID: {pdf_id}")
# Step 2: Check processing status
while True:
conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
conversion_data = conversion_response.json()
if conversion_data["status"] == "completed":
print("PDF processing completed.")
break
elif conversion_data["status"] == "error":
print("Error occurred during processing.")
else:
print(f"Processing status: {conversion_data['status']}")
time.sleep(5) # wait for a few seconds before checking again
# Step 3: Save results to local files
output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
if not os.path.exists(output_dir):
os.makedirs(output_dir)
url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
response = requests.get(url, headers=headers)
file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
output_name = f"{file_name_wo_dot}.tex.zip"
output_path = os.path.join(output_dir, output_name)
with open(output_path, "wb") as output_file:
output_file.write(response.content)
print(f"tex.zip file saved at: {output_path}")
import zipfile
unzip_dir = os.path.join(output_dir, file_name_wo_dot)
with zipfile.ZipFile(output_path, 'r') as zip_ref:
zip_ref.extractall(unzip_dir)
return unzip_dir
else:
print(f"Error sending PDF for processing. Status code: {response.status_code}")
return None
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append(["函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id=None)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='proofread_en',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_proofread_en',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].lstrip()  # str.lstrip returns a new string; the previous call discarded the result
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
try:
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
except tarfile.ReadError as e:
yield from update_ui_lastest_msg(
"无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
chatbot=chatbot, history=history)
return
if txt.endswith('.pdf'):
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现已经存在翻译好的PDF文档")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].lstrip()  # str.lstrip returns a new string; the previous call discarded the result
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if len(file_manifest) != 1:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
if len(app_id) == 0 or len(app_key) == 0:
report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- convert pdf into tex ------------->
project_folder = pdf2tex_project(file_manifest[0])
# Translate English Latex to Chinese Latex, and compile it
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh',
switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge',
main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder,
work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success


@@ -1,313 +0,0 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
"""
Generate prompts and system prompts based on the mode for proofreading or translating.
Args:
- pfg: Proofreader or Translator instance.
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
Returns:
- inputs_array: A list of strings containing prompts for users to respond to.
- sys_prompt_array: A list of strings containing prompts for system prompts.
"""
n_split = len(pfg.sp_file_contents)
if mode == 'proofread_en':
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
r"Answer me only with the revised text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
elif mode == 'translate_zh':
inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
r"Answer me only with the translated text:" +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
else:
assert False, "未知指令"
return inputs_array, sys_prompt_array
def desend_to_extracted_folder_if_exist(project_folder):
"""
Descend into the extracted folder if it exists, otherwise return the original folder.
Args:
- project_folder: A string specifying the folder path.
Returns:
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
"""
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
if len(maybe_dir) == 0: return project_folder
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
return project_folder
def move_project(project_folder, arxiv_id=None):
"""
Create a new work folder and copy the project folder to it.
Args:
- project_folder: A string specifying the folder path of the project.
Returns:
- A string specifying the path to the new work folder.
"""
import shutil, time
time.sleep(2) # avoid time string conflict
if arxiv_id is not None:
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
else:
new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
try:
shutil.rmtree(new_workfolder)
except:
pass
# align subfolder if there is a folder wrapper
items = glob.glob(pj(project_folder,'*'))
items = [item for item in items if os.path.basename(item)!='__MACOSX']
if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
if os.path.isdir(items[0]): project_folder = items[0]
shutil.copytree(src=project_folder, dst=new_workfolder)
return new_workfolder
def arxiv_download(chatbot, history, txt, allow_cache=True):
def check_cached_translation_pdf(arxiv_id):
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
if not os.path.exists(translation_dir):
os.makedirs(translation_dir)
target_file = pj(translation_dir, 'translate_zh.pdf')
if os.path.exists(target_file):
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
target_file_compare = pj(translation_dir, 'comparison.pdf')
if os.path.exists(target_file_compare):
promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
return target_file
return False
def is_float(s):
try:
float(s)
return True
except ValueError:
return False
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt.strip()
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
txt = 'https://arxiv.org/abs/' + txt[:10]
if not txt.startswith('https://arxiv.org'):
return txt, None # 是本地文件,跳过下载
# <-------------- inspect format ------------->
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
url_ = txt # https://arxiv.org/abs/1707.06690
if not txt.startswith('https://arxiv.org/abs/'):
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
return msg, None
# <-------------- set format ------------->
arxiv_id = url_.split('/abs/')[-1]
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
url_tar = url_.replace('/abs/', '/e-print/')
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
dst = pj(translation_dir, arxiv_id+'.tar')
if os.path.exists(dst):
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
else:
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
with open(dst, 'wb+') as f:
f.write(r.content)
# <-------------- extract file ------------->
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
from toolbox import extract_archive
extract_archive(file_path=dst, dest_dir=extract_dst)
return extract_dst, arxiv_id
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append([ "函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([ f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id=None)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
if no_cache: more_req.lstrip("--no-cache")
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
subprocess.Popen(['pdflatex', '-version'])
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
except Exception as e:
chatbot.append([ f"解析项目: {txt}",
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- clear history and read input ------------->
history = []
try:
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
except tarfile.ReadError as e:
yield from update_ui_lastest_msg(
"无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
chatbot=chatbot, history=history)
return
if txt.endswith('.pdf'):
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# <-------------- if is a zip/tar file ------------->
project_folder = desend_to_extracted_folder_if_exist(project_folder)
# <-------------- move latex project away from temp folder ------------->
project_folder = move_project(project_folder, arxiv_id)
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)
# <-------------- compile PDF ------------->
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <-------------- we are done ------------->
return success


@@ -12,7 +12,7 @@ def input_clipping(inputs, history, max_token_limit):
    mode = 'input-and-history'
    # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史
    input_token_num = get_token_num(inputs)
    if input_token_num < max_token_limit//2:
        mode = 'only-history'
        max_token_limit = max_token_limit - input_token_num
@@ -21,7 +21,7 @@ def input_clipping(inputs, history, max_token_limit):
    n_token = get_token_num('\n'.join(everything))
    everything_token = [get_token_num(e) for e in everything]
    delta = max(everything_token) // 16 # 截断时的颗粒度
    while n_token > max_token_limit:
        where = np.argmax(everything_token)
        encoded = enc.encode(everything[where], disallowed_special=())
@@ -38,9 +38,9 @@ def input_clipping(inputs, history, max_token_limit):
    return inputs, history

def request_gpt_model_in_new_thread_with_ui_alive(
        inputs, inputs_show_user, llm_kwargs,
        chatbot, history, sys_prompt, refresh_interval=0.2,
        handle_token_exceed=True,
        retry_times_at_unknown_error=2,
        ):
    """
@@ -77,7 +77,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
    exceeded_cnt = 0
    while True:
        # watchdog error
        if len(mutable) >= 2 and (time.time()-mutable[1]) > watch_dog_patience:
            raise RuntimeError("检测到程序终止。")
        try:
            # 【第一种情况】:顺利完成
@@ -140,12 +140,12 @@ def can_multi_process(llm):
    if llm.startswith('api2d-'): return True
    if llm.startswith('azure-'): return True
    if llm.startswith('spark'): return True
-   if llm.startswith('zhipuai'): return True
+   if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
    return False

def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        inputs_array, inputs_show_user_array, llm_kwargs,
        chatbot, history_array, sys_prompt_array,
        refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
        handle_token_exceed=True, show_user_at_complete=False,
        retry_times_at_unknown_error=2,
@@ -189,7 +189,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
    # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
    if not can_multi_process(llm_kwargs['llm_model']):
        max_workers = 1

    executor = ThreadPoolExecutor(max_workers=max_workers)
    n_frag = len(inputs_array)
    # 用户反馈
@@ -214,7 +214,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            try:
                # 【第一种情况】:顺利完成
                gpt_say = predict_no_ui_long_connection(
                    inputs=inputs, llm_kwargs=llm_kwargs, history=history,
                    sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
                )
                mutable[index][2] = "已成功"
@@ -246,7 +246,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                print(tb_str)
                gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
                if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                if retry_op > 0:
                    retry_op -= 1
                    wait = random.randint(5, 20)
                    if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
@@ -287,8 +287,8 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                    replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
                observe_win.append(print_something_really_funny)
                # 在前端打印些好玩的东西
                stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
                                    if not done else f'`{mutable[thread_index][2]}`\n\n'
                                    for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
                # 在前端打印些好玩的东西
                chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
@@ -302,7 +302,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        for inputs_show_user, f in zip(inputs_show_user_array, futures):
            gpt_res = f.result()
            gpt_response_collection.extend([inputs_show_user, gpt_res])

        # 是否在结束时,在界面上显示结果
        if show_user_at_complete:
            for inputs_show_user, f in zip(inputs_show_user_array, futures):
@@ -352,7 +352,7 @@ def read_and_clean_pdf_text(fp):
            if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
            fsize_statiscs[wtf['size']] += len(wtf['text'])
        return max(fsize_statiscs, key=fsize_statiscs.get)

    def ffsize_same(a,b):
        """
        提取字体大小是否近似相等
@@ -388,7 +388,7 @@ def read_and_clean_pdf_text(fp):
        if index == 0:
            page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
                '- ', '') for t in text_areas['blocks'] if 'lines' in t]

    ############################## <第 2 步,获取正文主字体> ##################################
    try:
        fsize_statiscs = {}
@@ -404,7 +404,7 @@ def read_and_clean_pdf_text(fp):
    mega_sec = []
    sec = []
    for index, line in enumerate(meta_line):
        if index == 0:
            sec.append(line[fc])
            continue
        if REMOVE_FOOT_NOTE:
@@ -501,12 +501,12 @@ def get_files_from_everything(txt, type): # type='.md'
""" """
这个函数是用来获取指定目录下所有指定类型(如.md的文件,并且对于网络上的文件,也可以获取它。 这个函数是用来获取指定目录下所有指定类型(如.md的文件,并且对于网络上的文件,也可以获取它。
下面是对每个参数和返回值的说明: 下面是对每个参数和返回值的说明:
参数 参数
- txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。 - txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。
- type: 字符串,表示要搜索的文件类型。默认是.md。 - type: 字符串,表示要搜索的文件类型。默认是.md。
返回值 返回值
- success: 布尔值,表示函数是否成功执行。 - success: 布尔值,表示函数是否成功执行。
- file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。 - file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。
- project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。 - project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。
该函数详细注释已添加,请确认是否满足您的需要。 该函数详细注释已添加,请确认是否满足您的需要。
""" """
@@ -570,7 +570,7 @@ class nougat_interface():
    def NOUGAT_parse_pdf(self, fp, chatbot, history):
        from toolbox import update_ui_lastest_msg
        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
                                         chatbot=chatbot, history=history, delay=0)
        self.threadLock.acquire()
        import glob, threading, os
@@ -578,7 +578,7 @@ class nougat_interface():
        dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
        os.makedirs(dst)

        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度:正在加载NOUGAT... 提示:首次运行需要花费较长时间下载NOUGAT参数",
                                         chatbot=chatbot, history=history, delay=0)
        self.nougat_with_timeout(f'nougat --out "{os.path.abspath(dst)}" "{os.path.abspath(fp)}"', os.getcwd(), timeout=3600)
        res = glob.glob(os.path.join(dst,'*.mmd'))


@@ -0,0 +1,85 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re
def extract_text_from_files(txt, chatbot, history):
"""
查找pdf/md/word并获取文本内容并返回状态以及文本
输入参数 Args:
chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化)
history (list): List of chat history (历史,对话历史列表)
输出 Returns:
文件是否存在(bool)
final_result(list):文本内容
page_one(list):第一页内容/摘要
file_manifest(list):文件路径
excption(string):需要用户手动处理的信息,如没出错则保持为空
"""
final_result = []
page_one = []
file_manifest = []
excption = ""
if txt == "":
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
#查找输入区内容中的文件
file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx')
file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc')
if file_doc:
excption = "word"
return False, final_result, page_one, file_manifest, excption
file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
if file_num == 0:
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
if file_pdf:
try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
import fitz
except:
excption = "pdf"
return False, final_result, page_one, file_manifest, excption
for index, fp in enumerate(pdf_manifest):
file_content, pdf_one = read_and_clean_pdf_text(fp) # 尝试按照章节切割PDF
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
final_result.append(file_content)
page_one.append(pdf_one)
file_manifest.append(os.path.relpath(fp, folder_pdf))
if file_md:
for index, fp in enumerate(md_manifest):
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
file_content = f.read()
file_content = file_content.encode('utf-8', 'ignore').decode()
headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE) #接下来提取md中的一级/二级标题作为摘要
if len(headers) > 0:
page_one.append("\n".join(headers)) #合并所有的标题,以换行符分割
else:
page_one.append("")
final_result.append(file_content)
file_manifest.append(os.path.relpath(fp, folder_md))
if file_word:
try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
from docx import Document
except:
excption = "word_pip"
return False, final_result, page_one, file_manifest, excption
for index, fp in enumerate(word_manifest):
doc = Document(fp)
file_content = '\n'.join([p.text for p in doc.paragraphs])
file_content = file_content.encode('utf-8', 'ignore').decode()
page_one.append(file_content[:200])
final_result.append(file_content)
file_manifest.append(os.path.relpath(fp, folder_word))
return True, final_result, page_one, file_manifest, excption
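
# Usage sketch (hypothetical): callers are expected to branch on the returned excption marker,
# as 生成多种Mermaid图表 does further below:
#   ok, texts, summaries, files, excption = extract_text_from_files(txt, chatbot, history)
#   if excption == "pdf": ...      # missing pymupdf dependency
#   elif excption == "word": ...   # .doc must first be converted to .docx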


@@ -1,6 +1,5 @@
from toolbox import CatchException, update_ui, report_exception
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import read_and_clean_pdf_text
import datetime

#以下是每类图表的PROMPT
@@ -162,7 +161,7 @@ mindmap
```
"""

-def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
+def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
    ############################## <第 0 步,切割输入> ##################################
    # 借用PDF切割中的函数对文本进行切割
    TOKEN_LIMIT_PER_FRAGMENT = 2500
@@ -170,8 +169,6 @@ def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
    ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
-   i_say_show_user = f'首先你从历史记录或文件中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
-   chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
    results = []
    MAX_WORD_TOTAL = 4096
    n_txt = len(txt)
@@ -179,7 +176,7 @@ def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
    if n_txt >= 20: print('文章极长,不能达到预期效果')
    for i in range(n_txt):
        NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
-       i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i]}"
+       i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
        i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                           llm_kwargs, chatbot,
@@ -232,34 +229,10 @@ def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
            inputs=i_say,
            inputs_show_user=i_say_show_user,
            llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-           sys_prompt="你精通使用mermaid语法来绘制图表,首先确保语法正确,其次避免在mermaid语法中使用不允许的字符,此外也应当分考虑图表的可读性。"
+           sys_prompt=""
        )
        history.append(gpt_say)
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新

-def 输入区文件处理(txt):
-    if txt == "": return False, txt
-    success = True
-    import glob
-    from .crazy_utils import get_files_from_everything
-    file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
-    file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
-    if len(pdf_manifest) == 0 and len(md_manifest) == 0:
-        return False, txt #如输入区内容不是文件则直接返回输入区内容
-    final_result = ""
-    if file_pdf:
-        for index, fp in enumerate(pdf_manifest):
-            file_content, page_one = read_and_clean_pdf_text(fp) # 尝试按照章节切割PDF
-            file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
-            final_result += "\n" + file_content
-    if file_md:
-        for index, fp in enumerate(md_manifest):
-            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
-                file_content = f.read()
-            file_content = file_content.encode('utf-8', 'ignore').decode()
-            final_result += "\n" + file_content
-    return True, final_result

@CatchException
def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
@@ -277,26 +250,47 @@ def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history,
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
-       "根据当前聊天历史或文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
+       "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

-   # 尝试导入依赖,如果缺少依赖,则给出安装建议
-   try:
-       import fitz
-   except:
-       report_exception(chatbot, history,
-           a = f"解析项目: {txt}",
-           b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-       yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-       return

    if os.path.exists(txt): #如输入区无内容则直接解析历史记录
-       file_exist, txt = 输入区文件处理(txt)
+       from crazy_functions.pdf_fns.parse_word import extract_text_from_files
+       file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
    else:
        file_exist = False
+       excption = ""
+       file_manifest = []

-   if file_exist : history = [] #如输入区内容为文件则清空历史记录
-   history.append(txt) #将解析后的txt传递加入到历史中
-   yield from 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs)
+   if excption != "":
+       if excption == "word":
+           report_exception(chatbot, history,
+               a = f"解析项目: {txt}",
+               b = f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。")
+       elif excption == "pdf":
+           report_exception(chatbot, history,
+               a = f"解析项目: {txt}",
+               b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
+       elif excption == "word_pip":
+           report_exception(chatbot, history,
+               a=f"解析项目: {txt}",
+               b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
+       yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+   else:
+       if not file_exist:
+           history.append(txt) #如输入区不是文件则将输入区内容加入历史记录
+           i_say_show_user = f'首先你从历史记录中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
+           chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
+           yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
+       else:
+           file_num = len(file_manifest)
+           for i in range(file_num): #依次处理文件
+               i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"; gpt_say = "[Local Message] 收到。" # 用户提示
+               chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
+               history = [] #如输入区内容为文件则清空历史记录
+               history.append(final_result[i])
+               yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)


@@ -12,6 +12,12 @@ class PaperFileGroup():
        self.sp_file_index = []
        self.sp_file_tag = []

+       # count_token
+       from request_llms.bridge_all import model_info
+       enc = model_info["gpt-3.5-turbo"]['tokenizer']
+       def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+       self.get_token_num = get_token_num

    def run_file_split(self, max_token_limit=1900):
        """
        将长文本分离开来
@@ -54,7 +60,7 @@ def parseNotebook(filename, enable_markdown=1):
Code += f"This is {idx+1}th code block: \n" Code += f"This is {idx+1}th code block: \n"
Code += code+"\n" Code += code+"\n"
return Code return Code
def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):


@@ -0,0 +1,28 @@
# encoding: utf-8
# @Time : 2023/4/19
# @Author : Spike
# @Descr :
from toolbox import update_ui
from toolbox import CatchException, report_execption, write_results_to_file
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@CatchException
def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
if txt:
show_say = txt
prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
else:
prompt = history[-1]+"\n分析上述回答,再列出用户可能提出的三个问题。"
show_say = '分析上述回答,再列出用户可能提出的三个问题。'
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt,
inputs_show_user=show_say,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=history,
sys_prompt=system_prompt
)
chatbot[-1] = (show_say, gpt_say)
history.extend([show_say, gpt_say])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -10,6 +10,9 @@ ENV PATH "$PATH:/usr/local/texlive/2024/bin/x86_64-linux"
ENV PATH "$PATH:/usr/local/texlive/2025/bin/x86_64-linux" ENV PATH "$PATH:/usr/local/texlive/2025/bin/x86_64-linux"
ENV PATH "$PATH:/usr/local/texlive/2026/bin/x86_64-linux" ENV PATH "$PATH:/usr/local/texlive/2026/bin/x86_64-linux"
# 删除文档文件以节约空间
RUN rm -rf /usr/local/texlive/2023/texmf-dist/doc
# 指定路径 # 指定路径
WORKDIR /gpt WORKDIR /gpt


@@ -1668,7 +1668,7 @@
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage", "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
"Langchain知识库": "LangchainKnowledgeBase", "Langchain知识库": "LangchainKnowledgeBase",
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison", "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
"Latex输出PDF结果": "OutputPDFFromLatex", "Latex输出PDF": "OutputPDFFromLatex",
"Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
"sprint亮靛": "SprintIndigo", "sprint亮靛": "SprintIndigo",
"寻找Latex主文件": "FindLatexMainFile", "寻找Latex主文件": "FindLatexMainFile",
@@ -3004,5 +3004,7 @@
"1. 上传图片": "TranslatedText", "1. 上传图片": "TranslatedText",
"保存状态": "TranslatedText", "保存状态": "TranslatedText",
"GPT-Academic对话存档": "TranslatedText", "GPT-Academic对话存档": "TranslatedText",
"Arxiv论文精细翻译": "TranslatedText" "Arxiv论文精细翻译": "TranslatedText",
"from crazy_functions.AdvancedFunctionTemplate import 测试图表渲染": "from crazy_functions.AdvancedFunctionTemplate import test_chart_rendering",
"测试图表渲染": "test_chart_rendering"
} }


@@ -1492,7 +1492,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunction", "交互功能模板函数": "InteractiveFunctionTemplateFunction",
"交互功能函数模板": "InteractiveFunctionFunctionTemplate", "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
"Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
"Latex输出PDF结果": "LatexOutputPDFResult", "Latex输出PDF": "LatexOutputPDFResult",
"Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",


@@ -16,7 +16,7 @@
"批量Markdown翻译": "BatchTranslateMarkdown", "批量Markdown翻译": "BatchTranslateMarkdown",
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion", "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
"Langchain知识库": "LangchainKnowledgeBase", "Langchain知识库": "LangchainKnowledgeBase",
"Latex输出PDF结果": "OutputPDFFromLatex", "Latex输出PDF": "OutputPDFFromLatex",
"把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline", "把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
"Latex精细分解与转化": "DecomposeAndConvertLatex", "Latex精细分解与转化": "DecomposeAndConvertLatex",
"解析一个C项目的头文件": "ParseCProjectHeaderFiles", "解析一个C项目的头文件": "ParseCProjectHeaderFiles",
@@ -97,5 +97,12 @@
"多智能体": "MultiAgent", "多智能体": "MultiAgent",
"图片生成_DALLE2": "ImageGeneration_DALLE2", "图片生成_DALLE2": "ImageGeneration_DALLE2",
"图片生成_DALLE3": "ImageGeneration_DALLE3", "图片生成_DALLE3": "ImageGeneration_DALLE3",
"图片修改_DALLE2": "ImageModification_DALLE2" "图片修改_DALLE2": "ImageModification_DALLE2",
} "生成多种Mermaid图表": "GenerateMultipleMermaidCharts",
"知识库文件注入": "InjectKnowledgeBaseFiles",
"PDF翻译中文并重新编译PDF": "TranslatePDFToChineseAndRecompilePDF",
"随机小游戏": "RandomMiniGame",
"互动小游戏": "InteractiveMiniGame",
"解析历史输入": "ParseHistoricalInput",
"高阶功能模板函数示意图": "HighOrderFunctionTemplateDiagram"
}


@@ -1468,7 +1468,7 @@
"交互功能模板函数": "InteractiveFunctionTemplateFunctions", "交互功能模板函数": "InteractiveFunctionTemplateFunctions",
"交互功能函数模板": "InteractiveFunctionFunctionTemplates", "交互功能函数模板": "InteractiveFunctionFunctionTemplates",
"Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison", "Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
"Latex输出PDF结果": "OutputPDFFromLatex", "Latex输出PDF": "OutputPDFFromLatex",
"Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF", "Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"微调数据集生成": "FineTuneDatasetGeneration", "微调数据集生成": "FineTuneDatasetGeneration",


@@ -1,30 +0,0 @@
try {
$("<link>").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head');
$('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
$.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() {
$.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() {
/* 可直接修改部分参数 */
live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
live2d_settings['modelId'] = 5; // 默认模型 ID
live2d_settings['modelTexturesId'] = 1; // 默认材质 ID
live2d_settings['modelStorage'] = false; // 不储存模型 ID
live2d_settings['waifuSize'] = '210x187';
live2d_settings['waifuTipsSize'] = '187x52';
live2d_settings['canSwitchModel'] = true;
live2d_settings['canSwitchTextures'] = true;
live2d_settings['canSwitchHitokoto'] = false;
live2d_settings['canTakeScreenshot'] = false;
live2d_settings['canTurnToHomePage'] = false;
live2d_settings['canTurnToAboutPage'] = false;
live2d_settings['showHitokoto'] = false; // 显示一言
live2d_settings['showF12Status'] = false; // 显示加载状态
live2d_settings['showF12Message'] = false; // 显示看板娘消息
live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
/* 在 initModel 前添加 */
initModel("file=docs/waifu_plugin/waifu-tips.json");
}});
}});
} catch(err) { console.log("[Error] JQuery is not defined.") }


@@ -31,6 +31,9 @@ from .bridge_qianfan import predict as qianfan_ui
from .bridge_google_gemini import predict as genai_ui
from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui

+from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
+from .bridge_zhipu import predict as zhipu_ui

colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']

class LazyloadTiktoken(object):
@@ -44,13 +47,13 @@ class LazyloadTiktoken(object):
        tmp = tiktoken.encoding_for_model(model)
        print('加载tokenizer完毕')
        return tmp

    def encode(self, *args, **kwargs):
        encoder = self.get_encoder(self.model)
        return encoder.encode(*args, **kwargs)

    def decode(self, *args, **kwargs):
        encoder = self.get_encoder(self.model)
        return encoder.decode(*args, **kwargs)

# Endpoint 重定向
@@ -63,7 +66,7 @@ azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/compl
# 兼容旧版的配置
try:
    API_URL = get_conf("API_URL")
    if API_URL != "https://api.openai.com/v1/chat/completions":
        openai_endpoint = API_URL
        print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
except:
@@ -95,7 +98,7 @@ model_info = {
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
"gpt-3.5-turbo-16k": { "gpt-3.5-turbo-16k": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
@@ -185,7 +188,7 @@ model_info = {
"tokenizer": tokenizer_gpt4, "tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4, "token_cnt": get_token_num_gpt4,
}, },
"gpt-4-vision-preview": { "gpt-4-vision-preview": {
"fn_with_ui": chatgpt_vision_ui, "fn_with_ui": chatgpt_vision_ui,
"fn_without_ui": chatgpt_vision_noui, "fn_without_ui": chatgpt_vision_noui,
@@ -215,16 +218,25 @@ model_info = {
"token_cnt": get_token_num_gpt4, "token_cnt": get_token_num_gpt4,
}, },
# api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加) # 智谱AI
"api2d-gpt-3.5-turbo": { "glm-4": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": zhipu_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": zhipu_noui,
"endpoint": api2d_endpoint, "endpoint": None,
"max_token": 4096, "max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-3-turbo": {
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 4,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
}, },
# api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
"api2d-gpt-4": { "api2d-gpt-4": {
"fn_with_ui": chatgpt_ui, "fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui, "fn_without_ui": chatgpt_noui,
@@ -548,7 +560,7 @@ if "sparkv2" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
        })
    except:
        print(trimmed_format_exc())
-if "sparkv3" in AVAIL_LLM_MODELS:   # 讯飞星火认知大模型
+if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS:   # 讯飞星火认知大模型
    try:
        from .bridge_spark import predict_no_ui_long_connection as spark_noui
        from .bridge_spark import predict as spark_ui
@@ -560,6 +572,14 @@ if "sparkv3" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
"max_token": 4096, "max_token": 4096,
"tokenizer": tokenizer_gpt35, "tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35, "token_cnt": get_token_num_gpt35,
},
"sparkv3.5": {
"fn_with_ui": spark_ui,
"fn_without_ui": spark_noui,
"endpoint": None,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
} }
}) })
except: except:
@@ -580,19 +600,17 @@ if "llama2" in AVAIL_LLM_MODELS: # llama2
        })
    except:
        print(trimmed_format_exc())
-if "zhipuai" in AVAIL_LLM_MODELS:   # zhipuai
+if "zhipuai" in AVAIL_LLM_MODELS:   # zhipuai 是glm-4的别名,向后兼容配置
    try:
-       from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
-       from .bridge_zhipu import predict as zhipu_ui
        model_info.update({
            "zhipuai": {
                "fn_with_ui": zhipu_ui,
                "fn_without_ui": zhipu_noui,
                "endpoint": None,
-               "max_token": 4096,
+               "max_token": 10124 * 8,
                "tokenizer": tokenizer_gpt35,
                "token_cnt": get_token_num_gpt35,
-           }
+           },
        })
    except:
        print(trimmed_format_exc())
@@ -635,7 +653,7 @@ AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
if len(AZURE_CFG_ARRAY) > 0:
    for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
        # 可能会覆盖之前的配置,但这是意料之中的
        if not azure_model_name.startswith('azure'):
            raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
        endpoint_ = azure_cfg_dict["AZURE_ENDPOINT"] + \
            f'openai/deployments/{azure_cfg_dict["AZURE_ENGINE"]}/chat/completions?api-version=2023-05-15'
@@ -701,7 +719,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
executor = ThreadPoolExecutor(max_workers=4) executor = ThreadPoolExecutor(max_workers=4)
models = model.split('&') models = model.split('&')
n_model = len(models) n_model = len(models)
window_len = len(observe_window) window_len = len(observe_window)
assert window_len==3 assert window_len==3
window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True] window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
@@ -720,7 +738,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
time.sleep(0.25) time.sleep(0.25)
if not window_mutex[-1]: break if not window_mutex[-1]: break
# 看门狗watchdog # 看门狗watchdog
for i in range(n_model): for i in range(n_model):
window_mutex[i][1] = observe_window[1] window_mutex[i][1] = observe_window[1]
# 观察窗window # 观察窗window
chat_string = [] chat_string = []
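The model_info table above is the whole dispatch mechanism: every model name is bound to a streaming UI function, a headless function, an endpoint (None for SDK-based backends such as glm-4), a token budget, and a tokenizer. A minimal sketch of that lookup pattern, with stand-in functions rather than the project's real bridges:

# Minimal sketch of registry-based dispatch, mirroring model_info above.
# fake_ui / fake_noui stand in for the real bridge functions.
def fake_ui(prompt): return f"[with ui] {prompt}"
def fake_noui(prompt): return f"[headless] {prompt}"

registry = {
    "glm-4": {"fn_with_ui": fake_ui, "fn_without_ui": fake_noui,
              "endpoint": None, "max_token": 10124 * 8},
}

def dispatch(model: str, prompt: str, with_ui: bool = True) -> str:
    entry = registry[model]  # unknown model -> KeyError, caught upstream
    fn = entry["fn_with_ui"] if with_ui else entry["fn_without_ui"]
    return fn(prompt)

print(dispatch("glm-4", "你好"))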


@@ -113,6 +113,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
                 error_msg = get_full_error(chunk, stream_response).decode()
                 if "reduce the length" in error_msg:
                     raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
+                elif """type":"upstream_error","param":"307""" in error_msg:
+                    raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
                 else:
                     raise RuntimeError("OpenAI拒绝了请求" + error_msg)
             if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
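The new branch captures a non-standard overflow report: the stream ends normally, but the body carries an api2d-style upstream_error marker, so the request is aborted with a "trim your input" hint instead of a generic failure. The needle itself contains double quotes, hence the triple-quoted literal. The same triage in isolation (function name is illustrative):

def triage_openai_error(error_msg: str) -> None:
    # Raise a retryable error for token overflow, a hard error otherwise.
    if "reduce the length" in error_msg:
        raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
    elif """type":"upstream_error","param":"307""" in error_msg:
        # stream ended normally, but the output was truncated for lack of tokens
        raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
    else:
        raise RuntimeError("OpenAI拒绝了请求" + error_msg)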


@@ -57,6 +57,10 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     if "vision" in llm_kwargs["llm_model"]:
         have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+        if not have_recent_file:
+            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+            yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+            return
         def make_media_input(inputs, image_paths):
             for image_path in image_paths:
                 inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'


@@ -146,21 +146,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         yield from update_ui(chatbot=chatbot, history=history)
     # 开始接收回复
     try:
+        response = f"[Local Message] 等待{model_name}响应中 ..."
         for response in generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
             chatbot[-1] = (inputs, response)
             yield from update_ui(chatbot=chatbot, history=history)
+        history.extend([inputs, response])
+        yield from update_ui(chatbot=chatbot, history=history)
     except ConnectionAbortedError as e:
         from .bridge_all import model_info
         if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
         history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
                                max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
         chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
         yield from update_ui(chatbot=chatbot, history=history, msg="异常") # 刷新界面
         return
-    # 总结输出
-    response = f"[Local Message] {model_name}响应异常 ..."
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
-    history.extend([inputs, response])
-    yield from update_ui(chatbot=chatbot, history=history)
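This hunk fixes two related bugs: the old "总结输出" tail unconditionally overwrote response with the "响应异常" string (so history recorded an error even after a successful reply), and if the generator yielded nothing, response was never bound at all. Seeding a placeholder before the loop and extending history immediately after it fixes both. The pattern in isolation (the generator here is made up):

def fake_stream():
    yield from []  # simulate a model that returns nothing

model_name = "qianfan"
response = f"[Local Message] 等待{model_name}响应中 ..."  # seed before the loop
for response in fake_stream():
    pass  # stream chunks to the UI here
# even with an empty stream, `response` is defined and history stays consistent
history = ["user input", response]
print(history)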


@@ -51,6 +51,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # 开始接收回复
     from .com_qwenapi import QwenRequestInstance
     sri = QwenRequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)


@@ -56,6 +56,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # 开始接收回复
     from .com_skylark2api import YUNQUERequestInstance
     sri = YUNQUERequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)


@@ -9,7 +9,7 @@ model_name = '星火认知大模型'
 def validate_key():
     XFYUN_APPID = get_conf('XFYUN_APPID')
     if XFYUN_APPID == '00000000' or XFYUN_APPID == '':
         return False
     return True
@@ -49,9 +49,10 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
     # 开始接收回复
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."
     for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)


@@ -1,15 +1,21 @@
 import time
+import os
 from toolbox import update_ui, get_conf, update_ui_lastest_msg
-from toolbox import check_packages, report_exception
+from toolbox import check_packages, report_exception, have_any_recent_upload_image_files

 model_name = '智谱AI大模型'
+zhipuai_default_model = 'glm-4'

 def validate_key():
     ZHIPUAI_API_KEY = get_conf("ZHIPUAI_API_KEY")
     if ZHIPUAI_API_KEY == '': return False
     return True

+def make_media_input(inputs, image_paths):
+    for image_path in image_paths:
+        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
+    return inputs
+
 def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
     """
     ⭐多线程方法
@@ -18,34 +24,40 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     watch_dog_patience = 5
     response = ""

+    if llm_kwargs["llm_model"] == "zhipuai":
+        llm_kwargs["llm_model"] = zhipuai_default_model
+
     if validate_key() is False:
         raise RuntimeError('请配置ZHIPUAI_API_KEY')

-    from .com_zhipuapi import ZhipuRequestInstance
-    sri = ZhipuRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+    # 开始接收回复
+    from .com_zhipuglm import ZhipuChatInit
+    zhipu_bro_init = ZhipuChatInit()
+    for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, sys_prompt):
         if len(observe_window) >= 1:
             observe_window[0] = response
         if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+            if (time.time() - observe_window[1]) > watch_dog_patience:
+                raise RuntimeError("程序终止。")
     return response

-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
     """
     ⭐单线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
-    chatbot.append((inputs, ""))
+    chatbot.append([inputs, ""])
     yield from update_ui(chatbot=chatbot, history=history)

     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         check_packages(["zhipuai"])
     except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install zhipuai==1.0.7```。",
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
                                          chatbot=chatbot, history=history, delay=0)
         return

     if validate_key() is False:
         yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
         return
@@ -53,16 +65,29 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     if additional_fn is not None:
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-    # 开始接收回复
-    from .com_zhipuapi import ZhipuRequestInstance
-    sri = ZhipuRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
-        chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
-    # 总结输出
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
+    chatbot[-1] = [inputs, ""]
+    yield from update_ui(chatbot=chatbot, history=history)

+    if llm_kwargs["llm_model"] == "zhipuai":
+        llm_kwargs["llm_model"] = zhipuai_default_model
+
+    if llm_kwargs["llm_model"] in ["glm-4v"]:
+        have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+        if not have_recent_file:
+            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+            yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+            return
+        if have_recent_file:
+            inputs = make_media_input(inputs, image_paths)
+            chatbot[-1] = [inputs, ""]
+            yield from update_ui(chatbot=chatbot, history=history)
+
+    # 开始接收回复
+    from .com_zhipuglm import ZhipuChatInit
+    zhipu_bro_init = ZhipuChatInit()
+    for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, system_prompt):
+        chatbot[-1] = [inputs, response]
+        yield from update_ui(chatbot=chatbot, history=history)
     history.extend([inputs, response])
     yield from update_ui(chatbot=chatbot, history=history)
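Both entry points now normalize the legacy model name first: "zhipuai" survives only as a backward-compatible alias that is rewritten to zhipuai_default_model before any dispatch happens. The rewrite in isolation:

# Alias normalization as done in both entry points above (sketch).
zhipuai_default_model = 'glm-4'

def normalize_model(llm_kwargs: dict) -> dict:
    if llm_kwargs["llm_model"] == "zhipuai":  # legacy config value
        llm_kwargs["llm_model"] = zhipuai_default_model
    return llm_kwargs

print(normalize_model({"llm_model": "zhipuai"}))  # {'llm_model': 'glm-4'}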


@@ -7,7 +7,7 @@ import os
 import re
 import requests
 from typing import List, Dict, Tuple
-from toolbox import get_conf, encode_image, get_pictures_list
+from toolbox import get_conf, encode_image, get_pictures_list, to_markdown_tabs

 proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")
@@ -112,34 +112,6 @@ def html_local_img(__file, layout="left", max_width=None, max_height=None, md=Tr
     return a

-def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
-    """
-    Args:
-        head: 表头:[]
-        tabs: 表值:[[列1], [列2], [列3], [列4]]
-        alignment: :--- 左对齐, :---: 居中对齐, ---: 右对齐
-        column: True to keep data in columns, False to keep data in rows (default).
-    Returns:
-        A string representation of the markdown table.
-    """
-    if column:
-        transposed_tabs = list(map(list, zip(*tabs)))
-    else:
-        transposed_tabs = tabs
-    # Find the maximum length among the columns
-    max_len = max(len(column) for column in transposed_tabs)
-
-    tab_format = "| %s "
-    tabs_list = "".join([tab_format % i for i in head]) + "|\n"
-    tabs_list += "".join([tab_format % alignment for i in head]) + "|\n"
-
-    for i in range(max_len):
-        row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
-        row_data = file_manifest_filter_html(row_data, filter_=None)
-        tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"
-
-    return tabs_list

 class GoogleChatInit:
     def __init__(self):
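to_markdown_tabs moves out of this module and is now imported from toolbox with the other helpers. For reference, this is roughly what it renders, in a simplified re-implementation that drops the file_manifest_filter_html step:

def to_markdown_tabs_simplified(head, tabs, alignment=":---:", column=False):
    # Same layout logic as the moved function, minus HTML filtering.
    # tabs: list of columns of cells (transposed first when column=True).
    transposed = list(map(list, zip(*tabs))) if column else tabs
    max_len = max(len(col) for col in transposed)
    out = "".join(f"| {h} " for h in head) + "|\n"
    out += "".join(f"| {alignment} " for _ in head) + "|\n"
    for i in range(max_len):
        row = [col[i] if i < len(col) else "" for col in transposed]
        out += "".join(f"| {c} " for c in row) + "|\n"
    return out

print(to_markdown_tabs_simplified(
    ["model", "max_token"],
    [["glm-4", "glm-3-turbo"], ["10124*8", "10124*4"]],
))
# | model | max_token |
# | :---: | :---: |
# | glm-4 | 10124*8 |
# | glm-3-turbo | 10124*4 |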


@@ -65,6 +65,7 @@ class SparkRequestInstance():
         self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
         self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
         self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
+        self.gpt_url_v35 = "wss://spark-api.xf-yun.com/v3.5/chat"
         self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"

         self.time_to_yield_event = threading.Event()
@@ -91,6 +92,8 @@ class SparkRequestInstance():
             gpt_url = self.gpt_url_v2
         elif llm_kwargs['llm_model'] == 'sparkv3':
             gpt_url = self.gpt_url_v3
+        elif llm_kwargs['llm_model'] == 'sparkv3.5':
+            gpt_url = self.gpt_url_v35
         else:
             gpt_url = self.gpt_url
         file_manifest = []
@@ -190,6 +193,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest)
         "spark": "general",
         "sparkv2": "generalv2",
         "sparkv3": "generalv3",
+        "sparkv3.5": "generalv3.5",
     }
     domains_select = domains[llm_kwargs['llm_model']]
     if file_manifest: domains_select = 'image'
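A new Spark tier therefore touches three places: the websocket URL table (note v3.5 uses wss:// where older tiers use ws://), the URL-selection branch, and the domains field sent in the request payload. Condensed into plain dictionaries, with values copied from the hunks above:

# sparkv3.5 routing, condensed from the three hunks above.
SPARK_URLS = {
    "spark": "ws://spark-api.xf-yun.com/v1.1/chat",
    "sparkv2": "ws://spark-api.xf-yun.com/v2.1/chat",
    "sparkv3": "ws://spark-api.xf-yun.com/v3.1/chat",
    "sparkv3.5": "wss://spark-api.xf-yun.com/v3.5/chat",
}
SPARK_DOMAINS = {
    "spark": "general",
    "sparkv2": "generalv2",
    "sparkv3": "generalv3",
    "sparkv3.5": "generalv3.5",
}

def route(llm_model: str):
    # unknown models fall back to v1.1, like the else-branch in SparkRequestInstance
    return SPARK_URLS.get(llm_model, SPARK_URLS["spark"]), SPARK_DOMAINS[llm_model]

print(route("sparkv3.5"))  # ('wss://spark-api.xf-yun.com/v3.5/chat', 'generalv3.5')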


@@ -1,70 +0,0 @@
from toolbox import get_conf
import threading
import logging
timeout_bot_msg = '[Local Message] Request timeout. Network error.'
class ZhipuRequestInstance():
def __init__(self):
self.time_to_yield_event = threading.Event()
self.time_to_exit_event = threading.Event()
self.result_buf = ""
def generate(self, inputs, llm_kwargs, history, system_prompt):
# import _thread as thread
import zhipuai
ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
zhipuai.api_key = ZHIPUAI_API_KEY
self.result_buf = ""
response = zhipuai.model_api.sse_invoke(
model=ZHIPUAI_MODEL,
prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
top_p=llm_kwargs['top_p']*0.7, # 智谱的API抽风,手动*0.7给做个线性变换
temperature=llm_kwargs['temperature']*0.95, # 智谱的API抽风,手动*0.7给做个线性变换
)
for event in response.events():
if event.event == "add":
# if self.result_buf == "" and event.data.startswith(" "):
# event.data = event.data.lstrip(" ") # 每次智谱为啥都要带个空格开头呢?
self.result_buf += event.data
yield self.result_buf
elif event.event == "error" or event.event == "interrupted":
raise RuntimeError("Unknown error:" + event.data)
elif event.event == "finish":
yield self.result_buf
break
else:
raise RuntimeError("Unknown error:" + str(event))
if self.result_buf == "":
yield "智谱没有返回任何数据, 请检查ZHIPUAI_API_KEY和ZHIPUAI_MODEL是否填写正确."
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {self.result_buf}')
return self.result_buf
def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
conversation_cnt = len(history) // 2
messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "":
continue
if what_gpt_answer["content"] == timeout_bot_msg:
continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
return messages


@@ -0,0 +1,84 @@
# encoding: utf-8
# @Time : 2024/1/22
# @Author : Kilig947 & binary husky
# @Descr : 兼容最新的智谱Ai
from toolbox import get_conf
from zhipuai import ZhipuAI
from toolbox import get_conf, encode_image, get_pictures_list
import logging, os
def input_encode_handler(inputs, llm_kwargs):
if llm_kwargs["most_recent_uploaded"].get("path"):
image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
md_encode = []
for md_path in image_paths:
type_ = os.path.splitext(md_path)[1].replace(".", "")
type_ = "jpeg" if type_ == "jpg" else type_
md_encode.append({"data": encode_image(md_path), "type": type_})
return inputs, md_encode
class ZhipuChatInit:
def __init__(self):
ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
if len(ZHIPUAI_MODEL) > 0:
logging.error('ZHIPUAI_MODEL 配置项选项已经弃用,请在LLM_MODEL中配置')
self.zhipu_bro = ZhipuAI(api_key=ZHIPUAI_API_KEY)
self.model = ''
def __conversation_user(self, user_input: str, llm_kwargs):
if self.model not in ["glm-4v"]:
return {"role": "user", "content": user_input}
else:
input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
what_i_have_asked = {"role": "user", "content": []}
what_i_have_asked['content'].append({"type": 'text', "text": user_input})
if encode_img:
img_d = {"type": "image_url",
"image_url": {'url': encode_img}}
what_i_have_asked['content'].append(img_d)
return what_i_have_asked
def __conversation_history(self, history, llm_kwargs):
messages = []
conversation_cnt = len(history) // 2
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
what_gpt_answer = {
"role": "assistant",
"content": history[index + 1]
}
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
return messages
def __conversation_message_payload(self, inputs, llm_kwargs, history, system_prompt):
messages = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
self.model = llm_kwargs['llm_model']
messages.extend(self.__conversation_history(history, llm_kwargs)) # 处理 history
messages.append(self.__conversation_user(inputs, llm_kwargs)) # 处理用户对话
response = self.zhipu_bro.chat.completions.create(
model=self.model, messages=messages, stream=True,
temperature=llm_kwargs.get('temperature', 0.95) * 0.95, # 只能传默认的 temperature 和 top_p
top_p=llm_kwargs.get('top_p', 0.7) * 0.7,
max_tokens=llm_kwargs.get('max_tokens', 1024 * 4), # 最大输出模型的一半
)
return response
def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
self.model = llm_kwargs['llm_model']
response = self.__conversation_message_payload(inputs, llm_kwargs, history, system_prompt)
bro_results = ''
for chunk in response:
bro_results += chunk.choices[0].delta.content
yield chunk.choices[0].delta.content, bro_results
if __name__ == '__main__':
zhipu = ZhipuChatInit()
zhipu.generate_chat('你好', {'llm_model': 'glm-4'}, [], '你是WPSAi')
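generate_chat is a generator yielding (delta, accumulated_text) pairs, which is what lets bridge_zhipu stream partial output into the chatbot. A hedged usage sketch: it assumes the request_llms package layout, zhipuai>=2 installed, a valid ZHIPUAI_API_KEY in the config, and is run from the repo root:

# Consuming the streaming generator (sketch; needs zhipuai>=2 and a real key).
from request_llms.com_zhipuglm import ZhipuChatInit

chat = ZhipuChatInit()
llm_kwargs = {"llm_model": "glm-4", "temperature": 1.0, "top_p": 1.0}
for delta, full_text in chat.generate_chat(
        "用一句话介绍Mermaid图", llm_kwargs, history=[], system_prompt="你是学术助手"):
    print(delta, end="", flush=True)  # delta is the new chunk, full_text the running total

For glm-4v the user turn is packed as a content list with an image_url part, which is why __conversation_user branches on the model name.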


@@ -1,10 +1,10 @@
-https://public.gpt-academic.top/publish/gradio-3.32.7-py3-none-any.whl
+https://public.agent-matrix.com/publish/gradio-3.32.8-py3-none-any.whl
 gradio-client==0.8
 pypdf2==2.12.1
-zhipuai<2
+zhipuai>=2
 tiktoken>=0.3.3
 requests[socks]
-pydantic==1.10.11
+pydantic==2.5.2
 protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
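The zhipuai and pydantic bumps travel together: the zhipuai 2.x SDK is built on pydantic v2, which is presumably why the old pydantic==1.10.11 pin had to move at the same time. A quick post-install sanity check (illustrative; add any packages you care about):

# Verify the upgraded pins resolved together after `pip install -r requirements.txt`.
import importlib.metadata as md

for pkg in ("zhipuai", "pydantic", "tiktoken"):
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")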


@@ -0,0 +1,137 @@
import importlib
import time
import inspect
import re
import os
import base64
import gradio
import shutil
import glob
from shared_utils.config_loader import get_conf
def html_local_file(file):
base_path = os.path.dirname(__file__) # 项目目录
if os.path.exists(str(file)):
file = f'file={file.replace(base_path, ".")}'
return file
def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
style = ""
if max_width is not None:
style += f"max-width: {max_width};"
if max_height is not None:
style += f"max-height: {max_height};"
__file = html_local_file(__file)
a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
if md:
a = f"![{__file}]({__file})"
return a
def file_manifest_filter_type(file_list, filter_: list = None):
new_list = []
if not filter_:
filter_ = ["png", "jpg", "jpeg"]
for file in file_list:
if str(os.path.basename(file)).split(".")[-1] in filter_:
new_list.append(html_local_img(file, md=False))
else:
new_list.append(file)
return new_list
def zip_extract_member_new(self, member, targetpath, pwd):
# 修复中文乱码的问题
"""Extract the ZipInfo object 'member' to a physical
file on the path targetpath.
"""
import zipfile
if not isinstance(member, zipfile.ZipInfo):
member = self.getinfo(member)
# build the destination pathname, replacing
# forward slashes to platform specific separators.
arcname = member.filename.replace('/', os.path.sep)
arcname = arcname.encode('cp437', errors='replace').decode('gbk', errors='replace')
if os.path.altsep:
arcname = arcname.replace(os.path.altsep, os.path.sep)
# interpret absolute pathname as relative, remove drive letter or
# UNC path, redundant separators, "." and ".." components.
arcname = os.path.splitdrive(arcname)[1]
invalid_path_parts = ('', os.path.curdir, os.path.pardir)
arcname = os.path.sep.join(x for x in arcname.split(os.path.sep)
if x not in invalid_path_parts)
if os.path.sep == '\\':
# filter illegal characters on Windows
arcname = self._sanitize_windows_name(arcname, os.path.sep)
targetpath = os.path.join(targetpath, arcname)
targetpath = os.path.normpath(targetpath)
# Create all upper directories if necessary.
upperdirs = os.path.dirname(targetpath)
if upperdirs and not os.path.exists(upperdirs):
os.makedirs(upperdirs)
if member.is_dir():
if not os.path.isdir(targetpath):
os.mkdir(targetpath)
return targetpath
with self.open(member, pwd=pwd) as source, \
open(targetpath, "wb") as target:
shutil.copyfileobj(source, target)
return targetpath
def extract_archive(file_path, dest_dir):
import zipfile
import tarfile
import os
# Get the file extension of the input file
file_extension = os.path.splitext(file_path)[1]
# Extract the archive based on its extension
if file_extension == ".zip":
with zipfile.ZipFile(file_path, "r") as zipobj:
zipobj._extract_member = lambda a,b,c: zip_extract_member_new(zipobj, a,b,c) # 修复中文乱码的问题
zipobj.extractall(path=dest_dir)
print("Successfully extracted zip archive to {}".format(dest_dir))
elif file_extension in [".tar", ".gz", ".bz2"]:
with tarfile.open(file_path, "r:*") as tarobj:
tarobj.extractall(path=dest_dir)
print("Successfully extracted tar archive to {}".format(dest_dir))
# 第三方库,需要预先pip install rarfile
# 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以
elif file_extension == ".rar":
try:
import rarfile
with rarfile.RarFile(file_path) as rf:
rf.extractall(path=dest_dir)
print("Successfully extracted rar archive to {}".format(dest_dir))
except:
print("Rar format requires additional dependencies to install")
return "\n\n解压失败! 需要安装pip install rarfile来解压rar文件。建议使用zip压缩格式。"
# 第三方库,需要预先pip install py7zr
elif file_extension == ".7z":
try:
import py7zr
with py7zr.SevenZipFile(file_path, mode="r") as f:
f.extractall(path=dest_dir)
print("Successfully extracted 7z archive to {}".format(dest_dir))
except:
print("7z format requires additional dependencies to install")
return "\n\n解压失败! 需要安装pip install py7zr来解压7z文件"
else:
return ""
return ""


@@ -20,10 +20,10 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2307.07522")

     plugin_test(
-        plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
+        plugin="crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF",
         main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
     )
@@ -66,7 +66,7 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2210.03629")

     # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内,用第二人称。' --system_prompt=''" }
     # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)


@@ -1,296 +1 @@
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
-/**
* base64.ts
*
* Licensed under the BSD 3-Clause License.
* http://opensource.org/licenses/BSD-3-Clause
*
* References:
* http://en.wikipedia.org/wiki/Base64
*
* @author Dan Kogai (https://github.com/dankogai)
*/
const version = '3.7.2';
/**
* @deprecated use lowercase `version`.
*/
const VERSION = version;
const _hasatob = typeof atob === 'function';
const _hasbtoa = typeof btoa === 'function';
const _hasBuffer = typeof Buffer === 'function';
const _TD = typeof TextDecoder === 'function' ? new TextDecoder() : undefined;
const _TE = typeof TextEncoder === 'function' ? new TextEncoder() : undefined;
const b64ch = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';
const b64chs = Array.prototype.slice.call(b64ch);
const b64tab = ((a) => {
let tab = {};
a.forEach((c, i) => tab[c] = i);
return tab;
})(b64chs);
const b64re = /^(?:[A-Za-z\d+\/]{4})*?(?:[A-Za-z\d+\/]{2}(?:==)?|[A-Za-z\d+\/]{3}=?)?$/;
const _fromCC = String.fromCharCode.bind(String);
const _U8Afrom = typeof Uint8Array.from === 'function'
? Uint8Array.from.bind(Uint8Array)
: (it, fn = (x) => x) => new Uint8Array(Array.prototype.slice.call(it, 0).map(fn));
const _mkUriSafe = (src) => src
.replace(/=/g, '').replace(/[+\/]/g, (m0) => m0 == '+' ? '-' : '_');
const _tidyB64 = (s) => s.replace(/[^A-Za-z0-9\+\/]/g, '');
/**
* polyfill version of `btoa`
*/
const btoaPolyfill = (bin) => {
// console.log('polyfilled');
let u32, c0, c1, c2, asc = '';
const pad = bin.length % 3;
for (let i = 0; i < bin.length;) {
if ((c0 = bin.charCodeAt(i++)) > 255 ||
(c1 = bin.charCodeAt(i++)) > 255 ||
(c2 = bin.charCodeAt(i++)) > 255)
throw new TypeError('invalid character found');
u32 = (c0 << 16) | (c1 << 8) | c2;
asc += b64chs[u32 >> 18 & 63]
+ b64chs[u32 >> 12 & 63]
+ b64chs[u32 >> 6 & 63]
+ b64chs[u32 & 63];
}
return pad ? asc.slice(0, pad - 3) + "===".substring(pad) : asc;
};
/**
* does what `window.btoa` of web browsers do.
* @param {String} bin binary string
* @returns {string} Base64-encoded string
*/
const _btoa = _hasbtoa ? (bin) => btoa(bin)
: _hasBuffer ? (bin) => Buffer.from(bin, 'binary').toString('base64')
: btoaPolyfill;
const _fromUint8Array = _hasBuffer
? (u8a) => Buffer.from(u8a).toString('base64')
: (u8a) => {
// cf. https://stackoverflow.com/questions/12710001/how-to-convert-uint8-array-to-base64-encoded-string/12713326#12713326
const maxargs = 0x1000;
let strs = [];
for (let i = 0, l = u8a.length; i < l; i += maxargs) {
strs.push(_fromCC.apply(null, u8a.subarray(i, i + maxargs)));
}
return _btoa(strs.join(''));
};
/**
* converts a Uint8Array to a Base64 string.
* @param {boolean} [urlsafe] URL-and-filename-safe a la RFC4648 §5
* @returns {string} Base64 string
*/
const fromUint8Array = (u8a, urlsafe = false) => urlsafe ? _mkUriSafe(_fromUint8Array(u8a)) : _fromUint8Array(u8a);
// This trick is found broken https://github.com/dankogai/js-base64/issues/130
// const utob = (src: string) => unescape(encodeURIComponent(src));
// reverting good old fationed regexp
const cb_utob = (c) => {
if (c.length < 2) {
var cc = c.charCodeAt(0);
return cc < 0x80 ? c
: cc < 0x800 ? (_fromCC(0xc0 | (cc >>> 6))
+ _fromCC(0x80 | (cc & 0x3f)))
: (_fromCC(0xe0 | ((cc >>> 12) & 0x0f))
+ _fromCC(0x80 | ((cc >>> 6) & 0x3f))
+ _fromCC(0x80 | (cc & 0x3f)));
}
else {
var cc = 0x10000
+ (c.charCodeAt(0) - 0xD800) * 0x400
+ (c.charCodeAt(1) - 0xDC00);
return (_fromCC(0xf0 | ((cc >>> 18) & 0x07))
+ _fromCC(0x80 | ((cc >>> 12) & 0x3f))
+ _fromCC(0x80 | ((cc >>> 6) & 0x3f))
+ _fromCC(0x80 | (cc & 0x3f)));
}
};
const re_utob = /[\uD800-\uDBFF][\uDC00-\uDFFFF]|[^\x00-\x7F]/g;
/**
* @deprecated should have been internal use only.
* @param {string} src UTF-8 string
* @returns {string} UTF-16 string
*/
const utob = (u) => u.replace(re_utob, cb_utob);
//
const _encode = _hasBuffer
? (s) => Buffer.from(s, 'utf8').toString('base64')
: _TE
? (s) => _fromUint8Array(_TE.encode(s))
: (s) => _btoa(utob(s));
/**
* converts a UTF-8-encoded string to a Base64 string.
* @param {boolean} [urlsafe] if `true` make the result URL-safe
* @returns {string} Base64 string
*/
const encode = (src, urlsafe = false) => urlsafe
? _mkUriSafe(_encode(src))
: _encode(src);
/**
* converts a UTF-8-encoded string to URL-safe Base64 RFC4648 §5.
* @returns {string} Base64 string
*/
const encodeURI = (src) => encode(src, true);
// This trick is found broken https://github.com/dankogai/js-base64/issues/130
// const btou = (src: string) => decodeURIComponent(escape(src));
// reverting good old fationed regexp
const re_btou = /[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF]{2}|[\xF0-\xF7][\x80-\xBF]{3}/g;
const cb_btou = (cccc) => {
switch (cccc.length) {
case 4:
var cp = ((0x07 & cccc.charCodeAt(0)) << 18)
| ((0x3f & cccc.charCodeAt(1)) << 12)
| ((0x3f & cccc.charCodeAt(2)) << 6)
| (0x3f & cccc.charCodeAt(3)), offset = cp - 0x10000;
return (_fromCC((offset >>> 10) + 0xD800)
+ _fromCC((offset & 0x3FF) + 0xDC00));
case 3:
return _fromCC(((0x0f & cccc.charCodeAt(0)) << 12)
| ((0x3f & cccc.charCodeAt(1)) << 6)
| (0x3f & cccc.charCodeAt(2)));
default:
return _fromCC(((0x1f & cccc.charCodeAt(0)) << 6)
| (0x3f & cccc.charCodeAt(1)));
}
};
/**
* @deprecated should have been internal use only.
* @param {string} src UTF-16 string
* @returns {string} UTF-8 string
*/
const btou = (b) => b.replace(re_btou, cb_btou);
/**
* polyfill version of `atob`
*/
const atobPolyfill = (asc) => {
// console.log('polyfilled');
asc = asc.replace(/\s+/g, '');
if (!b64re.test(asc))
throw new TypeError('malformed base64.');
asc += '=='.slice(2 - (asc.length & 3));
let u24, bin = '', r1, r2;
for (let i = 0; i < asc.length;) {
u24 = b64tab[asc.charAt(i++)] << 18
| b64tab[asc.charAt(i++)] << 12
| (r1 = b64tab[asc.charAt(i++)]) << 6
| (r2 = b64tab[asc.charAt(i++)]);
bin += r1 === 64 ? _fromCC(u24 >> 16 & 255)
: r2 === 64 ? _fromCC(u24 >> 16 & 255, u24 >> 8 & 255)
: _fromCC(u24 >> 16 & 255, u24 >> 8 & 255, u24 & 255);
}
return bin;
};
/**
* does what `window.atob` of web browsers do.
* @param {String} asc Base64-encoded string
* @returns {string} binary string
*/
const _atob = _hasatob ? (asc) => atob(_tidyB64(asc))
: _hasBuffer ? (asc) => Buffer.from(asc, 'base64').toString('binary')
: atobPolyfill;
//
const _toUint8Array = _hasBuffer
? (a) => _U8Afrom(Buffer.from(a, 'base64'))
: (a) => _U8Afrom(_atob(a), c => c.charCodeAt(0));
/**
* converts a Base64 string to a Uint8Array.
*/
const toUint8Array = (a) => _toUint8Array(_unURI(a));
//
const _decode = _hasBuffer
? (a) => Buffer.from(a, 'base64').toString('utf8')
: _TD
? (a) => _TD.decode(_toUint8Array(a))
: (a) => btou(_atob(a));
const _unURI = (a) => _tidyB64(a.replace(/[-_]/g, (m0) => m0 == '-' ? '+' : '/'));
/**
* converts a Base64 string to a UTF-8 string.
* @param {String} src Base64 string. Both normal and URL-safe are supported
* @returns {string} UTF-8 string
*/
const decode = (src) => _decode(_unURI(src));
/**
* check if a value is a valid Base64 string
* @param {String} src a value to check
*/
const isValid = (src) => {
if (typeof src !== 'string')
return false;
const s = src.replace(/\s+/g, '').replace(/={0,2}$/, '');
return !/[^\s0-9a-zA-Z\+/]/.test(s) || !/[^\s0-9a-zA-Z\-_]/.test(s);
};
//
const _noEnum = (v) => {
return {
value: v, enumerable: false, writable: true, configurable: true
};
};
/**
* extend String.prototype with relevant methods
*/
const extendString = function () {
const _add = (name, body) => Object.defineProperty(String.prototype, name, _noEnum(body));
_add('fromBase64', function () { return decode(this); });
_add('toBase64', function (urlsafe) { return encode(this, urlsafe); });
_add('toBase64URI', function () { return encode(this, true); });
_add('toBase64URL', function () { return encode(this, true); });
_add('toUint8Array', function () { return toUint8Array(this); });
};
/**
* extend Uint8Array.prototype with relevant methods
*/
const extendUint8Array = function () {
const _add = (name, body) => Object.defineProperty(Uint8Array.prototype, name, _noEnum(body));
_add('toBase64', function (urlsafe) { return fromUint8Array(this, urlsafe); });
_add('toBase64URI', function () { return fromUint8Array(this, true); });
_add('toBase64URL', function () { return fromUint8Array(this, true); });
};
/**
* extend Builtin prototypes with relevant methods
*/
const extendBuiltins = () => {
extendString();
extendUint8Array();
};
const gBase64 = {
version: version,
VERSION: VERSION,
atob: _atob,
atobPolyfill: atobPolyfill,
btoa: _btoa,
btoaPolyfill: btoaPolyfill,
fromBase64: decode,
toBase64: encode,
encode: encode,
encodeURI: encodeURI,
encodeURL: encodeURI,
utob: utob,
btou: btou,
decode: decode,
isValid: isValid,
fromUint8Array: fromUint8Array,
toUint8Array: toUint8Array,
extendString: extendString,
extendUint8Array: extendUint8Array,
extendBuiltins: extendBuiltins,
};
// makecjs:CUT //
export { version };
export { VERSION };
export { _atob as atob };
export { atobPolyfill };
export { _btoa as btoa };
export { btoaPolyfill };
export { decode as fromBase64 };
export { encode as toBase64 };
export { utob };
export { encode };
export { encodeURI };
export { encodeURI as encodeURL };
export { btou };
export { decode };
export { isValid };
export { fromUint8Array };
export { toUint8Array };
export { extendString };
export { extendUint8Array };
export { extendBuiltins };
// and finally,
export { gBase64 as Base64 };


@@ -59,6 +59,7 @@
 /* Scrollbar Width */
 ::-webkit-scrollbar {
+    height: 12px;
     width: 12px;
 }


@@ -234,7 +234,7 @@ let timeoutID = null;
 let lastInvocationTime = 0;
 let lastArgs = null;
 function do_something_but_not_too_frequently(min_interval, func) {
-    return function(...args) {
+    return function (...args) {
         lastArgs = args;
         const now = Date.now();
         if (!lastInvocationTime || (now - lastInvocationTime) >= min_interval) {
@@ -263,13 +263,8 @@ function chatbotContentChanged(attempt = 1, force = false) {
             gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
         }, i === 0 ? 0 : 200);
     }
-    const run_mermaid_render = do_something_but_not_too_frequently(1000, function () {
-        const blocks = document.querySelectorAll(`pre.mermaid, diagram-div`);
-        if (blocks.length == 0) { return; }
-        uml("mermaid");
-    });
-    run_mermaid_render();
+    // we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
 }
@@ -672,9 +667,9 @@ function limit_scroll_position() {
     let scrollableDiv = document.querySelector('#gpt-chatbot > div.wrap');
     scrollableDiv.addEventListener('wheel', function (e) {
         let preventScroll = false;
-        if (e.deltaX != 0) { prevented_offset = 0; return;}
-        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return;}
-        if (e.deltaY < 0) { prevented_offset = 0; return;}
+        if (e.deltaX != 0) { prevented_offset = 0; return; }
+        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return; }
+        if (e.deltaY < 0) { prevented_offset = 0; return; }
         if (e.deltaY > 0 && this.scrollHeight - this.clientHeight - this.scrollTop <= 1) { preventScroll = true; }
         if (preventScroll) {
@@ -713,3 +708,161 @@ function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
     // setInterval(function () { uml("mermaid") }, 5000); // 每50毫秒执行一次
 }
function loadLive2D() {
try {
$("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
$('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
$.ajax({
url: "file=themes/waifu_plugin/waifu-tips.js", dataType: "script", cache: true, success: function () {
$.ajax({
url: "file=themes/waifu_plugin/live2d.js", dataType: "script", cache: true, success: function () {
/* 可直接修改部分参数 */
live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
live2d_settings['modelId'] = 3; // 默认模型 ID
live2d_settings['modelTexturesId'] = 44; // 默认材质 ID
live2d_settings['modelStorage'] = false; // 不储存模型 ID
live2d_settings['waifuSize'] = '210x187';
live2d_settings['waifuTipsSize'] = '187x52';
live2d_settings['canSwitchModel'] = true;
live2d_settings['canSwitchTextures'] = true;
live2d_settings['canSwitchHitokoto'] = false;
live2d_settings['canTakeScreenshot'] = false;
live2d_settings['canTurnToHomePage'] = false;
live2d_settings['canTurnToAboutPage'] = false;
live2d_settings['showHitokoto'] = false; // 显示一言
live2d_settings['showF12Status'] = false; // 显示加载状态
live2d_settings['showF12Message'] = false; // 显示看板娘消息
live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
/* 在 initModel 前添加 */
initModel("file=themes/waifu_plugin/waifu-tips.json");
}
});
}
});
} catch (err) { console.log("[Error] JQuery is not defined.") }
}
function get_checkbox_selected_items(elem_id){
display_panel_arr = [];
document.getElementById(elem_id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
// Get the span text
const spanText = label.querySelector('span').textContent;
// Get the input value
const checked = label.querySelector('input').checked;
if (checked) {
display_panel_arr.push(spanText)
}
});
return display_panel_arr;
}
function set_checkbox(key, bool, set_twice=false) {
set_success = false;
elem_ids = ["cbsc", "cbs"]
elem_ids.forEach(id => {
document.getElementById(id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
// Get the span text
const spanText = label.querySelector('span').textContent;
if (spanText === key) {
if (bool){
label.classList.add('selected');
} else {
if (label.classList.contains('selected')) {
label.classList.remove('selected');
}
}
if (set_twice){
setTimeout(() => {
if (bool){
label.classList.add('selected');
} else {
if (label.classList.contains('selected')) {
label.classList.remove('selected');
}
}
}, 5000);
}
label.querySelector('input').checked = bool;
set_success = true;
return
}
});
});
if (!set_success){
console.log("设置checkbox失败,没有找到对应的key")
}
}
function apply_cookie_for_checkbox(dark) {
// console.log("apply_cookie_for_checkboxes")
let searchString = "输入清除键";
let bool_value = "False";
////////////////// darkmode ///////////////////
if (getCookie("js_darkmode_cookie")) {
dark = getCookie("js_darkmode_cookie")
}
dark = dark == "True";
if (document.querySelectorAll('.dark').length) {
if (!dark) {
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
}
} else {
if (dark) {
document.querySelector('body').classList.add('dark');
}
}
////////////////////// clearButton ///////////////////////////
if (getCookie("js_clearbtn_show_cookie")) {
// have cookie
bool_value = getCookie("js_clearbtn_show_cookie")
bool_value = bool_value == "True";
searchString = "输入清除键";
if (bool_value) {
let clearButton = document.getElementById("elem_clear");
let clearButton2 = document.getElementById("elem_clear2");
clearButton.style.display = "block";
clearButton2.style.display = "block";
set_checkbox(searchString, true);
} else {
let clearButton = document.getElementById("elem_clear");
let clearButton2 = document.getElementById("elem_clear2");
clearButton.style.display = "none";
clearButton2.style.display = "none";
set_checkbox(searchString, false);
}
}
////////////////////// live2d ///////////////////////////
if (getCookie("js_live2d_show_cookie")) {
// have cookie
searchString = "添加Live2D形象";
bool_value = getCookie("js_live2d_show_cookie");
bool_value = bool_value == "True";
if (bool_value) {
loadLive2D();
set_checkbox(searchString, true);
} else {
$('.waifu').hide();
set_checkbox(searchString, false);
}
} else {
// do not have cookie
// get conf
display_panel_arr = get_checkbox_selected_items("cbsc");
searchString = "添加Live2D形象";
if (display_panel_arr.includes(searchString)) {
loadLive2D();
} else {
}
}
}
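do_something_but_not_too_frequently, patched near the top of this file, is a leading-plus-trailing-edge throttle: call immediately when the interval has elapsed, otherwise remember only the newest arguments and schedule a single deferred call. The pattern is language-independent; a Python sketch of the same logic using threading.Timer:

import threading, time

def throttle(min_interval: float, func):
    state = {"last": 0.0, "timer": None, "args": None}
    def wrapper(*args):
        state["args"] = args                      # always remember the newest args
        now = time.monotonic()
        if now - state["last"] >= min_interval:
            state["last"] = now
            func(*state["args"])                  # leading edge: run immediately
        elif state["timer"] is None:
            def deferred():
                state["last"] = time.monotonic()
                state["timer"] = None
                func(*state["args"])              # trailing edge: run once with latest args
            state["timer"] = threading.Timer(min_interval - (now - state["last"]), deferred)
            state["timer"].start()
    return wrapper

log = throttle(1.0, print)
for i in range(5):
    log("render", i); time.sleep(0.3)   # fires far fewer than 5 times
time.sleep(1.2)                          # let the trailing call flush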


@@ -5,17 +5,14 @@ def get_common_html_javascript_code():
     js = "\n"
     for jsf in [
         "file=themes/common.js",
-        "file=themes/mermaid.min.js",
-        "file=themes/mermaid_loader.js",
     ]:
         js += f"""<script src="{jsf}"></script>\n"""

     # 添加Live2D
     if ADD_WAIFU:
         for jsf in [
-            "file=docs/waifu_plugin/jquery.min.js",
-            "file=docs/waifu_plugin/jquery-ui.min.js",
-            "file=docs/waifu_plugin/autoload.js",
+            "file=themes/waifu_plugin/jquery.min.js",
+            "file=themes/waifu_plugin/jquery-ui.min.js",
         ]:
             js += f"""<script src="{jsf}"></script>\n"""
     return js

themes/mermaid.min.js (vendored, 1590 lines): diff hidden because one or more lines are too long.

@@ -1,55 +1 @@
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
-import { deflate, inflate } from '/file=themes/pako.esm.mjs';
import { toUint8Array, fromUint8Array, toBase64, fromBase64 } from '/file=themes/base64.mjs';
const base64Serde = {
serialize: (state) => {
return toBase64(state, true);
},
deserialize: (state) => {
return fromBase64(state);
}
};
const pakoSerde = {
serialize: (state) => {
const data = new TextEncoder().encode(state);
const compressed = deflate(data, { level: 9 });
return fromUint8Array(compressed, true);
},
deserialize: (state) => {
const data = toUint8Array(state);
return inflate(data, { to: 'string' });
}
};
const serdes = {
base64: base64Serde,
pako: pakoSerde
};
export const serializeState = (state, serde = 'pako') => {
if (!(serde in serdes)) {
throw new Error(`Unknown serde type: ${serde}`);
}
const json = JSON.stringify(state);
const serialized = serdes[serde].serialize(json);
return `${serde}:${serialized}`;
};
const deserializeState = (state) => {
let type, serialized;
if (state.includes(':')) {
let tempType;
[tempType, serialized] = state.split(':');
if (tempType in serdes) {
type = tempType;
} else {
throw new Error(`Unknown serde type: ${tempType}`);
}
} else {
type = 'base64';
serialized = state;
}
const json = serdes[type].deserialize(serialized);
return JSON.parse(json);
};


@@ -1,197 +1 @@
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
-const uml = async className => {
// Custom element to encapsulate Mermaid content.
class MermaidDiv extends HTMLElement {
/**
* Creates a special Mermaid div shadow DOM.
* Works around issues of shared IDs.
* @return {void}
*/
constructor() {
super()
// Create the Shadow DOM and attach style
const shadow = this.attachShadow({ mode: "open" })
const style = document.createElement("style")
style.textContent = `
:host {
display: block;
line-height: initial;
font-size: 16px;
}
div.diagram {
margin: 0;
overflow: visible;
}`
shadow.appendChild(style)
}
}
if (typeof customElements.get("diagram-div") === "undefined") {
customElements.define("diagram-div", MermaidDiv)
}
const getFromCode = parent => {
// Handles <pre><code> text extraction.
let text = ""
for (let j = 0; j < parent.childNodes.length; j++) {
const subEl = parent.childNodes[j]
if (subEl.tagName.toLowerCase() === "code") {
for (let k = 0; k < subEl.childNodes.length; k++) {
const child = subEl.childNodes[k]
const whitespace = /^\s*$/
if (child.nodeName === "#text" && !(whitespace.test(child.nodeValue))) {
text = child.nodeValue
break
}
}
}
}
return text
}
function createOrUpdateHyperlink(parentElement, linkText, linkHref) {
// Search for an existing anchor element within the parentElement
let existingAnchor = parentElement.querySelector("a");
// Check if an anchor element already exists
if (existingAnchor) {
// Update the hyperlink reference if it's different from the current one
if (existingAnchor.href !== linkHref) {
existingAnchor.href = linkHref;
}
// Update the target attribute to ensure it opens in a new tab
existingAnchor.target = '_blank';
// If the text must be dynamic, uncomment and use the following line:
// existingAnchor.textContent = linkText;
} else {
// If no anchor exists, create one and append it to the parentElement
let anchorElement = document.createElement("a");
anchorElement.href = linkHref; // Set hyperlink reference
anchorElement.textContent = linkText; // Set text displayed
anchorElement.target = '_blank'; // Ensure it opens in a new tab
parentElement.appendChild(anchorElement); // Append the new anchor element to the parent
}
}
function removeLastLine(str) {
// 将字符串按换行符分割成数组
var lines = str.split('\n');
lines.pop();
// 将数组重新连接成字符串,并按换行符连接
var result = lines.join('\n');
return result;
}
// 给出配置 Provide a default config in case one is not specified
const defaultConfig = {
startOnLoad: false,
theme: "default",
flowchart: {
htmlLabels: false
},
er: {
useMaxWidth: false
},
sequence: {
useMaxWidth: false,
noteFontWeight: "14px",
actorFontSize: "14px",
messageFontSize: "16px"
}
}
if (document.body.classList.contains("dark")) {
defaultConfig.theme = "dark"
}
const Module = await import('/file=themes/mermaid_editor.js');
function do_render(block, code, codeContent, cnt) {
var rendered_content = mermaid.render(`_diagram_${cnt}`, code);
////////////////////////////// 记录有哪些代码已经被渲染了 ///////////////////////////////////
let codeFinishRenderElement = block.querySelector("code_finish_render"); // 如果block下已存在code_already_rendered元素,则获取它
if (codeFinishRenderElement) { // 如果block下已存在code_already_rendered元素
codeFinishRenderElement.style.display = "none";
} else {
// 如果不存在code_finish_render元素,则将code元素中的内容添加到新创建的code_finish_render元素中
let codeFinishRenderElementNew = document.createElement("code_finish_render"); // 创建一个新的code_already_rendered元素
codeFinishRenderElementNew.style.display = "none";
codeFinishRenderElementNew.textContent = "";
block.appendChild(codeFinishRenderElementNew); // 将新创建的code_already_rendered元素添加到block中
codeFinishRenderElement = codeFinishRenderElementNew;
}
////////////////////////////// 创建一个用于渲染的容器 ///////////////////////////////////
let mermaidRender = block.querySelector(".mermaid_render"); // 尝试获取已存在的<div class='mermaid_render'>
if (!mermaidRender) {
mermaidRender = document.createElement("div"); // 不存在,创建新的<div class='mermaid_render'>
mermaidRender.classList.add("mermaid_render");
block.appendChild(mermaidRender); // 将新创建的元素附加到block
}
mermaidRender.innerHTML = rendered_content
codeFinishRenderElement.textContent = code // 标记已经渲染的部分
////////////////////////////// 创建一个“点击这里编辑脑图” ///////////////////////////////
let pako_encode = Module.serializeState({
"code": codeContent,
"mermaid": "{\n \"theme\": \"default\"\n}",
"autoSync": true,
"updateDiagram": false
});
createOrUpdateHyperlink(block, "点击这里编辑脑图", "https://mermaid.live/edit#" + pako_encode)
}
// 加载配置 Load up the config
mermaid.mermaidAPI.globalReset() // 全局复位
const config = (typeof mermaidConfig === "undefined") ? defaultConfig : mermaidConfig
mermaid.initialize(config)
// 查找需要渲染的元素 Find all of our Mermaid sources and render them.
const blocks = document.querySelectorAll(`pre.mermaid`);
for (let i = 0; i < blocks.length; i++) {
var block = blocks[i]
////////////////////////////// 如果代码没有发生变化,就不渲染了 ///////////////////////////////////
var code = getFromCode(block);
let code_elem = block.querySelector("code");
let codeContent = code_elem.textContent; // 获取code元素中的文本内容
// 判断codeContent是否包含'<gpt_academic_hide_mermaid_code>',如果是,则使code_elem隐藏
if (codeContent.indexOf('<gpt_academic_hide_mermaid_code>') !== -1) {
code_elem.style.display = "none";
}
// 如果block下已存在code_already_rendered元素,则获取它
let codePendingRenderElement = block.querySelector("code_pending_render");
if (codePendingRenderElement) { // 如果block下已存在code_pending_render元素
codePendingRenderElement.style.display = "none";
if (codePendingRenderElement.textContent !== codeContent) {
codePendingRenderElement.textContent = codeContent; // 如果现有的code_pending_render元素中的内容与code元素中的内容不同,更新code_pending_render元素中的内容
}
else {
continue; // 如果相同,就不处理了
}
} else { // 如果不存在code_pending_render元素,则将code元素中的内容添加到新创建的code_pending_render元素中
let codePendingRenderElementNew = document.createElement("code_pending_render"); // 创建一个新的code_already_rendered元素
codePendingRenderElementNew.style.display = "none";
codePendingRenderElementNew.textContent = codeContent;
block.appendChild(codePendingRenderElementNew); // 将新创建的code_pending_render元素添加到block中
codePendingRenderElement = codePendingRenderElementNew;
}
////////////////////////////// 在这里才真正开始渲染 ///////////////////////////////////
try {
do_render(block, code, codeContent, i);
// console.log("渲染", codeContent);
} catch (err) {
try {
var lines = code.split('\n'); if (lines.length < 2) { continue; }
do_render(block, removeLastLine(code), codeContent, i);
// console.log("渲染", codeContent);
} catch (err) {
console.log("以下代码不能渲染", code, removeLastLine(code), err);
}
}
}
}

(file diff too large to display)


@@ -46,8 +46,7 @@ cookie相关工具函数
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """

-def init_cookie(cookies, chatbot):
+def init_cookie(cookies):
     # 为每一位访问的用户赋予一个独一无二的uuid编码
     cookies.update({"uuid": uuid.uuid4()})
     return cookies
@@ -91,31 +90,107 @@ js_code_for_css_changing = """(css) => {
 }
 """

-js_code_for_darkmode_init = """(dark) => {
-    dark = dark == "True";
-    if (document.querySelectorAll('.dark').length) {
-        if (!dark){
-            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
-        }
-    } else {
-        if (dark){
-            document.querySelector('body').classList.add('dark');
-        }
-    }
-}
-"""
-
 js_code_for_toggle_darkmode = """() => {
     if (document.querySelectorAll('.dark').length) {
+        setCookie("js_darkmode_cookie", "False", 365);
         document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
     } else {
+        setCookie("js_darkmode_cookie", "True", 365);
         document.querySelector('body').classList.add('dark');
     }
     document.querySelectorAll('code_pending_render').forEach(code => {code.remove();})
 }"""

-js_code_for_persistent_cookie_init = """(persistent_cookie) => {
-    return getCookie("persistent_cookie");
+js_code_for_persistent_cookie_init = """(py_pickle_cookie, cookie) => {
+    return [getCookie("py_pickle_cookie"), cookie];
 }
 """
js_code_reset = """
(a,b,c)=>{
return [[], [], "已重置"];
}
"""
js_code_clear = """
(a,b)=>{
return ["", ""];
}
"""
js_code_show_or_hide = """
(display_panel_arr)=>{
setTimeout(() => {
// get conf
display_panel_arr = get_checkbox_selected_items("cbs");
////////////////////// 输入清除键 ///////////////////////////
let searchString = "输入清除键";
let ele = "none";
if (display_panel_arr.includes(searchString)) {
let clearButton = document.getElementById("elem_clear");
let clearButton2 = document.getElementById("elem_clear2");
clearButton.style.display = "block";
clearButton2.style.display = "block";
setCookie("js_clearbtn_show_cookie", "True", 365);
} else {
let clearButton = document.getElementById("elem_clear");
let clearButton2 = document.getElementById("elem_clear2");
clearButton.style.display = "none";
clearButton2.style.display = "none";
setCookie("js_clearbtn_show_cookie", "False", 365);
}
////////////////////// 基础功能区 ///////////////////////////
searchString = "基础功能区";
if (display_panel_arr.includes(searchString)) {
ele = document.getElementById("basic-panel");
ele.style.display = "block";
} else {
ele = document.getElementById("basic-panel");
ele.style.display = "none";
}
////////////////////// 函数插件区 ///////////////////////////
searchString = "函数插件区";
if (display_panel_arr.includes(searchString)) {
ele = document.getElementById("plugin-panel");
ele.style.display = "block";
} else {
ele = document.getElementById("plugin-panel");
ele.style.display = "none";
}
}, 50);
}
"""
js_code_show_or_hide_group2 = """
(display_panel_arr)=>{
setTimeout(() => {
// console.log("display_panel_arr");
// get conf
display_panel_arr = get_checkbox_selected_items("cbsc");
////////////////////// 添加Live2D形象 ///////////////////////////
let searchString = "添加Live2D形象";
let ele = "none";
if (display_panel_arr.includes(searchString)) {
setCookie("js_live2d_show_cookie", "True", 365);
loadLive2D();
} else {
setCookie("js_live2d_show_cookie", "False", 365);
$('.waifu').hide();
}
}, 50);
} }
""" """

(Image asset updated: 56 KiB before, 56 KiB after; binary diff not displayable.)

@@ -92,7 +92,7 @@ String.prototype.render = function(context) {
 };
 var re = /x/;
-console.log(re);
+// console.log(re);
 function empty(obj) {return typeof obj=="undefined"||obj==null||obj==""?true:false}
 function getRandText(text) {return Array.isArray(text) ? text[Math.floor(Math.random() * text.length + 1)-1] : text}
@@ -120,7 +120,7 @@ function hideMessage(timeout) {
 function initModel(waifuPath, type) {
     /* console welcome message */
-    eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));
+    // eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));
     /* 判断 JQuery */
     if (typeof($.ajax) != 'function') typeof(jQuery.ajax) == 'function' ? window.$ = jQuery : console.log('[Error] JQuery is not defined.');


@@ -44,8 +44,8 @@
 { "selector": ".container a[href^='http']", "text": ["要看看 <span style=\"color:#0099cc;\">{text}</span> 么?"] },
 { "selector": ".fui-home", "text": ["点击前往首页,想回到上一页可以使用浏览器的后退功能哦"] },
 { "selector": ".fui-chat", "text": ["一言一语,一颦一笑。一字一句,一颗赛艇。"] },
-{ "selector": ".fui-eye", "text": ["嗯··· 要切换 看板娘 吗?"] },
-{ "selector": ".fui-user", "text": ["喜欢换装 Play 吗?"] },
+{ "selector": ".fui-eye", "text": ["嗯··· 要切换 Live2D形象 吗?"] },
+{ "selector": ".fui-user", "text": ["喜欢换装吗?"] },
 { "selector": ".fui-photo", "text": ["要拍张纪念照片吗?"] },
 { "selector": ".fui-info-circle", "text": ["这里有关于我的信息呢"] },
 { "selector": ".fui-cross", "text": ["你不喜欢我了吗..."] },
@@ -77,14 +77,28 @@
 "看什么看(*^▽^*)",
 "焦虑时,吃顿大餐心情就好啦^_^",
 "你这个年纪,怎么睡得着觉的你^_^",
-"修改ADD_WAIFU=False,我就不再打扰你了~",
-"经常去github看看我们的更新吧,也许有好玩的新功能呢。",
+"打开“界面外观”菜单,可选择关闭Live2D形象",
+"经常去Github看看我们的更新吧,也许有好玩的新功能呢。",
 "试试本地大模型吧,有的也很强大的哦。",
 "很多强大的函数插件隐藏在下拉菜单中呢。",
-"红色的插件使用之前需要把文件上传进去哦。",
-"想添加功能按钮吗?读读readme很容易就学会啦。",
+"插件使用之前需要把文件上传进去哦。",
+"上传文件时,可以把文件直接拖进对话中的哦。",
+"上传文件时,可以文件或图片粘贴到输入区哦。",
+"想添加基础功能按钮吗?打开“界面外观”菜单进行自定义吧!",
 "敏感或机密的信息,不可以问AI的哦",
-"LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?"
+"LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?",
+"休息一下,起来走动走动吧!",
+"今天的阳光也很不错哦,不妨外出晒晒。",
+"笑一笑,生活更美好!",
+"遇到难题,深呼吸就能解决一半。",
+"偶尔换换环境,灵感也许就来了。",
+"小憩片刻,醒来便是满血复活。",
+"技术改变生活,让我们共同进步。",
+"保持好奇心,探索未知的世界。",
+"遇到困难,记得还有朋友和AI陪在你身边。",
+"劳逸结合,方能长久。",
+"偶尔给自己放个假,放松心情。",
+"不要害怕失败,勇敢尝试才能成功。"
 ] }
 ],
 "click": [


@@ -25,6 +25,10 @@ from shared_utils.text_mask import apply_gpt_academic_string_mask
 from shared_utils.text_mask import build_gpt_academic_masked_string
 from shared_utils.text_mask import apply_gpt_academic_string_mask_langbased
 from shared_utils.text_mask import build_gpt_academic_masked_string_langbased
+from shared_utils.handle_upload import html_local_file
+from shared_utils.handle_upload import html_local_img
+from shared_utils.handle_upload import file_manifest_filter_type
+from shared_utils.handle_upload import extract_archive

 pj = os.path.join
 default_user_name = "default_user"
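With this change, the four upload helpers are imported from shared_utils.handle_upload rather than defined in this file; their old in-file definitions are deleted in the hunks below. A quick sketch (not part of the diff) of the extract_archive contract, which the move leaves unchanged; paths are hypothetical:

    from shared_utils.handle_upload import extract_archive

    # Dispatches on the file extension (.zip, .tar/.gz/.bz2, .rar, .7z) and
    # returns "" on success, or an error string when optional dependencies
    # are missing (pip install rarfile / pip install py7zr).
    err = extract_archive("private_upload/sample.zip", dest_dir="private_upload/unzipped")
    if err:
        print(err)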
@@ -329,54 +333,6 @@ def find_free_port():
     return s.getsockname()[1]

-def extract_archive(file_path, dest_dir):
-    import zipfile
-    import tarfile
-    import os
-
-    # Get the file extension of the input file
-    file_extension = os.path.splitext(file_path)[1]
-
-    # Extract the archive based on its extension
-    if file_extension == ".zip":
-        with zipfile.ZipFile(file_path, "r") as zipobj:
-            zipobj.extractall(path=dest_dir)
-            print("Successfully extracted zip archive to {}".format(dest_dir))
-    elif file_extension in [".tar", ".gz", ".bz2"]:
-        with tarfile.open(file_path, "r:*") as tarobj:
-            tarobj.extractall(path=dest_dir)
-            print("Successfully extracted tar archive to {}".format(dest_dir))
-    # 第三方库,需要预先pip install rarfile
-    # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以
-    elif file_extension == ".rar":
-        try:
-            import rarfile
-            with rarfile.RarFile(file_path) as rf:
-                rf.extractall(path=dest_dir)
-            print("Successfully extracted rar archive to {}".format(dest_dir))
-        except:
-            print("Rar format requires additional dependencies to install")
-            return "\n\n解压失败! 需要安装pip install rarfile来解压rar文件。建议使用zip压缩格式。"
-    # 第三方库,需要预先pip install py7zr
-    elif file_extension == ".7z":
-        try:
-            import py7zr
-            with py7zr.SevenZipFile(file_path, mode="r") as f:
-                f.extractall(path=dest_dir)
-            print("Successfully extracted 7z archive to {}".format(dest_dir))
-        except:
-            print("7z format requires additional dependencies to install")
-            return "\n\n解压失败! 需要安装pip install py7zr来解压7z文件"
-    else:
-        return ""
-    return ""

 def find_recent_files(directory):
     """
     me: find files that is created with in one minutes under a directory with python, write a function
@@ -474,39 +430,8 @@ def del_outdated_uploads(outdate_time_seconds, target_path_base=None):
     return

-def html_local_file(file):
-    base_path = os.path.dirname(__file__)  # 项目目录
-    if os.path.exists(str(file)):
-        file = f'file={file.replace(base_path, ".")}'
-    return file
-
-def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
-    style = ""
-    if max_width is not None:
-        style += f"max-width: {max_width};"
-    if max_height is not None:
-        style += f"max-height: {max_height};"
-    __file = html_local_file(__file)
-    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
-    if md:
-        a = f"![{__file}]({__file})"
-    return a
-
-def file_manifest_filter_type(file_list, filter_: list = None):
-    new_list = []
-    if not filter_:
-        filter_ = ["png", "jpg", "jpeg"]
-    for file in file_list:
-        if str(os.path.basename(file)).split(".")[-1] in filter_:
-            new_list.append(html_local_img(file, md=False))
-        else:
-            new_list.append(file)
-    return new_list
-
-def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
+def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False, omit_path=None):
     """
     Args:
         head: 表头:[]
@@ -530,6 +455,9 @@ def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
     for i in range(max_len):
         row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
         row_data = file_manifest_filter_type(row_data, filter_=None)
+        # for dat in row_data:
+        #     if (omit_path is not None) and os.path.exists(dat):
+        #         dat = os.path.relpath(dat, omit_path)
         tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"

     return tabs_list
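Note that the new omit_path argument has no effect yet: the relative-path rewrite above is committed commented-out (and as written it would only rebind the loop variable, not the list entries). For reference, a sketch (not part of the diff) of how the upload message below drives this helper; the file names are hypothetical and the rendered table is approximate, since only part of the function body is visible in this hunk:

    rows = ["private_upload/a.txt", "private_upload/b.pdf"]
    md = to_markdown_tabs(head=["文件"], tabs=[rows], omit_path="private_upload")
    # Yields a one-column markdown table, roughly:
    # | 文件 |
    # | :---: |
    # | private_upload/a.txt |
    # | private_upload/b.pdf |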
@@ -565,15 +493,21 @@ def on_file_uploaded(
     )

     # 整理文件集合 输出消息
-    moved_files = [fp for fp in glob.glob(f"{target_path_base}/**/*", recursive=True)]
-    moved_files_str = to_markdown_tabs(head=["文件"], tabs=[moved_files])
+    files = glob.glob(f"{target_path_base}/**/*", recursive=True)
+    moved_files = [fp for fp in files]
+    max_file_to_show = 10
+    if len(moved_files) > max_file_to_show:
+        moved_files = moved_files[:max_file_to_show//2] + [f'... ( 📌省略{len(moved_files) - max_file_to_show}个文件的显示 ) ...'] + \
+                      moved_files[-max_file_to_show//2:]
+    moved_files_str = to_markdown_tabs(head=["文件"], tabs=[moved_files], omit_path=target_path_base)
     chatbot.append(
         [
             "我上传了文件,请查收",
-            f"[Local Message] 收到以下文件: \n\n{moved_files_str}"
-            + f"\n\n调用路径参数已自动修正到: \n\n{txt}"
-            + f"\n\n现在您点击任意函数插件时,以上文件将被作为输入参数"
-            + upload_msg,
+            f"[Local Message] 收到以下文件 (上传到路径:{target_path_base}: " +
+            f"\n\n{moved_files_str}" +
+            f"\n\n调用路径参数已自动修正到: \n\n{txt}" +
+            f"\n\n现在您点击任意函数插件时,以上文件将被作为输入参数" +
+            upload_msg,
         ]
     )
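The new cap keeps the chat message readable when a large archive is unpacked. A standalone sketch (not part of the diff) of the truncation rule, with hypothetical file names:

    moved_files = [f"file_{i}.txt" for i in range(23)]
    max_file_to_show = 10
    if len(moved_files) > max_file_to_show:
        # Keep the first 5 and last 5 entries, with a placeholder noting
        # how many entries were omitted (13 here).
        moved_files = (moved_files[:max_file_to_show//2]
                       + [f'... ( 📌省略{len(moved_files) - max_file_to_show}个文件的显示 ) ...']
                       + moved_files[-max_file_to_show//2:])
    print(len(moved_files))  # 11: five head entries, one placeholder, five tail entries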


@@ -1,5 +1,5 @@
 {
-  "version": 3.71,
+  "version": 3.72,
   "show_feature": true,
-  "new_feature": "用绘图功能增强部分插件 <-> 基础功能区支持自动切换中英提示词 <-> 支持Mermaid绘图库让大模型绘制脑图 <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区"
+  "new_feature": "支持切换多个智谱ai模型 <-> 用绘图功能增强部分插件 <-> 基础功能区支持自动切换中英提示词 <-> 支持Mermaid绘图库让大模型绘制脑图 <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区"
 }
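This JSON evidently serves as the project's update manifest. A minimal sketch (not part of the diff) of how a client might consume it; the URL is hypothetical:

    import json
    import urllib.request

    # Hypothetical endpoint serving the manifest shown above.
    with urllib.request.urlopen("https://example.com/version") as resp:
        info = json.load(resp)
    if info["show_feature"] and info["version"] > 3.71:
        print("What's new:", info["new_feature"])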