Comparing commits


101 commits

Author SHA1 Message Commit date
binary-husky
8413fb15ba optimize welcome page 2024-12-18 23:35:25 +08:00
binary-husky
72b2ce9b62 ollama patch 2024-12-18 23:05:55 +08:00
binary-husky
f43ef909e2 roll version to 3.91 2024-12-18 22:56:41 +08:00
binary-husky
9651ad488f Merge branch 'master' into frontier 2024-12-18 22:27:12 +08:00
binary-husky
81da7bb1a5 remove welcome card when layout overflows 2024-12-18 17:48:02 +08:00
binary-husky
4127162ee7 add tts test 2024-12-18 17:47:23 +08:00
binary-husky
98e5cb7b77 update readme 2024-12-09 23:57:10 +08:00
binary-husky
c88d8047dd cookie storage to local storage 2024-12-09 23:52:02 +08:00
binary-husky
e4bebea28d update requirements 2024-12-09 23:40:23 +08:00
YE Ke 叶柯
294df6c2d5 Add ChatGLM4 local deployment support and refactor ChatGLM bridge's path configuration (#2062)
* feat(request_llms and config.py): ChatGLM4 Deployment

Add support for local deployment of the ChatGLM4 model

* 🦄 refactor(bridge_chatglm3.py): ChatGLM3 model path

Added ChatGLM3 path customization (in config.py).
Removed unused quantization model options that had been commented out

---------

Co-authored-by: MarkDeia <17290550+MarkDeia@users.noreply.github.com>
2024-12-07 23:43:51 +08:00
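A minimal config.py sketch of the options this PR introduces, based on the config.py diff further down this page (the values are illustrative, not prescriptive):

```python
# config.py sketch (option names taken from the config.py diff below; values illustrative)
LLM_MODEL = "chatglm4"                             # or "chatglm3" for the older local model
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat"   # a model id, or a local path such as "/home/hmp/ChatGLM3-6B/"
```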
Zhenhong Du
239894544e Add support for grok-beta model from x.ai (#2060)
* Update config.py

add support for `grok-beta` model

* Update bridge_all.py

add support for `grok-beta` model
2024-12-07 23:41:53 +08:00
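The matching config.py changes appear in the diff below; a minimal sketch of opting in to the model (the key value is a placeholder):

```python
# config.py sketch: enabling grok-beta (names taken from the config.py diff below)
AVAIL_LLM_MODELS = ["gpt-4o", "grok-beta"]   # "grok-beta" is listed among the commented extras
GROK_API_KEY = "xai-..."                     # placeholder x.ai API key
```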
Menghuan
ed5fc84d4e Add environment packaging for Windows and a one-click launch script (#2068)
* Add automatic packaging of the environment dependencies on Windows

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-07 23:41:02 +08:00
Menghuan
e3f84069ee Improve Doc2X requests and add xelatex compilation support (#2058)
* Clean up the doc2x request function formatting

* Update the middle section

* Add a doc2x timeout setting and support for xelatex compilation

* Fix bugs and add detection of the xelatex installation

* Improve stability on weak networks

* Fix an issue where `_` in model names failed to display

* add xelatex logs

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-07 23:23:59 +08:00
binary-husky
7bf094b6b6 remove 2024-12-07 22:43:03 +08:00
binary-husky
57d7dc33d3 sync common.js 2024-12-07 17:10:01 +08:00
binary-husky
94ccd77480 remove gen restore btn 2024-12-07 16:22:29 +08:00
binary-husky
48e53cba05 update gradio 2024-12-07 16:18:05 +08:00
binary-husky
e9a7f9439f upgrade gradio 2024-12-07 15:59:30 +08:00
binary-husky
a88b119bf0 change urls 2024-12-05 22:13:59 +08:00
binary-husky
eee8115434 add a config note 2024-12-04 23:55:22 +08:00
binary-husky
4f6a272113 remove keyword extraction 2024-12-04 01:33:31 +08:00
binary-husky
cf3dd5ddb6 add fail fallback option for media plugin 2024-12-04 01:06:12 +08:00
binary-husky
f6f10b7230 media plugin update 2024-12-04 00:36:34 +08:00
binary-husky
bd7b219e8f update web search functionality 2024-12-02 01:55:01 +08:00
binary-husky
e62decac21 change some open fn encoding to utf-8 2024-11-19 15:53:50 +00:00
binary-husky
588b22e039 comment remove 2024-11-19 15:05:48 +00:00
binary-husky
ef18aeda81 adjust rag 2024-11-19 14:59:50 +00:00
binary-husky
3520131ca2 public media gpt 2024-11-18 18:38:49 +00:00
binary-husky
ff5901d8c0 Merge branch 'master' into frontier 2024-11-17 18:16:19 +00:00
binary-husky
2305576410 unify mutex button manifest 2024-11-17 18:14:45 +00:00
binary-husky
52f23c505c media-gpt update 2024-11-17 17:45:53 +00:00
binary-husky
34cc484635 chatgpt-4o-latest 2024-11-11 15:58:57 +00:00
binary-husky
d152f62894 renamed plugins 2024-11-11 14:55:05 +00:00
binary-husky
ca35f56f9b fix: media gpt upgrade 2024-11-11 14:48:29 +00:00
binary-husky
d616fd121a update experimental media agent 2024-11-10 16:42:31 +00:00
binary-husky
f3fda0d3fc Merge branch 'master' into frontier 2024-11-10 13:41:44 +00:00
binary-husky
197287fc30 Enhance archive extraction with error handling for tar and gzip formats 2024-11-09 10:10:46 +00:00
Bingchen Jiang
c37fcc9299 Adding support to new openai apikey format (#2030) 2024-11-09 13:41:19 +08:00
binary-husky
91f5e6b8f7 resolve pickle security issue 2024-11-04 13:49:49 +00:00
hcy2206
4f0851f703 Add support for glm-4-plus (#2014)
* Add support for iFlytek Spark 4.0 (讯飞星火)

* Create github action sync.yml

* Add support for Zhipu glm-4-plus

* feat: change arxiv io param

* catch comment source code exception

* upgrade auto comment

* add security patch

---------

Co-authored-by: GH Action - Upstream Sync <action@github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-11-03 22:41:16 +08:00
binary-husky
2821f27756 add security patch 2024-11-03 14:34:17 +00:00
binary-husky
8f91a048a8 dfa algo imp 2024-11-03 09:39:14 +00:00
binary-husky
58eac38b4d Merge branch 'master' into frontier 2024-10-30 13:42:17 +00:00
binary-husky
180550b8f0 upgrade auto comment 2024-10-30 13:37:35 +00:00
binary-husky
7497dcb852 catch comment source code exception 2024-10-30 11:40:47 +00:00
binary-husky
23ef2ffb22 feat: change arxiv io param 2024-10-27 16:54:29 +00:00
binary-husky
848d0f65c7 share paper network beta 2024-10-27 16:08:25 +00:00
Menghuan1918
f0b0364f74 Fix and improve the build-with-latex Docker build (#2020)
* Improve the build files

* Fix issues

* Update the docker comments and test the pull size
2024-10-27 23:17:03 +08:00
binary-husky
d7f0cbe68e Merge branch 'master' into frontier 2024-10-21 14:31:25 +00:00
binary-husky
69f3755682 adjust max_token_limit for pdf translation plugin 2024-10-21 14:31:11 +00:00
binary-husky
04c9077265 Merge branch 'papershare_beta' into frontier 2024-10-21 14:06:52 +00:00
binary-husky
6afd7db1e3 Merge branch 'master' into frontier 2024-10-21 14:06:23 +00:00
binary-husky
4727113243 update doc2x functions 2024-10-21 14:05:42 +00:00
binary-husky
42d10a9481 update doc2x functions 2024-10-21 14:05:05 +00:00
binary-husky
50a1ea83ef control whether to allow sharing translation results with GPTAC academic cloud. 2024-10-18 18:05:50 +00:00
binary-husky
a9c86a7fb8 pre 2024-10-18 14:16:24 +00:00
binary-husky
2b299cf579 Merge branch 'master' into frontier 2024-10-16 15:22:27 +00:00
wsg1873
310122f5a7 solve the concatenate error. (#2011) 2024-10-16 00:56:24 +08:00
binary-husky
0121cacc84 Merge branch 'master' into frontier 2024-10-15 09:10:36 +00:00
binary-husky
c83bf214d0 change arxiv download attempt url order 2024-10-15 09:09:24 +00:00
binary-husky
e34c49dce5 compat: deal with arxiv url change 2024-10-15 09:07:39 +00:00
binary-husky
f2dcd6ad55 compat: arxiv translation src shift 2024-10-15 09:06:57 +00:00
binary-husky
42d9712f20 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-15 08:24:01 +00:00
binary-husky
3890467c84 replace rm with rm -f 2024-10-15 07:32:29 +00:00
binary-husky
074b3c9828 explicitly declare default value 2024-10-15 06:41:12 +00:00
Nextstrain
b8e8457a01 Fix o1-series models failing to issue requests, and fix KeyError: 'finish_reason' during multi-model polling (#1992)
* Update bridge_all.py

* Update bridge_chatgpt.py

* Update bridge_chatgpt.py

* Update bridge_all.py

* Update bridge_all.py
2024-10-15 14:36:51 +08:00
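The KeyError in the title points at streamed response chunks that omit `finish_reason`; a hedged, standalone sketch of the defensive pattern such a fix typically uses (not necessarily this PR's exact code):

```python
# Hypothetical streamed delta lacking "finish_reason": direct indexing would raise KeyError
chunk = {"choices": [{"delta": {"content": "hello"}}]}
finish_reason = chunk["choices"][0].get("finish_reason")  # .get() tolerates the missing key
if finish_reason == "stop":
    pass  # end of stream
```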
binary-husky
2c93a24d7e fix dockerfile: try align python 2024-10-15 06:35:35 +00:00
binary-husky
e9af6ef3a0 fix: github action glitch 2024-10-15 06:32:47 +00:00
wsg1873
5ae8981dbb add the '/Fit' destination (#2009) 2024-10-14 22:50:56 +08:00
Boyin Liu
7f0ffa58f0 Boyin rag (#1983)
* first_version

* rag document support

* RAG interactive prompts added, issues resolved

* Resolve conflicts

* Resolve conflicts

* Resolve conflicts

* more file format support

* move import

* Resolve LlamaIndexRagWorker bug

* new resolve

* Address import  LlamaIndexRagWorker problem

* change import order

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-10-14 22:48:24 +08:00
binary-husky
5888d038aa move import 2024-10-13 16:17:10 +00:00
binary-husky
ee8213e936 Merge branch 'boyin_rag' into frontier 2024-10-13 16:12:51 +00:00
binary-husky
a57dcbcaeb Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-13 08:26:06 +00:00
binary-husky
b812392a9d Merge branch 'master' into frontier 2024-10-13 08:25:47 +00:00
lbykkkk
fce4fa1ec7 more file format support 2024-10-12 18:25:33 +00:00
Boyin Liu
d13f1e270c Merge branch 'master' into boyin_rag 2024-10-11 22:31:07 +08:00
lbykkkk
85cf3d08eb Resolve conflicts 2024-10-11 22:29:56 +08:00
lbykkkk
584e747565 Resolve conflicts 2024-10-11 22:27:57 +08:00
lbykkkk
02ba653c19 Resolve conflicts 2024-10-11 22:21:53 +08:00
lbykkkk
2d12b5b27d RAG interactive prompts added, issues resolved 2024-10-11 01:06:17 +08:00
binary-husky
a4bcd262f9 Merge branch 'master' into frontier 2024-10-07 05:20:49 +00:00
Boyin Liu
748e31102f Merge branch 'master' into boyin_rag 2024-10-05 23:58:43 +08:00
binary-husky
97eef45ab7 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-01 11:59:14 +00:00
binary-husky
0c0e2acb9b remove logging extra 2024-10-01 11:57:47 +00:00
Ren Lifei
9fba8e0142 Added some modules to support openrouter (#1975)
* Added some modules for supporting openrouter model

Added some modules for supporting openrouter model

* Update config.py

* Update .gitignore

* Update bridge_openrouter.py

* Not changed actually

* Refactor logging in bridge_openrouter.py

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-09-28 18:05:34 +08:00
binary-husky
7d7867fb64 remove comment 2024-09-23 15:16:13 +00:00
lbykkkk
7ea791d83a rag document support 2024-09-22 21:37:57 +08:00
binary-husky
f9dbaa39fb Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-09-21 15:40:24 +00:00
binary-husky
bbc2288c5b relax llama index version 2024-09-21 15:40:10 +00:00
Steven Moder
64ab916838 fix: loguru argument error with proxy enabled (#1977) 2024-09-21 23:32:00 +08:00
binary-husky
8fe559da9f update translation matrix 2024-09-21 14:56:10 +00:00
binary-husky
09fd22091a fix: console output 2024-09-21 14:41:36 +00:00
lbykkkk
df717f8bba first_version 2024-09-20 00:06:59 +08:00
binary-husky
e296719b23 Merge branch 'purge_print' into frontier 2024-09-16 09:56:25 +00:00
binary-husky
2f343179a2 logging -> loguru: final stage 2024-09-15 15:51:51 +00:00
binary-husky
4d9604f2e9 update social helper 2024-09-15 15:16:36 +00:00
binary-husky
bbf9e9f868 logging -> loguru stage 4 2024-09-14 16:00:09 +00:00
binary-husky
aa1f967dd7 support o1-preview and o1-mini 2024-09-13 03:11:53 +00:00
binary-husky
0d082327c8 logging -> loguru: stage 3 2024-09-11 08:49:55 +00:00
binary-husky
80acd9c875 import loguru: stage 2 2024-09-11 08:18:01 +00:00
binary-husky
17cd4f8210 logging sys to loguru: stage 1 complete 2024-09-11 03:30:30 +00:00
65 files changed: 5,883 insertions and 1,110 deletions

View file: .github/workflows/conda-pack-windows.yml (vendored, new file, +56)

@@ -0,0 +1,56 @@
name: Create Conda Environment Package
on:
workflow_dispatch:
jobs:
build:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Miniconda
uses: conda-incubator/setup-miniconda@v3
with:
auto-activate-base: true
activate-environment: ""
- name: Create new Conda environment
shell: bash -l {0}
run: |
conda create -n gpt python=3.11 -y
conda activate gpt
- name: Install requirements
shell: bash -l {0}
run: |
conda activate gpt
pip install -r requirements.txt
- name: Install conda-pack
shell: bash -l {0}
run: |
conda activate gpt
conda install conda-pack -y
- name: Pack conda environment
shell: bash -l {0}
run: |
conda activate gpt
conda pack -n gpt -o gpt.tar.gz
- name: Create workspace zip
shell: pwsh
run: |
mkdir workspace
Get-ChildItem -Exclude "workspace" | Copy-Item -Destination workspace -Recurse
Remove-Item -Path workspace/.git* -Recurse -Force -ErrorAction SilentlyContinue
Copy-Item gpt.tar.gz workspace/ -Force
- name: Upload packed files
uses: actions/upload-artifact@v4
with:
name: gpt-academic-package
path: workspace

View file: .gitignore (vendored, 4 changes)

@@ -160,4 +160,6 @@ test.*
temp.*
objdump*
*.min.*.js
TODO
TODO
experimental_mods
search_results

View file: README.md

@@ -1,4 +1,7 @@
> [!IMPORTANT]
> `frontier开发分支`最新动态(2024.12.9): 更新对话时间线功能,优化xelatex论文翻译
> `wiki文档`最新动态(2024.12.5): 更新ollama接入指南
>
> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/file/d/19U_hsLoMrjOlQSzYS3pzWX9fTzyusArP/view?usp=sharing)的文件服务器
> 2024.10.8: 版本3.90加入对llama-index的初步支持,版本3.80加入插件二级菜单功能详见wiki
> 2024.5.1: 加入Doc2x翻译PDF论文的功能,[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
@@ -170,26 +173,32 @@ flowchart TD
```
<details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
<details><summary>如果需要支持清华ChatGLM系列/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
<p>
【可选步骤】如果需要支持清华ChatGLM3/复旦MOSS作为后端,需要额外安装更多依赖前提条件熟悉Python + 用过Pytorch + 电脑配置够强):
【可选步骤】如果需要支持清华ChatGLM系列/复旦MOSS作为后端,需要额外安装更多依赖前提条件熟悉Python + 用过Pytorch + 电脑配置够强):
```sh
# 【可选步骤I】支持清华ChatGLM3。清华ChatGLM备注如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# 【可选步骤II】支持复旦MOSS
# 【可选步骤II】支持清华ChatGLM4 注意此模型至少需要24G显存
python -m pip install -r request_llms/requirements_chatglm4.txt
# 可使用modelscope下载ChatGLM4模型
# pip install modelscope
# modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat
# 【可选步骤III】支持复旦MOSS
python -m pip install -r request_llms/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss # 注意执行此行代码时,必须处于项目根路径
# 【可选步骤III】支持RWKV Runner
# 【可选步骤IV】支持RWKV Runner
参考wikihttps://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner
# 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案)
# 【可选步骤V】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案)
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
# 【可选步骤V】支持本地模型INT8,INT4量化这里所指的模型本身不是量化版本,目前deepseek-coder支持,后面测试后会加入更多模型量化选择
# 【可选步骤VI】支持本地模型INT8,INT4量化这里所指的模型本身不是量化版本,目前deepseek-coder支持,后面测试后会加入更多模型量化选择
pip install bitsandbytes
# windows用户安装bitsandbytes需要使用下面bitsandbytes-windows-webui
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui

View file: check_proxy.py

@@ -1,24 +1,36 @@
from loguru import logger
def check_proxy(proxies, return_ip=False):
"""
检查代理配置并返回结果。
Args:
proxies (dict): 包含http和https代理配置的字典。
return_ip (bool, optional): 是否返回代理的IP地址。默认为False。
Returns:
str or None: 检查的结果信息或代理的IP地址如果`return_ip`为True
"""
import requests
proxies_https = proxies['https'] if proxies is not None else ''
ip = None
try:
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) # ⭐ 执行GET请求以获取代理信息
data = response.json()
if 'country_name' in data:
country = data['country_name']
result = f"代理配置 {proxies_https}, 代理所在地:{country}"
if 'ip' in data: ip = data['ip']
if 'ip' in data:
ip = data['ip']
elif 'error' in data:
alternative, ip = _check_with_backup_source(proxies)
alternative, ip = _check_with_backup_source(proxies) # ⭐ 调用备用方法检查代理配置
if alternative is None:
result = f"代理配置 {proxies_https}, 代理所在地未知,IP查询频率受限"
else:
result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
else:
result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
if not return_ip:
logger.warning(result)
return result
@@ -33,17 +45,33 @@ def check_proxy(proxies, return_ip=False):
return ip
def _check_with_backup_source(proxies):
"""
通过备份源检查代理,并获取相应信息。
Args:
proxies (dict): 包含代理信息的字典。
Returns:
tuple: 代理信息(geo)和IP地址(ip)的元组。
"""
import random, string, requests
random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
try:
res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()
res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json() # ⭐ 执行代理检查和备份源请求
return res_json['dns']['geo'], res_json['dns']['ip']
except:
return None, None
def backup_and_download(current_version, remote_version):
"""
一键更新协议:备份和下载
一键更新协议:备份当前版本,下载远程版本并解压缩。
Args:
current_version (str): 当前版本号。
remote_version (str): 远程版本号。
Returns:
str: 新版本目录的路径。
"""
from toolbox import get_conf
import shutil
@@ -60,7 +88,7 @@ def backup_and_download(current_version, remote_version):
proxies = get_conf('proxies')
try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
zip_file_path = backup_dir+'/master.zip'
zip_file_path = backup_dir+'/master.zip' # ⭐ 保存备份文件的路径
with open(zip_file_path, 'wb+') as f:
f.write(r.content)
dst_path = new_version_dir
@@ -76,6 +104,17 @@ def backup_and_download(current_version, remote_version):
def patch_and_restart(path):
"""
一键更新协议:覆盖和重启
Args:
path (str): 新版本代码所在的路径
注意事项:
如果您的程序没有使用config_private.py私密配置文件,则会将config.py重命名为config_private.py以避免配置丢失。
更新流程:
- 复制最新版本代码到当前目录
- 更新pip包依赖
- 如果更新失败,则提示手动安装依赖库并重启
"""
from distutils import dir_util
import shutil
@@ -84,32 +123,43 @@ def patch_and_restart(path):
import time
import glob
from shared_utils.colorful import log亮黄, log亮绿, log亮红
# if not using config_private, move origin config.py as config_private.py
if not os.path.exists('config_private.py'):
log亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
'另外您可以随时在history子文件夹下找回旧版的程序。')
shutil.copyfile('config.py', 'config_private.py')
path_new_version = glob.glob(path + '/*-master')[0]
dir_util.copy_tree(path_new_version, './')
dir_util.copy_tree(path_new_version, './') # ⭐ 将最新版本代码复制到当前目录
log亮绿('代码已经更新,即将更新pip包依赖……')
for i in reversed(range(5)): time.sleep(1); log亮绿(i)
try:
import subprocess
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
except:
log亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
log亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
log亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
log亮绿(' ------------------------------ -----------------------------------')
for i in reversed(range(8)): time.sleep(1); log亮绿(i)
os.execl(sys.executable, sys.executable, *sys.argv)
os.execl(sys.executable, sys.executable, *sys.argv) # 重启程序
def get_current_version():
"""
获取当前的版本号。
Returns:
str: 当前的版本号。如果无法获取版本号,则返回空字符串。
"""
import json
try:
with open('./version', 'r', encoding='utf8') as f:
current_version = json.loads(f.read())['version']
current_version = json.loads(f.read())['version'] # ⭐ 从读取的json数据中提取版本号
except:
current_version = ""
return current_version
@@ -118,6 +168,12 @@ def get_current_version():
def auto_update(raise_error=False):
"""
一键更新协议:查询版本和用户意见
Args:
raise_error (bool, optional): 是否在出错时抛出错误。默认为 False。
Returns:
None
"""
try:
from toolbox import get_conf
@@ -137,13 +193,13 @@ def auto_update(raise_error=False):
current_version = json.loads(current_version)['version']
if (remote_version - current_version) >= 0.01-1e-5:
from shared_utils.colorful import log亮黄
log亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}{new_feature}')
log亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}{new_feature}') # ⭐ 在控制台打印新版本信息
logger.info('1Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
user_instruction = input('2是否一键更新代码Y+回车=确认,输入其他/无输入+回车=不更新)?')
if user_instruction in ['Y', 'y']:
path = backup_and_download(current_version, remote_version)
path = backup_and_download(current_version, remote_version) # ⭐ 备份并下载文件
try:
patch_and_restart(path)
patch_and_restart(path) # ⭐ 执行覆盖并重启操作
except:
msg = '更新失败。'
if raise_error:
@@ -163,6 +219,9 @@ def auto_update(raise_error=False):
logger.info(msg)
def warm_up_modules():
"""
预热模块,加载特定模块并执行预热操作。
"""
logger.info('正在执行一些模块的预热 ...')
from toolbox import ProxyNetworkActivate
from request_llms.bridge_all import model_info
@@ -173,6 +232,16 @@ def warm_up_modules():
enc.encode("模块预热", disallowed_special=())
def warm_up_vectordb():
"""
执行一些模块的预热操作。
本函数主要用于执行一些模块的预热操作,确保在后续的流程中能够顺利运行。
⭐ 关键作用:预热模块
Returns:
None
"""
logger.info('正在执行一些模块的预热 ...')
from toolbox import ProxyNetworkActivate
with ProxyNetworkActivate("Warmup_Modules"):
@@ -185,4 +254,4 @@ if __name__ == '__main__':
os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
from toolbox import get_conf
proxies = get_conf('proxies')
check_proxy(proxies)
check_proxy(proxies)
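A minimal usage sketch mirroring the file's own `__main__` block above; it assumes the project root is on sys.path, and the proxy address is a hypothetical local proxy:

```python
import os
os.environ['no_proxy'] = '*'   # avoid accidental proxy pollution, as in the __main__ block
from check_proxy import check_proxy

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}  # hypothetical
print(check_proxy(proxies))    # prints e.g. "代理配置 http://127.0.0.1:7890, 代理所在地:..."
```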

View file: config.py

@@ -36,7 +36,7 @@ AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-p
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
"gemini-1.5-pro", "chatglm3"
"gemini-1.5-pro", "chatglm3", "chatglm4"
]
EMBEDDING_MODEL = "text-embedding-3-small"
@@ -55,6 +55,7 @@ EMBEDDING_MODEL = "text-embedding-3-small"
# "deepseek-chat" ,"deepseek-coder",
# "gemini-1.5-flash",
# "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
# "grok-beta",
# ]
# --- --- --- ---
# 此外,您还可以在接入one-api/vllm/ollama/Openroute时,
@@ -142,6 +143,9 @@ BAIDU_CLOUD_SECRET_KEY = ''
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
# 如果使用ChatGLM3或ChatGLM4本地模型,请把 LLM_MODEL="chatglm3" 或LLM_MODEL="chatglm4",并在此处指定模型路径
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat" # 例如"/home/hmp/ChatGLM3-6B/"
# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
@@ -234,7 +238,6 @@ MOONSHOT_API_KEY = ""
# 零一万物(Yi Model) API KEY
YIMODEL_API_KEY = ""
# 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""
@@ -242,6 +245,8 @@ DEEPSEEK_API_KEY = ""
# 紫东太初大模型 https://ai-maas.wair.ac.cn
TAICHU_API_KEY = ""
# Grok API KEY
GROK_API_KEY = ""
# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
MATHPIX_APPID = ""
@@ -273,8 +278,8 @@ GROBID_URLS = [
]
# Searxng互联网检索服务
SEARXNG_URL = "https://cloud-1.agent-matrix.com/"
# Searxng互联网检索服务这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里
SEARXNG_URLS = [ f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1,5) ]
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
@@ -310,6 +315,10 @@ PLUGIN_HOT_RELOAD = False
NUM_CUSTOM_BASIC_BTN = 4
# 媒体智能体的服务地址这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里
DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]
"""
--------------- 配置关联关系说明 ---------------
@@ -369,6 +378,7 @@ NUM_CUSTOM_BASIC_BTN = 4
本地大模型示意图
├── "chatglm4"
├── "chatglm3"
├── "chatglm"
├── "chatglm_onnx"
@@ -399,7 +409,7 @@ NUM_CUSTOM_BASIC_BTN = 4
插件在线服务配置依赖关系示意图
├── 互联网检索
│ └── SEARXNG_URL
│ └── SEARXNG_URLS
├── 语音功能
│ ├── ENABLE_AUDIO
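The SEARXNG_URL to SEARXNG_URLS change replaces a single endpoint with a pool of mirrors; a standalone sketch of how a consumer picks one, which is exactly what `searxng_request` does in the Internet_GPT.py diff below:

```python
import random

# config.py: a pool of mirrors instead of one endpoint (expression taken from the diff above)
SEARXNG_URLS = [f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1, 5)]

url = random.choice(SEARXNG_URLS)  # each request spreads load across the mirrors
```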

View file: crazy_functional.py

@@ -2,7 +2,6 @@ from toolbox import HotReload # HotReload 的意思是热更新,修改函数
from toolbox import trimmed_format_exc
from loguru import logger
def get_crazy_functions():
from crazy_functions.读文章写摘要 import 读文章写摘要
from crazy_functions.生成函数注释 import 批量生成函数注释
@@ -17,7 +16,7 @@ def get_crazy_functions():
from crazy_functions.SourceCode_Analyse import 解析一个前端项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.高级功能函数模板 import Demo_Wrap
from crazy_functions.Latex全文润色 import Latex英文润色
from crazy_functions.Latex_Project_Polish import Latex英文润色
from crazy_functions.询问多个大语言模型 import 同时问询
from crazy_functions.SourceCode_Analyse import 解析一个Lua项目
from crazy_functions.SourceCode_Analyse import 解析一个CSharp项目
@@ -33,8 +32,8 @@ def get_crazy_functions():
from crazy_functions.PDF_Translate import 批量翻译PDF文档
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
from crazy_functions.Latex全文润色 import Latex中文润色
from crazy_functions.Latex全文润色 import Latex英文纠错
from crazy_functions.Latex_Project_Polish import Latex中文润色
from crazy_functions.Latex_Project_Polish import Latex英文纠错
from crazy_functions.Markdown_Translate import Markdown中译英
from crazy_functions.虚空终端 import 虚空终端
from crazy_functions.生成多种Mermaid图表 import Mermaid_Gen
@@ -49,8 +48,17 @@ def get_crazy_functions():
from crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
from crazy_functions.Image_Generate_Wrap import ImageGen_Wrap
from crazy_functions.SourceCode_Comment import 注释Python项目
from crazy_functions.SourceCode_Comment_Wrap import SourceCodeComment_Wrap
from crazy_functions.VideoResource_GPT import 多媒体任务
function_plugins = {
"多媒体智能体": {
"Group": "智能体",
"Color": "stop",
"AsButton": False,
"Info": "【仅测试】多媒体任务",
"Function": HotReload(多媒体任务),
},
"虚空终端": {
"Group": "对话|编程|学术|智能体",
"Color": "stop",
@@ -71,6 +79,7 @@ def get_crazy_functions():
"AsButton": False,
"Info": "上传一系列python源文件(或者压缩包), 为这些代码添加docstring | 输入参数为路径",
"Function": HotReload(注释Python项目),
"Class": SourceCodeComment_Wrap,
},
"载入对话历史存档(先上传存档或输入路径)": {
"Group": "对话",
@@ -718,12 +727,6 @@ def get_crazy_functions():
logger.error("Load function plugin failed")
# try:
# from crazy_functions.高级功能函数模板 import 测试图表渲染
# function_plugins.update({
@@ -738,19 +741,6 @@ def get_crazy_functions():
# logger.error(trimmed_format_exc())
# print('Load function plugin failed')
# try:
# from crazy_functions.chatglm微调工具 import 微调数据集生成
# function_plugins.update({
# "黑盒模型学习: 微调数据集生成 (先上传数据集)": {
# "Color": "stop",
# "AsButton": False,
# "AdvancedArgs": True,
# "ArgsReminder": "针对数据集输入(如 绿帽子*深蓝色衬衫*黑色运动裤)给出指令,例如您可以将以下命令复制到下方: --llm_to_learn=azure-gpt-3.5 --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、过去经历进行描写。要求100字以内,用第二人称。' --system_prompt=''",
# "Function": HotReload(微调数据集生成)
# }
# })
# except:
# print('Load function plugin failed')
"""
设置默认值:
@@ -770,3 +760,23 @@ def get_crazy_functions():
function_plugins[name]["Color"] = "secondary"
return function_plugins
def get_multiplex_button_functions():
"""多路复用主提交按钮的功能映射
"""
return {
"常规对话":
"",
"多模型对话":
"询问多个GPT模型", # 映射到上面的 `询问多个GPT模型` 插件
"智能召回 RAG":
"Rag智能召回", # 映射到上面的 `Rag智能召回` 插件
"多媒体查询":
"多媒体智能体", # 映射到上面的 `多媒体智能体` 插件
}
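A minimal sketch of consuming the new multiplex mapping, assuming the project root is on sys.path and its dependencies are installed; the UI presumably performs the equivalent lookup when the dropdown changes:

```python
from crazy_functional import get_multiplex_button_functions

mapping = get_multiplex_button_functions()
print(mapping["多模型对话"])   # -> "询问多个GPT模型", the plugin the submit button is rerouted to
```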

View file: crazy_functions/Internet_GPT.py

@@ -7,7 +7,7 @@ from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf
from toolbox import CatchException, update_ui, get_conf, update_ui_lastest_msg
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
@@ -115,7 +115,8 @@ def get_auth_ip():
def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
if searxng_url is None:
url = get_conf("SEARXNG_URL")
urls = get_conf("SEARXNG_URLS")
url = random.choice(urls)
else:
url = searxng_url
@@ -192,6 +193,38 @@ def scrape_text(url, proxies) -> str:
text = "\n".join(chunk for chunk in chunks if chunk)
return text
def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kwargs, chatbot):
from toolbox import get_conf
proxies = get_conf('proxies')
categories = 'general'
searxng_url = None # 使用默认的searxng_url
engines = None # 使用默认的搜索引擎
yield from update_ui_lastest_msg(lastmsg=f"检索中: {prompt} ...", chatbot=chatbot, history=[], delay=1)
urls = searxng_request(prompt, proxies, categories, searxng_url, engines=engines)
yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站 ...", chatbot=chatbot, history=[], delay=1)
if len(urls) == 0:
return None
max_search_result = 5 # 最多收纳多少个网页的结果
history = []
for index, url in enumerate(urls[:max_search_result]):
yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站: {url['link']} ...", chatbot=chatbot, history=[], delay=1)
res = scrape_text(url['link'], proxies)
prefix = f"{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
history.extend([prefix, res])
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{prompt} {analysis_prompt}"
i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
inputs=i_say,
history=history,
max_token_limit=8192
)
gpt_say = predict_no_ui_long_connection(
inputs=i_say,
llm_kwargs=llm_kwargs,
history=history,
sys_prompt="请从搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。",
console_slience=False,
)
return gpt_say
@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
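A minimal sketch of the lower-level helpers in this file, assuming the project root is on sys.path, direct network access, and that the default SEARXNG mirrors are reachable; the result keys follow the usage inside `internet_search_with_analysis_prompt` above:

```python
from crazy_functions.Internet_GPT import searxng_request, scrape_text

results = searxng_request("GPT Academic", proxies=None)  # internally random.choice()s a SEARXNG_URLS mirror
if results:
    first = results[0]
    print(first["title"], first["link"])                 # keys mirror the usage in the function above
    print(scrape_text(first["link"], proxies=None)[:200])
```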

View file

@@ -1,4 +1,4 @@
import random
from toolbox import get_conf
from crazy_functions.Internet_GPT import 连接网络回答问题
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
@@ -20,6 +20,9 @@ class NetworkGPT_Wrap(GptAcademicPluginTemplate):
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
"""
urls = get_conf("SEARXNG_URLS")
url = random.choice(urls)
gui_definition = {
"main_input":
ArgProperty(title="输入问题", description="待通过互联网检索的问题,会自动读取输入框内容", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
@@ -30,7 +33,7 @@ class NetworkGPT_Wrap(GptAcademicPluginTemplate):
"optimizer":
ArgProperty(title="搜索优化", options=["关闭", "开启", "开启(增强)"], default_value="关闭", description="是否使用搜索增强。注意这可能会消耗较多token", type="dropdown").model_dump_json(),
"searxng_url":
ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=get_conf("SEARXNG_URL"), type="string").model_dump_json(), # 主输入,自动从输入框同步
ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=url, type="string").model_dump_json(), # 主输入,自动从输入框同步
}
return gui_definition

View file

@@ -3,7 +3,7 @@ from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip
from functools import partial
from loguru import logger
import glob, os, requests, time, json, tarfile
import glob, os, requests, time, json, tarfile, threading
pj = os.path.join
ARXIV_CACHE_DIR = get_conf("ARXIV_CACHE_DIR")
@@ -138,25 +138,43 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
url_tar = url_.replace('/abs/', '/e-print/')
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
dst = pj(translation_dir, arxiv_id + '.tar')
if os.path.exists(dst):
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
os.makedirs(translation_dir, exist_ok=True)
# <-------------- download arxiv source file ------------->
def fix_url_and_download():
# for url_tar in [url_.replace('/abs/', '/e-print/'), url_.replace('/abs/', '/src/')]:
for url_tar in [url_.replace('/abs/', '/src/'), url_.replace('/abs/', '/e-print/')]:
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
if r.status_code == 200:
with open(dst, 'wb+') as f:
f.write(r.content)
return True
return False
if os.path.exists(dst) and allow_cache:
yield from update_ui_lastest_msg(f"调用缓存 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
success = True
else:
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
proxies = get_conf('proxies')
r = requests.get(url_tar, proxies=proxies)
with open(dst, 'wb+') as f:
f.write(r.content)
yield from update_ui_lastest_msg(f"开始下载 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
success = fix_url_and_download()
yield from update_ui_lastest_msg(f"下载完成 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
if not success:
yield from update_ui_lastest_msg(f"下载失败 {arxiv_id}", chatbot=chatbot, history=history)
raise tarfile.ReadError(f"论文下载失败 {arxiv_id}")
# <-------------- extract file ------------->
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
from toolbox import extract_archive
extract_archive(file_path=dst, dest_dir=extract_dst)
try:
extract_archive(file_path=dst, dest_dir=extract_dst)
except tarfile.ReadError:
os.remove(dst)
raise tarfile.ReadError(f"论文下载失败")
return extract_dst, arxiv_id
@@ -320,11 +338,17 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
# <-------------- more requirements ------------->
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
more_req = plugin_kwargs.get("advanced_arg", "")
no_cache = more_req.startswith("--no-cache")
if no_cache: more_req.lstrip("--no-cache")
no_cache = ("--no-cache" in more_req)
if no_cache: more_req = more_req.replace("--no-cache", "").strip()
allow_gptac_cloud_io = ("--allow-cloudio" in more_req) # 从云端下载翻译结果,以及上传翻译结果到云端
if allow_gptac_cloud_io: more_req = more_req.replace("--allow-cloudio", "").strip()
allow_cache = not no_cache
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
# <-------------- check deps ------------->
try:
import glob, os, time, subprocess
@@ -351,6 +375,20 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# #################################################################
if allow_gptac_cloud_io and arxiv_id:
# 访问 GPTAC学术云,查询云端是否存在该论文的翻译版本
from crazy_functions.latex_fns.latex_actions import check_gptac_cloud
success, downloaded = check_gptac_cloud(arxiv_id, chatbot)
if success:
chatbot.append([
f"检测到GPTAC云端存在翻译版本, 如果不满意翻译结果, 请禁用云端分享, 然后重新执行。",
None
])
yield from update_ui(chatbot=chatbot, history=history)
return
#################################################################
if os.path.exists(txt):
project_folder = txt
else:
@@ -388,14 +426,21 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
# <-------------- zip PDF ------------->
zip_res = zip_result(project_folder)
if success:
if allow_gptac_cloud_io and arxiv_id:
# 如果用户允许,我们将翻译好的arxiv论文PDF上传到GPTAC学术云
from crazy_functions.latex_fns.latex_actions import upload_to_gptac_cloud_if_user_allow
threading.Thread(target=upload_to_gptac_cloud_if_user_allow,
args=(chatbot, arxiv_id), daemon=True).start()
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
yield from update_ui(chatbot=chatbot, history=history);
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
else:
chatbot.append((f"失败了",
'虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体见Github wiki ...'))
yield from update_ui(chatbot=chatbot, history=history);
yield from update_ui(chatbot=chatbot, history=history)
time.sleep(1) # 刷新界面
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
@@ -514,7 +559,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
project_folder = move_project(project_folder)
# <-------------- set a hash tag for repeat-checking ------------->
with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
with open(pj(project_folder, hash_tag + '.tag'), 'w', encoding='utf8') as f:
f.write(hash_tag)
f.close()
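The flag handling above changes from prefix matching (the old `startswith` plus a no-op `lstrip`, which never reassigned and strips characters rather than a prefix) to substring detection plus removal; a standalone sketch of the revised behavior:

```python
# Revised flag parsing, extracted from the diff above (string operations only, runnable standalone)
more_req = "--allow-cloudio --no-cache polish the wording"   # hypothetical advanced_arg input
no_cache = ("--no-cache" in more_req)
if no_cache: more_req = more_req.replace("--no-cache", "").strip()
allow_gptac_cloud_io = ("--allow-cloudio" in more_req)
if allow_gptac_cloud_io: more_req = more_req.replace("--allow-cloudio", "").strip()
assert no_cache and allow_gptac_cloud_io and more_req == "polish the wording"
```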

View file

@@ -30,6 +30,8 @@ class Arxiv_Localize(GptAcademicPluginTemplate):
default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
"allow_cache":
ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="", type="dropdown").model_dump_json(),
"allow_cloudio":
ArgProperty(title="是否允许从GPTAC学术云下载(或者上传)翻译结果(仅针对Arxiv论文)", options=["允许", "禁止"], default_value="禁止", description="共享文献,互助互利", type="dropdown").model_dump_json(),
}
return gui_definition
@@ -38,9 +40,14 @@ class Arxiv_Localize(GptAcademicPluginTemplate):
执行插件
"""
allow_cache = plugin_kwargs["allow_cache"]
allow_cloudio = plugin_kwargs["allow_cloudio"]
advanced_arg = plugin_kwargs["advanced_arg"]
if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]
# 从云端下载翻译结果,以及上传翻译结果到云端;人人为我,我为人人。
if allow_cloudio == "允许": plugin_kwargs["advanced_arg"] = "--allow-cloudio " + plugin_kwargs["advanced_arg"]
yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

View file: crazy_functions/Markdown_Translate.py

@@ -65,7 +65,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
pfg.file_contents.append(file_content)
# <-------- 拆分过长的Markdown文件 ---------->
pfg.run_file_split(max_token_limit=2048)
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
# <-------- 多线程翻译开始 ---------->

View file: crazy_functions/PDF_Translate.py

@@ -47,7 +47,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用,现在将执行效果稍差的旧版代码{trimmed_format_exc_markdown()}"])
chatbot.append([None, f"DOC2X服务不可用,请检查报错详细{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
if method == "GROBID":

View file

@@ -1,4 +1,11 @@
import os,glob
from typing import List
from shared_utils.fastapi_server import validate_path_safety
from toolbox import report_exception
from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_lastest_msg
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@@ -7,6 +14,37 @@ MAX_HISTORY_ROUND = 5
MAX_CONTEXT_TOKEN_LIMIT = 4096
REMEMBER_PREVIEW = 1000
@CatchException
def handle_document_upload(files: List[str], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker):
"""
Handles document uploads by extracting text and adding it to the vector store.
"""
from llama_index.core import Document
from crazy_functions.rag_fns.rag_file_support import extract_text, supports_format
user_name = chatbot.get_user()
checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
for file_path in files:
try:
validate_path_safety(file_path, user_name)
text = extract_text(file_path)
if text is None:
chatbot.append(
[f"上传文件: {os.path.basename(file_path)}", f"文件解析失败,无法提取文本内容,请更换文件。失败原因可能为1.文档格式过于复杂;2. 不支持的文件格式,支持的文件格式后缀有:" + ", ".join(supports_format)])
else:
chatbot.append(
[f"上传文件: {os.path.basename(file_path)}", f"上传文件前50个字符为:{text[:50]}"])
document = Document(text=text, metadata={"source": file_path})
rag_worker.add_documents_to_vector_store([document])
chatbot.append([f"上传文件: {os.path.basename(file_path)}", "文件已成功添加到知识库。"])
except Exception as e:
report_exception(chatbot, history, a=f"处理文件: {file_path}", b=str(e))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# Main Q&A function with document upload support
@CatchException
def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
@@ -23,28 +61,45 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
# 1. we retrieve rag worker from global context
user_name = chatbot.get_user()
checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
if user_name in RAG_WORKER_REGISTER:
rag_worker = RAG_WORKER_REGISTER[user_name]
else:
rag_worker = RAG_WORKER_REGISTER[user_name] = LlamaIndexRagWorker(
user_name,
llm_kwargs,
checkpoint_dir=checkpoint_dir,
auto_load_checkpoint=True)
user_name,
llm_kwargs,
checkpoint_dir=checkpoint_dir,
auto_load_checkpoint=True
)
current_context = f"{VECTOR_STORE_TYPE} @ {checkpoint_dir}"
tip = "提示输入“清空向量数据库”可以清空RAG向量数据库"
if txt == "清空向量数据库":
chatbot.append([txt, f'正在清空 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
rag_worker.purge()
yield from update_ui_lastest_msg('已清空', chatbot, history, delay=0) # 刷新界面
# 2. Handle special commands
if os.path.exists(txt) and os.path.isdir(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
# Extract file paths from the user input
# Assuming the user inputs file paths separated by commas after the command
file_paths = [f for f in glob.glob(f'{project_folder}/**/*', recursive=True)]
chatbot.append([txt, f'正在处理上传的文档 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from handle_document_upload(file_paths, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker)
return
chatbot.append([txt, f'正在召回知识 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
elif txt == "清空向量数据库":
chatbot.append([txt, f'正在清空 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
rag_worker.purge_vector_store()
yield from update_ui_lastest_msg('已清空', chatbot, history, delay=0) # 刷新界面
return
# 2. clip history to reduce token consumption
# 2-1. reduce chat round
# 3. Normal Q&A processing
chatbot.append([txt, f'正在召回知识 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 4. Clip history to reduce token consumption
txt_origin = txt
if len(history) > MAX_HISTORY_ROUND * 2:
@@ -52,41 +107,47 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
txt_clip, history, flags = input_clipping(txt, history, max_token_limit=MAX_CONTEXT_TOKEN_LIMIT, return_clip_flags=True)
input_is_clipped_flag = (flags["original_input_len"] != flags["clipped_input_len"])
# 2-2. if input is clipped, add input to vector store before retrieve
# 5. If input is clipped, add input to vector store before retrieve
if input_is_clipped_flag:
yield from update_ui_lastest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
# save input to vector store
yield from update_ui_lastest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
# Save input to vector store
rag_worker.add_text_to_vector_store(txt_origin)
yield from update_ui_lastest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面
yield from update_ui_lastest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面
if len(txt_origin) > REMEMBER_PREVIEW:
HALF = REMEMBER_PREVIEW//2
HALF = REMEMBER_PREVIEW // 2
i_say_to_remember = txt[:HALF] + f" ...\n...(省略{len(txt_origin)-REMEMBER_PREVIEW}字)...\n... " + txt[-HALF:]
if (flags["original_input_len"] - flags["clipped_input_len"]) > HALF:
txt_clip = txt_clip + f" ...\n...(省略{len(txt_origin)-len(txt_clip)-HALF}字)...\n... " + txt[-HALF:]
else:
pass
i_say = txt_clip
txt_clip = txt_clip + f" ...\n...(省略{len(txt_origin)-len(txt_clip)-HALF}字)...\n... " + txt[-HALF:]
else:
i_say_to_remember = i_say = txt_clip
else:
i_say_to_remember = i_say = txt_clip
# 3. we search vector store and build prompts
# 6. Search vector store and build prompts
nodes = rag_worker.retrieve_from_store_with_query(i_say)
prompt = rag_worker.build_prompt(query=i_say, nodes=nodes)
# 7. Query language model
if len(chatbot) != 0:
chatbot.pop(-1) # Pop temp chat, because we are going to add them again inside `request_gpt_model_in_new_thread_with_ui_alive`
# 4. it is time to query llms
if len(chatbot) != 0: chatbot.pop(-1) # pop temp chat, because we are going to add them again inside `request_gpt_model_in_new_thread_with_ui_alive`
model_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
inputs=prompt,
inputs_show_user=i_say,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=history,
sys_prompt=system_prompt,
retry_times_at_unknown_error=0
)
# 5. remember what has been asked / answered
yield from update_ui_lastest_msg(model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...', chatbot, history, delay=0.5) # 刷新界面
# 8. Remember Q&A
yield from update_ui_lastest_msg(
model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...',
chatbot, history, delay=0.5
)
rag_worker.remember_qa(i_say_to_remember, model_say)
history.extend([i_say, model_say])
yield from update_ui_lastest_msg(model_say, chatbot, history, delay=0, msg=tip) # 刷新界面
# 9. Final UI Update
yield from update_ui_lastest_msg(model_say, chatbot, history, delay=0, msg=tip)
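The restructured entry point now dispatches on three kinds of input; a sketch of the inputs as a user would type them (the paths are illustrative):

```python
# The three input modes handled above (example values are illustrative)
inputs = [
    "/path/to/private_upload/my_docs",  # an existing directory -> files are parsed into the vector store
    "清空向量数据库",                     # special command -> rag_worker.purge_vector_store()
    "总结我上传的文档",                   # anything else -> retrieval-augmented Q&A over the store
]
```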

View file: crazy_functions/SourceCode_Comment.py

@@ -6,7 +6,10 @@ from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_ver
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.agent_fns.python_comment_agent import PythonCodeComment
from crazy_functions.diagram_fns.file_tree import FileNode
from crazy_functions.agent_fns.watchdog import WatchDog
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from loguru import logger
def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
@@ -24,12 +27,13 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
file_tree_struct.add_file(file_path, file_path)
# <第一步,逐个文件分析,多线程>
lang = "" if not plugin_kwargs["use_chinese"] else " (you must use Chinese)"
for index, fp in enumerate(file_manifest):
# 读取文件
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
file_content = f.read()
prefix = ""
i_say = prefix + f'Please conclude the following source code at {os.path.relpath(fp, project_folder)} with only one sentence, the code is:\n```{file_content}```'
i_say = prefix + f'Please conclude the following source code at {os.path.relpath(fp, project_folder)} with only one sentence{lang}, the code is:\n```{file_content}```'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请用一句话对下面的程序文件做一个整体概述: {fp}'
# 装载请求内容
MAX_TOKEN_SINGLE_FILE = 2560
@@ -37,7 +41,7 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
inputs_array.append(i_say)
inputs_show_user_array.append(i_say_show_user)
history_array.append([])
sys_prompt_array.append("You are a software architecture analyst analyzing a source code project. Do not dig into details, tell me what the code is doing in general. Your answer must be short, simple and clear.")
sys_prompt_array.append(f"You are a software architecture analyst analyzing a source code project. Do not dig into details, tell me what the code is doing in general. Your answer must be short, simple and clear{lang}.")
# 文件读取完成,对每一个源代码文件,生成一个请求线程,发送到大模型进行分析
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array = inputs_array,
@@ -50,10 +54,20 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
)
# <第二步,逐个文件分析,生成带注释文件>
tasks = ["" for _ in range(len(file_manifest))]
def bark_fn(tasks):
for i in range(len(tasks)): tasks[i] = "watchdog is dead"
wd = WatchDog(timeout=10, bark_fn=lambda: bark_fn(tasks), interval=3, msg="ThreadWatcher timeout")
wd.begin_watch()
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=get_conf('DEFAULT_WORKER_NUM'))
def _task_multi_threading(i_say, gpt_say, fp, file_tree_struct):
pcc = PythonCodeComment(llm_kwargs, language='English')
def _task_multi_threading(i_say, gpt_say, fp, file_tree_struct, index):
language = 'Chinese' if plugin_kwargs["use_chinese"] else 'English'
def observe_window_update(x):
if tasks[index] == "watchdog is dead":
raise TimeoutError("ThreadWatcher: watchdog is dead")
tasks[index] = x
pcc = PythonCodeComment(llm_kwargs, plugin_kwargs, language=language, observe_window_update=observe_window_update)
pcc.read_file(path=fp, brief=gpt_say)
revised_path, revised_content = pcc.begin_comment_source_code(None, None)
file_tree_struct.manifest[fp].revised_path = revised_path
@@ -65,7 +79,8 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
with open("crazy_functions/agent_fns/python_comment_compare.html", 'r', encoding='utf-8') as f:
html_template = f.read()
warp = lambda x: "```python\n\n" + x + "\n\n```"
from themes.theme import advanced_css
from themes.theme import load_dynamic_theme
_, advanced_css, _, _ = load_dynamic_theme("Default")
html_template = html_template.replace("ADVANCED_CSS", advanced_css)
html_template = html_template.replace("REPLACE_CODE_FILE_LEFT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(pcc.original_content))))
html_template = html_template.replace("REPLACE_CODE_FILE_RIGHT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(revised_content))))
@@ -73,17 +88,21 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
file_tree_struct.manifest[fp].compare_html = compare_html_path
with open(compare_html_path, 'w', encoding='utf-8') as f:
f.write(html_template)
# print('done 1')
tasks[index] = ""
chatbot.append([None, f"正在处理:"])
futures = []
index = 0
for i_say, gpt_say, fp in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], file_manifest):
future = executor.submit(_task_multi_threading, i_say, gpt_say, fp, file_tree_struct)
future = executor.submit(_task_multi_threading, i_say, gpt_say, fp, file_tree_struct, index)
index += 1
futures.append(future)
# <第三步,等待任务完成>
cnt = 0
while True:
cnt += 1
wd.feed()
time.sleep(3)
worker_done = [h.done() for h in futures]
remain = len(worker_done) - sum(worker_done)
@@ -92,14 +111,18 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
preview_html_list = []
for done, fp in zip(worker_done, file_manifest):
if not done: continue
preview_html_list.append(file_tree_struct.manifest[fp].compare_html)
if hasattr(file_tree_struct.manifest[fp], 'compare_html'):
preview_html_list.append(file_tree_struct.manifest[fp].compare_html)
else:
logger.error(f"文件: {fp} 的注释结果未能成功")
file_links = generate_file_link(preview_html_list)
yield from update_ui_lastest_msg(
f"剩余源文件数量: {remain}.\n\n" +
f"已完成的文件: {sum(worker_done)}.\n\n" +
f"当前任务: <br/>{'<br/>'.join(tasks)}.<br/>" +
f"剩余源文件数量: {remain}.<br/>" +
f"已完成的文件: {sum(worker_done)}.<br/>" +
file_links +
"\n\n" +
"<br/>" +
''.join(['.']*(cnt % 10 + 1)
), chatbot=chatbot, history=history, delay=0)
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
@@ -120,6 +143,7 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
@CatchException
def 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
plugin_kwargs["use_chinese"] = plugin_kwargs.get("use_chinese", False)
import glob, os
if os.path.exists(txt):
project_folder = txt
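A standalone sketch of the watchdog pattern introduced in this file; the import path and constructor arguments are taken from the diff, while the semantics are assumed: `bark_fn` fires if `feed()` stops arriving within `timeout` seconds:

```python
import time
from crazy_functions.agent_fns.watchdog import WatchDog  # import path as in the diff above

tasks = ["" for _ in range(3)]
def bark_fn(tasks):
    for i in range(len(tasks)):
        tasks[i] = "watchdog is dead"   # poison the task slots so stuck workers raise TimeoutError

wd = WatchDog(timeout=10, bark_fn=lambda: bark_fn(tasks), interval=3, msg="ThreadWatcher timeout")
wd.begin_watch()
for _ in range(3):
    time.sleep(3)
    wd.feed()   # heartbeat from the polling loop postpones the bark
```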

View file: crazy_functions/SourceCode_Comment_Wrap.py

@@ -0,0 +1,36 @@
from toolbox import get_conf, update_ui
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from crazy_functions.SourceCode_Comment import 注释Python项目
class SourceCodeComment_Wrap(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
"""
gui_definition = {
"main_input":
ArgProperty(title="路径", description="程序路径(上传文件后自动填写)", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"use_chinese":
ArgProperty(title="注释语言", options=["英文", "中文"], default_value="英文", description="", type="dropdown").model_dump_json(),
# "use_emoji":
# ArgProperty(title="在注释中使用emoji", options=["禁止", "允许"], default_value="禁止", description="无", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
if plugin_kwargs["use_chinese"] == "中文":
plugin_kwargs["use_chinese"] = True
else:
plugin_kwargs["use_chinese"] = False
yield from 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

View file: crazy_functions/VideoResource_GPT.py

@@ -0,0 +1,204 @@
import requests
import random
import time
import re
import json
from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.prompts.internet import SearchOptimizerPrompt, SearchAcademicOptimizerPrompt
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from textwrap import dedent
from loguru import logger
from pydantic import BaseModel, Field
class Query(BaseModel):
search_keyword: str = Field(description="search query for video resource")
class VideoResource(BaseModel):
thought: str = Field(description="analysis of the search results based on the user's query")
title: str = Field(description="title of the video")
author: str = Field(description="author/uploader of the video")
bvid: str = Field(description="unique ID of the video")
another_failsafe_bvid: str = Field(description="provide another bvid, the other one is not working")
def get_video_resource(search_keyword):
from crazy_functions.media_fns.get_media import search_videos
# Search for videos and return the first result
videos = search_videos(
search_keyword
)
# Return the first video if results exist, otherwise return None
return videos
def download_video(bvid, user_name, chatbot, history):
# from experimental_mods.get_bilibili_resource import download_bilibili
from crazy_functions.media_fns.get_media import download_video
# pause a while
tic_time = 8
for i in range(tic_time):
yield from update_ui_lastest_msg(
lastmsg=f"即将下载音频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
chatbot=chatbot, history=[], delay=1)
# download audio
chatbot.append((None, "下载音频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
downloaded_files = yield from download_video(bvid, only_audio=True, user_name=user_name, chatbot=chatbot, history=history)
if len(downloaded_files) == 0:
# failed to download audio
return []
# preview
preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files]
file_links = generate_file_link(preview_list)
yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
chatbot.append((None, f"即将下载视频。"))
# pause a while
tic_time = 16
for i in range(tic_time):
yield from update_ui_lastest_msg(
lastmsg=f"即将下载视频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
chatbot=chatbot, history=[], delay=1)
# download video
chatbot.append((None, "下载视频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
downloaded_files_part2 = yield from download_video(bvid, only_audio=False, user_name=user_name, chatbot=chatbot, history=history)
# preview
preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files_part2]
file_links = generate_file_link(preview_list)
yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
# return
return downloaded_files + downloaded_files_part2
class Strategy(BaseModel):
thought: str = Field(description="analysis of the user's wish, for example, can you recall the name of the resource?")
which_methods: str = Field(description="Which method to use to find the necessary information? choose from 'method_1' and 'method_2'.")
method_1_search_keywords: str = Field(description="Generate keywords to search the internet if you choose method 1, otherwise empty.")
method_2_generate_keywords: str = Field(description="Generate keywords for video download engine if you choose method 2, otherwise empty.")
@CatchException
def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
user_wish: str = txt
# query demos:
# - "我想找一首歌,里面有句歌词是“turn your face towards the sun”"
# - "一首歌,第一句是红豆生南国"
# - "一首音乐,中国航天任务专用的那首"
# - "戴森球计划在熔岩星球的音乐"
# - "hanser的百变什么精"
# - "打大圣残躯时的bgm"
# - "渊下宫战斗音乐"
# 搜索
chatbot.append((txt, "检索中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if "跳过联网搜索" not in user_wish:
# 结构化生成
internet_search_keyword = user_wish
yield from update_ui_lastest_msg(lastmsg=f"发起互联网检索: {internet_search_keyword} ...", chatbot=chatbot, history=[], delay=1)
from crazy_functions.Internet_GPT import internet_search_with_analysis_prompt
result = yield from internet_search_with_analysis_prompt(
prompt=internet_search_keyword,
analysis_prompt="请根据搜索结果分析,获取用户需要找的资源的名称、作者、出处等信息。",
llm_kwargs=llm_kwargs,
chatbot=chatbot
)
yield from update_ui_lastest_msg(lastmsg=f"互联网检索结论: {result} \n\n 正在生成进一步检索方案 ...", chatbot=chatbot, history=[], delay=1)
rf_req = dedent(f"""
The user wish to get the following resource:
{user_wish}
Meanwhile, you can access another expert's opinion on the user's wish:
{result}
Generate search keywords (less than 5 keywords) for video download engine accordingly.
""")
else:
user_wish = user_wish.replace("跳过联网搜索", "").strip()
rf_req = dedent(f"""
The user wish to get the following resource:
{user_wish}
Generate research keywords (less than 5 keywords) accordingly.
""")
gpt_json_io = GptJsonIO(Query)
inputs = rf_req + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
analyze_res = run_gpt_fn(inputs, "")
logger.info(analyze_res)
query: Query = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
video_engine_keywords = query.search_keyword
# 关键词展示
chatbot.append((None, f"检索关键词已确认: {video_engine_keywords}。筛选中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 获取候选资源
candidate_dictionary: list = get_video_resource(video_engine_keywords)
candidate_dictionary_as_str = json.dumps(candidate_dictionary, ensure_ascii=False, indent=4)
# 展示候选资源
candidate_display = "\n".join([f"{i+1}. {it['title']}" for i, it in enumerate(candidate_dictionary)])
chatbot.append((None, f"候选:\n\n{candidate_display}"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 结构化生成
rf_req_2 = dedent(f"""
The user wishes to get the following resource:
{user_wish}
Select the most relevant and suitable video resource from the following search results:
{candidate_dictionary_as_str}
Note:
1. The first several video search results are more likely to satisfy the user's wish.
2. The time duration of the video should be less than 10 minutes.
3. You should analyze the search results first, before giving your answer.
4. Use Chinese if possible.
5. Besides the primary video selection, give a backup video resource `bvid`.
""")
gpt_json_io = GptJsonIO(VideoResource)
inputs = rf_req_2 + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
analyze_res = run_gpt_fn(inputs, "")
logger.info(analyze_res)
video_resource: VideoResource = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
# Display
chatbot.append(
(None,
f"分析:{video_resource.thought}" "<br/>"
f"选择: `{video_resource.title}`。" "<br/>"
f"作者:{video_resource.author}"
)
)
chatbot.append((None, f"下载中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if video_resource and video_resource.bvid:
logger.info(video_resource)
downloaded = yield from download_video(video_resource.bvid, chatbot.get_user(), chatbot, history)
if not downloaded:
chatbot.append((None, f"下载失败, 尝试备选 ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
downloaded = yield from download_video(video_resource.another_failsafe_bvid, chatbot.get_user(), chatbot, history)
@CatchException
def debug(bvid, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
yield from download_video(bvid, chatbot.get_user(), chatbot, history)
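The plugin above leans on `GptJsonIO` to coerce free-form LLM output into pydantic models (`Strategy`, `Query`, `VideoResource`). A minimal self-contained sketch of that pattern, with a stub standing in for `predict_no_ui_long_connection` (names in the sketch are illustrative, not the repo's exact API):
```
import json
from pydantic import BaseModel, Field

class VideoPick(BaseModel):
    thought: str = Field(description="analysis before answering")
    bvid: str = Field(description="the chosen video id")

def stub_llm(prompt: str) -> str:
    # Stand-in for predict_no_ui_long_connection: returns the model's raw text.
    return json.dumps({"thought": "demo", "bvid": "BV1xx411c7mD"})

def generate_with_repair(prompt: str) -> VideoPick:
    raw = stub_llm(prompt)
    try:
        return VideoPick(**json.loads(raw))
    except Exception as err:
        # The idea behind generate_output_auto_repair: feed the validation
        # error back to the model and let it fix its own JSON.
        raw = stub_llm(f"{prompt}\nYour previous output failed validation ({err}); return valid JSON.")
        return VideoPick(**json.loads(raw))

print(generate_with_repair("pick a video about space missions"))
```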


@@ -68,6 +68,7 @@ Be aware:
1. You must NOT modify the indent of code.
2. You are NOT authorized to change or translate non-comment code, and you are NOT authorized to add empty lines or toggle quotation marks either.
3. Use {LANG} to add comments and docstrings. Do NOT translate Chinese that is already in the code.
4. Besides adding a docstring, use the ⭐ symbol to annotate the most core and important line of code within the function, explaining its role.
------------------ Example ------------------
INPUT:
@@ -116,10 +117,66 @@ def zip_result(folder):
'''
revise_funtion_prompt_chinese = '''
您需要阅读以下代码,并根据以下说明修订源代码({FILE_BASENAME}):
1. 如果源代码中包含函数的话, 你应该分析给定函数实现了什么功能
2. 如果源代码中包含函数的话, 你需要为函数添加docstring, docstring必须使用中文
请注意:
1. 你不得修改代码的缩进
2. 你无权更改或翻译代码中的非注释部分,也不允许添加空行
3. 使用 {LANG} 添加注释和文档字符串。不要翻译代码中已有的中文
4. 除了添加docstring之外, 使用⭐符号给该函数中最核心、最重要的一行代码添加注释,并说明其作用
------------------ 示例 ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 | t = gen_time_str()
L0003 | zip_folder(folder, get_log_folder(), f"result.zip")
L0004 | return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```
OUTPUT:
<instruction_1_purpose>
该函数用于压缩指定文件夹,并返回生成的`zip`文件的路径。
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
"""
该函数将指定的文件夹压缩成ZIP文件, 并将其存储在日志文件夹中。
输入参数:
folder (str): 需要压缩的文件夹的路径。
返回值:
str: 日志文件夹中创建的ZIP文件的路径。
"""
t = gen_time_str()
zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ 执行文件夹的压缩
return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''
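Both prompt templates are plain `str.format` templates; the class below picks one by language and fills it per code fragment. A self-contained sketch of that call, using a stand-in template and placeholder values:
```
# Sketch only: a stand-in template mirroring the fields of the real prompts above.
core_prompt = "您需要修订源代码({FILE_BASENAME}),使用 {LANG} 注释:\n{THE_CODE}\n{INDENT_REMINDER}{BRIEF_REMINDER}{HINT_REMINDER}"
prompt = core_prompt.format(
    LANG="chinese",
    FILE_BASENAME="demo.py",
    THE_CODE="L0001 |def f():\nL0002 |    return 1",
    INDENT_REMINDER="",
    BRIEF_REMINDER="",
    HINT_REMINDER="(Reminder: keep `return 1` intact.)",
)
print(prompt)
```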
class PythonCodeComment():
def __init__(self, llm_kwargs, language) -> None:
def __init__(self, llm_kwargs, plugin_kwargs, language, observe_window_update) -> None:
self.original_content = ""
self.full_context = []
self.full_context_with_line_no = []
@@ -127,7 +184,13 @@ class PythonCodeComment():
self.page_limit = 100 # 100 lines of code each page
self.ignore_limit = 20
self.llm_kwargs = llm_kwargs
self.plugin_kwargs = plugin_kwargs
self.language = language
self.observe_window_update = observe_window_update
if self.language == "chinese":
self.core_prompt = revise_funtion_prompt_chinese
else:
self.core_prompt = revise_funtion_prompt
self.path = None
self.file_basename = None
self.file_brief = ""
@@ -258,7 +321,7 @@ class PythonCodeComment():
hint_reminder = "" if hint is None else f"(Reminder: do not ignore or modify code such as `{hint}`, provide complete code in the OUTPUT.)"
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=revise_funtion_prompt.format(
inputs=self.core_prompt.format(
LANG=self.language,
FILE_BASENAME=self.file_basename,
THE_CODE=code,
@@ -348,6 +411,7 @@ class PythonCodeComment():
try:
# yield from update_ui_lastest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
next_batch, line_no_start, line_no_end = self.get_next_batch()
self.observe_window_update(f"正在处理{self.file_basename} - {line_no_start}/{len(self.full_context)}\n")
# yield from update_ui_lastest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)
hint = None


@@ -1,39 +1,47 @@
import ast
import token
import tokenize
import copy
import io
class CommentRemover(ast.NodeTransformer):
def visit_FunctionDef(self, node):
# 移除函数的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def visit_ClassDef(self, node):
# 移除类的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def visit_Module(self, node):
# 移除模块的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def remove_python_comments(input_source: str) -> str:
source_flag = copy.copy(input_source)
source = io.StringIO(input_source)
ls = input_source.split('\n')
prev_toktype = token.INDENT
readline = source.readline
def get_char_index(lineno, col):
# find the index of the char in the source code
if lineno == 1:
return len('\n'.join(ls[:(lineno-1)])) + col
else:
return len('\n'.join(ls[:(lineno-1)])) + col + 1
def replace_char_between(start_lineno, start_col, end_lineno, end_col, source, replace_char, ls):
# replace char between start_lineno, start_col and end_lineno, end_col with replace_char, but keep '\n' and ' '
b = get_char_index(start_lineno, start_col)
e = get_char_index(end_lineno, end_col)
for i in range(b, e):
if source[i] == '\n':
source = source[:i] + '\n' + source[i+1:]
elif source[i] == ' ':
source = source[:i] + ' ' + source[i+1:]
else:
source = source[:i] + replace_char + source[i+1:]
return source
tokgen = tokenize.generate_tokens(readline)
for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
if toktype == token.STRING and (prev_toktype == token.INDENT):
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
elif toktype == token.STRING and (prev_toktype == token.NEWLINE):
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
elif toktype == tokenize.COMMENT:
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
prev_toktype = toktype
return source_flag
def remove_python_comments(source_code):
# 解析源代码为 AST
tree = ast.parse(source_code)
# 移除注释
transformer = CommentRemover()
tree = transformer.visit(tree)
# 将处理后的 AST 转换回源代码
return ast.unparse(tree)
# 示例使用
if __name__ == "__main__":


@@ -1,141 +0,0 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
import datetime, json
def fetch_items(list_of_items, batch_size):
for i in range(0, len(list_of_items), batch_size):
yield list_of_items[i:i + batch_size]
def string_to_options(arguments):
import argparse
import shlex
# Create an argparse.ArgumentParser instance
parser = argparse.ArgumentParser()
# Add command-line arguments
parser.add_argument("--llm_to_learn", type=str, help="LLM model to learn", default="gpt-3.5-turbo")
parser.add_argument("--prompt_prefix", type=str, help="Prompt prefix", default='')
parser.add_argument("--system_prompt", type=str, help="System prompt", default='')
parser.add_argument("--batch", type=int, help="System prompt", default=50)
parser.add_argument("--pre_seq_len", type=int, help="pre_seq_len", default=50)
parser.add_argument("--learning_rate", type=float, help="learning_rate", default=2e-2)
parser.add_argument("--num_gpus", type=int, help="num_gpus", default=1)
parser.add_argument("--json_dataset", type=str, help="json_dataset", default="")
parser.add_argument("--ptuning_directory", type=str, help="ptuning_directory", default="")
# Parse the arguments
args = parser.parse_args(shlex.split(arguments))
return args
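For reference, this is how the plugin's `advanced_arg` string flows through `string_to_options` (the argument string itself is illustrative):
```
# Illustrative call to string_to_options (defined above); shlex handles the quoting.
args = string_to_options("--llm_to_learn gpt-3.5-turbo --batch 32 --json_dataset t_code.json")
print(args.llm_to_learn, args.batch, args.json_dataset)  # gpt-3.5-turbo 32 t_code.json
```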
@CatchException
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
arguments = string_to_options(arguments=args)
dat = []
with open(txt, 'r', encoding='utf8') as f:
for line in f.readlines():
json_dat = json.loads(line)
dat.append(json_dat["content"])
llm_kwargs['llm_model'] = arguments.llm_to_learn
for batch in fetch_items(dat, arguments.batch):
res = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[f"{arguments.prompt_prefix}\n\n{b}" for b in (batch)],
inputs_show_user_array=[f"Show Nothing" for _ in (batch)],
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[] for _ in (batch)],
sys_prompt_array=[arguments.system_prompt for _ in (batch)],
max_workers=10 # OpenAI所允许的最大并行过载
)
with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
for b, r in zip(batch, res[1::2]):
f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')
promote_file_to_downloadzone(txt+'.generated.json', rename_file='generated.json', chatbot=chatbot)
return
@CatchException
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
import subprocess
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
arguments = string_to_options(arguments=args)
pre_seq_len = arguments.pre_seq_len # 128
learning_rate = arguments.learning_rate # 2e-2
num_gpus = arguments.num_gpus # 1
json_dataset = arguments.json_dataset # 't_code.json'
ptuning_directory = arguments.ptuning_directory # '/home/hmp/ChatGLM2-6B/ptuning'
command = f"torchrun --standalone --nnodes=1 --nproc-per-node={num_gpus} main.py \
--do_train \
--train_file AdvertiseGen/{json_dataset} \
--validation_file AdvertiseGen/{json_dataset} \
--preprocessing_num_workers 20 \
--prompt_column content \
--response_column summary \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/clothgen-chatglm2-6b-pt-{pre_seq_len}-{learning_rate} \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 256 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 100 \
--logging_steps 10 \
--save_steps 20 \
--learning_rate {learning_rate} \
--pre_seq_len {pre_seq_len} \
--quantization_bit 4"
process = subprocess.Popen(command, shell=True, cwd=ptuning_directory)
try:
process.communicate(timeout=3600*24)
except subprocess.TimeoutExpired:
process.kill()
return


@@ -169,6 +169,7 @@ def can_multi_process(llm) -> bool:
def default_condition(llm) -> bool:
# legacy condition
if llm.startswith('gpt-'): return True
if llm.startswith('chatgpt-'): return True
if llm.startswith('api2d-'): return True
if llm.startswith('azure-'): return True
if llm.startswith('spark'): return True


@@ -3,7 +3,7 @@ import re
import shutil
import numpy as np
from loguru import logger
from toolbox import update_ui, update_ui_lastest_msg, get_log_folder
from toolbox import update_ui, update_ui_lastest_msg, get_log_folder, gen_time_str
from toolbox import get_conf, promote_file_to_downloadzone
from crazy_functions.latex_fns.latex_toolbox import PRESERVE, TRANSFORM
from crazy_functions.latex_fns.latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
@@ -300,7 +300,8 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
# <-------- 写出文件 ---------->
msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}"
model_name = llm_kwargs['llm_model'].replace('_', '\\_') # 替换LLM模型名称中的下划线为转义字符
msg = f"当前大语言模型: {model_name},当前语言模型温度设定: {llm_kwargs['temperature']}"
final_tex = lps.merge_result(pfg.file_result, mode, msg)
objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))
@@ -351,6 +352,41 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
# 检查是否需要使用xelatex
def check_if_need_xelatex(tex_path):
try:
with open(tex_path, 'r', encoding='utf-8', errors='replace') as f:
content = f.read(5000)
# 检查是否有使用xelatex的宏包
need_xelatex = any(
pkg in content
for pkg in ['fontspec', 'xeCJK', 'xetex', 'unicode-math', 'xltxtra', 'xunicode']
)
if need_xelatex:
logger.info(f"检测到宏包需要xelatex编译, 切换至xelatex编译")
else:
logger.info(f"未检测到宏包需要xelatex编译, 使用pdflatex编译")
return need_xelatex
except Exception:
return False
# 根据编译器类型返回编译命令
def get_compile_command(compiler, filename):
compile_command = f'{compiler} -interaction=batchmode -file-line-error {filename}.tex'
logger.info(f'Latex 编译指令: {compile_command}')
return compile_command
# 确定使用的编译器
compiler = 'pdflatex'
if check_if_need_xelatex(pj(work_folder_modified, f'{main_file_modified}.tex')):
logger.info("检测到宏包需要xelatex编译,切换至xelatex编译")
# Check if xelatex is installed
try:
import subprocess
subprocess.run(['xelatex', '--version'], capture_output=True, check=True)
compiler = 'xelatex'
except (subprocess.CalledProcessError, FileNotFoundError):
raise RuntimeError("检测到需要使用xelatex编译,但系统中未安装xelatex。请先安装texlive或其他提供xelatex的LaTeX发行版。")
while True:
import os
@@ -361,10 +397,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
# https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
# 只有第二步成功,才能继续下面的步骤
@@ -375,10 +411,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
if mode!='translate_zh':
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
@@ -386,10 +422,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', os.getcwd())
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
# <---------- 检查结果 ----------->
results_ = ""
@@ -468,3 +504,70 @@ def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
except:
from toolbox import trimmed_format_exc
logger.error('writing html result failed:', trimmed_format_exc())
def upload_to_gptac_cloud_if_user_allow(chatbot, arxiv_id):
try:
# 如果用户允许,我们将arxiv论文PDF上传到GPTAC学术云
from toolbox import map_file_to_sha256
# 检查是否顺利,如果没有生成预期的文件,则跳过
is_result_good = False
for file_path in chatbot._cookies.get("files_to_promote", []):
if file_path.endswith('translate_zh.pdf'):
is_result_good = True
if not is_result_good:
return
# 上传文件
for file_path in chatbot._cookies.get("files_to_promote", []):
align_name = None
# normalized name
for name in ['translate_zh.pdf', 'comparison.pdf']:
if file_path.endswith(name): align_name = name
# if match any align name
if align_name:
logger.info(f'Uploading to GPTAC cloud as the user has set `allow_cloud_io`: {file_path}')
with open(file_path, 'rb') as f:
import requests
url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_upload'
files = {'file': (align_name, f, 'application/octet-stream')}
data = {
'arxiv_id': arxiv_id,
'file_hash': map_file_to_sha256(file_path),
'language': 'zh',
'trans_prompt': 'to_be_implemented',
'llm_model': 'to_be_implemented',
'llm_model_param': 'to_be_implemented',
}
resp = requests.post(url=url, files=files, data=data, timeout=30)
logger.info(f'Upload finished ({resp.status_code}): {file_path}')
except:
# 如果上传失败,不会中断程序,因为这是次要功能
pass
def check_gptac_cloud(arxiv_id, chatbot):
import requests
success = False
downloaded = []
try:
for pdf_target in ['translate_zh.pdf', 'comparison.pdf']:
url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_exist'
data = {
'arxiv_id': arxiv_id,
'name': pdf_target,
}
resp = requests.post(url=url, data=data)
cache_hit_result = resp.text.strip('"')
if cache_hit_result.startswith("http"):
url = cache_hit_result
logger.info(f'Downloading from GPTAC cloud: {url}')
resp = requests.get(url=url, timeout=30)
target = os.path.join(get_log_folder(plugin_name='gptac_cloud'), gen_time_str(), pdf_target)
os.makedirs(os.path.dirname(target), exist_ok=True)
with open(target, 'wb') as f:
f.write(resp.content)
new_path = promote_file_to_downloadzone(target, chatbot=chatbot)
success = True
downloaded.append(new_path)
except:
pass
return success, downloaded
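A sketch of how the cache lookup above can short-circuit a fresh translation run (the arxiv id is a placeholder, and `chatbot` stands for the UI handle the plugin framework passes in):
```
# Editor's sketch: consult the GPTAC cloud cache before recompiling a translation.
hit, cached_files = check_gptac_cloud("2303.08774", chatbot)
if hit:
    print("cache hit, reuse:", cached_files)  # files were already promoted to the download zone
```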


@@ -6,12 +6,16 @@ class SafeUnpickler(pickle.Unpickler):
def get_safe_classes(self):
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
from numpy.core.multiarray import scalar
from numpy import dtype
# 定义允许的安全类
safe_classes = {
# 在这里添加其他安全的类
'LatexPaperFileGroup': LatexPaperFileGroup,
'LatexPaperSplit': LatexPaperSplit,
'LinkedListNode': LinkedListNode,
'scalar': scalar,
'dtype': dtype,
}
return safe_classes
@@ -22,8 +26,6 @@ class SafeUnpickler(pickle.Unpickler):
for class_name in self.safe_classes.keys():
if (class_name in f'{module}.{name}'):
match_class_name = class_name
if module == 'numpy' or module.startswith('numpy.'):
return super().find_class(module, name)
if match_class_name is not None:
return self.safe_classes[match_class_name]
# 如果尝试加载未授权的类,则抛出异常


@@ -712,134 +712,137 @@ def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
# 内部链接:跳转到文档中的某个页面
dest = action.get("/D") # 目标页或目标位置
# if dest and annot.idnum in page2_annot_id:
if dest in pdf2_reader.named_destinations:
# 获取原始文件中跳转信息,包括跳转页面
destination = pdf2_reader.named_destinations[
dest
]
page_number = (
pdf2_reader.get_destination_page_number(
destination
)
)
# 更新跳转信息:跳转到对应的页面,并指定坐标 (100, 150),缩放比例为 100%
# “/D”:[10,'/XYZ',100,100,0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
+ int(
page1.mediaBox.getWidth()
)
),
destination.dest_array[3],
destination.dest_array[4],
]
) # 确保键和值是 PdfObject
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # 确保键和值是 PdfObject
}
)
rect = annot_obj.get("/Rect")
# 更新点击坐标
rect = ArrayObject(
[
FloatObject(
rect[0]
+ int(page1.mediaBox.getWidth())
),
rect[1],
FloatObject(
rect[2]
+ int(page1.mediaBox.getWidth())
),
rect[3],
# if dest in pdf2_reader.named_destinations:
if dest and page2.annotations:
if annot in page2.annotations:
# 获取原始文件中跳转信息,包括跳转页面
destination = pdf2_reader.named_destinations[
dest
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # 确保键和值是 PdfObject
}
)
page_number = (
pdf2_reader.get_destination_page_number(
destination
)
)
# 更新跳转信息:跳转到对应的页面,并指定坐标 (100, 150),缩放比例为 100%
# “/D”:[10,'/XYZ',100,100,0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
+ int(
page1.mediaBox.getWidth()
)
),
destination.dest_array[3],
destination.dest_array[4],
]
) # 确保键和值是 PdfObject
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # 确保键和值是 PdfObject
}
)
rect = annot_obj.get("/Rect")
# 更新点击坐标
rect = ArrayObject(
[
FloatObject(
rect[0]
+ int(page1.mediaBox.getWidth())
),
rect[1],
FloatObject(
rect[2]
+ int(page1.mediaBox.getWidth())
),
rect[3],
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # 确保键和值是 PdfObject
}
)
# if dest and annot.idnum in page1_annot_id:
if dest in pdf1_reader.named_destinations:
# 获取原始文件中跳转信息,包括跳转页面
destination = pdf1_reader.named_destinations[
dest
]
page_number = (
pdf1_reader.get_destination_page_number(
destination
)
)
# 更新跳转信息:跳转到对应的页面,并指定坐标 (100, 150),缩放比例为 100%
# “/D”:[10,'/XYZ',100,100,0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
),
destination.dest_array[3],
destination.dest_array[4],
]
) # 确保键和值是 PdfObject
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # 确保键和值是 PdfObject
}
)
rect = annot_obj.get("/Rect")
rect = ArrayObject(
[
FloatObject(rect[0]),
rect[1],
FloatObject(rect[2]),
rect[3],
# if dest in pdf1_reader.named_destinations:
if dest and page1.annotations:
if annot in page1.annotations:
# 获取原始文件中跳转信息,包括跳转页面
destination = pdf1_reader.named_destinations[
dest
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # 确保键和值是 PdfObject
}
)
page_number = (
pdf1_reader.get_destination_page_number(
destination
)
)
# 更新跳转信息:跳转到对应的页面,并指定坐标 (100, 150),缩放比例为 100%
# “/D”:[10,'/XYZ',100,100,0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
),
destination.dest_array[3],
destination.dest_array[4],
]
) # 确保键和值是 PdfObject
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # 确保键和值是 PdfObject
}
)
rect = annot_obj.get("/Rect")
rect = ArrayObject(
[
FloatObject(rect[0]),
rect[1],
FloatObject(rect[2]),
rect[3],
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # 确保键和值是 PdfObject
}
)
elif "/S" in action and action["/S"] == "/URI":
# 外部链接跳转到某个URI


@@ -0,0 +1,43 @@
from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
from shared_utils.docker_as_service_api import stream_daas
from shared_utils.docker_as_service_api import DockerServiceApiComModel
import random
def download_video(video_id, only_audio, user_name, chatbot, history):
from toolbox import get_log_folder
chatbot.append([None, "Processing..."])
yield from update_ui(chatbot, history)
client_command = f'{video_id} --audio-only' if only_audio else video_id
server_urls = get_conf('DAAS_SERVER_URLS')
server_url = random.choice(server_urls)
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder(user_name, plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
status_buf = ""
status_buf += "DaaS message: \n\n"
status_buf += output_manifest['server_message'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard error: \n\n"
status_buf += output_manifest['server_std_err'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard output: \n\n"
status_buf += output_manifest['server_std_out'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS file attach: \n\n"
status_buf += str(output_manifest['server_file_attach'])
yield from update_ui_lastest_msg(status_buf, chatbot, history)
return output_manifest['server_file_attach']
def search_videos(keywords):
from toolbox import get_log_folder
client_command = keywords
server_urls = get_conf('DAAS_SERVER_URLS')
server_url = random.choice(server_urls)
server_url = server_url.replace('stream', 'search')
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder("default_user", plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
return output_manifest['server_message']


@@ -4,126 +4,228 @@ from toolbox import promote_file_to_downloadzone, extract_archive
from toolbox import generate_file_link, zip_folder
from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
from loguru import logger
import os
import requests
import time
def retry_request(max_retries=3, delay=3):
"""
Decorator for retrying HTTP requests
Args:
max_retries: Maximum number of retry attempts
delay: Delay between retries in seconds
"""
def decorator(func):
def wrapper(*args, **kwargs):
for attempt in range(max_retries):
try:
return func(*args, **kwargs)
except Exception as e:
if attempt < max_retries - 1:
logger.error(
f"Request failed, retrying... ({attempt + 1}/{max_retries}) Error: {e}"
)
time.sleep(delay)
continue
raise e
return None
return wrapper
return decorator
@retry_request()
def make_request(method, url, **kwargs):
"""
Make HTTP request with retry mechanism
"""
return requests.request(method, url, **kwargs)
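A sketch of the decorator applied to an arbitrary callable (the function name and URL are illustrative):
```
import requests

@retry_request(max_retries=2, delay=1)
def fetch_health(url):
    # Raises on HTTP errors so the decorator above can retry transient failures.
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp

# fetch_health("https://example.com/health")  # retried up to 2 times, then the error propagates
```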
def doc2x_api_response_status(response, uid=""):
"""
Check the status of Doc2x API response
Args:
response: Response object from Doc2x API
"""
response_json = response.json()
response_data = response_json.get("data", {})
code = response_json.get("code", "Unknown")
msg = response_data.get("message", response_json)
trace_id = response.headers.get("trace-id", "Failed to get trace-id")
if response.status_code != 200:
raise RuntimeError(
f"Doc2x return an error:\nTrace ID: {trace_id} {uid}\n{response.status_code} - {response_json}"
)
if code in ["parse_page_limit_exceeded", "parse_concurrency_limit"]:
raise RuntimeError(
f"Reached the limit of Doc2x:\nTrace ID: {trace_id} {uid}\n{code} - {meg}"
)
if code not in ["ok", "success"]:
raise RuntimeError(
f"Doc2x return an error:\nTrace ID: {trace_id} {uid}\n{code} - {meg}"
)
return response_data
def refresh_key(doc2x_api_key):
import requests, json
url = "https://api.doc2x.noedgeai.com/api/token/refresh"
res = requests.post(
url,
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
res_json = json.loads(decoded)
doc2x_api_key = res_json['data']['token']
else:
raise RuntimeError("[ERROR] status code: %d, body: %s" % (res.status_code, res.text))
return doc2x_api_key
def 解析PDF_DOC2X_转Latex(pdf_file_path):
import requests, json, os
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
zip_file_path, unzipped_folder = 解析PDF_DOC2X(pdf_file_path, format="tex")
return unzipped_folder
def 解析PDF_DOC2X(pdf_file_path, format="tex"):
"""
format: 'tex', 'md', 'docx'
"""
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
res = requests.post(
url,
files={"file": open(pdf_file_path, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
# < ------ 第1步预上传获取URL,然后上传文件 ------ >
logger.info("Doc2x 上传文件预上传获取URL")
res = make_request(
"POST",
"https://v2.doc2x.noedgeai.com/api/v2/parse/preupload",
headers={"Authorization": "Bearer " + doc2x_api_key},
timeout=15,
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
res_data = doc2x_api_response_status(res)
upload_url = res_data["url"]
uuid = res_data["uid"]
uuid = res_json[0]['uuid']
to = "latex" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
logger.info("Doc2x 上传文件:上传文件")
with open(pdf_file_path, "rb") as file:
res = make_request("PUT", upload_url, data=file, timeout=60)
res.raise_for_status()
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
latex_zip_path = os.path.join(latex_dir, gen_time_str() + '.zip')
latex_unzip_path = os.path.join(latex_dir, gen_time_str())
if res.status_code == 200:
with open(latex_zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
import zipfile
with zipfile.ZipFile(latex_zip_path, 'r') as zip_ref:
zip_ref.extractall(latex_unzip_path)
return latex_unzip_path
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def pdf2markdown(filepath):
import requests, json, os
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
if doc2x_api_key.startswith('sk-'):
url = "https://api.doc2x.noedgeai.com/api/v1/pdf"
else:
doc2x_api_key = refresh_key(doc2x_api_key)
url = "https://api.doc2x.noedgeai.com/api/platform/pdf"
chatbot.append((None, "加载PDF文件,发送至DOC2X解析..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
res = requests.post(
url,
files={"file": open(filepath, "rb")},
data={"ocr": "1"},
headers={"Authorization": "Bearer " + doc2x_api_key}
# < ------ 第2步轮询等待 ------ >
logger.info("Doc2x 处理文件中:轮询等待")
params = {"uid": uuid}
max_attempts = 60
attempt = 0
while attempt < max_attempts:
res = make_request(
"GET",
"https://v2.doc2x.noedgeai.com/api/v2/parse/status",
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params,
timeout=15,
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
for z_decoded in decoded.split('\n'):
if len(z_decoded) == 0: continue
assert z_decoded.startswith("data: ")
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
if 'limit exceeded' in decoded_json.get('status', ''):
raise RuntimeError("Doc2x API 页数受限,请联系 Doc2x 方面,并更换新的 API 秘钥。")
res_data = doc2x_api_response_status(res)
if res_data["status"] == "success":
break
elif res_data["status"] == "processing":
time.sleep(5)
logger.info(f"Doc2x is processing at {res_data['progress']}%")
attempt += 1
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
to = "md" # latex, md, docx
url = "https://api.doc2x.noedgeai.com/api/export"+"?request_id="+uuid+"&to="+to
raise RuntimeError(f"Doc2x return an error: {res_data}")
if attempt >= max_attempts:
raise RuntimeError("Doc2x processing timeout after maximum attempts")
chatbot.append((None, f"读取解析: {url} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# < ------ 第3步提交转化 ------ >
logger.info("Doc2x 第3步提交转化")
data = {
"uid": uuid,
"to": format,
"formula_mode": "dollar",
"filename": "output"
}
res = make_request(
"POST",
"https://v2.doc2x.noedgeai.com/api/v2/convert/parse",
headers={"Authorization": "Bearer " + doc2x_api_key},
json=data,
timeout=15,
)
doc2x_api_response_status(res, uid=f"uid: {uuid}")
# < ------ 第4步等待结果 ------ >
logger.info("Doc2x 第4步等待结果")
params = {"uid": uuid}
max_attempts = 36
attempt = 0
while attempt < max_attempts:
res = make_request(
"GET",
"https://v2.doc2x.noedgeai.com/api/v2/convert/parse/result",
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params,
timeout=15,
)
res_data = doc2x_api_response_status(res, uid=f"uid: {uuid}")
if res_data["status"] == "success":
break
elif res_data["status"] == "processing":
time.sleep(3)
logger.info("Doc2x still processing to convert file")
attempt += 1
if attempt >= max_attempts:
raise RuntimeError("Doc2x conversion timeout after maximum attempts")
# < ------ 第5步最后的处理 ------ >
logger.info("Doc2x 第5步下载转换后的文件")
if format == "tex":
target_path = latex_dir
if format == "md":
target_path = markdown_dir
os.makedirs(target_path, exist_ok=True)
max_attempt = 3
# < ------ 下载 ------ >
for attempt in range(max_attempt):
try:
result_url = res_data["url"]
res = make_request("GET", result_url, timeout=60)
zip_path = os.path.join(target_path, gen_time_str() + ".zip")
unzip_path = os.path.join(target_path, gen_time_str())
if res.status_code == 200:
with open(zip_path, "wb") as f:
f.write(res.content)
break  # 下载成功,结束重试
else:
raise RuntimeError(f"Doc2x returned an error: {res.json()}")
except Exception as e:
if attempt < max_attempt - 1:
logger.error(f"Failed to download uid = {uuid} file, retrying... {e}")
time.sleep(3)
continue
else:
raise e
# < ------ 解压 ------ >
import zipfile
with zipfile.ZipFile(zip_path, "r") as zip_ref:
zip_ref.extractall(unzip_path)
return zip_path, unzip_path
def 解析PDF_DOC2X_单文件(
fp,
project_folder,
llm_kwargs,
plugin_kwargs,
chatbot,
history,
system_prompt,
DOC2X_API_KEY,
user_request,
):
def pdf2markdown(filepath):
chatbot.append((None, f"Doc2x 解析中"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path, unzipped_folder = 解析PDF_DOC2X(filepath, format="md")
res = requests.get(url, headers={"Authorization": "Bearer " + doc2x_api_key})
md_zip_path = os.path.join(markdown_dir, gen_time_str() + '.zip')
if res.status_code == 200:
with open(md_zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
@@ -137,77 +239,97 @@ def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, cha
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
extract_archive(file_path=this_file_path, dest_dir=ex_folder)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
success, file_manifest, project_folder = get_files_from_everything(
ex_folder, type=".md"
)
for generated_fp in file_manifest:
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f:
with open(generated_fp, "r", encoding="utf8") as f:
content = f.read()
# 将公式中的\[ \]替换成$$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
content = content.replace(r"\[", r"$$").replace(r"\]", r"$$")
# 将公式中的\( \)替换成$
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
content = content.replace(r"\(", r"$").replace(r"\)", r"$")
content = content.replace("```markdown", "\n").replace("```", "\n")
with open(generated_fp, "w", encoding="utf8") as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 生成在线预览html
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
file_name = "在线预览翻译(原文)" + gen_time_str() + ".html"
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from shared_utils.advanced_markdown_format import (
markdown_convertion_for_file,
)
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# # Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
# md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
with open(preview_fp, "w", encoding="utf-8") as f:
f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
plugin_kwargs["markdown_expected_output_dir"] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
translated_f_name = "translated_markdown.md"
generated_fp = plugin_kwargs["markdown_expected_output_path"] = os.path.join(
ex_folder, translated_f_name
)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(
ex_folder,
llm_kwargs,
plugin_kwargs,
chatbot,
history,
system_prompt,
user_request,
)
if os.path.exists(generated_fp):
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, "r", encoding="utf8") as f:
content = f.read()
content = content.replace("```markdown", "\n").replace("```", "\n")
# Markdown中使用不标准的表格,需要在表格前加上一个emoji,以便公式渲染
# content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
with open(generated_fp, "w", encoding="utf8") as f:
f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'
file_name = "在线预览翻译" + gen_time_str() + ".html"
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from shared_utils.advanced_markdown_format import (
markdown_convertion_for_file,
)
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
with open(preview_fp, "w", encoding="utf-8") as f:
f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# 生成包含图片的压缩包
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_name = "翻译后的带图文档.zip"
zip_folder(
source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name
)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_基于DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return
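For orientation, the five steps above condense to the following call sequence (a sketch assuming the v2 endpoints behave as in this diff; retries and error handling are omitted):
```
import time, requests

def doc2x_v2_flow(api_key, pdf_path, fmt="md"):
    # Sketch of the sequence implemented above; endpoints follow this diff.
    base = "https://v2.doc2x.noedgeai.com/api/v2"
    headers = {"Authorization": "Bearer " + api_key}
    # 1) preupload: obtain a signed upload URL and a task uid
    data = requests.post(f"{base}/parse/preupload", headers=headers, timeout=15).json()["data"]
    upload_url, uid = data["url"], data["uid"]
    # 2) PUT the raw PDF bytes to the signed URL
    with open(pdf_path, "rb") as f:
        requests.put(upload_url, data=f, timeout=60).raise_for_status()
    # 3) poll parsing status until success
    while requests.get(f"{base}/parse/status", headers=headers,
                       params={"uid": uid}, timeout=15).json()["data"]["status"] != "success":
        time.sleep(5)
    # 4) ask for conversion into the target format
    requests.post(f"{base}/convert/parse", headers=headers, timeout=15,
                  json={"uid": uid, "to": fmt, "formula_mode": "dollar", "filename": "output"})
    # 5) poll the conversion result and return the download URL of the zip
    while True:
        res = requests.get(f"{base}/convert/parse/result", headers=headers,
                           params={"uid": uid}, timeout=15).json()["data"]
        if res["status"] == "success":
            return res["url"]
        time.sleep(3)
```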


@@ -1,17 +1,13 @@
import llama_index
import os
import atexit
from loguru import logger
from typing import List
from llama_index.core import Document
from llama_index.core.schema import TextNode
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
from shared_utils.connect_void_terminal import get_chat_default_kwargs
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from llama_index.core.ingestion import run_transformations
from llama_index.core import PromptTemplate
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.core.schema import TextNode
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
@@ -63,7 +59,7 @@ class SaveLoad():
def purge(self):
import shutil
shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
self.vs_index = self.create_new_vs()
self.vs_index = self.create_new_vs(self.checkpoint_dir)
class LlamaIndexRagWorker(SaveLoad):
@@ -75,7 +71,7 @@ class LlamaIndexRagWorker(SaveLoad):
if auto_load_checkpoint:
self.vs_index = self.load_from_checkpoint(checkpoint_dir)
else:
self.vs_index = self.create_new_vs(checkpoint_dir)
self.vs_index = self.create_new_vs()
atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))
def assign_embedding_model(self):
@@ -91,40 +87,52 @@ class LlamaIndexRagWorker(SaveLoad):
logger.info('oo --------inspect_vector_store end--------')
return vector_store_preview
def add_documents_to_vector_store(self, document_list):
documents = [Document(text=t) for t in document_list]
def add_documents_to_vector_store(self, document_list: List[Document]):
"""
Adds a list of Document objects to the vector store after processing.
"""
documents = document_list
documents_nodes = run_transformations(
documents, # type: ignore
self.vs_index._transformations,
show_progress=True
)
documents, # type: ignore
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
def add_text_to_vector_store(self, text):
def add_text_to_vector_store(self, text: str):
node = TextNode(text=text)
documents_nodes = run_transformations(
[node],
self.vs_index._transformations,
show_progress=True
)
[node],
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
def remember_qa(self, question, answer):
formatted_str = QUESTION_ANSWER_RECORD.format(question=question, answer=answer)
self.add_text_to_vector_store(formatted_str)
def retrieve_from_store_with_query(self, query):
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
retriever = self.vs_index.as_retriever()
return retriever.retrieve(query)
def build_prompt(self, query, nodes):
context_str = self.generate_node_array_preview(nodes)
return DEFAULT_QUERY_GENERATION_PROMPT.format(context_str=context_str, query_str=query)
def generate_node_array_preview(self, nodes):
buf = "\n".join(([f"(No.{i+1} | score {n.score:.3f}): {n.text}" for i, n in enumerate(nodes)]))
if self.debug_mode: logger.info(buf)
return buf
def purge_vector_store(self):
"""
Purges the current vector store and creates a new one.
"""
self.purge()
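Taken together, the methods above support this retrieval flow (a sketch; `worker` stands for an already-constructed LlamaIndexRagWorker, whose constructor arguments are elided here):
```
# Sketch only: `worker` is assumed to be an initialized LlamaIndexRagWorker.
worker.add_text_to_vector_store("GPT Academic ships a RAG recall mode.")
worker.remember_qa("What is RAG?", "Retrieval-augmented generation.")
nodes = worker.retrieve_from_store_with_query("What recall modes exist?")
prompt = worker.build_prompt(query="What recall modes exist?", nodes=nodes)
```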


@@ -0,0 +1,22 @@
import os
from llama_index.core import SimpleDirectoryReader
supports_format = ['.csv', '.docx', '.epub', '.ipynb', '.mbox', '.md', '.pdf', '.txt', '.ppt',
'.pptm', '.pptx']
# 修改后的 extract_text 函数,结合 SimpleDirectoryReader 和自定义解析逻辑
def extract_text(file_path):
_, ext = os.path.splitext(file_path.lower())
# 使用 SimpleDirectoryReader 处理它支持的文件格式
if ext in supports_format:
try:
reader = SimpleDirectoryReader(input_files=[file_path])
documents = reader.load_data()
if len(documents) > 0:
return documents[0].text
except Exception as e:
pass
return None
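Usage is a one-liner (the file path is a placeholder):
```
# Minimal demo of extract_text above; the path is illustrative.
text = extract_text("./docs/example.pdf")
print(text[:200] if text else "unsupported or unreadable file")
```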

(file diff too large to display; omitted)

@@ -180,6 +180,7 @@ version: '3'
services:
gpt_academic_with_latex:
image: ghcr.io/binary-husky/gpt_academic_with_latex:master # (Auto Built by Dockerfile: docs/GithubAction+NoLocal+Latex)
# 对于ARM64设备,请将以上镜像名称替换为 ghcr.io/binary-husky/gpt_academic_with_latex_arm:master
environment:
# 请查阅 `config.py` 以查看所有的配置信息
API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '

查看文件

@@ -1,4 +1,4 @@
# 此Dockerfile适用于无本地模型的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
# 此Dockerfile适用于"无本地模型"的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
# - 1 修改 `config.py`
# - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/GithubAction+NoLocal+Latex .
# - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
@@ -7,15 +7,28 @@ FROM menghuan1918/ubuntu_uv_ctex:latest
ENV DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
WORKDIR /gpt
COPY . .
RUN /root/.cargo/bin/uv venv --seed \
&& source .venv/bin/activate \
&& /root/.cargo/bin/uv pip install openai numpy arxiv rich colorama Markdown pygments pymupdf python-docx pdfminer \
&& /root/.cargo/bin/uv pip install -r requirements.txt \
&& /root/.cargo/bin/uv clean
# 先复制依赖文件
COPY requirements.txt .
# 安装依赖
RUN pip install --break-system-packages openai numpy arxiv rich colorama Markdown pygments pymupdf python-docx pdfminer \
&& pip install --break-system-packages -r requirements.txt \
&& if [ "$(uname -m)" = "x86_64" ]; then \
pip install --break-system-packages nougat-ocr; \
fi \
&& pip cache purge \
&& rm -rf /root/.cache/pip/*
# 创建非root用户
RUN useradd -m gptuser && chown -R gptuser /gpt
USER gptuser
# 最后才复制代码文件,这样代码更新时只需重建最后几层,可以大幅减少docker pull所需的大小
COPY --chown=gptuser:gptuser . .
# 可选步骤,用于预热模块
RUN .venv/bin/python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 启动
CMD [".venv/bin/python3", "-u", "main.py"]
CMD ["python3", "-u", "main.py"]

docs/WindowsRun.bat

@@ -0,0 +1,26 @@
@echo off
setlocal
:: 设置环境变量
set ENV_NAME=gpt
set ENV_PATH=%~dp0%ENV_NAME%
set SCRIPT_PATH=%~dp0main.py
:: 判断环境是否已解压
if not exist "%ENV_PATH%" (
echo Extracting environment...
mkdir "%ENV_PATH%"
tar -xzf gpt.tar.gz -C "%ENV_PATH%"
:: 运行conda环境激活脚本
call "%ENV_PATH%\Scripts\activate.bat"
) else (
:: 如果环境已存在,直接激活
call "%ENV_PATH%\Scripts\activate.bat"
)
echo Start to run program:
:: 运行Python脚本
python "%SCRIPT_PATH%"
endlocal
pause

main.py

@@ -34,9 +34,9 @@ def encode_plugin_info(k, plugin)->str:
def main():
import gradio as gr
if gr.__version__ not in ['3.32.9', '3.32.10', '3.32.11']:
if gr.__version__ not in ['3.32.12']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
# 一些基础工具
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
@@ -57,8 +57,8 @@ def main():
# 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_toggle_darkmode
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
@@ -68,7 +68,7 @@ def main():
functional = get_core_functions()
# 高级函数插件
from crazy_functional import get_crazy_functions
from crazy_functional import get_crazy_functions, get_multiplex_button_functions
DEFAULT_FN_GROUPS = get_conf('DEFAULT_FN_GROUPS')
plugins = get_crazy_functions()
all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
@@ -106,7 +106,7 @@ def main():
with gr_L2(scale=2, elem_id="gpt-chat"):
chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}", elem_id="gpt-chatbot")
if LAYOUT == "TOP-DOWN": chatbot.style(height=CHATBOT_HEIGHT)
history, history_cache, history_cache_update = make_history_cache() # 定义 后端statehistory、前端history_cache、后端setterhistory_cache_update三兄弟
history, _, _ = make_history_cache() # 定义 后端statehistory、前端history_cache、后端setterhistory_cache_update三兄弟
with gr_L2(scale=1, elem_id="gpt-panel"):
with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
with gr.Row():
@@ -114,12 +114,7 @@ def main():
with gr.Row(elem_id="gpt-submit-row"):
multiplex_submit_btn = gr.Button("提交", elem_id="elem_submit_visible", variant="primary")
multiplex_sel = gr.Dropdown(
choices=[
"常规对话",
"多模型对话",
"智能召回 RAG",
# "智能上下文",
], value="常规对话",
choices=get_multiplex_button_functions().keys(), value="常规对话",
interactive=True, label='', show_label=False,
elem_classes='normal_mut_select', elem_id="gpt-submit-dropdown").style(container=False)
submit_btn = gr.Button("提交", elem_id="elem_submit", variant="primary", visible=False)
@@ -179,6 +174,7 @@ def main():
with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
# 左上角工具栏定义
from themes.gui_toolbar import define_gui_toolbar
checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
@@ -188,6 +184,9 @@ def main():
from themes.gui_floating_menu import define_gui_floating_menu
area_input_secondary, txt2, area_customize, _, resetBtn2, clearBtn2, stopBtn2 = \
define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache)
# 浮动时间线定义
gr.Spark()
# 插件二级菜单的实现
from themes.gui_advanced_plugin_class import define_gui_advanced_plugin_class
@@ -227,11 +226,11 @@ def main():
multiplex_sel.select(
None, [multiplex_sel], None, _js=f"""(multiplex_sel)=>run_multiplex_shift(multiplex_sel)""")
cancel_handles.append(submit_btn.click(**predict_args))
resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)), [history], [chatbot, history, status, history_cache])
resetBtn.click(*reset_server_side_args) # 再在后端清除history,把history转存history_cache备用
resetBtn2.click(*reset_server_side_args) # 再在后端清除history,把history转存history_cache备用
resetBtn.click(None, None, [chatbot, history, status], _js="""(a,b,c)=>clear_conversation(a,b,c)""") # 先在前端快速清除chatbot&status
resetBtn2.click(None, None, [chatbot, history, status], _js="""(a,b,c)=>clear_conversation(a,b,c)""") # 先在前端快速清除chatbot&status
# reset_server_side_args = (lambda history: ([], [], "已重置"), [history], [chatbot, history, status])
# resetBtn.click(*reset_server_side_args) # 再在后端清除history
# resetBtn2.click(*reset_server_side_args) # 再在后端清除history
clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
if AUTO_CLEAR_TXT:
@@ -331,7 +330,7 @@ def main():
from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js="""persistent_cookie_init""")
app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}","{TTS_TYPE}")""") # 配置暗色主题或亮色主题
app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_arr))


@@ -26,6 +26,9 @@ from .bridge_chatglm import predict as chatglm_ui
from .bridge_chatglm3 import predict_no_ui_long_connection as chatglm3_noui
from .bridge_chatglm3 import predict as chatglm3_ui
from .bridge_chatglm4 import predict_no_ui_long_connection as chatglm4_noui
from .bridge_chatglm4 import predict as chatglm4_ui
from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
from .bridge_qianfan import predict as qianfan_ui
@@ -76,6 +79,7 @@ cohere_endpoint = "https://api.cohere.ai/v1/chat"
ollama_endpoint = "http://localhost:11434/api/chat"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
deepseekapi_endpoint = "https://api.deepseek.com/v1/chat/completions"
grok_model_endpoint = "https://api.x.ai/v1/chat/completions"
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
@@ -97,6 +101,7 @@ if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[coher
if ollama_endpoint in API_URL_REDIRECT: ollama_endpoint = API_URL_REDIRECT[ollama_endpoint]
if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
if deepseekapi_endpoint in API_URL_REDIRECT: deepseekapi_endpoint = API_URL_REDIRECT[deepseekapi_endpoint]
if grok_model_endpoint in API_URL_REDIRECT: grok_model_endpoint = API_URL_REDIRECT[grok_model_endpoint]
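Each endpoint above can be rerouted through the API_URL_REDIRECT table before use; the new grok entry joins that mechanism. A minimal sketch of a config entry that would take effect here (the mirror URL is hypothetical):

    # config.py -- hypothetical mirror, for illustration only
    API_URL_REDIRECT = {
        "https://api.x.ai/v1/chat/completions": "https://mirror.example.com/grok/v1/chat/completions",
    }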
# get tokenizers
tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
@@ -212,6 +217,16 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
"chatgpt-4o-latest": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"has_multimodal_capacity": True,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-4o-2024-05-13": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -385,6 +400,14 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4-plus":{
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
# api_2d (no need to add api2d entries here anymore; the code further down adds them automatically)
"api2d-gpt-4": {
@@ -396,6 +419,7 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
# ChatGLM local models
# alias chatglm directly to chatglm2
"chatglm": {
"fn_with_ui": chatglm_ui,
@@ -421,6 +445,14 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"chatglm4": {
"fn_with_ui": chatglm4_ui,
"fn_without_ui": chatglm4_noui,
"endpoint": None,
"max_token": 8192,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qianfan": {
"fn_with_ui": qianfan_ui,
"fn_without_ui": qianfan_noui,
@@ -868,6 +900,31 @@ if any(item in yi_models for item in AVAIL_LLM_MODELS):
})
except:
logger.error(trimmed_format_exc())
# -=-=-=-=-=-=- Grok model from x.ai -=-=-=-=-=-=-
grok_models = ["grok-beta"]
if any(item in grok_models for item in AVAIL_LLM_MODELS):
try:
grok_beta_128k_noui, grok_beta_128k_ui = get_predict_function(
api_key_conf_name="GROK_API_KEY", max_output_token=8192, disable_proxy=False
)
model_info.update({
"grok-beta": {
"fn_with_ui": grok_beta_128k_ui,
"fn_without_ui": grok_beta_128k_noui,
"can_multi_thread": True,
"endpoint": grok_model_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
except:
logger.error(trimmed_format_exc())
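Once registered, grok-beta is dispatched like any other model_info entry; a sketch of what the registration above amounts to at call time (simplified, not the verbatim dispatch code):

    info = model_info["grok-beta"]
    fn_noui = info["fn_without_ui"]     # grok_beta_128k_noui, built by get_predict_function
    endpoint = info["endpoint"]         # grok_model_endpoint, after any API_URL_REDIRECT rewrite
    # 128k-token context window; max_output_token=8192 above caps each completion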
# -=-=-=-=-=-=- iFlytek Spark (讯飞星火认知大模型) -=-=-=-=-=-=-
if "spark" in AVAIL_LLM_MODELS:
try:
@@ -1285,4 +1342,3 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,
# update llm_kwargs, otherwise the parameters will not match
yield from method(inputs, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, stream, additional_fn)


@@ -23,39 +23,33 @@ class GetGLM3Handle(LocalLLMHandle):
import os
import platform
LOCAL_MODEL_QUANT, device = get_conf("LOCAL_MODEL_QUANT", "LOCAL_MODEL_DEVICE")
_model_name_ = "THUDM/chatglm3-6b"
# if LOCAL_MODEL_QUANT == "INT4": # INT4
# _model_name_ = "THUDM/chatglm3-6b-int4"
# elif LOCAL_MODEL_QUANT == "INT8": # INT8
# _model_name_ = "THUDM/chatglm3-6b-int8"
# else:
# _model_name_ = "THUDM/chatglm3-6b" # FP16
LOCAL_MODEL_PATH, LOCAL_MODEL_QUANT, device = get_conf("CHATGLM_LOCAL_MODEL_PATH", "LOCAL_MODEL_QUANT", "LOCAL_MODEL_DEVICE")
model_path = LOCAL_MODEL_PATH
with ProxyNetworkActivate("Download_LLM"):
chatglm_tokenizer = AutoTokenizer.from_pretrained(
_model_name_, trust_remote_code=True
model_path, trust_remote_code=True
)
if device == "cpu":
chatglm_model = AutoModel.from_pretrained(
_model_name_,
model_path,
trust_remote_code=True,
device="cpu",
).float()
elif LOCAL_MODEL_QUANT == "INT4": # INT4
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
elif LOCAL_MODEL_QUANT == "INT8": # INT8
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
else:
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
device="cuda",
)
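The hard-coded "THUDM/chatglm3-6b" is gone; the path now comes from config. A sketch of the matching config.py entries (values are illustrative; the option names are the ones read via get_conf above):

    CHATGLM_LOCAL_MODEL_PATH = "THUDM/chatglm3-6b"  # HF repo id or a local checkpoint directory
    LOCAL_MODEL_QUANT = "FP16"                      # "FP16" / "INT4" / "INT8" (BitsAndBytesConfig)
    LOCAL_MODEL_DEVICE = "cuda"                     # "cpu" or "cuda"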


@@ -0,0 +1,81 @@
model_name = "ChatGLM4"
cmd_to_install = """
`pip install -r request_llms/requirements_chatglm4.txt`
`pip install modelscope`
`modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat`
"""
from toolbox import get_conf, ProxyNetworkActivate
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetGLM4Handle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the subprocess
self.model_name = model_name
self.cmd_to_install = cmd_to_install
def load_model_and_tokenizer(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the subprocess
import torch
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer
import os
LOCAL_MODEL_PATH, device = get_conf("CHATGLM_LOCAL_MODEL_PATH", "LOCAL_MODEL_DEVICE")
model_path = LOCAL_MODEL_PATH
chatglm_tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
chatglm_model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True,
device=device
).eval().to(device)
self._model = chatglm_model
self._tokenizer = chatglm_tokenizer
return self._model, self._tokenizer
def llm_stream_generator(self, **kwargs):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the subprocess
def adaptor(kwargs):
query = kwargs["query"]
max_length = kwargs["max_length"]
top_p = kwargs["top_p"]
temperature = kwargs["temperature"]
history = kwargs["history"]
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
inputs = self._tokenizer.apply_chat_template([{"role": "user", "content": query}],
add_generation_prompt=True,
tokenize=True,
return_tensors="pt",
return_dict=True
).to(self._model.device)
gen_kwargs = {"max_length": max_length, "do_sample": True, "top_k": top_p}
outputs = self._model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
response = self._tokenizer.decode(outputs[0], skip_special_tokens=True)
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the main process
import importlib
# importlib.import_module('modelscope')
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(
GetGLM4Handle, model_name, history_format="chatglm3"
)
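A usage sketch for the generated entry points, assuming the call convention shared by the other local bridges in request_llms (the keyword names follow that convention, not this file):

    from request_llms.bridge_chatglm4 import predict_no_ui_long_connection

    llm_kwargs = {"max_length": 8192, "top_p": 0.8, "temperature": 0.7}  # illustrative values
    reply = predict_no_ui_long_connection(inputs="Summarize GLM-4 in one sentence.",
                                          llm_kwargs=llm_kwargs, history=[], sys_prompt="")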


@@ -202,10 +202,13 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
else: raise RuntimeError("意外Json结构"+delta)
if json_data and json_data['finish_reason'] == 'content_filter':
raise RuntimeError("由于提问含不合规内容被Azure过滤。")
if json_data and json_data['finish_reason'] == 'length':
finish_reason = json_data.get('finish_reason', None) if json_data else None
if finish_reason == 'content_filter':
raise RuntimeError("由于提问含不合规内容被过滤。")
if finish_reason == 'length':
raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
return result
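The old code indexed json_data['finish_reason'] directly, raising KeyError on chunks that omit the field, and its message wrongly pinned the filtering on Azure. Reading the field once with .get tolerates both; distilled:

    finish_reason = json_data.get('finish_reason', None) if json_data else None
    # None             -> ordinary chunk, keep streaming
    # 'content_filter' -> the provider blocked the request
    # 'length'         -> output truncated; the caller should trim the input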
@@ -536,4 +539,3 @@ def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_prompt:st
return headers,payload


@@ -26,7 +26,7 @@ class GetLlamaHandle(LocalLLMHandle):
import platform
huggingface_token, device = get_conf('HUGGINGFACE_ACCESS_TOKEN', 'LOCAL_MODEL_DEVICE')
assert len(huggingface_token) != 0, "没有填写 HUGGINGFACE_ACCESS_TOKEN"
with open(os.path.expanduser('~/.cache/huggingface/token'), 'w') as f:
with open(os.path.expanduser('~/.cache/huggingface/token'), 'w', encoding='utf8') as f:
f.write(huggingface_token)
model_id = 'meta-llama/Llama-2-7b-chat-hf'
with ProxyNetworkActivate('Download_LLM'):


@@ -31,7 +31,7 @@ class MoonShotInit:
files.append(f)
for file in files:
if file.split('.')[-1] in ['pdf']:
with open(file, 'r') as fp:
with open(file, 'r', encoding='utf8') as fp:
from crazy_functions.crazy_utils import read_and_clean_pdf_text
file_content, _ = read_and_clean_pdf_text(fp)
what_ask.append({"role": "system", "content": file_content})


@@ -75,7 +75,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
response = requests.post(endpoint, headers=headers, proxies=proxies,
response = requests.post(endpoint, headers=headers, proxies=None,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
@@ -152,10 +152,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history.append(inputs); history.append("")
retry = 0
if proxies is not None:
logger.error("Ollama不会使用代理服务器, 忽略了proxies的设置。")
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(endpoint, headers=headers, proxies=proxies,
response = requests.post(endpoint, headers=headers, proxies=None,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1
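One caveat: in the requests library, proxies=None merely means "fall back to environment variables", so this patch guarantees a direct connection to Ollama only when no HTTP(S)_PROXY is exported. A sketch of the stricter variant (an alternative, not what the patch does):

    import requests

    payload = {"model": "llama3", "messages": [{"role": "user", "content": "hi"}]}  # illustrative
    # an explicit None per scheme disables proxying outright, even if HTTP_PROXY is set
    response = requests.post("http://localhost:11434/api/chat", json=payload,
                             proxies={"http": None, "https": None}, stream=True, timeout=60)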


@@ -0,0 +1,7 @@
protobuf
cpm_kernels
torch>=1.10
transformers>=4.44
mdtex2html
sentencepiece
accelerate


@@ -1,4 +1,4 @@
https://public.agent-matrix.com/publish/gradio-3.32.10-py3-none-any.whl
https://public.agent-matrix.com/publish/gradio-3.32.12-py3-none-any.whl
fastapi==0.110
gradio-client==0.8
pypdf2==2.12.1
@@ -25,7 +25,7 @@ pyautogen
colorama
Markdown
pygments
edge-tts
edge-tts>=7.0.0
pymupdf
openai
rjsmin


@@ -77,16 +77,28 @@ def make_history_cache():
# define the trio: backend state (history), its frontend mirror (history_cache), and the backend setter (history_cache_update)
import gradio as gr
# the backend state for history
history = gr.State([])
# a hidden frontend twin of history
history_cache = gr.Textbox(visible=False, elem_id="history_cache")
# hidden history_cache -> history updater: triggering this button first runs JS to update history_cache, then Python to update history
def process_history_cache(history_cache):
return json.loads(history_cache)
# a simpler setter approach
history_cache_update = gr.Button("", elem_id="elem_update_history", visible=False).click(
process_history_cache, inputs=[history_cache], outputs=[history])
return history, history_cache, history_cache_update
# history = gr.State([])
history = gr.Textbox(visible=False, elem_id="history-ng")
# # a hidden frontend twin of history
# history_cache = gr.Textbox(visible=False, elem_id="history_cache")
# # hidden history_cache -> history updater: the button first runs JS to update history_cache, then Python to update history
# def process_history_cache(history_cache):
# return json.loads(history_cache)
# # a simpler setter approach
# history_cache_update = gr.Button("", elem_id="elem_update_history", visible=False).click(
# process_history_cache, inputs=[history_cache], outputs=[history])
# # save history to history_cache
# def process_history_cache(history_cache):
# return json.dumps(history_cache)
# # hidden history -> history_cache updater
# def sync_history_cache(history):
# print("sync_history_cache", history)
# return json.dumps(history)
# # history.change(sync_history_cache, inputs=[history], outputs=[history_cache])
# # history_cache_sync = gr.Button("", elem_id="elem_sync_history", visible=False).click(
# # lambda history: (json.dumps(history)), inputs=[history_cache], outputs=[history])
return history, None, None
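history thus stops being a server-side gr.State and becomes a JSON string held in the hidden #history-ng Textbox, where JavaScript can read and write it. The contract in miniature (mirroring the ArgsGeneralWrapper / update_ui changes further down):

    import json

    json_history = json.dumps(["question", "answer"], ensure_ascii=False)  # what update_ui yields out
    history = json.loads(json_history) if json_history else []            # what ArgsGeneralWrapper parses back in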


@@ -0,0 +1,83 @@
import requests
import pickle
import io
import os
from pydantic import BaseModel, Field
from typing import Optional, Dict, Any
from loguru import logger
class DockerServiceApiComModel(BaseModel):
client_command: Optional[str] = Field(default=None, title="Client command", description="The command to be executed on the client side")
client_file_attach: Optional[dict] = Field(default=None, title="Client file attach", description="The file to be attached to the client side")
server_message: Optional[Any] = Field(default=None, title="Server standard error", description="The standard error from the server side")
server_std_err: Optional[str] = Field(default=None, title="Server standard error", description="The standard error from the server side")
server_std_out: Optional[str] = Field(default=None, title="Server standard output", description="The standard output from the server side")
server_file_attach: Optional[dict] = Field(default=None, title="Server file attach", description="The file to be attached to the server side")
def process_received(received: DockerServiceApiComModel, save_file_dir="./daas_output", output_manifest=None):
# Process the received data
if received.server_message:
try:
output_manifest['server_message'] += received.server_message
except:
output_manifest['server_message'] = received.server_message
if received.server_std_err:
output_manifest['server_std_err'] += received.server_std_err
if received.server_std_out:
output_manifest['server_std_out'] += received.server_std_out
if received.server_file_attach:
# print(f"Recv file attach: {received.server_file_attach}")
for file_name, file_content in received.server_file_attach.items():
new_fp = os.path.join(save_file_dir, file_name)
new_fp_dir = os.path.dirname(new_fp)
if not os.path.exists(new_fp_dir):
os.makedirs(new_fp_dir, exist_ok=True)
with open(new_fp, 'wb') as f:
f.write(file_content)
output_manifest['server_file_attach'].append(new_fp)
return output_manifest
def stream_daas(docker_service_api_com_model, server_url, save_file_dir):
# Prepare the file
# Pickle the object
pickled_data = pickle.dumps(docker_service_api_com_model)
# Create a file-like object from the pickled data
file_obj = io.BytesIO(pickled_data)
# Prepare the file for sending
files = {'file': ('docker_service_api_com_model.pkl', file_obj, 'application/octet-stream')}
# Send the POST request
response = requests.post(server_url, files=files, stream=True)
max_full_package_size = 1024 * 1024 * 1024 * 1 # 1 GB
received_output_manifest = {}
received_output_manifest['server_message'] = ""
received_output_manifest['server_std_err'] = ""
received_output_manifest['server_std_out'] = ""
received_output_manifest['server_file_attach'] = []
# Check if the request was successful
if response.status_code == 200:
# Process the streaming response
chunk_buf = None
for chunk in response.iter_content(max_full_package_size):
if chunk:
if chunk_buf is None: chunk_buf = chunk
else: chunk_buf += chunk
try:
received = pickle.loads(chunk_buf)
chunk_buf = None
received_output_manifest = process_received(received, save_file_dir, output_manifest = received_output_manifest)
yield received_output_manifest
except Exception as e:
# logger.error(f"pickle data was truncated, but don't worry, we will continue to receive the rest of the data.")
continue
else:
logger.error(f"Error: Received status code {response.status_code}, response.text: {response.text}")
return received_output_manifest
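A usage sketch for the streaming client above (the server URL and attached file are hypothetical):

    request = DockerServiceApiComModel(
        client_command="xelatex main.tex",
        client_file_attach={"workfolder/main.tex": open("main.tex", "rb").read()},
    )
    for manifest in stream_daas(request, server_url="http://daas.example.com/run",
                                save_file_dir="./daas_output"):
        print(manifest["server_std_out"], end="")  # grows as pickled chunks arrive
    # manifest["server_file_attach"] then lists the files written under ./daas_output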


@@ -138,7 +138,9 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
app_block.is_sagemaker = False
gradio_app = App.create_app(app_block)
for route in list(gradio_app.router.routes):
if route.path == "/proxy={url_path:path}":
gradio_app.router.routes.remove(route)
# --- --- replace gradio endpoint to forbid access to sensitive files --- ---
if len(AUTHENTICATION) > 0:
dependencies = []
@@ -154,9 +156,13 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
@gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
@gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
async def file(path_or_url: str, request: fastapi.Request):
if len(AUTHENTICATION) > 0:
if not _authorize_user(path_or_url, request, gradio_app):
return "越权访问!"
if not _authorize_user(path_or_url, request, gradio_app):
return "越权访问!"
stripped = path_or_url.lstrip().lower()
if stripped.startswith("https://") or stripped.startswith("http://"):
return "账户密码授权模式下, 禁止链接!"
if '../' in stripped:
return "非法路径!"
return await endpoint(path_or_url, request)
from fastapi import Request, status
@@ -167,6 +173,26 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
response.delete_cookie('access-token')
response.delete_cookie('access-token-unsecure')
return response
else:
dependencies = []
endpoint = None
for route in list(gradio_app.router.routes):
if route.path == "/file/{path:path}":
gradio_app.router.routes.remove(route)
if route.path == "/file={path_or_url:path}":
dependencies = route.dependencies
endpoint = route.endpoint
gradio_app.router.routes.remove(route)
@gradio_app.get("/file/{path:path}", dependencies=dependencies)
@gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
@gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
async def file(path_or_url: str, request: fastapi.Request):
stripped = path_or_url.lstrip().lower()
if stripped.startswith("https://") or stripped.startswith("http://"):
return "账户密码授权模式下, 禁止链接!"
if '../' in stripped:
return "非法路径!"
return await endpoint(path_or_url, request)
# --- --- enable TTS (text-to-speech) functionality --- ---
TTS_TYPE = get_conf("TTS_TYPE")


@@ -104,17 +104,27 @@ def extract_archive(file_path, dest_dir):
logger.info("Successfully extracted zip archive to {}".format(dest_dir))
elif file_extension in [".tar", ".gz", ".bz2"]:
with tarfile.open(file_path, "r:*") as tarobj:
# sanitize extraction paths, removing anything unsafe
for member in tarobj.getmembers():
member_path = os.path.normpath(member.name)
full_path = os.path.join(dest_dir, member_path)
full_path = os.path.abspath(full_path)
if not full_path.startswith(os.path.abspath(dest_dir) + os.sep):
raise Exception(f"Attempted Path Traversal in {member.name}")
try:
with tarfile.open(file_path, "r:*") as tarobj:
# sanitize extraction paths, removing anything unsafe
for member in tarobj.getmembers():
member_path = os.path.normpath(member.name)
full_path = os.path.join(dest_dir, member_path)
full_path = os.path.abspath(full_path)
if not full_path.startswith(os.path.abspath(dest_dir) + os.sep):
raise Exception(f"Attempted Path Traversal in {member.name}")
tarobj.extractall(path=dest_dir)
logger.info("Successfully extracted tar archive to {}".format(dest_dir))
tarobj.extractall(path=dest_dir)
logger.info("Successfully extracted tar archive to {}".format(dest_dir))
except tarfile.ReadError as e:
if file_extension == ".gz":
# some oddball projects ship a .gz that is not a tar at all, just a single gzipped tex file
import gzip
with gzip.open(file_path, 'rb') as f_in:
with open(os.path.join(dest_dir, 'main.tex'), 'wb') as f_out:
f_out.write(f_in.read())
else:
raise e
# third-party library; requires pip install rarfile in advance
# on Windows, WinRAR must also be installed and added to Path, e.g. "C:\Program Files\WinRAR"


@@ -14,6 +14,7 @@ openai_regex = re.compile(
r"sk-[a-zA-Z0-9_-]{92}$|" +
r"sk-proj-[a-zA-Z0-9_-]{48}$|"+
r"sk-proj-[a-zA-Z0-9_-]{124}$|"+
r"sk-proj-[a-zA-Z0-9_-]{156}$|"+ #新版apikey位数不匹配故修改此正则表达式
r"sess-[a-zA-Z0-9]{40}$"
)
def is_openai_api_key(key):
@@ -77,7 +78,8 @@ def select_api_key(keys, llm_model):
avail_key_list = []
key_list = keys.split(',')
if llm_model.startswith('gpt-') or llm_model.startswith('one-api-') or llm_model.startswith('o1-'):
if llm_model.startswith('gpt-') or llm_model.startswith('chatgpt-') or \
llm_model.startswith('one-api-') or llm_model.startswith('o1-'):
for k in key_list:
if is_openai_api_key(k): avail_key_list.append(k)
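A sketch of the effect (the key below is fake, padded to match the 48-character sk-proj- alternative in the regex above):

    fake_key = "sk-proj-" + "a" * 48
    assert is_openai_api_key(fake_key)
    # "chatgpt-4o-latest" previously fell through this branch; the new chatgpt- prefix
    # check routes it to the OpenAI key pool as well
    api_key = select_api_key(fake_key, "chatgpt-4o-latest")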

tests/test_bilibili_down.py (new file)

@@ -0,0 +1,15 @@
"""
Test the project's plugins. Usage: run python tests/test_bilibili_down.py directly.
"""
import init_test
import os, sys
if __name__ == "__main__":
from experimental_mods.get_bilibili_resource import download_bilibili
download_bilibili("BV1LSSHYXEtv", only_audio=True, user_name="test")
# if __name__ == "__main__":
# from test_utils import plugin_test
# plugin_test(plugin='crazy_functions.VideoResource_GPT->视频任务', main_input="帮我找到《天文馆的猫》,歌手泠鸢")

tests/test_doc2x.py (new file)

@@ -0,0 +1,7 @@
import init_test
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_DOC2X_转Latex
# 解析PDF_DOC2X_转Latex("gpt_log/arxiv_cache_old/2410.10819/workfolder/merge.pdf")
# 解析PDF_DOC2X_转Latex("gpt_log/arxiv_cache_ooo/2410.07095/workfolder/merge.pdf")
解析PDF_DOC2X_转Latex("2410.11190v2.pdf")


@@ -19,4 +19,8 @@ if __name__ == "__main__":
plugin_test = importlib.import_module('test_utils').plugin_test
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="gpt_log/arxiv_cache/2203.01927/workfolder")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2410.05779")
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="gpt_log/default_user/workfolder")


@@ -29,8 +29,18 @@ graph TD
E --> B
D --> F[Save Image and Code]
F --> B
```
<details>
<summary><b>My section header in bold</b></summary>
Any folded content here. It requires an empty line just above it.
</details>
"""
def validate_path():
import os, sys
@@ -44,8 +54,8 @@ def validate_path():
validate_path() # validate path so you can run from base directory
from toolbox import markdown_convertion
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open("gpt_log/default_user/shared/2024-04-22-01-27-43.zip.extract/translated_markdown.md", "r", encoding="utf-8") as f:
md = f.read()
# with open("gpt_log/default_user/shared/2024-04-22-01-27-43.zip.extract/translated_markdown.md", "r", encoding="utf-8") as f:
# md = f.read()
html = markdown_convertion_for_file(md)
# print(html)
with open("test.html", "w", encoding="utf-8") as f:

tests/test_media.py (new file)

@@ -0,0 +1,67 @@
"""
Test the project's plugins. Usage: run python tests/test_media.py directly.
"""
import init_test
import os, sys
if __name__ == "__main__":
from test_utils import plugin_test
plugin_test(plugin='crazy_functions.VideoResource_GPT->多媒体任务', main_input="我想找一首歌,里面有句歌词是“turn your face towards the sun”")
# plugin_test(plugin='crazy_functions.Internet_GPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2307.07522")
# plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='build/pdf/t1.pdf')
# plugin_test(
# plugin="crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF",
# main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
# )
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
# plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件,对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')
# plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.Latex_Project_Polish->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown中译英', main_input="README.md")
# plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
# plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word")
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
# plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples")
# plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")
# for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})
# plugin_test(plugin='crazy_functions.知识库文件注入->知识库文件注入', main_input="./")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method?")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2210.03629")


@@ -36,7 +36,7 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Latex_Project_Polish->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown中译英', main_input="README.md")
@@ -65,8 +65,3 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2210.03629")
# advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内,用第二人称。' --system_prompt=''" }
# plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
# advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
# plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)

tests/test_tts.py (new file)

@@ -0,0 +1,33 @@
import edge_tts
import os
import httpx
from toolbox import get_conf
async def test_tts():
async with httpx.AsyncClient() as client:
try:
# Forward the request to the target service
import tempfile
import edge_tts
import wave
import uuid
from pydub import AudioSegment
voice = get_conf("EDGE_TTS_VOICE")
tts = edge_tts.Communicate(text="测试", voice=voice)
temp_folder = tempfile.gettempdir()
temp_file_name = str(uuid.uuid4().hex)
temp_file = os.path.join(temp_folder, f'{temp_file_name}.mp3')
await tts.save(temp_file)
try:
mp3_audio = AudioSegment.from_file(temp_file, format="mp3")
mp3_audio.export(temp_file, format="wav")
with open(temp_file, 'rb') as wav_file: t = wav_file.read()
except:
raise RuntimeError("ffmpeg未安装,无法处理EdgeTTS音频。安装方法见`https://github.com/jiaaro/pydub#getting-ffmpeg-set-up`")
except httpx.RequestError as e:
raise RuntimeError(f"请求失败: {e}")
if __name__ == "__main__":
import asyncio
asyncio.run(test_tts())


@@ -270,4 +270,9 @@
}
#gpt-submit-row #gpt-submit-dropdown > *:hover {
cursor: context-menu;
}
.tooltip.svelte-p2nen8.svelte-p2nen8 {
box-shadow: 10px 10px 15px rgba(0, 0, 0, 0.5);
left: 10px;
}


@@ -318,7 +318,7 @@ function addCopyButton(botElement, index, is_last_in_arr) {
}
});
if (enable_tts){
if (enable_tts) {
var audioButton = document.createElement('button');
audioButton.classList.add('audio-toggle-btn');
audioButton.innerHTML = audioIcon;
@@ -346,7 +346,7 @@ function addCopyButton(botElement, index, is_last_in_arr) {
var messageBtnColumn = document.createElement('div');
messageBtnColumn.classList.add('message-btn-row');
messageBtnColumn.appendChild(copyButton);
if (enable_tts){
if (enable_tts) {
messageBtnColumn.appendChild(audioButton);
}
botElement.appendChild(messageBtnColumn);
@@ -391,6 +391,8 @@ function chatbotContentChanged(attempt = 1, force = false) {
// Now pass both the message element and the is_last_in_arr boolean to addCopyButton
addCopyButton(message, index, is_last_in_arr);
save_conversation_history();
});
// gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
}, i === 0 ? 0 : 200);
@@ -854,8 +856,7 @@ function limit_scroll_position() {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function loadLive2D() {
if (document.querySelector(".waifu") )
{
if (document.querySelector(".waifu")) {
$('.waifu').show();
} else {
try {
@@ -922,12 +923,12 @@ function gpt_academic_gradio_saveload(
if (save_or_load === "load") {
let value = getCookie(cookie_key);
if (value) {
console.log('加载cookie', elem_id, value)
// console.log('加载cookie', elem_id, value)
push_data_to_gradio_component(value, elem_id, load_type);
}
else {
if (load_default) {
console.log('加载cookie的默认值', elem_id, load_default_value)
// console.log('加载cookie的默认值', elem_id, load_default_value)
push_data_to_gradio_component(load_default_value, elem_id, load_type);
}
}
@@ -937,113 +938,149 @@ function gpt_academic_gradio_saveload(
}
}
function update_conversation_metadata() {
// Create a conversation UUID and timestamp
const conversationId = crypto.randomUUID();
const timestamp = new Date().toISOString();
const conversationData = {
id: conversationId,
timestamp: timestamp
};
// Save to cookie
setCookie("conversation_metadata", JSON.stringify(conversationData), 2);
// read from cookie
let conversation_metadata = getCookie("conversation_metadata");
// console.log("conversation_metadata", conversation_metadata);
}
// Helper function to generate conversation preview
function generatePreview(conversation, timestamp, maxLength = 100) {
if (!conversation || conversation.length === 0) return "";
// Join all messages with dash separator
let preview = conversation.join("\n");
const readableDate = new Date(timestamp).toLocaleString();
preview = readableDate + "\n" + preview;
if (preview.length <= maxLength) return preview;
return preview.substring(0, maxLength) + "...";
}
async function save_conversation_history() {
// 505030475
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
let history = await get_data_from_gradio_component('history-ng');
let conversation_metadata = getCookie("conversation_metadata");
conversation_metadata = JSON.parse(conversation_metadata);
// console.log("conversation_metadata", conversation_metadata);
let conversation = {
timestamp: conversation_metadata.timestamp,
id: conversation_metadata.id,
metadata: conversation_metadata,
conversation: chatbot,
history: history,
preview: generatePreview(JSON.parse(history), conversation_metadata.timestamp)
};
// Get existing conversation history from local storage
let conversation_history = [];
try {
const stored = localStorage.getItem('conversation_history');
if (stored) {
conversation_history = JSON.parse(stored);
}
} catch (e) {
// console.error('Error reading conversation history from localStorage:', e);
}
// Find existing conversation with same ID
const existingIndex = conversation_history.findIndex(c => c.id === conversation.id);
if (existingIndex >= 0) {
// Update existing conversation
conversation_history[existingIndex] = conversation;
} else {
// Add new conversation
conversation_history.push(conversation);
}
// Sort conversations by timestamp, newest first
conversation_history.sort((a, b) => {
const timeA = new Date(a.timestamp).getTime();
const timeB = new Date(b.timestamp).getTime();
return timeB - timeA;
});
// Save back to local storage
try {
localStorage.setItem('conversation_history', JSON.stringify(conversation_history));
const LOCAL_STORAGE_UPDATED = "gptac_conversation_history_updated";
window.dispatchEvent(
new CustomEvent(LOCAL_STORAGE_UPDATED, {
detail: conversation_history
})
);
} catch (e) {
console.error('Error saving conversation history to localStorage:', e);
}
}
function restore_chat_from_local_storage(event) {
let conversation = event.detail;
push_data_to_gradio_component(conversation.conversation, "gpt-chatbot", "obj");
push_data_to_gradio_component(conversation.history, "history-ng", "obj");
// console.log("restore_chat_from_local_storage", conversation);
// Create a conversation UUID and timestamp
const conversationId = conversation.id;
const timestamp = conversation.timestamp;
const conversationData = {
id: conversationId,
timestamp: timestamp
};
// Save to cookie
setCookie("conversation_metadata", JSON.stringify(conversationData), 2);
// read from cookie
let conversation_metadata = getCookie("conversation_metadata");
}
function clear_conversation(a, b, c) {
update_conversation_metadata();
let stopButton = document.getElementById("elem_stop");
stopButton.click();
// console.log("clear_conversation");
return reset_conversation(a, b);
}
function reset_conversation(a, b) {
// console.log("js_code_reset");
a = btoa(unescape(encodeURIComponent(JSON.stringify(a))));
setCookie("js_previous_chat_cookie", a, 1);
gen_restore_btn();
b = btoa(unescape(encodeURIComponent(JSON.stringify(b))));
setCookie("js_previous_history_cookie", b, 1);
// gen_restore_btn();
return [[], [], "已重置"];
}
// clear -> cache history into history_cache -> click restore -> restore_previous_chat() -> trigger elem_update_history -> read history_cache
function restore_previous_chat() {
console.log("restore_previous_chat");
// console.log("restore_previous_chat");
let chat = getCookie("js_previous_chat_cookie");
chat = JSON.parse(decodeURIComponent(escape(atob(chat))));
push_data_to_gradio_component(chat, "gpt-chatbot", "obj");
document.querySelector("#elem_update_history").click(); // in order to call set_history_gr_state, and send history state to server
let history = getCookie("js_previous_history_cookie");
history = JSON.parse(decodeURIComponent(escape(atob(history))));
push_data_to_gradio_component(history, "history-ng", "obj");
// document.querySelector("#elem_update_history").click(); // in order to call set_history_gr_state, and send history state to server
}
function gen_restore_btn() {
// create the button element
const button = document.createElement('div');
// const recvIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><polyline points="20 6 9 17 4 12"></polyline></svg></span>';
const rec_svg = '<svg t="1714361184567" style="transform:translate(1px, 2.5px)" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="4389" width="35" height="35"><path d="M320 512h384v64H320zM320 384h384v64H320zM320 640h192v64H320z" p-id="4390" fill="#ffffff"></path><path d="M863.7 544c-1.9 44-11.4 86.8-28.5 127.2-18.5 43.8-45.1 83.2-78.9 117-33.8 33.8-73.2 60.4-117 78.9C593.9 886.3 545.7 896 496 896s-97.9-9.7-143.2-28.9c-43.8-18.5-83.2-45.1-117-78.9-33.8-33.8-60.4-73.2-78.9-117C137.7 625.9 128 577.7 128 528s9.7-97.9 28.9-143.2c18.5-43.8 45.1-83.2 78.9-117s73.2-60.4 117-78.9C398.1 169.7 446.3 160 496 160s97.9 9.7 143.2 28.9c23.5 9.9 45.8 22.2 66.5 36.7l-119.7 20 9.9 59.4 161.6-27 59.4-9.9-9.9-59.4-27-161.5-59.4 9.9 19 114.2C670.3 123.8 586.4 96 496 96 257.4 96 64 289.4 64 528s193.4 432 432 432c233.2 0 423.3-184.8 431.7-416h-64z" p-id="4391" fill="#ffffff"></path></svg>'
const recvIcon = '<span>' + rec_svg + '</span>';
// set the button's style and attributes
button.id = 'floatingButton';
button.className = 'glow';
button.style.textAlign = 'center';
button.style.position = 'fixed';
button.style.bottom = '10px';
button.style.left = '10px';
button.style.width = '50px';
button.style.height = '50px';
button.style.borderRadius = '50%';
button.style.backgroundColor = '#007bff';
button.style.color = 'white';
button.style.display = 'flex';
button.style.alignItems = 'center';
button.style.justifyContent = 'center';
button.style.cursor = 'pointer';
button.style.transition = 'all 0.3s ease';
button.style.boxShadow = '0 0 10px rgba(0,0,0,0.2)';
button.innerHTML = recvIcon;
// add keyframes for the glow animation
const styleSheet = document.createElement('style');
styleSheet.id = 'floatingButtonStyle';
styleSheet.innerText = `
@keyframes glow {
from {
box-shadow: 0 0 10px rgba(0,0,0,0.2);
}
to {
box-shadow: 0 0 13px rgba(0,0,0,0.5);
}
}
#floatingButton.glow {
animation: glow 1s infinite alternate;
}
#floatingButton:hover {
transform: scale(1.2);
box-shadow: 0 0 20px rgba(0,0,0,0.4);
}
#floatingButton.disappearing {
animation: shrinkAndDisappear 0.5s forwards;
}
`;
// only add when not exist
if (!document.getElementById('recvButtonStyle'))
{
document.head.appendChild(styleSheet);
}
// mouseover / mouseout listeners
button.addEventListener('mouseover', function () {
this.textContent = "还原\n对话";
});
button.addEventListener('mouseout', function () {
this.innerHTML = recvIcon;
});
// click listener
button.addEventListener('click', function () {
// add a class to trigger the shrink-and-disappear animation
restore_previous_chat();
this.classList.add('disappearing');
// remove the button once the animation ends
document.body.removeChild(this);
});
// only add when not exist
if (!document.getElementById('recvButton'))
{
document.body.appendChild(button);
}
// append the button to the page
}
async function on_plugin_exe_complete(fn_name) {
console.log(fn_name);
// console.log(fn_name);
if (fn_name === "保存当前的对话") {
// get chat profile path
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
@@ -1062,15 +1099,15 @@ async function on_plugin_exe_complete(fn_name) {
}
let href = get_href(may_have_chat_profile_info);
if (href) {
const cleanedHref = href.replace('file=', ''); // /home/fuqingxu/chatgpt_academic/gpt_log/default_user/chat_history/GPT-Academic对话存档2024-04-12-00-35-06.html
console.log(cleanedHref);
const cleanedHref = href.replace('file=', ''); // gpt_log/default_user/chat_history/GPT-Academic对话存档2024-04-12-00-35-06.html
// console.log(cleanedHref);
}
}
}
async function generate_menu(guiBase64String, btnName){
async function generate_menu(guiBase64String, btnName) {
// assign the button and menu data
push_data_to_gradio_component(guiBase64String, "invisible_current_pop_up_plugin_arg", "string");
push_data_to_gradio_component(btnName, "invisible_callback_btn_for_plugin_exe", "string");
@@ -1104,22 +1141,22 @@ async function generate_menu(guiBase64String, btnName){
///////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////// Textbox ////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
if (gui_args[key].type=='string'){ // PLUGIN_ARG_MENU
if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
const component_name = "plugin_arg_txt_" + text_cnt;
push_data_to_gradio_component({
visible: true,
label: gui_args[key].title + "(" + gui_args[key].description + ")",
label: gui_args[key].title + "(" + gui_args[key].description + ")",
// label: gui_args[key].title,
placeholder: gui_args[key].description,
__type__: 'update'
}, component_name, "obj");
if (key === "main_input"){
if (key === "main_input") {
// for backward compatibility with old plugins: auto-load the main input value when building the menu
let current_main_input = await get_data_from_gradio_component('user_input_main');
let current_main_input_2 = await get_data_from_gradio_component('user_input_float');
push_data_to_gradio_component(current_main_input + current_main_input_2, component_name, "obj");
}
else if (key === "advanced_arg"){
else if (key === "advanced_arg") {
// for backward compatibility with old plugins: auto-load the legacy advanced-args input value when building the menu
let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
push_data_to_gradio_component(advance_arg_input_legacy, component_name, "obj");
@@ -1134,12 +1171,12 @@ async function generate_menu(guiBase64String, btnName){
///////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////// Dropdown ////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
if (gui_args[key].type=='dropdown'){ // PLUGIN_ARG_MENU
if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
const component_name = "plugin_arg_drop_" + dropdown_cnt;
push_data_to_gradio_component({
visible: true,
choices: gui_args[key].options,
label: gui_args[key].title + "(" + gui_args[key].description + ")",
label: gui_args[key].title + "(" + gui_args[key].description + ")",
// label: gui_args[key].title,
placeholder: gui_args[key].description,
__type__: 'update'
@@ -1154,7 +1191,7 @@ async function generate_menu(guiBase64String, btnName){
}
}
async function execute_current_pop_up_plugin(){
async function execute_current_pop_up_plugin() {
let guiBase64String = await get_data_from_gradio_component('invisible_current_pop_up_plugin_arg');
const stringData = atob(guiBase64String);
let guiJsonData = JSON.parse(stringData);
@@ -1170,8 +1207,8 @@ async function execute_current_pop_up_plugin(){
let text_cnt = 0;
for (const key in gui_args) {
if (gui_args.hasOwnProperty(key)) {
if (gui_args[key].type=='string'){ // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_txt_"+text_cnt
if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_txt_" + text_cnt
gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
text_cnt += 1;
}
@@ -1180,8 +1217,8 @@ async function execute_current_pop_up_plugin(){
let dropdown_cnt = 0;
for (const key in gui_args) {
if (gui_args.hasOwnProperty(key)) {
if (gui_args[key].type=='dropdown'){ // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_drop_"+dropdown_cnt
if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_drop_" + dropdown_cnt
gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
dropdown_cnt += 1;
}
@@ -1200,29 +1237,29 @@ async function execute_current_pop_up_plugin(){
}
function hide_all_elem(){
// PLUGIN_ARG_MENU
for (text_cnt = 0; text_cnt < 8; text_cnt++){
function hide_all_elem() {
// PLUGIN_ARG_MENU
for (text_cnt = 0; text_cnt < 8; text_cnt++) {
push_data_to_gradio_component({
visible: false,
label: "",
__type__: 'update'
}, "plugin_arg_txt_"+text_cnt, "obj");
document.getElementById("plugin_arg_txt_"+text_cnt).parentNode.parentNode.style.display = 'none';
}, "plugin_arg_txt_" + text_cnt, "obj");
document.getElementById("plugin_arg_txt_" + text_cnt).parentNode.parentNode.style.display = 'none';
}
for (dropdown_cnt = 0; dropdown_cnt < 8; dropdown_cnt++){
for (dropdown_cnt = 0; dropdown_cnt < 8; dropdown_cnt++) {
push_data_to_gradio_component({
visible: false,
choices: [],
label: "",
__type__: 'update'
}, "plugin_arg_drop_"+dropdown_cnt, "obj");
document.getElementById("plugin_arg_drop_"+dropdown_cnt).parentNode.style.display = 'none';
}, "plugin_arg_drop_" + dropdown_cnt, "obj");
document.getElementById("plugin_arg_drop_" + dropdown_cnt).parentNode.style.display = 'none';
}
}
function close_current_pop_up_plugin(){
// PLUGIN_ARG_MENU
function close_current_pop_up_plugin() {
// PLUGIN_ARG_MENU
push_data_to_gradio_component({
visible: false,
__type__: 'update'
@@ -1233,15 +1270,13 @@ function close_current_pop_up_plugin(){
// build the selection menu for advanced plugins
plugin_init_info_lib = {}
function register_plugin_init(key, base64String){
function register_plugin_init(key, base64String) {
// console.log('x')
const stringData = atob(base64String);
let guiJsonData = JSON.parse(stringData);
if (key in plugin_init_info_lib)
{
if (key in plugin_init_info_lib) {
}
else
{
else {
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].info = guiJsonData.Info;
@@ -1251,28 +1286,26 @@ function register_plugin_init(key, base64String){
plugin_init_info_lib[key].enable_advanced_arg = guiJsonData.AdvancedArgs;
plugin_init_info_lib[key].arg_reminder = guiJsonData.ArgsReminder;
}
function register_advanced_plugin_init_code(key, code){
if (key in plugin_init_info_lib)
{
function register_advanced_plugin_init_code(key, code) {
if (key in plugin_init_info_lib) {
}
else
{
else {
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].secondary_menu_code = code;
}
function run_advanced_plugin_launch_code(key){
function run_advanced_plugin_launch_code(key) {
// convert js code string to function
generate_menu(plugin_init_info_lib[key].secondary_menu_code, key);
}
function on_flex_button_click(key){
if (plugin_init_info_lib.hasOwnProperty(key) && plugin_init_info_lib[key].hasOwnProperty('secondary_menu_code')){
function on_flex_button_click(key) {
if (plugin_init_info_lib.hasOwnProperty(key) && plugin_init_info_lib[key].hasOwnProperty('secondary_menu_code')) {
run_advanced_plugin_launch_code(key);
}else{
} else {
document.getElementById("old_callback_btn_for_plugin_exe").click();
}
}
async function run_dropdown_shift(dropdown){
async function run_dropdown_shift(dropdown) {
let key = dropdown;
push_data_to_gradio_component({
value: key,
@@ -1281,7 +1314,7 @@ async function run_dropdown_shift(dropdown){
__type__: 'update'
}, "elem_switchy_bt", "obj");
if (plugin_init_info_lib[key].enable_advanced_arg){
if (plugin_init_info_lib[key].enable_advanced_arg) {
push_data_to_gradio_component({
visible: true,
label: plugin_init_info_lib[key].label,
@@ -1303,9 +1336,9 @@ async function duplicate_in_new_window() {
window.open(url, '_blank');
}
async function run_classic_plugin_via_id(plugin_elem_id){
for (key in plugin_init_info_lib){
if (plugin_init_info_lib[key].elem_id == plugin_elem_id){
async function run_classic_plugin_via_id(plugin_elem_id) {
for (key in plugin_init_info_lib) {
if (plugin_init_info_lib[key].elem_id == plugin_elem_id) {
// get the button name
let current_btn_name = await get_data_from_gradio_component(plugin_elem_id);
// execute it
@@ -1326,7 +1359,7 @@ async function call_plugin_via_name(current_btn_name) {
hide_all_elem();
// 为了与旧插件兼容,生成菜单时,自动加载旧高级参数输入区的值
let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
if (advance_arg_input_legacy.length != 0){
if (advance_arg_input_legacy.length != 0) {
gui_args["advanced_arg"] = {};
gui_args["advanced_arg"].user_confirmed_value = advance_arg_input_legacy;
}
@@ -1349,18 +1382,11 @@ async function multiplex_function_begin(multiplex_sel) {
click_real_submit_btn();
return;
}
if (multiplex_sel === "多模型对话") {
let _align_name_in_crazy_function_py = "询问多个GPT模型";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
if (multiplex_sel === "智能召回 RAG") {
let _align_name_in_crazy_function_py = "Rag智能召回";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
// do not delete `REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE`! It will be read and replaced by Python code.
// REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE
}
async function run_multiplex_shift(multiplex_sel){
async function run_multiplex_shift(multiplex_sel) {
let key = multiplex_sel;
if (multiplex_sel === "常规对话") {
key = "提交";
@@ -1372,3 +1398,8 @@ async function run_multiplex_shift(multiplex_sel){
__type__: 'update'
}, "elem_submit_visible", "obj");
}
async function persistent_cookie_init(web_cookie_cache, cookie) {
return [localStorage.getItem('web_cookie_cache'), cookie];
}


@@ -2,6 +2,25 @@ from functools import lru_cache
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
def inject_mutex_button_code(js_content):
from crazy_functional import get_multiplex_button_functions
fns = get_multiplex_button_functions()
template = """
if (multiplex_sel === "{x}") {
let _align_name_in_crazy_function_py = "{y}";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
"""
replacement = ""
for fn in fns.keys():
if fn == "常规对话": continue
replacement += template.replace("{x}", fn).replace("{y}", fns[fn])
js_content = js_content.replace("// REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE", replacement)
return js_content
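The template is expanded with str.replace rather than str.format, since the literal JS braces would otherwise need escaping. For one mapping, say fns = {"多模型对话": "询问多个GPT模型"} (a pair previously hard-coded in common.js), the snippet spliced in at the // REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE marker reads:

    if (multiplex_sel === "多模型对话") {
        let _align_name_in_crazy_function_py = "询问多个GPT模型";
        call_plugin_via_name(_align_name_in_crazy_function_py);
        return;
    }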
def minimize_js(common_js_path):
try:
import rjsmin, hashlib, glob, os
@@ -10,14 +29,16 @@ def minimize_js(common_js_path):
os.remove(old_min_js)
# use rjsmin to minimize `common_js_path`
c_jsmin = rjsmin.jsmin
with open(common_js_path, "r") as f:
with open(common_js_path, "r", encoding='utf-8') as f:
js_content = f.read()
if common_js_path == "themes/common.js":
js_content = inject_mutex_button_code(js_content)
minimized_js_content = c_jsmin(js_content)
# compute sha256 hash of minimized js content
sha_hash = hashlib.sha256(minimized_js_content.encode()).hexdigest()[:8]
minimized_js_path = common_js_path + '.min.' + sha_hash + '.js'
# save to minimized js file
with open(minimized_js_path, "w") as f:
with open(minimized_js_path, "w", encoding='utf-8') as f:
f.write(minimized_js_content)
# return minimized js file path
return minimized_js_path

(diff too large to display)

@@ -1,7 +1,7 @@
import gradio as gr
def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache):
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top", elem_id="f_area_input_secondary") as area_input_secondary:
with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
with gr.Row() as row:
row.style(equal_height=True)
@@ -17,7 +17,7 @@ def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookie
clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top", elem_id="f_area_customize") as area_customize:
with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
with gr.Row() as row:
with gr.Column(scale=10):
@@ -35,9 +35,9 @@ def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookie
# update btn
h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{localStorage.setItem("web_cookie_cache", web_cookie_cache);}""")
# clean up btn
h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{localStorage.setItem("web_cookie_cache", web_cookie_cache);}""")
return area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2


@@ -3,6 +3,8 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
audio_fn_init();
minor_ui_adjustment();
ButtonWithDropdown_init();
update_conversation_metadata();
window.addEventListener("gptac_restore_chat_from_local_storage", restore_chat_from_local_storage);
// load the welcome page
const welcomeMessage = new WelcomeMessage();


@@ -87,21 +87,6 @@ js_code_for_toggle_darkmode = """() => {
}"""
js_code_for_persistent_cookie_init = """(web_cookie_cache, cookie) => {
return [getCookie("web_cookie_cache"), cookie];
}
"""
# see themes/common.js for details
js_code_reset = """
(a,b,c)=>{
let stopButton = document.getElementById("elem_stop");
stopButton.click();
return reset_conversation(a,b);
}
"""
js_code_clear = """
(a,b)=>{
return ["", ""];


@@ -84,7 +84,7 @@ class WelcomeMessage {
this.max_welcome_card_num = 6;
this.card_array = [];
this.static_welcome_message_previous = [];
this.reflesh_time_interval = 15*1000;
this.reflesh_time_interval = 15 * 1000;
const reflesh_render_status = () => {
@@ -96,6 +96,9 @@ class WelcomeMessage {
};
const pageFocusHandler = new PageFocusHandler();
pageFocusHandler.addFocusCallback(reflesh_render_status);
// call this.update whenever the page is resized
window.addEventListener('resize', this.update.bind(this));
}
begin_render() {
@@ -105,7 +108,7 @@ class WelcomeMessage {
async startRefleshCards() {
await new Promise(r => setTimeout(r, this.reflesh_time_interval));
await this.reflesh_cards();
if (this.visible){
if (this.visible) {
setTimeout(() => {
this.startRefleshCards.call(this);
}, 1);
@@ -113,7 +116,7 @@ class WelcomeMessage {
}
async reflesh_cards() {
if (!this.visible){
if (!this.visible) {
return;
}
@@ -173,18 +176,18 @@ class WelcomeMessage {
}
shuffle(array) {
var currentIndex = array.length, randomIndex;
var currentIndex = array.length, randomIndex;
// While there remain elements to shuffle...
while (currentIndex != 0) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex--;
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex--;
// And swap it with the current element.
[array[currentIndex], array[randomIndex]] = [
array[randomIndex], array[currentIndex]];
// And swap it with the current element.
[array[currentIndex], array[randomIndex]] = [
array[randomIndex], array[currentIndex]];
}
return array;
@@ -192,23 +195,33 @@ class WelcomeMessage {
async update() {
// console.log('update')
const elem_chatbot = document.getElementById('gpt-chatbot');
const chatbot_top = elem_chatbot.getBoundingClientRect().top;
const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
let welcome_card_overflow = false;
if (welcome_card_container) {
const welcome_card_top = welcome_card_container.getBoundingClientRect().top;
if (welcome_card_top < chatbot_top) {
welcome_card_overflow = true;
// console.log("welcome_card_overflow");
}
}
var page_width = document.documentElement.clientWidth;
const width_to_hide_welcome = 1200;
if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome) {
if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome || welcome_card_overflow) {
if (this.visible) {
this.removeWelcome();
this.visible = false;
console.log("remove welcome");
this.removeWelcome(); this.visible = false; // these two lines must always stay together
this.card_array = [];
this.static_welcome_message_previous = [];
}
return;
}
if (this.visible){
if (this.visible) {
return;
}
// console.log("welcome");
this.showWelcome();
this.visible = true;
console.log("show welcome");
this.showWelcome(); this.visible = true; // these two lines must always stay together
this.startRefleshCards();
}
@@ -220,28 +233,28 @@ class WelcomeMessage {
const title = document.createElement('div');
title.classList.add('welcome-card-title');
// create the icon
const svg = document.createElement('img');
svg.classList.add('welcome-svg');
svg.src = message.svg;
svg.style.height = '30px';
title.appendChild(svg);
// create the icon
const svg = document.createElement('img');
svg.classList.add('welcome-svg');
svg.src = message.svg;
svg.style.height = '30px';
title.appendChild(svg);
// create the title
const text = document.createElement('a');
text.textContent = message.title;
text.classList.add('welcome-title-text');
text.href = message.url;
text.target = "_blank";
title.appendChild(text)
// create the title
const text = document.createElement('a');
text.textContent = message.title;
text.classList.add('welcome-title-text');
text.href = message.url;
text.target = "_blank";
title.appendChild(text)
// create the content
const content = document.createElement('div');
content.classList.add('welcome-content');
const content_c = document.createElement('div');
content_c.classList.add('welcome-content-c');
content_c.textContent = message.content;
content.appendChild(content_c);
const content_c = document.createElement('div');
content_c.classList.add('welcome-content-c');
content_c.textContent = message.content;
content.appendChild(content_c);
// append the title and content to the card div
card.appendChild(title);
@@ -307,28 +320,28 @@ class WelcomeMessage {
class PageFocusHandler {
constructor() {
this.hasReturned = false;
this.focusCallbacks = [];
this.hasReturned = false;
this.focusCallbacks = [];
// Bind the focus and blur event handlers
window.addEventListener('visibilitychange', this.handleFocus.bind(this));
// Bind the focus and blur event handlers
window.addEventListener('visibilitychange', this.handleFocus.bind(this));
}
// Method to handle the focus event
handleFocus() {
if (this.hasReturned) {
this.focusCallbacks.forEach(callback => callback());
}
this.hasReturned = true;
if (this.hasReturned) {
this.focusCallbacks.forEach(callback => callback());
}
this.hasReturned = true;
}
// Method to add a custom callback function
addFocusCallback(callback) {
if (typeof callback === 'function') {
this.focusCallbacks.push(callback);
} else {
throw new Error('Callback must be a function');
}
if (typeof callback === 'function') {
this.focusCallbacks.push(callback);
} else {
throw new Error('Callback must be a function');
}
}
}


@@ -8,6 +8,7 @@ import base64
import gradio
import shutil
import glob
import json
import uuid
from loguru import logger
from functools import wraps
@@ -92,8 +93,9 @@ def ArgsGeneralWrapper(f):
"""
def decorated(request: gradio.Request, cookies:dict, max_length:int, llm_model:str,
txt:str, txt2:str, top_p:float, temperature:float, chatbot:list,
history:list, system_prompt:str, plugin_advanced_arg:dict, *args):
json_history:str, system_prompt:str, plugin_advanced_arg:dict, *args):
txt_passon = txt
history = json.loads(json_history) if json_history else []
if txt == "" and txt2 != "": txt_passon = txt2
# bring in a cookie-aware chatbot
if request.username is not None:
@@ -148,10 +150,11 @@ def ArgsGeneralWrapper(f):
return decorated
def update_ui(chatbot:ChatBotWithCookies, history, msg="正常", **kwargs): # 刷新界面
def update_ui(chatbot:ChatBotWithCookies, history:list, msg:str="正常", **kwargs): # 刷新界面
"""
刷新用户界面
"""
assert isinstance(history, list), "history必须是一个list"
assert isinstance(
chatbot, ChatBotWithCookies
), "在传递chatbot的过程中不要将其丢弃。必要时, 可用clear将其清空, 然后用for+append循环重新赋值。"
@@ -175,10 +178,11 @@ def update_ui(chatbot:ChatBotWithCookies, history, msg="正常", **kwargs): #
else:
chatbot_gr = chatbot
yield cookies, chatbot_gr, history, msg
json_history = json.dumps(history, ensure_ascii=False)
yield cookies, chatbot_gr, json_history, msg
def update_ui_lastest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay=1, msg="正常"): # refresh the UI
def update_ui_lastest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay:float=1, msg:str="正常"): # refresh the UI
"""
Refresh the user interface.
"""


@@ -1,5 +1,5 @@
{
"version": 3.90,
"version": 3.91,
"show_feature": true,
"new_feature": "增加RAG组件 <-> 升级多合一主提交键"
"new_feature": "优化前端并修复TTS的BUG <-> 添加时间线回溯功能 <-> 支持chatgpt-4o-latest <-> 增加RAG组件 <-> 升级多合一主提交键"
}