Mirrored from
https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 14:36:48 +00:00
Compare commits
68 commits
version3.4
...
version3.4
| SHA1 |
|---|
| d16329c1af |
| d5b4d7ab90 |
| 8199a9a12e |
| cb10a8abec |
| 0dbcda89b7 |
| 78a8259b82 |
| f22fdb4f94 |
| 450645a9d0 |
| af23730f8f |
| 0b11260d6f |
| 31ab97dd09 |
| c0c4834cfc |
| 2dae40f4ba |
| 587c7400d1 |
| 8dd2e2a6b7 |
| aaf4f37403 |
| 3e2e81a968 |
| cc1be5585b |
| 5050016b22 |
| 7662196514 |
| 8ddaca09e0 |
| 71c692dcef |
| 184e417fec |
| 7a99560183 |
| 48f4d6aa2a |
| c17fc2a9b5 |
| 4d70b3786f |
| 9bee676cd2 |
| 0a37106692 |
| 57d4541d4e |
| d7dd586f09 |
| b6b53ce2a4 |
| 43809c107d |
| 1721edc990 |
| bfb7aab4a0 |
| f4a87d6380 |
| c0c337988f |
| 27f65c251a |
| 87f099f740 |
| 484f16e365 |
| 37afcc709b |
| 9cbe9f240d |
| f6567c02f6 |
| 8c83061a93 |
| 23f2adfdc3 |
| 61698444b1 |
| 109afcf8f6 |
| 19ef6a530a |
| e08bd9669e |
| 155a7e1174 |
| 86e33ea99a |
| 524684f8bd |
| 2a362cec84 |
| 2747c23868 |
| f446dbb62d |
| 8d37d94e2c |
| e4ba0e6c85 |
| 4216c5196e |
| 2df660a718 |
| bb496a9c2c |
| 4e0737c0c2 |
| 4bb3cba5c8 |
| 08b9b0d140 |
| 3577a72a3b |
| 0328d6f498 |
| d437305a4f |
| c4899bcb20 |
| 4295764f8c |
2	.github/ISSUE_TEMPLATE/bug_report.yml (vendored)
@@ -11,6 +11,8 @@ body:
- Please choose | 请选择
- Pip Install (I ignored requirements.txt)
- Pip Install (I used latest requirements.txt)
- OneKeyInstall (一键安装脚本-windows)
- OneKeyInstall (一键安装脚本-mac)
- Anaconda (I ignored requirements.txt)
- Anaconda (I used latest requirements.txt)
- Docker(Windows/Mac)
44	.github/workflows/build-with-audio-assistant.yml (vendored, normal file)
@@ -0,0 +1,44 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-audio-assistant

on:
  push:
    branches:
      - 'master'

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_audio_assistant

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+NoLocal+AudioAssistant
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
1	.gitignore (vendored)
@@ -151,3 +151,4 @@ multi-language
request_llm/moss
media
flagged
request_llm/ChatGLM-6b-onnx-u8s8
21	README.md
@@ -27,6 +27,7 @@ To translate this project to arbitary language with GPT, read and run [`multi_la
功能(⭐= 近期新增功能) | 描述
--- | ---
⭐接入新模型! | ⭐阿里达摩院[通义千问](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/)
一键润色 | 支持一键润色、一键查找论文语法错误
一键中英互译 | 一键中英互译
一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
@@ -52,8 +53,8 @@ Latex论文一键校对 | [函数插件] 仿Grammarly对Latex文章进行语法
[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
⭐[虚空终端](https://github.com/binary-husky/void-terminal)pip包 | 脱离GUI,在Python中直接调用本项目的函数插件(开发中)
更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……

</div>

@@ -116,7 +117,7 @@ python -m pip install -r requirements.txt # 这个步骤和pip安装一样的步
```

<details><summary>如果需要支持清华ChatGLM2/复旦MOSS作为后端,请点击展开此处</summary>
<details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
<p>

【可选步骤】如果需要支持清华ChatGLM2/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):
@@ -128,7 +129,10 @@ python -m pip install -r request_llm/requirements_chatglm.txt
python -m pip install -r request_llm/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llm/moss # 注意执行此行代码时,必须处于项目根路径

# 【可选步骤III】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
# 【可选步骤III】支持RWKV Runner
参考wiki:https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner

# 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```
@@ -147,6 +151,7 @@ python main.py
1. 仅ChatGPT(推荐大多数人选择,等价于docker-compose方案1)
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

``` sh
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # 下载项目
@@ -195,10 +200,12 @@ docker-compose up
5. 远程云服务器部署(需要云服务器知识与经验)。
请访问[部署wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

6. 使用WSL2(Windows Subsystem for Linux 子系统)。
6. 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。

7. 使用WSL2(Windows Subsystem for Linux 子系统)。
请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

7. 如何在二级网址(如`http://localhost/subpath`)下运行。
8. 如何在二级网址(如`http://localhost/subpath`)下运行。
请访问[FastAPI运行说明](docs/WithFastapi.md)
@@ -286,6 +293,10 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/476f66d9-7716-4537-b5c1-735372c25adb" height="200">
</div>

11. 语言、主题切换
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
</div>


### II:版本:
@@ -3,15 +3,18 @@ def check_proxy(proxies):
    import requests
    proxies_https = proxies['https'] if proxies is not None else '无'
    try:
        response = requests.get("https://ipapi.co/json/",
                                proxies=proxies, timeout=4)
        response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
        data = response.json()
        print(f'查询代理的地理位置,返回的结果是{data}')
        if 'country_name' in data:
            country = data['country_name']
            result = f"代理配置 {proxies_https}, 代理所在地:{country}"
        elif 'error' in data:
            result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
            alternative = _check_with_backup_source(proxies)
            if alternative is None:
                result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
            else:
                result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
        else:
            result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
        print(result)
@@ -21,6 +24,11 @@ def check_proxy(proxies):
        print(result)
        return result

def _check_with_backup_source(proxies):
    import random, string, requests
    random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
    try: return requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()['dns']['geo']
    except: return None

def backup_and_download(current_version, remote_version):
    """
14	config.py
@@ -71,7 +71,7 @@ MAX_RETRY = 2
# 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
# P.S. 其他可用的模型还包括 ["gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "claude-1-100k", "claude-2", "internlm", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
# P.S. 其他可用的模型还包括 ["qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "spark", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]


# ChatGLM(2) Finetune Model Path (如果使用ChatGLM2微调模型,需要把"chatglmft"加入AVAIL_LLM_MODELS中)
@@ -132,8 +132,16 @@ put your new bing cookies here

# 阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
ENABLE_AUDIO = False
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
ALIYUN_ACCESSKEY="" # (无需填写)
ALIYUN_SECRET="" # (无需填写)


# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
XFYUN_APPID = "00000000"
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"


# Claude API KEY
@@ -1,20 +1,25 @@
# 'primary' 颜色对应 theme.py 中的 primary_hue
# 'secondary' 颜色对应 theme.py 中的 neutral_hue
# 'stop' 颜色对应 theme.py 中的 color_er
# 默认按钮颜色是 secondary
import importlib
from toolbox import clear_line_break


def get_core_functions():
    return {
        "英语学术润色": {
            # 前言
            # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
                      r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
            # 后语
            # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
            "Suffix": r"",
            "Color": r"secondary", # 按钮颜色
            # 按钮颜色 (默认 secondary)
            "Color": r"secondary",
            # 按钮是否可见 (默认 True,即可见)
            "Visible": True,
            # 是否在触发时清除历史 (默认 False,即不处理之前的对话历史)
            "AutoClearHistory": False
        },
        "中文学术润色": {
            "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
@@ -76,3 +81,14 @@ def get_core_functions():
            "Suffix": r"",
        }
    }


def handle_core_functionality(additional_fn, inputs, history, chatbot):
    import core_functional
    importlib.reload(core_functional)    # 热更新prompt
    core_functional = core_functional.get_core_functions()
    if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
    inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
    if core_functional[additional_fn].get("AutoClearHistory", False):
        history = []
    return inputs, history
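The new `handle_core_functionality` helper centralizes how a core-function button wraps the user input. A minimal usage sketch, assuming it is called from the main request handler (the call site below is illustrative, not part of this diff):

```python
# Illustrative only: consuming one entry from get_core_functions().
# "英语学术润色" is a key defined above; the surrounding call site is assumed.
from core_functional import handle_core_functionality

user_input = "We proposes a novel method ..."
history = ["earlier turn 1", "earlier turn 2"]

# Prefix/Suffix of the selected entry are wrapped around the raw input;
# history is emptied only if the entry sets "AutoClearHistory": True.
inputs, history = handle_core_functionality("英语学术润色", user_input, history, chatbot=None)
```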
@@ -1,249 +0,0 @@
"""
这是什么?
这个文件用于函数插件的单元测试
运行方法 python crazy_functions/crazy_functions_test.py
"""

# ==============================================================================================================================

def validate_path():
    import os, sys
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory

# ==============================================================================================================================

from colorful import *
from toolbox import get_conf, ChatBotWithCookies
import contextlib
import os
import sys
from functools import wraps
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
    get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')

llm_kwargs = {
    'api_key': API_KEY,
    'llm_model': LLM_MODEL,
    'top_p':1.0,
    'max_length': None,
    'temperature':1.0,
}
plugin_kwargs = { }
chatbot = ChatBotWithCookies(llm_kwargs)
history = []
system_prompt = "Serve me as a writing and programming assistant."
web_port = 1024

# ==============================================================================================================================

def silence_stdout(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        _original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')
        for q in func(*args, **kwargs):
            sys.stdout = _original_stdout
            yield q
            sys.stdout = open(os.devnull, 'w')
        sys.stdout.close()
        sys.stdout = _original_stdout
    return wrapper

class CLI_Printer():
    def __init__(self) -> None:
        self.pre_buf = ""

    def print(self, buf):
        bufp = ""
        for index, chat in enumerate(buf):
            a, b = chat
            bufp += sprint亮靛('[Me]:' + a) + '\n'
            bufp += '[GPT]:' + b
            if index < len(buf)-1:
                bufp += '\n'

        if self.pre_buf!="" and bufp.startswith(self.pre_buf):
            print(bufp[len(self.pre_buf):], end='')
        else:
            print('\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n'+bufp, end='')
        self.pre_buf = bufp
        return

cli_printer = CLI_Printer()
# ==============================================================================================================================
def test_解析一个Python项目():
    from crazy_functions.解析项目源代码 import 解析一个Python项目
    txt = "crazy_functions/test_project/python/dqn"
    for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_解析一个Cpp项目():
    from crazy_functions.解析项目源代码 import 解析一个C项目
    txt = "crazy_functions/test_project/cpp/cppipc"
    for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_Latex英文润色():
    from crazy_functions.Latex全文润色 import Latex英文润色
    txt = "crazy_functions/test_project/latex/attention"
    for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_Markdown中译英():
    from crazy_functions.批量Markdown翻译 import Markdown中译英
    txt = "README.md"
    for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_批量翻译PDF文档():
    from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
    txt = "crazy_functions/test_project/pdf_and_word"
    for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_谷歌检索小助手():
    from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
    txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG="
    for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_总结word文档():
    from crazy_functions.总结word文档 import 总结word文档
    txt = "crazy_functions/test_project/pdf_and_word"
    for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_下载arxiv论文并翻译摘要():
    from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
    txt = "1812.10695"
    for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_联网回答问题():
    from crazy_functions.联网的ChatGPT import 连接网络回答问题
    # txt = "谁是应急食品?"
    # >> '根据以上搜索结果可以得知,应急食品是“原神”游戏中的角色派蒙的外号。'
    # txt = "道路千万条,安全第一条。后面两句是?"
    # >> '行车不规范,亲人两行泪。'
    # txt = "You should have gone for the head. What does that mean?"
    # >> The phrase "You should have gone for the head" is a quote from the Marvel movies, Avengers: Infinity War and Avengers: Endgame. It was spoken by the character Thanos in Infinity War and by Thor in Endgame.
    txt = "AutoGPT是什么?"
    for cookies, cb, hist, msg in 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print("当前问答:", cb[-1][-1].replace("\n"," "))
    for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])

def test_解析ipynb文件():
    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
    txt = "crazy_functions/test_samples"
    for cookies, cb, hist, msg in 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)


def test_数学动画生成manim():
    from crazy_functions.数学动画生成manim import 动画生成
    txt = "A ball split into 2, and then split into 4, and finally split into 8."
    for cookies, cb, hist, msg in 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)



def test_Markdown多语言():
    from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
    txt = "README.md"
    history = []
    for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
        plugin_kwargs = {"advanced_arg": lang}
        for cookies, cb, hist, msg in Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
            print(cb)

def test_Langchain知识库():
    from crazy_functions.Langchain知识库 import 知识库问答
    txt = "./"
    chatbot = ChatBotWithCookies(llm_kwargs)
    for cookies, cb, hist, msg in silence_stdout(知识库问答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb) # print(cb)

    chatbot = ChatBotWithCookies(cookies)
    from crazy_functions.Langchain知识库 import 读取知识库作答
    txt = "What is the installation method?"
    for cookies, cb, hist, msg in silence_stdout(读取知识库作答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb) # print(cb)

def test_Langchain知识库读取():
    from crazy_functions.Langchain知识库 import 读取知识库作答
    txt = "远程云服务器部署?"
    for cookies, cb, hist, msg in silence_stdout(读取知识库作答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb) # print(cb)

def test_Latex():
    from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比, Latex翻译中文并重新编译PDF

    # txt = r"https://arxiv.org/abs/1706.03762"
    # txt = r"https://arxiv.org/abs/1902.03185"
    # txt = r"https://arxiv.org/abs/2305.18290"
    # txt = r"https://arxiv.org/abs/2305.17608"
    # txt = r"https://arxiv.org/abs/2211.16068" # ACE
    # txt = r"C:\Users\x\arxiv_cache\2211.16068\workfolder" # ACE
    # txt = r"https://arxiv.org/abs/2002.09253"
    # txt = r"https://arxiv.org/abs/2306.07831"
    # txt = r"https://arxiv.org/abs/2212.10156"
    # txt = r"https://arxiv.org/abs/2211.11559"
    # txt = r"https://arxiv.org/abs/2303.08774"
    # txt = r"https://arxiv.org/abs/2303.12712"
    # txt = r"C:\Users\fuqingxu\arxiv_cache\2303.12712\workfolder"
    # txt = r"2306.17157" # 这个paper有个input命令文件名大小写错误!
    # txt = "https://arxiv.org/abs/2205.14135"
    # txt = r"C:\Users\fuqingxu\arxiv_cache\2205.14135\workfolder"
    # txt = r"C:\Users\fuqingxu\arxiv_cache\2205.14135\workfolder"
    txt = r"2210.03629"
    txt = r"2307.04964"
    for cookies, cb, hist, msg in (Latex翻译中文并重新编译PDF)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb) # print(cb)



    # txt = "2302.02948.tar"
    # print(txt)
    # main_tex, work_folder = Latex预处理(txt)
    # print('main tex:', main_tex)
    # res = 编译Latex(main_tex, work_folder)
    # # for cookies, cb, hist, msg in silence_stdout(编译Latex)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    #     cli_printer.print(cb) # print(cb)

def test_chatglm_finetune():
    from crazy_functions.chatglm微调工具 import 微调数据集生成, 启动微调
    txt = 'build/dev.json'
    plugin_kwargs = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" }

    # for cookies, cb, hist, msg in (微调数据集生成)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    #     cli_printer.print(cb)

    plugin_kwargs = {"advanced_arg":
        " --pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }

    for cookies, cb, hist, msg in (启动微调)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb)


if __name__ == "__main__":
    # test_解析一个Python项目()
    # test_Latex英文润色()
    # test_Markdown中译英()
    # test_批量翻译PDF文档()
    # test_谷歌检索小助手()
    # test_总结word文档()
    # test_下载arxiv论文并翻译摘要()
    # test_解析一个Cpp项目()
    # test_联网回答问题()
    # test_解析ipynb文件()
    # test_数学动画生成manim()
    # test_Langchain知识库()
    # test_Langchain知识库读取()
    test_Latex()
    # test_chatglm_finetune()
    input("程序完成,回车退出。")
    print("退出。")
@@ -19,7 +19,7 @@ class AliyunASR():
        pass

    def test_on_error(self, message, *args):
        # print("on_error args=>{}".format(args))
        print("on_error args=>{}".format(args))
        pass

    def test_on_close(self, *args):
@@ -50,6 +50,8 @@ class AliyunASR():
        rad.clean_up()
        temp_folder = tempfile.gettempdir()
        TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
        if len(TOKEN) == 0:
            TOKEN = self.get_token()
        self.aliyun_service_ok = True
        URL="wss://nls-gateway.aliyuncs.com/ws/v1"
        sr = nls.NlsSpeechTranscriber(
@@ -91,3 +93,38 @@ class AliyunASR():
            self.stop = True
            self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
        r = sr.stop()

    def get_token(self):
        from toolbox import get_conf
        import json
        from aliyunsdkcore.request import CommonRequest
        from aliyunsdkcore.client import AcsClient
        AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')

        # 创建AcsClient实例
        client = AcsClient(
            AccessKey_ID,
            AccessKey_secret,
            "cn-shanghai"
        )

        # 创建request,并设置参数。
        request = CommonRequest()
        request.set_method('POST')
        request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
        request.set_version('2019-02-28')
        request.set_action_name('CreateToken')

        try:
            response = client.do_action_with_exception(request)
            print(response)
            jss = json.loads(response)
            if 'Token' in jss and 'Id' in jss['Token']:
                token = jss['Token']['Id']
                expireTime = jss['Token']['ExpireTime']
                print("token = " + token)
                print("expireTime = " + str(expireTime))
        except Exception as e:
            print(e)

        return token
31	crazy_functions/命令行助手.py (normal file)
@@ -0,0 +1,31 @@
from toolbox import CatchException, update_ui, gen_time_str
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping
import copy, json

@CatchException
def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
    plugin_kwargs   插件模型的参数, 暂时没有用武之地
    chatbot         聊天显示框的句柄, 用于显示给用户
    history         聊天历史, 前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    # 清空历史, 以免输入溢出
    history = []

    # 输入
    i_say = "请写bash命令实现以下功能:" + txt
    # 开始
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
    )
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
@@ -55,7 +55,7 @@ def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
    web_port 当前软件运行的端口号
    """
    history = [] # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-xxxx或者api2d-xxxx。如果中文效果不理想, 尝试Prompt。正在处理中 ....."))
    chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文效果不理想, 请尝试英文Prompt。正在处理中 ....."))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution = plugin_kwargs.get("advanced_arg", '256x256')
@@ -1,5 +1,7 @@
from toolbox import update_ui, trimmed_format_exc, gen_time_str
from toolbox import CatchException, report_execption, write_results_to_file
import glob, time, os, re
from toolbox import update_ui, trimmed_format_exc, gen_time_str, disable_auto_promotion
from toolbox import CatchException, report_execption, write_history_to_file
from toolbox import promote_file_to_downloadzone, get_log_folder
fast_debug = False

class PaperFileGroup():
@@ -42,13 +44,13 @@ class PaperFileGroup():
    def write_result(self, language):
        manifest = []
        for path, res in zip(self.file_paths, self.file_result):
            with open(path + f'.{gen_time_str()}.{language}.md', 'w', encoding='utf8') as f:
                manifest.append(path + f'.{gen_time_str()}.{language}.md')
            dst_file = os.path.join(get_log_folder(), f'{gen_time_str()}.md')
            with open(dst_file, 'w', encoding='utf8') as f:
                manifest.append(dst_file)
                f.write(res)
        return manifest

def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
    import time, os, re
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Markdown文件,删除其中的所有注释 ---------->
@@ -102,28 +104,38 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
            print(trimmed_format_exc())

    # <-------- 整理结果,退出 ---------->
    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
    res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
    create_report_file_name = gen_time_str() + f"-chatgpt.md"
    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)
    history = gpt_response_collection
    chatbot.append((f"{fp}完成了吗?", res))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


def get_files_from_everything(txt):
    import glob, os

def get_files_from_everything(txt, preference=''):
    if txt == "": return False, None, None
    success = True
    if txt.startswith('http'):
        # 网络的远程文件
        txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
        txt = txt.replace("/blob/", "/")
        import requests
        from toolbox import get_conf
        proxies, = get_conf('proxies')
        # 网络的远程文件
        if preference == 'Github':
            print('正在从github下载资源 ...')
            if not txt.endswith('.md'):
                # Make a request to the GitHub API to retrieve the repository information
                url = txt.replace("https://github.com/", "https://api.github.com/repos/") + '/readme'
                response = requests.get(url, proxies=proxies)
                txt = response.json()['download_url']
            else:
                txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
                txt = txt.replace("/blob/", "/")

        r = requests.get(txt, proxies=proxies)
        with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
        project_folder = './gpt_log/'
        file_manifest = ['./gpt_log/temp.md']
        download_local = f'{get_log_folder(plugin_name="批量Markdown翻译")}/raw-readme-{gen_time_str()}.md'
        project_folder = f'{get_log_folder(plugin_name="批量Markdown翻译")}'
        with open(download_local, 'wb+') as f: f.write(r.content)
        file_manifest = [download_local]
    elif txt.endswith('.md'):
        # 直接给定文件
        file_manifest = [txt]
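For reference, the `preference == 'Github'` branch above resolves a repository URL to its README through GitHub's public REST API before downloading the raw file. A stripped-down sketch of that lookup (proxy handling and the plugin's log-folder paths are omitted; the repository URL is just an example):

```python
# Minimal sketch of the GitHub README lookup used above; the /repos/{owner}/{repo}/readme
# endpoint and its 'download_url' field are part of the public GitHub REST API.
import requests

repo = "https://github.com/binary-husky/gpt_academic"
api = repo.replace("https://github.com/", "https://api.github.com/repos/") + "/readme"
meta = requests.get(api, timeout=10).json()        # README metadata for the default branch
raw_url = meta["download_url"]                     # direct raw.githubusercontent.com link
readme_bytes = requests.get(raw_url, timeout=10).content
```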
@@ -145,11 +157,11 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import tiktoken
        import glob, os
    except:
        report_execption(chatbot, history,
            a=f"解析项目: {txt}",
@@ -158,7 +170,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        return
    history = [] # 清空历史,以免输入溢出

    success, file_manifest, project_folder = get_files_from_everything(txt)
    success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")

    if not success:
        # 什么都没有
@@ -185,11 +197,11 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import tiktoken
        import glob, os
    except:
        report_execption(chatbot, history,
            a=f"解析项目: {txt}",
@@ -218,11 +230,11 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history,
        "函数插件功能?",
        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    disable_auto_promotion(chatbot)

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import tiktoken
        import glob, os
    except:
        report_execption(chatbot, history,
            a=f"解析项目: {txt}",
@@ -1,87 +1,70 @@
from toolbox import CatchException, update_ui, gen_time_str
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping
import copy, json


prompt = """
I have to achieve some functionalities by calling one of the functions below.
Your job is to find the correct funtion to use to satisfy my requirement,
and then write python code to call this function with correct parameters.

These are functions you are allowed to choose from:
1.
    功能描述: 总结音视频内容
    调用函数: ConcludeAudioContent(txt, llm_kwargs)
    参数说明:
        txt: 音频文件的路径
        llm_kwargs: 模型参数, 永远给定None
2.
    功能描述: 将每次对话记录写入Markdown格式的文件中
    调用函数: WriteMarkdown()
3.
    功能描述: 将指定目录下的PDF文件从英文翻译成中文
    调用函数: BatchTranslatePDFDocuments_MultiThreaded(txt, llm_kwargs)
    参数说明:
        txt: PDF文件所在的路径
        llm_kwargs: 模型参数, 永远给定None
4.
    功能描述: 根据文本使用GPT模型生成相应的图像
    调用函数: ImageGeneration(txt, llm_kwargs)
    参数说明:
        txt: 图像生成所用到的提示文本
        llm_kwargs: 模型参数, 永远给定None
5.
    功能描述: 对输入的word文档进行摘要生成
    调用函数: SummarizingWordDocuments(input_path, output_path)
    参数说明:
        input_path: 待处理的word文档路径
        output_path: 摘要生成后的文档路径


You should always anwser with following format:
----------------
Code:
```
class AutoAcademic(object):
    def __init__(self):
        self.selected_function = "FILL_CORRECT_FUNCTION_HERE" # e.g., "GenerateImage"
        self.txt = "FILL_MAIN_PARAMETER_HERE" # e.g., "荷叶上的蜻蜓"
        self.llm_kwargs = None
```
Explanation:
只有GenerateImage和生成图像相关, 因此选择GenerateImage函数。
----------------

Now, this is my requirement:

"""
def get_fn_lib():
    return {
        "BatchTranslatePDFDocuments_MultiThreaded": ("crazy_functions.批量翻译PDF文档_多线程", "批量翻译PDF文档"),
        "SummarizingWordDocuments": ("crazy_functions.总结word文档", "总结word文档"),
        "ImageGeneration": ("crazy_functions.图片生成", "图片生成"),
        "TranslateMarkdownFromEnglishToChinese": ("crazy_functions.批量Markdown翻译", "Markdown中译英"),
        "SummaryAudioVideo": ("crazy_functions.总结音视频", "总结音视频"),
        "BatchTranslatePDFDocuments_MultiThreaded": {
            "module": "crazy_functions.批量翻译PDF文档_多线程",
            "function": "批量翻译PDF文档",
            "description": "Translate PDF Documents",
            "arg_1_description": "A path containing pdf files.",
        },
        "SummarizingWordDocuments": {
            "module": "crazy_functions.总结word文档",
            "function": "总结word文档",
            "description": "Summarize Word Documents",
            "arg_1_description": "A path containing Word files.",
        },
        "ImageGeneration": {
            "module": "crazy_functions.图片生成",
            "function": "图片生成",
            "description": "Generate a image that satisfies some description.",
            "arg_1_description": "Descriptions about the image to be generated.",
        },
        "TranslateMarkdownFromEnglishToChinese": {
            "module": "crazy_functions.批量Markdown翻译",
            "function": "Markdown中译英",
            "description": "Translate Markdown Documents from English to Chinese.",
            "arg_1_description": "A path containing Markdown files.",
        },
        "SummaryAudioVideo": {
            "module": "crazy_functions.总结音视频",
            "function": "总结音视频",
            "description": "Get text from a piece of audio and summarize this audio.",
            "arg_1_description": "A path containing audio files.",
        },
    }

functions = [
    {
        "name": k,
        "description": v['description'],
        "parameters": {
            "type": "object",
            "properties": {
                "plugin_arg_1": {
                    "type": "string",
                    "description": v['arg_1_description'],
                },
            },
            "required": ["plugin_arg_1"],
        },
    } for k, v in get_fn_lib().items()
]

def inspect_dependency(chatbot, history):
    return True

def eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    import subprocess, sys, os, shutil, importlib

    with open('gpt_log/void_terminal_runtime.py', 'w', encoding='utf8') as f:
        f.write(code)

    import importlib
    try:
        AutoAcademic = getattr(importlib.import_module('gpt_log.void_terminal_runtime', 'AutoAcademic'), 'AutoAcademic')
        # importlib.reload(AutoAcademic)
        auto_dict = AutoAcademic()
        selected_function = auto_dict.selected_function
        txt = auto_dict.txt
        fp, fn = get_fn_lib()[selected_function]
        tmp = get_fn_lib()[code['name']]
        fp, fn = tmp['module'], tmp['function']
        fn_plugin = getattr(importlib.import_module(fp, fn), fn)
        yield from fn_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
        arg = json.loads(code['arguments'])['plugin_arg_1']
        yield from fn_plugin(arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
    except:
        from toolbox import trimmed_format_exc
        chatbot.append(["执行错误", f"\n```\n{trimmed_format_exc()}\n```\n"])
@@ -110,22 +93,27 @@ def 终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_
    history = []

    # 基本信息:功能、贡献者
    chatbot.append(["函数插件功能?", "根据自然语言执行插件命令, 作者: binary-husky, 插件初始化中 ..."])
    chatbot.append(["虚空终端插件的功能?", "根据自然语言的描述, 执行任意插件的命令."])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # # 尝试导入依赖, 如果缺少依赖, 则给出安装建议
    # dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面
    # if not dep_ok: return

    # 输入
    i_say = prompt + txt
    i_say = txt
    # 开始
    llm_kwargs_function_call = copy.deepcopy(llm_kwargs)
    llm_kwargs_function_call['llm_model'] = 'gpt-call-fn' # 修改调用函数
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt=""
        llm_kwargs=llm_kwargs_function_call, chatbot=chatbot, history=[],
        sys_prompt=functions
    )

    # 将代码转为动画
    code = get_code_block(gpt_say)
    yield from eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
    res = json.loads(gpt_say)['choices'][0]
    if res['finish_reason'] == 'function_call':
        code = json.loads(gpt_say)['choices'][0]
        yield from eval_code(code['message']['function_call'], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
    else:
        chatbot.append(["无法调用相关功能", res])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
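The rewritten plugin above hands the `functions` schema to the model and then parses an OpenAI-style `function_call` answer. A minimal, self-contained sketch of that round trip outside the plugin framework (the model name and the legacy `openai` 0.x client call are assumptions, not taken from this diff; the schema mirrors one entry of `get_fn_lib()`):

```python
# Illustrative sketch of the function-calling round trip the plugin parses above.
import json
import openai  # legacy openai<1.0 client

functions = [{
    "name": "ImageGeneration",
    "description": "Generate a image that satisfies some description.",
    "parameters": {
        "type": "object",
        "properties": {"plugin_arg_1": {
            "type": "string",
            "description": "Descriptions about the image to be generated."}},
        "required": ["plugin_arg_1"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",                      # assumed function-calling-capable model
    messages=[{"role": "user", "content": "画一张荷叶上的蜻蜓"}],
    functions=functions,
)
choice = resp["choices"][0]
if choice["finish_reason"] == "function_call":
    call = choice["message"]["function_call"]        # same shape eval_code() receives
    plugin_arg = json.loads(call["arguments"])["plugin_arg_1"]
```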
@@ -42,12 +42,14 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
    web_port 当前软件运行的端口号
    """
    history = [] # 清空历史,以免输入溢出
    chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新

    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
    llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔

    chatbot.append((txt, f"正在同时咨询{llm_kwargs['llm_model']}"))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新

    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=txt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
@@ -97,7 +97,7 @@ class InterviewAssistant(AliyunASR):
        # 初始化音频采集线程
        self.captured_audio = np.array([])
        self.keep_latest_n_second = 10
        self.commit_after_pause_n_second = 1.5
        self.commit_after_pause_n_second = 2.0
        self.ready_audio_flagment = None
        self.stop = False
        self.plugin_wd = WatchDog(timeout=5, bark_fn=self.__del__, msg="程序终止")
@@ -179,12 +179,12 @@ def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
        import nls
        from scipy import io
    except:
        chatbot.append(["导入依赖失败", "使用该模块需要额外依赖, 安装方法:```pip install --upgrade pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
        chatbot.append(["导入依赖失败", "使用该模块需要额外依赖, 安装方法:```pip install --upgrade aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
    if TOKEN == "" or APPKEY == "":
    APPKEY = get_conf('ALIYUN_APPKEY')
    if APPKEY == "":
        chatbot.append(["导入依赖失败", "没有阿里云语音识别APPKEY和TOKEN, 详情见https://help.aliyun.com/document_detail/450255.html"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
@@ -115,3 +115,36 @@ services:
    command: >
      bash -c "python3 -u main.py"


## ===================================================
## 【方案五】 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
## ===================================================
version: '3'
services:
  gpt_academic_with_audio:
    image: ghcr.io/binary-husky/gpt_academic_audio_assistant:master
    environment:
      # 请查阅 `config.py` 以查看所有的配置信息
      API_KEY: ' fk195831-IdP0Pb3W6DCMUIbQwVX6MsSiyxwqybyS '
      USE_PROXY: ' False '
      proxies: ' None '
      LLM_MODEL: ' gpt-3.5-turbo '
      AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "gpt-4"] '
      ENABLE_AUDIO: ' True '
      LOCAL_MODEL_DEVICE: ' cuda '
      DEFAULT_WORKER_NUM: ' 20 '
      WEB_PORT: ' 12343 '
      ADD_WAIFU: ' True '
      THEME: ' Chuanhu-Small-and-Beautiful '
      ALIYUN_APPKEY: ' RoP1ZrM84DnAFkZK '
      ALIYUN_TOKEN: ' f37f30e0f9934c34a992f6f64f7eba4f '
      # (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
      # (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '

    # 与宿主的网络融合
    network_mode: "host"

    # 不使用代理网络拉取最新代码
    command: >
      bash -c "python3 -u main.py"
@@ -13,7 +13,7 @@ RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
# 下载分支
WORKDIR /gpt
RUN git clone https://github.com/binary-husky/chatgpt_academic.git
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/chatgpt_academic
RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
RUN python3 -m pip install -r requirements.txt
@@ -13,7 +13,7 @@ RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/

# 下载分支
WORKDIR /gpt
RUN git clone https://github.com/binary-husky/chatgpt_academic.git
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/chatgpt_academic
RUN python3 -m pip install -r requirements.txt
RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
@@ -0,0 +1,22 @@
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic-nolocal -f docs/Dockerfile+NoLocal .
# 如何运行: docker run --rm -it --net=host gpt-academic-nolocal
FROM python:3.11

# 指定路径
WORKDIR /gpt

# 装载项目文件
COPY . .

# 安装依赖
RUN pip3 install -r requirements.txt

# 安装语音插件的额外依赖
RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git

# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

# 启动
CMD ["python3", "-u", "main.py"]
@@ -2085,5 +2085,81 @@
    "欢迎使用 MOSS 人工智能助手!输入内容即可进行对话": "Welcome to use MOSS AI assistant! Enter the content to start the conversation.",
    "记住当前的label": "Remember the current label.",
    "不能正常加载ChatGLMFT的参数!": "Cannot load ChatGLMFT parameters normally!",
    "建议直接在API_KEY处填写": "It is recommended to fill in directly at API_KEY."
    "建议直接在API_KEY处填写": "It is recommended to fill in directly at API_KEY.",
    "创建request": "Create request",
    "默认 secondary": "Default secondary",
    "会被加在你的输入之前": "Will be added before your input",
    "缺少": "Missing",
    "前者是API2D的结束条件": "The former is the termination condition of API2D",
    "无需填写": "No need to fill in",
    "后缀": "Suffix",
    "扭转的范围": "Range of twisting",
    "是否在触发时清除历史": "Whether to clear history when triggered",
    "⭐多线程方法": "⭐Multi-threaded method",
    "消耗大量的内存": "Consumes a large amount of memory",
    "重组": "Reorganize",
    "高危设置! 常规情况下不要修改! 通过修改此设置": "High-risk setting! Do not modify under normal circumstances! Modify this setting",
    "检查USE_PROXY": "Check USE_PROXY",
    "标注节点的行数范围": "Range of line numbers for annotated nodes",
    "即不处理之前的对话历史": "That is, do not process previous conversation history",
    "即将编译PDF": "Compiling PDF",
    "没有设置ANTHROPIC_API_KEY选项": "ANTHROPIC_API_KEY option is not set",
    "非Openai官方接口返回了错误": "Non-Openai official interface returned an error",
    "您的 API_KEY 不满足任何一种已知的密钥格式": "Your API_KEY does not meet any known key format",
    "格式": "Format",
    "不能正常加载": "Cannot load properly",
    "🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行": "🏃‍♂️🏃‍♂️🏃‍♂️ Subprocess execution",
    "前缀": "Prefix",
    "创建AcsClient实例": "Create AcsClient instance",
    "⭐主进程执行": "⭐Main process execution",
    "增强稳健性": "Enhance robustness",
    "用来描述你的要求": "Used to describe your requirements",
    "举例": "For example",
    "⭐单线程方法": "⭐Single-threaded method",
    "后者是OPENAI的结束条件": "The latter is the termination condition of OPENAI",
    "防止proxies单独起作用": "Prevent proxies from working alone",
    "将两个PDF拼接": "Concatenate two PDFs",
    "最后一步处理": "The last step processing",
    "正在从github下载资源": "Downloading resources from github",
    "失败时": "When failed",
    "尚未加载": "Not loaded yet",
    "配合前缀可以把你的输入内容用引号圈起来": "With the prefix, you can enclose your input content in quotation marks",
    "我好!": "I'm good!",
    "默认 False": "Default False",
    "的依赖": "Dependencies of",
    "并设置参数": "and set parameters",
    "会被加在你的输入之后": "Will be added after your input",
    "安装": "Installation",
    "一个单实例装饰器": "Single instance decorator",
    "自定义API KEY格式": "Customize API KEY format",
    "的参数": "Parameters of",
    "api2d等请求源": "api2d and other request sources",
    "逆转出错的段落": "Reverse the wrong paragraph",
    "没有设置ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY is not set",
    "默认 True": "Default True",
    "本项目现已支持OpenAI和Azure的api-key": "This project now supports OpenAI and Azure's api-key",
    "即可见": "Visible immediately",
    "请问什么是质子": "What is a proton?",
    "按钮是否可见": "Is the button visible?",
    "调用": "Call",
    "如果要使用": "If you want to use",
    "的参数!": "parameters!",
    "例如翻译、解释代码、润色等等": "such as translation, code interpretation, polishing, etc.",
    "响应异常": "Response exception",
    "响应中": "Responding",
    "请尝试英文Prompt": "Try English Prompt",
    "在运行过程中动态地修改多个配置": "Dynamically modify multiple configurations during runtime",
    "无法调用相关功能": "Unable to invoke related functions",
    "接驳虚空终端": "Connect to Void Terminal",
    "虚空终端插件的功能": "Functionality of Void Terminal plugin",
    "执行任意插件的命令": "Execute commands of any plugin",
    "修改调用函数": "Modify calling function",
    "获取简单聊天的默认参数": "Get default parameters for simple chat",
    "根据自然语言的描述": "Based on natural language description",
    "获取插件的句柄": "Get handle of plugin",
    "第四部分": "Part Four",
    "在运行过程中动态地修改配置": "Dynamically modify configurations during runtime",
    "请先把模型切换至gpt-*或者api2d-*": "Please switch the model to gpt-* or api2d-* first",
    "获取简单聊天的句柄": "Get handle of simple chat",
    "获取插件的默认参数": "Get default parameters of plugin"
}
@@ -939,7 +939,6 @@
    "以下は学術論文の基本情報です": "以下は学術論文の基本情報です",
    "出力が不完全になる原因となる": "出力が不完全になる原因となる",
    "ハイフンを使って": "ハイフンを使って",
    "シングルスレッド": "シングルスレッド",
    "请先把模型切换至gpt-xxxx或者api2d-xxxx": "Please switch the model to gpt-xxxx or api2d-xxxx first.",
    "路径或网址": "Path or URL",
    "*代表通配符": "* represents a wildcard",
@@ -1484,5 +1483,632 @@
    "请提交新问题": "新しい問題を提出してください",
    "您正在调用一个": "あなたは呼び出しています",
    "请编辑以下文本": "以下のテキストを編集してください",
    "常见协议无非socks5h/http": "一般的なプロトコルはsocks5h/http以外ありません"
    "常见协议无非socks5h/http": "一般的なプロトコルはsocks5h/http以外ありません",
    "Latex英文纠错": "LatexEnglishErrorCorrection",
    "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
    "联网的ChatGPT_bing版": "OnlineChatGPT_BingVersion",
    "总结音视频": "SummarizeAudioVideo",
    "动画生成": "GenerateAnimation",
    "数学动画生成manim": "GenerateMathematicalAnimationManim",
    "Markdown翻译指定语言": "TranslateMarkdownSpecifiedLanguage",
    "知识库问答": "KnowledgeBaseQuestionAnswer",
    "Langchain知识库": "LangchainKnowledgeBase",
    "读取知识库作答": "ReadKnowledgeBaseAnswer",
    "交互功能模板函数": "InteractiveFunctionTemplateFunction",
    "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
    "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
    "Latex输出PDF结果": "LatexOutputPDFResult",
    "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
    "语音助手": "VoiceAssistant",
    "微调数据集生成": "FineTuneDatasetGeneration",
    "chatglm微调工具": "ChatGLMFineTuningTool",
    "启动微调": "StartFineTuning",
    "sprint亮靛": "SprintAzureIndigo",
    "专业词汇声明": "ProfessionalVocabularyDeclaration",
    "Latex精细分解与转化": "LatexDetailedDecompositionAndConversion",
    "编译Latex": "CompileLatex",
    "将代码转为动画": "コードをアニメーションに変換する",
    "解析arxiv网址失败": "arxivのURLの解析に失敗しました",
    "其他模型转化效果未知": "他のモデルの変換効果は不明です",
    "把文件复制过去": "ファイルをコピーする",
    "!!!如果需要运行量化版本": "!!!量子化バージョンを実行する必要がある場合",
    "报错信息如下. 如果是与网络相关的问题": "エラーメッセージは次のとおりです。ネットワークに関連する問題の場合",
    "请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期": "ALIYUN_TOKENとALIYUN_APPKEYの有効期限を確認してください",
    "编译结束": "コンパイル終了",
    "只读": "読み取り専用",
    "模型选择是": "モデルの選択は",
    "正在从github下载资源": "GitHubからリソースをダウンロードしています",
    "同时分解长句": "同時に長い文を分解する",
    "寻找主tex文件": "メインのtexファイルを検索する",
    "例如您可以将以下命令复制到下方": "たとえば、以下のコマンドを下にコピーできます",
    "使用中文总结音频“": "中国語で音声を要約する",
    "此处填API密钥": "ここにAPIキーを入力してください",
    "裁剪输入": "入力をトリミングする",
    "当前语言模型温度设定": "現在の言語モデルの温度設定",
    "history 是之前的对话列表": "historyは以前の対話リストです",
    "对输入的word文档进行摘要生成": "入力されたWord文書の要約を生成する",
    "输入问题后点击该插件": "質問を入力した後、このプラグインをクリックします",
    "仅在Windows系统进行了测试": "Windowsシステムでのみテストされています",
    "reverse 操作必须放在最后": "reverse操作は最後に配置する必要があります",
    "即将编译PDF": "PDFをコンパイルする予定です",
    "执行错误": "エラーが発生しました",
    "段音频完成了吗": "セグメントのオーディオは完了しましたか",
    "然后重启程序": "それからプログラムを再起動してください",
    "是所有LLM的通用接口": "これはすべてのLLMの共通インターフェースです",
    "当前报错的latex代码处于第": "現在のエラーのあるLaTeXコードは第",
    "🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行": "🏃‍♂️🏃‍♂️🏃‍♂️ サブプロセスの実行",
    "用来描述你的要求": "要求を説明するために使用されます",
    "原始PDF编译是否成功": "元のPDFのコンパイルは成功しましたか",
    "本地Latex论文精细翻译": "ローカルのLaTeX論文の詳細な翻訳",
    "设置OpenAI密钥和模型": "OpenAIキーとモデルの設定",
    "如果使用ChatGLM2微调模型": "ChatGLM2ファインチューニングモデルを使用する場合",
    "项目Github地址 \\url{https": "プロジェクトのGithubアドレス \\url{https",
    "将前后断行符脱离": "前後の改行文字を削除します",
    "该项目的Latex主文件是": "このプロジェクトのLaTeXメインファイルは",
    "编译已经开始": "コンパイルが開始されました",
    "*{\\scriptsize\\textbf{警告": "*{\\scriptsize\\textbf{警告",
    "从一批文件": "一連のファイルから",
    "等待用户的再次调用": "ユーザーの再呼び出しを待っています",
    "目前仅支持GPT3.5/GPT4": "現在、GPT3.5/GPT4のみをサポートしています",
    "如果一句话小于7个字": "1つの文が7文字未満の場合",
    "目前对机器学习类文献转化效果最好": "現在、機械学習の文献変換効果が最も良いです",
    "寻找主文件": "メインファイルを検索中",
    "解除插件状态": "プラグインの状態を解除します",
    "默认为Chinese": "デフォルトはChineseです",
    "依赖不足": "不足の依存関係",
    "编译文献交叉引用": "文献の相互参照をコンパイルする",
    "对不同latex源文件扣分": "異なるLaTeXソースファイルに罰則を課す",
    "再列出用户可能提出的三个问题": "ユーザーが提出する可能性のある3つの問題を再リスト化する",
    "建议排查": "トラブルシューティングの提案",
    "生成时间戳": "タイムスタンプの生成",
    "检查config中的AVAIL_LLM_MODELS选项": "configのAVAIL_LLM_MODELSオプションを確認する",
    "chatglmft 没有 sys_prompt 接口": "chatglmftにはsys_promptインターフェースがありません",
    "在一个异步线程中采集音频": "非同期スレッドでオーディオを収集する",
    "初始化插件状态": "プラグインの状態を初期化する",
    "内含已经翻译的Tex文档": "翻訳済みのTexドキュメントが含まれています",
    "请注意自我隐私保护哦!": "プライバシー保護に注意してください!",
    "使用正则表达式查找半行注释": "正規表現を使用して半行コメントを検索する",
    "不能正常加载ChatGLMFT的参数!": "ChatGLMFTのパラメータを正常にロードできません!",
    "首先你在中文语境下通读整篇论文": "まず、中国語の文脈で論文全体を読んでください",
    "如 绿帽子*深蓝色衬衫*黑色运动裤": "例えば、緑の帽子*濃い青のシャツ*黒のスポーツパンツ",
    "默认为default": "デフォルトはdefaultです",
    "将": "置き換える",
    "使用 Unsplash API": "Unsplash APIを使用する",
    "会被加在你的输入之前": "あなたの入力の前に追加されます",
    "还需要填写组织": "組織を入力する必要があります",
    "test_LangchainKnowledgeBase读取": "test_LangchainKnowledgeBaseの読み込み",
    "目前不支持历史消息查询": "現在、過去のメッセージのクエリはサポートされていません",
    "临时存储用于调试": "デバッグ用の一時的なストレージ",
    "提取总结": "テキストの翻訳",
    "每秒采样数量": "テキストの翻訳",
    "但通常不会出现在正文": "テキストの翻訳",
    "通过调用conversations_open方法打开一个频道": "テキストの翻訳",
    "导致输出不完整": "テキストの翻訳",
    "获取已打开频道的最新消息并返回消息列表": "テキストの翻訳",
    "Tex源文件缺失!": "テキストの翻訳",
    "如果需要使用Slack Claude": "テキストの翻訳",
    "扭转的范围": "テキストの翻訳",
    "使用latexdiff生成论文转化前后对比": "テキストの翻訳",
    "--读取文件": "テキストの翻訳",
    "调用openai api 使用whisper-1模型": "テキストの翻訳",
    "避免遗忘导致死锁": "テキストの翻訳",
    "在多Tex文档中": "テキストの翻訳",
    "失败时": "テキストの翻訳",
    "然后转移到指定的另一个路径中": "テキストの翻訳",
    "使用Newbing": "テキストの翻訳",
    "的参数": "テキストの翻訳",
    "后者是OPENAI的结束条件": "テキストの翻訳",
    "构建知识库": "テキストの翻訳",
    "吸收匿名公式": "テキストの翻訳",
    "前缀": "テキストの翻訳",
    "会直接转到该函数": "テキストの翻訳",
    "Claude失败": "テキストの翻訳",
    "P.S. 但愿没人把latex模板放在里面传进来": "P.S. 但愿没人把latex模板放在里面传进来",
    "临时地启动代理网络": "临时地启动代理网络",
    "读取文件内容到内存": "読み込んだファイルの内容をメモリに保存する",
    "总结音频": "音声をまとめる",
    "没有找到任何可读取文件": "読み込み可能なファイルが見つかりません",
    "获取Slack消息失败": "Slackメッセージの取得に失敗しました",
    "用黑色标注转换区": "黒い注釈で変換エリアをマークする",
    "此插件处于开发阶段": "このプラグインは開発中です",
    "其他操作系统表现未知": "他のオペレーティングシステムの動作は不明です",
    "返回找到的第一个": "最初に見つかったものを返す",
    "发现已经存在翻译好的PDF文档": "翻訳済みのPDFドキュメントが既に存在することがわかりました",
    "不包含任何可用于": "使用できるものは含まれていません",
    "发送到openai音频解析终端": "openai音声解析端に送信する",
    "========================================= 插件主程序2 =====================================================": "========================================= プラグインメインプログラム2 =====================================================",
    "正在重试": "再試行中",
    "从而更全面地理解项目的整体功能": "プロジェクトの全体的な機能をより理解するために",
    "正在等您说完问题": "質問が完了するのをお待ちしています",
    "使用教程详情见 request_llm/README.md": "使用方法の詳細については、request_llm/README.mdを参照してください",
    "6.25 加入判定latex模板的代码": "6.25 テンプレートの判定コードを追加",
    "找不到任何音频或视频文件": "音声またはビデオファイルが見つかりません",
    "请求GPT模型的": "GPTモデルのリクエスト",
    "行": "行",
    "分析上述回答": "上記の回答を分析する",
    "如果要使用ChatGLMFT": "ChatGLMFTを使用する場合",
    "上传Latex项目": "Latexプロジェクトをアップロードする",
    "如参考文献、脚注、图注等": "参考文献、脚注、図のキャプションなど",
    "未配置": "設定されていません",
    "请在此处给出自定义翻译命令": "カスタム翻訳コマンドをここに入力してください",
    "第二部分": "第2部分",
    "解压失败! 需要安装pip install py7zr来解压7z文件": "解凍に失敗しました!7zファイルを解凍するにはpip install py7zrをインストールする必要があります",
    "吸收在42行以内的begin-end组合": "42行以内のbegin-endの組み合わせを取り込む",
    "Latex文件融合完成": "Latexファイルの統合が完了しました",
    "输出html调试文件": "HTMLデバッグファイルの出力",
    "论文概况": "論文の概要",
    "修复括号": "括弧の修復",
    "赋予插件状态": "プラグインの状態を付与する",
    "标注节点的行数范围": "ノードの行数範囲を注釈する",
    "MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.": "MOSSは、ユーザーが選択した言語(英語や中文など)でスムーズに理解し、コミュニケーションすることができます。MOSSは、言語に基づくさまざまなタスクを実行できます。",
    "LLM_MODEL是默认选中的模型": "LLM_MODELはデフォルトで選択されたモデルです",
    "配合前缀可以把你的输入内容用引号圈起来": "接頭辞と組み合わせて、入力内容を引用符で囲むことができます",
    "获取关键词": "キーワードの取得",
    "本项目现已支持OpenAI和Azure的api-key": "このプロジェクトは、OpenAIおよびAzureのAPIキーをサポートしています",
    "欢迎使用 MOSS 人工智能助手!": "MOSS AIアシスタントをご利用いただきありがとうございます!",
    "在执行完成之后": "実行が完了した後",
    "正在听您讲话": "お話をお聞きしています",
    "Claude回复的片段": "Claudeの返信の一部",
    "返回": "戻る",
    "期望格式例如": "期待される形式の例",
    "gpt 多线程请求": "GPTマルチスレッドリクエスト",
    "当前工作路径为": "現在の作業パスは",
    "该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成": "このPDFはGPT-Academicオープンソースプロジェクトによって大規模言語モデル+Latex翻訳プラグインを使用して一括生成されました",
    "解决插件锁定时的界面显示问题": "プラグインのロック時のインターフェース表示の問題を解決する",
    "默认 secondary": "デフォルトのセカンダリ",
    "会把列表拆解": "リストを分解します",
    "暂时不支持历史消息": "一時的に歴史メッセージはサポートされていません",
    "或者重启之后再度尝试": "または再起動後に再試行してください",
    "吸收其他杂项": "他の雑項を吸収する",
    "双手离开鼠标键盘吧": "両手をマウスとキーボードから離してください",
    "建议更换代理协议": "プロキシプロトコルの変更をお勧めします",
    "音频助手": "オーディオアシスタント",
    "请耐心等待": "お待ちください",
    "翻译结果": "翻訳結果",
    "请在此处追加更细致的矫错指令": "ここにより詳細なエラー修正命令を追加してください",
    "编译原始PDF": "元のPDFをコンパイルする",
    "-构建知识库": "-ナレッジベースの構築",
    "删除中间文件夹": "中間フォルダを削除する",
    "这段代码定义了一个名为TempProxy的空上下文管理器": "このコードはTempProxyという名前の空のコンテキストマネージャを定義しています",
    "参数说明": "パラメータの説明",
    "正在预热文本向量化模组": "テキストベクトル化モジュールのプリヒート中",
    "函数插件": "関数プラグイン",
    "右下角更换模型菜单中可切换openai": "右下のモデルメニューでopenaiを切り替えることができます",
    "先上传数据集": "まずデータセットをアップロードしてください",
    "LatexEnglishErrorCorrection+高亮修正位置": "テキストの翻訳",
    "正在构建知识库": "テキストの翻訳",
    "用红色标注处保留区": "テキストの翻訳",
    "安装Claude的依赖": "テキストの翻訳",
    "已禁用": "テキストの翻訳",
    "是否在提交时自动清空输入框": "テキストの翻訳",
    "GPT 学术优化": "テキストの翻訳",
    "需要特殊依赖": "テキストの翻訳",
    "test_联网回答问题": "テキストの翻訳",
    "除非您是论文的原作者": "テキストの翻訳",
    "即可见": "テキストの翻訳",
    "解析为简体中文": "テキストの翻訳",
    "解析整个Python项目": "テキストの翻訳",
    "========================================= 插件主程序1 =====================================================": "テキストの翻訳",
    "当前参数": "テキストの翻訳",
    "处理个别特殊插件的锁定状态": "テキストの翻訳",
    "已知某些代码的局部作用是": "テキストの翻訳",
    "请务必用 pip install -r requirements.txt 指令安装依赖": "テキストの翻訳",
    "安装": "テキストの翻訳",
    "请登录OpenAI查看详情 https": "テキストの翻訳",
    "必须包含documentclass": "テキストの翻訳",
    "极少数情况下": "テキストの翻訳",
    "并将返回的频道ID保存在属性CHANNEL_ID中": "テキストの翻訳",
    "您的 API_KEY 不满足任何一种已知的密钥格式": "テキストの翻訳",
    "-预热文本向量化模组": "テキストの翻訳",
    "什么都没有": "テキストの翻訳",
    "等待GPT响应": "テキストの翻訳",
    "请尝试把以下指令复制到高级参数区": "テキストの翻訳",
    "模型参数": "テキストの翻訳",
    "先删除": "テキストの翻訳",
    "响应中": "テキストの翻訳",
    "开始接收chatglmft的回复": "テキストの翻訳",
    "手动指定语言": "テキストの翻訳",
    "获取线程锁": "テキストの翻訳",
    "当前大语言模型": "テキストの翻訳",
    "段音频的第": "テキストの翻訳",
    "正在编译对比PDF": "テキストの翻訳",
    "根据需要切换prompt": "テキストの翻訳",
    "取评分最高者返回": "テキストの翻訳",
    "如果您是论文原作者": "テキストの翻訳",
    "段音频的主要内容": "テキストの翻訳",
    "为啥chatgpt会把cite里面的逗号换成中文逗号呀": "テキストの翻訳",
    "为每一位访问的用户赋予一个独一无二的uuid编码": "テキストの翻訳",
    "将每次对话记录写入Markdown格式的文件中": "テキストの翻訳",
    "ChatGLMFT尚未加载": "テキストの翻訳",
    "切割音频文件": "テキストの翻訳",
    "例如 f37f30e0f9934c34a992f6f64f7eba4f": "テキストの翻訳",
    "work_folder = Latex预处理": "テキストの翻訳",
|
||||
"出问题了": "問題が発生しました",
|
||||
"等待Claude响应中": "Claudeの応答を待っています",
|
||||
"增强稳健性": "信頼性を向上させる",
|
||||
"赋予插件锁定 锁定插件回调路径": "プラグインにコールバックパスをロックする",
|
||||
"将多文件tex工程融合为一个巨型tex": "複数のファイルのtexプロジェクトを1つの巨大なtexに統合する",
|
||||
"参考文献转Bib": "参考文献をBibに変換する",
|
||||
"由于提问含不合规内容被Azure过滤": "質問が規則に違反しているため、Azureによってフィルタリングされました",
|
||||
"读取优先级": "優先度を読み取る",
|
||||
"格式如org-xxxxxxxxxxxxxxxxxxxxxxxx": "形式はorg-xxxxxxxxxxxxxxxxxxxxxxxxのようです",
|
||||
"辅助gpt生成代码": "GPTのコード生成を補助する",
|
||||
"读取音频文件": "音声ファイルを読み取る",
|
||||
"输入arxivID": "arxivIDを入力する",
|
||||
"转化PDF编译是否成功": "PDFのコンパイルが成功したかどうかを変換する",
|
||||
"Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数": "ChatGLMFTのパラメータを正常にロードできませんでした",
|
||||
"创建AcsClient实例": "AcsClientのインスタンスを作成する",
|
||||
"将 chatglm 直接对齐到 chatglm2": "chatglmをchatglm2に直接整列させる",
|
||||
"要求": "要求",
|
||||
"子任务失败时的重试次数": "サブタスクが失敗した場合のリトライ回数",
|
||||
"请求子进程": "サブプロセスを要求する",
|
||||
"按钮是否可见": "ボタンが表示可能かどうか",
|
||||
"将 \\include 命令转换为 \\input 命令": "\\includeコマンドを\\inputコマンドに変換する",
|
||||
"用户填3": "ユーザーが3を入力する",
|
||||
"后面是英文逗号": "後ろに英語のカンマがあります",
|
||||
"吸收iffalse注释": "iffalseコメントを吸収する",
|
||||
"请稍候": "お待ちください",
|
||||
"摘要生成后的文档路径": "要約生成後のドキュメントのパス",
|
||||
"主程序即将开始": "メインプログラムがすぐに開始されます",
|
||||
"处理历史信息": "履歴情報の処理",
|
||||
"根据给定的切割时长将音频文件切割成多个片段": "指定された分割時間に基づいてオーディオファイルを複数のセグメントに分割する",
|
||||
"解决部分词汇翻译不准确的问题": "一部の用語の翻訳の不正確さを解決する",
|
||||
"即将退出": "すぐに終了します",
|
||||
"用于给一小段代码上代理": "一部のコードにプロキシを適用するために使用されます",
|
||||
"提取文件扩展名": "ファイルの拡張子を抽出する",
|
||||
"目前支持的格式": "現在サポートされている形式",
|
||||
"第一次调用": "最初の呼び出し",
|
||||
"异步方法": "非同期メソッド",
|
||||
"P.S. 顺便把Latex的注释去除": "P.S. LaTeXのコメントを削除する",
|
||||
"构建完成": "ビルドが完了しました",
|
||||
"缺少": "不足しています",
|
||||
"建议暂时不要使用": "一時的に使用しないことをお勧めします",
|
||||
"对比PDF编译是否成功": "PDFのコンパイルが成功したかどうかを比較する",
|
||||
"填入azure openai api的密钥": "Azure OpenAI APIのキーを入力してください",
|
||||
"功能尚不稳定": "機能はまだ安定していません",
|
||||
"则跳过GPT请求环节": "GPTリクエストのスキップ",
|
||||
"即不处理之前的对话历史": "以前の対話履歴を処理しない",
|
||||
"非Openai官方接口返回了错误": "非公式のOpenAI APIがエラーを返しました",
|
||||
"其他类型文献转化效果未知": "他のタイプの文献の変換効果は不明です",
|
||||
"给出一些判定模板文档的词作为扣分项": "テンプレートドキュメントの単語を減点項目として提供する",
|
||||
"找 API_ORG 设置项": "API_ORGの設定項目を検索します",
|
||||
"调用函数": "関数を呼び出します",
|
||||
"需要手动安装新增的依赖库": "新しい依存ライブラリを手動でインストールする必要があります",
|
||||
"或者使用此插件继续上传更多文件": "または、このプラグインを使用してさらにファイルをアップロードします",
|
||||
"640个字节为一组": "640バイトごとにグループ化します",
|
||||
"逆转出错的段落": "エラーのあるパラグラフを逆転させます",
|
||||
"对话助手函数插件": "対話アシスタント関数プラグイン",
|
||||
"前者是API2D的结束条件": "前者はAPI2Dの終了条件です",
|
||||
"终端": "ターミナル",
|
||||
"仅调试": "デバッグのみ",
|
||||
"论文": "論文",
|
||||
"想象一个穿着者": "着用者を想像してください",
|
||||
"音频内容是": "音声の内容は",
|
||||
"如果需要使用AZURE 详情请见额外文档 docs\\use_azure.md": "AZUREを使用する必要がある場合は、詳細については別のドキュメント docs\\use_azure.md を参照してください",
|
||||
"请先将.doc文档转换为.docx文档": ".docドキュメントを.docxドキュメントに変換してください",
|
||||
"请查看终端的输出或耐心等待": "ターミナルの出力を確認するか、お待ちください",
|
||||
"初始化音频采集线程": "オーディオキャプチャスレッドを初期化します",
|
||||
"用该压缩包+ConversationHistoryArchive进行反馈": "この圧縮ファイル+ConversationHistoryArchiveを使用してフィードバックします",
|
||||
"阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https": "阿里云リアルタイム音声認識の設定は難しいため、上級ユーザーのみに推奨されます 参考 https",
|
||||
"多线程翻译开始": "マルチスレッド翻訳が開始されました",
|
||||
"只有GenerateImage和生成图像相关": "GenerateImageと関連する画像の生成のみ",
|
||||
"代理数据解析失败": "プロキシデータの解析に失敗しました",
|
||||
"建议使用英文单词": "英単語の使用をお勧めします",
|
||||
"功能描述": "機能の説明",
|
||||
"读 docs\\use_azure.md": "ドキュメントを読む",
|
||||
"将消耗较长时间下载中文向量化模型": "中国語のベクトル化モデルをダウンロードするのに時間がかかります",
|
||||
"表示频道ID": "チャネルIDを表示する",
|
||||
"未知指令": "不明なコマンド",
|
||||
"包含documentclass关键字": "documentclassキーワードを含む",
|
||||
"中读取数据构建知识库": "データを読み取って知識ベースを構築する",
|
||||
"远程云服务器部署": "リモートクラウドサーバーにデプロイする",
|
||||
"输入部分太自由": "入力が自由すぎる",
|
||||
"读取pdf文件": "PDFファイルを読み込む",
|
||||
"将两个PDF拼接": "2つのPDFを結合する",
|
||||
"默认值为1000": "デフォルト値は1000です",
|
||||
"写出文件": "ファイルに書き出す",
|
||||
"生成的视频文件路径": "生成されたビデオファイルのパス",
|
||||
"Arixv论文精细翻译": "Arixv論文の詳細な翻訳",
|
||||
"用latex编译为PDF对修正处做高亮": "LaTeXでコンパイルしてPDFに修正をハイライトする",
|
||||
"点击“停止”键可终止程序": "「停止」ボタンをクリックしてプログラムを終了できます",
|
||||
"否则将导致每个人的Claude问询历史互相渗透": "さもないと、各人のClaudeの問い合わせ履歴が相互に侵入します",
|
||||
"音频文件名": "オーディオファイル名",
|
||||
"的参数!": "のパラメータ!",
|
||||
"对话历史": "対話履歴",
|
||||
"当下一次用户提交时": "次のユーザーの提出時に",
|
||||
"数学GenerateAnimation": "数学GenerateAnimation",
|
||||
"如果要使用Claude": "Claudeを使用する場合は",
|
||||
"请向下翻": "下にスクロールしてください",
|
||||
"报告已经添加到右侧“文件上传区”": "報告は右側の「ファイルアップロードエリア」に追加されました",
|
||||
"删除整行的空注释": "空のコメントを含む行を削除する",
|
||||
"建议直接在API_KEY处填写": "API_KEYの場所に直接入力することをお勧めします",
|
||||
"暗色模式 / 亮色模式": "ダークモード/ライトモード",
|
||||
"做一些外观色彩上的调整": "外観の色調整を行う",
|
||||
"请切换至“KnowledgeBaseQuestionAnswer”插件进行知识库访问": "ナレッジベースのアクセスには「KnowledgeBaseQuestionAnswer」プラグインに切り替えてください",
|
||||
"它*必须*被包含在AVAIL_LLM_MODELS列表中": "それはAVAIL_LLM_MODELSリストに含まれている必要があります",
|
||||
"并设置参数": "パラメータを設定する",
|
||||
"待处理的word文档路径": "処理待ちのWord文書のパス",
|
||||
"调用缓存": "キャッシュを呼び出す",
|
||||
"片段": "フラグメント",
|
||||
"否则结束循环": "それ以外の場合はループを終了する",
|
||||
"请对下面的音频片段做概述": "以下のオーディオフラグメントについて概要を作成してください",
|
||||
"高危设置! 常规情况下不要修改! 通过修改此设置": "高リスクの設定!通常は変更しないでください!この設定を変更することで",
|
||||
"插件锁定中": "プラグインがロックされています",
|
||||
"开始": "開始",
|
||||
"但请查收结果": "結果を確認してください",
|
||||
"刷新Gradio前端界面": "Gradioフロントエンドインターフェースをリフレッシュする",
|
||||
"批量SummarizeAudioVideo": "オーディオビデオを一括要約する",
|
||||
"一个单实例装饰器": "単一のインスタンスデコレータ",
|
||||
"Claude响应异常": "Claudeの応答が異常です",
|
||||
"但内部用stream的方法避免中途网线被掐": "ただし、途中でネットワーク接続が切断されることを避けるために、内部ではストリームを使用しています",
|
||||
"检查USE_PROXY": "USE_PROXYを確認する",
|
||||
"永远给定None": "常にNoneを指定する",
|
||||
"报告如何远程获取": "報告のリモート取得方法",
|
||||
"您可以到Github Issue区": "GithubのIssueエリアにアクセスできます",
|
||||
"如果只询问1个大语言模型": "1つの大規模言語モデルにのみ質問する場合",
|
||||
"为了防止大语言模型的意外谬误产生扩散影响": "大規模言語モデルの誤った結果が広がるのを防ぐために",
|
||||
"编译BibTex": "BibTexのコンパイル",
|
||||
"⭐多线程方法": "マルチスレッドの方法",
|
||||
"推荐http": "httpをおすすめします",
|
||||
"如果要使用": "使用する場合",
|
||||
"的单词": "の単語",
|
||||
"如果本地使用不建议加这个": "ローカルで使用する場合はお勧めしません",
|
||||
"避免线程阻塞": "スレッドのブロックを回避する",
|
||||
"吸收title与作者以上的部分": "タイトルと著者以上の部分を吸収する",
|
||||
"作者": "著者",
|
||||
"5刀": "5ドル",
|
||||
"ChatGLMFT响应异常": "ChatGLMFTの応答異常",
|
||||
"才能继续下面的步骤": "次の手順に進むために",
|
||||
"对这个人外貌、身处的环境、内心世界、过去经历进行描写": "この人の外見、環境、内面世界、過去の経験について描写する",
|
||||
"找不到微调模型检查点": "ファインチューニングモデルのチェックポイントが見つかりません",
|
||||
"请仔细鉴别并以原文为准": "注意深く確認し、元のテキストを参照してください",
|
||||
"计算文件总时长和切割点": "ファイルの総時間とカットポイントを計算する",
|
||||
"我将为您查找相关壁纸": "関連する壁紙を検索します",
|
||||
"此插件Windows支持最佳": "このプラグインはWindowsに最適です",
|
||||
"请输入关键词": "キーワードを入力してください",
|
||||
"以下所有配置也都支持利用环境变量覆写": "以下のすべての設定は環境変数を使用して上書きすることもサポートしています",
|
||||
"尝试第": "第#",
|
||||
"开始生成动画": "アニメーションの生成を開始します",
|
||||
"免费": "無料",
|
||||
"我好!": "私は元気です!",
|
||||
"str类型": "strタイプ",
|
||||
"生成数学动画": "数学アニメーションの生成",
|
||||
"GPT结果已输出": "GPTの結果が出力されました",
|
||||
"PDF文件所在的路径": "PDFファイルのパス",
|
||||
"源码自译解": "ソースコードの自動翻訳解析",
|
||||
"格式如org-123456789abcdefghijklmno的": "org-123456789abcdefghijklmnoの形式",
|
||||
"请对这部分内容进行语法矫正": "この部分の内容に文法修正を行ってください",
|
||||
"调用whisper模型音频转文字": "whisperモデルを使用して音声をテキストに変換する",
|
||||
"编译转化后的PDF": "変換されたPDFをコンパイルする",
|
||||
"将音频解析为简体中文": "音声を簡体字中国語に解析する",
|
||||
"删除或修改歧义文件": "曖昧なファイルを削除または修正する",
|
||||
"ChatGLMFT消耗大量的内存": "ChatGLMFTは大量のメモリを消費します",
|
||||
"图像生成所用到的提示文本": "画像生成に使用されるヒントテキスト",
|
||||
"如果已经存在": "既に存在する場合",
|
||||
"以下是一篇学术论文的基础信息": "以下は学術論文の基本情報です",
|
||||
"解压失败! 需要安装pip install rarfile来解压rar文件": "解凍に失敗しました!rarファイルを解凍するにはpip install rarfileをインストールする必要があります",
|
||||
"一般是文本过长": "通常、テキストが長すぎます",
|
||||
"单线程": "シングルスレッド",
|
||||
"Linux下必须使用Docker安装": "LinuxではDockerを使用してインストールする必要があります",
|
||||
"请先上传文件素材": "まずファイル素材をアップロードしてください",
|
||||
"如果分析错误": "もし解析エラーがある場合",
|
||||
"快捷的调试函数": "便利なデバッグ関数",
|
||||
"欢迎使用 MOSS 人工智能助手!输入内容即可进行对话": "MOSS AIアシスタントをご利用いただきありがとうございます!入力内容を入力すると、対話ができます",
|
||||
"json等": "jsonなど",
|
||||
"--读取参数": "--パラメータの読み込み",
|
||||
"⭐单线程方法": "⭐シングルスレッドメソッド",
|
||||
"请用一句话概括这些文件的整体功能": "これらのファイルの全体的な機能を一文で要約してください",
|
||||
"用于灵活调整复杂功能的各种参数": "複雑な機能を柔軟に調整するためのさまざまなパラメータ",
|
||||
"默认 False": "デフォルトはFalseです",
|
||||
"生成中文PDF": "中国語のPDFを生成する",
|
||||
"正在处理": "処理中",
|
||||
"需要被切割的音频文件名": "分割する必要のある音声ファイル名",
|
||||
"根据文本使用GPT模型生成相应的图像": "テキストに基づいてGPTモデルを使用して対応する画像を生成する",
|
||||
"可选": "オプション",
|
||||
"Aliyun音频服务异常": "Aliyunオーディオサービスの異常",
|
||||
"尝试下载": "ダウンロードを試みる",
|
||||
"需Latex": "LaTeXが必要です",
|
||||
"拆分过长的Markdown文件": "長すぎるMarkdownファイルを分割する",
|
||||
"当前支持的格式包括": "現在サポートされている形式には",
|
||||
"=================================== 工具函数 ===============================================": "=================================== ユーティリティ関数 ===============================================",
|
||||
"所有音频都总结完成了吗": "すべてのオーディオが要約されましたか",
|
||||
"没有设置ANTHROPIC_API_KEY": "ANTHROPIC_API_KEYが設定されていません",
|
||||
"详见项目主README.md": "詳細はプロジェクトのメインREADME.mdを参照してください",
|
||||
"使用": "使用する",
|
||||
"P.S. 其他可用的模型还包括": "P.S. 其他可用的模型还包括",
|
||||
"保证括号正确": "保证括号正确",
|
||||
"或代理节点": "或代理节点",
|
||||
"整理结果为压缩包": "整理结果为压缩包",
|
||||
"实时音频采集": "实时音频采集",
|
||||
"获取回复": "获取回复",
|
||||
"插件可读取“输入区”文本/路径作为参数": "插件可读取“输入区”文本/路径作为参数",
|
||||
"请讲话": "请讲话",
|
||||
"将文件复制一份到下载区": "将文件复制一份到下载区",
|
||||
"from crazy_functions.虚空终端 import 终端": "from crazy_functions.虚空终端 import 终端",
|
||||
"这个paper有个input命令文件名大小写错误!": "这个paper有个input命令文件名大小写错误!",
|
||||
"解除插件锁定": "解除插件锁定",
|
||||
"不能加载Claude组件": "不能加载Claude组件",
|
||||
"如果有必要": "如果有必要",
|
||||
"禁止移除或修改此警告": "禁止移除或修改此警告",
|
||||
"然后进行问答": "然后进行问答",
|
||||
"响应异常": "响应异常",
|
||||
"使用英文": "使用英文",
|
||||
"add gpt task 创建子线程请求gpt": "add gpt task 创建子线程请求gpt",
|
||||
"实际得到格式": "实际得到格式",
|
||||
"请继续分析其他源代码": "请继续分析其他源代码",
|
||||
"”的主要内容": "”的主要内容",
|
||||
"防止proxies单独起作用": "防止proxies单独起作用",
|
||||
"临时地激活代理网络": "临时地激活代理网络",
|
||||
"屏蔽空行和太短的句子": "屏蔽空行和太短的句子",
|
||||
"把某个路径下所有文件压缩": "把某个路径下所有文件压缩",
|
||||
"您需要首先调用构建知识库": "您需要首先调用构建知识库",
|
||||
"翻译-": "翻译-",
|
||||
"Newbing 请求失败": "Newbing 请求失败",
|
||||
"次编译": "次编译",
|
||||
"后缀": "后缀",
|
||||
"文本碎片重组为完整的tex片段": "文本碎片重组为完整的tex片段",
|
||||
"待注入的知识库名称id": "待注入的知识库名称id",
|
||||
"消耗时间的函数": "消耗时间的函数",
|
||||
"You are associated with a deactivated account. OpenAI以账户失效为由": "You are associated with a deactivated account. OpenAI以账户失效为由",
|
||||
"成功啦": "成功啦",
|
||||
"音频文件的路径": "音频文件的路径",
|
||||
"英文Latex项目全文纠错": "英文Latex项目全文纠错",
|
||||
"将子线程的gpt结果写入chatbot": "将子线程的gpt结果写入chatbot",
|
||||
"开始最终总结": "开始最终总结",
|
||||
"调用": "调用",
|
||||
"正在锁定插件": "正在锁定插件",
|
||||
"记住当前的label": "记住当前的label",
|
||||
"根据自然语言执行插件命令": "根据自然语言执行插件命令",
|
||||
"response中会携带traceback报错信息": "response中会携带traceback报错信息",
|
||||
"避免多用户干扰": "避免多用户干扰",
|
||||
"顺利完成": "顺利完成",
|
||||
"详情见https": "详情见https",
|
||||
"清空label": "ラベルをクリアする",
|
||||
"这需要一段时间计算": "これには時間がかかります",
|
||||
"找不到": "見つかりません",
|
||||
"消耗大量的内存": "大量のメモリを消費する",
|
||||
"安装方法https": "インストール方法https",
|
||||
"为发送请求做准备": "リクエストの準備をする",
|
||||
"第1次尝试": "1回目の試み",
|
||||
"检查结果": "結果をチェックする",
|
||||
"精细切分latex文件": "LaTeXファイルを細かく分割する",
|
||||
"api2d等请求源": "api2dなどのリクエストソース",
|
||||
"填入你亲手写的部署名": "あなたが手書きしたデプロイ名を入力してください",
|
||||
"给出指令": "指示を与える",
|
||||
"请问什么是质子": "プロトンとは何ですか",
|
||||
"请直接去该路径下取回翻译结果": "直接そのパスに移動して翻訳結果を取得してください",
|
||||
"等待Claude回复的片段": "Claudeの返信を待っているフラグメント",
|
||||
"Latex没有安装": "LaTeXがインストールされていません",
|
||||
"文档越长耗时越长": "ドキュメントが長いほど時間がかかります",
|
||||
"没有阿里云语音识别APPKEY和TOKEN": "阿里雲の音声認識のAPPKEYとTOKENがありません",
|
||||
"分析结果": "結果を分析する",
|
||||
"请立即终止程序": "プログラムを即座に終了してください",
|
||||
"正在尝试自动安装": "自動インストールを試みています",
|
||||
"请直接提交即可": "直接提出してください",
|
||||
"将指定目录下的PDF文件从英文翻译成中文": "指定されたディレクトリ内のPDFファイルを英語から中国語に翻訳する",
|
||||
"请查收结果": "結果を確認してください",
|
||||
"上下布局": "上下布局",
|
||||
"此处可以输入解析提示": "此处可以输入解析提示",
|
||||
"前面是中文逗号": "前面是中文逗号",
|
||||
"的依赖": "的依赖",
|
||||
"材料如下": "材料如下",
|
||||
"欢迎加REAME中的QQ联系开发者": "欢迎加REAME中的QQ联系开发者",
|
||||
"开始下载": "開始ダウンロード",
|
||||
"100字以内": "100文字以内",
|
||||
"创建request": "リクエストの作成",
|
||||
"创建存储切割音频的文件夹": "切り取られた音声を保存するフォルダの作成",
|
||||
"⭐主进程执行": "⭐メインプロセスの実行",
|
||||
"音频解析结果": "音声解析結果",
|
||||
"Your account is not active. OpenAI以账户失效为由": "アカウントがアクティブではありません。OpenAIはアカウントの無効化を理由にしています",
|
||||
"虽然PDF生成失败了": "PDFの生成に失敗しました",
|
||||
"如果这里报错": "ここでエラーが発生した場合",
|
||||
"前面是中文冒号": "前面は中国語のコロンです",
|
||||
"SummarizeAudioVideo内容": "SummarizeAudioVideoの内容",
|
||||
"openai的官方KEY需要伴随组织编码": "openaiの公式KEYは組織コードと一緒に必要です",
|
||||
"是本次输入": "これは今回の入力です",
|
||||
"色彩主体": "色彩の主体",
|
||||
"Markdown翻译": "Markdownの翻訳",
|
||||
"会被加在你的输入之后": "あなたの入力の後に追加されます",
|
||||
"失败啦": "失敗しました",
|
||||
"每个切割音频片段的时长": "各切り取り音声の長さ",
|
||||
"拆分过长的latex片段": "原始文本",
|
||||
"待提取的知识库名称id": "原始文本",
|
||||
"在这里放一些网上搜集的demo": "原始文本",
|
||||
"环境变量配置格式见docker-compose.yml": "原始文本",
|
||||
"Claude组件初始化成功": "原始文本",
|
||||
"尚未加载": "原始文本",
|
||||
"等待Claude响应": "原始文本",
|
||||
"重组": "原始文本",
|
||||
"将文件添加到chatbot cookie中": "原始文本",
|
||||
"回答完问题后": "原始文本",
|
||||
"将根据报错信息修正tex源文件并重试": "原始文本",
|
||||
"是否在触发时清除历史": "原始文本",
|
||||
"尝试执行Latex指令失败": "原始文本",
|
||||
"默认 True": "原始文本",
|
||||
"文本碎片重组为完整的tex文件": "原始文本",
|
||||
"注意事项": "原始文本",
|
||||
"您接下来不能再使用其他插件了": "原始文本",
|
||||
"属性": "原始文本",
|
||||
"正在编译PDF文档": "原始文本",
|
||||
"提取视频中的音频": "原始文本",
|
||||
"正在同时咨询ChatGPT和ChatGLM……": "原始文本",
|
||||
"Chuanhu-Small-and-Beautiful主题": "原始文本",
|
||||
"版权归原文作者所有": "原始文本",
|
||||
"如果程序停顿5分钟以上": "原始文本",
|
||||
"请输入要翻译成哪种语言": "日本語",
|
||||
"以秒为单位": "秒単位で",
|
||||
"请以以下方式load模型!!!": "以下の方法でモデルをロードしてください!!!",
|
||||
"使用时": "使用時",
|
||||
"对这个人外貌、身处的环境、内心世界、人设进行描写": "この人の外見、環境、内面世界、キャラクターを描写する",
|
||||
"例如翻译、解释代码、润色等等": "例えば翻訳、コードの説明、修正など",
|
||||
"多线程Demo": "マルチスレッドデモ",
|
||||
"不能正常加载": "正常にロードできません",
|
||||
"还原部分原文": "一部の元のテキストを復元する",
|
||||
"可以将自身的状态存储到cookie中": "自身の状態をcookieに保存することができます",
|
||||
"释放线程锁": "スレッドロックを解放する",
|
||||
"当前知识库内的有效文件": "現在のナレッジベース内の有効なファイル",
|
||||
"也是可读的": "読み取り可能です",
|
||||
"等待ChatGLMFT响应中": "ChatGLMFTの応答を待っています",
|
||||
"输入 stop 以终止对话": "stopを入力して対話を終了します",
|
||||
"对整个Latex项目进行纠错": "全体のLatexプロジェクトを修正する",
|
||||
"报错信息": "エラーメッセージ",
|
||||
"下载pdf文件未成功": "PDFファイルのダウンロードに失敗しました",
|
||||
"正在加载Claude组件": "Claudeコンポーネントを読み込んでいます",
|
||||
"格式": "フォーマット",
|
||||
"Claude响应缓慢": "Claudeの応答が遅い",
|
||||
"该选项即将被弃用": "このオプションはまもなく廃止されます",
|
||||
"正常状态": "正常な状態",
|
||||
"中文Bing版": "中国語Bing版",
|
||||
"代理网络配置": "プロキシネットワークの設定",
|
||||
"Openai 限制免费用户每分钟20次请求": "Openaiは無料ユーザーに対して1分間に20回のリクエスト制限を設けています",
|
||||
"gpt写的": "gptで書かれた",
|
||||
"向已打开的频道发送一条文本消息": "既に開いているチャンネルにテキストメッセージを送信する",
|
||||
"缺少ChatGLMFT的依赖": "ChatGLMFTの依存関係が不足しています",
|
||||
"注意目前不能多人同时调用Claude接口": "現在、複数の人が同時にClaudeインターフェースを呼び出すことはできません",
|
||||
"或者不在环境变量PATH中": "または環境変数PATHに存在しません",
|
||||
"提问吧! 但注意": "質問してください!ただし注意してください",
|
||||
"因此选择GenerateImage函数": "したがって、GenerateImage関数を選択します",
|
||||
"无法找到一个主Tex文件": "メインのTexファイルが見つかりません",
|
||||
"转化PDF编译已经成功": "PDF変換コンパイルが成功しました",
|
||||
"因为在同一个频道里存在多人使用时历史消息渗透问题": "同じチャンネルで複数の人が使用する場合、過去のメッセージが漏洩する問題があります",
|
||||
"SlackClient类用于与Slack API进行交互": "SlackClientクラスはSlack APIとのインタラクションに使用されます",
|
||||
"如果存在调试缓存文件": "デバッグキャッシュファイルが存在する場合",
|
||||
"举例": "例を挙げる",
|
||||
"无需填写": "記入する必要はありません",
|
||||
"配置教程&视频教程": "設定チュートリアル&ビデオチュートリアル",
|
||||
"最后一步处理": "最後のステップの処理",
|
||||
"定位主Latex文件": "メインのLatexファイルを特定する",
|
||||
"暂不提交": "一時的に提出しない",
|
||||
"由于最为关键的转化PDF编译失败": "最も重要なPDF変換コンパイルが失敗したため",
|
||||
"用第二人称": "第二人称を使用する",
|
||||
"例如 RoPlZrM88DnAFkZK": "例えば RoPlZrM88DnAFkZK",
|
||||
"没有设置ANTHROPIC_API_KEY选项": "ANTHROPIC_API_KEYオプションが設定されていません",
|
||||
"找不到任何.tex文件": "テキストの翻訳",
|
||||
"请您不要删除或修改这行警告": "テキストの翻訳",
|
||||
"只有第二步成功": "テキストの翻訳",
|
||||
"调用Claude时": "テキストの翻訳",
|
||||
"输入 clear 以清空对话历史": "テキストの翻訳",
|
||||
"= 2 通过一些Latex模板中常见": "テキストの翻訳",
|
||||
"没给定指令": "テキストの翻訳",
|
||||
"还原原文": "テキストの翻訳",
|
||||
"自定义API KEY格式": "テキストの翻訳",
|
||||
"防止丢失最后一条消息": "テキストの翻訳",
|
||||
"方法": "テキストの翻訳",
|
||||
"压缩包": "テキストの翻訳",
|
||||
"对各个llm模型进行单元测试": "テキストの翻訳",
|
||||
"导入依赖失败": "テキストの翻訳",
|
||||
"详情信息见requirements.txt": "テキストの翻訳",
|
||||
"翻译内容可靠性无保障": "テキストの翻訳",
|
||||
"刷新页面即可以退出KnowledgeBaseQuestionAnswer模式": "テキストの翻訳",
|
||||
"上传本地文件/压缩包供函数插件调用": "テキストの翻訳",
|
||||
"循环监听已打开频道的消息": "テキストの翻訳",
|
||||
"一个包含所有切割音频片段文件路径的列表": "テキストの翻訳",
|
||||
"检测到arxiv文档连接": "テキストの翻訳",
|
||||
"P.S. 顺便把CTEX塞进去以支持中文": "テキストの翻訳",
|
||||
"后面是英文冒号": "テキストの翻訳",
|
||||
"上传文件自动修正路径": "テキストの翻訳",
|
||||
"实现消息发送、接收等功能": "メッセージの送受信などの機能を実現する",
|
||||
"改变输入参数的顺序与结构": "入力パラメータの順序と構造を変更する",
|
||||
"正在精细切分latex文件": "LaTeXファイルを細かく分割しています",
|
||||
"读取文件": "ファイルを読み込んでいます"
|
||||
}
|
||||
87
docs/translate_std.json
普通文件
@@ -0,0 +1,87 @@
|
||||
{
|
||||
"解析JupyterNotebook": "ParsingJupyterNotebook",
|
||||
"Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
|
||||
"联网的ChatGPT_bing版": "OnlineChatGPT_BingEdition",
|
||||
"理解PDF文档内容标准文件输入": "UnderstandPdfDocumentContentStandardFileInput",
|
||||
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
|
||||
"下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract",
|
||||
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
|
||||
"批量翻译PDF文档_多线程": "BatchTranslatePDFDocuments_MultiThreaded",
|
||||
"下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract",
|
||||
"解析一个Python项目": "ParsePythonProject",
|
||||
"解析一个Golang项目": "ParseGolangProject",
|
||||
"代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded",
|
||||
"解析一个CSharp项目": "ParsingCSharpProject",
|
||||
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
|
||||
"批量Markdown翻译": "BatchTranslateMarkdown",
|
||||
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
|
||||
"Langchain知识库": "LangchainKnowledgeBase",
|
||||
"Latex输出PDF结果": "OutputPDFFromLatex",
|
||||
"把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
|
||||
"Latex精细分解与转化": "DecomposeAndConvertLatex",
|
||||
"解析一个C项目的头文件": "ParseCProjectHeaderFiles",
|
||||
"Markdown英译中": "TranslateMarkdownFromEnglishToChinese",
|
||||
"Markdown中译英": "MarkdownChineseToEnglish",
|
||||
"数学动画生成manim": "MathematicalAnimationGenerationManim",
|
||||
"chatglm微调工具": "ChatGLMFineTuningTool",
|
||||
"解析一个Rust项目": "ParseRustProject",
|
||||
"解析一个Java项目": "ParseJavaProject",
|
||||
"联网的ChatGPT": "ChatGPTConnectedToNetwork",
|
||||
"解析任意code项目": "ParseAnyCodeProject",
|
||||
"合并小写开头的段落块": "MergeLowercaseStartingParagraphBlocks",
|
||||
"Latex英文润色": "EnglishProofreadingForLatex",
|
||||
"Latex全文润色": "FullTextProofreadingForLatex",
|
||||
"询问多个大语言模型": "InquiryMultipleLargeLanguageModels",
|
||||
"解析一个Lua项目": "ParsingLuaProject",
|
||||
"解析ipynb文件": "ParsingIpynbFiles",
|
||||
"批量总结PDF文档": "BatchSummarizePDFDocuments",
|
||||
"批量翻译PDF文档": "BatchTranslatePDFDocuments",
|
||||
"理解PDF文档内容": "UnderstandPdfDocumentContent",
|
||||
"Latex中文润色": "LatexChineseProofreading",
|
||||
"Latex英文纠错": "LatexEnglishCorrection",
|
||||
"Latex全文翻译": "LatexFullTextTranslation",
|
||||
"同时问询_指定模型": "InquireSimultaneously_SpecifiedModel",
|
||||
"批量生成函数注释": "BatchGenerateFunctionComments",
|
||||
"解析一个前端项目": "ParseFrontendProject",
|
||||
"高阶功能模板函数": "HighOrderFunctionTemplateFunctions",
|
||||
"高级功能函数模板": "AdvancedFunctionTemplate",
|
||||
"总结word文档": "SummarizingWordDocuments",
|
||||
"载入对话历史存档": "LoadConversationHistoryArchive",
|
||||
"Latex中译英": "LatexChineseToEnglish",
|
||||
"Latex英译中": "LatexEnglishToChinese",
|
||||
"连接网络回答问题": "ConnectToNetworkToAnswerQuestions",
|
||||
"交互功能模板函数": "InteractiveFunctionTemplateFunction",
|
||||
"交互功能函数模板": "InteractiveFunctionFunctionTemplate",
|
||||
"sprint亮靛": "SprintIndigo",
|
||||
"print亮黄": "PrintBrightYellow",
|
||||
"print亮绿": "PrintBrightGreen",
|
||||
"print亮红": "PrintBrightRed",
|
||||
"解析项目源代码": "ParseProjectSourceCode",
|
||||
"解析一个C项目": "ParseCProject",
|
||||
"全项目切换英文": "SwitchToEnglishForTheWholeProject",
|
||||
"谷歌检索小助手": "GoogleSearchAssistant",
|
||||
"读取知识库作答": "ReadKnowledgeArchiveAnswerQuestions",
|
||||
"print亮蓝": "PrintBrightBlue",
|
||||
"微调数据集生成": "FineTuneDatasetGeneration",
|
||||
"清理多余的空行": "CleanUpExcessBlankLines",
|
||||
"编译Latex": "CompileLatex",
|
||||
"解析Paper": "ParsePaper",
|
||||
"ipynb解释": "IpynbExplanation",
|
||||
"读文章写摘要": "ReadArticleWriteSummary",
|
||||
"生成函数注释": "GenerateFunctionComments",
|
||||
"解析项目本身": "ParseProjectItself",
|
||||
"对话历史存档": "ConversationHistoryArchive",
|
||||
"专业词汇声明": "ProfessionalTerminologyDeclaration",
|
||||
"解析docx": "ParseDocx",
|
||||
"解析源代码新": "ParsingSourceCodeNew",
|
||||
"总结音视频": "SummaryAudioVideo",
|
||||
"知识库问答": "UpdateKnowledgeArchive",
|
||||
"多文件润色": "ProofreadMultipleFiles",
|
||||
"多文件翻译": "TranslateMultipleFiles",
|
||||
"解析PDF": "ParsePDF",
|
||||
"同时问询": "SimultaneousInquiry",
|
||||
"图片生成": "ImageGeneration",
|
||||
"动画生成": "AnimationGeneration",
|
||||
"语音助手": "VoiceAssistant",
|
||||
"启动微调": "StartFineTuning"
|
||||
}
|
||||
@@ -2213,5 +2213,66 @@
|
||||
"“喂狗”": "“喂狗”",
|
||||
"第4步": "第4步",
|
||||
"退出": "退出",
|
||||
"使用 Unsplash API": "使用 Unsplash API"
|
||||
"使用 Unsplash API": "使用 Unsplash API",
|
||||
"非Openai官方接口返回了错误": "非Openai官方接口返回了错误",
|
||||
"用来描述你的要求": "用來描述你的要求",
|
||||
"自定义API KEY格式": "自定義API KEY格式",
|
||||
"前缀": "前綴",
|
||||
"会被加在你的输入之前": "會被加在你的輸入之前",
|
||||
"api2d等请求源": "api2d等請求源",
|
||||
"高危设置! 常规情况下不要修改! 通过修改此设置": "高危設置!常規情況下不要修改!通過修改此設置",
|
||||
"即将编译PDF": "即將編譯PDF",
|
||||
"默认 secondary": "默認 secondary",
|
||||
"正在从github下载资源": "正在從github下載資源",
|
||||
"响应异常": "響應異常",
|
||||
"我好!": "我好!",
|
||||
"无需填写": "無需填寫",
|
||||
"缺少": "缺少",
|
||||
"请问什么是质子": "請問什麼是質子",
|
||||
"如果要使用": "如果要使用",
|
||||
"重组": "重組",
|
||||
"一个单实例装饰器": "一個單實例裝飾器",
|
||||
"的参数!": "的參數!",
|
||||
"🏃♂️🏃♂️🏃♂️ 子进程执行": "🏃♂️🏃♂️🏃♂️ 子進程執行",
|
||||
"失败时": "失敗時",
|
||||
"没有设置ANTHROPIC_API_KEY选项": "沒有設置ANTHROPIC_API_KEY選項",
|
||||
"并设置参数": "並設置參數",
|
||||
"格式": "格式",
|
||||
"按钮是否可见": "按鈕是否可見",
|
||||
"即可见": "即可見",
|
||||
"创建request": "創建request",
|
||||
"的依赖": "的依賴",
|
||||
"⭐主进程执行": "⭐主進程執行",
|
||||
"最后一步处理": "最後一步處理",
|
||||
"没有设置ANTHROPIC_API_KEY": "沒有設置ANTHROPIC_API_KEY",
|
||||
"的参数": "的參數",
|
||||
"逆转出错的段落": "逆轉出錯的段落",
|
||||
"本项目现已支持OpenAI和Azure的api-key": "本項目現已支持OpenAI和Azure的api-key",
|
||||
"前者是API2D的结束条件": "前者是API2D的結束條件",
|
||||
"增强稳健性": "增強穩健性",
|
||||
"消耗大量的内存": "消耗大量的內存",
|
||||
"您的 API_KEY 不满足任何一种已知的密钥格式": "您的API_KEY不滿足任何一種已知的密鑰格式",
|
||||
"⭐单线程方法": "⭐單線程方法",
|
||||
"是否在触发时清除历史": "是否在觸發時清除歷史",
|
||||
"⭐多线程方法": "多線程方法",
|
||||
"不能正常加载": "無法正常加載",
|
||||
"举例": "舉例",
|
||||
"即不处理之前的对话历史": "即不處理之前的對話歷史",
|
||||
"尚未加载": "尚未加載",
|
||||
"防止proxies单独起作用": "防止proxies單獨起作用",
|
||||
"默认 False": "默認 False",
|
||||
"检查USE_PROXY": "檢查USE_PROXY",
|
||||
"响应中": "響應中",
|
||||
"扭转的范围": "扭轉的範圍",
|
||||
"后缀": "後綴",
|
||||
"调用": "調用",
|
||||
"创建AcsClient实例": "創建AcsClient實例",
|
||||
"安装": "安裝",
|
||||
"会被加在你的输入之后": "會被加在你的輸入之後",
|
||||
"配合前缀可以把你的输入内容用引号圈起来": "配合前綴可以把你的輸入內容用引號圈起來",
|
||||
"例如翻译、解释代码、润色等等": "例如翻譯、解釋代碼、潤色等等",
|
||||
"后者是OPENAI的结束条件": "後者是OPENAI的結束條件",
|
||||
"标注节点的行数范围": "標註節點的行數範圍",
|
||||
"默认 True": "默認 True",
|
||||
"将两个PDF拼接": "將兩個PDF拼接"
|
||||
}
|
||||
@@ -28,6 +28,16 @@ ALIYUN_APPKEY = "RoPlZrM88DnAFkZK" # 此appkey已经失效
|
||||
Reference: https://help.aliyun.com/document_detail/450255.html
|
||||
You first need an Alibaba Cloud developer account. After logging in, enable the Intelligent Speech Interaction (智能语音交互) service to obtain a free token, then create a project under "All Projects" to obtain an appkey.
|
||||
|
||||
- Advanced feature
|
||||
Additionally fill in ALIYUN_ACCESSKEY and ALIYUN_SECRET so that ALIYUN_TOKEN can be fetched automatically:
|
||||
```
|
||||
ALIYUN_APPKEY = "RoP1ZrM84DnAFkZK"
|
||||
ALIYUN_TOKEN = ""
|
||||
ALIYUN_ACCESSKEY = "LTAI5q6BrFUzoRXVGUWnekh1"
|
||||
ALIYUN_SECRET = "eHmI20AVWIaQZ0CiTD2bGQVsaP9i68"
|
||||
```
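As a minimal sketch of how the automatic token retrieval could work, the snippet below follows Alibaba Cloud's documented `CreateToken` action of the speech meta service; the region, endpoint and helper name are assumptions for illustration, not the project's own implementation.

```python
# A minimal sketch (not gpt-academic's exact code) of fetching ALIYUN_TOKEN
# automatically from ALIYUN_ACCESSKEY / ALIYUN_SECRET via the CreateToken
# action of the Intelligent Speech Interaction meta service.
import json
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

def fetch_aliyun_token(access_key: str, secret: str, region: str = "cn-shanghai") -> str:
    client = AcsClient(access_key, secret, region)
    request = CommonRequest()
    request.set_method("POST")
    request.set_domain(f"nls-meta.{region}.aliyuncs.com")   # assumed default endpoint
    request.set_version("2019-02-28")
    request.set_action_name("CreateToken")
    response = json.loads(client.do_action_with_exception(request))
    return response["Token"]["Id"]                           # value to use as ALIYUN_TOKEN
```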
|
||||
|
||||
|
||||
## 3. Launch
|
||||
|
||||
Launch gpt-academic with `python main.py`
|
||||
@@ -48,7 +58,7 @@ III `[把特殊软件(如腾讯会议)的外放声音用VoiceMeeter截留]`
|
||||
|
||||
VI When switching between the two audio monitoring modes, you must refresh the page for the change to take effect.
|
||||
|
||||
VII Known pitfall: the recording feature cannot be enabled when the app runs on a non-localhost address without https; see https://blog.csdn.net/weixin_39461487/article/details/109594434
|
||||
|
||||
## 5. Click "实时音频采集" (real-time audio capture), or another audio interaction feature, in the function plugin area
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -3,16 +3,18 @@
|
||||
|
||||
|
||||
Usage:
|
||||
1. modify LANG
|
||||
1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider)
|
||||
|
||||
2. modify LANG (below ↓)
|
||||
LANG = "English"
|
||||
|
||||
2. modify TransPrompt
|
||||
3. modify TransPrompt (below ↓)
|
||||
TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
|
||||
|
||||
3. Run `python multi_language.py`.
|
||||
4. Run `python multi_language.py`.
|
||||
Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
|
||||
|
||||
4. Find the translated program in `multi-language\English\*`
|
||||
5. Find the translated program in `multi-language\English\*`
|
||||
|
||||
P.S.
|
||||
|
||||
@@ -286,6 +288,7 @@ def trans_json(word_to_translate, language, special=False):
|
||||
|
||||
|
||||
def step_1_core_key_translate():
|
||||
LANG_STD = 'std'
|
||||
def extract_chinese_characters(file_path):
|
||||
syntax = []
|
||||
with open(file_path, 'r', encoding='utf-8') as f:
|
||||
@@ -325,15 +328,15 @@ def step_1_core_key_translate():
|
||||
for d in chinese_core_keys:
|
||||
if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d)
|
||||
need_translate = []
|
||||
cached_translation = read_map_from_json(language=LANG)
|
||||
cached_translation = read_map_from_json(language=LANG_STD)
|
||||
cached_translation_keys = list(cached_translation.keys())
|
||||
for d in chinese_core_keys_norepeat:
|
||||
if d not in cached_translation_keys:
|
||||
need_translate.append(d)
|
||||
|
||||
need_translate_mapping = trans(need_translate, language=LANG, special=True)
|
||||
map_to_json(need_translate_mapping, language=LANG)
|
||||
cached_translation = read_map_from_json(language=LANG)
|
||||
need_translate_mapping = trans(need_translate, language=LANG_STD, special=True)
|
||||
map_to_json(need_translate_mapping, language=LANG_STD)
|
||||
cached_translation = read_map_from_json(language=LANG_STD)
|
||||
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
|
||||
|
||||
chinese_core_keys_norepeat_mapping = {}
|
||||
|
||||
@@ -19,9 +19,6 @@ from .bridge_chatgpt import predict as chatgpt_ui
|
||||
from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
|
||||
from .bridge_chatglm import predict as chatglm_ui
|
||||
|
||||
# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
|
||||
# from .bridge_tgui import predict as tgui_ui
|
||||
|
||||
colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
|
||||
|
||||
class LazyloadTiktoken(object):
|
||||
@@ -71,6 +68,10 @@ get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_spe
|
||||
get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=()))
|
||||
|
||||
|
||||
# 开始初始化模型
|
||||
AVAIL_LLM_MODELS, LLM_MODEL = get_conf("AVAIL_LLM_MODELS", "LLM_MODEL")
|
||||
AVAIL_LLM_MODELS = AVAIL_LLM_MODELS + [LLM_MODEL]
|
||||
# -=-=-=-=-=-=- 以下这部分是最早加入的最稳定的模型 -=-=-=-=-=-=-
|
||||
model_info = {
|
||||
# openai
|
||||
"gpt-3.5-turbo": {
|
||||
@@ -167,9 +168,7 @@ model_info = {
|
||||
|
||||
}
|
||||
|
||||
|
||||
AVAIL_LLM_MODELS, LLM_MODEL = get_conf("AVAIL_LLM_MODELS", "LLM_MODEL")
|
||||
AVAIL_LLM_MODELS = AVAIL_LLM_MODELS + [LLM_MODEL]
|
||||
# -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=-
|
||||
if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
|
||||
from .bridge_claude import predict_no_ui_long_connection as claude_noui
|
||||
from .bridge_claude import predict as claude_ui
|
||||
@@ -322,6 +321,72 @@ if "internlm" in AVAIL_LLM_MODELS:
|
||||
})
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
if "chatglm_onnx" in AVAIL_LLM_MODELS:
|
||||
try:
|
||||
from .bridge_chatglmonnx import predict_no_ui_long_connection as chatglm_onnx_noui
|
||||
from .bridge_chatglmonnx import predict as chatglm_onnx_ui
|
||||
model_info.update({
|
||||
"chatglm_onnx": {
|
||||
"fn_with_ui": chatglm_onnx_ui,
|
||||
"fn_without_ui": chatglm_onnx_noui,
|
||||
"endpoint": None,
|
||||
"max_token": 4096,
|
||||
"tokenizer": tokenizer_gpt35,
|
||||
"token_cnt": get_token_num_gpt35,
|
||||
}
|
||||
})
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
if "qwen" in AVAIL_LLM_MODELS:
|
||||
try:
|
||||
from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
|
||||
from .bridge_qwen import predict as qwen_ui
|
||||
model_info.update({
|
||||
"qwen": {
|
||||
"fn_with_ui": qwen_ui,
|
||||
"fn_without_ui": qwen_noui,
|
||||
"endpoint": None,
|
||||
"max_token": 4096,
|
||||
"tokenizer": tokenizer_gpt35,
|
||||
"token_cnt": get_token_num_gpt35,
|
||||
}
|
||||
})
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
if "chatgpt_website" in AVAIL_LLM_MODELS: # 接入一些逆向工程https://github.com/acheong08/ChatGPT-to-API/
|
||||
try:
|
||||
from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui
|
||||
from .bridge_chatgpt_website import predict as chatgpt_website_ui
|
||||
model_info.update({
|
||||
"chatgpt_website": {
|
||||
"fn_with_ui": chatgpt_website_ui,
|
||||
"fn_without_ui": chatgpt_website_noui,
|
||||
"endpoint": None,
|
||||
"max_token": 4096,
|
||||
"tokenizer": tokenizer_gpt35,
|
||||
"token_cnt": get_token_num_gpt35,
|
||||
}
|
||||
})
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
if "spark" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
|
||||
try:
|
||||
from .bridge_spark import predict_no_ui_long_connection as spark_noui
|
||||
from .bridge_spark import predict as spark_ui
|
||||
model_info.update({
|
||||
"spark": {
|
||||
"fn_with_ui": spark_ui,
|
||||
"fn_without_ui": spark_noui,
|
||||
"endpoint": None,
|
||||
"max_token": 4096,
|
||||
"tokenizer": tokenizer_gpt35,
|
||||
"token_cnt": get_token_num_gpt35,
|
||||
}
|
||||
})
|
||||
except:
|
||||
print(trimmed_format_exc())
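Every optional backend above is registered through the same guarded pattern: import the bridge module, add an entry to `model_info`, and swallow import errors so that a missing optional dependency cannot break startup. As a minimal sketch, a hypothetical extra backend (the name `mymodel` and the module `bridge_mymodel` are illustrative, not part of the project) would be wired in the same way:

```python
# Hypothetical sketch following the registration pattern above. Only models
# listed in AVAIL_LLM_MODELS (config.py) are actually wired in at startup.
if "mymodel" in AVAIL_LLM_MODELS:
    try:
        from .bridge_mymodel import predict_no_ui_long_connection as mymodel_noui
        from .bridge_mymodel import predict as mymodel_ui
        model_info.update({
            "mymodel": {
                "fn_with_ui": mymodel_ui,          # streaming, UI-bound entry point
                "fn_without_ui": mymodel_noui,     # blocking, thread-safe entry point
                "endpoint": None,                  # local model, no HTTP endpoint
                "max_token": 4096,
                "tokenizer": tokenizer_gpt35,      # reuse the gpt-3.5 tokenizer for counting
                "token_cnt": get_token_num_gpt35,
            }
        })
    except:
        print(trimmed_format_exc())                # a failed registration must not break startup
```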
|
||||
|
||||
|
||||
|
||||
def LLM_CATCH_EXCEPTION(f):
|
||||
"""
|
||||
@@ -362,7 +427,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
|
||||
method = model_info[model]["fn_without_ui"]
|
||||
return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
|
||||
else:
|
||||
# 如果同时询问多个大语言模型:
|
||||
|
||||
# 如果同时询问多个大语言模型,这个稍微啰嗦一点,但思路相同,您不必读这个else分支
|
||||
executor = ThreadPoolExecutor(max_workers=4)
|
||||
models = model.split('&')
|
||||
n_model = len(models)
|
||||
|
||||
@@ -144,11 +144,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
|
||||
@@ -185,11 +185,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
|
||||
@@ -0,0 +1,73 @@
|
||||
model_name = "ChatGLM-ONNX"
|
||||
cmd_to_install = "`pip install -r request_llm/requirements_chatglm_onnx.txt`"
|
||||
|
||||
|
||||
from transformers import AutoModel, AutoTokenizer
|
||||
import time
|
||||
import threading
|
||||
import importlib
|
||||
from toolbox import update_ui, get_conf
|
||||
from multiprocessing import Process, Pipe
|
||||
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
|
||||
|
||||
from .chatglmoonx import ChatGLMModel, chat_template
|
||||
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Local Model
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
@SingletonLocalLLM
|
||||
class GetONNXGLMHandle(LocalLLMHandle):
|
||||
|
||||
def load_model_info(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
self.model_name = model_name
|
||||
self.cmd_to_install = cmd_to_install
|
||||
|
||||
def load_model_and_tokenizer(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
import os, glob
|
||||
if not len(glob.glob("./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/*.bin")) >= 7: # 该模型有七个 bin 文件
|
||||
from huggingface_hub import snapshot_download
|
||||
snapshot_download(repo_id="K024/ChatGLM-6b-onnx-u8s8", local_dir="./request_llm/ChatGLM-6b-onnx-u8s8")
|
||||
def create_model():
|
||||
return ChatGLMModel(
|
||||
tokenizer_path = "./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/sentencepiece.model",
|
||||
onnx_model_path = "./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/chatglm-6b-int8.onnx"
|
||||
)
|
||||
self._model = create_model()
|
||||
return self._model, None
|
||||
|
||||
def llm_stream_generator(self, **kwargs):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
def adaptor(kwargs):
|
||||
query = kwargs['query']
|
||||
max_length = kwargs['max_length']
|
||||
top_p = kwargs['top_p']
|
||||
temperature = kwargs['temperature']
|
||||
history = kwargs['history']
|
||||
return query, max_length, top_p, temperature, history
|
||||
|
||||
query, max_length, top_p, temperature, history = adaptor(kwargs)
|
||||
|
||||
prompt = chat_template(history, query)
|
||||
for answer in self._model.generate_iterate(
|
||||
prompt,
|
||||
max_generated_tokens=max_length,
|
||||
top_k=1,
|
||||
top_p=top_p,
|
||||
temperature=temperature,
|
||||
):
|
||||
yield answer
|
||||
|
||||
def try_to_import_special_deps(self, **kwargs):
|
||||
# import something that will raise error if the user does not install requirement_*.txt
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
pass
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 GPT-Academic Interface
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
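This last line is the whole integration surface: `get_local_llm_predict_fns` turns the handle class into the `predict` / `predict_no_ui_long_connection` pair that `bridge_all.py` registers. A hypothetical skeleton for reusing the same scaffolding with another local model follows; the class, module and model names are illustrative assumptions, not existing code.

```python
# Hypothetical skeleton for wiring another local model through the same helper.
# Only the overrides below are required by LocalLLMHandle; subprocess management,
# streaming and locking are inherited.
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM

model_name = "MyLocalModel"                 # hypothetical
cmd_to_install = "`pip install mymodel`"    # hypothetical

@SingletonLocalLLM
class GetMyModelHandle(LocalLLMHandle):
    def load_model_info(self):
        # runs in the child process
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # runs in the child process; must return (model, tokenizer)
        from mymodel import load            # hypothetical import
        self._model = load()
        return self._model, None

    def llm_stream_generator(self, **kwargs):
        # runs in the child process; yield progressively longer answers
        for partial in self._model.stream(kwargs['query'], history=kwargs['history']):
            yield partial

    def try_to_import_special_deps(self, **kwargs):
        import mymodel                      # raises if the extra requirement is missing

predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetMyModelHandle, model_name)
```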
|
||||
@@ -129,11 +129,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
raw_input = inputs
|
||||
logging.info(f'[raw_input] {raw_input}')
|
||||
|
||||
@@ -0,0 +1,297 @@
|
||||
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
|
||||
|
||||
"""
|
||||
该文件中主要包含三个函数
|
||||
|
||||
不具备多线程能力的函数:
|
||||
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
|
||||
|
||||
具备多线程调用能力的函数
|
||||
2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑
|
||||
3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程
|
||||
"""
|
||||
|
||||
import json
|
||||
import time
|
||||
import gradio as gr
|
||||
import logging
|
||||
import traceback
|
||||
import requests
|
||||
import importlib
|
||||
|
||||
# config_private.py放自己的秘密如API和代理网址
|
||||
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
|
||||
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
|
||||
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG = \
|
||||
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG')
|
||||
|
||||
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
|
||||
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
|
||||
|
||||
def get_full_error(chunk, stream_response):
|
||||
"""
|
||||
获取完整的从Openai返回的报错
|
||||
"""
|
||||
while True:
|
||||
try:
|
||||
chunk += next(stream_response)
|
||||
except:
|
||||
break
|
||||
return chunk
|
||||
|
||||
|
||||
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
|
||||
"""
|
||||
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
|
||||
inputs:
|
||||
是本次问询的输入
|
||||
sys_prompt:
|
||||
系统静默prompt
|
||||
llm_kwargs:
|
||||
chatGPT的内部调优参数
|
||||
history:
|
||||
是之前的对话列表
|
||||
observe_window = None:
|
||||
用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
|
||||
"""
|
||||
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
|
||||
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
|
||||
retry = 0
|
||||
while True:
|
||||
try:
|
||||
# make a POST request to the API endpoint, stream=False
|
||||
from .bridge_all import model_info
|
||||
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
|
||||
response = requests.post(endpoint, headers=headers, proxies=proxies,
|
||||
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
|
||||
except requests.exceptions.ReadTimeout as e:
|
||||
retry += 1
|
||||
traceback.print_exc()
|
||||
if retry > MAX_RETRY: raise TimeoutError
|
||||
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
|
||||
|
||||
stream_response = response.iter_lines()
|
||||
result = ''
|
||||
while True:
|
||||
try: chunk = next(stream_response).decode()
|
||||
except StopIteration:
|
||||
break
|
||||
except requests.exceptions.ConnectionError:
|
||||
chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
|
||||
if len(chunk)==0: continue
|
||||
if not chunk.startswith('data:'):
|
||||
error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
|
||||
if "reduce the length" in error_msg:
|
||||
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
|
||||
else:
|
||||
raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
|
||||
if ('data: [DONE]' in chunk): break # api2d 正常完成
|
||||
json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
|
||||
delta = json_data["delta"]
|
||||
if len(delta) == 0: break
|
||||
if "role" in delta: continue
|
||||
if "content" in delta:
|
||||
result += delta["content"]
|
||||
if not console_slience: print(delta["content"], end='')
|
||||
if observe_window is not None:
|
||||
# 观测窗,把已经获取的数据显示出去
|
||||
if len(observe_window) >= 1: observe_window[0] += delta["content"]
|
||||
# 看门狗,如果超过期限没有喂狗,则终止
|
||||
if len(observe_window) >= 2:
|
||||
if (time.time()-observe_window[1]) > watch_dog_patience:
|
||||
raise RuntimeError("用户取消了程序。")
|
||||
else: raise RuntimeError("意外Json结构:"+delta)
|
||||
if json_data['finish_reason'] == 'content_filter':
|
||||
raise RuntimeError("由于提问含不合规内容被Azure过滤。")
|
||||
if json_data['finish_reason'] == 'length':
|
||||
raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
|
||||
return result
|
||||
|
||||
|
||||
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
|
||||
"""
|
||||
发送至chatGPT,流式获取输出。
|
||||
用于基础的对话功能。
|
||||
inputs 是本次问询的输入
|
||||
top_p, temperature是chatGPT的内部调优参数
|
||||
history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
|
||||
chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
|
||||
additional_fn代表点击的哪个按钮,按钮见functional.py
|
||||
"""
|
||||
if is_any_api_key(inputs):
|
||||
chatbot._cookies['api_key'] = inputs
|
||||
chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
|
||||
return
|
||||
elif not is_any_api_key(chatbot._cookies['api_key']):
|
||||
chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。"))
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
raw_input = inputs
|
||||
logging.info(f'[raw_input] {raw_input}')
|
||||
chatbot.append((inputs, ""))
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
|
||||
|
||||
try:
|
||||
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
|
||||
except RuntimeError as e:
|
||||
chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
|
||||
return
|
||||
|
||||
history.append(inputs); history.append("")
|
||||
|
||||
retry = 0
|
||||
while True:
|
||||
try:
|
||||
# make a POST request to the API endpoint, stream=True
|
||||
from .bridge_all import model_info
|
||||
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
|
||||
response = requests.post(endpoint, headers=headers, proxies=proxies,
|
||||
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
|
||||
except:
|
||||
retry += 1
|
||||
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
|
||||
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
|
||||
if retry > MAX_RETRY: raise TimeoutError
|
||||
|
||||
gpt_replying_buffer = ""
|
||||
|
||||
is_head_of_the_stream = True
|
||||
if stream:
|
||||
stream_response = response.iter_lines()
|
||||
while True:
|
||||
try:
|
||||
chunk = next(stream_response)
|
||||
except StopIteration:
|
||||
# 非OpenAI官方接口的出现这样的报错,OpenAI和API2D不会走这里
|
||||
chunk_decoded = chunk.decode()
|
||||
error_msg = chunk_decoded
|
||||
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="非Openai官方接口返回了错误:" + chunk.decode()) # 刷新界面
|
||||
return
|
||||
|
||||
# print(chunk.decode()[6:])
|
||||
if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()):
|
||||
# 数据流的第一帧不携带content
|
||||
is_head_of_the_stream = False; continue
|
||||
|
||||
if chunk:
|
||||
try:
|
||||
chunk_decoded = chunk.decode()
|
||||
# 前者是API2D的结束条件,后者是OPENAI的结束条件
|
||||
if 'data: [DONE]' in chunk_decoded:
|
||||
# 判定为数据流的结束,gpt_replying_buffer也写完了
|
||||
logging.info(f'[response] {gpt_replying_buffer}')
|
||||
break
|
||||
# 处理数据流的主体
|
||||
chunkjson = json.loads(chunk_decoded[6:])
|
||||
status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
|
||||
delta = chunkjson['choices'][0]["delta"]
|
||||
if "content" in delta:
|
||||
gpt_replying_buffer = gpt_replying_buffer + delta["content"]
|
||||
history[-1] = gpt_replying_buffer
|
||||
chatbot[-1] = (history[-2], history[-1])
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
|
||||
except Exception as e:
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
|
||||
chunk = get_full_error(chunk, stream_response)
|
||||
chunk_decoded = chunk.decode()
|
||||
error_msg = chunk_decoded
|
||||
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
|
||||
print(error_msg)
|
||||
return
|
||||
|
||||
def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
|
||||
from .bridge_all import model_info
|
||||
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
|
||||
if "reduce the length" in error_msg:
|
||||
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出
|
||||
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
|
||||
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
|
||||
# history = [] # 清除历史
|
||||
elif "does not exist" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
|
||||
elif "Incorrect API key" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website)
|
||||
elif "exceeded your current quota" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website)
|
||||
elif "account is not active" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website)
|
||||
elif "associated with a deactivated account" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website)
|
||||
elif "bad forward key" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
|
||||
elif "Not enough point" in error_msg:
|
||||
chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
|
||||
else:
|
||||
from toolbox import regular_txt_to_markdown
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
|
||||
return chatbot, history
|
||||
|
||||
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
|
||||
"""
|
||||
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
|
||||
"""
|
||||
if not is_any_api_key(llm_kwargs['api_key']):
|
||||
raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")
|
||||
|
||||
api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
|
||||
|
||||
headers = {
|
||||
"Content-Type": "application/json",
|
||||
"Authorization": f"Bearer {api_key}"
|
||||
}
|
||||
if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
|
||||
if llm_kwargs['llm_model'].startswith('azure-'): headers.update({"api-key": api_key})
|
||||
|
||||
conversation_cnt = len(history) // 2
|
||||
|
||||
messages = [{"role": "system", "content": system_prompt}]
|
||||
if conversation_cnt:
|
||||
for index in range(0, 2*conversation_cnt, 2):
|
||||
what_i_have_asked = {}
|
||||
what_i_have_asked["role"] = "user"
|
||||
what_i_have_asked["content"] = history[index]
|
||||
what_gpt_answer = {}
|
||||
what_gpt_answer["role"] = "assistant"
|
||||
what_gpt_answer["content"] = history[index+1]
|
||||
if what_i_have_asked["content"] != "":
|
||||
if what_gpt_answer["content"] == "": continue
|
||||
if what_gpt_answer["content"] == timeout_bot_msg: continue
|
||||
messages.append(what_i_have_asked)
|
||||
messages.append(what_gpt_answer)
|
||||
else:
|
||||
messages[-1]['content'] = what_gpt_answer['content']
|
||||
|
||||
what_i_ask_now = {}
|
||||
what_i_ask_now["role"] = "user"
|
||||
what_i_ask_now["content"] = inputs
|
||||
messages.append(what_i_ask_now)
|
||||
|
||||
payload = {
|
||||
"model": llm_kwargs['llm_model'].strip('api2d-'),
|
||||
"messages": messages,
|
||||
"temperature": llm_kwargs['temperature'], # 1.0,
|
||||
"top_p": llm_kwargs['top_p'], # 1.0,
|
||||
"n": 1,
|
||||
"stream": stream,
|
||||
"presence_penalty": 0,
|
||||
"frequency_penalty": 0,
|
||||
}
|
||||
try:
|
||||
print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
|
||||
except:
|
||||
print('输入中可能存在乱码。')
|
||||
return headers,payload
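For orientation, this is roughly the shape of the pair returned above when `history` holds one prior exchange; all concrete values below are illustrative, not taken from a real request.

```python
# Illustrative only: the approximate (headers, payload) produced by generate_payload().
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-...",              # key chosen by select_api_key()
}
payload = {
    "model": "gpt-3.5-turbo",                      # any leading "api2d-" prefix is stripped
    "messages": [
        {"role": "system",    "content": "<system_prompt>"},
        {"role": "user",      "content": "<history[0]: previous question>"},
        {"role": "assistant", "content": "<history[1]: previous answer>"},
        {"role": "user",      "content": "<inputs: current question>"},
    ],
    "temperature": 1.0,                            # llm_kwargs['temperature']
    "top_p": 1.0,                                  # llm_kwargs['top_p']
    "n": 1,
    "stream": True,
    "presence_penalty": 0,
    "frequency_penalty": 0,
}
```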
|
||||
|
||||
|
||||
@@ -116,11 +116,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
raw_input = inputs
|
||||
logging.info(f'[raw_input] {raw_input}')
|
||||
|
||||
@@ -1,23 +1,25 @@
|
||||
model_name = "InternLM"
|
||||
cmd_to_install = "`pip install -r request_llm/requirements_chatglm.txt`"
|
||||
|
||||
from transformers import AutoModel, AutoTokenizer
|
||||
import time
|
||||
import threading
|
||||
import importlib
|
||||
from toolbox import update_ui, get_conf, Singleton
|
||||
from toolbox import update_ui, get_conf
|
||||
from multiprocessing import Process, Pipe
|
||||
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
|
||||
|
||||
model_name = "InternLM"
|
||||
cmd_to_install = "`pip install ???`"
|
||||
load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Local Model Utils
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
def try_to_import_special_deps():
|
||||
import sentencepiece
|
||||
|
||||
user_prompt = "<|User|>:{user}<eoh>\n"
|
||||
robot_prompt = "<|Bot|>:{robot}<eoa>\n"
|
||||
cur_query_prompt = "<|User|>:{user}<eoh>\n<|Bot|>:"
|
||||
|
||||
|
||||
def combine_history(prompt, hist):
|
||||
user_prompt = "<|User|>:{user}<eoh>\n"
|
||||
robot_prompt = "<|Bot|>:{robot}<eoa>\n"
|
||||
cur_query_prompt = "<|User|>:{user}<eoh>\n<|Bot|>:"
|
||||
messages = hist
|
||||
total_prompt = ""
|
||||
for message in messages:
|
||||
@@ -29,24 +31,22 @@ def combine_history(prompt, hist):
|
||||
total_prompt = total_prompt + cur_query_prompt.replace("{user}", prompt)
|
||||
return total_prompt
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Local Model
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
@SingletonLocalLLM
|
||||
class GetInternlmHandle(LocalLLMHandle):
|
||||
|
||||
@Singleton
|
||||
class GetInternlmHandle(Process):
|
||||
def __init__(self):
|
||||
# ⭐主进程执行
|
||||
super().__init__(daemon=True)
|
||||
self.parent, self.child = Pipe()
|
||||
self._model = None
|
||||
self._tokenizer = None
|
||||
self.info = ""
|
||||
self.success = True
|
||||
self.check_dependency()
|
||||
self.start()
|
||||
self.threadLock = threading.Lock()
|
||||
def load_model_info(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
self.model_name = model_name
|
||||
self.cmd_to_install = cmd_to_install
|
||||
|
||||
def ready(self):
|
||||
# ⭐主进程执行
|
||||
return self._model is not None
|
||||
def try_to_import_special_deps(self, **kwargs):
|
||||
"""
|
||||
import something that will raise error if the user does not install requirement_*.txt
|
||||
"""
|
||||
import sentencepiece
|
||||
|
||||
def load_model_and_tokenizer(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
@@ -195,121 +195,8 @@ class GetInternlmHandle(Process):
|
||||
if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
|
||||
return
|
||||
|
||||
|
||||
|
||||
def check_dependency(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
try:
|
||||
try_to_import_special_deps()
|
||||
self.info = "依赖检测通过"
|
||||
self.success = True
|
||||
except:
|
||||
self.info = f"缺少{model_name}的依赖,如果要使用{model_name},除了基础的pip依赖以外,您还需要运行{cmd_to_install}安装{model_name}的依赖。"
|
||||
self.success = False
|
||||
|
||||
def run(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
# 第一次运行,加载参数
|
||||
try:
|
||||
self._model, self._tokenizer = self.load_model_and_tokenizer()
|
||||
except:
|
||||
from toolbox import trimmed_format_exc
|
||||
self.child.send(f'[Local Message] 不能正常加载{model_name}的参数.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
|
||||
raise RuntimeError(f"不能正常加载{model_name}的参数!")
|
||||
|
||||
while True:
|
||||
# 进入任务等待状态
|
||||
kwargs = self.child.recv()
|
||||
# 收到消息,开始请求
|
||||
try:
|
||||
for response_full in self.llm_stream_generator(**kwargs):
|
||||
self.child.send(response_full)
|
||||
except:
|
||||
from toolbox import trimmed_format_exc
|
||||
self.child.send(f'[Local Message] 调用{model_name}失败.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
|
||||
# 请求处理结束,开始下一个循环
|
||||
self.child.send('[Finish]')
|
||||
|
||||
def stream_chat(self, **kwargs):
|
||||
# ⭐主进程执行
|
||||
self.threadLock.acquire()
|
||||
self.parent.send(kwargs)
|
||||
while True:
|
||||
res = self.parent.recv()
|
||||
if res != '[Finish]':
|
||||
yield res
|
||||
else:
|
||||
break
|
||||
self.threadLock.release()
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 GPT-Academic
|
||||
# 🔌💻 GPT-Academic Interface
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
|
||||
"""
|
||||
⭐多线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
_llm_handle = GetInternlmHandle()
|
||||
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + _llm_handle.info
|
||||
if not _llm_handle.success:
|
||||
error = _llm_handle.info
|
||||
_llm_handle = None
|
||||
raise RuntimeError(error)
|
||||
|
||||
# chatglm 没有 sys_prompt 接口,因此把prompt加入 history
|
||||
history_feedin = []
|
||||
history_feedin.append(["What can I do?", sys_prompt])
|
||||
for i in range(len(history)//2):
|
||||
history_feedin.append([history[2*i], history[2*i+1]] )
|
||||
|
||||
watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
|
||||
response = ""
|
||||
for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
|
||||
if len(observe_window) >= 1: observe_window[0] = response
|
||||
if len(observe_window) >= 2:
|
||||
if (time.time()-observe_window[1]) > watch_dog_patience:
|
||||
raise RuntimeError("程序终止。")
|
||||
return response
|
||||
|
||||
|
||||
|
||||
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
|
||||
"""
|
||||
⭐单线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
chatbot.append((inputs, ""))
|
||||
|
||||
_llm_handle = GetInternlmHandle()
|
||||
chatbot[-1] = (inputs, load_message + "\n\n" + _llm_handle.info)
|
||||
yield from update_ui(chatbot=chatbot, history=[])
|
||||
if not _llm_handle.success:
|
||||
_llm_handle = None
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
history_feedin.append(["What can I do?", system_prompt] )
|
||||
for i in range(len(history)//2):
|
||||
history_feedin.append([history[2*i], history[2*i+1]] )
|
||||
|
||||
# 开始接收chatglm的回复
|
||||
response = f"[Local Message]: 等待{model_name}响应中 ..."
|
||||
for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
|
||||
chatbot[-1] = (inputs, response)
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
# 总结输出
|
||||
if response == f"[Local Message]: 等待{model_name}响应中 ...":
|
||||
response = f"[Local Message]: {model_name}响应异常 ..."
|
||||
history.extend([inputs, response])
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetInternlmHandle, model_name)
|
||||
@@ -154,11 +154,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
|
||||
@@ -224,11 +224,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
|
||||
@@ -224,11 +224,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
history_feedin = []
|
||||
for i in range(len(history)//2):
|
||||
|
||||
68  request_llm/bridge_qwen.py
@@ -0,0 +1,68 @@
|
||||
model_name = "Qwen"
|
||||
cmd_to_install = "`pip install -r request_llm/requirements_qwen.txt`"
|
||||
|
||||
|
||||
from transformers import AutoModel, AutoTokenizer
|
||||
import time
|
||||
import threading
|
||||
import importlib
|
||||
from toolbox import update_ui, get_conf
|
||||
from multiprocessing import Process, Pipe
|
||||
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
|
||||
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Local Model
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
@SingletonLocalLLM
|
||||
class GetONNXGLMHandle(LocalLLMHandle):
|
||||
|
||||
def load_model_info(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
self.model_name = model_name
|
||||
self.cmd_to_install = cmd_to_install
|
||||
|
||||
def load_model_and_tokenizer(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
import os, glob
|
||||
import os
|
||||
import platform
|
||||
from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
|
||||
|
||||
model_id = 'qwen/Qwen-7B-Chat'
|
||||
revision = 'v1.0.1'
|
||||
self._tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision, trust_remote_code=True)
|
||||
# use fp16
|
||||
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", revision=revision, trust_remote_code=True, fp16=True).eval()
|
||||
model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
|
||||
self._model = model
|
||||
|
||||
return self._model, self._tokenizer
|
||||
|
||||
def llm_stream_generator(self, **kwargs):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
def adaptor(kwargs):
|
||||
query = kwargs['query']
|
||||
max_length = kwargs['max_length']
|
||||
top_p = kwargs['top_p']
|
||||
temperature = kwargs['temperature']
|
||||
history = kwargs['history']
|
||||
return query, max_length, top_p, temperature, history
|
||||
|
||||
query, max_length, top_p, temperature, history = adaptor(kwargs)
|
||||
|
||||
for response in self._model.chat(self._tokenizer, query, history=history, stream=True):
|
||||
yield response
|
||||
|
||||
def try_to_import_special_deps(self, **kwargs):
|
||||
# import something that will raise error if the user does not install requirement_*.txt
|
||||
# 🏃♂️🏃♂️🏃♂️ 主进程执行
|
||||
import importlib
|
||||
importlib.import_module('modelscope')
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 GPT-Academic Interface
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
|
||||
49  request_llm/bridge_spark.py
@@ -0,0 +1,49 @@
|
||||
|
||||
import time
|
||||
import threading
|
||||
import importlib
|
||||
from toolbox import update_ui, get_conf
|
||||
from multiprocessing import Process, Pipe
|
||||
|
||||
model_name = '星火认知大模型'
|
||||
|
||||
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
|
||||
"""
|
||||
⭐多线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
watch_dog_patience = 5
|
||||
response = ""
|
||||
|
||||
from .com_sparkapi import SparkRequestInstance
|
||||
sri = SparkRequestInstance()
|
||||
for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
|
||||
if len(observe_window) >= 1:
|
||||
observe_window[0] = response
|
||||
if len(observe_window) >= 2:
|
||||
if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
|
||||
return response
|
||||
|
||||
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
|
||||
"""
|
||||
⭐单线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
chatbot.append((inputs, ""))
|
||||
|
||||
if additional_fn is not None:
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 开始接收回复
|
||||
from .com_sparkapi import SparkRequestInstance
|
||||
sri = SparkRequestInstance()
|
||||
for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
|
||||
chatbot[-1] = (inputs, response)
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
# 总结输出
|
||||
if response == f"[Local Message]: 等待{model_name}响应中 ...":
|
||||
response = f"[Local Message]: {model_name}响应异常 ..."
|
||||
history.extend([inputs, response])
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
@@ -248,14 +248,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
return
|
||||
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]:
|
||||
inputs = core_functional[additional_fn]["PreProcess"](
|
||||
inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + \
|
||||
inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
history_feedin = []
|
||||
for i in range(len(history)//2):
|
||||
|
||||
@@ -96,11 +96,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
||||
additional_fn代表点击的哪个按钮,按钮见functional.py
|
||||
"""
|
||||
if additional_fn is not None:
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
raw_input = "What I would like to say is the following: " + inputs
|
||||
history.extend([inputs, ""])
|
||||
|
||||
229  request_llm/chatglmoonx.py
@@ -0,0 +1,229 @@
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Source Code From https://huggingface.co/K024/ChatGLM-6b-onnx-u8s8/blob/main/model.py
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
import re
|
||||
import numpy as np
|
||||
# import torch
|
||||
from onnxruntime import InferenceSession, SessionOptions
|
||||
|
||||
|
||||
# Currently `MatMulInteger` and `DynamicQuantizeLinear` are only supported on CPU,
|
||||
# although they are documented as supported on CUDA.
|
||||
providers = ["CPUExecutionProvider"]
|
||||
|
||||
# if torch.cuda.is_available():
|
||||
# providers = ["CUDAExecutionProvider"] + providers
|
||||
|
||||
|
||||
# Default paths
|
||||
tokenizer_path = "chatglm-6b-int8-onnx-merged/sentencepiece.model"
|
||||
onnx_model_path = "chatglm-6b-int8-onnx-merged/chatglm-6b-int8.onnx"
|
||||
|
||||
|
||||
# input & output names
|
||||
past_names = [f"past_{name}_{i}" for i in range(28) for name in ["key", "value"]]
|
||||
present_names = [f"present_{name}_{i}" for i in range(28) for name in ["key", "value"]]
|
||||
output_names = ["logits"] + present_names
|
||||
|
||||
|
||||
# default kv_cache for first inference
|
||||
default_past_key_values = {
|
||||
k: np.zeros((1, 0, 32, 128), dtype=np.float32) for k in past_names
|
||||
}
|
||||
|
||||
|
||||
def chat_template(history: list[tuple[str, str]], current: str):
|
||||
prompt = ""
|
||||
chat_round = 0
|
||||
for question, answer in history:
|
||||
prompt += f"[Round {chat_round}]\n问:{question}\n答:{answer}\n"
|
||||
chat_round += 1
|
||||
prompt += f"[Round {chat_round}]\n问:{current}\n答:"
|
||||
return prompt
|
||||
|
||||
|
||||
def process_response(response: str):
|
||||
response = response.strip()
|
||||
response = response.replace("[[训练时间]]", "2023年")
|
||||
punkts = [
|
||||
[",", ","],
|
||||
["!", "!"],
|
||||
[":", ":"],
|
||||
[";", ";"],
|
||||
["\?", "?"],
|
||||
]
|
||||
for item in punkts:
|
||||
response = re.sub(r"([\u4e00-\u9fff])%s" % item[0], r"\1%s" % item[1], response)
|
||||
response = re.sub(r"%s([\u4e00-\u9fff])" % item[0], r"%s\1" % item[1], response)
|
||||
return response
|
||||
|
||||
|
||||
class ChatGLMModel():
|
||||
|
||||
def __init__(self, onnx_model_path=onnx_model_path, tokenizer_path=tokenizer_path, profile=False) -> None:
|
||||
self.tokenizer = ChatGLMTokenizer(tokenizer_path)
|
||||
options = SessionOptions()
|
||||
options.enable_profiling = profile
|
||||
self.session = InferenceSession(onnx_model_path, options, providers=providers)
|
||||
self.eop_token_id = self.tokenizer["<eop>"]
|
||||
|
||||
|
||||
def prepare_input(self, prompt: str):
|
||||
input_ids, prefix_mask = self.tokenizer.encode(prompt)
|
||||
|
||||
input_ids = np.array([input_ids], dtype=np.longlong)
|
||||
prefix_mask = np.array([prefix_mask], dtype=np.longlong)
|
||||
|
||||
return input_ids, prefix_mask, default_past_key_values
|
||||
|
||||
|
||||
def sample_next_token(self, logits: np.ndarray, top_k=50, top_p=0.7, temperature=1):
|
||||
# softmax with temperature
|
||||
exp_logits = np.exp(logits / temperature)
|
||||
probs = exp_logits / np.sum(exp_logits)
|
||||
|
||||
# top k
|
||||
top_k_idx = np.argsort(-probs)[:top_k]
|
||||
top_k_probs = probs[top_k_idx]
|
||||
|
||||
# top p
|
||||
cumsum_probs = np.cumsum(top_k_probs)
|
||||
top_k_probs[(cumsum_probs - top_k_probs) > top_p] = 0.0
|
||||
top_k_probs = top_k_probs / np.sum(top_k_probs)
|
||||
|
||||
# sample
|
||||
next_token = np.random.choice(top_k_idx, size=1, p=top_k_probs)
|
||||
return next_token[0].item()
|
||||
|
||||
|
||||
def generate_iterate(self, prompt: str, max_generated_tokens=100, top_k=50, top_p=0.7, temperature=1):
|
||||
input_ids, prefix_mask, past_key_values = self.prepare_input(prompt)
|
||||
output_tokens = []
|
||||
|
||||
while True:
|
||||
inputs = {
|
||||
"input_ids": input_ids,
|
||||
"prefix_mask": prefix_mask,
|
||||
"use_past": np.array(len(output_tokens) > 0),
|
||||
}
|
||||
inputs.update(past_key_values)
|
||||
|
||||
logits, *past_key_values = self.session.run(output_names, inputs)
|
||||
past_key_values = { k: v for k, v in zip(past_names, past_key_values) }
|
||||
|
||||
next_token = self.sample_next_token(logits[0, -1], top_k=top_k, top_p=top_p, temperature=temperature)
|
||||
|
||||
output_tokens += [next_token]
|
||||
|
||||
if next_token == self.eop_token_id or len(output_tokens) > max_generated_tokens:
|
||||
break
|
||||
|
||||
input_ids = np.array([[next_token]], dtype=np.longlong)
|
||||
prefix_mask = np.concatenate([prefix_mask, np.array([[0]], dtype=np.longlong)], axis=1)
|
||||
|
||||
yield process_response(self.tokenizer.decode(output_tokens))
|
||||
|
||||
return process_response(self.tokenizer.decode(output_tokens))
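# Hedged usage sketch (not part of the diff; the paths are the module defaults above):
# stream partial responses from the int8 ONNX export. Each yield from generate_iterate
# is the full decoded reply so far; sample_next_token applies temperature, then top-k,
# then nucleus (top-p) filtering before drawing the next token.
def _demo_onnx_chatglm():
    model = ChatGLMModel(
        onnx_model_path="chatglm-6b-int8-onnx-merged/chatglm-6b-int8.onnx",
        tokenizer_path="chatglm-6b-int8-onnx-merged/sentencepiece.model",
    )
    prompt = chat_template(history=[("你好", "你好,请问有什么可以帮您?")], current="用一句话介绍你自己")
    for partial in model.generate_iterate(prompt, max_generated_tokens=64, top_k=50, top_p=0.7):
        print(partial)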
# ------------------------------------------------------------------------------------------------------------------------
|
||||
# 🔌💻 Source Code From https://huggingface.co/K024/ChatGLM-6b-onnx-u8s8/blob/main/tokenizer.py
|
||||
# ------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
import re
|
||||
from sentencepiece import SentencePieceProcessor
|
||||
|
||||
|
||||
def replace_spaces_with_blank(match: re.Match[str]):
|
||||
return f"<|blank_{len(match.group())}|>"
|
||||
|
||||
|
||||
def replace_blank_with_spaces(match: re.Match[str]):
|
||||
return " " * int(match.group(1))
|
||||
|
||||
|
||||
class ChatGLMTokenizer:
|
||||
def __init__(self, vocab_file):
|
||||
assert vocab_file is not None
|
||||
self.vocab_file = vocab_file
|
||||
self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "<sop>", "<eop>", "<ENC>", "<dBLOCK>"]
|
||||
self.text_tokenizer = SentencePieceProcessor(str(vocab_file))
|
||||
|
||||
def __len__(self):
|
||||
return len(self.text_tokenizer)
|
||||
|
||||
def __getitem__(self, key: str):
|
||||
return self.text_tokenizer[key]
|
||||
|
||||
|
||||
def preprocess(self, text: str, linebreak=True, whitespaces=True):
|
||||
if linebreak:
|
||||
text = text.replace("\n", "<n>")
|
||||
if whitespaces:
|
||||
text = text.replace("\t", "<|tab|>")
|
||||
text = re.sub(r" {2,80}", replace_spaces_with_blank, text)
|
||||
return text
|
||||
|
||||
|
||||
def encode(
|
||||
self, text: str, text_pair: str = None,
|
||||
linebreak=True, whitespaces=True,
|
||||
add_dummy_prefix=True, special_tokens=True,
|
||||
) -> tuple[list[int], list[int]]:
|
||||
"""
|
||||
text: Text to encode. Bidirectional part with a [gMASK] and an <sop> for causal LM.
|
||||
text_pair: causal LM part.
|
||||
linebreak: Whether to encode newline (\n) in text.
|
||||
whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
|
||||
special_tokens: Whether to encode special token ([MASK], [gMASK], etc.) in text.
|
||||
add_dummy_prefix: Whether to add dummy blank space in the beginning.
|
||||
"""
|
||||
text = self.preprocess(text, linebreak, whitespaces)
|
||||
if not add_dummy_prefix:
|
||||
text = "<n>" + text
|
||||
|
||||
tokens = self.text_tokenizer.encode(text)
|
||||
prefix_mask = [1] * len(tokens)
|
||||
if special_tokens:
|
||||
tokens += [self.text_tokenizer["[gMASK]"], self.text_tokenizer["<sop>"]]
|
||||
prefix_mask += [1, 0]
|
||||
|
||||
if text_pair is not None:
|
||||
text_pair = self.preprocess(text_pair, linebreak, whitespaces)
|
||||
pair_tokens = self.text_tokenizer.encode(text_pair)
|
||||
tokens += pair_tokens
|
||||
prefix_mask += [0] * len(pair_tokens)
|
||||
if special_tokens:
|
||||
tokens += [self.text_tokenizer["<eop>"]]
|
||||
prefix_mask += [0]
|
||||
|
||||
return (tokens if add_dummy_prefix else tokens[2:]), prefix_mask
|
||||
|
||||
|
||||
def decode(self, text_ids: list[int]) -> str:
|
||||
text = self.text_tokenizer.decode(text_ids)
|
||||
text = text.replace("<n>", "\n")
|
||||
text = text.replace("<|tab|>", "\t")
|
||||
text = re.sub(r"<\|blank_(\d\d?)\|>", replace_blank_with_spaces, text)
|
||||
return text
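# Hedged sketch (not part of the diff): a round trip through the tokenizer's
# pre/post-processing; requires the sentencepiece model shipped with the ONNX export.
def _demo_tokenizer_roundtrip():
    tok = ChatGLMTokenizer("chatglm-6b-int8-onnx-merged/sentencepiece.model")
    ids, prefix_mask = tok.encode("def f(x):\n    return x")
    # newlines become <n>, runs of spaces become <|blank_k|>; the encoder appends
    # [gMASK] + <sop> and extends the mask accordingly, so both lists stay aligned
    assert len(ids) == len(prefix_mask)
    print(tok.decode(ids[:-2]))  # drop the trailing [gMASK]/<sop> pair before decoding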
|
||||
|
||||
|
||||
184  request_llm/com_sparkapi.py
@@ -0,0 +1,184 @@
|
||||
from toolbox import get_conf
|
||||
import base64
|
||||
import datetime
|
||||
import hashlib
|
||||
import hmac
|
||||
import json
|
||||
from urllib.parse import urlparse
|
||||
import ssl
|
||||
from datetime import datetime
|
||||
from time import mktime
|
||||
from urllib.parse import urlencode
|
||||
from wsgiref.handlers import format_date_time
|
||||
import websocket
|
||||
import threading, time
|
||||
|
||||
timeout_bot_msg = '[Local Message] Request timeout. Network error.'
|
||||
|
||||
class Ws_Param(object):
|
||||
# 初始化
|
||||
def __init__(self, APPID, APIKey, APISecret, gpt_url):
|
||||
self.APPID = APPID
|
||||
self.APIKey = APIKey
|
||||
self.APISecret = APISecret
|
||||
self.host = urlparse(gpt_url).netloc
|
||||
self.path = urlparse(gpt_url).path
|
||||
self.gpt_url = gpt_url
|
||||
|
||||
# 生成url
|
||||
def create_url(self):
|
||||
# 生成RFC1123格式的时间戳
|
||||
now = datetime.now()
|
||||
date = format_date_time(mktime(now.timetuple()))
|
||||
|
||||
# 拼接字符串
|
||||
signature_origin = "host: " + self.host + "\n"
|
||||
signature_origin += "date: " + date + "\n"
|
||||
signature_origin += "GET " + self.path + " HTTP/1.1"
|
||||
|
||||
# 进行hmac-sha256进行加密
|
||||
signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'), digestmod=hashlib.sha256).digest()
|
||||
signature_sha_base64 = base64.b64encode(signature_sha).decode(encoding='utf-8')
|
||||
authorization_origin = f'api_key="{self.APIKey}", algorithm="hmac-sha256", headers="host date request-line", signature="{signature_sha_base64}"'
|
||||
authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')
|
||||
|
||||
# 将请求的鉴权参数组合为字典
|
||||
v = {
|
||||
"authorization": authorization,
|
||||
"date": date,
|
||||
"host": self.host
|
||||
}
|
||||
# 拼接鉴权参数,生成url
|
||||
url = self.gpt_url + '?' + urlencode(v)
|
||||
# 此处打印出建立连接时候的url,参考本demo的时候可取消上方打印的注释,比对相同参数时生成的url与自己代码生成的url是否一致
|
||||
return url
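# Hedged usage sketch (credentials are placeholders, not part of the diff): create_url()
# signs "host + date + request-line" with HMAC-SHA256, base64-encodes the result, and
# packs it into the `authorization` query parameter expected by the Spark endpoint.
def _demo_spark_signed_url():
    ws_param = Ws_Param(
        APPID="your-appid",
        APIKey="your-api-key",
        APISecret="your-api-secret",
        gpt_url="ws://spark-api.xf-yun.com/v1.1/chat",
    )
    print(ws_param.create_url())
    # -> ws://spark-api.xf-yun.com/v1.1/chat?authorization=...&date=...&host=spark-api.xf-yun.com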
|
||||
|
||||
|
||||
|
||||
class SparkRequestInstance():
|
||||
def __init__(self):
|
||||
XFYUN_APPID, XFYUN_API_SECRET, XFYUN_API_KEY = get_conf('XFYUN_APPID', 'XFYUN_API_SECRET', 'XFYUN_API_KEY')
|
||||
|
||||
self.appid = XFYUN_APPID
|
||||
self.api_secret = XFYUN_API_SECRET
|
||||
self.api_key = XFYUN_API_KEY
|
||||
self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
|
||||
self.time_to_yield_event = threading.Event()
|
||||
self.time_to_exit_event = threading.Event()
|
||||
|
||||
self.result_buf = ""
|
||||
|
||||
def generate(self, inputs, llm_kwargs, history, system_prompt):
|
||||
llm_kwargs = llm_kwargs
|
||||
history = history
|
||||
system_prompt = system_prompt
|
||||
import _thread as thread
|
||||
thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt))
|
||||
while True:
|
||||
self.time_to_yield_event.wait(timeout=1)
|
||||
if self.time_to_yield_event.is_set():
|
||||
yield self.result_buf
|
||||
if self.time_to_exit_event.is_set():
|
||||
return self.result_buf
|
||||
|
||||
|
||||
def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
|
||||
wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, self.gpt_url)
|
||||
websocket.enableTrace(False)
|
||||
wsUrl = wsParam.create_url()
|
||||
|
||||
# 收到websocket连接建立的处理
|
||||
def on_open(ws):
|
||||
import _thread as thread
|
||||
thread.start_new_thread(run, (ws,))
|
||||
|
||||
def run(ws, *args):
|
||||
data = json.dumps(gen_params(ws.appid, *ws.all_args))
|
||||
ws.send(data)
|
||||
|
||||
# 收到websocket消息的处理
|
||||
def on_message(ws, message):
|
||||
data = json.loads(message)
|
||||
code = data['header']['code']
|
||||
if code != 0:
|
||||
print(f'请求错误: {code}, {data}')
|
||||
ws.close()
|
||||
self.time_to_exit_event.set()
|
||||
else:
|
||||
choices = data["payload"]["choices"]
|
||||
status = choices["status"]
|
||||
content = choices["text"][0]["content"]
|
||||
ws.content += content
|
||||
self.result_buf += content
|
||||
if status == 2:
|
||||
ws.close()
|
||||
self.time_to_exit_event.set()
|
||||
self.time_to_yield_event.set()
|
||||
|
||||
# 收到websocket错误的处理
|
||||
def on_error(ws, error):
|
||||
print("error:", error)
|
||||
self.time_to_exit_event.set()
|
||||
|
||||
# 收到websocket关闭的处理
|
||||
def on_close(ws, *args):
|
||||
self.time_to_exit_event.set()
|
||||
|
||||
# websocket
|
||||
ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close, on_open=on_open)
|
||||
ws.appid = self.appid
|
||||
ws.content = ""
|
||||
ws.all_args = (inputs, llm_kwargs, history, system_prompt)
|
||||
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
|
||||
|
||||
def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
|
||||
conversation_cnt = len(history) // 2
|
||||
messages = [{"role": "system", "content": system_prompt}]
|
||||
if conversation_cnt:
|
||||
for index in range(0, 2*conversation_cnt, 2):
|
||||
what_i_have_asked = {}
|
||||
what_i_have_asked["role"] = "user"
|
||||
what_i_have_asked["content"] = history[index]
|
||||
what_gpt_answer = {}
|
||||
what_gpt_answer["role"] = "assistant"
|
||||
what_gpt_answer["content"] = history[index+1]
|
||||
if what_i_have_asked["content"] != "":
|
||||
if what_gpt_answer["content"] == "": continue
|
||||
if what_gpt_answer["content"] == timeout_bot_msg: continue
|
||||
messages.append(what_i_have_asked)
|
||||
messages.append(what_gpt_answer)
|
||||
else:
|
||||
messages[-1]['content'] = what_gpt_answer['content']
|
||||
what_i_ask_now = {}
|
||||
what_i_ask_now["role"] = "user"
|
||||
what_i_ask_now["content"] = inputs
|
||||
messages.append(what_i_ask_now)
|
||||
return messages
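# Hedged illustration (not part of the diff): a two-turn history is interleaved into
# system/user/assistant messages, with the current question appended last; empty
# answers and watchdog-timeout answers are dropped by the checks above.
def _demo_message_payload():
    msgs = generate_message_payload(
        inputs="那下一步呢?",
        llm_kwargs={},
        history=["第一步怎么做?", "先安装依赖。", "然后呢?", "运行main.py。"],
        system_prompt="You are a helpful assistant.",
    )
    for m in msgs:
        print(m["role"], ":", m["content"])
    # printed roles, in order: system, user, assistant, user, assistant, user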
|
||||
|
||||
|
||||
def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
|
||||
"""
|
||||
通过appid和用户的提问来生成请求参数
|
||||
"""
|
||||
data = {
|
||||
"header": {
|
||||
"app_id": appid,
|
||||
"uid": "1234"
|
||||
},
|
||||
"parameter": {
|
||||
"chat": {
|
||||
"domain": "general",
|
||||
"temperature": llm_kwargs["temperature"],
|
||||
"random_threshold": 0.5,
|
||||
"max_tokens": 4096,
|
||||
"auditing": "default"
|
||||
}
|
||||
},
|
||||
"payload": {
|
||||
"message": {
|
||||
"text": generate_message_payload(inputs, llm_kwargs, history, system_prompt)
|
||||
}
|
||||
}
|
||||
}
|
||||
return data
|
||||
|
||||
180  request_llm/local_llm_class.py
@@ -0,0 +1,180 @@
|
||||
from transformers import AutoModel, AutoTokenizer
|
||||
import time
|
||||
import threading
|
||||
import importlib
|
||||
from toolbox import update_ui, get_conf, Singleton
|
||||
from multiprocessing import Process, Pipe
|
||||
|
||||
def SingletonLocalLLM(cls):
|
||||
"""
|
||||
一个单实例装饰器
|
||||
"""
|
||||
_instance = {}
|
||||
def _singleton(*args, **kargs):
|
||||
if cls not in _instance:
|
||||
_instance[cls] = cls(*args, **kargs)
|
||||
return _instance[cls]
|
||||
elif _instance[cls].corrupted:
|
||||
_instance[cls] = cls(*args, **kargs)
|
||||
return _instance[cls]
|
||||
else:
|
||||
return _instance[cls]
|
||||
return _singleton
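# Hedged behavioural sketch (not part of the diff): unlike a plain singleton, the cached
# instance is discarded and rebuilt once it flags itself as corrupted (e.g. the model
# subprocess failed to load), so a later request can retry the load.
def _demo_singleton_local_llm():
    @SingletonLocalLLM
    class DummyHandle:
        def __init__(self):
            self.corrupted = False
    a = DummyHandle()
    assert DummyHandle() is a          # cached like a normal singleton
    a.corrupted = True                 # mark the instance as broken
    assert DummyHandle() is not a      # a corrupted instance is rebuilt on the next request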
|
||||
|
||||
class LocalLLMHandle(Process):
|
||||
def __init__(self):
|
||||
# ⭐主进程执行
|
||||
super().__init__(daemon=True)
|
||||
self.corrupted = False
|
||||
self.load_model_info()
|
||||
self.parent, self.child = Pipe()
|
||||
self.running = True
|
||||
self._model = None
|
||||
self._tokenizer = None
|
||||
self.info = ""
|
||||
self.check_dependency()
|
||||
self.start()
|
||||
self.threadLock = threading.Lock()
|
||||
|
||||
def load_model_info(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
raise NotImplementedError("Method not implemented yet")
|
||||
self.model_name = ""
|
||||
self.cmd_to_install = ""
|
||||
|
||||
def load_model_and_tokenizer(self):
|
||||
"""
|
||||
This function should return the model and the tokenizer
|
||||
"""
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
raise NotImplementedError("Method not implemented yet")
|
||||
|
||||
def llm_stream_generator(self, **kwargs):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
raise NotImplementedError("Method not implemented yet")
|
||||
|
||||
def try_to_import_special_deps(self, **kwargs):
|
||||
"""
|
||||
import something that will raise error if the user does not install requirement_*.txt
|
||||
"""
|
||||
# ⭐主进程执行
|
||||
raise NotImplementedError("Method not implemented yet")
|
||||
|
||||
def check_dependency(self):
|
||||
# ⭐主进程执行
|
||||
try:
|
||||
self.try_to_import_special_deps()
|
||||
self.info = "依赖检测通过"
|
||||
self.running = True
|
||||
except:
|
||||
self.info = f"缺少{self.model_name}的依赖,如果要使用{self.model_name},除了基础的pip依赖以外,您还需要运行{self.cmd_to_install}安装{self.model_name}的依赖。"
|
||||
self.running = False
|
||||
|
||||
def run(self):
|
||||
# 🏃♂️🏃♂️🏃♂️ 子进程执行
|
||||
# 第一次运行,加载参数
|
||||
try:
|
||||
self._model, self._tokenizer = self.load_model_and_tokenizer()
|
||||
except:
|
||||
self.running = False
|
||||
from toolbox import trimmed_format_exc
|
||||
self.child.send(f'[Local Message] 不能正常加载{self.model_name}的参数.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
|
||||
self.child.send('[FinishBad]')
|
||||
raise RuntimeError(f"不能正常加载{self.model_name}的参数!")
|
||||
|
||||
while True:
|
||||
# 进入任务等待状态
|
||||
kwargs = self.child.recv()
|
||||
# 收到消息,开始请求
|
||||
try:
|
||||
for response_full in self.llm_stream_generator(**kwargs):
|
||||
self.child.send(response_full)
|
||||
self.child.send('[Finish]')
|
||||
# 请求处理结束,开始下一个循环
|
||||
except:
|
||||
from toolbox import trimmed_format_exc
|
||||
self.child.send(f'[Local Message] 调用{self.model_name}失败.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
|
||||
self.child.send('[Finish]')
|
||||
|
||||
def stream_chat(self, **kwargs):
|
||||
# ⭐主进程执行
|
||||
self.threadLock.acquire()
|
||||
self.parent.send(kwargs)
|
||||
while True:
|
||||
res = self.parent.recv()
|
||||
if res == '[Finish]':
|
||||
break
|
||||
if res == '[FinishBad]':
|
||||
self.running = False
|
||||
self.corrupted = True
|
||||
break
|
||||
else:
|
||||
yield res
|
||||
self.threadLock.release()
|
||||
|
||||
|
||||
|
||||
def get_local_llm_predict_fns(LLMSingletonClass, model_name):
|
||||
load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
|
||||
|
||||
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
|
||||
"""
|
||||
⭐多线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
_llm_handle = LLMSingletonClass()
|
||||
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + _llm_handle.info
|
||||
if not _llm_handle.running: raise RuntimeError(_llm_handle.info)
|
||||
|
||||
# chatglm 没有 sys_prompt 接口,因此把prompt加入 history
|
||||
history_feedin = []
|
||||
history_feedin.append(["What can I do?", sys_prompt])
|
||||
for i in range(len(history)//2):
|
||||
history_feedin.append([history[2*i], history[2*i+1]] )
|
||||
|
||||
watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
|
||||
response = ""
|
||||
for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
|
||||
if len(observe_window) >= 1:
|
||||
observe_window[0] = response
|
||||
if len(observe_window) >= 2:
|
||||
if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
|
||||
return response
|
||||
|
||||
|
||||
|
||||
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
|
||||
"""
|
||||
⭐单线程方法
|
||||
函数的说明请见 request_llm/bridge_all.py
|
||||
"""
|
||||
chatbot.append((inputs, ""))
|
||||
|
||||
_llm_handle = LLMSingletonClass()
|
||||
chatbot[-1] = (inputs, load_message + "\n\n" + _llm_handle.info)
|
||||
yield from update_ui(chatbot=chatbot, history=[])
|
||||
if not _llm_handle.running: raise RuntimeError(_llm_handle.info)
|
||||
|
||||
if additional_fn is not None:
|
||||
from core_functional import handle_core_functionality
|
||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||
|
||||
# 处理历史信息
|
||||
history_feedin = []
|
||||
history_feedin.append(["What can I do?", system_prompt] )
|
||||
for i in range(len(history)//2):
|
||||
history_feedin.append([history[2*i], history[2*i+1]] )
|
||||
|
||||
# 开始接收回复
|
||||
response = f"[Local Message]: 等待{model_name}响应中 ..."
|
||||
for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
|
||||
chatbot[-1] = (inputs, response)
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
# 总结输出
|
||||
if response == f"[Local Message]: 等待{model_name}响应中 ...":
|
||||
response = f"[Local Message]: {model_name}响应异常 ..."
|
||||
history.extend([inputs, response])
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
return predict_no_ui_long_connection, predict
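# Hedged sketch (not part of the diff) of how a new local-model bridge is assembled from
# the pieces above, mirroring request_llm/bridge_qwen.py; the model objects and the
# requirements file name are placeholders.
model_name_demo = "MyLocalModel"

@SingletonLocalLLM
class GetMyModelHandleDemo(LocalLLMHandle):
    def load_model_info(self):
        # 🏃♂️ runs in the subprocess
        self.model_name = model_name_demo
        self.cmd_to_install = "`pip install -r request_llm/requirements_mymodel.txt`"

    def try_to_import_special_deps(self, **kwargs):
        # ⭐ runs in the main process; raise early if the extra requirements are missing
        import transformers

    def load_model_and_tokenizer(self):
        # 🏃♂️ runs in the subprocess; placeholder objects stand in for the real model
        return object(), object()

    def llm_stream_generator(self, **kwargs):
        # 🏃♂️ runs in the subprocess; stream partial replies back through the pipe
        yield f"echo: {kwargs['query']}"

predict_no_ui_long_connection_demo, predict_demo = get_local_llm_predict_fns(GetMyModelHandleDemo, model_name_demo)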
|
||||
@@ -1,5 +1,5 @@
|
||||
protobuf
|
||||
transformers==4.27.1
|
||||
transformers>=4.27.1
|
||||
cpm_kernels
|
||||
torch>=1.10
|
||||
mdtex2html
|
||||
|
||||
@@ -0,0 +1,11 @@
|
||||
protobuf
|
||||
transformers>=4.27.1
|
||||
cpm_kernels
|
||||
torch>=1.10
|
||||
mdtex2html
|
||||
sentencepiece
|
||||
numpy
|
||||
onnxruntime
|
||||
sentencepiece
|
||||
streamlit
|
||||
streamlit-chat
|
||||
@@ -0,0 +1,2 @@
|
||||
modelscope
|
||||
transformers_stream_generator
|
||||
@@ -18,4 +18,5 @@ openai
|
||||
numpy
|
||||
arxiv
|
||||
rich
|
||||
websocket-client
|
||||
pypdf2==2.12.1
|
||||
|
||||
0  tests/__init__.py
@@ -15,10 +15,12 @@ if __name__ == "__main__":
|
||||
# from request_llm.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
|
||||
# from request_llm.bridge_jittorllms_llama import predict_no_ui_long_connection
|
||||
# from request_llm.bridge_claude import predict_no_ui_long_connection
|
||||
from request_llm.bridge_internlm import predict_no_ui_long_connection
|
||||
# from request_llm.bridge_internlm import predict_no_ui_long_connection
|
||||
# from request_llm.bridge_qwen import predict_no_ui_long_connection
|
||||
from request_llm.bridge_spark import predict_no_ui_long_connection
|
||||
|
||||
llm_kwargs = {
|
||||
'max_length': 512,
|
||||
'max_length': 4096,
|
||||
'top_p': 1,
|
||||
'temperature': 1,
|
||||
}
|
||||
52  tests/test_plugins.py
@@ -0,0 +1,52 @@
|
||||
"""
|
||||
对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py
|
||||
"""
|
||||
|
||||
|
||||
import os, sys
|
||||
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
|
||||
validate_path() # 返回项目根路径
|
||||
from tests.test_utils import plugin_test
|
||||
|
||||
if __name__ == "__main__":
|
||||
plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
|
||||
|
||||
plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.解析项目源代码->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input="crazy_functions/test_project/pdf_and_word")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")
|
||||
|
||||
# for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
|
||||
# plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})
|
||||
|
||||
# plugin_test(plugin='crazy_functions.Langchain知识库->知识库问答', main_input="./")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="What is the installation method?")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="远程云服务器部署?")
|
||||
|
||||
# plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
|
||||
|
||||
# advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" }
|
||||
# plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
|
||||
|
||||
# advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
|
||||
# plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)
|
||||
|
||||
78  tests/test_utils.py
@@ -0,0 +1,78 @@
|
||||
from toolbox import get_conf
|
||||
from toolbox import set_conf
|
||||
from toolbox import set_multi_conf
|
||||
from toolbox import get_plugin_handle
|
||||
from toolbox import get_plugin_default_kwargs
|
||||
from toolbox import get_chat_handle
|
||||
from toolbox import get_chat_default_kwargs
|
||||
from functools import wraps
|
||||
import sys
|
||||
import os
|
||||
|
||||
def chat_to_markdown_str(chat):
|
||||
result = ""
|
||||
for i, cc in enumerate(chat):
|
||||
result += f'\n\n{cc[0]}\n\n{cc[1]}'
|
||||
if i != len(chat)-1:
|
||||
result += '\n\n---'
|
||||
return result
|
||||
|
||||
def silence_stdout(func):
|
||||
@wraps(func)
|
||||
def wrapper(*args, **kwargs):
|
||||
_original_stdout = sys.stdout
|
||||
sys.stdout = open(os.devnull, 'w')
|
||||
for q in func(*args, **kwargs):
|
||||
sys.stdout = _original_stdout
|
||||
yield q
|
||||
sys.stdout = open(os.devnull, 'w')
|
||||
sys.stdout.close()
|
||||
sys.stdout = _original_stdout
|
||||
return wrapper
|
||||
|
||||
def silence_stdout_fn(func):
|
||||
@wraps(func)
|
||||
def wrapper(*args, **kwargs):
|
||||
_original_stdout = sys.stdout
|
||||
sys.stdout = open(os.devnull, 'w')
|
||||
result = func(*args, **kwargs)
|
||||
sys.stdout.close()
|
||||
sys.stdout = _original_stdout
|
||||
return result
|
||||
return wrapper
|
||||
|
||||
class VoidTerminal():
|
||||
def __init__(self) -> None:
|
||||
pass
|
||||
|
||||
vt = VoidTerminal()
|
||||
vt.get_conf = silence_stdout_fn(get_conf)
|
||||
vt.set_conf = silence_stdout_fn(set_conf)
|
||||
vt.set_multi_conf = silence_stdout_fn(set_multi_conf)
|
||||
vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
|
||||
vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
|
||||
vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
|
||||
vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
|
||||
vt.chat_to_markdown_str = chat_to_markdown_str
|
||||
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
|
||||
vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
|
||||
|
||||
def plugin_test(main_input, plugin, advanced_arg=None):
|
||||
from rich.live import Live
|
||||
from rich.markdown import Markdown
|
||||
|
||||
vt.set_conf(key="API_KEY", value=API_KEY)
|
||||
vt.set_conf(key="LLM_MODEL", value=LLM_MODEL)
|
||||
|
||||
plugin = vt.get_plugin_handle(plugin)
|
||||
plugin_kwargs = vt.get_plugin_default_kwargs()
|
||||
plugin_kwargs['main_input'] = main_input
|
||||
if advanced_arg is not None:
|
||||
plugin_kwargs['plugin_kwargs'] = advanced_arg
|
||||
my_working_plugin = silence_stdout(plugin)(**plugin_kwargs)
|
||||
|
||||
with Live(Markdown(""), auto_refresh=False) as live:
|
||||
for cookies, chat, hist, msg in my_working_plugin:
|
||||
md_str = vt.chat_to_markdown_str(chat)
|
||||
md = Markdown(md_str)
|
||||
live.update(md, refresh=True)
|
||||
@@ -103,6 +103,10 @@ mspace {
|
||||
width: 10% !important;
|
||||
}
|
||||
|
||||
button.sm {
|
||||
padding: 6px 8px !important;
|
||||
}
|
||||
|
||||
/* usage_display */
|
||||
.insert_block {
|
||||
position: relative;
|
||||
@@ -124,15 +128,15 @@ textarea {
|
||||
resize: none;
|
||||
height: 100%; /* 填充父元素的高度 */
|
||||
}
|
||||
#main_chatbot {
|
||||
/* #main_chatbot {
|
||||
height: 75vh !important;
|
||||
max-height: 75vh !important;
|
||||
/* overflow: auto !important; */
|
||||
overflow: auto !important;
|
||||
z-index: 2;
|
||||
transform: translateZ(0) !important;
|
||||
backface-visibility: hidden !important;
|
||||
will-change: transform !important;
|
||||
}
|
||||
} */
|
||||
#prompt_result{
|
||||
height: 60vh !important;
|
||||
max-height: 60vh !important;
|
||||
@@ -201,9 +205,9 @@ textarea.svelte-1pie7s6 {
|
||||
background: #393939 !important;
|
||||
border: var(--input-border-width) solid var(--input-border-color) !important;
|
||||
}
|
||||
.dark input[type="range"] {
|
||||
/* .dark input[type="range"] {
|
||||
background: #393939 !important;
|
||||
}
|
||||
} */
|
||||
#user_info .wrap {
|
||||
opacity: 0;
|
||||
}
|
||||
@@ -238,7 +242,7 @@ textarea.svelte-1pie7s6 {
|
||||
#debug_mes {
|
||||
transition: all 0.6s;
|
||||
}
|
||||
#main_chatbot {
|
||||
#gpt-chatbot {
|
||||
transition: height 0.3s ease;
|
||||
}
|
||||
|
||||
@@ -415,7 +419,7 @@ input[type="range"]::-webkit-slider-thumb {
|
||||
input[type="range"]::-webkit-slider-thumb:hover {
|
||||
background: var(--neutral-50);
|
||||
}
|
||||
input[type=range]::-webkit-slider-runnable-track {
|
||||
input[type="range"]::-webkit-slider-runnable-track {
|
||||
-webkit-appearance: none;
|
||||
box-shadow: none;
|
||||
border: none;
|
||||
@@ -440,28 +444,37 @@ ol:not(.options), ul:not(.options) {
|
||||
}
|
||||
|
||||
/* 亮色(默认) */
|
||||
#main_chatbot {
|
||||
#gpt-chatbot {
|
||||
background-color: var(--chatbot-background-color-light) !important;
|
||||
color: var(--chatbot-color-light) !important;
|
||||
box-shadow: 0 0 12px 4px rgba(0, 0, 0, 0.06);
|
||||
}
|
||||
/* 暗色 */
|
||||
.dark #main_chatbot {
|
||||
.dark #gpt-chatbot {
|
||||
background-color: var(--block-background-fill) !important;
|
||||
color: var(--chatbot-color-dark) !important;
|
||||
box-shadow: 0 0 12px 4px rgba(0, 0, 0, 0.2);
|
||||
}
|
||||
|
||||
#gpt-panel > div {
|
||||
box-shadow: 0 0 12px 4px rgba(0, 0, 0, 0.06);
|
||||
}
|
||||
.dark #gpt-panel > div {
|
||||
box-shadow: 0 0 12px 4px rgba(0, 0, 0, 0.2);
|
||||
}
|
||||
|
||||
/* 屏幕宽度大于等于500px的设备 */
|
||||
/* update on 2023.4.8: 高度的细致调整已写入JavaScript */
|
||||
@media screen and (min-width: 500px) {
|
||||
/* @media screen and (min-width: 500px) {
|
||||
#main_chatbot {
|
||||
height: calc(100vh - 200px);
|
||||
}
|
||||
#main_chatbot .wrap {
|
||||
max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
|
||||
}
|
||||
}
|
||||
} */
|
||||
/* 屏幕宽度小于500px的设备 */
|
||||
@media screen and (max-width: 499px) {
|
||||
/* @media screen and (max-width: 499px) {
|
||||
#main_chatbot {
|
||||
height: calc(100vh - 140px);
|
||||
}
|
||||
@@ -474,8 +487,8 @@ ol:not(.options), ul:not(.options) {
|
||||
#app_title h1{
|
||||
letter-spacing: -1px; font-size: 22px;
|
||||
}
|
||||
}
|
||||
#main_chatbot .wrap {
|
||||
} */
|
||||
#gpt-chatbot .wrap {
|
||||
overflow-x: hidden
|
||||
}
|
||||
/* 对话气泡 */
|
||||
@@ -491,11 +504,19 @@ ol:not(.options), ul:not(.options) {
|
||||
[data-testid = "bot"] {
|
||||
max-width: 85%;
|
||||
border-bottom-left-radius: 0 !important;
|
||||
background-color: var(--message-bot-background-color-light) !important;
|
||||
}
|
||||
[data-testid = "user"] {
|
||||
max-width: 85%;
|
||||
width: auto !important;
|
||||
border-bottom-right-radius: 0 !important;
|
||||
background-color: var(--message-user-background-color-light) !important;
|
||||
}
|
||||
.dark [data-testid = "bot"] {
|
||||
background-color: var(--message-bot-background-color-dark) !important;
|
||||
}
|
||||
.dark [data-testid = "user"] {
|
||||
background-color: var(--message-user-background-color-dark) !important;
|
||||
}
|
||||
|
||||
.message p {
|
||||
|
||||
41  themes/green.js
@@ -0,0 +1,41 @@
|
||||
|
||||
var academic_chat = null;
|
||||
|
||||
var sliders = null;
|
||||
var rangeInputs = null;
|
||||
var numberInputs = null;
|
||||
|
||||
function set_elements() {
|
||||
academic_chat = document.querySelector('gradio-app');
|
||||
async function get_sliders() {
|
||||
sliders = document.querySelectorAll('input[type="range"]');
|
||||
while (sliders.length == 0) {
|
||||
await new Promise(r => setTimeout(r, 100));
|
||||
sliders = document.querySelectorAll('input[type="range"]');
|
||||
}
|
||||
setSlider();
|
||||
}
|
||||
get_sliders();
|
||||
}
|
||||
|
||||
function setSlider() {
|
||||
rangeInputs = document.querySelectorAll('input[type="range"]');
|
||||
numberInputs = document.querySelectorAll('input[type="number"]')
|
||||
function setSliderRange() {
|
||||
var range = document.querySelectorAll('input[type="range"]');
|
||||
range.forEach(range => {
|
||||
range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%';
|
||||
});
|
||||
}
|
||||
setSliderRange();
|
||||
rangeInputs.forEach(rangeInput => {
|
||||
rangeInput.addEventListener('input', setSliderRange);
|
||||
});
|
||||
numberInputs.forEach(numberInput => {
|
||||
numberInput.addEventListener('input', setSliderRange);
|
||||
})
|
||||
}
|
||||
|
||||
window.addEventListener("DOMContentLoaded", () => {
|
||||
set_elements();
|
||||
});
|
||||
@@ -56,14 +56,14 @@ def adjust_theme():
|
||||
button_primary_background_fill_hover="*primary_400",
|
||||
button_primary_border_color="*primary_500",
|
||||
button_primary_border_color_dark="*primary_600",
|
||||
button_primary_text_color="wihte",
|
||||
button_primary_text_color="white",
|
||||
button_primary_text_color_dark="white",
|
||||
button_secondary_background_fill="*neutral_100",
|
||||
button_secondary_background_fill_hover="*neutral_50",
|
||||
button_secondary_background_fill_dark="*neutral_900",
|
||||
button_secondary_text_color="*neutral_800",
|
||||
button_secondary_text_color_dark="white",
|
||||
background_fill_primary="#F7F7F7",
|
||||
background_fill_primary="*neutral_50",
|
||||
background_fill_primary_dark="#1F1F1F",
|
||||
block_title_text_color="*primary_500",
|
||||
block_title_background_fill_dark="*primary_900",
|
||||
@@ -87,6 +87,10 @@ def adjust_theme():
|
||||
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
|
||||
<script src="file=docs/waifu_plugin/autoload.js"></script>
|
||||
"""
|
||||
|
||||
with open('themes/green.js', 'r', encoding='utf8') as f:
|
||||
js += f"<script>{f.read()}</script>"
|
||||
|
||||
gradio_original_template_fn = gr.routes.templates.TemplateResponse
|
||||
def gradio_new_template_fn(*args, **kwargs):
|
||||
res = gradio_original_template_fn(*args, **kwargs)
|
||||
|
||||
164  toolbox.py
@@ -117,20 +117,20 @@ def CatchException(f):
|
||||
"""
|
||||
|
||||
@wraps(f)
|
||||
def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT=-1):
|
||||
def decorated(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs):
|
||||
try:
|
||||
yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)
|
||||
yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)
|
||||
except Exception as e:
|
||||
from check_proxy import check_proxy
|
||||
from toolbox import get_conf
|
||||
proxies, = get_conf('proxies')
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
if len(chatbot) == 0:
|
||||
chatbot.clear()
|
||||
chatbot.append(["插件调度异常", "异常原因"])
|
||||
chatbot[-1] = (chatbot[-1][0],
|
||||
if len(chatbot_with_cookie) == 0:
|
||||
chatbot_with_cookie.clear()
|
||||
chatbot_with_cookie.append(["插件调度异常", "异常原因"])
|
||||
chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0],
|
||||
f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}') # 刷新界面
|
||||
yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # 刷新界面
|
||||
return decorated
|
||||
|
||||
|
||||
@@ -196,11 +196,10 @@ def write_results_to_file(history, file_name=None):
|
||||
import time
|
||||
if file_name is None:
|
||||
# file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
|
||||
file_name = 'chatGPT分析报告' + \
|
||||
time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
|
||||
file_name = 'GPT-Report-' + gen_time_str() + '.md'
|
||||
os.makedirs('./gpt_log/', exist_ok=True)
|
||||
with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
|
||||
f.write('# chatGPT 分析报告\n')
|
||||
f.write('# GPT-Academic Report\n')
|
||||
for i, content in enumerate(history):
|
||||
try:
|
||||
if type(content) != str: content = str(content)
|
||||
@@ -219,6 +218,37 @@ def write_results_to_file(history, file_name=None):
|
||||
return res
|
||||
|
||||
|
||||
def write_history_to_file(history, file_basename=None, file_fullname=None):
|
||||
"""
|
||||
将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
|
||||
"""
|
||||
import os
|
||||
import time
|
||||
if file_fullname is None:
|
||||
if file_basename is not None:
|
||||
file_fullname = os.path.join(get_log_folder(), file_basename)
|
||||
else:
|
||||
file_fullname = os.path.join(get_log_folder(), f'GPT-Academic-{gen_time_str()}.md')
|
||||
os.makedirs(os.path.dirname(file_fullname), exist_ok=True)
|
||||
with open(file_fullname, 'w', encoding='utf8') as f:
|
||||
f.write('# GPT-Academic Report\n')
|
||||
for i, content in enumerate(history):
|
||||
try:
|
||||
if type(content) != str: content = str(content)
|
||||
except:
|
||||
continue
|
||||
if i % 2 == 0:
|
||||
f.write('## ')
|
||||
try:
|
||||
f.write(content)
|
||||
except:
|
||||
# remove everything that cannot be handled by utf8
|
||||
f.write(content.encode('utf-8', 'ignore').decode())
|
||||
f.write('\n\n')
|
||||
res = os.path.abspath(file_fullname)
|
||||
return res
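# Hedged usage sketch (not part of the diff): persist a finished conversation and get
# back the absolute path of the Markdown report inside the shared log folder.
def _demo_write_history_to_file():
    path = write_history_to_file(
        ["What is attention?", "Attention weighs pairwise token interactions ..."],
        file_basename="demo-report.md",
    )
    print(path)  # -> <repo>/gpt_log/default/shared/demo-report.md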
|
||||
|
||||
|
||||
def regular_txt_to_markdown(text):
|
||||
"""
|
||||
将普通文本转换为Markdown格式的文本。
|
||||
@@ -466,7 +496,7 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
|
||||
# 将文件复制一份到下载区
|
||||
import shutil
|
||||
if rename_file is None: rename_file = f'{gen_time_str()}-{os.path.basename(file)}'
|
||||
new_path = os.path.join(f'./gpt_log/', rename_file)
|
||||
new_path = os.path.join(get_log_folder(), rename_file)
|
||||
# 如果已经存在,先删除
|
||||
if os.path.exists(new_path) and not os.path.samefile(new_path, file): os.remove(new_path)
|
||||
# 把文件复制过去
|
||||
@@ -477,6 +507,10 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
|
||||
else: current = []
|
||||
chatbot._cookies.update({'file_to_promote': [new_path] + current})
|
||||
|
||||
def disable_auto_promotion(chatbot):
|
||||
chatbot._cookies.update({'file_to_promote': []})
|
||||
return
|
||||
|
||||
def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
|
||||
"""
|
||||
当文件被上传时的回调函数
|
||||
@@ -492,7 +526,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
|
||||
shutil.rmtree('./private_upload/')
|
||||
except:
|
||||
pass
|
||||
time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
|
||||
time_tag = gen_time_str()
|
||||
os.makedirs(f'private_upload/{time_tag}', exist_ok=True)
|
||||
err_msg = ''
|
||||
for file in files:
|
||||
@@ -849,8 +883,7 @@ def zip_folder(source_folder, dest_folder, zip_name):
|
||||
print(f"Zip file created at {zip_file}")
|
||||
|
||||
def zip_result(folder):
|
||||
import time
|
||||
t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
|
||||
t = gen_time_str()
|
||||
zip_folder(folder, './gpt_log/', f'{t}-result.zip')
|
||||
return pj('./gpt_log/', f'{t}-result.zip')
|
||||
|
||||
@@ -858,6 +891,11 @@ def gen_time_str():
|
||||
import time
|
||||
return time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
|
||||
|
||||
def get_log_folder(user='default', plugin_name='shared'):
|
||||
_dir = os.path.join(os.path.dirname(__file__), 'gpt_log', user, plugin_name)
|
||||
if not os.path.exists(_dir): os.makedirs(_dir)
|
||||
return _dir
|
||||
|
||||
class ProxyNetworkActivate():
|
||||
"""
|
||||
这段代码定义了一个名为TempProxy的空上下文管理器, 用于给一小段代码上代理
|
||||
@@ -902,3 +940,101 @@ def Singleton(cls):
|
||||
return _instance[cls]
|
||||
|
||||
return _singleton
|
||||
|
||||
"""
|
||||
========================================================================
|
||||
第四部分
|
||||
接驳虚空终端:
|
||||
- set_conf: 在运行过程中动态地修改配置
|
||||
- set_multi_conf: 在运行过程中动态地修改多个配置
|
||||
- get_plugin_handle: 获取插件的句柄
|
||||
- get_plugin_default_kwargs: 获取插件的默认参数
|
||||
- get_chat_handle: 获取简单聊天的句柄
|
||||
- get_chat_default_kwargs: 获取简单聊天的默认参数
|
||||
========================================================================
|
||||
"""
|
||||
|
||||
def set_conf(key, value):
|
||||
from toolbox import read_single_conf_with_lru_cache, get_conf
|
||||
read_single_conf_with_lru_cache.cache_clear()
|
||||
get_conf.cache_clear()
|
||||
os.environ[key] = str(value)
|
||||
altered, = get_conf(key)
|
||||
return altered
|
||||
|
||||
def set_multi_conf(dic):
|
||||
for k, v in dic.items(): set_conf(k, v)
|
||||
return
|
||||
|
||||
def get_plugin_handle(plugin_name):
|
||||
"""
|
||||
e.g. plugin_name = 'crazy_functions.批量Markdown翻译->Markdown翻译指定语言'
|
||||
"""
|
||||
import importlib
|
||||
assert '->' in plugin_name, \
|
||||
"Example of plugin_name: crazy_functions.批量Markdown翻译->Markdown翻译指定语言"
|
||||
module, fn_name = plugin_name.split('->')
|
||||
f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
|
||||
return f_hot_reload
|
||||
|
||||
def get_chat_handle():
|
||||
"""
|
||||
"""
|
||||
from request_llm.bridge_all import predict_no_ui_long_connection
|
||||
return predict_no_ui_long_connection
|
||||
|
||||
def get_plugin_default_kwargs():
|
||||
"""
|
||||
"""
|
||||
from toolbox import get_conf, ChatBotWithCookies
|
||||
|
||||
WEB_PORT, LLM_MODEL, API_KEY = \
|
||||
get_conf('WEB_PORT', 'LLM_MODEL', 'API_KEY')
|
||||
|
||||
llm_kwargs = {
|
||||
'api_key': API_KEY,
|
||||
'llm_model': LLM_MODEL,
|
||||
'top_p':1.0,
|
||||
'max_length': None,
|
||||
'temperature':1.0,
|
||||
}
|
||||
chatbot = ChatBotWithCookies(llm_kwargs)
|
||||
|
||||
# txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
|
||||
default_plugin_kwargs = {
|
||||
"main_input": "./README.md",
|
||||
"llm_kwargs": llm_kwargs,
|
||||
"plugin_kwargs": {},
|
||||
"chatbot_with_cookie": chatbot,
|
||||
"history": [],
|
||||
"system_prompt": "You are a good AI.",
|
||||
"web_port": WEB_PORT
|
||||
}
|
||||
return default_plugin_kwargs
|
||||
|
||||
def get_chat_default_kwargs():
|
||||
"""
|
||||
"""
|
||||
from toolbox import get_conf
|
||||
|
||||
LLM_MODEL, API_KEY = get_conf('LLM_MODEL', 'API_KEY')
|
||||
|
||||
llm_kwargs = {
|
||||
'api_key': API_KEY,
|
||||
'llm_model': LLM_MODEL,
|
||||
'top_p':1.0,
|
||||
'max_length': None,
|
||||
'temperature':1.0,
|
||||
}
|
||||
|
||||
default_chat_kwargs = {
|
||||
"inputs": "Hello there, are you ready?",
|
||||
"llm_kwargs": llm_kwargs,
|
||||
"history": [],
|
||||
"sys_prompt": "You are AI assistant",
|
||||
"observe_window": None,
|
||||
"console_slience": False,
|
||||
}
|
||||
|
||||
return default_chat_kwargs
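# Hedged sketch (not part of the diff): driving a headless chat through the helpers above.
def _demo_headless_chat():
    chat = get_chat_handle()          # -> request_llm.bridge_all.predict_no_ui_long_connection
    kwargs = get_chat_default_kwargs()
    kwargs["inputs"] = "Summarize what GPT-Academic does in one sentence."
    print(chat(**kwargs))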
|
||||
|
||||
|
||||
4  version
@@ -1,5 +1,5 @@
|
||||
{
|
||||
"version": 3.47,
|
||||
"version": 3.48,
|
||||
"show_feature": true,
|
||||
"new_feature": "优化一键升级 <-> 提高arxiv翻译速度和成功率 <-> 支持自定义APIKEY格式 <-> 临时修复theme的文件丢失问题 <-> 新增实时语音对话插件(自动断句,脱手对话) <-> 支持加载自定义的ChatGLM2微调模型 <-> 动态ChatBot窗口高度 <-> 修复Azure接口的BUG <-> 完善多语言模块 <-> 完善本地Latex矫错和翻译功能 <-> 增加gpt-3.5-16k的支持"
|
||||
"new_feature": "接入阿里通义千问、讯飞星火、上海AI-Lab书生 <-> 优化一键升级 <-> 提高arxiv翻译速度和成功率 <-> 支持自定义APIKEY格式 <-> 临时修复theme的文件丢失问题 <-> 新增实时语音对话插件(自动断句,脱手对话) <-> 支持加载自定义的ChatGLM2微调模型 <-> 动态ChatBot窗口高度 <-> 修复Azure接口的BUG <-> 完善多语言模块 <-> 完善本地Latex矫错和翻译功能 <-> 增加gpt-3.5-16k的支持"
|
||||
}
|
||||
|
||||