Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 14:36:48 +00:00

Comparing commits: version3.4 ... version3.5
50 commits
| SHA1 |
|---|
| 843113ba0f |
| 79080290c6 |
| 9bd2023a8e |
| 0d6e32d31a |
| 0418257218 |
| a3e6fc0141 |
| 1dd165a3cd |
| e666b5269e |
| 0b70e9df7b |
| 1639796041 |
| d0af074225 |
| 6d7f3feab3 |
| 045b7f6312 |
| 116b7ce12f |
| 8b0905c076 |
| b69140307b |
| b31abbcad3 |
| 2d5a1fbc12 |
| 89de49f31e |
| a208782049 |
| eb802ee975 |
| f40d48b014 |
| ef4203f5ca |
| adf93195e8 |
| 3e5cdbaf68 |
| 27cab3b38a |
| 09d38e4abf |
| 7efb5cb6f5 |
| 31ff6e1e7a |
| 2fa3d47887 |
| 2cca46375c |
| 06410b593c |
| 545c9f47de |
| 973ad41bde |
| 3fa7416eb2 |
| ec76d3dcc4 |
| 3f27bec94b |
| ed11269aef |
| 6c653734ec |
| 19bd0c35ed |
| 3f4c4ebc29 |
| 6cc7d4ed69 |
| 67fff17917 |
| 8fce49fa02 |
| 30f28b37c3 |
| 6a5681dd0a |
| dacc282763 |
| 9720bec5e5 |
| 8b3b883fce |
| 4dc0f8e57a |
README.md (15)
@@ -16,7 +16,7 @@ To translate this project to arbitary language with GPT, read and run [`multi_la
 >
 > 1.请注意只有 **高亮(如红色)** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
 >
-> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。[安装方法](#installation)。
+> 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。[安装方法](#installation)。
 >
 > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM和Moss等等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。

@@ -27,7 +27,7 @@ To translate this project to arbitary language with GPT, read and run [`multi_la
 功能(⭐= 近期新增功能) | 描述
 --- | ---
-⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | ⭐阿里达摩院[通义千问](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/)
+⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, [通义千问](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
 一键润色 | 支持一键润色、一键查找论文语法错误
 一键中英互译 | 一键中英互译
 一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
@@ -102,7 +102,7 @@ cd gpt_academic
 在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。

-(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。P.S.项目同样支持通过`环境变量`配置大多数选项,环境变量的书写格式参考`docker-compose`文件。读取优先级: `环境变量` > `config_private.py` > `config.py`)
+(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。`config_private.py`不受git管控,可以让您的隐私信息更加安全。P.S.项目同样支持通过`环境变量`配置大多数选项,环境变量的书写格式参考`docker-compose`文件。读取优先级: `环境变量` > `config_private.py` > `config.py`)

 3. 安装依赖
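The read priority described in the hunk above (`环境变量` > `config_private.py` > `config.py`) can be illustrated with a minimal sketch. This is an illustration of the documented precedence only, not the project's actual loader (the repo reads configuration via `toolbox.get_conf`, as seen later in this diff):

```python
import os, importlib

def read_conf(name: str, default=None):
    # 1) an environment variable always wins
    if name in os.environ:
        return os.environ[name]
    # 2) then the git-ignored private overrides in config_private.py
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            return getattr(private, name)
    except ImportError:
        pass
    # 3) finally the tracked defaults in config.py
    public = importlib.import_module("config")
    return getattr(public, name, default)

print(read_conf("API_KEY"))
```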
@@ -166,7 +166,7 @@ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
 ```
 P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用docker-compose获取Latex功能(修改docker-compose.yml,保留方案4并删除其他方案)。

-2. ChatGPT + ChatGLM2 + MOSS(需要熟悉Docker)
+2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

 ``` sh

@@ -174,7 +174,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
 docker-compose up
 ```

-3. ChatGPT + LLAMA + 盘古 + RWKV(需要熟悉Docker)
+3. ChatGPT + LLAMA + 盘古 + RWKV(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml)

 ``` sh

@@ -300,7 +300,10 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 ### II:版本:
-- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级)
+- version 3.60(todo): 优化虚空终端,引入code interpreter和更多插件
+- version 3.50: 使用自然语言调用本项目的所有函数插件(虚空终端),支持插件分类,改进UI,设计新主题
+- version 3.49: 支持百度千帆平台和文心一言
+- version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
 - version 3.46: 支持完全脱手操作的实时语音对话
 - version 3.45: 支持自定义ChatGLM2微调模型
 - version 3.44: 正式支持Azure,优化界面易用性
config.py (114)
@@ -11,7 +11,7 @@
 API_KEY = "此处填API密钥"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"

-# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改
+# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改;如果使用本地或无地域限制的大模型时,此处也不需要修改
 USE_PROXY = False
 if USE_PROXY:
     """

@@ -43,7 +43,11 @@ API_URL_REDIRECT = {}
 DEFAULT_WORKER_NUM = 3

-# 对话窗的高度
+# 色彩主题,可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
+THEME = "Default"
+
+# 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
 CHATBOT_HEIGHT = 1115

@@ -68,14 +72,26 @@ WEB_PORT = -1
 MAX_RETRY = 2

+# 插件分类默认选项
+DEFAULT_FN_GROUPS = ['对话', '编程', '学术']
+
 # 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
-# P.S. 其他可用的模型还包括 ["qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "spark", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo",
+                    "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
+# P.S. 其他可用的模型还包括 ["qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613",
+#                          "spark", "sparkv2", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]

-# ChatGLM(2) Finetune Model Path (如果使用ChatGLM2微调模型,需要把"chatglmft"加入AVAIL_LLM_MODELS中)
-ChatGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
+# 百度千帆(LLM_MODEL="qianfan")
+BAIDU_CLOUD_API_KEY = ''
+BAIDU_CLOUD_SECRET_KEY = ''
+BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'    # 可选 "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat"
+
+# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
+CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"

 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU

@@ -91,10 +107,6 @@ CONCURRENT_COUNT = 100
 AUTO_CLEAR_TXT = False

-# 色彩主体,可选 ["Default", "Chuanhu-Small-and-Beautiful"]
-THEME = "Default"
-
 # 加一个live2d装饰
 ADD_WAIFU = False

@@ -150,3 +162,85 @@ ANTHROPIC_API_KEY = ""
 # 自定义API KEY格式
 CUSTOM_API_KEY_PATTERN = ""

+# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
+HUGGINGFACE_ACCESS_TOKEN = "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"
+
+# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
+# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
+GROBID_URLS = [
+    "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
+    "https://shaocongma-grobid.hf.space","https://FBR123-grobid.hf.space", "https://yeku-grobid.hf.space",
+]
+
+# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
+ALLOW_RESET_CONFIG = False
+
+"""
+在线大模型配置关联关系示意图
+│
+├── "gpt-3.5-turbo" 等openai模型
+│   ├── API_KEY
+│   ├── CUSTOM_API_KEY_PATTERN(不常用)
+│   ├── API_ORG(不常用)
+│   └── API_URL_REDIRECT(不常用)
+│
+├── "azure-gpt-3.5" 等azure模型
+│   ├── API_KEY
+│   ├── AZURE_ENDPOINT
+│   ├── AZURE_API_KEY
+│   ├── AZURE_ENGINE
+│   └── API_URL_REDIRECT
+│
+├── "spark" 星火认知大模型 spark & sparkv2
+│   ├── XFYUN_APPID
+│   ├── XFYUN_API_SECRET
+│   └── XFYUN_API_KEY
+│
+├── "claude-1-100k" 等claude模型
+│   └── ANTHROPIC_API_KEY
+│
+├── "stack-claude"
+│   ├── SLACK_CLAUDE_BOT_ID
+│   └── SLACK_CLAUDE_USER_TOKEN
+│
+├── "qianfan" 百度千帆大模型库
+│   ├── BAIDU_CLOUD_QIANFAN_MODEL
+│   ├── BAIDU_CLOUD_API_KEY
+│   └── BAIDU_CLOUD_SECRET_KEY
+│
+├── "newbing" Newbing接口不再稳定,不推荐使用
+    ├── NEWBING_STYLE
+    └── NEWBING_COOKIES
+
+用户图形界面布局依赖关系示意图
+│
+├── CHATBOT_HEIGHT 对话窗的高度
+├── CODE_HIGHLIGHT 代码高亮
+├── LAYOUT 窗口布局
+├── DARK_MODE 暗色模式 / 亮色模式
+├── DEFAULT_FN_GROUPS 插件分类默认选项
+├── THEME 色彩主题
+├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
+├── ADD_WAIFU 加一个live2d装饰
+├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
+
+插件在线服务配置依赖关系示意图
+│
+├── 语音功能
+│   ├── ENABLE_AUDIO
+│   ├── ALIYUN_TOKEN
+│   ├── ALIYUN_APPKEY
+│   ├── ALIYUN_ACCESSKEY
+│   └── ALIYUN_SECRET
+│
+├── PDF文档精准解析
+│   └── GROBID_URLS
+"""
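The dependency tree in the new docstring above maps each online model to the keys it needs. The following is a hedged sketch of how one might sanity-check a selection against that tree; the `REQUIRED` table is an illustration transcribed from the diagram, not a structure that exists in `config.py`:

```python
# Illustrative only: required-key table copied from the docstring tree above.
REQUIRED = {
    "qianfan":      ["BAIDU_CLOUD_API_KEY", "BAIDU_CLOUD_SECRET_KEY", "BAIDU_CLOUD_QIANFAN_MODEL"],
    "spark":        ["XFYUN_APPID", "XFYUN_API_SECRET", "XFYUN_API_KEY"],
    "stack-claude": ["SLACK_CLAUDE_BOT_ID", "SLACK_CLAUDE_USER_TOKEN"],
}

import config  # the module whose diff is shown above

missing = [key for key in REQUIRED.get(config.LLM_MODEL, [])
           if not getattr(config, key, "")]
if missing:
    print(f"LLM_MODEL={config.LLM_MODEL!r} still needs: {missing}")
```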
@@ -63,6 +63,7 @@ def get_core_functions():
         "英译中": {
             "Prefix": r"翻译成地道的中文:" + "\n\n",
             "Suffix": r"",
+            "Visible": False,
         },
         "找图片": {
             "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +

@@ -78,6 +79,7 @@ def get_core_functions():
             "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
                       r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
                       r"Items need to be transformed:",
+            "Visible": False,
             "Suffix": r"",
         }
     }
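Each core-function entry works by wrapping the user's input with its `Prefix` and `Suffix` before the text goes to the model; `Visible` plausibly controls whether the entry is shown as a button. A minimal sketch of that composition (`apply_core_function` is a hypothetical helper written for illustration, not a function from the repo):

```python
core_functions = {
    "英译中": {
        "Prefix": "翻译成地道的中文:" + "\n\n",
        "Suffix": "",
        "Visible": False,  # assumed: hidden from the button area, still callable
    },
}

def apply_core_function(name: str, user_input: str) -> str:
    """Compose the final prompt: Prefix + user input + Suffix."""
    entry = core_functions[name]
    return entry["Prefix"] + user_input + entry["Suffix"]

print(apply_core_function("英译中", "The quick brown fox jumps over the lazy dog."))
```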
@@ -2,7 +2,6 @@ from toolbox import HotReload # HotReload 的意思是热更新,修改函数

def get_crazy_functions():
-    ###################### 第一组插件 ###########################
    from crazy_functions.读文章写摘要 import 读文章写摘要
    from crazy_functions.生成函数注释 import 批量生成函数注释
    from crazy_functions.解析项目源代码 import 解析项目本身

@@ -25,114 +24,8 @@ def get_crazy_functions():
    from crazy_functions.对话历史存档 import 载入对话历史存档
    from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
    from crazy_functions.辅助功能 import 清除缓存
    from crazy_functions.批量Markdown翻译 import Markdown英译中
-    function_plugins = {
-        "解析整个Python项目": {
-            "Color": "stop",    # 按钮颜色
-            "Function": HotReload(解析一个Python项目)
-        },
-        "载入对话历史存档(先上传存档或输入路径)": {
-            "Color": "stop",
-            "AsButton": False,
-            "Function": HotReload(载入对话历史存档)
-        },
-        "删除所有本地对话历史记录(请谨慎操作)": {
-            "AsButton": False,
-            "Function": HotReload(删除所有本地对话历史记录)
-        },
-        "清除所有缓存文件(请谨慎操作)": {
-            "Color": "stop",
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(清除缓存)
-        },
-        "解析Jupyter Notebook文件": {
-            "Color": "stop",
-            "AsButton": False,
-            "Function": HotReload(解析ipynb文件),
-            "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
-            "ArgsReminder": "若输入0,则不解析notebook中的Markdown块",  # 高级参数输入区的显示提示
-        },
-        "批量总结Word文档": {
-            "Color": "stop",
-            "Function": HotReload(总结word文档)
-        },
-        "解析整个C++项目头文件": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个C项目的头文件)
-        },
-        "解析整个C++项目(.cpp/.hpp/.c/.h)": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个C项目)
-        },
-        "解析整个Go项目": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个Golang项目)
-        },
-        "解析整个Rust项目": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个Rust项目)
-        },
-        "解析整个Java项目": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个Java项目)
-        },
-        "解析整个前端项目(js,ts,css等)": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个前端项目)
-        },
-        "解析整个Lua项目": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个Lua项目)
-        },
-        "解析整个CSharp项目": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析一个CSharp项目)
-        },
-        "读Tex论文写摘要": {
-            "Color": "stop",    # 按钮颜色
-            "Function": HotReload(读文章写摘要)
-        },
-        "Markdown/Readme英译中": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
-            "Color": "stop",
-            "Function": HotReload(Markdown英译中)
-        },
-        "批量生成函数注释": {
-            "Color": "stop",    # 按钮颜色
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(批量生成函数注释)
-        },
-        "保存当前的对话": {
-            "Function": HotReload(对话历史存档)
-        },
-        "[多线程Demo] 解析此项目本身(源码自译解)": {
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(解析项目本身)
-        },
-        # "[老旧的Demo] 把本项目源代码切换成全英文": {
-        #     # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
-        #     "AsButton": False,  # 加入下拉菜单中
-        #     "Function": HotReload(全项目切换英文)
-        # },
-        "[插件demo] 历史上的今天": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
-            "Function": HotReload(高阶功能模板函数)
-        },
-    }
-    ###################### 第二组插件 ###########################
-    # [第二组插件]: 经过充分测试
    from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
    # from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
    from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
    from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
    from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入

@@ -141,88 +34,248 @@ def get_crazy_functions():
    from crazy_functions.Latex全文翻译 import Latex中译英
    from crazy_functions.Latex全文翻译 import Latex英译中
    from crazy_functions.批量Markdown翻译 import Markdown中译英
+    from crazy_functions.虚空终端 import 虚空终端

-    function_plugins.update({
-        "批量翻译PDF文档(多线程)": {
+    function_plugins = {
+        "虚空终端": {
            "Group": "对话|编程|学术",
            "Color": "stop",
-            "AsButton": True,  # 加入下拉菜单中
+            "AsButton": True,
            "Function": HotReload(虚空终端)
        },
        "解析整个Python项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": True,
            "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
            "Function": HotReload(解析一个Python项目)
        },
        "载入对话历史存档(先上传存档或输入路径)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,
            "Info": "载入对话历史存档 | 输入参数为路径",
            "Function": HotReload(载入对话历史存档)
        },
        "删除所有本地对话历史记录(谨慎操作)": {
            "Group": "对话",
            "AsButton": False,
            "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
            "Function": HotReload(删除所有本地对话历史记录)
        },
        "清除所有缓存文件(谨慎操作)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
            "Function": HotReload(清除缓存)
        },
        "批量总结Word文档": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": True,
            "Info": "批量总结word文档 | 输入参数为路径",
            "Function": HotReload(总结word文档)
        },
        "解析整个C++项目头文件": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
            "Function": HotReload(解析一个C项目的头文件)
        },
        "解析整个C++项目(.cpp/.hpp/.c/.h)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
            "Function": HotReload(解析一个C项目)
        },
        "解析整个Go项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Golang项目)
        },
        "解析整个Rust项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Rust项目)
        },
        "解析整个Java项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Java项目)
        },
        "解析整个前端项目(js,ts,css等)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
            "Function": HotReload(解析一个前端项目)
        },
        "解析整个Lua项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Lua项目)
        },
        "解析整个CSharp项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个CSharp项目)
        },
        "解析Jupyter Notebook文件": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "解析Jupyter Notebook文件 | 输入参数为路径",
            "Function": HotReload(解析ipynb文件),
            "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
            "ArgsReminder": "若输入0,则不解析notebook中的Markdown块",  # 高级参数输入区的显示提示
        },
        "读Tex论文写摘要": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "Info": "读取Tex论文并写摘要 | 输入参数为路径",
            "Function": HotReload(读文章写摘要)
        },
        "翻译README或MD": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": True,
            "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
            "Function": HotReload(Markdown英译中)
        },
        "翻译Markdown或README(支持Github链接)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
            "Function": HotReload(Markdown英译中)
        },
        "批量生成函数注释": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "批量生成函数的注释 | 输入参数为路径",
            "Function": HotReload(批量生成函数注释)
        },
        "保存当前的对话": {
            "Group": "对话",
            "AsButton": True,
            "Info": "保存当前的对话 | 不需要输入参数",
            "Function": HotReload(对话历史存档)
        },
        "[多线程Demo]解析此项目本身(源码自译解)": {
            "Group": "对话|编程",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
            "Function": HotReload(解析项目本身)
        },
        "[插件demo]历史上的今天": {
            "Group": "对话",
            "AsButton": True,
            "Info": "查看历史上的今天事件 | 不需要输入参数",
            "Function": HotReload(高阶功能模板函数)
        },
        "精准翻译PDF论文": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": True,
            "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
            "Function": HotReload(批量翻译PDF文档)
        },
        "询问多个GPT模型": {
-            "Color": "stop",    # 按钮颜色
+            "Group": "对话",
+            "Color": "stop",
            "AsButton": True,
            "Function": HotReload(同时问询)
        },
-        "[测试功能] 批量总结PDF文档": {
+        "批量总结PDF文档": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Info": "批量总结PDF文档的内容 | 输入参数为路径",
            "Function": HotReload(批量总结PDF文档)
        },
        # "[测试功能] 批量总结PDF文档pdfminer": {
        #     "Color": "stop",
        #     "AsButton": False,  # 加入下拉菜单中
        #     "Function": HotReload(批量总结PDF文档pdfminer)
        # },
        "谷歌学术检索助手(输入谷歌学术搜索页url)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
            "Function": HotReload(谷歌检索小助手)
        },
        "理解PDF文档内容 (模仿ChatPDF)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
            "Function": HotReload(理解PDF文档内容标准文件输入)
        },
        "英文Latex项目全文润色(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英文润色)
        },
        "英文Latex项目全文纠错(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英文纠错)
        },
        "中文Latex项目全文润色(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex中文润色)
        },
        "Latex项目全文中译英(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex中译英)
        },
        "Latex项目全文英译中(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英译中)
        },
        "批量Markdown中译英(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Markdown中译英)
        },
+    }
-    })

    ###################### 第三组插件 ###########################
    # [第三组插件]: 尚未充分测试的函数插件

    # -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
    try:
        from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
        function_plugins.update({
            "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
+                "Group": "学术",
                "Color": "stop",
                "AsButton": False,  # 加入下拉菜单中
+                # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
                "Function": HotReload(下载arxiv论文并翻译摘要)
            }
        })

@@ -233,16 +286,20 @@ def get_crazy_functions():
        from crazy_functions.联网的ChatGPT import 连接网络回答问题
        function_plugins.update({
            "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,  # 加入下拉菜单中
+                # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
                "Function": HotReload(连接网络回答问题)
            }
        })
        from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
        function_plugins.update({
            "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,  # 加入下拉菜单中
+                "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
                "Function": HotReload(连接bing搜索回答问题)
            }
        })

@@ -253,10 +310,11 @@ def get_crazy_functions():
        from crazy_functions.解析项目源代码 import 解析任意code项目
        function_plugins.update({
            "解析项目源代码(手动指定和筛选源代码文件类型)": {
+                "Group": "编程",
                "Color": "stop",
                "AsButton": False,
-                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"",  # 高级参数输入区的显示提示
+                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
+                "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"",  # 高级参数输入区的显示提示
                "Function": HotReload(解析任意code项目)
            },
        })

@@ -267,10 +325,11 @@ def get_crazy_functions():
        from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
        function_plugins.update({
            "询问多个GPT模型(手动指定询问哪些模型)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
-                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4",  # 高级参数输入区的显示提示
+                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
+                "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4",  # 高级参数输入区的显示提示
                "Function": HotReload(同时问询_指定模型)
            },
        })

@@ -281,10 +340,12 @@ def get_crazy_functions():
        from crazy_functions.图片生成 import 图片生成
        function_plugins.update({
            "图片生成(先切换模型到openai或api2d)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
-                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "在这里输入分辨率, 如256x256(默认)",  # 高级参数输入区的显示提示
+                "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
+                "ArgsReminder": "在这里输入分辨率, 如256x256(默认)",  # 高级参数输入区的显示提示
+                "Info": "图片生成 | 输入参数字符串,提供图像的内容",
                "Function": HotReload(图片生成)
            },
        })

@@ -295,10 +356,12 @@ def get_crazy_functions():
        from crazy_functions.总结音视频 import 总结音视频
        function_plugins.update({
            "批量总结音视频(输入路径或上传压缩包)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": True,
                "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
+                "Info": "批量总结音频或视频 | 输入参数为路径",
                "Function": HotReload(总结音视频)
            }
        })

@@ -309,8 +372,10 @@ def get_crazy_functions():
        from crazy_functions.数学动画生成manim import 动画生成
        function_plugins.update({
            "数学动画生成(Manim)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
+                "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
                "Function": HotReload(动画生成)
            }
        })

@@ -321,6 +386,7 @@ def get_crazy_functions():
        from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
        function_plugins.update({
            "Markdown翻译(手动指定语言)": {
+                "Group": "编程",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": True,

@@ -335,6 +401,7 @@ def get_crazy_functions():
        from crazy_functions.Langchain知识库 import 知识库问答
        function_plugins.update({
            "构建知识库(请先上传文件素材)": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": True,

@@ -349,6 +416,7 @@ def get_crazy_functions():
        from crazy_functions.Langchain知识库 import 读取知识库作答
        function_plugins.update({
            "知识库问答": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": True,

@@ -363,6 +431,7 @@ def get_crazy_functions():
        from crazy_functions.交互功能函数模板 import 交互功能模板函数
        function_plugins.update({
            "交互功能模板函数": {
+                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "Function": HotReload(交互功能模板函数)

@@ -371,6 +440,68 @@ def get_crazy_functions():
    except:
        print('Load function plugin failed')

+    try:
+        from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
+        function_plugins.update({
+            "Latex英文纠错+高亮修正位置 [需Latex]": {
+                "Group": "学术",
+                "Color": "stop",
+                "AsButton": False,
+                "AdvancedArgs": True,
+                "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
+                "Function": HotReload(Latex英文纠错加PDF对比)
+            }
+        })
+        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
+        function_plugins.update({
+            "Arixv论文精细翻译(输入arxivID)[需Latex]": {
+                "Group": "学术",
+                "Color": "stop",
+                "AsButton": False,
+                "AdvancedArgs": True,
+                "ArgsReminder":
+                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
+                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
+                    'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
+                "Function": HotReload(Latex翻译中文并重新编译PDF)
+            }
+        })
+        function_plugins.update({
+            "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
+                "Group": "学术",
+                "Color": "stop",
+                "AsButton": False,
+                "AdvancedArgs": True,
+                "ArgsReminder":
+                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
+                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
+                    'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "本地Latex论文精细翻译 | 输入参数是路径",
+                "Function": HotReload(Latex翻译中文并重新编译PDF)
+            }
+        })
+    except:
+        print('Load function plugin failed')
+
+    try:
+        from toolbox import get_conf
+        ENABLE_AUDIO, = get_conf('ENABLE_AUDIO')
+        if ENABLE_AUDIO:
+            from crazy_functions.语音助手 import 语音助手
+            function_plugins.update({
+                "实时音频采集": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": True,
+                    "Info": "开始语言对话 | 没有输入参数",
+                    "Function": HotReload(语音助手)
+                }
+            })
+    except:
+        print('Load function plugin failed')

    # try:
    #     from crazy_functions.chatglm微调工具 import 微调数据集生成
    #     function_plugins.update({

@@ -385,71 +516,23 @@ def get_crazy_functions():
    # except:
    #     print('Load function plugin failed')

-    try:
-        from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
-        function_plugins.update({
-            "Latex英文纠错+高亮修正位置 [需Latex]": {
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
-                "Function": HotReload(Latex英文纠错加PDF对比)
-            }
-        })
-        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
-        function_plugins.update({
-            "Arixv论文精细翻译(输入arxivID)[需Latex]": {
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder":
-                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "+
-                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                "Function": HotReload(Latex翻译中文并重新编译PDF)
-            }
-        })
-        function_plugins.update({
-            "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder":
-                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "+
-                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                "Function": HotReload(Latex翻译中文并重新编译PDF)
-            }
-        })
-    except:
-        print('Load function plugin failed')
-
-    try:
-        from toolbox import get_conf
-        ENABLE_AUDIO, = get_conf('ENABLE_AUDIO')
-        if ENABLE_AUDIO:
-            from crazy_functions.语音助手 import 语音助手
-            function_plugins.update({
-                "实时音频采集": {
-                    "Color": "stop",
-                    "AsButton": True,
-                    "Function": HotReload(语音助手)
-                }
-            })
-    except:
-        print('Load function plugin failed')
-
-    # try:
-    #     from crazy_functions.虚空终端 import 终端
-    #     function_plugins.update({
-    #         "超级终端": {
-    #             "Color": "stop",
-    #             "AsButton": False,
-    #             # "AdvancedArgs": True,
-    #             # "ArgsReminder": "",
-    #             "Function": HotReload(终端)
-    #         }
-    #     })
-    # except:
-    #     print('Load function plugin failed')
+    """
+    设置默认值:
+    - 默认 Group = 对话
+    - 默认 AsButton = True
+    - 默认 AdvancedArgs = False
+    - 默认 Color = secondary
+    """
+    for name, function_meta in function_plugins.items():
+        if "Group" not in function_meta:
+            function_plugins[name]["Group"] = '对话'
+        if "AsButton" not in function_meta:
+            function_plugins[name]["AsButton"] = True
+        if "AdvancedArgs" not in function_meta:
+            function_plugins[name]["AdvancedArgs"] = False
+        if "Color" not in function_meta:
+            function_plugins[name]["Color"] = 'secondary'

    return function_plugins
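The new defaulting loop at the end of `get_crazy_functions` means a plugin only has to declare the keys it cares about. A self-contained sketch of the same normalization (using `dict.setdefault` instead of the loop above; the entry is copied from the table shown in the diff, with `Function` omitted so the snippet runs standalone):

```python
function_plugins = {
    "保存当前的对话": {          # registration as shown above, minus "Function"
        "Group": "对话",
        "AsButton": True,
        "Info": "保存当前的对话 | 不需要输入参数",
    },
}

# Equivalent of the defaults pass in the diff above.
for name, meta in function_plugins.items():
    meta.setdefault("Group", "对话")
    meta.setdefault("AsButton", True)
    meta.setdefault("AdvancedArgs", False)
    meta.setdefault("Color", "secondary")

assert function_plugins["保存当前的对话"]["Color"] == "secondary"   # filled in by default
assert function_plugins["保存当前的对话"]["AdvancedArgs"] is False  # filled in by default
```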
@@ -6,7 +6,7 @@ pj = os.path.join
 ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")

 # =================================== 工具函数 ===============================================
-专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
+# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
 def switch_prompt(pfg, mode, more_requirement):
     """
     Generate prompts and system prompts based on the mode for proofreading or translating.

@@ -291,7 +291,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
         yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
         promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
     else:
-        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
+        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
         yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
         promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
@@ -0,0 +1,111 @@
"""
https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb

Example 1.

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field

Example 2.

# Here's another example, but with a compound typed field.
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")
"""

import json, re, logging

PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{schema}
```"""

PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
```
{schema}
```"""

class JsonStringError(Exception): ...

class GptJsonIO():

    def __init__(self, schema, example_instruction=True):
        self.pydantic_object = schema
        self.example_instruction = example_instruction
        self.format_instructions = self.generate_format_instructions()

    def generate_format_instructions(self):
        schema = self.pydantic_object.schema()
        # Remove extraneous fields.
        reduced_schema = schema
        if "title" in reduced_schema:
            del reduced_schema["title"]
        if "type" in reduced_schema:
            del reduced_schema["type"]
        # Ensure json in context is well-formed with double quotes.
        # (hoisted out of the if-branch so the simple branch also has schema_str defined)
        schema_str = json.dumps(reduced_schema)
        if self.example_instruction:
            return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
        else:
            return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)

    def generate_output(self, text):
        # Greedy search for 1st json candidate.
        match = re.search(
            r"\{.*\}", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL
        )
        json_str = ""
        if match: json_str = match.group()
        json_object = json.loads(json_str, strict=False)
        final_object = self.pydantic_object.parse_obj(json_object)
        return final_object

    def generate_repair_prompt(self, broken_json, error):
        prompt = "Fix a broken json string.\n\n" + \
                 "(1) The broken json string that needs fixing is: \n\n" + \
                 "```" + "\n" + \
                 broken_json + "\n" + \
                 "```" + "\n\n" + \
                 "(2) The error message is: \n\n" + \
                 error + "\n\n" + \
                 "Now, fix this json string. \n\n"
        return prompt

    def generate_output_auto_repair(self, response, gpt_gen_fn):
        """
        response: string containing candidate json
        gpt_gen_fn: gpt_gen_fn(inputs, sys_prompt)
        """
        try:
            result = self.generate_output(response)
        except Exception as e:
            try:
                logging.info(f'Repairing json:{response}')
                repair_prompt = self.generate_repair_prompt(broken_json=response, error=repr(e))
                result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
                logging.info('Repair json success.')
            except Exception as e:
                # 没辙了,放弃治疗
                logging.info('Repair json fail.')
                raise JsonStringError('Cannot repair json.', str(e))
        return result
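A short usage sketch for the `GptJsonIO` class above. The `Paper` schema and the `fake_gpt` callback are illustrative stand-ins (any pydantic v1 model and any `gpt_gen_fn(inputs, sys_prompt)` callable would do); the malformed input deliberately exercises the repair path:

```python
from pydantic import BaseModel, Field

class Paper(BaseModel):               # hypothetical schema, for illustration
    title: str = Field(description="paper title")
    year: int = Field(description="publication year")

io = GptJsonIO(Paper)
# io.format_instructions would be appended to the system prompt
# so the model knows to answer with schema-conforming JSON.

def fake_gpt(inputs, sys_prompt):
    # Stand-in for a real LLM call; pretend the model returned valid JSON.
    return '{"title": "Attention Is All You Need", "year": 2017}'

# Single-quoted JSON fails json.loads, which triggers the repair round-trip:
paper = io.generate_output_auto_repair("{'title': 'Attention', 'year': 2017}", fake_gpt)
print(paper.title, paper.year)
```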
@@ -281,9 +281,12 @@ def rm_comments(main_file):
 def find_tex_file_ignore_case(fp):
     dir_name = os.path.dirname(fp)
     base_name = os.path.basename(fp)
+    # 如果输入的文件路径是正确的
     if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
+    # 如果不正确,试着加上.tex后缀试试
     if not base_name.endswith('.tex'): base_name += '.tex'
     if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
-    # go case in-sensitive
+    # 如果还找不到,解除大小写限制,再试一次
     import glob
     for f in glob.glob(dir_name+'/*.tex'):
         base_name_s = os.path.basename(fp)
@@ -0,0 +1,25 @@
import requests
import random
from functools import lru_cache

class GROBID_OFFLINE_EXCEPTION(Exception): pass

def get_avail_grobid_url():
    from toolbox import get_conf
    GROBID_URLS, = get_conf('GROBID_URLS')
    if len(GROBID_URLS) == 0: return None
    try:
        _grobid_url = random.choice(GROBID_URLS) # 随机负载均衡
        if _grobid_url.endswith('/'): _grobid_url = _grobid_url.rstrip('/')
        res = requests.get(_grobid_url+'/api/isalive')
        if res.text == 'true': return _grobid_url
        else: return None
    except:
        return None

@lru_cache(maxsize=32)
def parse_pdf(pdf_path, grobid_url):
    import scipdf # pip install scipdf_parser
    if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
    article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
    return article_dict
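A usage sketch tying the two helpers above together. The PDF path is a placeholder, and the output keys follow scipdf's usual `parse_pdf_to_dict` result, which is an assumption here:

```python
grobid_url = get_avail_grobid_url()            # random pick among live GROBID servers
if grobid_url is None:
    raise GROBID_OFFLINE_EXCEPTION("no GROBID server is reachable")

article = parse_pdf("paper.pdf", grobid_url)   # cached per (path, url) by lru_cache
print(article["title"])                        # assumed scipdf output keys
print(len(article["sections"]), "sections parsed")
```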
@@ -1,87 +0,0 @@
#include "libipc/buffer.h"
#include "libipc/utility/pimpl.h"

#include <cstring>

namespace ipc {

bool operator==(buffer const & b1, buffer const & b2) {
    return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
}

bool operator!=(buffer const & b1, buffer const & b2) {
    return !(b1 == b2);
}

class buffer::buffer_ : public pimpl<buffer_> {
public:
    void*       p_;
    std::size_t s_;
    void*       a_;
    buffer::destructor_t d_;

    buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
        : p_(p), s_(s), a_(a), d_(d) {
    }

    ~buffer_() {
        if (d_ == nullptr) return;
        d_((a_ == nullptr) ? p_ : a_, s_);
    }
};

buffer::buffer()
    : buffer(nullptr, 0, nullptr, nullptr) {
}

buffer::buffer(void* p, std::size_t s, destructor_t d)
    : p_(p_->make(p, s, d, nullptr)) {
}

buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
    : p_(p_->make(p, s, d, additional)) {
}

buffer::buffer(void* p, std::size_t s)
    : buffer(p, s, nullptr) {
}

buffer::buffer(char const & c)
    : buffer(const_cast<char*>(&c), 1) {
}

buffer::buffer(buffer&& rhs)
    : buffer() {
    swap(rhs);
}

buffer::~buffer() {
    p_->clear();
}

void buffer::swap(buffer& rhs) {
    std::swap(p_, rhs.p_);
}

buffer& buffer::operator=(buffer rhs) {
    swap(rhs);
    return *this;
}

bool buffer::empty() const noexcept {
    return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
}

void* buffer::data() noexcept {
    return impl(p_)->p_;
}

void const * buffer::data() const noexcept {
    return impl(p_)->p_;
}

std::size_t buffer::size() const noexcept {
    return impl(p_)->s_;
}

} // namespace ipc
@@ -1,701 +0,0 @@

#include <type_traits>
#include <cstring>
#include <algorithm>
#include <utility>          // std::pair, std::move, std::forward
#include <atomic>
#include <type_traits>      // aligned_storage_t
#include <string>
#include <vector>
#include <array>
#include <cassert>

#include "libipc/ipc.h"
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/pool_alloc.h"
#include "libipc/queue.h"
#include "libipc/policy.h"
#include "libipc/rw_lock.h"
#include "libipc/waiter.h"

#include "libipc/utility/log.h"
#include "libipc/utility/id_pool.h"
#include "libipc/utility/scope_guard.h"
#include "libipc/utility/utility.h"

#include "libipc/memory/resource.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_array.h"

namespace {

using msg_id_t = std::uint32_t;
using acc_t    = std::atomic<msg_id_t>;

template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t;

template <std::size_t AlignSize>
struct msg_t<0, AlignSize> {
    msg_id_t     cc_id_;
    msg_id_t     id_;
    std::int32_t remain_;
    bool         storage_;
};

template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t : msg_t<0, AlignSize> {
    std::aligned_storage_t<DataSize, AlignSize> data_ {};

    msg_t() = default;
    msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size)
        : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} {
        if (this->storage_) {
            if (data != nullptr) {
                // copy storage-id
                *reinterpret_cast<ipc::storage_id_t*>(&data_) =
                    *static_cast<ipc::storage_id_t const *>(data);
            }
        }
        else std::memcpy(&data_, data, size);
    }
};

template <typename T>
ipc::buff_t make_cache(T& data, std::size_t size) {
    auto ptr = ipc::mem::alloc(size);
    std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size));
    return { ptr, size, ipc::mem::free };
}

struct cache_t {
    std::size_t fill_;
    ipc::buff_t buff_;

    cache_t(std::size_t f, ipc::buff_t && b)
        : fill_(f), buff_(std::move(b))
    {}

    void append(void const * data, std::size_t size) {
        if (fill_ >= buff_.size() || data == nullptr || size == 0) return;
        auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size());
        std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_);
        fill_ = new_fill;
    }
};

auto cc_acc() {
    static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t));
    return static_cast<acc_t*>(acc_h.get());
}

IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept {
    return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align;
}

IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept {
    return ipc::make_align(alignof(std::max_align_t), align_chunk_size(
           ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size));
}

struct chunk_t {
    std::atomic<ipc::circ::cc_t> &conns() noexcept {
        return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this);
    }

    void *data() noexcept {
        return reinterpret_cast<ipc::byte_t *>(this)
             + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>));
    }
};

struct chunk_info_t {
    ipc::id_pool<> pool_;
    ipc::spin_lock lock_;

    IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept {
        return ipc::id_pool<>::max_count * chunk_size;
    }

    ipc::byte_t *chunks_mem() noexcept {
        return reinterpret_cast<ipc::byte_t *>(this + 1);
    }

    chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept {
        if (id < 0) return nullptr;
        return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id));
    }
};

auto& chunk_storages() {
    class chunk_handle_t {
        ipc::shm::handle handle_;

    public:
        chunk_info_t *get_info(std::size_t chunk_size) {
            if (!handle_.valid() &&
                !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(),
                                  sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) {
                ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size);
                return nullptr;
            }
            auto info = static_cast<chunk_info_t*>(handle_.get());
            if (info == nullptr) {
                ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size);
                return nullptr;
            }
            return info;
        }
    };
    static ipc::map<std::size_t, chunk_handle_t> chunk_hs;
    return chunk_hs;
}

chunk_info_t *chunk_storage_info(std::size_t chunk_size) {
    auto &storages = chunk_storages();
    std::decay_t<decltype(storages)>::iterator it;
    {
        static ipc::rw_lock lock;
        IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock};
        if ((it = storages.find(chunk_size)) == storages.end()) {
            using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type;
            guard.unlock();
            IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock};
            it = storages.emplace(chunk_size, chunk_handle_t{}).first;
        }
    }
    return it->second.get_info(chunk_size);
}

std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) {
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return {};

    info->lock_.lock();
    info->pool_.prepare();
    // got an unique id
    auto id = info->pool_.acquire();
    info->lock_.unlock();

    auto chunk = info->at(chunk_size, id);
    if (chunk == nullptr) return {};
    chunk->conns().store(conns, std::memory_order_relaxed);
    return { id, chunk->data() };
}

void *find_storage(ipc::storage_id_t id, std::size_t size) {
    if (id < 0) {
        ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return nullptr;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return nullptr;
    return info->at(chunk_size, id)->data();
}

void release_storage(ipc::storage_id_t id, std::size_t size) {
    if (id < 0) {
        ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return;
    info->lock_.lock();
    info->pool_.release(id);
    info->lock_.unlock();
}

template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>,
            std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept {
    return true;
}

template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>,
            std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept {
    auto last_conns = curr_conns & ~conn_id;
    for (unsigned k = 0;;) {
        auto chunk_conns = conns.load(std::memory_order_acquire);
        if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) {
            return (chunk_conns & last_conns) == 0;
        }
        ipc::yield(k);
    }
}

template <typename Flag>
void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) {
    if (id < 0) {
        ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return;

    auto chunk = info->at(chunk_size, id);
    if (chunk == nullptr) return;

    if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) {
        return;
    }
    info->lock_.lock();
    info->pool_.release(id);
    info->lock_.unlock();
}

template <typename MsgT>
bool clear_message(void* p) {
    auto msg = static_cast<MsgT*>(p);
    if (msg->storage_) {
        std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_;
        if (r_size <= 0) {
            ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size);
            return true;
        }
        release_storage(
            *reinterpret_cast<ipc::storage_id_t*>(&msg->data_),
            static_cast<std::size_t>(r_size));
    }
    return true;
}

struct conn_info_head {

    ipc::string name_;
    msg_id_t    cc_id_; // connection-info id
    ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_;
    ipc::shm::handle acc_h_;

    conn_info_head(char const * name)
        : name_     {name}
        , cc_id_    {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)}
        , cc_waiter_{("__CC_CONN__" + name_).c_str()}
        , wt_waiter_{("__WT_CONN__" + name_).c_str()}
        , rd_waiter_{("__RD_CONN__" + name_).c_str()}
        , acc_h_    {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} {
    }

    void quit_waiting() {
        cc_waiter_.quit_waiting();
        wt_waiter_.quit_waiting();
        rd_waiter_.quit_waiting();
    }

    auto acc() {
        return static_cast<acc_t*>(acc_h_.get());
    }

    auto& recv_cache() {
        thread_local ipc::unordered_map<msg_id_t, cache_t> tls;
        return tls;
    }
};

template <typename W, typename F>
bool wait_for(W& waiter, F&& pred, std::uint64_t tm) {
    if (tm == 0) return !pred();
    for (unsigned k = 0; pred();) {
        bool ret = true;
        ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] {
            ret = waiter.wait_if(std::forward<F>(pred), tm);
            k   = 0;
        });
        if (!ret) return false; // timeout or fail
        if (k == 0) break; // k has been reset
    }
    return true;
}

template <typename Policy,
          std::size_t DataSize  = ipc::data_length,
          std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))>
struct queue_generator {

    using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>;

    struct conn_info_t : conn_info_head {
        queue_t que_;

        conn_info_t(char const * name)
            : conn_info_head{name}
            , que_{("__QU_CONN__" +
                    ipc::to_string(DataSize) + "__" +
                    ipc::to_string(AlignSize) + "__" + name).c_str()} {
        }

        void disconnect_receiver() {
            bool dis = que_.disconnect();
            this->quit_waiting();
            if (dis) {
                this->recv_cache().clear();
            }
        }
    };
};

template <typename Policy>
struct detail_impl {

    using policy_t    = Policy;
    using flag_t      = typename policy_t::flag_t;
    using queue_t     = typename queue_generator<policy_t>::queue_t;
    using conn_info_t = typename queue_generator<policy_t>::conn_info_t;

    constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept {
        return static_cast<conn_info_t*>(h);
    }

    constexpr static queue_t* queue_of(ipc::handle_t h) noexcept {
        return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_);
    }

    /* API implementations */

    static void disconnect(ipc::handle_t h) {
        auto que = queue_of(h);
        if (que == nullptr) {
            return;
        }
        que->shut_sending();
        assert(info_of(h) != nullptr);
        info_of(h)->disconnect_receiver();
    }

    static bool reconnect(ipc::handle_t * ph, bool start_to_recv) {
        assert(ph != nullptr);
        assert(*ph != nullptr);
        auto que = queue_of(*ph);
        if (que == nullptr) {
            return false;
        }
        if (start_to_recv) {
            que->shut_sending();
            if (que->connect()) { // wouldn't connect twice
                info_of(*ph)->cc_waiter_.broadcast();
                return true;
            }
            return false;
        }
        // start_to_recv == false
        if (que->connected()) {
            info_of(*ph)->disconnect_receiver();
        }
        return que->ready_sending();
    }

    static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) {
        assert(ph != nullptr);
        if (*ph == nullptr) {
            *ph = ipc::mem::alloc<conn_info_t>(name);
        }
        return reconnect(ph, start_to_recv);
    }

    static void destroy(ipc::handle_t h) {
        disconnect(h);
        ipc::mem::free(info_of(h));
    }

    static std::size_t recv_count(ipc::handle_t h) noexcept {
        auto que = queue_of(h);
        if (que == nullptr) {
            return ipc::invalid_value;
        }
        return que->conn_count();
    }

    static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
        auto que = queue_of(h);
        if (que == nullptr) {
            return false;
        }
        return wait_for(info_of(h)->cc_waiter_, [que, r_count] {
            return que->conn_count() < r_count;
        }, tm);
    }

    template <typename F>
    static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) {
        if (data == nullptr || size == 0) {
            ipc::error("fail: send(%p, %zd)\n", data, size);
            return false;
        }
        auto que = queue_of(h);
        if (que == nullptr) {
            ipc::error("fail: send, queue_of(h) == nullptr\n");
            return false;
        }
        if (que->elems() == nullptr) {
            ipc::error("fail: send, queue_of(h)->elems() == nullptr\n");
            return false;
        }
        if (!que->ready_sending()) {
            ipc::error("fail: send, que->ready_sending() == false\n");
            return false;
        }
        ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed);
        if (conns == 0) {
            ipc::error("fail: send, there is no receiver on this connection.\n");
            return false;
        }
        // calc a new message id
        auto acc = info_of(h)->acc();
        if (acc == nullptr) {
            ipc::error("fail: send, info_of(h)->acc() == nullptr\n");
            return false;
        }
        auto msg_id   = acc->fetch_add(1, std::memory_order_relaxed);
        auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id);
        if (size > ipc::large_msg_limit) {
            auto   dat = acquire_storage(size, conns);
            void * buf = dat.second;
            if (buf != nullptr) {
                std::memcpy(buf, data, size);
                return try_push(static_cast<std::int32_t>(size) -
                                static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0);
            }
            // try using message fragment
            //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size);
        }
        // push message fragment
        std::int32_t offset = 0;
        for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) {
            if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length),
                          static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) {
                return false;
            }
        }
        // if remain > 0, this is the last message fragment
        std::int32_t remain = static_cast<std::int32_t>(size) - offset;
        if (remain > 0) {
            if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length),
                          static_cast<ipc::byte_t const *>(data) + offset,
                          static_cast<std::size_t>(remain))) {
                return false;
            }
        }
        return true;
    }

    static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
        return send([tm](auto info, auto que, auto msg_id) {
            return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
                if (!wait_for(info->wt_waiter_, [&] {
                        return !que->push(
                            [](void*) { return true; },
                            info->cc_id_, msg_id, remain, data, size);
                    }, tm)) {
                    ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size);
                    if (!que->force_push(
                            clear_message<typename queue_t::value_t>,
                            info->cc_id_, msg_id, remain, data, size)) {
                        return false;
                    }
                }
                info->rd_waiter_.broadcast();
                return true;
            };
        }, h, data, size);
    }

    static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
        return send([tm](auto info, auto que, auto msg_id) {
            return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
                if (!wait_for(info->wt_waiter_, [&] {
                        return !que->push(
                            [](void*) { return true; },
                            info->cc_id_, msg_id, remain, data, size);
                    }, tm)) {
                    return false;
                }
                info->rd_waiter_.broadcast();
                return true;
            };
        }, h, data, size);
    }

    static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) {
        auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
ipc::error("fail: recv, queue_of(h) == nullptr\n");
|
||||
return {};
|
||||
}
|
||||
if (!que->connected()) {
|
||||
// hasn't connected yet, just return.
|
||||
return {};
|
||||
}
|
||||
auto& rc = info_of(h)->recv_cache();
|
||||
for (;;) {
|
||||
// pop a new message
|
||||
typename queue_t::value_t msg;
|
||||
if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] {
|
||||
return !que->pop(msg);
|
||||
}, tm)) {
|
||||
// pop failed, just return.
|
||||
return {};
|
||||
}
|
||||
info_of(h)->wt_waiter_.broadcast();
|
||||
if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) {
|
||||
continue; // ignore message to self
|
||||
}
|
||||
// msg.remain_ may minus & abs(msg.remain_) < data_length
|
||||
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_;
|
||||
if (r_size <= 0) {
|
||||
ipc::error("fail: recv, r_size = %d\n", (int)r_size);
|
||||
return {};
|
||||
}
|
||||
std::size_t msg_size = static_cast<std::size_t>(r_size);
|
||||
// large message
|
||||
if (msg.storage_) {
|
||||
ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_);
|
||||
void* buf = find_storage(buf_id, msg_size);
|
||||
if (buf != nullptr) {
|
||||
struct recycle_t {
|
||||
ipc::storage_id_t storage_id;
|
||||
ipc::circ::cc_t curr_conns;
|
||||
ipc::circ::cc_t conn_id;
|
||||
} *r_info = ipc::mem::alloc<recycle_t>(recycle_t{
|
||||
buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id()
|
||||
});
|
||||
if (r_info == nullptr) {
|
||||
ipc::log("fail: ipc::mem::alloc<recycle_t>.\n");
|
||||
return ipc::buff_t{buf, msg_size}; // no recycle
|
||||
} else {
|
||||
return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) {
|
||||
auto r_info = static_cast<recycle_t *>(p_info);
|
||||
IPC_UNUSED_ auto finally = ipc::guard([r_info] {
|
||||
ipc::mem::free(r_info);
|
||||
});
|
||||
recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id);
|
||||
}, r_info};
|
||||
}
|
||||
} else {
|
||||
ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
// find cache with msg.id_
|
||||
auto cac_it = rc.find(msg.id_);
|
||||
if (cac_it == rc.end()) {
|
||||
if (msg_size <= ipc::data_length) {
|
||||
return make_cache(msg.data_, msg_size);
|
||||
}
|
||||
// gc
|
||||
if (rc.size() > 1024) {
|
||||
std::vector<msg_id_t> need_del;
|
||||
for (auto const & pair : rc) {
|
||||
auto cmp = std::minmax(msg.id_, pair.first);
|
||||
if (cmp.second - cmp.first > 8192) {
|
||||
need_del.push_back(pair.first);
|
||||
}
|
||||
}
|
||||
for (auto id : need_del) rc.erase(id);
|
||||
}
|
||||
// cache the first message fragment
|
||||
rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) });
|
||||
}
|
||||
// has cached before this message
|
||||
else {
|
||||
auto& cac = cac_it->second;
|
||||
// this is the last message fragment
|
||||
if (msg.remain_ <= 0) {
|
||||
cac.append(&(msg.data_), msg_size);
|
||||
// finish this message, erase it from cache
|
||||
auto buff = std::move(cac.buff_);
|
||||
rc.erase(cac_it);
|
||||
return buff;
|
||||
}
|
||||
// there are remain datas after this message
|
||||
cac.append(&(msg.data_), ipc::data_length);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static ipc::buff_t try_recv(ipc::handle_t h) {
|
||||
return recv(h, 0);
|
||||
}
|
||||
|
||||
}; // detail_impl<Policy>
|
||||
|
||||
template <typename Flag>
|
||||
using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>;
|
||||
|
||||
} // internal-linkage
|
||||
|
||||
namespace ipc {
|
||||
|
||||
template <typename Flag>
|
||||
ipc::handle_t chan_impl<Flag>::inited() {
|
||||
ipc::detail::waiter::init();
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) {
|
||||
return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) {
|
||||
return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
void chan_impl<Flag>::disconnect(ipc::handle_t h) {
|
||||
detail_impl<policy_t<Flag>>::disconnect(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
void chan_impl<Flag>::destroy(ipc::handle_t h) {
|
||||
detail_impl<policy_t<Flag>>::destroy(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
char const * chan_impl<Flag>::name(ipc::handle_t h) {
|
||||
auto info = detail_impl<policy_t<Flag>>::info_of(h);
|
||||
return (info == nullptr) ? nullptr : info->name_.c_str();
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) {
|
||||
return detail_impl<policy_t<Flag>>::recv_count(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::send(h, data, size, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::recv(h, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) {
|
||||
return detail_impl<policy_t<Flag>>::try_recv(h);
|
||||
}
|
||||
|
||||
template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast >>;
|
||||
// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast >>; // TBD
|
||||
// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast >>; // TBD
|
||||
template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>;
|
||||
template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>;
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,25 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <type_traits>
|
||||
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/prod_cons.h"
|
||||
|
||||
#include "libipc/circ/elem_array.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace policy {
|
||||
|
||||
template <template <typename, std::size_t...> class Elems, typename Flag>
|
||||
struct choose;
|
||||
|
||||
template <typename Flag>
|
||||
struct choose<circ::elem_array, Flag> {
|
||||
using flag_t = Flag;
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
|
||||
};
|
||||
|
||||
} // namespace policy
|
||||
} // namespace ipc
|
||||
@@ -1,17 +0,0 @@
|
||||
#include "libipc/pool_alloc.h"
|
||||
|
||||
#include "libipc/memory/resource.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace mem {
|
||||
|
||||
void* pool_alloc::alloc(std::size_t size) {
|
||||
return async_pool_alloc::alloc(size);
|
||||
}
|
||||
|
||||
void pool_alloc::free(void* p, std::size_t size) {
|
||||
async_pool_alloc::free(p, size);
|
||||
}
|
||||
|
||||
} // namespace mem
|
||||
} // namespace ipc
|
||||
@@ -1,433 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <atomic>
|
||||
#include <utility>
|
||||
#include <cstring>
|
||||
#include <type_traits>
|
||||
#include <cstdint>
|
||||
|
||||
#include "libipc/def.h"
|
||||
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_def.h"
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/utility/utility.h"
|
||||
|
||||
namespace ipc {
|
||||
|
||||
////////////////////////////////////////////////////////////////
|
||||
/// producer-consumer implementation
|
||||
////////////////////////////////////////////////////////////////
|
||||
|
||||
template <typename Flag>
|
||||
struct prod_cons_impl;
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
|
||||
constexpr circ::u2_t cursor() const noexcept {
|
||||
return 0;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
|
||||
return false; // full
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_wt].data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
|
||||
* So we could just disconnect all connections of receiver, and return false.
|
||||
*/
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
|
||||
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
|
||||
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_rd].data_));
|
||||
std::forward<R>(out)(true);
|
||||
rd_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(cur_rd) ==
|
||||
circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
|
||||
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
circ::u2_t cur_ct, nxt_ct;
|
||||
for (unsigned k = 0;;) {
|
||||
cur_ct = ct_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(nxt_ct = cur_ct + 1) ==
|
||||
circ::index_of(rd_.load(std::memory_order_acquire))) {
|
||||
return false; // full
|
||||
}
|
||||
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
auto* el = elems + circ::index_of(cur_ct);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
while (1) {
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
if ((~cac_ct) != cur_ct) {
|
||||
return true;
|
||||
}
|
||||
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
wt_.store(nxt_ct, std::memory_order_release);
|
||||
cur_ct = nxt_ct;
|
||||
nxt_ct = cur_ct + 1;
|
||||
el = elems + circ::index_of(cur_ct);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
auto cur_wt = wt_.load(std::memory_order_acquire);
|
||||
auto id_rd = circ::index_of(cur_rd);
|
||||
auto id_wt = circ::index_of(cur_wt);
|
||||
if (id_rd == id_wt) {
|
||||
auto* el = elems + id_wt;
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((~cac_ct) != cur_wt) {
|
||||
return false; // empty
|
||||
}
|
||||
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
wt_.store(cur_wt + 1, std::memory_order_release);
|
||||
}
|
||||
k = 0;
|
||||
}
|
||||
else {
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
ep_mask = 0x00000000ffffffffull,
|
||||
ep_incr = 0x0000000100000000ull
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t> rc_ { 0 }; // read-counter
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return wt_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
epoch_ += ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
|
||||
if (cur == cursor()) return false; // acquire
|
||||
auto* el = elems + circ::index_of(cur++);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & ep_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
rc_mask = 0x00000000ffffffffull,
|
||||
ep_mask = 0x00ffffffffffffffull,
|
||||
ep_incr = 0x0100000000000000ull,
|
||||
ic_mask = 0xff000000ffffffffull,
|
||||
ic_incr = 0x0000000100000000ull
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t > rc_ { 0 }; // read-counter
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return ct_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_rc(rc_t rc) noexcept {
|
||||
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_mask(rc_t rc) noexcept {
|
||||
return inc_rc(rc) & ~rc_mask;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.load(std::memory_order_acquire);
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
else if (!rem_cc) {
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((cur_fl != cur_ct) && cur_fl) {
|
||||
return false; // full
|
||||
}
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
|
||||
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
// only one thread/process would touch here at one time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
|
||||
if (epoch == epoch_.load(std::memory_order_acquire)) {
|
||||
break;
|
||||
}
|
||||
else if (push(wrapper, std::forward<F>(f), elems)) {
|
||||
return true;
|
||||
}
|
||||
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
// only one thread/process would touch here at one time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E, std::size_t N>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
|
||||
auto* el = elems + circ::index_of(cur);
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_fl != ~static_cast<flag_t>(cur)) {
|
||||
return false; // empty
|
||||
}
|
||||
++cur;
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & rc_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
bool last_one = false;
|
||||
if ((last_one = (nxt_rc & rc_mask) == 0)) {
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
}
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)(last_one);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,216 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <type_traits>
|
||||
#include <new>
|
||||
#include <utility> // [[since C++14]]: std::exchange
|
||||
#include <algorithm>
|
||||
#include <atomic>
|
||||
#include <tuple>
|
||||
#include <thread>
|
||||
#include <chrono>
|
||||
#include <string>
|
||||
#include <cassert> // assert
|
||||
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/shm.h"
|
||||
#include "libipc/rw_lock.h"
|
||||
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_def.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace detail {
|
||||
|
||||
class queue_conn {
|
||||
protected:
|
||||
circ::cc_t connected_ = 0;
|
||||
shm::handle elems_h_;
|
||||
|
||||
template <typename Elems>
|
||||
Elems* open(char const * name) {
|
||||
if (name == nullptr || name[0] == '\0') {
|
||||
ipc::error("fail open waiter: name is empty!\n");
|
||||
return nullptr;
|
||||
}
|
||||
if (!elems_h_.acquire(name, sizeof(Elems))) {
|
||||
return nullptr;
|
||||
}
|
||||
auto elems = static_cast<Elems*>(elems_h_.get());
|
||||
if (elems == nullptr) {
|
||||
ipc::error("fail acquire elems: %s\n", name);
|
||||
return nullptr;
|
||||
}
|
||||
elems->init();
|
||||
return elems;
|
||||
}
|
||||
|
||||
void close() {
|
||||
elems_h_.release();
|
||||
}
|
||||
|
||||
public:
|
||||
queue_conn() = default;
|
||||
queue_conn(const queue_conn&) = delete;
|
||||
queue_conn& operator=(const queue_conn&) = delete;
|
||||
|
||||
bool connected() const noexcept {
|
||||
return connected_ != 0;
|
||||
}
|
||||
|
||||
circ::cc_t connected_id() const noexcept {
|
||||
return connected_;
|
||||
}
|
||||
|
||||
template <typename Elems>
|
||||
auto connect(Elems* elems) noexcept
|
||||
/*needs 'optional' here*/
|
||||
-> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
|
||||
if (elems == nullptr) return {};
|
||||
// if it's already connected, just return
|
||||
if (connected()) return {connected(), false, 0};
|
||||
connected_ = elems->connect_receiver();
|
||||
return {connected(), true, elems->cursor()};
|
||||
}
|
||||
|
||||
template <typename Elems>
|
||||
bool disconnect(Elems* elems) noexcept {
|
||||
if (elems == nullptr) return false;
|
||||
// if it's already disconnected, just return false
|
||||
if (!connected()) return false;
|
||||
elems->disconnect_receiver(std::exchange(connected_, 0));
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
template <typename Elems>
|
||||
class queue_base : public queue_conn {
|
||||
using base_t = queue_conn;
|
||||
|
||||
public:
|
||||
using elems_t = Elems;
|
||||
using policy_t = typename elems_t::policy_t;
|
||||
|
||||
protected:
|
||||
elems_t * elems_ = nullptr;
|
||||
decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
|
||||
bool sender_flag_ = false;
|
||||
|
||||
public:
|
||||
using base_t::base_t;
|
||||
|
||||
queue_base() = default;
|
||||
|
||||
explicit queue_base(char const * name)
|
||||
: queue_base{} {
|
||||
elems_ = open<elems_t>(name);
|
||||
}
|
||||
|
||||
explicit queue_base(elems_t * elems) noexcept
|
||||
: queue_base{} {
|
||||
assert(elems != nullptr);
|
||||
elems_ = elems;
|
||||
}
|
||||
|
||||
/* not virtual */ ~queue_base() {
|
||||
base_t::close();
|
||||
}
|
||||
|
||||
elems_t * elems() noexcept { return elems_; }
|
||||
elems_t const * elems() const noexcept { return elems_; }
|
||||
|
||||
bool ready_sending() noexcept {
|
||||
if (elems_ == nullptr) return false;
|
||||
return sender_flag_ || (sender_flag_ = elems_->connect_sender());
|
||||
}
|
||||
|
||||
void shut_sending() noexcept {
|
||||
if (elems_ == nullptr) return;
|
||||
if (!sender_flag_) return;
|
||||
elems_->disconnect_sender();
|
||||
}
|
||||
|
||||
bool connect() noexcept {
|
||||
auto tp = base_t::connect(elems_);
|
||||
if (std::get<0>(tp) && std::get<1>(tp)) {
|
||||
cursor_ = std::get<2>(tp);
|
||||
return true;
|
||||
}
|
||||
return std::get<0>(tp);
|
||||
}
|
||||
|
||||
bool disconnect() noexcept {
|
||||
return base_t::disconnect(elems_);
|
||||
}
|
||||
|
||||
std::size_t conn_count() const noexcept {
|
||||
return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
|
||||
}
|
||||
|
||||
bool valid() const noexcept {
|
||||
return elems_ != nullptr;
|
||||
}
|
||||
|
||||
bool empty() const noexcept {
|
||||
return !valid() || (cursor_ == elems_->cursor());
|
||||
}
|
||||
|
||||
template <typename T, typename F, typename... P>
|
||||
bool push(F&& prep, P&&... params) {
|
||||
if (elems_ == nullptr) return false;
|
||||
return elems_->push(this, [&](void* p) {
|
||||
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
|
||||
});
|
||||
}
|
||||
|
||||
template <typename T, typename F, typename... P>
|
||||
bool force_push(F&& prep, P&&... params) {
|
||||
if (elems_ == nullptr) return false;
|
||||
return elems_->force_push(this, [&](void* p) {
|
||||
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
|
||||
});
|
||||
}
|
||||
|
||||
template <typename T, typename F>
|
||||
bool pop(T& item, F&& out) {
|
||||
if (elems_ == nullptr) {
|
||||
return false;
|
||||
}
|
||||
return elems_->pop(this, &(this->cursor_), [&item](void* p) {
|
||||
::new (&item) T(std::move(*static_cast<T*>(p)));
|
||||
}, std::forward<F>(out));
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace detail
|
||||
|
||||
template <typename T, typename Policy>
|
||||
class queue final : public detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>> {
|
||||
using base_t = detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>>;
|
||||
|
||||
public:
|
||||
using value_t = T;
|
||||
|
||||
using base_t::base_t;
|
||||
|
||||
template <typename... P>
|
||||
bool push(P&&... params) {
|
||||
return base_t::template push<T>(std::forward<P>(params)...);
|
||||
}
|
||||
|
||||
template <typename... P>
|
||||
bool force_push(P&&... params) {
|
||||
return base_t::template force_push<T>(std::forward<P>(params)...);
|
||||
}
|
||||
|
||||
bool pop(T& item) {
|
||||
return base_t::pop(item, [](bool) {});
|
||||
}
|
||||
|
||||
template <typename F>
|
||||
bool pop(T& item, F&& out) {
|
||||
return base_t::pop(item, std::forward<F>(out));
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,103 +0,0 @@
|
||||
|
||||
#include <string>
|
||||
#include <utility>
|
||||
|
||||
#include "libipc/shm.h"
|
||||
|
||||
#include "libipc/utility/pimpl.h"
|
||||
#include "libipc/memory/resource.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace shm {
|
||||
|
||||
class handle::handle_ : public pimpl<handle_> {
|
||||
public:
|
||||
shm::id_t id_ = nullptr;
|
||||
void* m_ = nullptr;
|
||||
|
||||
ipc::string n_;
|
||||
std::size_t s_ = 0;
|
||||
};
|
||||
|
||||
handle::handle()
|
||||
: p_(p_->make()) {
|
||||
}
|
||||
|
||||
handle::handle(char const * name, std::size_t size, unsigned mode)
|
||||
: handle() {
|
||||
acquire(name, size, mode);
|
||||
}
|
||||
|
||||
handle::handle(handle&& rhs)
|
||||
: handle() {
|
||||
swap(rhs);
|
||||
}
|
||||
|
||||
handle::~handle() {
|
||||
release();
|
||||
p_->clear();
|
||||
}
|
||||
|
||||
void handle::swap(handle& rhs) {
|
||||
std::swap(p_, rhs.p_);
|
||||
}
|
||||
|
||||
handle& handle::operator=(handle rhs) {
|
||||
swap(rhs);
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool handle::valid() const noexcept {
|
||||
return impl(p_)->m_ != nullptr;
|
||||
}
|
||||
|
||||
std::size_t handle::size() const noexcept {
|
||||
return impl(p_)->s_;
|
||||
}
|
||||
|
||||
char const * handle::name() const noexcept {
|
||||
return impl(p_)->n_.c_str();
|
||||
}
|
||||
|
||||
std::int32_t handle::ref() const noexcept {
|
||||
return shm::get_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
void handle::sub_ref() noexcept {
|
||||
shm::sub_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
|
||||
release();
|
||||
impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
return valid();
|
||||
}
|
||||
|
||||
std::int32_t handle::release() {
|
||||
if (impl(p_)->id_ == nullptr) return -1;
|
||||
return shm::release(detach());
|
||||
}
|
||||
|
||||
void* handle::get() const {
|
||||
return impl(p_)->m_;
|
||||
}
|
||||
|
||||
void handle::attach(id_t id) {
|
||||
if (id == nullptr) return;
|
||||
release();
|
||||
impl(p_)->id_ = id;
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
}
|
||||
|
||||
id_t handle::detach() {
|
||||
auto old = impl(p_)->id_;
|
||||
impl(p_)->id_ = nullptr;
|
||||
impl(p_)->m_ = nullptr;
|
||||
impl(p_)->s_ = 0;
|
||||
impl(p_)->n_.clear();
|
||||
return old;
|
||||
}
|
||||
|
||||
} // namespace shm
|
||||
} // namespace ipc
|
||||
@@ -1,83 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <utility>
|
||||
#include <string>
|
||||
#include <mutex>
|
||||
#include <atomic>
|
||||
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/mutex.h"
|
||||
#include "libipc/condition.h"
|
||||
#include "libipc/platform/detail.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace detail {
|
||||
|
||||
class waiter {
|
||||
ipc::sync::condition cond_;
|
||||
ipc::sync::mutex lock_;
|
||||
std::atomic<bool> quit_ {false};
|
||||
|
||||
public:
|
||||
static void init();
|
||||
|
||||
waiter() = default;
|
||||
waiter(char const *name) {
|
||||
open(name);
|
||||
}
|
||||
|
||||
~waiter() {
|
||||
close();
|
||||
}
|
||||
|
||||
bool valid() const noexcept {
|
||||
return cond_.valid() && lock_.valid();
|
||||
}
|
||||
|
||||
bool open(char const *name) noexcept {
|
||||
quit_.store(false, std::memory_order_relaxed);
|
||||
if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
|
||||
return false;
|
||||
}
|
||||
if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
|
||||
cond_.close();
|
||||
return false;
|
||||
}
|
||||
return valid();
|
||||
}
|
||||
|
||||
void close() noexcept {
|
||||
cond_.close();
|
||||
lock_.close();
|
||||
}
|
||||
|
||||
template <typename F>
|
||||
bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
|
||||
IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
|
||||
while ([this, &pred] {
|
||||
return !quit_.load(std::memory_order_relaxed)
|
||||
&& std::forward<F>(pred)();
|
||||
}()) {
|
||||
if (!cond_.wait(lock_, tm)) return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
bool notify() noexcept {
|
||||
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
|
||||
return cond_.notify(lock_);
|
||||
}
|
||||
|
||||
bool broadcast() noexcept {
|
||||
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
|
||||
return cond_.broadcast(lock_);
|
||||
}
|
||||
|
||||
bool quit_waiting() {
|
||||
quit_.store(true, std::memory_order_release);
|
||||
return broadcast();
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace detail
|
||||
} // namespace ipc
|
||||
@@ -1,3 +0,0 @@
|
||||
https://github.com/mutouyun/cpp-ipc
|
||||
|
||||
A high-performance inter-process communication library using shared memory on Linux/Windows.
|
||||
文件差异内容过多而无法显示
加载差异
@@ -1,316 +0,0 @@
|
||||
// jpgd.h - C++ class for JPEG decompression.
|
||||
// Public domain, Rich Geldreich <richgel99@gmail.com>
|
||||
#ifndef JPEG_DECODER_H
|
||||
#define JPEG_DECODER_H
|
||||
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
#include <setjmp.h>
|
||||
|
||||
namespace jpgd
|
||||
{
|
||||
typedef unsigned char uint8;
|
||||
typedef signed short int16;
|
||||
typedef unsigned short uint16;
|
||||
typedef unsigned int uint;
|
||||
typedef signed int int32;
|
||||
|
||||
// Loads a JPEG image from a memory buffer or a file.
|
||||
// req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA).
|
||||
// On return, width/height will be set to the image's dimensions, and actual_comps will be set to the either 1 (grayscale) or 3 (RGB).
|
||||
// Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly.
|
||||
// Requesting a 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp.
|
||||
// BEGIN EPIC MOD
|
||||
//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps);
|
||||
unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format);
|
||||
// END EPIC MOD
|
||||
unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps);
|
||||
|
||||
// Success/failure error codes.
|
||||
enum jpgd_status
|
||||
{
|
||||
JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1,
|
||||
JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE,
|
||||
JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS,
|
||||
JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH,
|
||||
JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER,
|
||||
JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS,
|
||||
JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE,
|
||||
JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR,
|
||||
JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM
|
||||
};
|
||||
|
||||
// Input stream interface.
|
||||
// Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available.
|
||||
// The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set.
|
||||
// It the input stream contains data after the JPEG stream's EOI (end of image) marker it will probably be pulled into the internal buffer.
|
||||
// Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding.
|
||||
class jpeg_decoder_stream
|
||||
{
|
||||
public:
|
||||
jpeg_decoder_stream() { }
|
||||
virtual ~jpeg_decoder_stream() { }
|
||||
|
||||
// The read() method is called when the internal input buffer is empty.
|
||||
// Parameters:
|
||||
// pBuf - input buffer
|
||||
// max_bytes_to_read - maximum bytes that can be written to pBuf
|
||||
// pEOF_flag - set this to true if at end of stream (no more bytes remaining)
|
||||
// Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0).
|
||||
// Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full.
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0;
|
||||
};
|
||||
|
||||
// stdio FILE stream class.
|
||||
class jpeg_decoder_file_stream : public jpeg_decoder_stream
|
||||
{
|
||||
jpeg_decoder_file_stream(const jpeg_decoder_file_stream &);
|
||||
jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &);
|
||||
|
||||
FILE *m_pFile;
|
||||
bool m_eof_flag, m_error_flag;
|
||||
|
||||
public:
|
||||
jpeg_decoder_file_stream();
|
||||
virtual ~jpeg_decoder_file_stream();
|
||||
|
||||
bool open(const char *Pfilename);
|
||||
void close();
|
||||
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
|
||||
};
|
||||
|
||||
// Memory stream class.
|
||||
class jpeg_decoder_mem_stream : public jpeg_decoder_stream
|
||||
{
|
||||
const uint8 *m_pSrc_data;
|
||||
uint m_ofs, m_size;
|
||||
|
||||
public:
|
||||
jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { }
|
||||
jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { }
|
||||
|
||||
virtual ~jpeg_decoder_mem_stream() { }
|
||||
|
||||
bool open(const uint8 *pSrc_data, uint size);
|
||||
void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; }
|
||||
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
|
||||
};
|
||||
|
||||
// Loads JPEG file from a jpeg_decoder_stream.
|
||||
unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps);
|
||||
|
||||
enum
|
||||
{
|
||||
JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4,
|
||||
JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384
|
||||
};
|
||||
|
||||
typedef int16 jpgd_quant_t;
|
||||
typedef int16 jpgd_block_t;
|
||||
|
||||
class jpeg_decoder
|
||||
{
|
||||
public:
|
||||
// Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc.
|
||||
// methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline.
|
||||
jpeg_decoder(jpeg_decoder_stream *pStream);
|
||||
|
||||
~jpeg_decoder();
|
||||
|
||||
// Call this method after constructing the object to begin decompression.
|
||||
// If JPGD_SUCCESS is returned you may then call decode() on each scanline.
|
||||
int begin_decoding();
|
||||
|
||||
// Returns the next scan line.
|
||||
// For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1).
|
||||
// Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4).
|
||||
// Returns JPGD_SUCCESS if a scan line has been returned.
|
||||
// Returns JPGD_DONE if all scan lines have been returned.
|
||||
// Returns JPGD_FAILED if an error occurred. Call get_error_code() for a more info.
|
||||
int decode(const void** pScan_line, uint* pScan_line_len);
|
||||
|
||||
inline jpgd_status get_error_code() const { return m_error_code; }
|
||||
|
||||
inline int get_width() const { return m_image_x_size; }
|
||||
inline int get_height() const { return m_image_y_size; }
|
||||
|
||||
inline int get_num_components() const { return m_comps_in_frame; }
|
||||
|
||||
inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; }
|
||||
inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); }
|
||||
|
||||
// Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file).
|
||||
inline int get_total_bytes_read() const { return m_total_bytes_read; }
|
||||
|
||||
private:
|
||||
jpeg_decoder(const jpeg_decoder &);
|
||||
jpeg_decoder &operator =(const jpeg_decoder &);
|
||||
|
||||
typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int);
|
||||
|
||||
struct huff_tables
|
||||
{
|
||||
bool ac_table;
|
||||
uint look_up[256];
|
||||
uint look_up2[256];
|
||||
uint8 code_size[256];
|
||||
uint tree[512];
|
||||
};
|
||||
|
||||
struct coeff_buf
|
||||
{
|
||||
uint8 *pData;
|
||||
int block_num_x, block_num_y;
|
||||
int block_len_x, block_len_y;
|
||||
int block_size;
|
||||
};
|
||||
|
||||
struct mem_block
|
||||
{
|
||||
mem_block *m_pNext;
|
||||
size_t m_used_count;
|
||||
size_t m_size;
|
||||
char m_data[1];
|
||||
};
|
||||
|
||||
jmp_buf m_jmp_state;
|
||||
mem_block *m_pMem_blocks;
|
||||
int m_image_x_size;
|
||||
int m_image_y_size;
|
||||
jpeg_decoder_stream *m_pStream;
|
||||
int m_progressive_flag;
|
||||
uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES];
|
||||
uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size
|
||||
uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size
|
||||
jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables
|
||||
int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported)
|
||||
int m_comps_in_frame; // # of components in frame
|
||||
int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor
|
||||
int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor
|
||||
int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector
|
||||
int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID
|
||||
int m_comp_h_blocks[JPGD_MAX_COMPONENTS];
|
||||
int m_comp_v_blocks[JPGD_MAX_COMPONENTS];
|
||||
int m_comps_in_scan; // # of components in scan
|
||||
int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan
|
||||
int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector
|
||||
int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector
|
||||
int m_spectral_start; // spectral selection start
|
||||
int m_spectral_end; // spectral selection end
|
||||
int m_successive_low; // successive approximation low
|
||||
int m_successive_high; // successive approximation high
|
||||
int m_max_mcu_x_size; // MCU's max. X size in pixels
|
||||
int m_max_mcu_y_size; // MCU's max. Y size in pixels
|
||||
int m_blocks_per_mcu;
|
||||
int m_max_blocks_per_row;
|
||||
int m_mcus_per_row, m_mcus_per_col;
|
||||
int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU];
|
||||
int m_total_lines_left; // total # lines left in image
|
||||
int m_mcu_lines_left; // total # lines left in this MCU
|
||||
int m_real_dest_bytes_per_scan_line;
|
||||
int m_dest_bytes_per_scan_line; // rounded up
|
||||
int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y)
|
||||
huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES];
|
||||
coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS];
|
||||
coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS];
|
||||
int m_eob_run;
|
||||
int m_block_y_mcu[JPGD_MAX_COMPONENTS];
|
||||
uint8* m_pIn_buf_ofs;
|
||||
int m_in_buf_left;
|
||||
int m_tem_flag;
|
||||
bool m_eof_flag;
|
||||
uint8 m_in_buf_pad_start[128];
|
||||
uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128];
|
||||
uint8 m_in_buf_pad_end[128];
|
||||
int m_bits_left;
|
||||
uint m_bit_buf;
|
||||
int m_restart_interval;
|
||||
int m_restarts_left;
|
||||
int m_next_restart_num;
|
||||
int m_max_mcus_per_row;
|
||||
int m_max_blocks_per_mcu;
|
||||
int m_expanded_blocks_per_mcu;
|
||||
int m_expanded_blocks_per_row;
|
||||
int m_expanded_blocks_per_component;
|
||||
bool m_freq_domain_chroma_upsample;
|
||||
int m_max_mcus_per_col;
|
||||
uint m_last_dc_val[JPGD_MAX_COMPONENTS];
|
||||
jpgd_block_t* m_pMCU_coefficients;
|
||||
int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU];
|
||||
uint8* m_pSample_buf;
|
||||
int m_crr[256];
|
||||
int m_cbb[256];
|
||||
int m_crg[256];
|
||||
int m_cbg[256];
|
||||
uint8* m_pScan_line_0;
|
||||
uint8* m_pScan_line_1;
|
||||
jpgd_status m_error_code;
|
||||
bool m_ready_flag;
|
||||
int m_total_bytes_read;
|
||||
|
||||
void free_all_blocks();
|
||||
// BEGIN EPIC MOD
|
||||
UE_NORETURN void stop_decoding(jpgd_status status);
|
||||
// END EPIC MOD
|
||||
void *alloc(size_t n, bool zero = false);
|
||||
void word_clear(void *p, uint16 c, uint n);
|
||||
void prep_in_buffer();
|
||||
void read_dht_marker();
|
||||
void read_dqt_marker();
|
||||
void read_sof_marker();
|
||||
void skip_variable_marker();
|
||||
void read_dri_marker();
|
||||
void read_sos_marker();
|
||||
int next_marker();
|
||||
int process_markers();
|
||||
void locate_soi_marker();
|
||||
void locate_sof_marker();
|
||||
int locate_sos_marker();
|
||||
void init(jpeg_decoder_stream * pStream);
|
||||
void create_look_ups();
|
||||
void fix_in_buffer();
|
||||
void transform_mcu(int mcu_row);
|
||||
void transform_mcu_expand(int mcu_row);
|
||||
coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y);
|
||||
inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y);
|
||||
void load_next_row();
|
||||
void decode_next_row();
|
||||
void make_huff_table(int index, huff_tables *pH);
|
||||
void check_quant_tables();
|
||||
void check_huff_tables();
|
||||
void calc_mcu_block_order();
|
||||
int init_scan();
|
||||
void init_frame();
|
||||
void process_restart();
|
||||
void decode_scan(pDecode_block_func decode_block_func);
|
||||
void init_progressive();
|
||||
void init_sequential();
|
||||
void decode_start();
|
||||
void decode_init(jpeg_decoder_stream * pStream);
|
||||
void H2V2Convert();
|
||||
void H2V1Convert();
|
||||
void H1V2Convert();
|
||||
void H1V1Convert();
|
||||
void gray_convert();
|
||||
void expanded_convert();
|
||||
void find_eoi();
|
||||
inline uint get_char();
|
||||
inline uint get_char(bool *pPadding_flag);
|
||||
inline void stuff_char(uint8 q);
|
||||
inline uint8 get_octet();
|
||||
inline uint get_bits(int num_bits);
|
||||
inline uint get_bits_no_markers(int numbits);
|
||||
inline int huff_decode(huff_tables *pH);
|
||||
inline int huff_decode(huff_tables *pH, int& extrabits);
|
||||
static inline uint8 clamp(int i);
|
||||
static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
};
|
||||
|
||||
} // namespace jpgd
|
||||
|
||||
#endif // JPEG_DECODER_H
|
||||
文件差异内容过多而无法显示
加载差异
@@ -1,172 +0,0 @@
|
||||
|
||||
// jpge.h - C++ class for JPEG compression.
|
||||
// Public domain, Rich Geldreich <richgel99@gmail.com>
|
||||
// Alex Evans: Added RGBA support, linear memory allocator.
|
||||
#ifndef JPEG_ENCODER_H
|
||||
#define JPEG_ENCODER_H
|
||||
|
||||
#include <stdint.h>
|
||||
|
||||
namespace jpge
|
||||
{
|
||||
typedef unsigned char uint8;
|
||||
typedef signed short int16;
|
||||
typedef signed int int32;
|
||||
typedef unsigned short uint16;
|
||||
typedef unsigned int uint32;
|
||||
typedef unsigned int uint;
|
||||
|
||||
// JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
|
||||
enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
|
||||
|
||||
// JPEG compression parameters structure.
|
||||
struct params
|
||||
{
|
||||
inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
|
||||
|
||||
inline bool check_valid() const
|
||||
{
|
||||
if ((m_quality < 1) || (m_quality > 100)) return false;
|
||||
if ((uint)m_subsampling > (uint)H2V2) return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
// Quality: 1-100, higher is better. Typical values are around 50-95.
|
||||
int m_quality;
|
||||
|
||||
// m_subsampling:
|
||||
// 0 = Y (grayscale) only
|
||||
// 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
|
||||
// 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
|
||||
// 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
|
||||
subsampling_t m_subsampling;
|
||||
|
||||
// Disables CbCr discrimination - only intended for testing.
|
||||
// If true, the Y quantization table is also used for the CbCr channels.
|
||||
bool m_no_chroma_discrim_flag;
|
||||
|
||||
bool m_two_pass_flag;
|
||||
};
|
||||
|
||||
// Writes JPEG image to a file.
|
||||
// num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
|
||||
bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
|
||||
|
||||
// Writes JPEG image to memory buffer.
|
||||
// On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
|
||||
// If return value is true, buf_size will be set to the size of the compressed data.
|
||||
bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
|
||||
|
||||
// Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
|
||||
// put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
|
||||
class output_stream
|
||||
{
|
||||
public:
|
||||
virtual ~output_stream() { };
|
||||
virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
|
||||
template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
|
||||
};

    // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
    class jpeg_encoder
    {
    public:
        jpeg_encoder();
        ~jpeg_encoder();

        // Initializes the compressor.
        // pStream: The stream object to use for writing compressed data.
        // params - Compression parameters structure, defined above.
        // width, height - Image dimensions.
        // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
        // Returns false on out of memory or if a stream write fails.
        bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());

        const params &get_params() const { return m_params; }

        // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
        void deinit();

        uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
        inline uint get_cur_pass() { return m_pass_num; }

        // Call this method with each source scanline.
        // width * src_channels bytes per scanline is expected (RGB or Y format).
        // You must call with NULL after all scanlines are processed to finish compression.
        // Returns false on out of memory or if a stream write fails.
        bool process_scanline(const void* pScanline);

    private:
        jpeg_encoder(const jpeg_encoder &);
        jpeg_encoder &operator =(const jpeg_encoder &);

        typedef int32 sample_array_t;

        output_stream *m_pStream;
        params m_params;
        uint8 m_num_components;
        uint8 m_comp_h_samp[3], m_comp_v_samp[3];
        int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
        int m_image_x_mcu, m_image_y_mcu;
        int m_image_bpl_xlt, m_image_bpl_mcu;
        int m_mcus_per_row;
        int m_mcu_x, m_mcu_y;
        uint8 *m_mcu_lines[16];
        uint8 m_mcu_y_ofs;
        sample_array_t m_sample_array[64];
        int16 m_coefficient_array[64];
        int32 m_quantization_tables[2][64];
        uint m_huff_codes[4][256];
        uint8 m_huff_code_sizes[4][256];
        uint8 m_huff_bits[4][17];
        uint8 m_huff_val[4][256];
        uint32 m_huff_count[4][256];
        int m_last_dc_val[3];
        enum { JPGE_OUT_BUF_SIZE = 2048 };
        uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
        uint8 *m_pOut_buf;
        uint m_out_buf_left;
        uint32 m_bit_buffer;
        uint m_bits_in;
        uint8 m_pass_num;
        bool m_all_stream_writes_succeeded;

        void optimize_huffman_table(int table_num, int table_len);
        void emit_byte(uint8 i);
        void emit_word(uint i);
        void emit_marker(int marker);
        void emit_jfif_app0();
        void emit_dqt();
        void emit_sof();
        void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
        void emit_dhts();
        void emit_sos();
        void emit_markers();
        void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
        void compute_quant_table(int32 *dst, int16 *src);
        void adjust_quant_table(int32 *dst, int32 *src);
        void first_pass_init();
        bool second_pass_init();
        bool jpg_open(int p_x_res, int p_y_res, int src_channels);
        void load_block_8_8_grey(int x);
        void load_block_8_8(int x, int y, int c);
        void load_block_16_8(int x, int c);
        void load_block_16_8_8(int x, int c);
        void load_quantized_coefficients(int component_num);
        void flush_output_buffer();
        void put_bits(uint bits, uint len);
        void code_coefficients_pass_one(int component_num);
        void code_coefficients_pass_two(int component_num);
        void code_block(int component_num);
        void process_mcu_row();
        bool terminate_pass_one();
        bool terminate_pass_two();
        bool process_end_of_image();
        void load_mcu(const void* src);
        void clear();
        void init();
    };

} // namespace jpge

#endif // JPEG_ENCODER
@@ -1,3 +0,0 @@
jpge.h - C++ class for JPEG compression.
Public domain, Rich Geldreich <richgel99@gmail.com>
Alex Evans: Added RGBA support, linear memory allocator.
(file diff too large to display)
(file diff too large to display)
@@ -1,433 +0,0 @@
#pragma once

#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>

#include "libipc/def.h"

#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"

namespace ipc {

////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////

template <typename Flag>
struct prod_cons_impl;

template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index

    constexpr circ::u2_t cursor() const noexcept {
        return 0;
    }

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
        if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
            return false; // full
        }
        std::forward<F>(f)(&(elems[cur_wt].data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    /**
     * In single-single-unicast, 'force_push' means 'no reader' or 'the only reader is dead'.
     * So we can just disconnect all connections of the receiver, and return false.
     */
    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
        return false;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
        auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
        if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
            return false; // empty
        }
        std::forward<F>(f)(&(elems[cur_rd].data_));
        std::forward<R>(out)(true);
        rd_.fetch_add(1, std::memory_order_release);
        return true;
    }
};
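
// Editor's note: in this single-producer/single-consumer ring, push() reports
// "full" when the write index would land on rd_ - 1, i.e. one slot is always
// left unused so that the full and empty states stay distinguishable. The
// release increment of wt_ in push() pairs with the acquire load in pop(), so
// the consumer never observes a slot before f has finished constructing it.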

template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            if (circ::index_of(cur_rd) ==
                circ::index_of(wt_.load(std::memory_order_acquire))) {
                return false; // empty
            }
            std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
            if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                std::forward<F>(f)(buff);
                std::forward<R>(out)(true);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {

    using flag_t = std::uint64_t;

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        circ::u2_t cur_ct, nxt_ct;
        for (unsigned k = 0;;) {
            cur_ct = ct_.load(std::memory_order_relaxed);
            if (circ::index_of(nxt_ct = cur_ct + 1) ==
                circ::index_of(rd_.load(std::memory_order_acquire))) {
                return false; // full
            }
            if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        auto* el = elems + circ::index_of(cur_ct);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        while (1) {
            auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
            if (cur_ct != wt_.load(std::memory_order_relaxed)) {
                return true;
            }
            if ((~cac_ct) != cur_ct) {
                return true;
            }
            if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
                return true;
            }
            wt_.store(nxt_ct, std::memory_order_release);
            cur_ct = nxt_ct;
            nxt_ct = cur_ct + 1;
            el = elems + circ::index_of(cur_ct);
        }
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            auto cur_wt = wt_.load(std::memory_order_acquire);
            auto id_rd  = circ::index_of(cur_rd);
            auto id_wt  = circ::index_of(cur_wt);
            if (id_rd == id_wt) {
                auto* el = elems + id_wt;
                auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
                if ((~cac_ct) != cur_wt) {
                    return false; // empty
                }
                if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
                    wt_.store(cur_wt + 1, std::memory_order_release);
                }
                k = 0;
            }
            else {
                std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
                if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                    std::forward<F>(f)(buff);
                    std::forward<R>(out)(true);
                    return true;
                }
                ipc::yield(k);
            }
        }
    }
};
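
// Editor's note: with multiple producers, ct_ hands out slots (the CAS loop in
// push() above) while wt_ only advances past elements that are actually
// committed. A producer publishes its slot by storing ~cur_ct into f_ct_;
// whichever thread then finds wt_ pointing at a committed element rolls wt_
// forward, so out-of-order completions never expose an unwritten slot.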

template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {

    using rc_t = std::uint64_t;

    enum : rc_t {
        ep_mask = 0x00000000ffffffffull,
        ep_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t> rc_ { 0 }; // read-counter
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
    alignas(cache_line_size) rc_t epoch_ { 0 };           // only one writer

    circ::u2_t cursor() const noexcept {
        return wt_.load(std::memory_order_acquire);
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
                return false; // has not finished yet
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                    cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        epoch_ += ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                    cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
        if (cur == cursor()) return false; // acquire
        auto* el = elems + circ::index_of(cur++);
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & ep_mask) == 0) {
                std::forward<R>(out)(true);
                return true;
            }
            auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)((nxt_rc & ep_mask) == 0);
                return true;
            }
            ipc::yield(k);
        }
    }
};
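
// Editor's note: in this broadcast mode every connected reader consumes every
// element. rc_ packs a per-reader bitmask in its low 32 bits and the writer's
// epoch in the upper bits; push() refuses to overwrite a slot while readers
// from the current epoch still have bits set, and each pop() clears only its
// own bit in the CAS loop above.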

template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {

    using rc_t   = std::uint64_t;
    using flag_t = std::uint64_t;

    enum : rc_t {
        rc_mask = 0x00000000ffffffffull,
        ep_mask = 0x00ffffffffffffffull,
        ep_incr = 0x0100000000000000ull,
        ic_mask = 0xff000000ffffffffull,
        ic_incr = 0x0000000100000000ull
    };
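
    // Editor's reading of the masks above (an interpretation, not original
    // documentation): rc_ packs three fields into 64 bits -- bits 0-31
    // (rc_mask) hold the per-reader pending bitmask, bits 32-55 hold a wrap
    // counter stepped by ic_incr (inc_rc() below bumps it while ic_mask
    // preserves the other fields), and the top 8 bits, stepped by ep_incr,
    // hold the epoch used to invalidate stale readers in force_push().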

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t  > rc_   { 0 }; // read-counter
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
    alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };

    circ::u2_t cursor() const noexcept {
        return ct_.load(std::memory_order_acquire);
    }

    constexpr static rc_t inc_rc(rc_t rc) noexcept {
        return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
    }

    constexpr static rc_t inc_mask(rc_t rc) noexcept {
        return inc_rc(rc) & ~rc_mask;
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.load(std::memory_order_acquire);
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_relaxed);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
                return false; // has not finished yet
            }
            else if (!rem_cc) {
                auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
                if ((cur_fl != cur_ct) && cur_fl) {
                    return false; // full
                }
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                    cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
                epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                    cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
                if (epoch == epoch_.load(std::memory_order_acquire)) {
                    break;
                }
                else if (push(wrapper, std::forward<F>(f), elems)) {
                    return true;
                }
                epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E, std::size_t N>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
        auto* el = elems + circ::index_of(cur);
        auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
        if (cur_fl != ~static_cast<flag_t>(cur)) {
            return false; // empty
        }
        ++cur;
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & rc_mask) == 0) {
                std::forward<R>(out)(true);
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
                return true;
            }
            auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
            bool last_one = false;
            if ((last_one = (nxt_rc & rc_mask) == 0)) {
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
            }
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)(last_one);
                return true;
            }
            ipc::yield(k);
        }
    }
};

} // namespace ipc
@@ -1,58 +0,0 @@
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building blocks, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.


%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.

%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.

%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent encoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.

%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.

%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.



%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmic number of layers (does bytenet have SOTA results?)

%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.

%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.

%\begin{table}[h!]
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
%\label{tab:op_complexities}
%\begin{center}
%\vspace{-5pt}
%\scalebox{0.75}{

%\begin{tabular}{l|c|c|c}
%\hline \hline
%Layer Type & Receptive & Complexity & Sequential \\
% & Field & & Operations \\
%\hline
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
%\hline
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
%\hline
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
%\hline
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n \cdot d^2)$ & $O(1)$ \\
%\hline
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
%\hline \hline
%\end{tabular}
%}
%\end{center}
%\end{table}
@@ -1,18 +0,0 @@
Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
%\marginpar{not sure if the memory constraints are understandable here}
Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.

%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.

%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}

% Just a standard paragraph with citations, rewrite.
%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state prevents recurrent models from processing multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously, and they haven't been used on datasets that are of the scale of the web. What's the largest dataset we have? Talk about Nvidia and possibly others' efforts to speed up things, and possibly other efforts that alleviate this, but are still limited by its computational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet, facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term memory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.
@@ -1,155 +0,0 @@

\begin{figure}
\centering
\includegraphics[scale=0.6]{Figures/ModalNet-21}
\caption{The Transformer - model architecture.}
\label{fig:model-arch}
\end{figure}

% Although the primary workhorse of our model is attention,
%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.

Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
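% Editor's note (not in the original source): written out, the auto-regressive
% factorization described above is
%\begin{equation*}
% p(y_1, \dots, y_m \mid \mathbf{z}) = \prod_{t=1}^{m} p(y_t \mid y_1, \dots, y_{t-1}, \mathbf{z}),
%\end{equation*}
% with each factor computed by the decoder from $\mathbf{z}$ and the prefix generated so far.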

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.

\subsection{Encoder and Decoder Stacks}

\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.

\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.

% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.

\subsection{Attention} \label{sec:attention}
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}

% \begin{figure}
% \centering
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
% \caption{Scaled Dot-Product Attention.}
% \label{fig:multi-head-att}
% \end{figure}

We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:

\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}

The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.

% Already described in the subsequent section
%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.

%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.

While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
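% Editor's note (not in the original source): expanding the footnote's claim --
% for independent, zero-mean, unit-variance components,
%\begin{equation*}
% \mathrm{Var}(q \cdot k) = \mathrm{Var}\left(\sum_{i=1}^{d_k} q_i k_i\right) = \sum_{i=1}^{d_k} \mathrm{Var}(q_i)\,\mathrm{Var}(k_i) = d_k,
%\end{equation*}
% so dividing the logits by $\sqrt{d_k}$ restores unit variance before the softmax.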


%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.


\subsubsection{Multi-Head Attention} \label{sec:multihead}

\begin{figure}
\begin{minipage}[t]{0.5\textwidth}
\centering
Scaled Dot-Product Attention \\
\vspace{0.5cm}
\includegraphics[scale=0.6]{Figures/ModalNet-19}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
Multi-Head Attention \\
\vspace{0.1cm}
\includegraphics[scale=0.6]{Figures/ModalNet-20}
\end{minipage}

% \centering

\caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
\label{fig:multi-head-att}
\end{figure}

Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

\begin{align*}
\mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\
\text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\
\end{align*}

Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.


%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.

In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
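% Editor's note (not in the original source): the cost claim follows from $d_k=d_v=\dmodel/h$ --
% the $h$ per-head query/key/value projections together cost
%\begin{equation*}
% h \cdot (2\,\dmodel d_k + \dmodel d_v) = 3\,\dmodel^2
%\end{equation*}
% multiply-adds per position, the same as projecting $Q$, $K$ and $V$ once at full dimensionality.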

\subsubsection{Applications of Attention in our Model}

The Transformer uses multi-head attention in three different ways:
\begin{itemize}
\item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.

\item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

\item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}.

\end{itemize}

\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

\begin{equation}
\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
\end{equation}

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner layer has dimensionality $d_{ff}=2048$.
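% Editor's note (not in the original source): with these sizes each FFN sub-layer holds
% $\dmodel \cdot d_{ff} + d_{ff} \cdot \dmodel = 2 \times 512 \times 2048 = 2097152$ weights
% (ignoring biases) -- the parameter budget matched by the attention-over-parameters
% experiments in the appendix.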


%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.

%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.


%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$, and produces a query value vector $\vq$ as
%\begin{equation*} \label{eq:attention}
% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq).
%\end{equation*}
%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has its own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encoder self-attention, queries in encoder layer $i$ attend to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $-\infty$ to the softmax logits in positions $j+1$ to query length for query position $l$.

%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
%\marginpar{}

\subsection{Embeddings and Softmax}
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.


\subsection{Positional Encoding}
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.

In this work, we use sine and cosine functions of different frequencies:

\begin{align*}
PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\
PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel})
\end{align*}

where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
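% Editor's note (not in the original source): the linearity claim holds because each
% sinusoid pair rotates by a fixed angle per step. Writing $\omega_i = 1/10000^{2i/\dmodel}$,
%\begin{equation*}
% \begin{pmatrix} PE_{(pos+k,2i)} \\ PE_{(pos+k,2i+1)} \end{pmatrix} =
% \begin{pmatrix} \cos(k\omega_i) & \sin(k\omega_i) \\ -\sin(k\omega_i) & \cos(k\omega_i) \end{pmatrix}
% \begin{pmatrix} PE_{(pos,2i)} \\ PE_{(pos,2i+1)} \end{pmatrix},
%\end{equation*}
% a linear map that depends only on the offset $k$, not on $pos$.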

We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
@@ -1,45 +0,0 @@
\pagebreak
\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}

In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):

\begin{align*}
FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\
A(q, K, V) = Softmax(qK^T)V
\end{align*}

Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.

%the compatibility function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$.

Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in \ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations.

In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.

In our second experiment, we used $h_p=16$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model (the table below lists $h_p=16$ for AOP$_2$).

Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}.

\begin{table}[h]
\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
\label{tab:parameter_attention}
\begin{center}
\vspace{-2mm}
%\scalebox{1.0}{
\begin{tabular}{c|cccccc|cccc}
\hline\rule{0pt}{2.0ex}
 & \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
\multirow{2}{*}{$n_p$} &
PPL & BLEU & params & training\\
 & & & & & & & (dev) & (dev) & $\times10^6$ & time \\
\hline\rule{0pt}{2.0ex}
base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
\hline\rule{0pt}{2.0ex}
AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92 & 25.5 & 65 & 16 hours\\
AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
\hline
\end{tabular}
%}
\end{center}
\end{table}
@@ -1,8 +0,0 @@
The ancestor of ChatGPT: "Attention Is All You Need"

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin

The actual abstract follows:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

https://arxiv.org/abs/1706.03762
@@ -1,2 +0,0 @@
from stable_baselines3.dqn.dqn import DQN
from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy
@@ -1,245 +0,0 @@
from typing import Any, Dict, List, Optional, Tuple, Type, Union

import gym
import numpy as np
import torch as th
from torch.nn import functional as F

from stable_baselines3.common import logger
from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
from stable_baselines3.common.preprocessing import maybe_transpose
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
from stable_baselines3.dqn.policies import DQNPolicy


class DQN(OffPolicyAlgorithm):
    """
    Deep Q-Network (DQN)

    Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
    Default hyperparameters are taken from the nature paper,
    except for the optimizer and learning rate that were taken from Stable Baselines defaults.

    :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
    :param env: The environment to learn from (if registered in Gym, can be str)
    :param learning_rate: The learning rate, it can be a function
        of the current progress remaining (from 1 to 0)
    :param buffer_size: size of the replay buffer
    :param learning_starts: how many steps of the model to collect transitions for before learning starts
    :param batch_size: Minibatch size for each gradient update
    :param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update
    :param gamma: the discount factor
    :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
        like ``(5, "step")`` or ``(2, "episode")``.
    :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
        Set to ``-1`` means to do as many gradient steps as steps done in the environment
        during the rollout.
    :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
        at a cost of more complexity.
        See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
    :param target_update_interval: update the target network every ``target_update_interval``
        environment steps.
    :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
    :param exploration_initial_eps: initial value of random action probability
    :param exploration_final_eps: final value of random action probability
    :param max_grad_norm: The maximum value for the gradient clipping
    :param tensorboard_log: the log location for tensorboard (if None, no logging)
    :param create_eval_env: Whether to create a second environment that will be
        used for evaluating the agent periodically. (Only available when passing string for the environment)
    :param policy_kwargs: additional arguments to be passed to the policy on creation
    :param verbose: the verbosity level: 0 no output, 1 info, 2 debug
    :param seed: Seed for the pseudo random generators
    :param device: Device (cpu, cuda, ...) on which the code should be run.
        Setting it to auto, the code will be run on the GPU if possible.
    :param _init_setup_model: Whether or not to build the network at the creation of the instance
    """

    def __init__(
        self,
        policy: Union[str, Type[DQNPolicy]],
        env: Union[GymEnv, str],
        learning_rate: Union[float, Schedule] = 1e-4,
        buffer_size: int = 1000000,
        learning_starts: int = 50000,
        batch_size: Optional[int] = 32,
        tau: float = 1.0,
        gamma: float = 0.99,
        train_freq: Union[int, Tuple[int, str]] = 4,
        gradient_steps: int = 1,
        optimize_memory_usage: bool = False,
        target_update_interval: int = 10000,
        exploration_fraction: float = 0.1,
        exploration_initial_eps: float = 1.0,
        exploration_final_eps: float = 0.05,
        max_grad_norm: float = 10,
        tensorboard_log: Optional[str] = None,
        create_eval_env: bool = False,
        policy_kwargs: Optional[Dict[str, Any]] = None,
        verbose: int = 0,
        seed: Optional[int] = None,
        device: Union[th.device, str] = "auto",
        _init_setup_model: bool = True,
    ):

        super(DQN, self).__init__(
            policy,
            env,
            DQNPolicy,
            learning_rate,
            buffer_size,
            learning_starts,
            batch_size,
            tau,
            gamma,
            train_freq,
            gradient_steps,
            action_noise=None,  # No action noise
            policy_kwargs=policy_kwargs,
            tensorboard_log=tensorboard_log,
            verbose=verbose,
            device=device,
            create_eval_env=create_eval_env,
            seed=seed,
            sde_support=False,
            optimize_memory_usage=optimize_memory_usage,
            supported_action_spaces=(gym.spaces.Discrete,),
        )

        self.exploration_initial_eps = exploration_initial_eps
        self.exploration_final_eps = exploration_final_eps
        self.exploration_fraction = exploration_fraction
        self.target_update_interval = target_update_interval
        self.max_grad_norm = max_grad_norm
        # "epsilon" for the epsilon-greedy exploration
        self.exploration_rate = 0.0
        # Linear schedule will be defined in `_setup_model()`
        self.exploration_schedule = None
        self.q_net, self.q_net_target = None, None

        if _init_setup_model:
            self._setup_model()

    def _setup_model(self) -> None:
        super(DQN, self)._setup_model()
        self._create_aliases()
        self.exploration_schedule = get_linear_fn(
            self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
        )

    def _create_aliases(self) -> None:
        self.q_net = self.policy.q_net
        self.q_net_target = self.policy.q_net_target

    def _on_step(self) -> None:
        """
        Update the exploration rate and target network if needed.
        This method is called in ``collect_rollouts()`` after each step in the environment.
        """
        if self.num_timesteps % self.target_update_interval == 0:
            polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)

        self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
        logger.record("rollout/exploration rate", self.exploration_rate)

    def train(self, gradient_steps: int, batch_size: int = 100) -> None:
        # Update learning rate according to schedule
        self._update_learning_rate(self.policy.optimizer)

        losses = []
        for _ in range(gradient_steps):
            # Sample replay buffer
            replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)

            with th.no_grad():
                # Compute the next Q-values using the target network
                next_q_values = self.q_net_target(replay_data.next_observations)
                # Follow greedy policy: use the one with the highest value
                next_q_values, _ = next_q_values.max(dim=1)
                # Avoid potential broadcast issue
                next_q_values = next_q_values.reshape(-1, 1)
                # 1-step TD target
                target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values

            # Get current Q-values estimates
            current_q_values = self.q_net(replay_data.observations)

            # Retrieve the q-values for the actions from the replay buffer
            current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())

            # Compute Huber loss (less sensitive to outliers)
            loss = F.smooth_l1_loss(current_q_values, target_q_values)
            losses.append(loss.item())

            # Optimize the policy
            self.policy.optimizer.zero_grad()
            loss.backward()
            # Clip gradient norm
            th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
            self.policy.optimizer.step()

        # Increase update counter
        self._n_updates += gradient_steps

        logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
        logger.record("train/loss", np.mean(losses))

    def predict(
        self,
        observation: np.ndarray,
        state: Optional[np.ndarray] = None,
        mask: Optional[np.ndarray] = None,
        deterministic: bool = False,
    ) -> Tuple[np.ndarray, Optional[np.ndarray]]:
        """
        Overrides the base_class predict function to include epsilon-greedy exploration.

        :param observation: the input observation
        :param state: The last states (can be None, used in recurrent policies)
        :param mask: The last masks (can be None, used in recurrent policies)
        :param deterministic: Whether or not to return deterministic actions.
        :return: the model's action and the next state
            (used in recurrent policies)
        """
        if not deterministic and np.random.rand() < self.exploration_rate:
            if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
                n_batch = observation.shape[0]
                action = np.array([self.action_space.sample() for _ in range(n_batch)])
            else:
                action = np.array(self.action_space.sample())
        else:
            action, state = self.policy.predict(observation, state, mask, deterministic)
        return action, state

    def learn(
        self,
        total_timesteps: int,
        callback: MaybeCallback = None,
        log_interval: int = 4,
        eval_env: Optional[GymEnv] = None,
        eval_freq: int = -1,
        n_eval_episodes: int = 5,
        tb_log_name: str = "DQN",
        eval_log_path: Optional[str] = None,
        reset_num_timesteps: bool = True,
    ) -> OffPolicyAlgorithm:

        return super(DQN, self).learn(
            total_timesteps=total_timesteps,
            callback=callback,
            log_interval=log_interval,
            eval_env=eval_env,
            eval_freq=eval_freq,
            n_eval_episodes=n_eval_episodes,
            tb_log_name=tb_log_name,
            eval_log_path=eval_log_path,
            reset_num_timesteps=reset_num_timesteps,
        )

    def _excluded_save_params(self) -> List[str]:
        return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]

    def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
        state_dicts = ["policy", "policy.optimizer"]

        return state_dicts, []
@@ -1,237 +0,0 @@
from typing import Any, Dict, List, Optional, Type

import gym
import torch as th
from torch import nn

from stable_baselines3.common.policies import BasePolicy, register_policy
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
from stable_baselines3.common.type_aliases import Schedule


class QNetwork(BasePolicy):
    """
    Action-Value (Q-Value) network for DQN

    :param observation_space: Observation space
    :param action_space: Action space
    :param net_arch: The specification of the policy and value networks.
    :param activation_fn: Activation function
    :param normalize_images: Whether to normalize images or not,
        dividing by 255.0 (True by default)
    """

    def __init__(
        self,
        observation_space: gym.spaces.Space,
        action_space: gym.spaces.Space,
        features_extractor: nn.Module,
        features_dim: int,
        net_arch: Optional[List[int]] = None,
        activation_fn: Type[nn.Module] = nn.ReLU,
        normalize_images: bool = True,
    ):
        super(QNetwork, self).__init__(
            observation_space,
            action_space,
            features_extractor=features_extractor,
            normalize_images=normalize_images,
        )

        if net_arch is None:
            net_arch = [64, 64]

        self.net_arch = net_arch
        self.activation_fn = activation_fn
        self.features_extractor = features_extractor
        self.features_dim = features_dim
        self.normalize_images = normalize_images
        action_dim = self.action_space.n  # number of actions
        q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
        self.q_net = nn.Sequential(*q_net)

    def forward(self, obs: th.Tensor) -> th.Tensor:
        """
        Predict the q-values.

        :param obs: Observation
        :return: The estimated Q-Value for each action.
        """
        return self.q_net(self.extract_features(obs))

    def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
        q_values = self.forward(observation)
        # Greedy action
        action = q_values.argmax(dim=1).reshape(-1)
        return action

    def _get_constructor_parameters(self) -> Dict[str, Any]:
        data = super()._get_constructor_parameters()

        data.update(
            dict(
                net_arch=self.net_arch,
                features_dim=self.features_dim,
                activation_fn=self.activation_fn,
                features_extractor=self.features_extractor,
            )
        )
        return data


class DQNPolicy(BasePolicy):
    """
    Policy class with Q-Value Net and target net for DQN

    :param observation_space: Observation space
    :param action_space: Action space
    :param lr_schedule: Learning rate schedule (could be constant)
    :param net_arch: The specification of the policy and value networks.
    :param activation_fn: Activation function
    :param features_extractor_class: Features extractor to use.
    :param features_extractor_kwargs: Keyword arguments
        to pass to the features extractor.
    :param normalize_images: Whether to normalize images or not,
        dividing by 255.0 (True by default)
    :param optimizer_class: The optimizer to use,
        ``th.optim.Adam`` by default
    :param optimizer_kwargs: Additional keyword arguments,
        excluding the learning rate, to pass to the optimizer
    """

    def __init__(
        self,
        observation_space: gym.spaces.Space,
        action_space: gym.spaces.Space,
        lr_schedule: Schedule,
        net_arch: Optional[List[int]] = None,
        activation_fn: Type[nn.Module] = nn.ReLU,
        features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
        features_extractor_kwargs: Optional[Dict[str, Any]] = None,
        normalize_images: bool = True,
        optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
        optimizer_kwargs: Optional[Dict[str, Any]] = None,
    ):
        super(DQNPolicy, self).__init__(
            observation_space,
            action_space,
            features_extractor_class,
            features_extractor_kwargs,
            optimizer_class=optimizer_class,
            optimizer_kwargs=optimizer_kwargs,
        )

        if net_arch is None:
            if features_extractor_class == FlattenExtractor:
                net_arch = [64, 64]
            else:
                net_arch = []

        self.net_arch = net_arch
        self.activation_fn = activation_fn
        self.normalize_images = normalize_images

        self.net_args = {
            "observation_space": self.observation_space,
            "action_space": self.action_space,
            "net_arch": self.net_arch,
            "activation_fn": self.activation_fn,
            "normalize_images": normalize_images,
        }

        self.q_net, self.q_net_target = None, None
        self._build(lr_schedule)

    def _build(self, lr_schedule: Schedule) -> None:
        """
        Create the network and the optimizer.

        :param lr_schedule: Learning rate schedule
            lr_schedule(1) is the initial learning rate
        """

        self.q_net = self.make_q_net()
        self.q_net_target = self.make_q_net()
        self.q_net_target.load_state_dict(self.q_net.state_dict())

        # Setup optimizer with initial learning rate
        self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)

    def make_q_net(self) -> QNetwork:
        # Make sure we always have separate networks for features extractors etc
        net_args = self._update_features_extractor(self.net_args, features_extractor=None)
        return QNetwork(**net_args).to(self.device)

    def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
        return self._predict(obs, deterministic=deterministic)

    def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
        return self.q_net._predict(obs, deterministic=deterministic)

    def _get_constructor_parameters(self) -> Dict[str, Any]:
        data = super()._get_constructor_parameters()

        data.update(
            dict(
                net_arch=self.net_args["net_arch"],
                activation_fn=self.net_args["activation_fn"],
                lr_schedule=self._dummy_schedule,  # dummy lr schedule, not needed for loading policy alone
                optimizer_class=self.optimizer_class,
                optimizer_kwargs=self.optimizer_kwargs,
                features_extractor_class=self.features_extractor_class,
                features_extractor_kwargs=self.features_extractor_kwargs,
            )
        )
        return data


MlpPolicy = DQNPolicy


class CnnPolicy(DQNPolicy):
    """
    Policy class for DQN when using images as input.

    :param observation_space: Observation space
    :param action_space: Action space
    :param lr_schedule: Learning rate schedule (could be constant)
    :param net_arch: The specification of the policy and value networks.
    :param activation_fn: Activation function
    :param features_extractor_class: Features extractor to use.
    :param normalize_images: Whether to normalize images or not,
        dividing by 255.0 (True by default)
    :param optimizer_class: The optimizer to use,
        ``th.optim.Adam`` by default
    :param optimizer_kwargs: Additional keyword arguments,
        excluding the learning rate, to pass to the optimizer
    """

    def __init__(
        self,
        observation_space: gym.spaces.Space,
        action_space: gym.spaces.Space,
        lr_schedule: Schedule,
        net_arch: Optional[List[int]] = None,
        activation_fn: Type[nn.Module] = nn.ReLU,
        features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
        features_extractor_kwargs: Optional[Dict[str, Any]] = None,
        normalize_images: bool = True,
        optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
        optimizer_kwargs: Optional[Dict[str, Any]] = None,
    ):
        super(CnnPolicy, self).__init__(
            observation_space,
            action_space,
            lr_schedule,
            net_arch,
            activation_fn,
            features_extractor_class,
            features_extractor_kwargs,
            normalize_images,
            optimizer_class,
            optimizer_kwargs,
        )


register_policy("MlpPolicy", MlpPolicy)
register_policy("CnnPolicy", CnnPolicy)
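For context, the two deleted files above are a vendored copy of Stable-Baselines3's DQN implementation. The usual entry point for that library looks like this — a minimal sketch using its public API of the same era, with the hyperparameters left at the defaults documented above:

```python
import gym
from stable_baselines3 import DQN

env = gym.make("CartPole-v1")
model = DQN("MlpPolicy", env, learning_rate=1e-4, verbose=1)
model.learn(total_timesteps=10_000)   # epsilon-greedy rollouts + Huber-loss TD updates

obs = env.reset()
action, _state = model.predict(obs, deterministic=True)
```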
@@ -1,2 +0,0 @@
GitHub: stable-baselines3
https://github.com/DLR-RM/stable-baselines3
@@ -1,27 +0,0 @@
"In practice, we found that a high-entropy initial state is more likely to increase the speed of training.
The entropy is calculated by:
$$H=-\sum_{k=1}^{n_k} p(k) \cdot \log p(k), \quad p(k)=\frac{|A_k|}{|\mathcal{A}|}$$
where $H$ is the entropy, $|A_k|$ is the number of agent nodes in the $k$-th cluster, and $|\mathcal{A}|$ is the total number of agents.
To ensure the Cooperation Graph initialization has higher entropy, we randomly generate multiple initial states, rank them by entropy, and pick the one with maximum $H$."

```
FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

RUN echo '[global]' > /etc/pip.conf && \
    echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
    echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf

RUN pip3 install gradio requests[socks] mdtex2html

COPY . /gpt
WORKDIR /gpt


CMD ["python3", "main.py"]
```
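The quoted initialization rule is easy to reproduce. A short illustrative sketch (not the paper's code) that scores random cluster assignments with the entropy formula above and keeps the best one:

```python
import numpy as np

def cluster_entropy(cluster_sizes):
    """H = -sum_k p(k) log p(k), with p(k) = |A_k| / |A|."""
    p = np.asarray(cluster_sizes, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Randomly assign 10 agents to 4 clusters several times,
# then pick the assignment with maximum entropy.
rng = np.random.default_rng(0)
candidates = [rng.integers(0, 4, size=10) for _ in range(16)]
best = max(candidates, key=lambda a: cluster_entropy(np.bincount(a, minlength=4)))
```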
@@ -0,0 +1,114 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, disable_auto_promotion
from request_llm.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
import copy, json, pickle, os, sys, time


def read_avail_plugin_enum():
    from crazy_functional import get_crazy_functions
    plugin_arr = get_crazy_functions()
    # remove plugins without an explanation
    plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
    plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
    plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
    plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
    plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
    prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
    prompt = "\n\nThe definition of PluginEnum:\nPluginEnum=" + prompt
    return prompt, plugin_arr_dict, plugin_arr_dict_parse

def wrap_code(txt):
    txt = txt.replace('```','')
    return f"\n```\n{txt}\n```\n"

def have_any_recent_upload_files(chatbot):
    _5min = 5 * 60
    if not chatbot: return False    # chatbot is None
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    if not most_recent_uploaded: return False    # most_recent_uploaded is None
    if time.time() - most_recent_uploaded["time"] < _5min: return True    # most_recent_uploaded is new
    else: return False    # most_recent_uploaded is too old

def get_recent_file_prompt_support(chatbot):
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    path = most_recent_uploaded['path']
    prompt = "\nAdditional Information:\n"
    prompt += "In case that this plugin requires a path or a file as argument, "
    prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`. "
    prompt += f"Only use it when necessary, otherwise, you can ignore this file."
    return prompt

def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
    # remove plugin_arr_enum_prompt from the inputs string
    inputs_show_user = inputs.replace(plugin_arr_enum_prompt, "")
    inputs_show_user += plugin_arr_enum_prompt[:200] + '...'
    inputs_show_user += '\n...\n'
    inputs_show_user += '...\n'
    inputs_show_user += '...}'
    return inputs_show_user

def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
    plugin_arr_enum_prompt, plugin_arr_dict, plugin_arr_dict_parse = read_avail_plugin_enum()
    class Plugin(BaseModel):
        plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
        reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfy user requirement most")
    # ⭐ ⭐ ⭐ Select the plugin
    yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
    gpt_json_io = GptJsonIO(Plugin)
    gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
    gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
    gpt_json_io.format_instructions += "The plugins you are authorized to use are listed below:\n"
    gpt_json_io.format_instructions += plugin_arr_enum_prompt
    inputs = "Choose the correct plugin according to user requirements, the user requirement is: \n\n" + \
             ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions

    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
        inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    try:
        gpt_reply = run_gpt_fn(inputs, "")
        plugin_sel = gpt_json_io.generate_output_auto_repair(gpt_reply, run_gpt_fn)
    except JsonStringError:
        msg = f"抱歉, {llm_kwargs['llm_model']}无法理解您的需求。"
        msg += "请求的Prompt为:\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
        msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
        msg += "\n但您可以尝试再试一次\n"
        yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
        return
    if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
        msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
        msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
        msg += "\n但您可以尝试再试一次\n"
        yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
        return

    # ⭐ ⭐ ⭐ Confirm the plugin's arguments
    if not have_any_recent_upload_files(chatbot):
        appendix_info = ""
    else:
        appendix_info = get_recent_file_prompt_support(chatbot)

    plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
    yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
    class PluginExplicit(BaseModel):
        plugin_selection: str = plugin_sel.plugin_selection
        plugin_arg: str = Field(description="The argument of the plugin.", default="")
    gpt_json_io = GptJsonIO(PluginExplicit)
    gpt_json_io.format_instructions += "The information about this plugin is:" + plugin["Info"]
    inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
             "you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
             ">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
             gpt_json_io.format_instructions
    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
        inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)


    # ⭐ ⭐ ⭐ Execute the plugin
    fn = plugin['Function']
    fn_name = fn.__name__
    msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
    yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
    yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
    return
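Note that `read_avail_plugin_enum` only exposes plugins whose registry entry carries an `Info` description; everything else is filtered out before the language model ever sees it. A hypothetical registry entry illustrating the fields the dispatcher relies on (`Info` for selection, `Function` for execution; the key names follow the usage above, the values are made up):

```python
from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
from toolbox import HotReload

plugin_registry = {
    "批量翻译PDF文档(多线程)": {
        "Group": "学术",                          # used by main.py to group plugin buttons
        "AsButton": True,
        "Info": "批量翻译PDF文档。输入参数为路径",   # shown to the LLM by read_avail_plugin_enum
        "Function": HotReload(批量翻译PDF文档),     # called with plugin_sel.plugin_arg as first argument
    },
}
```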
@@ -0,0 +1,81 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, get_conf
from request_llm.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO
import copy, json, pickle, os, sys


def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
    ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
    if not ALLOW_RESET_CONFIG:
        yield from update_ui_lastest_msg(
            lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
            chatbot=chatbot, history=history, delay=2
        )
        return

    # ⭐ ⭐ ⭐ Read the available configuration entries
    names = {}
    from enum import Enum
    import config
    for k, v in config.__dict__.items():
        if k.startswith('__'): continue
        names.update({k:k})
        # if len(names) > 20: break   # cap the number of config entries; too many may confuse the model

    ConfigOptions = Enum('ConfigOptions', names)
    class ModifyConfigurationIntention(BaseModel):
        which_config_to_modify: ConfigOptions = Field(description="the name of the configuration to modify, you must choose from one of the ConfigOptions enum.", default=None)
        new_option_value: str = Field(description="the new value of the option", default=None)

    # ⭐ ⭐ ⭐ Analyze the user's intention
    yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
    gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
    inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
             ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
             gpt_json_io.format_instructions

    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
        inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)

    explicit_conf = user_intention.which_config_to_modify.value

    ok = (explicit_conf in txt)
    if ok:
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
            chatbot=chatbot, history=history, delay=1
        )
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
            chatbot=chatbot, history=history, delay=2
        )

        # ⭐ ⭐ ⭐ Apply the configuration immediately
        from toolbox import set_conf
        set_conf(explicit_conf, user_intention.new_option_value)

        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,刷新页面即可生效。", chatbot=chatbot, history=history, delay=1
        )
    else:
        yield from update_ui_lastest_msg(
            lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
        )

def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
    ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
    if not ALLOW_RESET_CONFIG:
        yield from update_ui_lastest_msg(
            lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
            chatbot=chatbot, history=history, delay=2
        )
        return

    yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
    yield from update_ui_lastest_msg(
        lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
    )
    os.execl(sys.executable, sys.executable, *sys.argv)
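The trick above — building an `Enum` from the live `config` module so that pydantic will only accept a real configuration key — can be seen in isolation. A minimal self-contained sketch (illustrative values, not project code):

```python
from enum import Enum
from pydantic import BaseModel

names = {"WEB_PORT": "WEB_PORT", "THEME": "THEME"}   # stand-in for config.__dict__ keys
ConfigOptions = Enum("ConfigOptions", names)

class Intention(BaseModel):
    which_config_to_modify: ConfigOptions
    new_option_value: str

parsed = Intention.parse_obj({"which_config_to_modify": "THEME", "new_option_value": "Dark"})
assert parsed.which_config_to_modify.value == "THEME"
# A key outside ConfigOptions raises a ValidationError instead of
# silently writing an unknown option back to config.
```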
@@ -0,0 +1,28 @@
import pickle

class VoidTerminalState():
    def __init__(self):
        self.reset_state()

    def reset_state(self):
        self.has_provided_explaination = False

    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def unlock_plugin(self, chatbot):
        self.reset_state()
        chatbot._cookies['lock_plugin'] = None
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def set_state(self, chatbot, key, value):
        setattr(self, key, value)
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def get_state(chatbot):
        state = chatbot._cookies.get('plugin_state', None)
        if state is not None: state = pickle.loads(state)
        else: state = VoidTerminalState()
        state.chatbot = chatbot
        return state
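The state object is persisted by pickling it into the chatbot cookies; locking reroutes every subsequent user message back into the same plugin. A short usage sketch under the interfaces defined above (the `chatbot` handle is whatever object the framework passes in):

```python
state = VoidTerminalState.get_state(chatbot)          # load from cookies, or create fresh
if not state.has_provided_explaination:
    state.set_state(chatbot, 'has_provided_explaination', True)
    state.lock_plugin(chatbot)    # next message re-enters crazy_functions.虚空终端->虚空终端
else:
    state.unlock_plugin(chatbot)  # release the routing lock and reset the flag
```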
@@ -145,6 +145,8 @@ def get_files_from_everything(txt, preference=''):
            project_folder = txt
            file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
        else:
            project_folder = None
            file_manifest = []
            success = False

    return success, file_manifest, project_folder
@@ -1,15 +1,19 @@
from toolbox import CatchException, report_execption, write_results_to_file
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, get_log_folder
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .crazy_utils import read_and_clean_pdf_text
from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url
from colorful import *
import glob
import os
import math

@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
    import glob
    import os
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):

    disable_auto_promotion(chatbot)
    # Basic info: feature description and contributors
    chatbot.append([
        "函数插件功能?",
@@ -30,20 +34,11 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_
    # Clear the history to prevent input overflow
    history = []

    from .crazy_utils import get_files_from_everything
    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
    # Validate the input; exit directly if no input argument was given
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "":
            txt = '空空如也的输入栏'
        report_execption(chatbot, history,
                         a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
        return

    # Build the manifest of files to process
    file_manifest = [f for f in glob.glob(
        f'{project_folder}/**/*.pdf', recursive=True)]
    if not success:
        if txt == "": txt = '空空如也的输入栏'

    # If no files were found
    if len(file_manifest) == 0:
@@ -53,22 +48,130 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_
        return

    # Begin the task
    yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt)
    grobid_url = get_avail_grobid_url()
    if grobid_url is not None:
        yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
    else:
        yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
        yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt):
    import os
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
    import copy
    import tiktoken
    TOKEN_LIMIT_PER_FRAGMENT = 1280
    generated_conclusion_files = []
    generated_html_files = []
    DST_LANG = "中文"
    for index, fp in enumerate(file_manifest):
        chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
        article_dict = parse_pdf(fp, grobid_url)
        print(article_dict)
        prompt = "以下是一篇学术论文的基本信息:\n"
        # title
        title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
        # authors
        authors = article_dict.get('authors', '无法获取 authors'); prompt += f'authors:{authors}\n\n'
        # abstract
        abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
        # command
        prompt += f"请将题目和摘要翻译为{DST_LANG}。"
        meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]

        # Single-threaded: fetch the paper's meta information
        paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=prompt,
            inputs_show_user=prompt,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot, history=[],
            sys_prompt="You are an academic paper reader.",
        )

        # Multi-threaded: translate
        inputs_array = []
        inputs_show_user_array = []

        # get_token_num
        from request_llm.bridge_all import model_info
        enc = model_info[llm_kwargs['llm_model']]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf

        def break_down(txt):
            raw_token_num = get_token_num(txt)
            if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
                return [txt]
            else:
                # raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
                # find a smooth token limit to achieve even separation
                count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
                token_limit_smooth = raw_token_num // count + count
                return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)

        for section in article_dict.get('sections'):
            if len(section['text']) == 0: continue
            section_frags = break_down(section['text'])
            for i, fragment in enumerate(section_frags):
                heading = section['heading']
                if len(section_frags) > 1: heading += f' Part-{i+1}'
                inputs_array.append(
                    f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
                )
                inputs_show_user_array.append(
                    f"# {heading}\n\n{fragment}"
                )

        gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=inputs_array,
            inputs_show_user_array=inputs_show_user_array,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=[meta for _ in inputs_array],
            sys_prompt_array=[
                "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
        )
        res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=None, file_fullname=None)
        promote_file_to_downloadzone(res_path, rename_file=os.path.basename(fp)+'.md', chatbot=chatbot)
        generated_conclusion_files.append(res_path)

        ch = construct_html()
        orig = ""
        trans = ""
        gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
        for i,k in enumerate(gpt_response_collection_html):
            if i%2==0:
                gpt_response_collection_html[i] = inputs_show_user_array[i//2]
            else:
                gpt_response_collection_html[i] = gpt_response_collection_html[i]

        final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
        final.extend(gpt_response_collection_html)
        for i, k in enumerate(final):
            if i%2==0:
                orig = k
            if i%2==1:
                trans = k
                ch.add_row(a=orig, b=trans)
        create_report_file_name = f"{os.path.basename(fp)}.trans.html"
        html_file = ch.save_file(create_report_file_name)
        generated_html_files.append(html_file)
        promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)

    chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI


def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
    import copy
    TOKEN_LIMIT_PER_FRAGMENT = 1280
    generated_conclusion_files = []
    generated_html_files = []
    for index, fp in enumerate(file_manifest):
        # Read the PDF file
        file_content, page_one = read_and_clean_pdf_text(fp)
        file_content = file_content.encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
        page_one = str(page_one).encode('utf-8', 'ignore').decode()      # avoid reading non-utf8 chars

        # Recursively split the PDF file
        from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
        from request_llm.bridge_all import model_info
@@ -140,8 +243,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
                trans = k
                ch.add_row(a=orig, b=trans)
            create_report_file_name = f"{os.path.basename(fp)}.trans.html"
            ch.save_file(create_report_file_name)
            generated_html_files.append(f'./gpt_log/{create_report_file_name}')
            generated_html_files.append(ch.save_file(create_report_file_name))
        except:
            from toolbox import trimmed_format_exc
            print('writing html result failed:', trimmed_format_exc())
@@ -202,6 +304,6 @@ class construct_html():


    def save_file(self, file_name):
        with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
        with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
            f.write(self.html_string.encode('utf-8', 'ignore').decode())

        return os.path.join(get_log_folder(), file_name)

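The `break_down` helper above spreads tokens evenly across fragments instead of greedily filling each fragment to the limit. A worked example of the arithmetic (the numbers are illustrative):

```python
import math

raw_token_num, limit = 3000, 1280                      # a section a bit over two fragments
count = int(math.ceil(raw_token_num / limit))          # 3 fragments needed
token_limit_smooth = raw_token_num // count + count    # 1000 + 3 = 1003
# Splitting at ~1003 tokens yields three similar fragments (~1000 each)
# rather than 1280 + 1280 + 440, keeping translation context sizes uniform.
```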
@@ -1,119 +1,179 @@
"""
Explanation of the Void Terminal Plugin:

Please describe in natural language what you want to do.

1. You can open the plugin's dropdown menu to explore various capabilities of this project, and then describe your needs in natural language, for example:
    - "Please call the plugin to translate a PDF paper for me. I just uploaded the paper to the upload area."
    - "Please use the plugin to translate a PDF paper, with the address being https://www.nature.com/articles/s41586-019-1724-z.pdf."
    - "Generate an image with blooming flowers and lush green grass using the plugin."
    - "Translate the README using the plugin. The GitHub URL is https://github.com/facebookresearch/co-tracker."
    - "Translate an Arxiv paper for me. The Arxiv ID is 1812.10695. Remember to use the plugin and don't do it manually!"
    - "I don't like the current interface color. Modify the configuration and change the theme to THEME="High-Contrast"."
    - "Could you please explain the structure of the Transformer network?"

2. If you use keywords like "call the plugin xxx", "modify the configuration xxx", "please", etc., your intention can be recognized more accurately.

3. Your intention can be recognized more accurately when using powerful models like GPT4. This plugin is relatively new, so please feel free to provide feedback on GitHub.

4. Now, if you need to process a file, please upload the file (drag the file to the file upload area) or describe the path to the file.

5. If you don't need to upload a file, you can simply repeat your command again.
"""
explain_msg = """
## 虚空终端插件说明:

1. 请用**自然语言**描述您需要做什么。例如:
    - 「请调用插件,为我翻译PDF论文,论文我刚刚放到上传区了。」
    - 「请调用插件翻译PDF论文,地址为https://www.nature.com/articles/s41586-019-1724-z.pdf」
    - 「生成一张图片,图中鲜花怒放,绿草如茵,用插件实现。」
    - 「用插件翻译README,Github网址是https://github.com/facebookresearch/co-tracker」
    - 「给爷翻译Arxiv论文,arxiv论文的ID是1812.10695,记得用插件,不要自己瞎搞!」
    - 「我不喜欢当前的界面颜色,修改配置,把主题THEME更换为THEME="High-Contrast"。」
    - 「请问Transformer网络的结构是怎样的?」

2. 您可以打开插件下拉菜单以了解本项目的各种能力。

3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词,您的意图可以被识别的更准确。

4. 建议使用 GPT3.5 或更强的模型,弱模型可能无法理解您的想法。该插件诞生时间不长,欢迎您前往Github反馈问题。

5. 现在,如果需要处理文件,请您上传文件(将文件拖动到文件上传区),或者描述文件所在的路径。

6. 如果不需要上传文件,现在您只需要再次重复一次您的指令即可。
"""
from pydantic import BaseModel, Field
from typing import List
from toolbox import CatchException, update_ui, gen_time_str
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping
import copy, json
from toolbox import update_ui_lastest_msg, disable_auto_promotion
from request_llm.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from crazy_functions.vt_fns.vt_state import VoidTerminalState
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_hot
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_reboot
from crazy_functions.vt_fns.vt_call_plugin import execute_plugin

def get_fn_lib():
    return {
        "BatchTranslatePDFDocuments_MultiThreaded": {
            "module": "crazy_functions.批量翻译PDF文档_多线程",
            "function": "批量翻译PDF文档",
            "description": "Translate PDF Documents",
            "arg_1_description": "A path containing pdf files.",
        },
        "SummarizingWordDocuments": {
            "module": "crazy_functions.总结word文档",
            "function": "总结word文档",
            "description": "Summarize Word Documents",
            "arg_1_description": "A path containing Word files.",
        },
        "ImageGeneration": {
            "module": "crazy_functions.图片生成",
            "function": "图片生成",
            "description": "Generate an image that satisfies some description.",
            "arg_1_description": "Descriptions about the image to be generated.",
        },
        "TranslateMarkdownFromEnglishToChinese": {
            "module": "crazy_functions.批量Markdown翻译",
            "function": "Markdown中译英",
            "description": "Translate Markdown Documents from English to Chinese.",
            "arg_1_description": "A path containing Markdown files.",
        },
        "SummaryAudioVideo": {
            "module": "crazy_functions.总结音视频",
            "function": "总结音视频",
            "description": "Get text from a piece of audio and summarize this audio.",
            "arg_1_description": "A path containing audio files.",
        },
    }
class UserIntention(BaseModel):
    user_prompt: str = Field(description="the content of user input", default="")
    intention_type: str = Field(description="the type of user intention, choose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']", default="ExecutePlugin")
    user_provide_file: bool = Field(description="whether the user provides a path to a file", default=False)
    user_provide_url: bool = Field(description="whether the user provides a url", default=False)

functions = [
    {
        "name": k,
        "description": v['description'],
        "parameters": {
            "type": "object",
            "properties": {
                "plugin_arg_1": {
                    "type": "string",
                    "description": v['arg_1_description'],
                },
            },
            "required": ["plugin_arg_1"],
        },
    } for k, v in get_fn_lib().items()
]

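The removed code path relied on OpenAI-style function calling: the `functions` schema above is sent with the request, and the model answers with a structured `function_call` instead of free text. A minimal sketch of that protocol using the legacy `openai` client of the same era (the message content is an example):

```python
import json
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "帮我翻译一篇PDF论文"}],
    functions=functions,   # the JSON-schema list built above
)
choice = response["choices"][0]
if choice["finish_reason"] == "function_call":
    call = choice["message"]["function_call"]
    plugin_name = call["name"]   # e.g. "BatchTranslatePDFDocuments_MultiThreaded"
    arg = json.loads(call["arguments"])["plugin_arg_1"]
```

The new code below drops this in favor of the pydantic/JSON route, which also works with models that have no native function-calling support.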
def inspect_dependency(chatbot, history):
    return True
def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=txt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt=system_prompt
    )
    chatbot[-1] = [txt, gpt_say]
    history.extend([txt, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    pass

def eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    import importlib
    try:
        tmp = get_fn_lib()[code['name']]
        fp, fn = tmp['module'], tmp['function']
        fn_plugin = getattr(importlib.import_module(fp, fn), fn)
        arg = json.loads(code['arguments'])['plugin_arg_1']
        yield from fn_plugin(arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
    except:
        from toolbox import trimmed_format_exc
        chatbot.append(["执行错误", f"\n```\n{trimmed_format_exc()}\n```\n"])
        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI

def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
    matches = re.findall(pattern, reply) # find all code blocks in text
    if len(matches) != 1:
        raise RuntimeError("GPT is not generating proper code.")
    return matches[0].strip('python') # code block
explain_intention_to_user = {
    'Chat': "聊天对话",
    'ExecutePlugin': "调用插件",
    'ModifyConfiguration': "修改配置",
}


def analyze_intention_with_simple_rules(txt):
    user_intention = UserIntention()
    user_intention.user_prompt = txt
    is_certain = False

    if '请问' in txt:
        is_certain = True
        user_intention.intention_type = 'Chat'

    if '用插件' in txt:
        is_certain = True
        user_intention.intention_type = 'ExecutePlugin'

    if '修改配置' in txt:
        is_certain = True
        user_intention.intention_type = 'ModifyConfiguration'

    return is_certain, user_intention


@CatchException
def 终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt: text entered by the user in the input field, e.g. a passage to translate, or a path containing files to process
    llm_kwargs: GPT model parameters such as temperature and top_p, usually passed through unchanged
    plugin_kwargs: plugin parameters, currently unused
    chatbot: handle of the chat display box, used to show output to the user
    history: chat history (context)
    system_prompt: the silent prompt for GPT
    web_port: the port on which the software is currently running
    """
    # Clear history to prevent input overflow
    history = []
def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    disable_auto_promotion(chatbot=chatbot)
    # Fetch the current Void Terminal state
    state = VoidTerminalState.get_state(chatbot)
    appendix_msg = ""

    # Basic info: feature description and contributors
    chatbot.append(["虚空终端插件的功能?", "根据自然语言的描述, 执行任意插件的命令."])
    # Detect the user's intention with simple keyword rules
    is_certain, _ = analyze_intention_with_simple_rules(txt)
    if txt.startswith('private_upload/') and len(txt) == 34:
        state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
        appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"

    if is_certain or (state.has_provided_explaination):
        # If the intention is clear, skip the explanation step
        state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
        state.unlock_plugin(chatbot=chatbot)
        yield from update_ui(chatbot=chatbot, history=history)
        yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
        return
    else:
        # If the intention is ambiguous, show the explanation message
        state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
        state.lock_plugin(chatbot=chatbot)
        chatbot.append(("虚空终端状态:", explain_msg+appendix_msg))
        yield from update_ui(chatbot=chatbot, history=history)
        return



def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    history = []
    chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI

    # Input
    i_say = txt
    # Start
    llm_kwargs_function_call = copy.deepcopy(llm_kwargs)
    llm_kwargs_function_call['llm_model'] = 'gpt-call-fn' # switch to the function-calling bridge
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=txt,
        llm_kwargs=llm_kwargs_function_call, chatbot=chatbot, history=[],
        sys_prompt=functions
    )

    # Turn the returned function call into action
    res = json.loads(gpt_say)['choices'][0]
    if res['finish_reason'] == 'function_call':
        code = json.loads(gpt_say)['choices'][0]
        yield from eval_code(code['message']['function_call'], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
    # ⭐ ⭐ ⭐ Analyze the user's intention
    is_certain, user_intention = analyze_intention_with_simple_rules(txt)
    if not is_certain:
        yield from update_ui_lastest_msg(
            lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0)
        gpt_json_io = GptJsonIO(UserIntention)
        rf_req = "\nchoose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']"
        inputs = "Analyze the intention of the user according to following user input: \n\n" + \
                 ">> " + (txt+rf_req).rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
        run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
            inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
        analyze_res = run_gpt_fn(inputs, "")
        try:
            user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
            lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
        except JsonStringError as e:
            yield from update_ui_lastest_msg(
                lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
            return
    else:
        chatbot.append(["无法调用相关功能", res])
        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
        pass

    yield from update_ui_lastest_msg(
        lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
        chatbot=chatbot, history=history, delay=0)

    # User intention: modify this project's configuration
    if user_intention.intention_type == 'ModifyConfiguration':
        yield from modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)

    # User intention: dispatch a plugin
    if user_intention.intention_type == 'ExecutePlugin':
        yield from execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)

    # User intention: chat
    if user_intention.intention_type == 'Chat':
        yield from chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)

    return


@@ -16,6 +16,7 @@ services:
      AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "newbing"] '
      WEB_PORT: ' 22303 '
      ADD_WAIFU: ' True '
      # THEME: ' Chuanhu-Small-and-Beautiful '
      # DEFAULT_WORKER_NUM: ' 10 '
      # AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '

@@ -28,7 +29,7 @@ services:

### ===================================================
### [Scheme 2] To run local ChatGLM models
### [Scheme 2] To run local models such as ChatGLM + Qwen + MOSS
### ===================================================
version: '3'
services:
@@ -36,11 +37,11 @@ services:
    image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master  # (Auto Built by Dockerfile: docs/Dockerfile+ChatGLM)
    environment:
      # See `config.py` for all configuration options
      API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
      API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
      USE_PROXY: ' True '
      proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
      LLM_MODEL: ' gpt-3.5-turbo '
      AVAIL_LLM_MODELS: ' ["chatglm", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"] '
      AVAIL_LLM_MODELS: ' ["chatglm", "qwen", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"] '
      LOCAL_MODEL_DEVICE: ' cuda '
      DEFAULT_WORKER_NUM: ' 10 '
      WEB_PORT: ' 12303 '
@@ -57,6 +58,10 @@ services:
    command: >
      bash -c "python3 -u main.py"

    # P.S. extra dependencies can be installed conveniently by tweaking `command`
    # command: >
    #   bash -c "pip install -r request_llm/requirements_qwen.txt && python3 -u main.py"

### ===================================================
### [Scheme 3] To run ChatGPT + LLAMA + PanGu + RWKV local models
### ===================================================
@@ -18,6 +18,7 @@ WORKDIR /gpt/gpt_academic
RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
RUN python3 -m pip install -r requirements.txt
RUN python3 -m pip install -r request_llm/requirements_moss.txt
RUN python3 -m pip install -r request_llm/requirements_qwen.txt
RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
RUN python3 -m pip install -r request_llm/requirements_newbing.txt
main.py (74 lines)
@@ -6,18 +6,18 @@ def main():
    from request_llm.bridge_all import predict
    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
    # It is recommended to copy your secrets (API keys, proxy URLs) into a config_private.py so they are not accidentally pushed to GitHub
    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = \
        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
    CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
    ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT')

    # If WEB_PORT is -1, pick a random web port
    PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
    if not AUTHENTICATION: AUTHENTICATION = None

    from check_proxy import get_current_version
    from themes.theme import adjust_theme, advanced_css, theme_declaration
    initial_prompt = "Serve me as a writing and programming assistant."
    title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
    description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)"""
    description = "代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),"
    description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)"

    # Query log; Python 3.9+ recommended (the newer the better)
    import logging, uuid
@@ -34,7 +34,10 @@ def main():

    # Advanced function plugins
    from crazy_functional import get_crazy_functions
    crazy_fns = get_crazy_functions()
    DEFAULT_FN_GROUPS, = get_conf('DEFAULT_FN_GROUPS')
    plugins = get_crazy_functions()
    all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
    match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])

    # Handle the markdown formatting conversion
    gr.Chatbot.postprocess = format_io
@@ -83,25 +86,33 @@ def main():
                    if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
                    variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
                    functional[k]["Button"] = gr.Button(k, variant=variant)
                    functional[k]["Button"].style(size="sm")
            with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
                with gr.Row():
                    gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
                with gr.Row(elem_id="input-plugin-group"):
                    plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
                                                   multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
                with gr.Row():
                    for k in crazy_fns:
                        if not crazy_fns[k].get("AsButton", True): continue
                        variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
                        crazy_fns[k]["Button"] = gr.Button(k, variant=variant)
                        crazy_fns[k]["Button"].style(size="sm")
                    for k, plugin in plugins.items():
                        if not plugin.get("AsButton", True): continue
                        visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
                        variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
                        plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant, visible=visible).style(size="sm")
                with gr.Row():
                    with gr.Accordion("更多函数插件", open=True):
                        dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
                        dropdown_fn_list = []
                        for k, plugin in plugins.items():
                            if not match_group(plugin['Group'], DEFAULT_FN_GROUPS): continue
                            if not plugin.get("AsButton", True): dropdown_fn_list.append(k)  # skip plugins that are already buttons
                            elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k)  # plugins that need advanced args also appear in the dropdown
                        with gr.Row():
                            dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
                        with gr.Row():
                            plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
                                                             placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
                        with gr.Row():
                            switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
                            switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
                with gr.Row():
                    with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
                        file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
@@ -112,7 +123,6 @@ def main():
                    max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                    md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)

                gr.Markdown(description)
        with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary:
            with gr.Row():
@@ -123,6 +133,7 @@ def main():
                resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
                clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")

        # Interaction between the show/hide checkboxes and the functional areas
        def fn_area_visibility(a):
            ret = {}
@@ -160,19 +171,19 @@ def main():
            click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo)
            cancel_handles.append(click_handle)
        # File-upload area: interaction with the chatbot after receiving files
        file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes], [chatbot, txt, txt2])
        file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
        # Function plugins: fixed button area
        for k in crazy_fns:
            if not crazy_fns[k].get("AsButton", True): continue
            click_handle = crazy_fns[k]["Button"].click(ArgsGeneralWrapper(crazy_fns[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
        for k in plugins:
            if not plugins[k].get("AsButton", True): continue
            click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
            click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
            cancel_handles.append(click_handle)
        # Function plugins: interaction between the dropdown and the morphing button
        def on_dropdown_changed(k):
            variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
            variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
            ret = {switchy_bt: gr.update(value=k, variant=variant)}
            if crazy_fns[k].get("AdvancedArgs", False):  # whether to reveal the advanced plugin-argument area
            if plugins[k].get("AdvancedArgs", False):  # whether to reveal the advanced plugin-argument area
                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
            else:
                ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
            return ret
@@ -183,13 +194,26 @@ def main():
        # Register the morphing button's callback
        def route(request: gr.Request, k, *args, **kwargs):
            if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
            yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(request, *args, **kwargs)
            yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
        click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
        click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
        cancel_handles.append(click_handle)
        # Register the stop buttons' callbacks
        stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
        stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
        plugins_as_btn = {name:plugin for name, plugin in plugins.items() if plugin.get('Button', None)}
        def on_group_change(group_list):
            btn_list = []
            fns_list = []
            if not group_list:  # special case: no plugin group selected at all
                return [*[plugin['Button'].update(visible=False) for _, plugin in plugins_as_btn.items()], gr.Dropdown.update(choices=[])]
            for k, plugin in plugins.items():
                if plugin.get("AsButton", True):
                    btn_list.append(plugin['Button'].update(visible=match_group(plugin['Group'], group_list)))  # refresh the buttons
                    if plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k)  # plugins that need advanced args also appear in the dropdown
                elif match_group(plugin['Group'], group_list): fns_list.append(k)  # refresh the dropdown list
            return [*btn_list, gr.Dropdown.update(choices=fns_list)]
        plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
        if ENABLE_AUDIO:
            from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
            rad = RealtimeAudioDistribution()
@@ -221,8 +245,10 @@ def main():

    auto_opentab_delay()
    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
        server_name="0.0.0.0", server_port=PORT,
        favicon_path="docs/logo.png", auth=AUTHENTICATION,
        server_name="0.0.0.0",
        server_port=PORT,
        favicon_path="docs/logo.png",
        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
        blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])

    # To run under a secondary path
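The new match_group lambda is the heart of the plugin-group filter: a plugin is shown when any of its `|`-separated group tags intersects the selected groups. A tiny self-contained illustration (the plugin table is hypothetical):

    match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])

    plugins = {
        "翻译PDF": {"Group": "学术|对话"},
        "解析项目": {"Group": "编程"},
    }
    selected = ["编程"]
    visible = [name for name, p in plugins.items() if match_group(p["Group"], selected)]
    print(visible)  # ['解析项目']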
@@ -19,6 +19,12 @@ from .bridge_chatgpt import predict as chatgpt_ui
from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
from .bridge_chatglm import predict as chatglm_ui

from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
from .bridge_chatglm import predict as chatglm_ui

from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
from .bridge_qianfan import predict as qianfan_ui

colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']

class LazyloadTiktoken(object):
@@ -165,7 +171,14 @@ model_info = {
        "tokenizer": tokenizer_gpt35,
        "token_cnt": get_token_num_gpt35,
    },

    "qianfan": {
        "fn_with_ui": qianfan_ui,
        "fn_without_ui": qianfan_noui,
        "endpoint": None,
        "max_token": 2000,
        "tokenizer": tokenizer_gpt35,
        "token_cnt": get_token_num_gpt35,
    },
}

# -=-=-=-=-=-=- the models below are newly added and may carry extra dependencies -=-=-=-=-=-=-
@@ -385,6 +398,38 @@ if "spark" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
        })
    except:
        print(trimmed_format_exc())
if "sparkv2" in AVAIL_LLM_MODELS:   # iFlytek Spark LLM
    try:
        from .bridge_spark import predict_no_ui_long_connection as spark_noui
        from .bridge_spark import predict as spark_ui
        model_info.update({
            "sparkv2": {
                "fn_with_ui": spark_ui,
                "fn_without_ui": spark_noui,
                "endpoint": None,
                "max_token": 4096,
                "tokenizer": tokenizer_gpt35,
                "token_cnt": get_token_num_gpt35,
            }
        })
    except:
        print(trimmed_format_exc())
if "llama2" in AVAIL_LLM_MODELS:   # llama2
    try:
        from .bridge_llama2 import predict_no_ui_long_connection as llama2_noui
        from .bridge_llama2 import predict as llama2_ui
        model_info.update({
            "llama2": {
                "fn_with_ui": llama2_ui,
                "fn_without_ui": llama2_noui,
                "endpoint": None,
                "max_token": 4096,
                "tokenizer": tokenizer_gpt35,
                "token_cnt": get_token_num_gpt35,
            }
        })
    except:
        print(trimmed_format_exc())
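model_info works as a routing table keyed by model name, which is why every optional model registers itself inside a try/except: a failed import simply leaves the key absent. A sketch of how a caller can resolve a handler defensively (the helper name is hypothetical):

    def resolve_model_fn(model_info, llm_model, with_ui=True):
        # Fail early and clearly when the bridge never registered,
        # e.g. because its optional dependencies are missing.
        if llm_model not in model_info:
            raise KeyError(f"model '{llm_model}' is not registered in model_info")
        entry = model_info[llm_model]
        return entry["fn_with_ui"] if with_ui else entry["fn_without_ui"]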
@@ -63,9 +63,9 @@ class GetGLMFTHandle(Process):
            # if not os.path.exists(conf): raise RuntimeError('找不到微调模型信息')
            # with open(conf, 'r', encoding='utf8') as f:
            #     model_args = json.loads(f.read())
            ChatGLM_PTUNING_CHECKPOINT, = get_conf('ChatGLM_PTUNING_CHECKPOINT')
            assert os.path.exists(ChatGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
            conf = os.path.join(ChatGLM_PTUNING_CHECKPOINT, "config.json")
            CHATGLM_PTUNING_CHECKPOINT, = get_conf('CHATGLM_PTUNING_CHECKPOINT')
            assert os.path.exists(CHATGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
            conf = os.path.join(CHATGLM_PTUNING_CHECKPOINT, "config.json")
            with open(conf, 'r', encoding='utf8') as f:
                model_args = json.loads(f.read())
            if 'model_name_or_path' not in model_args:
@@ -78,9 +78,9 @@ class GetGLMFTHandle(Process):
            config.pre_seq_len = model_args['pre_seq_len']
            config.prefix_projection = model_args['prefix_projection']

            print(f"Loading prefix_encoder weight from {ChatGLM_PTUNING_CHECKPOINT}")
            print(f"Loading prefix_encoder weight from {CHATGLM_PTUNING_CHECKPOINT}")
            model = AutoModel.from_pretrained(model_args['model_name_or_path'], config=config, trust_remote_code=True)
            prefix_state_dict = torch.load(os.path.join(ChatGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
            prefix_state_dict = torch.load(os.path.join(CHATGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
            new_prefix_state_dict = {}
            for k, v in prefix_state_dict.items():
                if k.startswith("transformer.prefix_encoder."):
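The hunk is cut off inside the re-keying loop. For context, the standard P-Tuning v2 loading recipe that this code follows strips the "transformer.prefix_encoder." prefix and loads the result into the prefix encoder (a sketch matching the ChatGLM ptuning recipe, not a verbatim continuation of the file):

    new_prefix_state_dict = {}
    for k, v in prefix_state_dict.items():
        if k.startswith("transformer.prefix_encoder."):
            # keep only the prefix-encoder weights, dropping the module prefix
            new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
    model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)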
@@ -137,6 +137,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
    chatbot.append((inputs, ""))
    yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")  # refresh the UI

    # check mis-behavior
    if raw_input.startswith('private_upload/') and len(raw_input) == 34:
        chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需要点击“函数插件区”按钮进行处理,而不是点击“提交”按钮。")
        yield from update_ui(chatbot=chatbot, history=history, msg="正常")  # refresh the UI
        time.sleep(2)

    try:
        headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
    except RuntimeError as e:
@@ -177,14 +183,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                yield from update_ui(chatbot=chatbot, history=history, msg="非Openai官方接口返回了错误:" + chunk.decode())  # refresh the UI
                return

            # print(chunk.decode()[6:])
            if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()):
            chunk_decoded = chunk.decode()
            if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
                # the first frame of the stream carries no content
                is_head_of_the_stream = False; continue

            if chunk:
                try:
                    chunk_decoded = chunk.decode()
                    # the former is API2D's termination condition, the latter is OpenAI's
                    if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
                        # the stream is judged finished, and gpt_replying_buffer is complete as well
@@ -216,7 +221,6 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
        history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
                               max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token']))  # release at least half of the history
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
        # history = []    # clear the history
    elif "does not exist" in error_msg:
        chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
    elif "Incorrect API key" in error_msg:
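The head-of-stream guard above skips SSE frames until a frame actually carries content, and a separate check later catches both termination conventions. Condensed into one loop (the frame format is illustrative of OpenAI-style "data: {...}" chunks):

    import json

    def extract_deltas(sse_chunks):
        head = True
        for chunk in sse_chunks:
            if head and '"content"' not in chunk:
                # the first frame carries no content, skip it
                head = False
                continue
            head = False
            if 'data: [DONE]' in chunk:
                return  # end-of-stream marker
            delta = json.loads(chunk[len('data: '):])['choices'][0]['delta']
            if not delta:
                return  # empty delta is the other end-of-stream convention
            yield delta.get('content', '')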
request_llm/bridge_llama2.py (new file, 91 lines)
@@ -0,0 +1,91 @@
model_name = "LLaMA"
cmd_to_install = "`pip install -r request_llm/requirements_chatglm.txt`"


from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from toolbox import update_ui, get_conf, ProxyNetworkActivate
from multiprocessing import Process, Pipe
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
from threading import Thread


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
@SingletonLocalLLM
class GetONNXGLMHandle(LocalLLMHandle):

    def load_model_info(self):
        # 🏃♂️🏃♂️🏃♂️ runs in the child process
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 🏃♂️🏃♂️🏃♂️ runs in the child process
        import os, glob
        import os
        import platform
        huggingface_token, device = get_conf('HUGGINGFACE_ACCESS_TOKEN', 'LOCAL_MODEL_DEVICE')
        assert len(huggingface_token) != 0, "没有填写 HUGGINGFACE_ACCESS_TOKEN"
        with open(os.path.expanduser('~/.cache/huggingface/token'), 'w') as f:
            f.write(huggingface_token)
        model_id = 'meta-llama/Llama-2-7b-chat-hf'
        with ProxyNetworkActivate():
            self._tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=huggingface_token)
            # use fp16
            model = AutoModelForCausalLM.from_pretrained(model_id, use_auth_token=huggingface_token).eval()
            if device.startswith('cuda'): model = model.half().to(device)
            self._model = model

        return self._model, self._tokenizer

    def llm_stream_generator(self, **kwargs):
        # 🏃♂️🏃♂️🏃♂️ runs in the child process
        def adaptor(kwargs):
            query = kwargs['query']
            max_length = kwargs['max_length']
            top_p = kwargs['top_p']
            temperature = kwargs['temperature']
            history = kwargs['history']
            console_slience = kwargs.get('console_slience', True)
            return query, max_length, top_p, temperature, history, console_slience

        def convert_messages_to_prompt(query, history):
            prompt = ""
            for a, b in history:
                prompt += f"\n[INST]{a}[/INST]"
                prompt += "\n{b}" + b
            prompt += f"\n[INST]{query}[/INST]"
            return prompt

        query, max_length, top_p, temperature, history, console_slience = adaptor(kwargs)
        prompt = convert_messages_to_prompt(query, history)
        # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
        # code from transformers.llama
        streamer = TextIteratorStreamer(self._tokenizer)
        # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way.
        inputs = self._tokenizer([prompt], return_tensors="pt")
        prompt_tk_back = self._tokenizer.batch_decode(inputs['input_ids'])[0]

        generation_kwargs = dict(inputs.to(self._model.device), streamer=streamer, max_new_tokens=max_length)
        thread = Thread(target=self._model.generate, kwargs=generation_kwargs)
        thread.start()
        generated_text = ""
        for new_text in streamer:
            generated_text += new_text
            if not console_slience: print(new_text, end='')
            yield generated_text.lstrip(prompt_tk_back).rstrip("</s>")
        if not console_slience: print()
        # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-

    def try_to_import_special_deps(self, **kwargs):
        # import something that will raise error if the user does not install requirement_*.txt
        # 🏃♂️🏃♂️🏃♂️ runs in the main process
        import importlib
        importlib.import_module('transformers')


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
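One caveat when reusing the streaming loop above: str.lstrip(prompt_tk_back) strips a character set, not a prefix, so it can also eat leading characters of the reply that happen to occur anywhere in the prompt. An exact-prefix variant would look like this (a suggested correction, not the file's code):

    def strip_prompt_prefix(text: str, prompt: str) -> str:
        # Remove the echoed prompt only when it is a true prefix of the output.
        return text[len(prompt):] if text.startswith(prompt) else text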
request_llm/bridge_qianfan.py (new file, 165 lines)
@@ -0,0 +1,165 @@

import time, requests, json
from multiprocessing import Process, Pipe
from functools import wraps
from datetime import datetime, timedelta
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, get_conf

model_name = '千帆大模型平台'
timeout_bot_msg = '[Local Message] Request timeout. Network error.'

def cache_decorator(timeout):
    cache = {}
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (func.__name__, args, frozenset(kwargs.items()))
            # Check if result is already cached and not expired
            if key in cache:
                result, timestamp = cache[key]
                if datetime.now() - timestamp < timedelta(seconds=timeout):
                    return result

            # Call the function and cache the result
            result = func(*args, **kwargs)
            cache[key] = (result, datetime.now())
            return result
        return wrapper
    return decorator

@cache_decorator(timeout=3600)
def get_access_token():
    """
    Generate an auth signature (access token) from the AK/SK pair.
    :return: the access_token, or None on error
    """
    # if (access_token_cache is None) or (time.time() - last_access_token_obtain_time > 3600):
    BAIDU_CLOUD_API_KEY, BAIDU_CLOUD_SECRET_KEY = get_conf('BAIDU_CLOUD_API_KEY', 'BAIDU_CLOUD_SECRET_KEY')

    if len(BAIDU_CLOUD_SECRET_KEY) == 0: raise RuntimeError("没有配置BAIDU_CLOUD_SECRET_KEY")
    if len(BAIDU_CLOUD_API_KEY) == 0: raise RuntimeError("没有配置BAIDU_CLOUD_API_KEY")

    url = "https://aip.baidubce.com/oauth/2.0/token"
    params = {"grant_type": "client_credentials", "client_id": BAIDU_CLOUD_API_KEY, "client_secret": BAIDU_CLOUD_SECRET_KEY}
    access_token_cache = str(requests.post(url, params=params).json().get("access_token"))
    return access_token_cache
    # else:
    #     return access_token_cache


def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
    conversation_cnt = len(history) // 2
    if system_prompt == "": system_prompt = "Hello"
    messages = [{"role": "user", "content": system_prompt}]
    messages.append({"role": "assistant", "content": 'Certainly!'})
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
            what_i_have_asked["content"] = history[index] if history[index]!="" else "Hello"
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
            what_gpt_answer["content"] = history[index+1] if history[index]!="" else "Hello"
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "": continue
                if what_gpt_answer["content"] == timeout_bot_msg: continue
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
            else:
                messages[-1]['content'] = what_gpt_answer['content']
    what_i_ask_now = {}
    what_i_ask_now["role"] = "user"
    what_i_ask_now["content"] = inputs
    messages.append(what_i_ask_now)
    return messages


def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
    BAIDU_CLOUD_QIANFAN_MODEL, = get_conf('BAIDU_CLOUD_QIANFAN_MODEL')

    url_lib = {
        "ERNIE-Bot":        "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions",
        "ERNIE-Bot-turbo":  "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant",
        "BLOOMZ-7B":        "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/bloomz_7b1",

        "Llama-2-70B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_70b",
        "Llama-2-13B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_13b",
        "Llama-2-7B-Chat":  "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_7b",
    }

    url = url_lib[BAIDU_CLOUD_QIANFAN_MODEL]

    url += "?access_token=" + get_access_token()


    payload = json.dumps({
        "messages": generate_message_payload(inputs, llm_kwargs, history, system_prompt),
        "stream": True
    })
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload, stream=True)
    buffer = ""
    for line in response.iter_lines():
        if len(line) == 0: continue
        try:
            dec = line.decode().lstrip('data:')
            dec = json.loads(dec)
            incoming = dec['result']
            buffer += incoming
            yield buffer
        except:
            if ('error_code' in dec) and ("max length" in dec['error_msg']):
                raise ConnectionAbortedError(dec['error_msg'])  # context too long, token overflow
            elif ('error_code' in dec):
                raise RuntimeError(dec['error_msg'])


def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
    """
    ⭐ Multi-threaded method
    See request_llm/bridge_all.py for the function's documentation
    """
    watch_dog_patience = 5
    response = ""

    for response in generate_from_baidu_qianfan(inputs, llm_kwargs, history, sys_prompt):
        if len(observe_window) >= 1:
            observe_window[0] = response
        if len(observe_window) >= 2:
            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
    return response

def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
    """
    ⭐ Single-threaded method
    See request_llm/bridge_all.py for the function's documentation
    """
    chatbot.append((inputs, ""))

    if additional_fn is not None:
        from core_functional import handle_core_functionality
        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

    yield from update_ui(chatbot=chatbot, history=history)
    # start receiving the reply
    try:
        for response in generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
            chatbot[-1] = (inputs, response)
            yield from update_ui(chatbot=chatbot, history=history)
    except ConnectionAbortedError as e:
        from .bridge_all import model_info
        if len(history) >= 2: history[-1] = ""; history[-2] = ""  # drop the overflowing round: history[-2] is this input, history[-1] this output
        history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
                               max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token']))  # release at least half of the history
        chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
        yield from update_ui(chatbot=chatbot, history=history, msg="异常")  # refresh the UI
        return

    # summarize the output
    response = f"[Local Message]: {model_name}响应异常 ..."
    if response == f"[Local Message]: 等待{model_name}响应中 ...":
        response = f"[Local Message]: {model_name}响应异常 ..."
    history.extend([inputs, response])
    yield from update_ui(chatbot=chatbot, history=history)
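cache_decorator(timeout=3600) gives get_access_token a one-hour memoization window, so a token is fetched at most once per hour per argument tuple. The semantics in miniature (timings illustrative):

    import time

    calls = 0

    @cache_decorator(timeout=2)
    def fetch_token():
        global calls
        calls += 1
        return f"token-{calls}"

    assert fetch_token() == fetch_token() == "token-1"  # served from the cache
    time.sleep(2.1)
    assert fetch_token() == "token-2"                   # entry expired, real call again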
@@ -30,6 +30,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
    See request_llm/bridge_all.py for the function's documentation
    """
    chatbot.append((inputs, ""))
    yield from update_ui(chatbot=chatbot, history=history)

    if additional_fn is not None:
        from core_functional import handle_core_functionality

@@ -63,6 +63,8 @@ class SparkRequestInstance():
        self.api_secret = XFYUN_API_SECRET
        self.api_key = XFYUN_API_KEY
        self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
        self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"

        self.time_to_yield_event = threading.Event()
        self.time_to_exit_event = threading.Event()

@@ -83,7 +85,12 @@ class SparkRequestInstance():


    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
        wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, self.gpt_url)
        if llm_kwargs['llm_model'] == 'sparkv2':
            gpt_url = self.gpt_url_v2
        else:
            gpt_url = self.gpt_url

        wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
        websocket.enableTrace(False)
        wsUrl = wsParam.create_url()

@@ -167,7 +174,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
        },
        "parameter": {
            "chat": {
                "domain": "general",
                "domain": "generalv2" if llm_kwargs['llm_model'] == 'sparkv2' else "general",
                "temperature": llm_kwargs["temperature"],
                "random_threshold": 0.5,
                "max_tokens": 4096,
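The v1/v2 switch now lives in two places (the websocket URL and the "domain" field), which must stay in sync. A table-driven sketch that keeps them together (an alternative arrangement, not the project's code):

    SPARK_VERSIONS = {
        "spark":   {"url": "ws://spark-api.xf-yun.com/v1.1/chat", "domain": "general"},
        "sparkv2": {"url": "ws://spark-api.xf-yun.com/v2.1/chat", "domain": "generalv2"},
    }

    def spark_route(llm_model):
        # Unknown names fall back to v1, mirroring the else-branch above.
        return SPARK_VERSIONS.get(llm_model, SPARK_VERSIONS["spark"])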
@@ -128,7 +128,7 @@ def get_local_llm_predict_fns(LLMSingletonClass, model_name):

        # chatglm has no sys_prompt interface, so the prompt is packed into the history
        history_feedin = []
        history_feedin.append(["What can I do?", sys_prompt])
        history_feedin.append([sys_prompt, "Certainly!"])
        for i in range(len(history)//2):
            history_feedin.append([history[2*i], history[2*i+1]] )

@@ -161,7 +161,7 @@ def get_local_llm_predict_fns(LLMSingletonClass, model_name):

        # handle the history
        history_feedin = []
        history_feedin.append(["What can I do?", system_prompt] )
        history_feedin.append([system_prompt, "Certainly!"])
        for i in range(len(history)//2):
            history_feedin.append([history[2*i], history[2*i+1]] )
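The change flips the seeded turn: the system prompt now plays the user side and "Certainly!" the canned assistant reply, instead of the prompt masquerading as an answer to "What can I do?". The resulting history layout, in miniature (values illustrative):

    sys_prompt = "You are a helpful coder."
    history = ["hi", "hello!"]

    history_feedin = [[sys_prompt, "Certainly!"]]
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]])
    print(history_feedin)  # [['You are a helpful coder.', 'Certainly!'], ['hi', 'hello!']]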
@@ -18,5 +18,6 @@ openai
numpy
arxiv
rich
websocket-client
pypdf2==2.12.1
websocket-client
scipdf_parser==0.3
@@ -9,9 +9,13 @@ validate_path()  # return the project root path
from tests.test_utils import plugin_test

if __name__ == "__main__":
    plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')

    plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
    plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件,对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')

    # plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')

    # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")

    # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")

@@ -19,7 +23,7 @@ if __name__ == "__main__":

    # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md")

    # plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input="crazy_functions/test_project/pdf_and_word")
    # plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')

    # plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
themes/common.css (new file, 21 lines)
@@ -0,0 +1,21 @@
/* hide remove all button */
.remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
    visibility: hidden;
}

/* hide selector border */
#input-plugin-group .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
    border: 0px;
    box-shadow: none;
}

/* hide selector label */
#input-plugin-group .svelte-1gfkn6j {
    visibility: hidden;
}


/* height of the upload box */
.wrap.svelte-xwlu1w {
    min-height: var(--size-32);
}
@@ -1,6 +1,6 @@
function ChatBotHeight() {
    function update_height(){
        var { panel_height_target, chatbot_height, chatbot } = get_elements();
        var { panel_height_target, chatbot_height, chatbot } = get_elements(true);
        if (panel_height_target!=chatbot_height)
        {
            var pixelString = panel_height_target.toString() + 'px';
@@ -28,18 +28,24 @@ function ChatBotHeight() {
    }, 50);   // run every 50 milliseconds
}

function get_elements() {
function get_elements(consider_state_panel=false) {
    var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
    if (!chatbot) {
        chatbot = document.querySelector('#gpt-chatbot');
    }
    const panel1 = document.querySelector('#input-panel');
    const panel2 = document.querySelector('#basic-panel');
    const panel3 = document.querySelector('#plugin-panel');
    const panel4 = document.querySelector('#interact-panel');
    const panel5 = document.querySelector('#input-panel2');
    const panel_active = document.querySelector('#state-panel');
    var panel_height_target = (20-panel_active.offsetHeight) + panel1.offsetHeight + panel2.offsetHeight + panel3.offsetHeight + panel4.offsetHeight + panel5.offsetHeight + 21;
    const panel1 = document.querySelector('#input-panel').getBoundingClientRect();
    const panel2 = document.querySelector('#basic-panel').getBoundingClientRect()
    const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect();
    const panel4 = document.querySelector('#interact-panel').getBoundingClientRect();
    const panel5 = document.querySelector('#input-panel2').getBoundingClientRect();
    const panel_active = document.querySelector('#state-panel').getBoundingClientRect();
    if (consider_state_panel || panel_active.height < 25){
        document.state_panel_height = panel_active.height;
    }
    // 25 is the chatbot label height; 16 is the right-hand gap
    var panel_height_target = panel1.height + panel2.height + panel3.height + panel4.height + panel5.height - 25 + 16*3;
    // keep the dynamic state-panel height from interfering
    panel_height_target = panel_height_target + (document.state_panel_height-panel_active.height)
    var panel_height_target = parseInt(panel_height_target);
    var chatbot_height = chatbot.style.height;
    var chatbot_height = parseInt(chatbot_height);
themes/contrast.css (new file, 482 lines)
@@ -0,0 +1,482 @@
|
||||
:root {
|
||||
--body-text-color: #FFFFFF;
|
||||
--link-text-color: #FFFFFF;
|
||||
--link-text-color-active: #FFFFFF;
|
||||
--link-text-color-hover: #FFFFFF;
|
||||
--link-text-color-visited: #FFFFFF;
|
||||
--body-text-color-subdued: #FFFFFF;
|
||||
--block-info-text-color: #FFFFFF;
|
||||
--block-label-text-color: #FFFFFF;
|
||||
--block-title-text-color: #FFFFFF;
|
||||
--checkbox-label-text-color: #FFFFFF;
|
||||
--checkbox-label-text-color-selected: #FFFFFF;
|
||||
--error-text-color: #FFFFFF;
|
||||
--button-cancel-text-color: #FFFFFF;
|
||||
--button-cancel-text-color-hover: #FFFFFF;
|
||||
--button-primary-text-color: #FFFFFF;
|
||||
--button-primary-text-color-hover: #FFFFFF;
|
||||
--button-secondary-text-color: #FFFFFF;
|
||||
--button-secondary-text-color-hover: #FFFFFF;
|
||||
|
||||
|
||||
--border-bottom-right-radius: 0px;
|
||||
--border-bottom-left-radius: 0px;
|
||||
--border-top-right-radius: 0px;
|
||||
--border-top-left-radius: 0px;
|
||||
--block-radius: 0px;
|
||||
--button-large-radius: 0px;
|
||||
--button-small-radius: 0px;
|
||||
--block-background-fill: #000000;
|
||||
|
||||
--border-color-accent: #3cff00;
|
||||
--border-color-primary: #3cff00;
|
||||
--block-border-color: #3cff00;
|
||||
--block-label-border-color: #3cff00;
|
||||
--block-title-border-color: #3cff00;
|
||||
--panel-border-color: #3cff00;
|
||||
--checkbox-border-color: #3cff00;
|
||||
--checkbox-border-color-focus: #3cff00;
|
||||
--checkbox-border-color-hover: #3cff00;
|
||||
--checkbox-border-color-selected: #3cff00;
|
||||
--checkbox-label-border-color: #3cff00;
|
||||
--checkbox-label-border-color-hover: #3cff00;
|
||||
--error-border-color: #3cff00;
|
||||
--input-border-color: #3cff00;
|
||||
--input-border-color-focus: #3cff00;
|
||||
--input-border-color-hover: #3cff00;
|
||||
--table-border-color: #3cff00;
|
||||
--button-cancel-border-color: #3cff00;
|
||||
--button-cancel-border-color-hover: #3cff00;
|
||||
--button-primary-border-color: #3cff00;
|
||||
--button-primary-border-color-hover: #3cff00;
|
||||
--button-secondary-border-color: #3cff00;
|
||||
--button-secondary-border-color-hover: #3cff00;
|
||||
|
||||
|
||||
--body-background-fill: #000000;
|
||||
--background-fill-primary: #000000;
|
||||
--background-fill-secondary: #000000;
|
||||
--block-background-fill: #000000;
|
||||
--block-label-background-fill: #000000;
|
||||
--block-title-background-fill: #000000;
|
||||
--panel-background-fill: #000000;
|
||||
--chatbot-code-background-color: #000000;
|
||||
--checkbox-background-color: #000000;
|
||||
--checkbox-background-color-focus: #000000;
|
||||
--checkbox-background-color-hover: #000000;
|
||||
--checkbox-background-color-selected: #000000;
|
||||
--checkbox-label-background-fill: #000000;
|
||||
--checkbox-label-background-fill-hover: #000000;
|
||||
--checkbox-label-background-fill-selected: #000000;
|
||||
--error-background-fill: #000000;
|
||||
--input-background-fill: #000000;
|
||||
--input-background-fill-focus: #000000;
|
||||
--input-background-fill-hover: #000000;
|
||||
--stat-background-fill: #000000;
|
||||
--table-even-background-fill: #000000;
|
||||
--table-odd-background-fill: #000000;
|
||||
--button-cancel-background-fill: #000000;
|
||||
--button-cancel-background-fill-hover: #000000;
|
||||
--button-primary-background-fill: #000000;
|
||||
--button-primary-background-fill-hover: #000000;
|
||||
--button-secondary-background-fill: #000000;
|
||||
--button-secondary-background-fill-hover: #000000;
|
||||
--color-accent-soft: #000000;
|
||||
}
|
||||
|
||||
.dark {
|
||||
--body-text-color: #FFFFFF;
|
||||
--link-text-color: #FFFFFF;
|
||||
--link-text-color-active: #FFFFFF;
|
||||
--link-text-color-hover: #FFFFFF;
|
||||
--link-text-color-visited: #FFFFFF;
|
||||
--body-text-color-subdued: #FFFFFF;
|
||||
--block-info-text-color: #FFFFFF;
|
||||
--block-label-text-color: #FFFFFF;
|
||||
--block-title-text-color: #FFFFFF;
|
||||
--checkbox-label-text-color: #FFFFFF;
|
||||
--checkbox-label-text-color-selected: #FFFFFF;
|
||||
--error-text-color: #FFFFFF;
|
||||
--button-cancel-text-color: #FFFFFF;
|
||||
--button-cancel-text-color-hover: #FFFFFF;
|
||||
--button-primary-text-color: #FFFFFF;
|
||||
--button-primary-text-color-hover: #FFFFFF;
|
||||
--button-secondary-text-color: #FFFFFF;
|
||||
--button-secondary-text-color-hover: #FFFFFF;
|
||||
|
||||
|
||||
|
||||
--border-bottom-right-radius: 0px;
|
||||
--border-bottom-left-radius: 0px;
|
||||
--border-top-right-radius: 0px;
|
||||
--border-top-left-radius: 0px;
|
||||
--block-radius: 0px;
|
||||
--button-large-radius: 0px;
|
||||
--button-small-radius: 0px;
|
||||
--block-background-fill: #000000;
|
||||
|
||||
--border-color-accent: #3cff00;
|
||||
--border-color-primary: #3cff00;
|
||||
--block-border-color: #3cff00;
|
||||
--block-label-border-color: #3cff00;
|
||||
--block-title-border-color: #3cff00;
|
||||
--panel-border-color: #3cff00;
|
||||
--checkbox-border-color: #3cff00;
|
||||
--checkbox-border-color-focus: #3cff00;
|
||||
--checkbox-border-color-hover: #3cff00;
|
||||
--checkbox-border-color-selected: #3cff00;
|
||||
--checkbox-label-border-color: #3cff00;
|
||||
--checkbox-label-border-color-hover: #3cff00;
|
||||
--error-border-color: #3cff00;
|
||||
--input-border-color: #3cff00;
|
||||
--input-border-color-focus: #3cff00;
|
||||
--input-border-color-hover: #3cff00;
|
||||
--table-border-color: #3cff00;
|
||||
--button-cancel-border-color: #3cff00;
|
||||
--button-cancel-border-color-hover: #3cff00;
|
||||
--button-primary-border-color: #3cff00;
|
||||
--button-primary-border-color-hover: #3cff00;
|
||||
--button-secondary-border-color: #3cff00;
|
||||
--button-secondary-border-color-hover: #3cff00;
|
||||
|
||||
|
||||
--body-background-fill: #000000;
|
||||
--background-fill-primary: #000000;
|
||||
--background-fill-secondary: #000000;
|
||||
--block-background-fill: #000000;
|
||||
--block-label-background-fill: #000000;
|
||||
--block-title-background-fill: #000000;
|
||||
--panel-background-fill: #000000;
|
||||
--chatbot-code-background-color: #000000;
|
||||
--checkbox-background-color: #000000;
|
||||
--checkbox-background-color-focus: #000000;
|
||||
--checkbox-background-color-hover: #000000;
|
||||
--checkbox-background-color-selected: #000000;
|
||||
--checkbox-label-background-fill: #000000;
|
||||
--checkbox-label-background-fill-hover: #000000;
|
||||
--checkbox-label-background-fill-selected: #000000;
|
||||
--error-background-fill: #000000;
|
||||
--input-background-fill: #000000;
|
||||
--input-background-fill-focus: #000000;
|
||||
--input-background-fill-hover: #000000;
|
||||
--stat-background-fill: #000000;
|
||||
--table-even-background-fill: #000000;
|
||||
--table-odd-background-fill: #000000;
|
||||
--button-cancel-background-fill: #000000;
|
||||
--button-cancel-background-fill-hover: #000000;
|
||||
--button-primary-background-fill: #000000;
|
||||
--button-primary-background-fill-hover: #000000;
|
||||
--button-secondary-background-fill: #000000;
|
||||
--button-secondary-background-fill-hover: #000000;
|
||||
--color-accent-soft: #000000;
|
||||
}
|
||||
|
||||
|
||||
|
||||
.block.svelte-mppz8v {
|
||||
border-color: #3cff00;
|
||||
}
|
||||
|
||||
/* 插件下拉菜单 */
|
||||
#plugin-panel .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
|
||||
box-shadow: var(--input-shadow);
|
||||
border: var(--input-border-width) dashed var(--border-color-primary);
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
#plugin-panel .dropdown-arrow.svelte-p5edak {
|
||||
width: 50px;
|
||||
}
|
||||
#plugin-panel input.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
|
||||
padding-left: 5px;
|
||||
}
|
||||
.root{
|
||||
border-bottom-right-radius: 0px;
|
||||
border-bottom-left-radius: 0px;
|
||||
border-top-right-radius: 0px;
|
||||
border-top-left-radius: 0px;
|
||||
}
|
||||
|
||||
/* 小按钮 */
|
||||
.sm.svelte-1ipelgc {
|
||||
font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
|
||||
--button-small-text-weight: 600;
|
||||
--button-small-text-size: 16px;
|
||||
border-bottom-right-radius: 0px;
|
||||
border-bottom-left-radius: 0px;
|
||||
border-top-right-radius: 0px;
|
||||
border-top-left-radius: 0px;
|
||||
}
|
||||
|
||||
#plugin-panel .sm.svelte-1ipelgc {
|
||||
font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
|
||||
--button-small-text-weight: 400;
|
||||
--button-small-text-size: 14px;
|
||||
border-bottom-right-radius: 0px;
|
||||
border-bottom-left-radius: 0px;
|
||||
border-top-right-radius: 0px;
|
||||
border-top-left-radius: 0px;
|
||||
}
|
||||
|
||||
.wrap-inner.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
|
||||
padding: 0%;
|
||||
}
|
||||
|
||||
.markdown-body table {
|
||||
margin: 1em 0;
|
||||
border-collapse: collapse;
|
||||
empty-cells: show;
|
||||
}
|
||||
|
||||
.markdown-body th, .markdown-body td {
|
||||
border: 1.2px solid var(--border-color-primary);
|
||||
padding: 5px;
|
||||
}
|
||||
|
||||
.markdown-body thead {
|
||||
background-color: rgb(0, 0, 0);
|
||||
}
|
||||
|
||||
.markdown-body thead th {
|
||||
padding: .5em .2em;
|
||||
}
|
||||
|
||||
.normal_mut_select .svelte-1gfkn6j {
|
||||
float: left;
|
||||
width: auto;
|
||||
line-height: 260% !important;
|
||||
}
|
||||
|
||||
.markdown-body ol, .markdown-body ul {
|
||||
padding-inline-start: 2em !important;
|
||||
}
|
||||
|
||||
/* chat box. */
|
||||
[class *= "message"] {
|
||||
border-radius: var(--radius-xl) !important;
|
||||
/* padding: var(--spacing-xl) !important; */
|
||||
/* font-size: var(--text-md) !important; */
|
||||
/* line-height: var(--line-md) !important; */
|
||||
/* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
|
||||
/* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
|
||||
}
|
||||
[data-testid = "bot"] {
|
||||
max-width: 95%;
|
||||
/* width: auto !important; */
|
||||
border-bottom-left-radius: 0 !important;
|
||||
}
|
||||
[data-testid = "user"] {
|
||||
max-width: 100%;
|
||||
/* width: auto !important; */
|
||||
border-bottom-right-radius: 0 !important;
|
||||
}
|
||||
|
||||
/* linein code block. */
|
||||
.markdown-body code {
|
||||
display: inline;
|
||||
white-space: break-spaces;
|
||||
border-radius: 6px;
|
||||
margin: 0 2px 0 2px;
|
||||
padding: .2em .4em .1em .4em;
|
||||
background-color: rgba(0, 0, 0, 0.95);
|
||||
color: #c9d1d9;
|
||||
}
|
||||
|
||||
.dark .markdown-body code {
|
||||
display: inline;
|
||||
white-space: break-spaces;
|
||||
border-radius: 6px;
|
||||
margin: 0 2px 0 2px;
|
||||
padding: .2em .4em .1em .4em;
|
||||
background-color: rgba(0,0,0,0.2);
|
||||
}
|
||||
|
||||
/* code block css */
|
||||
.markdown-body pre code {
|
||||
display: block;
|
||||
overflow: auto;
|
||||
white-space: pre;
|
||||
background-color: rgba(0, 0, 0, 0.95);
|
||||
border-radius: 10px;
|
||||
padding: 1em;
|
||||
margin: 1em 2em 1em 0.5em;
|
||||
}
|
||||
|
||||
.dark .markdown-body pre code {
|
||||
display: block;
|
||||
overflow: auto;
|
||||
white-space: pre;
|
||||
background-color: rgba(0,0,0,0.2);
|
||||
border-radius: 10px;
|
||||
padding: 1em;
|
||||
margin: 1em 2em 1em 0.5em;
|
||||
}
|
||||
|
||||
/* .mic-wrap.svelte-1thnwz {
|
||||
|
||||
} */
|
||||
.block.svelte-mppz8v > .mic-wrap.svelte-1thnwz{
|
||||
justify-content: center;
|
||||
display: flex;
|
||||
padding: 0;
|
||||
|
||||
}
|
||||
|
||||
.codehilite .hll { background-color: #6e7681 }
|
||||
.codehilite .c { color: #8b949e; font-style: italic } /* Comment */
|
||||
.codehilite .err { color: #f85149 } /* Error */
|
||||
.codehilite .esc { color: #c9d1d9 } /* Escape */
|
||||
.codehilite .g { color: #c9d1d9 } /* Generic */
|
||||
.codehilite .k { color: #ff7b72 } /* Keyword */
|
||||
.codehilite .l { color: #a5d6ff } /* Literal */
|
||||
.codehilite .n { color: #c9d1d9 } /* Name */
|
||||
.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */
|
||||
.codehilite .x { color: #c9d1d9 } /* Other */
|
||||
.codehilite .p { color: #c9d1d9 } /* Punctuation */
|
||||
.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */
|
||||
.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */
|
||||
.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */
|
||||
.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */
|
||||
.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */
|
||||
.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */
|
||||
.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */
|
||||
.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */
|
||||
.codehilite .gr { color: #ffa198 } /* Generic.Error */
|
||||
.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */
|
||||
.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */
|
||||
.codehilite .go { color: #8b949e } /* Generic.Output */
|
||||
.codehilite .gp { color: #8b949e } /* Generic.Prompt */
|
||||
.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */
|
||||
.codehilite .gu { color: #79c0ff } /* Generic.Subheading */
|
||||
.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */
|
||||
.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */
|
||||
.codehilite .kc { color: #79c0ff } /* Keyword.Constant */
|
||||
.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */
|
||||
.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */
|
||||
.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */
|
||||
.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */
|
||||
.codehilite .kt { color: #ff7b72 } /* Keyword.Type */
|
||||
.codehilite .ld { color: #79c0ff } /* Literal.Date */
|
||||
.codehilite .m { color: #a5d6ff } /* Literal.Number */
|
||||
.codehilite .s { color: #a5d6ff } /* Literal.String */
|
||||
.codehilite .na { color: #c9d1d9 } /* Name.Attribute */
|
||||
.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */
|
||||
.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */
|
||||
.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */
|
||||
.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */
|
||||
.codehilite .ni { color: #ffa657 } /* Name.Entity */
|
||||
.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */
|
||||
.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */
|
||||
.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */
|
||||
.codehilite .nn { color: #ff7b72 } /* Name.Namespace */
|
||||
.codehilite .nx { color: #c9d1d9 } /* Name.Other */
|
||||
.codehilite .py { color: #79c0ff } /* Name.Property */
|
||||
.codehilite .nt { color: #7ee787 } /* Name.Tag */
|
||||
.codehilite .nv { color: #79c0ff } /* Name.Variable */
|
||||
.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */
|
||||
.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */
|
||||
.codehilite .w { color: #6e7681 } /* Text.Whitespace */
|
||||
.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */
|
||||
.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */
|
||||
.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */
|
||||
.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */
|
||||
.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */
|
||||
.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */
|
||||
.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */
|
||||
.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */
|
||||
.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */
|
||||
.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */
|
||||
.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */
|
||||
.codehilite .se { color: #79c0ff } /* Literal.String.Escape */
|
||||
.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */
|
||||
.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */
|
||||
.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */
|
||||
.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */
|
||||
.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */
|
||||
.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */
|
||||
.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */
|
||||
.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */
|
||||
.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */
|
||||
.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */
|
||||
.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */
|
||||
.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */
|
||||
.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */
|
||||
|
||||
.dark .codehilite .hll { background-color: #2C3B41 }
|
||||
.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */
|
||||
.dark .codehilite .err { color: #FF5370 } /* Error */
|
||||
.dark .codehilite .esc { color: #89DDFF } /* Escape */
.dark .codehilite .g { color: #EEFFFF } /* Generic */
.dark .codehilite .k { color: #BB80B3 } /* Keyword */
.dark .codehilite .l { color: #C3E88D } /* Literal */
.dark .codehilite .n { color: #EEFFFF } /* Name */
.dark .codehilite .o { color: #89DDFF } /* Operator */
.dark .codehilite .p { color: #89DDFF } /* Punctuation */
.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */
.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */
.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */
.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */
.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */
.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */
.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */
.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */
.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */
.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */
.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */
.dark .codehilite .go { color: #79d618 } /* Generic.Output */
.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */
.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */
.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */
.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */
.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */
.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */
.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */
.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */
.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */
.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */
.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */
.dark .codehilite .m { color: #F78C6C } /* Literal.Number */
.dark .codehilite .s { color: #C3E88D } /* Literal.String */
.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */
.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */
.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */
.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */
.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */
.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */
.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */
.dark .codehilite .nf { color: #82AAFF } /* Name.Function */
.dark .codehilite .nl { color: #82AAFF } /* Name.Label */
.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */
.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */
.dark .codehilite .py { color: #FFCB6B } /* Name.Property */
.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */
.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */
.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */
.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */
.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */
.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */
.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */
.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */
.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */
.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */
.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */
.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */
.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */
.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */
.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */
.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */
.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */
.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */
.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */
.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */
.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */
.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */
.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */
.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */
.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */
.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */
.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */
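These selectors follow Pygments' short token-class names (.k Keyword, .s Literal.String, .mi Literal.Number.Integer, and so on), which is what the markdown `codehilite` extension emits. A minimal sketch of producing markup that these dark-mode rules would style, assuming Pygments is installed:

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Emits <div class="codehilite">...</div> whose <span> token classes
# (.k, .s, .mi, ...) are exactly the ones colored by the rules above.
html = highlight('print("hi")  # demo', PythonLexer(), HtmlFormatter(cssclass="codehilite"))
print(html)
```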
88  themes/contrast.py  (normal file)
@@ -0,0 +1,88 @@
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')

def adjust_theme():
    try:
        color_er = gr.themes.utils.colors.fuchsia
        set_theme = gr.themes.Default(
            primary_hue=gr.themes.utils.colors.orange,
            neutral_hue=gr.themes.utils.colors.gray,
            font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
            font_mono=["ui-monospace", "Consolas", "monospace"])
        set_theme.set(
            # Colors
            input_background_fill_dark="*neutral_800",
            # Transition
            button_transition="none",
            # Shadows
            button_shadow="*shadow_drop",
            button_shadow_hover="*shadow_drop_lg",
            button_shadow_active="*shadow_inset",
            input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset",
            input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset",
            input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset",
            checkbox_label_shadow="*shadow_drop",
            block_shadow="*shadow_drop",
            form_gap_width="1px",
            # Button borders
            input_border_width="1px",
            input_background_fill="white",
            # Gradients
            stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)",
            stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)",
            error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)",
            error_background_fill_dark="*background_fill_primary",
            checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)",
            checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
            checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)",
            checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
            button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)",
            button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)",
            button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)",
            button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)",
            button_primary_border_color_dark="*primary_500",
            button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)",
            button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)",
            button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)",
            button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)",
            button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})",
            button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})",
            button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})",
            button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})",
            button_cancel_border_color=color_er.c200,
            button_cancel_border_color_dark=color_er.c600,
            button_cancel_text_color=color_er.c600,
            button_cancel_text_color_dark="white",
        )

        if LAYOUT == "TOP-DOWN":
            js = ""
        else:
            with open('themes/common.js', 'r', encoding='utf8') as f:
                js = f"<script>{f.read()}</script>"

        # Add a cute Live2D mascot
        if ADD_WAIFU:
            js += """
                <script src="file=docs/waifu_plugin/jquery.min.js"></script>
                <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
                <script src="file=docs/waifu_plugin/autoload.js"></script>
            """
        gradio_original_template_fn = gr.routes.templates.TemplateResponse
        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
            res.init_headers()
            return res
        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template
    except:
        set_theme = None
        print('gradio版本较旧, 不能自定义字体和颜色')
    return set_theme

with open("themes/contrast.css", "r", encoding="utf-8") as f:
    advanced_css = f.read()
with open("themes/common.css", "r", encoding="utf-8") as f:
    advanced_css += f.read()
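A hedged sketch of how a theme module like this is consumed by the app (the exact wiring in the project's main entry point may differ):

```python
import gradio as gr
from themes.contrast import adjust_theme, advanced_css

# adjust_theme() returns a customized gr.themes.Default, or None on old
# gradio versions; gr.Blocks accepts either.
with gr.Blocks(theme=adjust_theme(), css=advanced_css) as demo:
    gr.Markdown("High-Contrast theme preview")
demo.launch()
```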
@@ -1,3 +1,46 @@
.dark {
    --background-fill-primary: #050810;
    --body-background-fill: var(--background-fill-primary);
}
/* Plugin dropdown menu */
#plugin-panel .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
    box-shadow: var(--input-shadow);
    border: var(--input-border-width) dashed var(--border-color-primary);
    border-radius: 4px;
}

#plugin-panel .dropdown-arrow.svelte-p5edak {
    width: 50px;
}
#plugin-panel input.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
    padding-left: 5px;
}

/* Small buttons */
.sm.svelte-1ipelgc {
    font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
    --button-small-text-weight: 600;
    --button-small-text-size: 16px;
    border-bottom-right-radius: 6px;
    border-bottom-left-radius: 6px;
    border-top-right-radius: 6px;
    border-top-left-radius: 6px;
}

#plugin-panel .sm.svelte-1ipelgc {
    font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
    --button-small-text-weight: 400;
    --button-small-text-size: 14px;
    border-bottom-right-radius: 6px;
    border-bottom-left-radius: 6px;
    border-top-right-radius: 6px;
    border-top-left-radius: 6px;
}

.wrap-inner.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
    padding: 0%;
}

.markdown-body table {
    margin: 1em 0;
    border-collapse: collapse;
@@ -17,6 +60,12 @@
    padding: .5em .2em;
}

.normal_mut_select .svelte-1gfkn6j {
    float: left;
    width: auto;
    line-height: 260% !important;
}

.markdown-body ol, .markdown-body ul {
    padding-inline-start: 2em !important;
}
@@ -9,7 +9,7 @@ def adjust_theme():
     set_theme = gr.themes.Default(
         primary_hue=gr.themes.utils.colors.orange,
         neutral_hue=gr.themes.utils.colors.gray,
-        font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui"],
+        font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
         font_mono=["ui-monospace", "Consolas", "monospace"])
     set_theme.set(
         # Colors
@@ -84,3 +84,5 @@ def adjust_theme():

 with open("themes/default.css", "r", encoding="utf-8") as f:
     advanced_css = f.read()
+with open("themes/common.css", "r", encoding="utf-8") as f:
+    advanced_css += f.read()
@@ -24,7 +24,11 @@ mspace {
         border-color: yellow;
     }
 }
-
+.normal_mut_select .svelte-1gfkn6j {
+    float: left;
+    width: auto;
+    line-height: 260% !important;
+}
 #highlight_update {
     animation-name: highlight;
     animation-duration: 0.75s;
@@ -106,3 +106,5 @@ def adjust_theme():

 with open("themes/green.css", "r", encoding="utf-8") as f:
     advanced_css = f.read()
+with open("themes/common.css", "r", encoding="utf-8") as f:
+    advanced_css += f.read()
@@ -5,6 +5,9 @@ THEME, = get_conf('THEME')
 if THEME == 'Chuanhu-Small-and-Beautiful':
     from .green import adjust_theme, advanced_css
     theme_declaration = "<h2 align=\"center\" class=\"small\">[Chuanhu-Small-and-Beautiful主题]</h2>"
+elif THEME == 'High-Contrast':
+    from .contrast import adjust_theme, advanced_css
+    theme_declaration = ""
 else:
     from .default import adjust_theme, advanced_css
     theme_declaration = ""
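With this dispatch in place, the new High-Contrast theme is selected through the THEME configuration option; a hedged sketch, assuming the usual config_private.py override mechanism:

```python
# config_private.py — a sketch; any value handled by the dispatch above works:
# the default theme, "Chuanhu-Small-and-Beautiful", or "High-Contrast".
THEME = "High-Contrast"
```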
46  toolbox.py
@@ -24,6 +24,19 @@ pj = os.path.join

 class ChatBotWithCookies(list):
     def __init__(self, cookie):
+        """
+        cookies = {
+            'top_p': top_p,
+            'temperature': temperature,
+            'lock_plugin': bool,
+            "files_to_promote": ["file1", "file2"],
+            "most_recent_uploaded": {
+                "path": "uploaded_path",
+                "time": time.time(),
+                "time_str": "timestr",
+            }
+        }
+        """
         self._cookies = cookie

     def write_list(self, list):
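The docstring added above documents every key the UI stores on the chatbot object; a hedged illustration of constructing one (field values are placeholders, not project defaults):

```python
import time
from toolbox import ChatBotWithCookies

# Hypothetical cookie payload mirroring the documented schema.
chatbot = ChatBotWithCookies({
    'top_p': 1.0,
    'temperature': 1.0,
    'lock_plugin': None,
    'files_to_promote': [],
    'most_recent_uploaded': {'path': 'private_upload/xxx', 'time': time.time(), 'time_str': 'xxx'},
})
```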
@@ -47,6 +60,8 @@ def ArgsGeneralWrapper(f):
     # Bring in a chatbot that carries cookies
     cookies.update({
         'top_p': top_p,
+        'api_key': cookies['api_key'],
+        'llm_model': llm_model,
         'temperature': temperature,
     })
     llm_kwargs = {
@@ -69,7 +84,7 @@ def ArgsGeneralWrapper(f):
         # Handle the locked state of a few special plugins
         module, fn_name = cookies['lock_plugin'].split('->')
         f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
-        yield from f_hot_reload(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
+        yield from f_hot_reload(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, request)
     return decorated
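The lock_plugin cookie routes every subsequent request to a single plugin function, encoded as 'module->function'; a hedged illustration of the string format this split expects (names are hypothetical):

```python
# Hypothetical target — real values are set by plugins that lock the session.
cookies = {'lock_plugin': 'crazy_functions.some_plugin_module->some_plugin_fn'}
module, fn_name = cookies['lock_plugin'].split('->')
# module == 'crazy_functions.some_plugin_module', fn_name == 'some_plugin_fn'
```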
@@ -479,7 +494,8 @@ def find_recent_files(directory):
     current_time = time.time()
     one_minute_ago = current_time - 60
     recent_files = []
-
+    if not os.path.exists(directory):
+        os.makedirs(directory, exist_ok=True)
     for filename in os.listdir(directory):
         file_path = os.path.join(directory, filename)
         if file_path.endswith('.log'):
@@ -503,15 +519,15 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
     if not os.path.exists(new_path): shutil.copyfile(file, new_path)
     # Attach the file to the chatbot cookie to avoid interference between users
     if chatbot:
-        if 'file_to_promote' in chatbot._cookies: current = chatbot._cookies['file_to_promote']
+        if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
         else: current = []
-        chatbot._cookies.update({'file_to_promote': [new_path] + current})
+        chatbot._cookies.update({'files_to_promote': [new_path] + current})

 def disable_auto_promotion(chatbot):
-    chatbot._cookies.update({'file_to_promote': []})
+    chatbot._cookies.update({'files_to_promote': []})
     return

-def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+def on_file_uploaded(files, chatbot, txt, txt2, checkboxes, cookies):
     """
     Callback invoked when files are uploaded
     """
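Together these hunks rename the cookie key file_to_promote to files_to_promote everywhere it is written or read; a hedged usage sketch from a plugin's point of view (the file path is a placeholder):

```python
# Inside a plugin (sketch): `chatbot` is the ChatBotWithCookies instance passed in,
# and the path stands for a file the plugin just generated.
from toolbox import promote_file_to_downloadzone
promote_file_to_downloadzone('gpt_log/some_report.md', rename_file='report.md', chatbot=chatbot)
# The path now sits in chatbot._cookies['files_to_promote'] until on_report_generated pops it.
```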
@@ -546,14 +562,20 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
                  f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
                  f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
                  f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg])
-    return chatbot, txt, txt2
+    cookies.update({
+        'most_recent_uploaded': {
+            'path': f'private_upload/{time_tag}',
+            'time': time.time(),
+            'time_str': time_tag
+        }})
+    return chatbot, txt, txt2, cookies
 def on_report_generated(cookies, files, chatbot):
     from toolbox import find_recent_files
-    if 'file_to_promote' in cookies:
-        report_files = cookies['file_to_promote']
-        cookies.pop('file_to_promote')
+    if 'files_to_promote' in cookies:
+        report_files = cookies['files_to_promote']
+        cookies.pop('files_to_promote')
     else:
         report_files = find_recent_files('gpt_log')
     if len(report_files) == 0:
@@ -1001,7 +1023,7 @@ def get_plugin_default_kwargs():
     chatbot = ChatBotWithCookies(llm_kwargs)

     # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
-    default_plugin_kwargs = {
+    DEFAULT_FN_GROUPS_kwargs = {
         "main_input": "./README.md",
         "llm_kwargs": llm_kwargs,
         "plugin_kwargs": {},
@@ -1010,7 +1032,7 @@ def get_plugin_default_kwargs():
         "system_prompt": "You are a good AI.",
         "web_port": WEB_PORT
     }
-    return default_plugin_kwargs
+    return DEFAULT_FN_GROUPS_kwargs

 def get_chat_default_kwargs():
     """
4  version
@@ -1,5 +1,5 @@
 {
-    "version": 3.48,
+    "version": 3.50,
     "show_feature": true,
-    "new_feature": "Connect Alibaba Tongyi Qianwen, iFlytek Spark and Shanghai AI-Lab InternLM <-> Optimize one-click upgrade <-> Faster, more reliable arXiv translation <-> Support custom API-key formats <-> Temporary fix for missing theme files <-> New real-time voice-chat plugin (automatic sentence segmentation, hands-free conversation) <-> Support loading custom fine-tuned ChatGLM2 models <-> Dynamic chatbot window height <-> Fix Azure API bug <-> Improve the multi-language module <-> Improve local LaTeX proofreading and translation <-> Add gpt-3.5-16k support"
+    "new_feature": "Plugin categories! <-> Dispatch any plugin with natural language (Void Terminal)! <-> Improved UI with a newly designed theme <-> High-accuracy PDF translation via GROBID <-> Connect the Baidu Qianfan platform and ERNIE Bot <-> Connect Alibaba Tongyi Qianwen, iFlytek Spark and Shanghai AI-Lab InternLM <-> Optimize one-click upgrade <-> Faster, more reliable arXiv translation"
 }
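This version file is what the client's update check downloads to decide whether to announce new features; a hedged sketch of such a check (the raw-file URL is an assumption about hosting, not taken from this diff):

```python
import json
import requests

# Hedged sketch: fetch the remote `version` file and compare against the local 3.50.
RAW_URL = "https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version"  # assumed location
remote = json.loads(requests.get(RAW_URL, timeout=5).text)
if remote["version"] > 3.50 and remote.get("show_feature", False):
    print("New version available:", remote["version"], "-", remote["new_feature"])
```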