Compare commits

..

416 commits

Author SHA1 Message Commit date
qingxu fu
8ddc1adae4 version 2.5 2023-04-08 22:27:02 +08:00
qingxu fu
4e3f759d0c Move parameter position 2023-04-08 22:16:33 +08:00
qingxu fu
94ff62bdaa Fix typos 2023-04-08 22:15:33 +08:00
qingxu fu
2cbb5dbdaa up 2023-04-08 22:14:05 +08:00
Your Name
3b85a29f91 Add auto-update protocol 2023-04-08 02:48:35 +08:00
Your Name
166daa1ea7 Display version number 2023-04-08 02:39:54 +08:00
Your Name
5c3ecd7477 Auto-update program 2023-04-08 02:38:02 +08:00
Your Name
d5b03377ff Multiple interfaces 2023-04-08 00:51:58 +08:00
Your Name
7cd11f2bbd Move new plugins into the plugin menu 2023-04-08 00:42:54 +08:00
Your Name
f65cc8deea Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-08 00:41:46 +08:00
Your Name
48ee620524 Code highlighting toggle 2023-04-08 00:41:39 +08:00
binary-husky
8a5be8fb8d Merge pull request #366 from Hanzoe/master
new function: translate and understand a single PDF paper
2023-04-08 00:41:03 +08:00
binary-husky
f26b8e28e1 Update README.md 2023-04-08 00:32:22 +08:00
Your Name
b005b84ad6 Update requirements.txt (needed for code highlighting) 2023-04-08 00:23:26 +08:00
Your Name
1edf7ef80d Fix dockerfile 2023-04-08 00:01:11 +08:00
Your Name
3fed08f65e version 2.45 2023-04-07 23:58:10 +08:00
Your Name
fa8603d745 Merge branch 'master' into dev 2023-04-07 23:55:19 +08:00
Your Name
6b5c2538cf Add Google Scholar integration assistant 2023-04-07 23:54:24 +08:00
Your Name
7f1c7ebd68 version 2.43 2023-04-07 22:08:05 +08:00
Your Name
ff87aebc29 Handle network issues arising in multithreading 2023-04-07 22:06:08 +08:00
Hanzoe
2c746056ff Update crazy_functional.py 2023-04-07 21:35:36 +08:00
Hanzoe
0e4cac29f8 Add files via upload 2023-04-07 21:34:55 +08:00
Hanzoe
8513d46398 Merge pull request #1 from binary-husky/master
Single-paper translation and understanding
2023-04-07 21:34:11 +08:00
Your Name
b2495a6f7e Merge branch 'dev' of github.com:binary-husky/chatgpt_academic into dev 2023-04-07 21:09:43 +08:00
Your Name
5603d33d67 highlight 2023-04-07 21:09:37 +08:00
Your Name
d06d4f3a6f highlight 2023-04-07 21:08:34 +08:00
Your Name
b2adc77a73 Merge branch 'dev' of github.com:binary-husky/chatgpt_academic into dev 2023-04-07 21:00:32 +08:00
Your Name
1f6e2547b2 Merge branch 'master' into dev 2023-04-07 20:59:35 +08:00
qingxu fu
fd0e3fb5c4 Code and formula highlighting 2023-04-07 20:30:30 +08:00
qingxu fu
a0b7ae6674 Merge branch 'master' of https://github.com/binary-husky/chatgpt_academic into master 2023-04-07 19:26:20 +08:00
qingxu fu
8ca232cda3 Fix minor bug 2023-04-07 19:26:17 +08:00
binary-husky
34e983c7a5 Update README.md 2023-04-07 19:09:18 +08:00
binary-husky
c0d096726c Update README.md 2023-04-07 19:08:41 +08:00
qingxu fu
969e8c1d89 Correctly display list numbering 2023-04-07 18:33:46 +08:00
binary-husky
d4e3082db4 Update toolbox.py 2023-04-07 18:27:52 +08:00
binary-husky
777e56882b Update README.md 2023-04-07 18:21:13 +08:00
Your Name
4da7d75ad4 Fix formula rendering error 2023-04-07 18:14:27 +08:00
qingxu fu
1538acaa5a fix equation 2023-04-07 17:55:24 +08:00
qingxu fu
b47f69978e Update requirements.txt 2023-04-07 12:45:47 +08:00
binary-husky
823c136de4 Update README.md 2023-04-06 19:24:37 +08:00
binary-husky
cede926b54 Update README.md 2023-04-06 19:15:58 +08:00
binary-husky
bb036d92c6 Update README.md 2023-04-06 18:55:16 +08:00
binary-husky
7d96a5753c Update README.md 2023-04-06 18:49:49 +08:00
qingxu fu
179d87d3eb Improve prompts 2023-04-06 18:45:24 +08:00
Your Name
e42f84a812 Replace base functions 2023-04-06 18:41:04 +08:00
qingxu fu
8c21ada50c Tidy up the core code 2023-04-06 18:29:49 +08:00
qingxu fu
f99a7bb6f6 Fix minor issues 2023-04-06 18:26:46 +08:00
qingxu fu
1c4cdde478 format file 2023-04-06 18:15:11 +08:00
qingxu fu
7b851ff6b4 Fix file display issue after completion 2023-04-06 18:13:16 +08:00
qingxu fu
f1b50dd5f5 fix error 2023-04-06 17:23:26 +08:00
qingxu fu
d936800765 Merge branch 'dev_ui' of https://github.com/binary-husky/chatgpt_academic into dev_ui 2023-04-06 17:19:28 +08:00
qingxu fu
b6e05f93d1 change UI 2023-04-06 17:19:25 +08:00
qingxu fu
2c9bf11464 change UI 2023-04-06 17:18:30 +08:00
qingxu fu
e5d014ea2f change UI 2023-04-06 17:17:31 +08:00
qingxu fu
cd89a4809e Restore template function 2023-04-06 17:15:13 +08:00
qingxu fu
fe8a1ab590 Fix printed messages 2023-04-06 16:59:52 +08:00
qingxu fu
f63100939a update self_analysis 2023-04-06 16:33:01 +08:00
qingxu fu
9587808c12 Version 2.4 2023-04-06 16:13:56 +08:00
Your Name
527ef979dc End 2023-04-06 03:43:53 +08:00
Your Name
bb0b6a2f34 update 2023-04-06 03:30:02 +08:00
Your Name
19302a33b4 Rename some functions 2023-04-06 02:02:04 +08:00
Your Name
03cf9fda6c Rename files 2023-04-05 16:19:35 +08:00
qingxu fu
ab35afd399 Merge remote-tracking branch 'origin/master' into dev_ui 2023-04-05 14:35:46 +08:00
binary-husky
53d20b26d9 Update README.md 2023-04-05 14:34:43 +08:00
binary-husky
d5c1c150d2 Update README.md 2023-04-05 14:09:56 +08:00
binary-husky
3eb4bbb123 Update README.md 2023-04-05 14:09:35 +08:00
binary-husky
f4d252ebe2 Update README.md 2023-04-05 14:07:59 +08:00
Your Name
ee7494c0b7 Handle the case where no files are returned 2023-04-05 02:15:47 +08:00
qingxu fu
dcdc8351e7 BUG FIX 2023-04-05 01:58:34 +08:00
qingxu fu
0aeb5b28cd Improve efficiency 2023-04-05 00:25:53 +08:00
qingxu fu
1dd1720d38 Merge branch 'dev_ui' of https://github.com/binary-husky/chatgpt_academic into dev_ui 2023-04-05 00:15:09 +08:00
qingxu fu
19be0490af BUG FIX 2023-04-05 00:11:12 +08:00
qingxu fu
0c9e18291a BUG FIX 2023-04-05 00:10:06 +08:00
qingxu fu
9f47d0f714 Bug Fix: Hot Reload Wapper For All 2023-04-05 00:09:13 +08:00
qingxu fu
7a254c150f Fix parameter-input bug 2023-04-05 00:07:08 +08:00
qingxu fu
3648648b3d Support switching between more UI layouts 2023-04-04 23:46:47 +08:00
qingxu fu
1da60b7a0c merge 2023-04-04 22:56:06 +08:00
qingxu fu
c40f6f00bb check_new_version 2023-04-04 22:54:08 +08:00
binary-husky
a239abac50 Update version 2023-04-04 22:34:28 +08:00
binary-husky
1042d28e1f Update version 2023-04-04 22:20:39 +08:00
binary-husky
7b75422c26 Update version 2023-04-04 22:20:21 +08:00
binary-husky
99817e9040 Update version 2023-04-04 22:17:47 +08:00
qingxu fu
c9fa26405d Plan version numbering 2023-04-04 21:38:20 +08:00
binary-husky
005232afa6 Update issue templates 2023-04-04 17:13:40 +08:00
binary-husky
5b8cc5a899 Update README.md 2023-04-04 15:33:53 +08:00
qingxu fu
a4137e7170 Fix bug in English code refactoring 2023-04-04 15:23:42 +08:00
qingxu fu
aaf44750d9 Default to the dark eye-friendly theme 2023-04-03 20:56:00 +08:00
binary-husky
bd6eb90449 Merge pull request #290 from LiZheGuang/master
fix: 🐛 React project parsing option not shown in the dropdown list
2023-04-03 17:58:28 +08:00
LiZheGuang
b5a48369a4 fix: 🐛 React project parsing option not shown in the dropdown list 2023-04-03 17:44:09 +08:00
binary-husky
6b5bdbe98a Update issue templates 2023-04-03 17:00:51 +08:00
qingxu fu
69624c66d7 update README 2023-04-03 09:32:01 +08:00
binary-husky
69be335d22 Update README.md 2023-04-03 01:49:40 +08:00
binary-husky
bb1e410cb4 Update README.md 2023-04-03 01:47:49 +08:00
binary-husky
9ccc53fa96 Update README.md 2023-04-03 01:39:17 +08:00
binary-husky
4b83486b3d Update README.md 2023-04-03 01:38:44 +08:00
binary-husky
417c8325de Update README.md 2023-04-03 01:03:00 +08:00
binary-husky
d51ae6abb2 Update README.md 2023-04-03 01:01:57 +08:00
binary-husky
fff7b8ef91 Update README.md 2023-04-02 22:15:12 +08:00
binary-husky
c59eb8ff9e Update README.md 2023-04-02 22:04:33 +08:00
binary-husky
a74f0a9343 Update config.py 2023-04-02 22:02:41 +08:00
binary-husky
f4905a60e2 Update README.md 2023-04-02 21:51:41 +08:00
binary-husky
a562849e4c Update README.md 2023-04-02 21:44:44 +08:00
binary-husky
6105b7f73b Update README.md 2023-04-02 21:41:36 +08:00
binary-husky
3486fb5c10 Update README.md 2023-04-02 21:39:28 +08:00
binary-husky
5f7a1a3da3 Update README.md 2023-04-02 21:33:09 +08:00
binary-husky
8d086ce7c0 Update README.md 2023-04-02 21:31:44 +08:00
binary-husky
b188c4a2b5 Update README.md 2023-04-02 21:28:59 +08:00
binary-husky
10cf456aa8 Update README.md 2023-04-02 21:27:19 +08:00
Your Name
4043db7f33 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-02 20:35:16 +08:00
Your Name
9a192fd473 #236 2023-04-02 20:35:09 +08:00
Your Name
9f91fca4d2 remove verbose print 2023-04-02 20:22:11 +08:00
Your Name
160b001bef CHATBOT_HEIGHT - 1 2023-04-02 20:20:37 +08:00
Your Name
1d912bc10d return None instead of [] when no file is concluded 2023-04-02 20:18:58 +08:00
Your Name
51b3f8adca + exception handling 2023-04-02 20:03:25 +08:00
Your Name
16de1812d3 Tweak theme 2023-04-02 20:02:47 +08:00
binary-husky
641b96548a Update README.md 2023-04-02 16:55:18 +08:00
Your Name
34f4ba211d Merge branch 'CSS' of https://github.com/Keldos-Li/chatgpt_academic (#236) 2023-04-02 16:01:35 +08:00
Your Name
19aba350a3 Modify button tooltips 2023-04-02 15:48:54 +08:00
binary-husky
ab57f4bfb0 Merge pull request #253 from RongkangXiong/dev
add crazy_functions: parse a Java project
2023-04-02 15:40:03 +08:00
Your Name
b7e0a48cd2 Add support for Golang, Java and other projects 2023-04-02 15:33:09 +08:00
Your Name
58b051ead3 Add the arxiv assistant plugin 2023-04-02 15:19:21 +08:00
RongkangXiong
0c7378e096 add crazy_functions: parse a React project 2023-04-02 03:07:21 +08:00
RongkangXiong
a52ae14457 add crazy_functions: parse a Java project 2023-04-02 02:59:03 +08:00
Your Name
ce3e9b6289 Merge branch 'master' into dev 2023-04-02 01:24:03 +08:00
Your Name
9f07531a16 update 2023-04-02 01:23:15 +08:00
Your Name
2f646b3199 stage llm model interface 2023-04-02 01:18:51 +08:00
Your Name
30b1cbd95c q 2023-04-02 00:51:17 +08:00
Your Name
6c32961211 Integrate TGUI 2023-04-02 00:40:05 +08:00
Your Name
ba7c7290c1 Successfully call more LLMs via tgui 2023-04-02 00:22:41 +08:00
Your Name
f245f94444 up 2023-04-01 23:46:32 +08:00
Your Name
e86ffad6e7 wait new pr 2023-04-01 21:56:55 +08:00
Your Name
706b2604d8 Rename files 2023-04-01 21:45:58 +08:00
Keldos
e35f7a7186 fix: correct the comments in the CSS to fix list display
- also scoped the CSS with .markdown-body
2023-04-01 20:34:18 +08:00
binary-husky
7c91cfebfa Update README.md 2023-04-01 20:21:31 +08:00
Your Name
9d84ddbc62 advanced theme 2023-04-01 19:48:14 +08:00
Your Name
3d3ffd6de2 New arxiv paper plugin 2023-04-01 19:43:56 +08:00
binary-husky
82038a42da Merge pull request #239 from ylsislove/golang-code-analysis
feat: add function to parse Golang projects
2023-04-01 19:42:06 +08:00
binary-husky
0ce2d423cf Update functional_crazy.py 2023-04-01 19:37:39 +08:00
wangyu
4d6bdca3fc feat: add function to parse Golang projects
This commit adds a new function to parse Golang projects to the collection of crazy functions.
2023-04-01 19:19:36 +08:00
Your Name
c64dbb03fd fix bug 2023-04-01 19:07:58 +08:00
Your Name
8575c82ed7 README 2023-04-01 18:07:26 +08:00
Your Name
bca61754e3 Typo in Prompt 2023-04-01 17:29:30 +08:00
Your Name
f61ea1559c python3.7 compat 2023-04-01 17:11:59 +08:00
Keldos
12c36a68ce feat: adjust table styles 2023-04-01 16:58:51 +08:00
Keldos
998e127b2f feat: use CSS to polish the display of tables, lists, code blocks, and chat bubbles
Ported the CSS from ChuanhuChatGPT (川虎ChatGPT), but I wrote that CSS too~
2023-04-01 16:51:38 +08:00
Your Name
c9c9449a59 README up 2023-04-01 16:36:57 +08:00
Your Name
722b538261 update README 2023-04-01 16:35:45 +08:00
Your Name
29fb436e76 Merge branch 'dev' 2023-04-01 16:31:57 +08:00
binary-husky
6789eaee45 Update README.md 2023-04-01 04:25:03 +08:00
binary-husky
937823ec64 Update README.md 2023-04-01 04:19:02 +08:00
Your Name
3c95299f48 Interaction improvements 2023-04-01 04:11:31 +08:00
Your Name
639e24fc82 Move the CSS styles into the theme file to reduce the line count of main.py 2023-04-01 03:39:43 +08:00
binary-husky
364810983a Merge pull request #209 from jr-shen/dev-1
(1) Modify the grammar-check prompt to ensure a consistent output format.

Previously the output often failed to bold the modified parts, or dumped entire paragraphs into the table, hurting readability. An example was therefore added to the existing prompt to keep the output format consistent.

(2) Add border lines inside tables to make the separation between rows/columns clearer.

Tables without borders are hard to read when cells contain a lot of text, so inner border lines were added.
2023-04-01 03:37:02 +08:00
Your Name
cb9404c4de Improve handling of token overflow 2023-04-01 03:36:05 +08:00
Your Name
17a18e99fa Hide/show the function area 2023-04-01 00:21:27 +08:00
Your Name
30ea77a496 A somewhat cleaner UI 2023-03-31 23:54:25 +08:00
Your Name
01931b0bd2 A cleaner UI 2023-03-31 23:51:17 +08:00
Junru Shen
6a2c7db7c1 add markdown table border line to make text boundary more clear 2023-03-31 23:40:21 +08:00
Junru Shen
9c15c446a6 make grammar correction prompt more clear 2023-03-31 23:38:49 +08:00
Your Name
44ed8a46ad Basic support for Word and PDF 2023-03-31 23:18:45 +08:00
Your Name
41801c017c Merge branch 'master' into dev 2023-03-31 23:08:30 +08:00
Your Name
3e88422ab6 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-31 22:49:45 +08:00
Your Name
7c8b8b95b2 Fix bug 2023-03-31 22:49:39 +08:00
Your Name
65b1d78516 Improve the self-analysis feature 2023-03-31 22:36:46 +08:00
binary-husky
e815afb792 Update README.md 2023-03-31 21:48:45 +08:00
Your Name
ac681d3201 Move functions into the calling module 2023-03-31 21:46:47 +08:00
binary-husky
ecebdf3ab5 Merge pull request #204 from Eralien/dev-clean_pdf
feat: clean pdf fitz text
2023-03-31 21:42:18 +08:00
binary-husky
8aea6536e0 add contributor 2023-03-31 21:41:17 +08:00
binary-husky
9c90b28bed Merge pull request #147 from JasonGuo1/master
feat(toolbox.py, 总结word文档.py): support rar and 7z extraction; Word reading
2023-03-31 21:39:05 +08:00
Your Name
36ef2fa884 JasonGuo1 2023-03-31 21:37:46 +08:00
Your Name
89ca010a11 Merge branch 'master' of https://github.com/JasonGuo1/chatgpt_academic into JasonGuo1-master 2023-03-31 21:31:31 +08:00
Siyuan Feng
60fc2bf4c1 feat: clean pdf fitz text 2023-03-31 21:26:55 +08:00
binary-husky
838c3dc881 Merge pull request #117 from XMB-7/better_prompt
feat: better prompt
2023-03-31 21:19:25 +08:00
Your Name
16c59e1bf6 Merge branch 'better_prompt' of https://github.com/XMB-7/chatgpt_academic into XMB-7-better_prompt 2023-03-31 21:18:28 +08:00
binary-husky
fb889cb4ce Merge pull request #174 from Euclid-Jie/Euclid_Test
feature(read pdf paper then write summary)
2023-03-31 21:06:02 +08:00
Your Name
c91cfc64d9 Integrate 2023-03-31 21:05:18 +08:00
Your Name
dcab956cff Merge branch 'dev' into Euclid-Jie-Euclid_Test 2023-03-31 21:03:43 +08:00
Your Name
da55ae68f6 Consolidate pdfminer into a single file 2023-03-31 21:03:12 +08:00
Your Name
201f53c0f0 Merge branch 'Euclid_Test' of https://github.com/Euclid-Jie/chatgpt_academic into Euclid-Jie-Euclid_Test 2023-03-31 20:26:59 +08:00
Your Name
77a98f03d0 Edit text 2023-03-31 20:12:27 +08:00
Your Name
9ee5545ad5 fix import error 2023-03-31 20:05:31 +08:00
Your Name
d37b0ce447 Merge branch 'dev' of github.com:binary-husky/chatgpt_academic into dev 2023-03-31 20:04:11 +08:00
Your Name
0abb84ae4b Add notes to config 2023-03-31 20:02:12 +08:00
binary-husky
e6034b6928 Merge pull request #194 from fulyaec/enhance-chataca
Modify the AUTHENTICATION check so that AUTHENTICATION values of None/[]/"" are all judged correctly
2023-03-31 19:49:40 +08:00
Your Name
ac9e72b9f8 revert toolbox 2023-03-31 19:46:01 +08:00
Your Name
cb1ac5d13d Merge branch 'enhance-chataca' of https://github.com/fulyaec/chatgpt_academic into fulyaec-enhance-chataca 2023-03-31 19:45:23 +08:00
binary-husky
3497addc7b Merge pull request #198 from oneLuckyman/feature-match-API_KEY
A small improvement: a more precise API_KEY validation mechanism
2023-03-31 19:28:21 +08:00
Your Name
f17c580fd9 Merge remote-tracking branch 'origin/hot-reload-test' 2023-03-31 19:21:15 +08:00
Jia Xinglong
808e23c98a Use the re module's match function to more precisely match and confirm whether the API_KEY is correct 2023-03-31 17:38:39 +08:00
fulyaec
e3e4fa19a2 refactor and enhance 2023-03-31 16:24:40 +08:00
binary-husky
3cc0635628 Update main.py 2023-03-31 13:33:03 +08:00
binary-husky
7149f9fe90 Update README.md 2023-03-31 13:29:37 +08:00
binary-husky
d2f009ba8d Update README.md 2023-03-31 13:11:10 +08:00
欧玮杰
8f4f13efd5 fix(the ".PDF" file can not be recognized): 2023-03-31 10:26:40 +08:00
欧玮杰
fefe96144f fix(fix "gbk" encode error in 批量总结PDF文档 line14):
Un-encodable characters were causing errors; add lenient decoding to handle the raw text.
2023-03-31 10:03:10 +08:00
欧玮杰
5521f0e41c feature(read pdf paper then write summary):
Add a function called readPdf to toolbox, which reads a PDF paper into a str, then uses bs4.BeautifulSoup to clean the content.
2023-03-31 00:54:01 +08:00
binary-husky
2d5d719696 Merge pull request #171 from RoderickChan/add-deploy-instruction
Add remote-deployment instructions to the README
2023-03-31 00:00:35 +08:00
binary-husky
2b339464ee Update README.md 2023-03-30 23:59:01 +08:00
binary-husky
43ad798c68 Update README.md 2023-03-30 23:34:17 +08:00
RoderickChan
a441c90436 Add a remote-deployment guide to the README 2023-03-30 23:31:44 +08:00
JasonGuo1
5ca623e9c5 feat(总结word文档): add support for reading docx and doc formats 2023-03-30 23:23:41 +08:00
binary-husky
c74a8b04b5 Add Wiki link 2023-03-30 23:09:45 +08:00
JasonGuo1
34c693a571 feat(toolbox): fix whitespace issues 2023-03-30 20:28:15 +08:00
binary-husky
d4cd1877f4 Merge pull request #151 from SadPencil/patch-1
Fix a typo
2023-03-30 19:14:48 +08:00
qingxu fu
57a668bf30 Self-analysis report 2023-03-30 18:21:17 +08:00
qingxu fu
700d14be10 Merge branch 'hot-reload-test' of https://github.com/binary-husky/chatgpt_academic into hot-reload-test 2023-03-30 18:05:03 +08:00
qingxu fu
3fe96956ce Add hot-reload feature 2023-03-30 18:04:20 +08:00
qingxu fu
c358c95630 Add hot-reload feature 2023-03-30 18:01:06 +08:00
Sad Pencil
70c02a08e9 Fix a typo 2023-03-30 15:55:46 +08:00
JasonGuo1
27f7153f37 feat(toolbox): support rar and 7z extraction; tweak comments 2023-03-30 15:48:55 +08:00
JasonGuo1
eb69704c20 feat(toolbox): support rar and 7z extraction; tweak comments 2023-03-30 15:48:00 +08:00
JasonGuo1
d616915348 feat(toolbox): support rar and 7z extraction; tweak comments 2023-03-30 15:47:18 +08:00
JasonGuo1
5381df8f35 feat(toolbox): support rar and 7z extraction; tweak comments 2023-03-30 15:45:58 +08:00
JasonGuo1
a7c857d4d9 feat: support rar and 7z extraction 2023-03-30 15:24:01 +08:00
binary-husky
1c3ce18eff Update README.md 2023-03-30 14:48:46 +08:00
binary-husky
a689960e31 Update README.md 2023-03-30 14:48:20 +08:00
binary-husky
07eca9fe55 Update README.md 2023-03-30 14:47:19 +08:00
binary-husky
00f29f2120 Update README.md 2023-03-30 14:08:24 +08:00
binary-husky
10ea3daaa5 Update README.md 2023-03-30 13:24:30 +08:00
binary-husky
eb95eec122 Merge pull request #124 from Freddd13/master
feat: add a method for WSL2 to use the Windows proxy
2023-03-30 12:56:13 +08:00
qingxu fu
e03634f9e2 Strip line breaks before searching for grammar errors 2023-03-30 12:52:28 +08:00
qingxu fu
77d4628877 Update the grammar-error-finding prompt 2023-03-30 12:16:18 +08:00
qingxu fu
8c3d37bbce up 2023-03-30 11:51:55 +08:00
qingxu fu
28f6801870 Standardize code formatting 2023-03-30 11:50:11 +08:00
qingxu fu
6d7fb20b5a Change how configuration is read 2023-03-30 11:05:38 +08:00
Freddd13
9dc0d3273b update: edit readme 2023-03-30 02:00:15 +08:00
Freddd13
13e976774f Merge remote-tracking branch 'upstream/master' 2023-03-30 01:58:39 +08:00
Freddd13
35d3346a1c feat: support using the Windows proxy from WSL2 2023-03-30 01:47:39 +08:00
Your Name
eaec1c3728 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-30 00:15:37 +08:00
Your Name
0140b0cda8 change UI layout 2023-03-30 00:15:31 +08:00
binary-husky
f98fe5b9ab Update README.md 2023-03-30 00:09:58 +08:00
Xiaoming Bai
932c430d5d better prompt 2023-03-30 00:06:02 +08:00
binary-husky
0063f8dd82 Update README.md 2023-03-29 23:53:33 +08:00
binary-husky
23b8c47c54 Update README.md 2023-03-29 23:50:20 +08:00
binary-husky
5353a32345 Update README.md 2023-03-29 23:48:58 +08:00
binary-husky
d569d2f9c1 Update README.md 2023-03-29 23:44:37 +08:00
binary-husky
67e6070d80 Update README.md 2023-03-29 23:44:01 +08:00
binary-husky
01873f7811 Merge pull request #108 from sjiang95/condainstall
readme: update
2023-03-29 23:22:44 +08:00
Your Name
c2318bc197 Merge branch 'dev' 2023-03-29 23:15:29 +08:00
binary-husky
86dc4ca3af Merge pull request #102 from ValeriaWong/master
feat(读文章写摘要): support batch reading and summarization of PDF files #101
2023-03-29 23:14:12 +08:00
Your Name
4a2f73d007 change UI layout 2023-03-29 23:04:37 +08:00
Your Name
9c643e6d8c change ui layout 2023-03-29 23:00:16 +08:00
Your Name
34323c9d36 Merge https://github.com/ValeriaWong/chatgpt_academic into ValeriaWong-master 2023-03-29 21:49:56 +08:00
Your Name
48cc2349e3 add pip package check 2023-03-29 21:47:56 +08:00
Your Name
7d57967519 Merge branch 'master' of https://github.com/ValeriaWong/chatgpt_academic 2023-03-29 21:44:59 +08:00
Shengjiang Quan
68ffa54c84 readme: update
Re-format a part of the markdown content
and add conda instruction for installation.

Signed-off-by: Shengjiang Quan <qsj287068067@126.com>
2023-03-29 22:36:15 +09:00
ValeriaWong
02ddcdf731 Merge branch 'master' of https://github.com/ValeriaWong/chatgpt_academic 2023-03-29 21:05:25 +08:00
ValeriaWong
14ea206a5a Merge branch 'binary-husky:master' into master 2023-03-29 20:57:07 +08:00
ValeriaWong
814c93bb75 feat(读文章写摘要): support batch reading and summarization of PDF files #101 2023-03-29 20:55:13 +08:00
binary-husky
fb953148b9 Update main.py 2023-03-29 20:47:34 +08:00
Your Name
2bcd34360a bug quick fix 2023-03-29 20:41:07 +08:00
binary-husky
b3211362f9 Merge pull request #82 from Okabe-Rintarou-0/master
Support a pause button #53
2023-03-29 20:38:06 +08:00
Your Name
be1bf6cb77 Don't clear the input box after submitting; add a stop button 2023-03-29 20:36:58 +08:00
Your Name
5f771d8acf Merge branch 'master' of https://github.com/Okabe-Rintarou-0/chatgpt_academic into Okabe-Rintarou-0-master 2023-03-29 20:26:13 +08:00
Your Name
095579c323 Merge branch 'master' of https://github.com/Okabe-Rintarou-0/chatgpt_academic into Okabe-Rintarou-0-master 2023-03-29 20:07:38 +08:00
Your Name
f9308300f3 handle ip location lookup error 2023-03-29 19:37:39 +08:00
binary-husky
6ca28fbff2 Merge pull request #87 from Okabe-Rintarou-0/fix-markdown-display
Correctly display multi-line markdown input #84
2023-03-29 19:32:19 +08:00
binary-husky
11f619f76f Merge pull request #96 from eltociear/patch-1
fix typo in predict.py
2023-03-29 18:47:51 +08:00
Your Name
39d0176673 Add proxy configuration instructions 2023-03-29 18:07:33 +08:00
Ikko Eltociear Ashimine
043bd59ffc fix typo in predict.py
refleshing -> refreshing
2023-03-29 18:57:37 +09:00
ValeriaWong
2ac602e0aa feat(读文章写摘要): support batch reading and summarization of PDF files 2023-03-29 17:57:17 +08:00
Your Name
50ee37f23c error message change 2023-03-29 16:50:37 +08:00
Your Name
f0d9098df5 dev 2023-03-29 16:47:15 +08:00
okabe
2401cf2136 fix: markdown display bug #84 2023-03-29 15:29:40 +08:00
okabe
dd3c4f988c feat: support stop generate button (#53) 2023-03-29 14:53:53 +08:00
Your Name
22e7dc617b fix directory return bug 2023-03-29 14:28:57 +08:00
Your Name
a3c937c202 update comments 2023-03-29 14:16:59 +08:00
505030475
1b01d0fd8a Fix variable names 2023-03-29 13:58:30 +08:00
505030475
0b018bf871 Merge remote-tracking branch 'origin/test-3-29' 2023-03-29 13:44:57 +08:00
505030475
88d41ab4ca config comments 2023-03-29 13:43:07 +08:00
binary-husky
6ebadeb2a6 Update README.md 2023-03-29 13:38:43 +08:00
binary-husky
515045a8d1 Merge pull request #57 from GaiZhenbiao/master
Adding a bunch of nice-to-have features
2023-03-29 13:37:58 +08:00
Your Name
74d5061969 Merge branch 'test-3-29' of github.com:binary-husky/chatgpt_academic into test-3-29 2023-03-29 12:29:52 +08:00
Your Name
0b7c1c50ca Improve usage of the Unsplash API 2023-03-29 12:28:45 +08:00
Your Name
f45e0d3486 Improve usage of the Unsplash API 2023-03-29 12:27:47 +08:00
Your Name
881132557e "On this day in history", with images 2023-03-29 12:21:47 +08:00
Your Name
6d852d76b0 Update a more interesting template function 2023-03-29 11:36:55 +08:00
Your Name
b588291cdf [Experimental] "On this day in history" (advanced function demo) 2023-03-29 11:34:03 +08:00
Your Name
b4e0fe39ea bug fix 2023-03-29 01:42:11 +08:00
Your Name
7295978c93 change description 2023-03-29 01:39:15 +08:00
Your Name
f2359e0442 Better multithreaded interactivity 2023-03-29 01:32:28 +08:00
Your Name
ced6898daa introduce project self-translation 2023-03-29 01:11:53 +08:00
Tuchuanhuhuhu
39ad492a65 Add a "Reset" button; automatically clear the input box after submitting 2023-03-28 23:33:19 +08:00
Tuchuanhuhuhu
dd8ec0d511 temperature now ranges over [0, 2] 2023-03-28 23:20:54 +08:00
Tuchuanhuhuhu
43d59d936f Add parallel processing and access control 2023-03-28 23:17:12 +08:00
Your Name
4bca8b4f82 simplify codes 2023-03-28 23:09:25 +08:00
Chuan Hu
46eba1f399 Improve the way to open webbrowser 2023-03-28 22:47:30 +08:00
Your Name
5a6877f9fa Customizable UI colors 2023-03-28 22:35:55 +08:00
Your Name
2c15b51ea4 explain color and theme 2023-03-28 22:31:43 +08:00
Your Name
9666b9b99b Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-28 22:25:27 +08:00
Your Name
baf61477d2 remove .vscode from git 2023-03-28 22:24:59 +08:00
Your Name
00e411bd68 update todo 2023-03-28 22:10:22 +08:00
Your Name
37bcdf684d fix unicode bug 2023-03-28 20:31:44 +08:00
binary-husky
c80808a0c5 Merge pull request #46 from mambaHu/master
Markdown analysis report garbled issue
2023-03-28 20:04:01 +08:00
luca hu
b69313884d improving garbled words issue with utf8 2023-03-28 19:34:18 +08:00
binary-husky
3b6675755e Delete jpeg-compressor.tps 2023-03-28 17:21:14 +08:00
binary-husky
3e2e4937db Delete JpegLibrary.tps 2023-03-28 17:21:06 +08:00
binary-husky
e6f7e75e19 Delete UElibJPG.Build.cs 2023-03-28 17:20:54 +08:00
505030475
232d06a1ab theme 2023-03-28 12:59:31 +08:00
505030475
22db1147db o 2023-03-28 12:53:05 +08:00
binary-husky
eb77532df5 Update README.md 2023-03-28 01:36:15 +08:00
binary-husky
9ee3d1390d Update README.md 2023-03-28 01:13:55 +08:00
Your Name
141df08332 http post error show 2023-03-27 18:25:07 +08:00
Your Name
248bcb7095 bug fix 2023-03-27 15:16:50 +08:00
Your Name
7c7b1cb030 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 15:14:12 +08:00
Your Name
290a33ea74 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 15:14:05 +08:00
binary-husky
aabe2818e7 Update README.md 2023-03-27 15:09:02 +08:00
binary-husky
c3a6a09c4c Update README.md 2023-03-27 15:01:49 +08:00
binary-husky
8c5332748f Update README.md 2023-03-27 15:01:07 +08:00
binary-husky
0b0860defc Update README.md 2023-03-27 14:57:12 +08:00
binary-husky
18894b04f1 Update README.md 2023-03-27 14:56:32 +08:00
binary-husky
8e75ad8134 Update README.md 2023-03-27 14:56:20 +08:00
binary-husky
7c33580bf6 Update README.md 2023-03-27 14:56:03 +08:00
Your Name
223e747d57 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 14:53:46 +08:00
Your Name
8c7be26661 up 2023-03-27 14:51:05 +08:00
binary-husky
a1fceeae45 Update README.md 2023-03-27 14:47:52 +08:00
binary-husky
4b534400ea Update README.md 2023-03-27 14:45:32 +08:00
binary-husky
ac9b590660 Update README.md 2023-03-27 13:47:08 +08:00
binary-husky
695ed5e02d Update README.md 2023-03-27 13:45:08 +08:00
Your Name
c063510442 UI change 2023-03-27 13:24:29 +08:00
Your Name
bc6d0926b1 file IO 2023-03-27 13:01:22 +08:00
qingxu fu
0f20ffeff4 localFileToRemote 2023-03-27 11:29:11 +08:00
Your Name
9299c93b17 Merge branch 'test-3-26' 2023-03-26 20:21:39 +08:00
binary-husky
3463a1173a Merge pull request #10 from ifyz/patch-1
Update main.py
2023-03-26 20:21:14 +08:00
Your Name
74eaff4919 fix dockerfile 2023-03-26 20:18:55 +08:00
binary-husky
dd51708309 Update main.py 2023-03-26 20:11:44 +08:00
Your Name
25693c362c UI 2023-03-26 20:10:14 +08:00
Your Name
9eb15f5e68 调整样式 2023-03-26 20:04:59 +08:00
Your Name
7e195244f1 up 2023-03-26 19:32:04 +08:00
Your Name
98d97ebbc8 Merge branch 'ifyz-patch-1' into test-3-26 2023-03-26 19:14:49 +08:00
Your Name
6ab3672cfe Merge branch 'patch-1' of https://github.com/ifyz/chatgpt_academic into ifyz-patch-1 2023-03-26 19:14:27 +08:00
Your Name
6f1ad29e3f add comments 2023-03-26 19:13:58 +08:00
ifyz
61c65d90fe Update main.py
Use a global variable to disable Gradio's analytics, fixing the slow loading caused by calls to Google Analytics for users in mainland China.
Use local fonts instead of Gradio's default fonts fetched from googleapis, fixing slow first-page loads under mainland-China network conditions.
2023-03-26 17:11:58 +08:00
ifyz
8965706169 Update main.py
Use local fonts instead of Gradio's default fonts fetched from googleapis, fixing slow first-page loads under mainland-China network conditions.
2023-03-26 15:48:16 +08:00
qingxu fu
a63ae898ce Merge branch 'master' of https://github.com/binary-husky/chatgpt_academic into master 2023-03-24 21:03:33 +08:00
binary-husky
186f10a3e3 Update README.md 2023-03-24 21:03:09 +08:00
binary-husky
67ed394484 Update README.md 2023-03-24 21:02:02 +08:00
qingxu fu
2f4c46b22c trim button text 2023-03-24 20:56:34 +08:00
binary-husky
2f5c977a27 Update predict.py 2023-03-24 19:54:52 +08:00
binary-husky
f801cfbcf6 Update .gitattributes 2023-03-24 19:51:52 +08:00
binary-husky
15693f3f8f Update .gitattributes 2023-03-24 19:50:54 +08:00
binary-husky
8cbac095c1 Create .gitattributes 2023-03-24 19:49:58 +08:00
Your Name
289ace03cf Usage instructions for testing experimental features 2023-03-24 19:47:37 +08:00
Your Name
cefc40aff8 update readme 2023-03-24 19:42:21 +08:00
binary-husky
15dd4b7a80 Update README.md 2023-03-24 19:38:33 +08:00
Your Name
5f8473badb source 2023-03-24 19:37:47 +08:00
Your Name
a8cfe2f113 remote additional file 2023-03-24 19:35:13 +08:00
Your Name
27e056ea35 move images 2023-03-24 19:34:21 +08:00
binary-husky
278f21fde0 Update README.md 2023-03-24 19:20:43 +08:00
Your Name
121f288878 push 2023-03-24 19:10:34 +08:00
binary-husky
5a6b46f0b5 Update README.md 2023-03-24 19:04:55 +08:00
binary-husky
22766d3c12 Update README.md 2023-03-24 19:03:03 +08:00
Your Name
4c72e6cc80 fix count down error 2023-03-24 18:53:43 +08:00
Your Name
38705af90b beta 2023-03-24 18:47:45 +08:00
Your Name
35e15aab8c bug fix 2023-03-24 18:08:48 +08:00
Your Name
1cad53f641 Modular encapsulation 2023-03-24 18:04:59 +08:00
Your Name
818690037a up 2023-03-24 16:34:48 +08:00
Your Name
8053f696d5 Use GPT to generate comments for itself 2023-03-24 16:25:40 +08:00
Your Name
0b1afab5dd muban 2023-03-24 16:22:26 +08:00
Your Name
aa87d031f5 Readability+ 2023-03-24 16:17:01 +08:00
Your Name
ba2dd3d0ff Batch-generate function comments 2023-03-24 16:14:25 +08:00
Your Name
c19419e1fb Generate text reports 2023-03-24 15:42:09 +08:00
Your Name
031232c986 better traceback 2023-03-24 15:25:14 +08:00
Your Name
ea4cd5a75a Add the ability to read LaTeX papers; add test samples 2023-03-24 14:56:57 +08:00
Your Name
3fb519bf83 Merge remote-tracking branch 'origin/master' into test-3-24 2023-03-24 13:29:37 +08:00
Your Name
1e4bcc0757 Enable timely responses 2023-03-24 13:28:12 +08:00
Your Name
44b40ff726 update 2023-03-24 13:12:25 +08:00
binary-husky
791dbf8592 Update functional_crazy.py 2023-03-24 13:11:41 +08:00
binary-husky
82fe0f758d Update functional_crazy.py 2023-03-24 13:06:42 +08:00
binary-husky
8b1def3425 Update functional_crazy.py 2023-03-24 13:06:34 +08:00
binary-husky
8e85e83cec Update functional_crazy.py 2023-03-24 13:02:47 +08:00
qingxu fu
ecdeda8e92 Merge branch 'master' of github.com:binary-husky/chatgpt_academic into master 2023-03-24 11:43:24 +08:00
qingxu fu
831009f027 auto retry 2023-03-24 11:42:39 +08:00
binary-husky
c6da02422b Update README.md 2023-03-24 00:43:33 +08:00
binary-husky
2ad73e9ec3 Update README.md 2023-03-23 22:15:31 +08:00
binary-husky
a0f73a61ea Update predict.py 2023-03-23 22:13:09 +08:00
binary-husky
1b6d611888 Update README.md 2023-03-23 17:03:08 +08:00
Your Name
47d9ea4921 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-23 00:35:01 +08:00
Your Name
b1e33b0f7a Correctly display requests errors 2023-03-23 00:34:55 +08:00
binary-husky
1e125ca05f Update README.md 2023-03-23 00:20:39 +08:00
binary-husky
16253ed53c Update README.md 2023-03-23 00:15:44 +08:00
binary-husky
8beb75762c Update README.md 2023-03-22 22:52:15 +08:00
Your Name
8da4a0af45 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-22 22:42:56 +08:00
Your Name
371bef6e1a bug fix 2023-03-22 22:42:50 +08:00
Your Name
2aaa836d81 Program self-analysis feature 2023-03-22 22:37:14 +08:00
binary-husky
14d53a288f Update README.md 2023-03-22 22:34:15 +08:00
binary-husky
b1d6e711c0 Update README.md 2023-03-22 20:47:28 +08:00
binary-husky
32b005199d Update README.md 2023-03-22 20:06:09 +08:00
binary-husky
a8886562a1 Update README.md 2023-03-22 20:02:03 +08:00
binary-husky
a60d881268 Update README.md 2023-03-22 19:58:10 +08:00
binary-husky
b9967686a1 Private configuration
# config_private.py holds your secrets, such as the API key and proxy URLs
# At load time, first check whether a private config_private.py file exists (not tracked by git); if it does, it overrides the original config file
2023-03-22 19:49:45 +08:00
binary-husky
dd3b3524bd Borrow from the github.com/GaiZhenbiao/ChuanhuChatGPT project 2023-03-22 19:47:49 +08:00
binary-husky
a880819ffd from github.com/polarwinkel/mdtex2html 2023-03-22 19:46:08 +08:00
binary-husky
b8d802a922 Update README.md 2023-03-22 19:39:02 +08:00
binary-husky
01aafd4df2 Update README.md 2023-03-22 19:36:39 +08:00
binary-husky
7b3b6f5241 Update README.md 2023-03-22 19:35:34 +08:00
binary-husky
7a50af1897 Update README.md 2023-03-22 19:33:26 +08:00
binary-husky
5dc406c641 Fix gradio not respecting the proxy 2023-03-22 19:22:42 +08:00
qingxu fu
3103817990 add private conf 2023-03-22 17:54:15 +08:00
qingxu fu
88aaf310c4 upload 2023-03-22 17:48:25 +08:00
qingxu fu
e8d86a3242 Proxy location 2023-03-22 17:45:10 +08:00
qingxu fu
37fd1b1c97 upload 2023-03-22 17:35:23 +08:00
qingxu fu
e3e313beab logging 2023-03-22 17:32:48 +08:00
qingxu fu
380a8a9d4d fix logging encoding 2023-03-22 17:30:30 +08:00
qingxu fu
d00f6bb1a6 add proxy debug function 2023-03-22 17:25:37 +08:00
binary-husky
f8a44a82a9 Update README.md 2023-03-22 16:09:37 +08:00
binary-husky
2be45c386d Update predict.py 2023-03-21 21:24:38 +08:00
binary-husky
2b7e8b9278 Update README.md 2023-03-21 17:53:40 +08:00
binary-husky
8590014462 Update README.md 2023-03-21 17:53:04 +08:00
binary-husky
8215caddb2 Update README.md 2023-03-21 15:49:52 +08:00
505030475
93a9d27b1e ok 2023-03-21 13:53:24 +08:00
binary-husky
e128523103 Update README.md 2023-03-21 13:45:08 +08:00
505030475
da0caab4cf add deploy method for windows 2023-03-21 13:35:53 +08:00
binary-husky
6e9813565a Update README.md 2023-03-20 18:56:09 +08:00
binary-husky
01cc409f99 Update README.md 2023-03-20 18:55:06 +08:00
Your Name
3438f8f291 readme 2023-03-20 18:39:48 +08:00
208 files changed, with 15,795 insertions and 28,719 deletions

.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 19 lines)

@@ -0,0 +1,19 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug 简述**
**Screen Shot 截图**
**Terminal Traceback 终端traceback如果有**
Before submitting an issue 提交issue之前
- Please try to upgrade your code. 如果您的代码不是最新的,建议您先尝试更新代码
- Please check project wiki for common problem solutions. 项目[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)有一些常见问题的解决方法


@@ -1,71 +0,0 @@
name: Report Bug | 报告BUG
description: "Report bug"
title: "[Bug]: "
labels: []
body:
  - type: dropdown
    id: download
    attributes:
      label: Installation Method | 安装方法与平台
      options:
        - Please choose | 请选择
        - Pip Install (I ignored requirements.txt)
        - Pip Install (I used latest requirements.txt)
        - OneKeyInstall (一键安装脚本-windows)
        - OneKeyInstall (一键安装脚本-mac)
        - Anaconda (I ignored requirements.txt)
        - Anaconda (I used latest requirements.txt)
        - Docker (Windows/Mac)
        - Docker (Linux)
        - Docker-Compose (Windows/Mac)
        - Docker-Compose (Linux)
        - Huggingface
        - Others (Please Describe)
    validations:
      required: true
  - type: dropdown
    id: version
    attributes:
      label: Version | 版本
      options:
        - Please choose | 请选择
        - Latest | 最新版
        - Others | 非最新版
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: OS | 操作系统
      options:
        - Please choose | 请选择
        - Windows
        - Mac
        - Linux
        - Docker
    validations:
      required: true
  - type: textarea
    id: describe
    attributes:
      label: Describe the bug | 简述
      description: Describe the bug | 简述
    validations:
      required: true
  - type: textarea
    id: screenshot
    attributes:
      label: Screen Shot | 有帮助的截图
      description: Screen Shot | 有帮助的截图
    validations:
      required: true
  - type: textarea
    id: traceback
    attributes:
      label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
      description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)


@@ -0,0 +1,10 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---


@@ -1,23 +0,0 @@
name: Feature Request | 功能请求
description: "Feature Request"
title: "[Feature]: "
labels: []
body:
  - type: dropdown
    id: download
    attributes:
      label: Class | 类型
      options:
        - Please choose | 请选择
        - Other | 其他
        - Function plugin | 函数插件
        - Large language model | 大语言模型
        - Core program | 程序主体
    validations:
      required: false
  - type: textarea
    id: traceback
    attributes:
      label: Feature Request | 功能请求
      description: Feature Request | 功能请求


@@ -1,44 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-all-capacity
on:
  push:
    branches:
      - 'master'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_with_all_capacity
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+AllCapacity
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,44 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-audio-assistant
on:
  push:
    branches:
      - 'master'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_audio_assistant
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+NoLocal+AudioAssistant
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,44 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-chatglm
on:
  push:
    branches:
      - 'master'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_chatglm_moss
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+ChatGLM+Moss
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,51 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-latex-arm
on:
  push:
    branches:
      - "master"
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_with_latex_arm
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          platforms: linux/arm64
          file: docs/GithubAction+NoLocal+Latex
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,44 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-latex
on:
  push:
    branches:
      - 'master'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_with_latex
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+NoLocal+Latex
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,44 +0,0 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-without-local-llms
on:
  push:
    branches:
      - 'master'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_nolocal
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+NoLocal
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,25 +0,0 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '*/5 * * * *'
jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: read
    steps:
      - uses: actions/stale@v8
        with:
          stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 1 days.'
          days-before-stale: 100
          days-before-close: 1
          debug-only: true

.gitignore (vendored, 23 lines changed)

@@ -131,11 +131,7 @@ dmypy.json
# Pyre type checker
.pyre/
# macOS files
.DS_Store
.vscode
.idea
history
ssr_conf
@@ -145,21 +141,4 @@ private.md
private_upload
other_llms
cradle*
debug*
private*
crazy_functions/test_project/pdf_and_word
crazy_functions/test_samples
request_llms/jittorllms
multi-language
request_llms/moss
media
flagged
request_llms/ChatGLM-6b-onnx-u8s8
.pre-commit-config.yaml
test.*
temp.*
objdump*
*.min.*.js
TODO
experimental_mods
search_results
debug*


@@ -1,39 +1,13 @@
# This Dockerfile builds a minimal runtime environment with no local models
# To use local models such as chatglm, or LaTeX runtime dependencies, see docker-compose.yml
# - How to build: first edit `config.py`, then `docker build -t gpt-academic . `
# - How to run (on Linux): `docker run --rm -it --net=host gpt-academic `
# - How to run (other OSes; pick any fixed port, e.g. 50923): `docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic `
FROM python:3.11
# Optional step: switch the pip index (the following three lines can be removed)
RUN echo '[global]' > /etc/pip.conf && \
echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
# Voice output feature (the following two lines: the first switches to the Aliyun apt source, the second installs ffmpeg; both can be removed)
RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
RUN apt-get install ffmpeg -y
# Enter the working directory (required)
COPY . /gpt
WORKDIR /gpt
# Install most dependencies first, leveraging the Docker cache to speed up later builds (the following two lines can be removed)
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
# Copy the project files and install the remaining dependencies (required)
COPY . .
RUN pip3 install -r requirements.txt
# Optional step: warm up modules (can be removed)
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# Launch (required)
CMD ["python3", "-u", "main.py"]
CMD ["python3", "main.py"]

README.md (new file, 290 lines)

@@ -0,0 +1,290 @@
# ChatGPT 学术优化 (ChatGPT Academic Optimization)
**If you like this project, please give it a Star; if you've come up with more useful academic shortcuts or function plugins, feel free to open an issue or a pull request (to the `dev` branch).**
> **Note**
>
> 1. Note that only the function-plugin buttons marked in red support reading files. Plugin support for pdf/word files is still being improved and needs more help from developers.
>
> 2. The function of every file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to call GPT to regenerate the project's self-analysis report. Common questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
>
> 3. If you are not comfortable with the partly Chinese-named functions, comments, or UI, you can click the relevant function plugin at any time to call ChatGPT to generate a pure-English version of the project source code.
>
> 4. The project uses OpenAI's gpt-3.5-turbo model; hoping gpt-4 lowers its entry bar soon 😂
<div align="center">
Feature | Description
--- | ---
One-click polishing | One-click polishing and one-click grammar checking for papers
One-click Chinese-English translation | One-click translation between Chinese and English
One-click code explanation | Correctly display and explain code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Support user-defined shortcut keys
[Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Support configuring a proxy server
Modular design | Support custom, high-order experimental features and [function plugins]; plugins support [hot reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Program self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click understanding of this project's own source code
[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of other Python/C/C++/Java project trees
Paper reading | [Function plugin] One-click interpretation of a full LaTeX paper with abstract generation
Batch comment generation | [Function plugin] One-click batch generation of function comments
Chat analysis report generation | [Function plugin] Automatically generate a summary report after a run
[arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF in one click
[Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extract a PDF paper's title & abstract and translate the full text (multithreaded)
[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) (Version>=2.45) | [Function plugin] Given any Google Scholar search page URL, let gpt pick the interesting articles for you
Formula display | Show both the TeX source and the rendered form of formulas
Image display | Display images in markdown
Multithreaded function plugin support | Support multithreaded calls to chatgpt; process huge volumes of text or code in one click
Markdown table support | Render the markdown tables in GPT output
Dark gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) at startup | Append ```/?__dark-theme=true``` to the browser URL to switch to the dark theme
Huggingface [online demo](https://huggingface.co/spaces/qingxu98/gpt-academic) (no proxy needed) | Log in to huggingface and copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
…… | ……
</div>
<!-- - New UI (left: master main branch; right: dev frontier) -->
- New UI
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
</div>
- All buttons are dynamically generated by reading functional.py, so custom functions can be added freely, freeing up your clipboard
<div align="center">
<img src="img/公式.gif" width="700" >
</div>
- Polishing / proofreading
<div align="center">
<img src="img/润色.gif" width="700" >
</div>
- Markdown tables in GPT output are supported
<div align="center">
<img src="img/demo2.jpg" width="500" >
</div>
- If the output contains formulas, they are shown in both TeX and rendered form for easy copying and reading
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>
- Too lazy to read the project code? Just feed the whole project straight to chatgpt
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>
## Run directly (Windows, Linux or MacOS)
### 1. Download the project
```sh
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
```
### 2. Configure the API_KEY and proxy settings
In `config.py`, configure the overseas proxy and your OpenAI API KEY, as explained below:
```
1. If you are in mainland China, you need an overseas proxy to use the OpenAI API smoothly. Read config.py carefully for the setup (1. set USE_PROXY to True; 2. modify proxies as instructed).
2. Configure the OpenAI API KEY. You need to register on the OpenAI website and obtain an API KEY; once you have it, set it in config.py.
3. Issues related to proxy networks (network timeouts, proxy not working) are collected at https://github.com/binary-husky/chatgpt_academic/issues/1
```
P.S. At startup, the program first checks whether a private configuration file named `config_private.py` exists, and uses its settings to override the same-named settings in `config.py`. If you understand this loading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information safer.
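To make the override described above concrete, here is a minimal sketch of how such a two-layer configuration read could work; the helper name `read_single_conf` is hypothetical, not the project's verbatim implementation:
```python
# Minimal sketch (an assumption, not the project's exact code) of reading one
# setting with config_private.py taking precedence over config.py.
import importlib
import os

def read_single_conf(name: str):
    """Return one setting, preferring the untracked config_private.py."""
    if os.path.exists("config_private.py"):
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            # private settings override the same-named defaults in config.py
            return getattr(private, name)
    return getattr(importlib.import_module("config"), name)

# Example usage: proxies = read_single_conf("proxies")
```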
### 3. Install dependencies
```sh
# (Option 1) Recommended
python -m pip install -r requirements.txt
# (Option 2) If you use anaconda, the steps are similar:
# (Option 2.1) conda create -n gptac_venv python=3.11
# (Option 2.2) conda activate gptac_venv
# (Option 2.3) python -m pip install -r requirements.txt
# Note: use the official pip index or the Aliyun pip mirror; other pip mirrors (e.g. some university mirrors) may cause problems. To switch mirrors temporarily:
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
```
### 4. Run
```sh
python main.py
```
### 5. Test the experimental features
```
- Test C++ project header analysis
    Enter `./crazy_functions/test_project/cpp/libJPG` in the input area, then click "[实验] 解析整个C++项目(input输入项目根路径)"
- Test writing an abstract for a LaTeX project
    Enter `./crazy_functions/test_project/latex/attention` in the input area, then click "[实验] 读tex论文写摘要(input输入项目根路径)"
- Test Python project analysis
    Enter `./crazy_functions/test_project/python/dqn` in the input area, then click "[实验] 解析整个py项目(input输入项目根路径)"
- Test self-analysis of this project's own code
    Click "[实验] 请解析并解构此项目本身"
- Test the experimental function template (asks gpt what happened in history on this day); you can build more complex features using this function as a template
    Click "[实验] 实验功能函数模板"
```
## Use docker (Linux)
``` sh
# Download the project
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
# Configure the overseas proxy and the OpenAI API KEY
Edit config.py with any text editor
# Install
docker build -t gpt-academic .
# Run
docker run --rm -it --net=host gpt-academic
# Test the experimental features
## Test self-analysis of this project's own code
Click "[实验] 请解析并解构此项目本身"
## Test the experimental function template (asks gpt what happened in history on this day); you can build more complex features using this function as a template
Click "[实验] 实验功能函数模板"
## (Note: when running inside docker, pay extra attention to the program's file access permissions)
## Test C++ project header analysis
Enter ./crazy_functions/test_project/cpp/libJPG in the input area, then click "[实验] 解析整个C++项目(input输入项目根路径)"
## Test writing an abstract for a LaTeX project
Enter ./crazy_functions/test_project/latex/attention in the input area, then click "[实验] 读tex论文写摘要(input输入项目根路径)"
## Test Python project analysis
Enter ./crazy_functions/test_project/python/dqn in the input area, then click "[实验] 解析整个py项目(input输入项目根路径)"
```
## Other deployment options
- Use WSL2 (Windows Subsystem for Linux)
See [deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
- Remote deployment with nginx
See [deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC)
## Customize new convenience buttons (custom academic shortcut keys)
Open functional.py, add an entry as shown below, then restart the program. (If the button has already been added successfully and is visible, both the prefix and the suffix can be hot-modified and take effect without restarting.)
For example:
```
"超级英译中": {
    # Prefix: prepended before your input, e.g. to describe your request, such as translating, explaining code, polishing, etc.
    "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
    # Suffix: appended after your input, e.g. together with the prefix it can wrap your input in quotation marks.
    "Suffix": "",
},
```
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>
If you invent more useful academic shortcuts, feel free to open an issue or a pull request.
## Configure a proxy
### Method 1: the standard way
In ```config.py```, set the port to match your proxy software
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226571294-37a47cd9-4d40-4c16-97a2-d360845406f7.png" width="500" >
<img src="https://user-images.githubusercontent.com/96192199/226838985-e5c95956-69c2-4c23-a4dd-cd7944eeb451.png" width="500" >
</div>
After configuring, you can test whether the proxy works with the following command; if everything is normal, the code below will print the location of your proxy server:
```
python check_proxy.py
```
### Method 2: tutorial for complete beginners
[Tutorial for complete beginners](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
## Compatibility tests
### Image display:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
</div>
### If a program can read and analyze itself:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
</div>
### Analysis of any other Python/C++ project:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
</div>
### One-click LaTeX paper reading comprehension and abstract generation
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
</div>
### Automatic report generation
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
</div>
### Modular feature design
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
</div>
### Source code translated into English
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
</div>
## Todo and version planning:
- version 3 (Todo):
- - Support gpt4 and more LLMs
- version 2.4+ (Todo):
- - Fix the overly-long-text / token-overflow problem when summarizing the source code of large projects
- - Package the project for deployment
- - Improve the function-plugin parameter interface
- - Self-update
- version 2.4: (1) Added full-text PDF translation; (2) added the ability to reposition the input area; (3) added a vertical layout option; (4) optimized multithreaded function plugins.
- version 2.3: Enhanced multithreaded interactivity
- version 2.2: Function plugins support hot reload
- version 2.1: Collapsible layout
- version 2.0: Introduced modular function plugins
- version 1.0: Basic features
## References and learning
```
The code references designs from many other excellent projects, mainly including:
# Reference project 1: ChuanhuChatGPT, from which we borrowed the method of reading the OpenAI json, the way query history is recorded, and the gradio queue usage tricks
https://github.com/GaiZhenbiao/ChuanhuChatGPT
# Reference project 2: mdtex2html, from which we borrowed the formula-handling approach
https://github.com/polarwinkel/mdtex2html
```


@@ -1,77 +1,28 @@
from loguru import logger
def check_proxy(proxies, return_ip=False):
"""
Check the proxy configuration and return the result.
Args:
proxies (dict): a dict containing the http and https proxy settings.
return_ip (bool, optional): whether to return the proxy's IP address. Defaults to False.
Returns:
str or None: the result message of the check, or the proxy's IP address (if `return_ip` is True).
"""
def check_proxy(proxies):
import requests
proxies_https = proxies['https'] if proxies is not None else ''
ip = None
try:
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) # ⭐ 执行GET请求以获取代理信息
response = requests.get("https://ipapi.co/json/",
proxies=proxies, timeout=4)
data = response.json()
print(f'查询代理的地理位置,返回的结果是{data}')
if 'country_name' in data:
country = data['country_name']
result = f"代理配置 {proxies_https}, 代理所在地:{country}"
if 'ip' in data:
ip = data['ip']
elif 'error' in data:
alternative, ip = _check_with_backup_source(proxies) # ⭐ 调用备用方法检查代理配置
if alternative is None:
result = f"代理配置 {proxies_https}, 代理所在地未知,IP查询频率受限"
else:
result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
else:
result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
if not return_ip:
logger.warning(result)
return result
else:
return ip
result = f"代理配置 {proxies_https}, 代理所在地未知,IP查询频率受限"
print(result)
return result
except:
result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
if not return_ip:
logger.warning(result)
return result
else:
return ip
print(result)
return result
def _check_with_backup_source(proxies):
"""
Check the proxy via a backup source and fetch the corresponding info.
Args:
proxies (dict): a dict containing the proxy settings.
Returns:
tuple: a tuple of the proxy's location info (geo) and IP address (ip).
"""
import random, string, requests
random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
try:
res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json() # ⭐ 执行代理检查和备份源请求
return res_json['dns']['geo'], res_json['dns']['ip']
except:
return None, None
def backup_and_download(current_version, remote_version):
"""
One-click update protocol: back up the current version, download the remote version, and unzip it.
Args:
current_version (str): the current version number.
remote_version (str): the remote version number.
Returns:
str: the path of the new version's directory.
One-click update protocol: backup and download
"""
from toolbox import get_conf
import shutil
@@ -85,10 +36,10 @@ def backup_and_download(current_version, remote_version):
return new_version_dir
os.makedirs(new_version_dir)
shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
proxies = get_conf('proxies')
try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
zip_file_path = backup_dir+'/master.zip' # ⭐ path where the downloaded archive is saved
proxies, = get_conf('proxies')
r = requests.get(
'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
zip_file_path = backup_dir+'/master.zip'
with open(zip_file_path, 'wb+') as f:
f.write(r.content)
dst_path = new_version_dir
@@ -104,84 +55,48 @@ def backup_and_download(current_version, remote_version):
def patch_and_restart(path):
"""
One-click update protocol: overwrite and restart
Args:
path (str): the path containing the new version's code
Notes:
If the program is not using the private config file config_private.py, config.py is renamed to config_private.py to avoid losing the configuration.
Update flow:
- copy the latest code into the current directory
- update the pip package dependencies
- if the update fails, prompt the user to install the dependencies manually and restart
"""
from distutils import dir_util
import distutils
import shutil
import os
import sys
import time
import glob
from shared_utils.colorful import log亮黄, log亮绿, log亮红
# if not using config_private, move origin config.py as config_private.py
if not os.path.exists('config_private.py'):
log亮黄('Since you have not set up a private config_private.py configuration, your existing configuration is now being moved to config_private.py to prevent it from being lost;',
print('Since you have not set up a private config_private.py configuration, your existing configuration is now being moved to config_private.py to prevent it from being lost;',
'you can also recover the old version of the program at any time from the history subfolder.')
shutil.copyfile('config.py', 'config_private.py')
path_new_version = glob.glob(path + '/*-master')[0]
dir_util.copy_tree(path_new_version, './') # ⭐ copy the latest code into the current directory
log亮绿('The code has been updated; about to update the pip package dependencies ...')
for i in reversed(range(5)): time.sleep(1); log亮绿(i)
try:
import subprocess
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
except:
log亮红('There was a problem installing the pip dependencies; install the new dependencies manually with `python -m pip install -r requirements.txt`, then launch in the usual way with `python main.py`.')
log亮绿('Update complete. You can recover the old version at any time from the history subfolder. Restarting in 5s.')
log亮红('If the restart fails, you may need to install the new dependencies manually with `python -m pip install -r requirements.txt`, then launch in the usual way with `python main.py`.')
log亮绿(' ------------------------------ -----------------------------------')
for i in reversed(range(8)): time.sleep(1); log亮绿(i)
os.execl(sys.executable, sys.executable, *sys.argv) # restart the program
distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
print('Update complete. You can recover the old version at any time from the history subfolder. Restarting in 5s.')
for i in reversed(range(5)):
time.sleep(1)
print(i)
print(' ------------------------------ -----------------------------------')
os.execl(sys.executable, 'python', 'main.py')
def get_current_version():
"""
Get the current version number.
Returns:
str: the current version number, or an empty string if it cannot be obtained.
"""
import json
try:
with open('./version', 'r', encoding='utf8') as f:
current_version = json.loads(f.read())['version'] # ⭐ extract the version number from the parsed json data
current_version = json.loads(f.read())['version']
except:
current_version = ""
return current_version
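# 示意:./version 文件是一个 JSON 文档,字段名与上方读取逻辑及下方 auto_update 的用法一致
# (具体取值为假设):
#
#     {"version": 3.0, "show_feature": true, "new_feature": "示例:新增某某功能"}
#
# 其中 version 为浮点数版本号,show_feature 控制是否展示 new_feature 中的新特性说明。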
def auto_update(raise_error=False):
def auto_update():
"""
一键更新协议:查询版本和用户意见
Args:
raise_error (bool, optional): 是否在出错时抛出错误。默认为 False。
Returns:
None
"""
try:
from toolbox import get_conf
import requests
import time
import json
proxies = get_conf('proxies')
try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
proxies, = get_conf('proxies')
response = requests.get(
"https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=1)
remote_json_data = json.loads(response.text)
remote_version = remote_json_data['version']
if remote_json_data["show_feature"]:
@@ -191,67 +106,29 @@ def auto_update(raise_error=False):
with open('./version', 'r', encoding='utf8') as f:
current_version = f.read()
current_version = json.loads(current_version)['version']
if (remote_version - current_version) >= 0.01-1e-5:
from shared_utils.colorful import log亮黄
log亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}{new_feature}') # ⭐ 在控制台打印新版本信息
logger.info('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
if (remote_version - current_version) >= 0.05:
print(
f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}{new_feature}')
print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
user_instruction = input('(2)是否一键更新代码(Y/y+回车=确认,输入其他/无输入+回车=不更新)?')
if user_instruction in ['Y', 'y']:
path = backup_and_download(current_version, remote_version) # ⭐ 备份并下载文件
path = backup_and_download(current_version, remote_version)
try:
patch_and_restart(path) # ⭐ 执行覆盖并重启操作
patch_and_restart(path)
except:
msg = '更新失败。'
if raise_error:
from toolbox import trimmed_format_exc
msg += trimmed_format_exc()
logger.warning(msg)
print('更新失败。')
else:
logger.info('自动更新程序:已禁用')
print('自动更新程序:已禁用')
return
else:
return
except:
msg = '自动更新程序:已禁用。建议排查:代理网络配置。'
if raise_error:
from toolbox import trimmed_format_exc
msg += trimmed_format_exc()
logger.info(msg)
def warm_up_modules():
"""
预热模块,加载特定模块并执行预热操作。
"""
logger.info('正在执行一些模块的预热 ...')
from toolbox import ProxyNetworkActivate
from request_llms.bridge_all import model_info
with ProxyNetworkActivate("Warmup_Modules"):
enc = model_info["gpt-3.5-turbo"]['tokenizer']
enc.encode("模块预热", disallowed_special=())
enc = model_info["gpt-4"]['tokenizer']
enc.encode("模块预热", disallowed_special=())
def warm_up_vectordb():
"""
预热向量数据库所需的模块(如 nltk 的 punkt 分词数据),确保在后续的流程中能够顺利运行。
Returns:
None
"""
logger.info('正在执行一些模块的预热 ...')
from toolbox import ProxyNetworkActivate
with ProxyNetworkActivate("Warmup_Modules"):
import nltk
with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")
print('自动更新程序:已禁用')
if __name__ == '__main__':
import os
os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
from toolbox import get_conf
proxies = get_conf('proxies')
check_proxy(proxies)
proxies, = get_conf('proxies')
check_proxy(proxies)

config.py

@@ -1,421 +1,53 @@
"""
以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
读取优先级:环境变量 > config_private.py > config.py
--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
All the following configurations also support using environment variables to override,
and the environment variable configuration format can be seen in docker-compose.yml.
Configuration reading priority: environment variable > config_private.py > config.py
"""
# [step 1]>> 例如: API_KEY = "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"(此key无效)
API_KEY = "sk-此处填API密钥"
# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织格式如org-123456789abcdefghijklmno的,请向下翻,找 API_ORG 设置项
API_KEY = "此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改;如果使用本地或无地域限制的大模型时,此处也不需要修改
# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改
USE_PROXY = False
if USE_PROXY:
"""
代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
<配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1
[协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
[地址] 填localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
[端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
"""
# 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
# 例如 "socks5h://localhost:11284"
# [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
# [地址] 懂的都懂,不懂就填localhost或者127.0.0.1(肯定错不了,localhost意思是代理软件安装在本机上)
# [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
# 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
proxies = {
# [协议]:// [地址] :[端口]
"http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
"https": "socks5h://localhost:11284", # 再例如 "https": "http://127.0.0.1:7890",
"http": "socks5h://localhost:11284",
"https": "socks5h://localhost:11284",
}
else:
proxies = None
# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
"gemini-1.5-pro", "chatglm3"
]
EMBEDDING_MODEL = "text-embedding-3-small"
# --- --- --- ---
# P.S. 其他可用的模型还包括
# AVAIL_LLM_MODELS = [
# "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
# "qianfan", "deepseekcoder",
# "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
# "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
# "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
# "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
# "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
# "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
# "deepseek-chat" ,"deepseek-coder",
# "gemini-1.5-flash",
# "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
# ]
# --- --- --- ---
# 此外,您还可以在接入one-api/vllm/ollama/Openroute时,
# 使用"one-api-*","vllm-*","ollama-*","openrouter-*"前缀直接使用非标准方式接入的模型,例如
# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-phi3(max_token=4096)","openrouter-openai/gpt-4o-mini","openrouter-openai/chatgpt-4o-latest"]
# --- --- --- ---
# --------------- 以下配置可以优化体验 ---------------
# URL重新定向,实现更换API_URL的作用(高危设置!常规情况下不要修改!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
# 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions", "http://localhost:11434/api/chat": "在这里填写您ollama的URL"}
API_URL_REDIRECT = {}
# 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
# 一言以蔽之:免费(5刀)用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
DEFAULT_WORKER_NUM = 3
# 色彩主题, 可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
# 更多主题, 请查阅Gradio主题商店: https://huggingface.co/spaces/gradio/theme-gallery 可选 ["Gstaff/Xkcd", "NoCrypt/Miku", ...]
THEME = "Default"
AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
# 默认的系统提示词(system prompt)
INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."
# 对话窗的高度(仅在LAYOUT="TOP-DOWN"时生效)
# [step 3]>> 以下配置可以优化体验,但大部分场合下并不需要修改
# 对话窗的高度
CHATBOT_HEIGHT = 1115
# 代码高亮
CODE_HIGHLIGHT = True
# 窗口布局
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
# 暗色模式 / 亮色模式
DARK_MODE = True
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
# 发送请求到OpenAI后,等待多久判定为超时
TIMEOUT_SECONDS = 30
TIMEOUT_SECONDS = 25
# 网页的端口, -1代表随机端口
WEB_PORT = -1
# 是否自动打开浏览器页面
AUTO_OPEN_BROWSER = True
# 如果OpenAI不响应网络卡顿、代理失败、KEY失效,重试的次数限制
MAX_RETRY = 2
# OpenAI模型选择是(gpt4现在只对申请成功的人开放)
LLM_MODEL = "gpt-3.5-turbo"
# 插件分类默认选项
DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
# OpenAI的API_URL
API_URL = "https://api.openai.com/v1/chat/completions"
# 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
# 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
# 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
# 也可以是具体的模型路径
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
# 百度千帆(LLM_MODEL="qianfan")
BAIDU_CLOUD_API_KEY = ''
BAIDU_CLOUD_SECRET_KEY = ''
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
# 本地LLM模型如ChatGLM的执行方式 CPU/GPU
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
# 设置gradio的并行线程数(不需要修改)
# 设置并行使用的线程数
CONCURRENT_COUNT = 100
# 是否在提交时自动清空输入框
AUTO_CLEAR_TXT = False
# 加一个live2d装饰
ADD_WAIFU = False
# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
# 设置用户名和密码相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个
# [("username", "password"), ("username2", "password2"), ...]
AUTHENTICATION = []
# 如果需要在二级路径下运行(常规情况下,不要修改!!)
# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
CUSTOM_PATH = "/"
# HTTPS 秘钥和证书(不需要修改)
SSL_KEYFILE = ""
SSL_CERTFILE = ""
# 极少数情况下,openai的官方KEY需要伴随组织编码(格式如org-xxxxxxxxxxxxxxxxxxxxxxxx)使用
API_ORG = ""
# 如果需要使用Slack Claude,使用教程详情见 request_llms/README.md
SLACK_CLAUDE_BOT_ID = ''
SLACK_CLAUDE_USER_TOKEN = ''
# 如果需要使用AZURE(方法一:单个azure模型部署)详情请见额外文档 docs\use_azure.md
AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
AZURE_API_KEY = "填入azure openai api的密钥" # 建议直接在API_KEY处填写,该选项即将被弃用
AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.md
# 如果需要使用AZURE(方法二:多个azure模型部署+动态切换)详情请见额外文档 docs\use_azure.md
AZURE_CFG_ARRAY = {}
# 阿里云实时语音识别(配置难度较高)
# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
ENABLE_AUDIO = False
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
ALIYUN_ACCESSKEY="" # (无需填写)
ALIYUN_SECRET="" # (无需填写)
# GPT-SOVITS 文本转语音服务的运行地址(将语言模型的生成文本朗读出来)
TTS_TYPE = "EDGE_TTS" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
GPT_SOVITS_URL = ""
EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
XFYUN_APPID = "00000000"
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
# 接入智谱大模型
ZHIPUAI_API_KEY = ""
ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
# Claude API KEY
ANTHROPIC_API_KEY = ""
# 月之暗面 API KEY
MOONSHOT_API_KEY = ""
# 零一万物(Yi Model) API KEY
YIMODEL_API_KEY = ""
# 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""
# 紫东太初大模型 https://ai-maas.wair.ac.cn
TAICHU_API_KEY = ""
# Mathpix 拥有对PDF执行OCR的功能,但是需要注册账号
MATHPIX_APPID = ""
MATHPIX_APPKEY = ""
# DOC2X的PDF解析服务,注册账号并获取API KEY: https://doc2x.noedgeai.com/login
DOC2X_API_KEY = ""
# 自定义API KEY格式
CUSTOM_API_KEY_PATTERN = ""
# Google Gemini API-Key
GEMINI_API_KEY = ''
# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
HUGGINGFACE_ACCESS_TOKEN = "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"
# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
GROBID_URLS = [
"https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
"https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
"https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
]
# Searxng互联网检索服务
SEARXNG_URL = "https://cloud-1.agent-matrix.com/"
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
ALLOW_RESET_CONFIG = False
# 在使用AutoGen插件时,是否使用Docker容器运行代码
AUTOGEN_USE_DOCKER = False
# 临时的上传文件夹位置,请尽量不要修改
PATH_PRIVATE_UPLOAD = "private_upload"
# 日志文件夹的位置,请尽量不要修改
PATH_LOGGING = "gpt_log"
# 存储翻译好的arxiv论文的路径,请尽量不要修改
ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
# 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
"Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]
# 启用插件热加载
PLUGIN_HOT_RELOAD = False
# 自定义按钮的最大数量限制
NUM_CUSTOM_BASIC_BTN = 4
# 媒体智能体的服务地址(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
DAAS_SERVER_URL = "https://hamercity-bbdown.hf.space/stream"
"""
--------------- 配置关联关系说明 ---------------
在线大模型配置关联关系示意图
├── "gpt-3.5-turbo" 等openai模型
│ ├── API_KEY
│ ├── CUSTOM_API_KEY_PATTERN(不常用)
│ ├── API_ORG(不常用)
│ └── API_URL_REDIRECT(不常用)
├── "azure-gpt-3.5" 等azure模型单个azure模型,不需要动态切换
│ ├── API_KEY
│ ├── AZURE_ENDPOINT
│ ├── AZURE_API_KEY
│ ├── AZURE_ENGINE
│ └── API_URL_REDIRECT
├── "azure-gpt-3.5" 等azure模型多个azure模型,需要动态切换,高优先级
│ └── AZURE_CFG_ARRAY
├── "spark" 星火认知大模型 spark & sparkv2
│ ├── XFYUN_APPID
│ ├── XFYUN_API_SECRET
│ └── XFYUN_API_KEY
├── "claude-3-opus-20240229" 等claude模型
│ └── ANTHROPIC_API_KEY
├── "stack-claude"
│ ├── SLACK_CLAUDE_BOT_ID
│ └── SLACK_CLAUDE_USER_TOKEN
├── "qianfan" 百度千帆大模型库
│ ├── BAIDU_CLOUD_QIANFAN_MODEL
│ ├── BAIDU_CLOUD_API_KEY
│ └── BAIDU_CLOUD_SECRET_KEY
├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
│ └── ZHIPUAI_API_KEY
├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
│ └── YIMODEL_API_KEY
├── "qwen-turbo" 等通义千问大模型
│ └── DASHSCOPE_API_KEY
├── "Gemini"
│ └── GEMINI_API_KEY
└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
├── AVAIL_LLM_MODELS
├── API_KEY
└── API_URL_REDIRECT
本地大模型示意图
├── "chatglm3"
├── "chatglm"
├── "chatglm_onnx"
├── "chatglmft"
├── "internlm"
├── "moss"
├── "jittorllms_pangualpha"
├── "jittorllms_llama"
├── "deepseekcoder"
├── "qwen-local"
├── RWKV的支持见Wiki
└── "llama2"
用户图形界面布局依赖关系示意图
├── CHATBOT_HEIGHT 对话窗的高度
├── CODE_HIGHLIGHT 代码高亮
├── LAYOUT 窗口布局
├── DARK_MODE 暗色模式 / 亮色模式
├── DEFAULT_FN_GROUPS 插件分类默认选项
├── THEME 色彩主题
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
├── ADD_WAIFU 加一个live2d装饰
└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
插件在线服务配置依赖关系示意图
├── 互联网检索
│ └── SEARXNG_URL
├── 语音功能
│ ├── ENABLE_AUDIO
│ ├── ALIYUN_TOKEN
│ ├── ALIYUN_APPKEY
│ ├── ALIYUN_ACCESSKEY
│ └── ALIYUN_SECRET
└── PDF文档精准解析
├── GROBID_URLS
├── MATHPIX_APPID
└── MATHPIX_APPKEY
"""


@@ -1,175 +1,71 @@
# 'primary' 颜色对应 theme.py 中的 primary_hue
# 'secondary' 颜色对应 theme.py 中的 neutral_hue
# 'stop' 颜色对应 theme.py 中的 color_er
import importlib
# 默认按钮颜色是 secondary
from toolbox import clear_line_break
from toolbox import apply_gpt_academic_string_mask_langbased
from toolbox import build_gpt_academic_masked_string_langbased
from textwrap import dedent
def get_core_functions():
return {
"学术语料润色": {
# [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等。
# 这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
"Prefix": build_gpt_academic_masked_string_langbased(
text_show_english=
r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
r"Firstly, you should provide the polished paragraph (in English). "
r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
text_show_chinese=
r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
) + "\n\n",
# [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
"英语学术润色": {
# 前言
"Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
# 后语
"Suffix": r"",
# [3] 按钮颜色 (可选参数,默认 secondary)
"Color": r"secondary",
# [4] 按钮是否可见 (可选参数,默认 True,即可见)
"Visible": True,
# [5] 是否在触发时清除历史 (可选参数,默认 False,即不处理之前的对话历史)
"AutoClearHistory": False,
# [6] 文本预处理 (可选参数,默认 None,举例:写个函数移除所有的换行符)
"PreProcess": None,
# [7] 模型选择 (可选参数。如不设置,则使用当前全局模型;如设置,则用指定模型覆盖全局模型。)
# "ModelOverride": "gpt-3.5-turbo", # 主要用途:强制点击此基础功能按钮时,使用指定的模型。
"Color": r"secondary", # 按钮颜色
},
"总结绘制脑图": {
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
"Prefix": '''"""\n\n''',
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
"Suffix":
# dedent() 函数用于去除多行字符串的缩进
dedent("\n\n"+r'''
"""
使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如
以下是对以上文本的总结,以mermaid flowchart的形式展示
```mermaid
flowchart LR
A["节点名1"] --> B("节点名2")
B --> C{"节点名3"}
C --> D["节点名4"]
C --> |"箭头名1"| E["节点名5"]
C --> |"箭头名2"| F["节点名6"]
```
注意:
(1)使用中文
(2)节点名字使用引号包裹,如["Laptop"]
(3)`|` 和 `"`之间不要存在空格
(4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
'''),
"中文学术润色": {
"Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
"Suffix": r"",
},
"查找语法错误": {
"Prefix": r"Help me ensure that the grammar and the spelling is correct. "
r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, "
r"put the original text the first column, "
r"put the corrected text in the second column and highlight the key words you fixed. "
r"Finally, please provide the proofreaded text.""\n\n"
"Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
r"put the original text the first column, " +
r"put the corrected text in the second column and highlight the key words you fixed.""\n"
r"Example:""\n"
r"Paragraph: How is you? Do you knows what is it?""\n"
r"| Original sentence | Corrected sentence |""\n"
r"| :--- | :--- |""\n"
r"| How **is** you? | How **are** you? |""\n"
r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n\n"
r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
r"Below is a paragraph from an academic paper. "
r"You need to report all grammar and spelling mistakes as the example before."
+ "\n\n",
"Suffix": r"",
"PreProcess": clear_line_break, # 预处理:清除换行符
},
"中译英": {
"Prefix": r"Please translate following sentence to English:" + "\n\n",
"Suffix": r"",
},
"学术英中互译": {
"Prefix": build_gpt_academic_masked_string_langbased(
text_show_chinese=
r"I want you to act as a scientific English-Chinese translator, "
r"I will provide you with some paragraphs in one language "
r"and your task is to accurately and academically translate the paragraphs only into the other language. "
r"Do not repeat the original provided paragraphs after translation. "
r"You should use artificial intelligence tools, "
r"such as natural language processing, and rhetorical knowledge "
r"and experience about effective writing techniques to reply. "
r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
text_show_english=
r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
r"你需要翻译的文本如下:"
) + "\n\n",
"Suffix": r"",
"学术中英互译": {
"Prefix": r"I want you to act as a scientific English-Chinese translator, " +
r"I will provide you with some paragraphs in one language " +
r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
r"Do not repeat the original provided paragraphs after translation. " +
r"You should use artificial intelligence tools, " +
r"such as natural language processing, and rhetorical knowledge " +
r"and experience about effective writing techniques to reply. " +
r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
"Suffix": "",
"Color": "secondary",
},
"英译中": {
"Prefix": r"翻译成地道的中文:" + "\n\n",
"Prefix": r"翻译成中文:" + "\n\n",
"Suffix": r"",
"Visible": False,
},
"找图片": {
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片" + "\n\n",
"Suffix": r"",
"Visible": False,
},
"解释代码": {
"Prefix": r"请解释以下代码:" + "\n```\n",
"Suffix": "\n```\n",
},
"参考文献转Bib": {
"Prefix": r"Here are some bibliography items, please transform them into bibtex style."
r"Note that, reference styles maybe more than one kind, you should transform each item correctly."
r"Items need to be transformed:" + "\n\n",
"Visible": False,
"Suffix": r"",
}
}
def handle_core_functionality(additional_fn, inputs, history, chatbot):
import core_functional
importlib.reload(core_functional) # 热更新prompt
core_functional = core_functional.get_core_functions()
addition = chatbot._cookies['customize_fn_overwrite']
if additional_fn in addition:
# 自定义功能
inputs = addition[additional_fn]["Prefix"] + inputs + addition[additional_fn]["Suffix"]
return inputs, history
else:
# 预制功能
if "PreProcess" in core_functional[additional_fn]:
if core_functional[additional_fn]["PreProcess"] is not None:
inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
# 为字符串加上上面定义的前缀和后缀。
inputs = apply_gpt_academic_string_mask_langbased(
string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
lang_reference = inputs,
)
if core_functional[additional_fn].get("AutoClearHistory", False):
history = []
return inputs, history
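# 示意:按照上面的字段约定,可以向 get_core_functions() 的返回字典中新增一个
# 自定义基础功能(名称与提示词均为假设):
#
#     "一句话总结": {
#         "Prefix": r"请用一句话总结以下内容:" + "\n\n",  # 前缀,加在输入之前
#         "Suffix": r"",                                 # 后缀,加在输入之后
#         "Color": r"secondary",                         # 按钮颜色(可选,默认 secondary)
#         "Visible": True,                               # 按钮是否可见(可选,默认 True)
#         "AutoClearHistory": False,                     # 触发时是否清除历史(可选,默认 False)
#     },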
if __name__ == "__main__":
t = get_core_functions()["总结绘制脑图"]
print(t["Prefix"] + t["Suffix"])


@@ -1,48 +1,135 @@
from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效
from toolbox import trimmed_format_exc
from loguru import logger
def get_crazy_functions():
from crazy_functions.AntFin import AntFinTest
###################### 第一组插件 ###########################
# [第一组插件]: 最早期编写的项目插件和一些demo
from crazy_functions.读文章写摘要 import 读文章写摘要
from crazy_functions.生成函数注释 import 批量生成函数注释
from crazy_functions.解析项目源代码 import 解析项目本身
from crazy_functions.解析项目源代码 import 解析一个Python项目
from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
from crazy_functions.解析项目源代码 import 解析一个C项目
from crazy_functions.解析项目源代码 import 解析一个Golang项目
from crazy_functions.解析项目源代码 import 解析一个Java项目
from crazy_functions.解析项目源代码 import 解析一个Rect项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文
function_plugins = {
"蚂小财测试": {
"Group": "智能体",
"Color": "stop",
"AsButton": False,
"Info": "蚂小财测试",
"Function": HotReload(AntFinTest),
"请解析并解构此项目本身(源码自译解)": {
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(解析项目本身)
},
"解析整个Python项目": {
"Color": "stop", # 按钮颜色
"Function": HotReload(解析一个Python项目)
},
"解析整个C++项目头文件": {
"Color": "stop", # 按钮颜色
"Function": HotReload(解析一个C项目的头文件)
},
"解析整个C++项目(.cpp/.h": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(解析一个C项目)
},
"解析整个Go项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(解析一个Golang项目)
},
"解析整个Java项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(解析一个Java项目)
},
"解析整个React项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(解析一个Rect项目)
},
"读Tex论文写摘要": {
"Color": "stop", # 按钮颜色
"Function": HotReload(读文章写摘要)
},
"批量生成函数注释": {
"Color": "stop", # 按钮颜色
"Function": HotReload(批量生成函数注释)
},
"[多线程demo] 把本项目源代码切换成全英文": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Function": HotReload(全项目切换英文)
},
"[函数插件模板demo] 历史上的今天": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Function": HotReload(高阶功能模板函数)
},
}
###################### 第二组插件 ###########################
# [第二组插件]: 经过充分测试,但功能上距离达到完美状态还差一点点
from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
from crazy_functions.总结word文档 import 总结word文档
from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
function_plugins.update({
"批量翻译PDF文档多线程": {
"Color": "stop",
"AsButton": True, # 加入下拉菜单中
"Function": HotReload(批量翻译PDF文档)
},
"[仅供开发调试] 批量总结PDF文档": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Function": HotReload(批量总结PDF文档)
},
"[仅供开发调试] 批量总结PDF文档pdfminer": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(批量总结PDF文档pdfminer)
},
"谷歌学术检索助手输入谷歌学术搜索页url": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(谷歌检索小助手)
},
"批量总结Word文档": {
"Color": "stop",
"Function": HotReload(总结word文档)
},
"理解PDF文档内容Tk文件选择接口,仅本地": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(理解PDF文档内容)
},
"理解PDF文档内容通用接口,读取文件输入区": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(理解PDF文档内容标准文件输入)
},
})
"""
设置默认值:
- 默认 Group = 对话
- 默认 AsButton = True
- 默认 AdvancedArgs = False
- 默认 Color = secondary
"""
for name, function_meta in function_plugins.items():
if "Group" not in function_meta:
function_plugins[name]["Group"] = "对话"
if "AsButton" not in function_meta:
function_plugins[name]["AsButton"] = True
if "AdvancedArgs" not in function_meta:
function_plugins[name]["AdvancedArgs"] = False
if "Color" not in function_meta:
function_plugins[name]["Color"] = "secondary"
###################### 第三组插件 ###########################
# [第三组插件]: 尚未充分测试的函数插件,放在这里
try:
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
function_plugins.update({
"一键下载arxiv论文并翻译摘要先在input输入编号,如1812.10695": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(下载arxiv论文并翻译摘要)
}
})
except Exception as err:
print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}')
###################### 第n组插件 ###########################
return function_plugins
def get_multiplex_button_functions():
"""多路复用主提交按钮的功能映射
"""
return {
"常规对话":
"",
"蚂小财测试":
"蚂小财测试", # 映射到上面的 `询问多个GPT模型` 插件
}
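# 示意:注册新插件只需向 function_plugins 字典增加一项(插件名与入口函数均为假设,
# Group/AsButton/Color 等字段的缺省值见上方的默认值设置逻辑):
#
#     from crazy_functions.我的插件 import 我的插件主函数
#     function_plugins.update({
#         "我的插件(示例)": {
#             "Group": "对话",    # 插件分组,缺省即为"对话"
#             "Color": "stop",    # 按钮颜色,缺省为 secondary
#             "AsButton": False,  # False 表示仅出现在下拉菜单中
#             "Function": HotReload(我的插件主函数),  # HotReload 包装以支持热更新
#         }
#     })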


@@ -1,9 +0,0 @@
from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicState
@CatchException
def AntFinTest(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
chatbot.append(("AntFin Test", "AntFin Test"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@@ -1,43 +0,0 @@
from toolbox import get_conf, update_ui
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from crazy_functions.AntFin import AntFinTest
class ImageGen_Wrap(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="输入图片描述", description="需要生成图像的文本描述,尽量使用英文", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"model_name":
ArgProperty(title="模型", options=["DALLE2", "DALLE3"], default_value="DALLE3", description="", type="dropdown").model_dump_json(),
"resolution":
ArgProperty(title="分辨率", options=["256x256(限DALLE2)", "512x512(限DALLE2)", "1024x1024", "1792x1024(限DALLE3)", "1024x1792(限DALLE3)"], default_value="1024x1024", description="", type="dropdown").model_dump_json(),
"quality (仅DALLE3生效)":
ArgProperty(title="质量", options=["standard", "hd"], default_value="standard", description="", type="dropdown").model_dump_json(),
"style (仅DALLE3生效)":
ArgProperty(title="风格", options=["vivid", "natural"], default_value="vivid", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
yield from AntFinTest(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)


@@ -1,23 +0,0 @@
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate
from toolbox import report_exception, get_log_folder, update_ui_lastest_msg, Singleton
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from crazy_functions.agent_fns.general import AutoGenGeneral
class AutoGenMath(AutoGenGeneral):
def define_agents(self):
from autogen import AssistantAgent, UserProxyAgent
return [
{
"name": "assistant", # name of the agent.
"cls": AssistantAgent, # class of the agent.
},
{
"name": "user_proxy", # name of the agent.
"cls": UserProxyAgent, # class of the agent.
"human_input_mode": "ALWAYS", # always ask for human input.
"llm_config": False, # disables llm-based auto reply.
},
]


@@ -1,20 +0,0 @@
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from loguru import logger
class EchoDemo(PluginMultiprocessManager):
def subprocess_worker(self, child_conn):
# ⭐⭐ 子进程
self.child_conn = child_conn
while True:
msg = self.child_conn.recv() # PipeCom
if msg.cmd == "user_input":
# wait for further user input
self.child_conn.send(PipeCom("show", msg.content))
wait_success = self.subprocess_worker_wait_user_feedback(wait_msg="我准备好处理下一个问题了.")
if not wait_success:
# wait timeout, terminate this subprocess_worker
break
elif msg.cmd == "terminate":
self.child_conn.send(PipeCom("done", ""))
break
logger.info('[debug] subprocess_worker terminated')


@@ -1,138 +0,0 @@
from toolbox import trimmed_format_exc, get_conf, ProxyNetworkActivate
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from request_llms.bridge_all import predict_no_ui_long_connection
import time
def gpt_academic_generate_oai_reply(
self,
messages,
sender,
config,
):
llm_config = self.llm_config if config is None else config
if llm_config is False:
return False, None
if messages is None:
messages = self._oai_messages[sender]
inputs = messages[-1]['content']
history = []
for message in messages[:-1]:
history.append(message['content'])
context=messages[-1].pop("context", None)
assert context is None, "预留参数 context 未实现"
reply = predict_no_ui_long_connection(
inputs=inputs,
llm_kwargs=llm_config,
history=history,
sys_prompt=self._oai_system_message[0]['content'],
console_slience=True
)
assumed_done = reply.endswith('\nTERMINATE')
return True, reply
class AutoGenGeneral(PluginMultiprocessManager):
def gpt_academic_print_override(self, user_proxy, message, sender):
# ⭐⭐ run in subprocess
try:
print_msg = sender.name + "\n\n---\n\n" + message["content"]
except:
print_msg = sender.name + "\n\n---\n\n" + message
self.child_conn.send(PipeCom("show", print_msg))
def gpt_academic_get_human_input(self, user_proxy, message):
# ⭐⭐ run in subprocess
patience = 300
begin_waiting_time = time.time()
self.child_conn.send(PipeCom("interact", message))
while True:
time.sleep(0.5)
if self.child_conn.poll():
wait_success = True
break
if time.time() - begin_waiting_time > patience:
self.child_conn.send(PipeCom("done", ""))
wait_success = False
break
if wait_success:
return self.child_conn.recv().content
else:
raise TimeoutError("等待用户输入超时")
def define_agents(self):
raise NotImplementedError
def exe_autogen(self, input):
# ⭐⭐ run in subprocess
input = input.content
code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
agents = self.define_agents()
user_proxy = None
assistant = None
for agent_kwargs in agents:
agent_cls = agent_kwargs.pop('cls')
kwargs = {
'llm_config':self.llm_kwargs,
'code_execution_config':code_execution_config
}
kwargs.update(agent_kwargs)
agent_handle = agent_cls(**kwargs)
agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
for d in agent_handle._reply_func_list:
if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
d['reply_func'] = gpt_academic_generate_oai_reply
if agent_kwargs['name'] == 'user_proxy':
agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
user_proxy = agent_handle
if agent_kwargs['name'] == 'assistant': assistant = agent_handle
try:
if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
with ProxyNetworkActivate("AutoGen"):
user_proxy.initiate_chat(assistant, message=input)
except Exception as e:
tb_str = '```\n' + trimmed_format_exc() + '```'
self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))
def subprocess_worker(self, child_conn):
# ⭐⭐ run in subprocess
self.child_conn = child_conn
while True:
msg = self.child_conn.recv() # PipeCom
self.exe_autogen(msg)
class AutoGenGroupChat(AutoGenGeneral):
def exe_autogen(self, input):
# ⭐⭐ run in subprocess
import autogen
input = input.content
with ProxyNetworkActivate("AutoGen"):
code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
agents = self.define_agents()
agents_instances = []
for agent_kwargs in agents:
agent_cls = agent_kwargs.pop("cls")
kwargs = {"code_execution_config": code_execution_config}
kwargs.update(agent_kwargs)
agent_handle = agent_cls(**kwargs)
agent_handle._print_received_message = lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
agents_instances.append(agent_handle)
if agent_kwargs["name"] == "user_proxy":
user_proxy = agent_handle
user_proxy.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
try:
groupchat = autogen.GroupChat(agents=agents_instances, messages=[], max_round=50)
manager = autogen.GroupChatManager(groupchat=groupchat, **self.define_group_chat_manager_config())
manager._print_received_message = lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
manager.get_human_input = lambda a: self.gpt_academic_get_human_input(manager, a)
if user_proxy is None:
raise Exception("user_proxy is not defined")
user_proxy.initiate_chat(manager, message=input)
except Exception:
tb_str = "```\n" + trimmed_format_exc() + "```"
self.child_conn.send(PipeCom("done", "AutoGen exe failed: \n\n" + tb_str))
def define_group_chat_manager_config(self):
raise NotImplementedError
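# 示意:继承 AutoGenGroupChat 时,需要实现 define_agents 与 define_group_chat_manager_config,
# 写法可以参考前文的 AutoGenMath(以下代理与配置均为假设):
#
#     class DemoGroupChat(AutoGenGroupChat):
#         def define_agents(self):
#             from autogen import AssistantAgent, UserProxyAgent
#             return [
#                 {"name": "assistant", "cls": AssistantAgent},
#                 {"name": "user_proxy", "cls": UserProxyAgent,
#                  "human_input_mode": "ALWAYS", "llm_config": False},
#             ]
#         def define_group_chat_manager_config(self):
#             return {"llm_config": self.llm_kwargs}  # 假设:沿用实例的 llm_kwargs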


@@ -1,16 +0,0 @@
from toolbox import Singleton
@Singleton
class GradioMultiuserManagerForPersistentClasses():
def __init__(self):
self.mapping = {}
def already_alive(self, key):
return (key in self.mapping) and (self.mapping[key].is_alive())
def set(self, key, x):
self.mapping[key] = x
return self.mapping[key]
def get(self, key):
return self.mapping[key]
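# 用法示意:该单例管理器按 key 复用仍然存活的实例(key 的组成与实例类型均为假设,
# 实例需提供 is_alive() 接口):
#
#     manager = GradioMultiuserManagerForPersistentClasses()
#     key = "user-123|autogen"
#     if manager.already_alive(key):
#         worker = manager.get(key)               # 复用仍在运行的实例
#     else:
#         worker = manager.set(key, NewWorker())  # 新建并登记(NewWorker 为假设的类)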


@@ -1,195 +0,0 @@
from toolbox import get_log_folder, update_ui, gen_time_str, get_conf, promote_file_to_downloadzone
from crazy_functions.agent_fns.watchdog import WatchDog
from loguru import logger
import time, os
class PipeCom:
def __init__(self, cmd, content) -> None:
self.cmd = cmd
self.content = content
class PluginMultiprocessManager:
def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# ⭐ run in main process
self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
self.previous_work_dir_files = {}
self.llm_kwargs = llm_kwargs
self.plugin_kwargs = plugin_kwargs
self.chatbot = chatbot
self.history = history
self.system_prompt = system_prompt
# self.user_request = user_request
self.alive = True
self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
self.last_user_input = ""
# create a thread to monitor self.heartbeat, terminate the instance if no heartbeat for a long time
timeout_seconds = 5 * 60
self.heartbeat_watchdog = WatchDog(timeout=timeout_seconds, bark_fn=self.terminate, interval=5)
self.heartbeat_watchdog.begin_watch()
def feed_heartbeat_watchdog(self):
# feed this `dog`, so the dog will not `bark` (bark_fn will terminate the instance)
self.heartbeat_watchdog.feed()
def is_alive(self):
return self.alive
def launch_subprocess_with_pipe(self):
# ⭐ run in main process
from multiprocessing import Process, Pipe
parent_conn, child_conn = Pipe()
self.p = Process(target=self.subprocess_worker, args=(child_conn,))
self.p.daemon = True
self.p.start()
return parent_conn
def terminate(self):
self.p.terminate()
self.alive = False
logger.info("[debug] instance terminated")
def subprocess_worker(self, child_conn):
# ⭐⭐ run in subprocess
raise NotImplementedError
def send_command(self, cmd):
# ⭐ run in main process
repeated = False
if cmd == self.last_user_input:
repeated = True
cmd = ""
else:
self.last_user_input = cmd
self.parent_conn.send(PipeCom("user_input", cmd))
return repeated, cmd
def immediate_showoff_when_possible(self, fp):
# ⭐ 主进程
# 获取fp的拓展名
file_type = fp.split('.')[-1]
# 如果是文本文件, 则直接显示文本内容
if file_type.lower() in ['png', 'jpg']:
image_path = os.path.abspath(fp)
self.chatbot.append([
'检测到新生图像:',
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
])
yield from update_ui(chatbot=self.chatbot, history=self.history)
def overwatch_workdir_file_change(self):
# ⭐ 主进程 Docker 外挂文件夹监控
path_to_overwatch = self.autogen_work_dir
change_list = []
# 扫描路径下的所有文件, 并与self.previous_work_dir_files中所记录的文件进行对比,
# 如果有新文件出现,或者文件的修改时间发生变化,则更新self.previous_work_dir_files中
# 把新文件和发生变化的文件的路径记录到 change_list 中
for root, dirs, files in os.walk(path_to_overwatch):
for file in files:
file_path = os.path.join(root, file)
if file_path not in self.previous_work_dir_files.keys():
last_modified_time = os.stat(file_path).st_mtime
self.previous_work_dir_files.update({file_path: last_modified_time})
change_list.append(file_path)
else:
last_modified_time = os.stat(file_path).st_mtime
if last_modified_time != self.previous_work_dir_files[file_path]:
self.previous_work_dir_files[file_path] = last_modified_time
change_list.append(file_path)
if len(change_list) > 0:
file_links = ""
for f in change_list:
res = promote_file_to_downloadzone(f)
file_links += f'<br/><a href="file={res}" target="_blank">{res}</a>'
yield from self.immediate_showoff_when_possible(f)
self.chatbot.append(['检测到新生文档.', f'文档清单如下: {file_links}'])
yield from update_ui(chatbot=self.chatbot, history=self.history)
return change_list
def main_process_ui_control(self, txt, create_or_resume) -> str:
# ⭐ 主进程
if create_or_resume == 'create':
self.cnt = 1
self.parent_conn = self.launch_subprocess_with_pipe() # ⭐⭐⭐
repeated, cmd_to_autogen = self.send_command(txt)
if txt == 'exit':
self.chatbot.append([f"结束", "结束信号已明确,终止AutoGen程序。"])
yield from update_ui(chatbot=self.chatbot, history=self.history)
self.terminate()
return "terminate"
# patience = 10
while True:
time.sleep(0.5)
if not self.alive:
# the heartbeat watchdog might have it killed
self.terminate()
return "terminate"
if self.parent_conn.poll():
self.feed_heartbeat_watchdog()
if "[GPT-Academic] 等待中" in self.chatbot[-1][-1]:
self.chatbot.pop(-1) # remove the last line
if "等待您的进一步指令" in self.chatbot[-1][-1]:
self.chatbot.pop(-1) # remove the last line
if '[GPT-Academic] 等待中' in self.chatbot[-1][-1]:
self.chatbot.pop(-1) # remove the last line
msg = self.parent_conn.recv() # PipeCom
if msg.cmd == "done":
self.chatbot.append([f"结束", msg.content])
self.cnt += 1
yield from update_ui(chatbot=self.chatbot, history=self.history)
self.terminate()
break
if msg.cmd == "show":
yield from self.overwatch_workdir_file_change()
notice = ""
if repeated: notice = "(自动忽略重复的输入)"
self.chatbot.append([f"运行阶段-{self.cnt}(上次用户反馈输入为: 「{cmd_to_autogen}{notice}", msg.content])
self.cnt += 1
yield from update_ui(chatbot=self.chatbot, history=self.history)
if msg.cmd == "interact":
yield from self.overwatch_workdir_file_change()
self.chatbot.append([f"程序抵达用户反馈节点.", msg.content +
"\n\n等待您的进一步指令." +
"\n\n(1) 一般情况下您不需要说什么, 清空输入区, 然后直接点击“提交”以继续. " +
"\n\n(2) 如果您需要补充些什么, 输入要反馈的内容, 直接点击“提交”以继续. " +
"\n\n(3) 如果您想终止程序, 输入exit, 直接点击“提交”以终止AutoGen并解锁. "
])
yield from update_ui(chatbot=self.chatbot, history=self.history)
# do not terminate here, leave the subprocess_worker instance alive
return "wait_feedback"
else:
self.feed_heartbeat_watchdog()
if '[GPT-Academic] 等待中' not in self.chatbot[-1][-1]:
# begin_waiting_time = time.time()
self.chatbot.append(["[GPT-Academic] 等待AutoGen执行结果 ...", "[GPT-Academic] 等待中"])
self.chatbot[-1] = [self.chatbot[-1][0], self.chatbot[-1][1].replace("[GPT-Academic] 等待中", "[GPT-Academic] 等待中.")]
yield from update_ui(chatbot=self.chatbot, history=self.history)
# if time.time() - begin_waiting_time > patience:
# self.chatbot.append([f"结束", "等待超时, 终止AutoGen程序。"])
# yield from update_ui(chatbot=self.chatbot, history=self.history)
# self.terminate()
# return "terminate"
self.terminate()
return "terminate"
def subprocess_worker_wait_user_feedback(self, wait_msg="wait user feedback"):
# ⭐⭐ run in subprocess
patience = 5 * 60
begin_waiting_time = time.time()
self.child_conn.send(PipeCom("interact", wait_msg))
while True:
time.sleep(0.5)
if self.child_conn.poll():
wait_success = True
break
if time.time() - begin_waiting_time > patience:
self.child_conn.send(PipeCom("done", ""))
wait_success = False
break
return wait_success


@@ -1,457 +0,0 @@
import datetime
import re
import os
from loguru import logger
from textwrap import dedent
from toolbox import CatchException, update_ui
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
# TODO: 解决缩进问题
find_function_end_prompt = '''
Below is a page of code that you need to read. This page may not be complete yet; your job is to split this page into separate functions, class methods, etc.
- Provide the line number where the first visible function ends.
- Provide the line number where the next visible function begins.
- If there are no other functions in this page, you should simply return the line number of the last line.
- Only focus on functions declared with the `def` keyword. Ignore inline functions. Ignore function calls.
------------------ Example ------------------
INPUT:
```
L0000 |import sys
L0001 |import re
L0002 |
L0003 |def trimmed_format_exc():
L0004 | import os
L0005 | import traceback
L0006 | str = traceback.format_exc()
L0007 | current_path = os.getcwd()
L0008 | replace_path = "."
L0009 | return str.replace(current_path, replace_path)
L0010 |
L0011 |
L0012 |def trimmed_format_exc_markdown():
L0013 | ...
L0014 | ...
```
OUTPUT:
```
<first_function_end_at>L0009</first_function_end_at>
<next_function_begin_from>L0012</next_function_begin_from>
```
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ------------------
```
{THE_TAGGED_CODE}
```
'''
revise_funtion_prompt = '''
You need to read the following code, and revise the source code ({FILE_BASENAME}) according to the following instructions:
1. You should analyze the purpose of the functions (if there are any).
2. You need to add docstrings for the provided functions (if there are any).
Be aware:
1. You must NOT modify the indent of the code.
2. You are NOT authorized to change or translate non-comment code, and you are NOT authorized to add empty lines or toggle quotation marks either.
3. Use {LANG} to add comments and docstrings. Do NOT translate Chinese that is already in the code.
4. Besides adding a docstring, use the ⭐ symbol to annotate the most core and important line of code within the function, explaining its role.
------------------ Example ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 | t = gen_time_str()
L0003 | zip_folder(folder, get_log_folder(), f"result.zip")
L0004 | return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```
OUTPUT:
<instruction_1_purpose>
This function compresses a given folder, and return the path of the resulting `zip` file.
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
"""
Compresses the specified folder into a zip file and stores it in the log folder.
Args:
folder (str): The path to the folder that needs to be compressed.
Returns:
str: The path to the created zip file in the log folder.
"""
t = gen_time_str()
zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ Execute the zipping of folder
return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''
revise_funtion_prompt_chinese = '''
您需要阅读以下代码,并根据以下说明修订源代码({FILE_BASENAME}):
1. 如果源代码中包含函数的话, 你应该分析给定函数实现了什么功能
2. 如果源代码中包含函数的话, 你需要为函数添加docstring, docstring必须使用中文
请注意:
1. 你不得修改代码的缩进
2. 你无权更改或翻译代码中的非注释部分,也不允许添加空行
3. 使用 {LANG} 添加注释和文档字符串。不要翻译代码中已有的中文
4. 除了添加docstring之外, 使用⭐符号给该函数中最核心、最重要的一行代码添加注释,并说明其作用
------------------ 示例 ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 | t = gen_time_str()
L0003 | zip_folder(folder, get_log_folder(), f"result.zip")
L0004 | return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```
OUTPUT:
<instruction_1_purpose>
该函数用于压缩指定文件夹,并返回生成的`zip`文件的路径。
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
"""
该函数将指定的文件夹压缩成ZIP文件, 并将其存储在日志文件夹中。
输入参数:
folder (str): 需要压缩的文件夹的路径。
返回值:
str: 日志文件夹中创建的ZIP文件的路径。
"""
t = gen_time_str()
zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ 执行文件夹的压缩
return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''
class PythonCodeComment():
def __init__(self, llm_kwargs, plugin_kwargs, language, observe_window_update) -> None:
self.original_content = ""
self.full_context = []
self.full_context_with_line_no = []
self.current_page_start = 0
self.page_limit = 100 # 100 lines of code each page
self.ignore_limit = 20
self.llm_kwargs = llm_kwargs
self.plugin_kwargs = plugin_kwargs
self.language = language
self.observe_window_update = observe_window_update
if self.language == "chinese":
self.core_prompt = revise_funtion_prompt_chinese
else:
self.core_prompt = revise_funtion_prompt
self.path = None
self.file_basename = None
self.file_brief = ""
def generate_tagged_code_from_full_context(self):
for i, code in enumerate(self.full_context):
number = i
padded_number = f"{number:04}"
result = f"L{padded_number}"
self.full_context_with_line_no.append(f"{result} | {code}")
return self.full_context_with_line_no
def read_file(self, path, brief):
with open(path, 'r', encoding='utf8') as f:
self.full_context = f.readlines()
self.original_content = ''.join(self.full_context)
self.file_basename = os.path.basename(path)
self.file_brief = brief
self.full_context_with_line_no = self.generate_tagged_code_from_full_context()
self.path = path
def find_next_function_begin(self, tagged_code:list, begin_and_end):
begin, end = begin_and_end
THE_TAGGED_CODE = ''.join(tagged_code)
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=find_function_end_prompt.format(THE_TAGGED_CODE=THE_TAGGED_CODE),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def extract_number(text):
# 使用正则表达式匹配模式
match = re.search(r'<next_function_begin_from>L(\d+)</next_function_begin_from>', text)
if match:
# 提取匹配的数字部分并转换为整数
return int(match.group(1))
return None
line_no = extract_number(result)
if line_no is not None:
return line_no
else:
return end
def _get_next_window(self):
#
current_page_start = self.current_page_start
if self.current_page_start == len(self.full_context) + 1:
raise StopIteration
# 如果剩余的行数非常少,一鼓作气处理掉
if len(self.full_context) - self.current_page_start < self.ignore_limit:
future_page_start = len(self.full_context) + 1
self.current_page_start = future_page_start
return current_page_start, future_page_start
tagged_code = self.full_context_with_line_no[ self.current_page_start: self.current_page_start + self.page_limit]
line_no = self.find_next_function_begin(tagged_code, [self.current_page_start, self.current_page_start + self.page_limit])
if line_no > len(self.full_context) - 5:
line_no = len(self.full_context) + 1
future_page_start = line_no
self.current_page_start = future_page_start
# ! consider eof
return current_page_start, future_page_start
def dedent(self, text):
"""Remove any common leading whitespace from every line in `text`.
"""
# Look for the longest leading string of spaces and tabs common to
# all lines.
margin = None
_whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
_leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
text = _whitespace_only_re.sub('', text)
indents = _leading_whitespace_re.findall(text)
for indent in indents:
if margin is None:
margin = indent
# Current line more deeply indented than previous winner:
# no change (previous winner is still on top).
elif indent.startswith(margin):
pass
# Current line consistent with and no deeper than previous winner:
# it's the new winner.
elif margin.startswith(indent):
margin = indent
# Find the largest common whitespace between current line and previous
# winner.
else:
for i, (x, y) in enumerate(zip(margin, indent)):
if x != y:
margin = margin[:i]
break
# sanity check (testing/debugging only)
if 0 and margin:
for line in text.split("\n"):
assert not line or line.startswith(margin), \
"line = %r, margin = %r" % (line, margin)
if margin:
text = re.sub(r'(?m)^' + margin, '', text)
return text, len(margin)
else:
return text, 0
def get_next_batch(self):
current_page_start, future_page_start = self._get_next_window()
return ''.join(self.full_context[current_page_start: future_page_start]), current_page_start, future_page_start
def tag_code(self, fn, hint):
code = fn
_, n_indent = self.dedent(code)
indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preseve them in the OUTPUT.)"
brief_reminder = "" if self.file_brief == "" else f"({self.file_basename} abstract: {self.file_brief})"
hint_reminder = "" if hint is None else f"(Reminder: do not ignore or modify code such as `{hint}`, provide complete code in the OUTPUT.)"
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=self.core_prompt.format(
LANG=self.language,
FILE_BASENAME=self.file_basename,
THE_CODE=code,
INDENT_REMINDER=indent_reminder,
BRIEF_REMINDER=brief_reminder,
HINT_REMINDER=hint_reminder
),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return matches[0].removeprefix('python') # 去除代码块开头的语言标记;strip('python') 会误删首尾的 p/y/t/h/o/n 字符,故改用 removeprefix(Python 3.9+)
return None
code_block = get_code_block(result)
if code_block is not None:
code_block = self.sync_and_patch(original=code, revised=code_block)
return code_block
else:
return code
def get_markdown_block_in_html(self, html):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
found_list = soup.find_all("div", class_="markdown-body")
if found_list:
res = found_list[0]
return res.prettify()
else:
return None
def sync_and_patch(self, original, revised):
"""Ensure the number of pre-string empty lines in revised matches those in original."""
def count_leading_empty_lines(s, reverse=False):
"""Count the number of leading empty lines in a string."""
lines = s.split('\n')
if reverse: lines = list(reversed(lines))
count = 0
for line in lines:
if line.strip() == '':
count += 1
else:
break
return count
original_empty_lines = count_leading_empty_lines(original)
revised_empty_lines = count_leading_empty_lines(revised)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = additional_lines + revised
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[revised_empty_lines - original_empty_lines:])
original_empty_lines = count_leading_empty_lines(original, reverse=True)
revised_empty_lines = count_leading_empty_lines(revised, reverse=True)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = revised + additional_lines
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[:-(revised_empty_lines - original_empty_lines)])
return revised
def begin_comment_source_code(self, chatbot=None, history=None):
# from toolbox import update_ui_lastest_msg
assert self.path is not None
assert '.py' in self.path # must be python source code
# write_target = self.path + '.revised.py'
write_content = ""
# with open(self.path + '.revised.py', 'w+', encoding='utf8') as f:
while True:
try:
# yield from update_ui_lastest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
next_batch, line_no_start, line_no_end = self.get_next_batch()
self.observe_window_update(f"正在处理{self.file_basename} - {line_no_start}/{len(self.full_context)}\n")
# yield from update_ui_lastest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)
hint = None
MAX_ATTEMPT = 2
for attempt in range(MAX_ATTEMPT):
result = self.tag_code(next_batch, hint)
try:
successful, hint = self.verify_successful(next_batch, result)
except Exception as e:
logger.error('ignored exception:\n' + str(e))
break
if successful:
break
if attempt == MAX_ATTEMPT - 1:
# cannot deal with this, give up
result = next_batch
break
# f.write(result)
write_content += result
except StopIteration:
next_batch, line_no_start, line_no_end = [], -1, -1
return None, write_content
def verify_successful(self, original, revised):
""" Determine whether the revised code contains every line that already exists
"""
from crazy_functions.ast_fns.comment_remove import remove_python_comments
original = remove_python_comments(original)
original_lines = original.split('\n')
revised_lines = revised.split('\n')
for l in original_lines:
l = l.strip()
if '\'' in l or '\"' in l: continue # ast sometimes toggle " to '
found = False
for lt in revised_lines:
if l in lt:
found = True
break
if not found:
return False, l
return True, None
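# 用法示意:PythonCodeComment 的整体调用流程(llm_kwargs 等参数为假设,
# 需要可用的模型配置才能真正运行):
#
#     pcc = PythonCodeComment(llm_kwargs=llm_kwargs, plugin_kwargs={},
#                             language="chinese", observe_window_update=print)
#     pcc.read_file("some_source.py", brief="该文件的功能简介")  # 路径与简介为假设
#     _, revised = pcc.begin_comment_source_code()
#     with open("some_source.revised.py", "w", encoding="utf8") as f:
#         f.write(revised)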


@@ -1,45 +0,0 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<style>ADVANCED_CSS</style>
<meta charset="UTF-8">
<title>源文件对比</title>
<style>
body {
font-family: Arial, sans-serif;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
}
.container {
display: flex;
width: 95%;
height: -webkit-fill-available;
}
.code-container {
flex: 1;
margin: 0px;
padding: 0px;
border: 1px solid #ccc;
background-color: #f9f9f9;
overflow: auto;
}
pre {
white-space: pre-wrap;
word-wrap: break-word;
}
</style>
</head>
<body>
<div class="container">
<div class="code-container">
REPLACE_CODE_FILE_LEFT
</div>
<div class="code-container">
REPLACE_CODE_FILE_RIGHT
</div>
</div>
</body>
</html>


@@ -1,29 +0,0 @@
import threading, time
from loguru import logger
class WatchDog():
def __init__(self, timeout, bark_fn, interval=3, msg="") -> None:
self.last_feed = None
self.timeout = timeout
self.bark_fn = bark_fn
self.interval = interval
self.msg = msg
self.kill_dog = False
def watch(self):
while True:
if self.kill_dog: break
if time.time() - self.last_feed > self.timeout:
if len(self.msg) > 0: logger.info(self.msg)
self.bark_fn()
break
time.sleep(self.interval)
def begin_watch(self):
self.last_feed = time.time()
th = threading.Thread(target=self.watch)
th.daemon = True
th.start()
def feed(self):
self.last_feed = time.time()
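# 用法示意:启动监视后周期性 feed,一旦超过 timeout 未喂狗就触发 bark_fn
# (超时等取值为假设):
#
#     def on_timeout():
#         print("超时未喂狗,执行终止/清理逻辑")
#
#     dog = WatchDog(timeout=3, bark_fn=on_timeout, interval=1, msg="watchdog 触发")
#     dog.begin_watch()
#     for _ in range(3):
#         time.sleep(1)
#         dog.feed()      # 周期性喂狗,保持存活
#     time.sleep(5)       # 停止喂狗,超时后 bark_fn 被调用一次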


@@ -1,54 +0,0 @@
import token
import tokenize
import copy
import io
def remove_python_comments(input_source: str) -> str:
source_flag = copy.copy(input_source)
source = io.StringIO(input_source)
ls = input_source.split('\n')
prev_toktype = token.INDENT
readline = source.readline
def get_char_index(lineno, col):
# find the index of the char in the source code
if lineno == 1:
return len('\n'.join(ls[:(lineno-1)])) + col
else:
return len('\n'.join(ls[:(lineno-1)])) + col + 1
def replace_char_between(start_lineno, start_col, end_lineno, end_col, source, replace_char, ls):
# replace char between start_lineno, start_col and end_lineno, end_col with replace_char, but keep '\n' and ' '
b = get_char_index(start_lineno, start_col)
e = get_char_index(end_lineno, end_col)
for i in range(b, e):
if source[i] == '\n':
source = source[:i] + '\n' + source[i+1:]
elif source[i] == ' ':
source = source[:i] + ' ' + source[i+1:]
else:
source = source[:i] + replace_char + source[i+1:]
return source
tokgen = tokenize.generate_tokens(readline)
for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
if toktype == token.STRING and (prev_toktype == token.INDENT):
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
elif toktype == token.STRING and (prev_toktype == token.NEWLINE):
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
elif toktype == tokenize.COMMENT:
source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
prev_toktype = toktype
return source_flag
# example usage
if __name__ == "__main__":
with open("source.py", "r", encoding="utf-8") as f:
source_code = f.read()
cleaned_code = remove_python_comments(source_code)
with open("cleaned_source.py", "w", encoding="utf-8") as f:
f.write(cleaned_code)
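A quick illustration of the behaviour, assuming the snippet tokenizes cleanly: comments and docstrings are blanked with spaces rather than deleted, so every remaining token keeps its original line and column (useful when mapping positions back to the source):

```python
src = 'x = 1  # set x\ndef f():\n    """doc"""\n    return x\n'
out = remove_python_comments(src)
assert '# set x' not in out and '"""doc"""' not in out
assert len(out) == len(src)                   # layout is untouched
assert out.split('\n')[0].startswith('x = 1')
```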


@@ -1,153 +1,19 @@
import os
import threading
from loguru import logger
from shared_utils.char_visual_effect import scolling_visual_effect
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
import traceback
def input_clipping(inputs, history, max_token_limit, return_clip_flags=False):
"""
当输入文本 + 历史文本超出最大限制时,采取措施丢弃一部分文本。
输入:
- inputs 本次请求
- history 历史上下文
- max_token_limit 最大token限制
输出:
- inputs 本次请求经过clip
- history 历史上下文经过clip
"""
import numpy as np
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
mode = 'input-and-history'
# 当 输入部分的token占比 小于 全文的一半时,只裁剪历史
input_token_num = get_token_num(inputs)
original_input_len = len(inputs)
if input_token_num < max_token_limit//2:
mode = 'only-history'
max_token_limit = max_token_limit - input_token_num
everything = [inputs] if mode == 'input-and-history' else ['']
everything.extend(history)
full_token_num = n_token = get_token_num('\n'.join(everything))
everything_token = [get_token_num(e) for e in everything]
everything_token_num = sum(everything_token)
delta = max(everything_token) // 16 # 截断时的颗粒度
while n_token > max_token_limit:
where = np.argmax(everything_token)
encoded = enc.encode(everything[where], disallowed_special=())
clipped_encoded = encoded[:len(encoded)-delta]
everything[where] = enc.decode(clipped_encoded)[:-1] # drop the trailing char, which may be a broken partial token
everything_token[where] = get_token_num(everything[where])
n_token = get_token_num('\n'.join(everything))
if mode == 'input-and-history':
inputs = everything[0]
full_token_num = everything_token_num
else:
full_token_num = everything_token_num + input_token_num
history = everything[1:]
flags = {
"mode": mode,
"original_input_token_num": input_token_num,
"original_full_token_num": full_token_num,
"original_input_len": original_input_len,
"clipped_input_len": len(inputs),
}
if not return_clip_flags:
return inputs, history
else:
return inputs, history, flags
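A sketch of a typical call (the texts are illustrative; internally the function counts tokens with the gpt-3.5-turbo tokenizer from request_llms.bridge_all):

```python
inputs = "please summarize the following document ..."
history = ["round 1 ...", "round 2 ...", "a very long round 3 ..."]
# keep the combined prompt under ~1024 tokens; the longest pieces are
# trimmed first, and the current input is only clipped when it dominates
inputs, history = input_clipping(inputs, history, max_token_limit=1024)
```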
def request_gpt_model_in_new_thread_with_ui_alive(
inputs, inputs_show_user, llm_kwargs,
chatbot, history, sys_prompt, refresh_interval=0.2,
handle_token_exceed=True,
retry_times_at_unknown_error=2,
):
"""
Request GPT model,请求GPT模型同时维持用户界面活跃。
输入参数 Args 以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行:
inputs (string): List of inputs (输入)
inputs_show_user (string): List of inputs to show user展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性
top_p (float): Top p value for sampling from model distribution GPT参数,浮点数
temperature (float): Temperature value for sampling from model distributionGPT参数,浮点数
chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化)
history (list): List of chat history (历史,对话历史列表)
sys_prompt (string): List of system prompts 系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样
refresh_interval (float, optional): Refresh interval for UI (default: 0.2) 刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果
handle_token_exceed是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启
retry_times_at_unknown_error失败时的重试次数
输出 Returns:
future: 输出,GPT返回的结果
"""
def request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, top_p, temperature, chatbot, history, sys_prompt, refresh_interval=0.2):
import time
from concurrent.futures import ThreadPoolExecutor
from request_llms.bridge_all import predict_no_ui_long_connection
from request_llm.bridge_chatgpt import predict_no_ui_long_connection
# 用户反馈
chatbot.append([inputs_show_user, ""])
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
msg = '正常'
yield chatbot, [], msg
executor = ThreadPoolExecutor(max_workers=16)
mutable = ["", time.time(), ""]
# 看门狗耐心
watch_dog_patience = 5
# 请求任务
def _req_gpt(inputs, history, sys_prompt):
retry_op = retry_times_at_unknown_error
exceeded_cnt = 0
while True:
# watchdog error
if len(mutable) >= 2 and (time.time()-mutable[1]) > watch_dog_patience:
raise RuntimeError("检测到程序终止。")
try:
# 【第一种情况】:顺利完成
result = predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs,
history=history, sys_prompt=sys_prompt, observe_window=mutable)
return result
except ConnectionAbortedError as token_exceeded_error:
# 【第二种情况】Token溢出
if handle_token_exceed:
exceeded_cnt += 1
# 【选择处理】 尝试计算比例,尽可能多地保留文本
from toolbox import get_reduce_token_percent
p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
MAX_TOKEN = get_max_token(llm_kwargs)
EXCEED_ALLO = 512 + 512 * exceeded_cnt
inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数{n_exceed}\n\n'
continue # 返回重试
else:
# 【选择放弃】
tb_str = '```\n' + trimmed_format_exc() + '```'
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
return mutable[0] # 放弃
except:
# 【第三种情况】:其他错误:重试几次
tb_str = '```\n' + trimmed_format_exc() + '```'
logger.error(tb_str)
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if retry_op > 0:
retry_op -= 1
mutable[0] += f"[Local Message] 重试中,请稍等 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}\n\n"
if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
time.sleep(30)
time.sleep(5)
continue # 返回重试
else:
time.sleep(5)
return mutable[0] # 放弃
# 提交任务
future = executor.submit(_req_gpt, inputs, history, sys_prompt)
mutable = ["", time.time()]
future = executor.submit(lambda:
predict_no_ui_long_connection(
inputs=inputs, top_p=top_p, temperature=temperature, history=history, sys_prompt=sys_prompt, observe_window=mutable)
)
while True:
# yield一次以刷新前端页面
time.sleep(refresh_interval)
@@ -156,169 +22,50 @@ def request_gpt_model_in_new_thread_with_ui_alive(
if future.done():
break
chatbot[-1] = [chatbot[-1][0], mutable[0]]
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
msg = "正常"
yield chatbot, [], msg
return future.result()
final_result = future.result()
chatbot[-1] = [chatbot[-1][0], final_result]
yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息
return final_result
def can_multi_process(llm) -> bool:
from request_llms.bridge_all import model_info
def default_condition(llm) -> bool:
# legacy condition
if llm.startswith('gpt-'): return True
if llm.startswith('chatgpt-'): return True
if llm.startswith('api2d-'): return True
if llm.startswith('azure-'): return True
if llm.startswith('spark'): return True
if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
return False
if llm in model_info:
if 'can_multi_thread' in model_info[llm]:
return model_info[llm]['can_multi_thread']
else:
return default_condition(llm)
else:
return default_condition(llm)
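In other words, an explicit can_multi_thread flag in the model_info registry always wins; only unknown or unflagged models fall back to the prefix rule. Illustrative calls (actual results depend on the registry contents):

```python
can_multi_process("gpt-3.5-turbo")  # True via the gpt- prefix rule, unless the registry overrides it
can_multi_process("chatglm")        # typically False: no matching prefix, so it runs single-threaded
```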
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_show_user_array, llm_kwargs,
chatbot, history_array, sys_prompt_array,
refresh_interval=0.2, max_workers=-1, scroller_max_len=75,
handle_token_exceed=True, show_user_at_complete=False,
retry_times_at_unknown_error=2,
):
"""
Request GPT model using multiple threads with UI and high efficiency
请求GPT模型的[多线程]版。
具备以下功能:
实时在UI上反馈远程数据流
使用线程池,可调节线程池的大小避免openai的流量限制错误
处理中途中止的情况
网络等出问题时,会把traceback和已经接收的数据转入输出
输入参数 Args 以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行:
inputs_array (list): List of inputs (每个子任务的输入)
inputs_show_user_array (list): List of inputs to show user每个子任务展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性
llm_kwargs: llm_kwargs参数
chatbot: chatbot (用户界面对话窗口句柄,用于数据流可视化)
history_array (list): List of chat history (历史对话输入,双层列表,第一层列表是子任务分解,第二层列表是对话历史)
sys_prompt_array (list): List of system prompts 系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样
refresh_interval (float, optional): Refresh interval for UI (default: 0.2) 刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果
max_workers (int, optional): Maximum number of threads (default: see config.py) 最大线程数,如果子任务非常多,需要用此选项防止高频地请求openai导致错误
scroller_max_len (int, optional): Maximum length for scroller (default: 30)(数据流的显示最后收到的多少个字符,仅仅服务于视觉效果)
handle_token_exceed (bool, optional): (是否在输入过长时,自动缩减文本)
handle_token_exceed是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启
show_user_at_complete (bool, optional): (在结束时,把完整输入-输出结果显示在聊天框)
retry_times_at_unknown_error子任务失败时的重试次数
输出 Returns:
list: List of GPT model responses 每个子任务的输出汇总,如果某个子任务出错,response中会携带traceback报错信息,方便调试和定位问题。
"""
import time, random
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(inputs_array, inputs_show_user_array, top_p, temperature, chatbot, history_array, sys_prompt_array, refresh_interval=0.2, max_workers=10, scroller_max_len=30):
import time
from concurrent.futures import ThreadPoolExecutor
from request_llms.bridge_all import predict_no_ui_long_connection
from request_llm.bridge_chatgpt import predict_no_ui_long_connection
assert len(inputs_array) == len(history_array)
assert len(inputs_array) == len(sys_prompt_array)
if max_workers == -1: # 读取配置文件
try: max_workers = get_conf('DEFAULT_WORKER_NUM')
except: max_workers = 8
if max_workers <= 0: max_workers = 3
# 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
if not can_multi_process(llm_kwargs['llm_model']):
max_workers = 1
executor = ThreadPoolExecutor(max_workers=max_workers)
n_frag = len(inputs_array)
# 用户反馈
chatbot.append(["请开始多线程操作。", ""])
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
# 跨线程传递
mutable = [["", time.time(), "等待中"] for _ in range(n_frag)]
msg = '正常'
yield chatbot, [], msg
# 异步原子
mutable = [["", time.time()] for _ in range(n_frag)]
# 看门狗耐心
watch_dog_patience = 5
# 子线程任务
def _req_gpt(index, inputs, history, sys_prompt):
gpt_say = ""
retry_op = retry_times_at_unknown_error
exceeded_cnt = 0
mutable[index][2] = "执行中"
detect_timeout = lambda: len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > watch_dog_patience
while True:
# watchdog error
if detect_timeout(): raise RuntimeError("检测到程序终止。")
try:
# 【第一种情况】:顺利完成
gpt_say = predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=history,
sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
)
mutable[index][2] = "已成功"
return gpt_say
except ConnectionAbortedError as token_exceeded_error:
# 【第二种情况】Token溢出
if handle_token_exceed:
exceeded_cnt += 1
# 【选择处理】 尝试计算比例,尽可能多地保留文本
from toolbox import get_reduce_token_percent
p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
MAX_TOKEN = get_max_token(llm_kwargs)
EXCEED_ALLO = 512 + 512 * exceeded_cnt
inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数{n_exceed}\n\n'
mutable[index][2] = f"截断重试"
continue # 返回重试
else:
# 【选择放弃】
tb_str = '```\n' + trimmed_format_exc() + '```'
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
mutable[index][2] = "输入过长已放弃"
return gpt_say # 放弃
except:
# 【第三种情况】:其他错误
if detect_timeout(): raise RuntimeError("检测到程序终止。")
tb_str = '```\n' + trimmed_format_exc() + '```'
logger.error(tb_str)
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
if retry_op > 0:
retry_op -= 1
wait = random.randint(5, 20)
if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
wait = wait * 3
fail_info = "OpenAI绑定信用卡可解除频率限制 "
else:
fail_info = ""
# 也许等待十几秒后,情况会好转
for i in range(wait):
mutable[index][2] = f"{fail_info}等待重试 {wait-i}"; time.sleep(1)
# 开始重试
if detect_timeout(): raise RuntimeError("检测到程序终止。")
mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}"
continue # 返回重试
else:
mutable[index][2] = "已失败"
wait = 5
time.sleep(5)
return gpt_say # 放弃
try:
gpt_say = predict_no_ui_long_connection(
inputs=inputs, top_p=top_p, temperature=temperature, history=history, sys_prompt=sys_prompt, observe_window=mutable[index]
)
except:
# 收拾残局
tb_str = '```\n' + traceback.format_exc() + '```'
gpt_say = f"[Local Message] 线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if len(mutable[index][0]) > 0:
gpt_say += "此线程失败前收到的回答:" + mutable[index][0]
return gpt_say
# 异步任务开始
futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
cnt = 0
while True:
# yield一次以刷新前端页面
time.sleep(refresh_interval)
cnt += 1
worker_done = [h.done() for h in futures]
if all(worker_done):
executor.shutdown()
break
# 更好的UI视觉效果
observe_win = []
# 每个线程都要“喂狗”(看门狗)
@@ -326,326 +73,87 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
mutable[thread_index][1] = time.time()
# 在前端打印些好玩的东西
for thread_index, _ in enumerate(worker_done):
print_something_really_funny = f"[ ...`{scolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
replace('\n', '').replace('```', '...').replace(
' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
observe_win.append(print_something_really_funny)
# 在前端打印些好玩的东西
stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
if not done else f'`{mutable[thread_index][2]}`\n\n'
for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
# 在前端打印些好玩的东西
chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
if all(worker_done):
executor.shutdown()
break
stat_str = ''.join([f'执行中: {obs}\n\n' if not done else '已完成\n\n' for done, obs in zip(
worker_done, observe_win)])
chatbot[-1] = [chatbot[-1][0],
f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
msg = "正常"
yield chatbot, [], msg
# 异步任务结束
gpt_response_collection = []
for inputs_show_user, f in zip(inputs_show_user_array, futures):
gpt_res = f.result()
gpt_response_collection.extend([inputs_show_user, gpt_res])
# 是否在结束时,在界面上显示结果
if show_user_at_complete:
for inputs_show_user, f in zip(inputs_show_user_array, futures):
gpt_res = f.result()
chatbot.append([inputs_show_user, gpt_res])
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
time.sleep(0.5)
return gpt_response_collection
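A hedged sketch of how a plugin typically drives this generator (the fragments are illustrative; chatbot and llm_kwargs come from the host plugin framework):

```python
responses = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
    inputs_array=[f"translate fragment {i}" for i in range(4)],
    inputs_show_user_array=[f"fragment {i}" for i in range(4)],
    llm_kwargs=llm_kwargs,
    chatbot=chatbot,
    history_array=[[] for _ in range(4)],
    sys_prompt_array=["You are a translator."] * 4,
)
# responses interleaves inputs and replies:
# [shown_input_0, reply_0, shown_input_1, reply_1, ...]
```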
def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
def cut(txt_tocut, must_break_at_empty_line): # 递归
if get_token_fn(txt_tocut) <= limit:
return [txt_tocut]
else:
lines = txt_tocut.split('\n')
estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
if lines[cnt] != "":
continue
print(cnt)
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
print('what the fuck ?')
raise RuntimeError("存在一行极长的文本!")
# print(len(post))
# 列表递归接龙
result = [prev]
result.extend(cut(post, must_break_at_empty_line))
return result
try:
return cut(txt, must_break_at_empty_line=True)
except RuntimeError:
return cut(txt, must_break_at_empty_line=False)
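The recursive cut() bisects on line boundaries, preferring blank lines and falling back to arbitrary lines when no paragraph break fits the budget. A toy invocation, with character count standing in for a tokenizer (note that very small limits can trip the "extremely long line" guard):

```python
get_token_fn = len                    # toy counter: one character == one "token"
text = "A" * 50 + "\n\n" + "B" * 50  # two 50-character paragraphs
pieces = breakdown_txt_to_satisfy_token_limit(text, get_token_fn, limit=70)
assert [len(p) for p in pieces] == [50, 51]   # split at the blank line
```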
def read_and_clean_pdf_text(fp):
"""
这个函数用于分割pdf,用了很多trick,逻辑较乱,效果奇好
**输入参数说明**
- `fp`需要读取和清理文本的pdf文件路径
**输出参数说明**
- `meta_txt`:清理后的文本内容字符串
- `page_one_meta`:第一页清理后的文本内容列表
**函数功能**
读取pdf文件并清理其中的文本内容,清理规则包括
- 提取所有块元的文本信息,并合并为一个字符串
- 去除短块字符数小于100并替换为回车符
- 清理多余的空行
- 合并小写字母开头的段落块并替换为空格
- 清除重复的换行
- 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔
"""
import fitz, copy
import re
import numpy as np
# from shared_utils.colorful import print亮黄, print亮绿
fc = 0 # Index 0 文本
fs = 1 # Index 1 字体
fb = 2 # Index 2 框框
REMOVE_FOOT_NOTE = True # whether to drop non-body content (set in a smaller font than the body text, e.g. references, footnotes, figure captions)
REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # below this fraction of the body font size, judge a line as non-body (some papers' body font is not 100% uniform, with variations invisible to the eye)
def primary_ffsize(l):
"""
提取文本块主字体
"""
fsize_statiscs = {}
for wtf in l['spans']:
if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
fsize_statiscs[wtf['size']] += len(wtf['text'])
return max(fsize_statiscs, key=fsize_statiscs.get)
def ffsize_same(a,b):
"""
提取字体大小是否近似相等
"""
return abs((a-b)/max(a,b)) < 0.02
with fitz.open(fp) as doc:
meta_txt = []
meta_font = []
meta_line = []
meta_span = []
############################## <Step 1: gather initial information> ##################################
for index, page in enumerate(doc):
# file_content += page.get_text()
text_areas = page.get_text("dict") # 获取页面上的文本信息
for t in text_areas['blocks']:
if 'lines' in t:
pf = 998
for l in t['lines']:
txt_line = "".join([wtf['text'] for wtf in l['spans']])
if len(txt_line) == 0: continue
pf = primary_ffsize(l)
meta_line.append([txt_line, pf, l['bbox'], l])
for wtf in l['spans']: # for l in t['lines']:
meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])])
# meta_line.append(["NEW_BLOCK", pf])
# block extraction: join the spans within each line, join the lines within each block, and merge words hyphenated across lines
meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
'- ', '') for t in text_areas['blocks'] if 'lines' in t])
meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']])
for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t])
if index == 0:
page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
'- ', '') for t in text_areas['blocks'] if 'lines' in t]
############################## <Step 2: determine the main body font> ##################################
def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
def cut(txt_tocut, must_break_at_empty_line): # 递归
if get_token_fn(txt_tocut) <= limit:
return [txt_tocut]
else:
lines = txt_tocut.split('\n')
estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
cnt = 0
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
if lines[cnt] != "":
continue
print(cnt)
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
# print('what the fuck ? 存在一行极长的文本!')
raise RuntimeError("存在一行极长的文本!")
# print(len(post))
# 列表递归接龙
result = [prev]
result.extend(cut(post, must_break_at_empty_line))
return result
try:
return cut(txt, must_break_at_empty_line=True)
except RuntimeError:
try:
fsize_statiscs = {}
for span in meta_span:
if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0
fsize_statiscs[span[1]] += span[2]
main_fsize = max(fsize_statiscs, key=fsize_statiscs.get)
if REMOVE_FOOT_NOTE:
give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
except:
raise RuntimeError(f'抱歉, 我们暂时无法解析此PDF文档: {fp}')
############################## <Step 3: split and regroup> ##################################
mega_sec = []
sec = []
for index, line in enumerate(meta_line):
if index == 0:
sec.append(line[fc])
continue
if REMOVE_FOOT_NOTE:
if meta_line[index][fs] <= give_up_fize_threshold:
continue
if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]):
# 尝试识别段落
if meta_line[index][fc].endswith('.') and\
(meta_line[index-1][fc] != 'NEW_BLOCK') and \
(meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7:
sec[-1] += line[fc]
sec[-1] += "\n\n"
else:
sec[-1] += " "
sec[-1] += line[fc]
else:
if (index+1 < len(meta_line)) and \
meta_line[index][fs] > main_fsize:
# 单行 + 字体大
mega_sec.append(copy.deepcopy(sec))
sec = []
sec.append("# " + line[fc])
else:
# 尝试识别section
if meta_line[index-1][fs] > meta_line[index][fs]:
sec.append("\n" + line[fc])
else:
sec.append(line[fc])
mega_sec.append(copy.deepcopy(sec))
finals = []
for ms in mega_sec:
final = " ".join(ms)
final = final.replace('- ', ' ')
finals.append(final)
meta_txt = finals
############################## <Step 4: miscellaneous post-processing> ##################################
def 把字符太少的块清除为回车(meta_txt):
for index, block_txt in enumerate(meta_txt):
if len(block_txt) < 100:
meta_txt[index] = '\n'
return meta_txt
meta_txt = 把字符太少的块清除为回车(meta_txt)
def 清理多余的空行(meta_txt):
for index in reversed(range(1, len(meta_txt))):
if meta_txt[index] == '\n' and meta_txt[index-1] == '\n':
meta_txt.pop(index)
return meta_txt
meta_txt = 清理多余的空行(meta_txt)
def 合并小写开头的段落块(meta_txt):
def starts_with_lowercase_word(s):
pattern = r"^[a-z]+"
match = re.match(pattern, s)
if match:
return True
else:
return False
# 对于某些PDF会有第一个段落就以小写字母开头,为了避免索引错误将其更改为大写
if starts_with_lowercase_word(meta_txt[0]):
meta_txt[0] = meta_txt[0].capitalize()
for _ in range(100):
for index, block_txt in enumerate(meta_txt):
if starts_with_lowercase_word(block_txt):
if meta_txt[index-1] != '\n':
meta_txt[index-1] += ' '
else:
meta_txt[index-1] = ''
meta_txt[index-1] += meta_txt[index]
meta_txt[index] = '\n'
return meta_txt
meta_txt = 合并小写开头的段落块(meta_txt)
meta_txt = 清理多余的空行(meta_txt)
meta_txt = '\n'.join(meta_txt)
# 清除重复的换行
for _ in range(5):
meta_txt = meta_txt.replace('\n\n', '\n')
# 换行 -> 双换行
meta_txt = meta_txt.replace('\n', '\n\n')
############################## <Step 5: show the splitting result> ##################################
# for f in finals:
# print亮黄(f)
# print亮绿('***************************')
return meta_txt, page_one_meta
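Typical use (fitz is PyMuPDF; the path is illustrative):

```python
meta_txt, page_one_meta = read_and_clean_pdf_text("paper.pdf")
print(meta_txt[:500])      # cleaned full text, paragraphs separated by blank lines
print(page_one_meta[:3])   # leading blocks of page one, usually title/authors/abstract
```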
def get_files_from_everything(txt, type): # type='.md'
"""
这个函数是用来获取指定目录下所有指定类型(如.md的文件,并且对于网络上的文件,也可以获取它。
下面是对每个参数和返回值的说明:
参数
- txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。
- type: 字符串,表示要搜索的文件类型。默认是.md。
返回值
- success: 布尔值,表示函数是否成功执行。
- file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。
- project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。
该函数详细注释已添加,请确认是否满足您的需要。
"""
import glob, os
success = True
if txt.startswith('http'):
# 网络的远程文件
import requests
from toolbox import get_conf
from toolbox import get_log_folder, gen_time_str
proxies = get_conf('proxies')
try:
r = requests.get(txt, proxies=proxies)
except:
raise ConnectionRefusedError(f"无法下载资源{txt},请检查。")
path = os.path.join(get_log_folder(plugin_name='web_download'), gen_time_str()+type)
with open(path, 'wb+') as f: f.write(r.content)
project_folder = get_log_folder(plugin_name='web_download')
file_manifest = [path]
elif txt.endswith(type):
# 直接给定文件
file_manifest = [txt]
project_folder = os.path.dirname(txt)
elif os.path.exists(txt):
# 本地路径,递归搜索
project_folder = txt
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)]
if len(file_manifest) == 0:
success = False
else:
project_folder = None
file_manifest = []
success = False
return success, file_manifest, project_folder
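Example call (local folder case; the same function accepts a single file path or an http(s) URL):

```python
ok, files, folder = get_files_from_everything("./docs", type=".md")
if ok:
    print(f"found {len(files)} markdown files under {folder}")
```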
@Singleton
class nougat_interface():
def __init__(self):
self.threadLock = threading.Lock()
def nougat_with_timeout(self, command, cwd, timeout=3600):
import subprocess
from toolbox import ProxyNetworkActivate
logger.info(f'正在执行命令 {command}')
with ProxyNetworkActivate("Nougat_Download"):
process = subprocess.Popen(command, shell=False, cwd=cwd, env=os.environ)
try:
stdout, stderr = process.communicate(timeout=timeout)
except subprocess.TimeoutExpired:
process.kill()
stdout, stderr = process.communicate()
logger.error("Process timed out!")
return False
return True
def NOUGAT_parse_pdf(self, fp, chatbot, history):
from toolbox import update_ui_lastest_msg
yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
chatbot=chatbot, history=history, delay=0)
self.threadLock.acquire()
import glob, threading, os
from toolbox import get_log_folder, gen_time_str
dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
os.makedirs(dst)
yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度正在加载NOUGAT... 提示首次运行需要花费较长时间下载NOUGAT参数",
chatbot=chatbot, history=history, delay=0)
command = ['nougat', '--out', os.path.abspath(dst), os.path.abspath(fp)]
self.nougat_with_timeout(command, cwd=os.getcwd(), timeout=3600)
res = glob.glob(os.path.join(dst,'*.mmd'))
if len(res) == 0:
self.threadLock.release()
raise RuntimeError("Nougat解析论文失败。")
self.threadLock.release()
return res[0]
def try_install_deps(deps, reload_m=[]):
import subprocess, sys, importlib
for dep in deps:
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', dep])
import site
importlib.reload(site)
for m in reload_m:
importlib.reload(__import__(m))
def get_plugin_arg(plugin_kwargs, key, default):
# if the argument is present but empty, treat it as absent
if (key in plugin_kwargs) and (plugin_kwargs[key] == ""): plugin_kwargs.pop(key)
# normal case
return plugin_kwargs.get(key, default)
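Note the empty-string convention above: a blank argument is treated as absent, so the default wins. For example:

```python
plugin_kwargs = {"advanced_arg": ""}
assert get_plugin_arg(plugin_kwargs, "advanced_arg", default="fast") == "fast"
```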
return cut(txt, must_break_at_empty_line=False)
except RuntimeError:
# the Chinese full stop inserted here is intentional: it serves as a reversible split marker
res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False)
return [r.replace('。\n', '.') for r in res]


@@ -1,127 +0,0 @@
import os
from textwrap import indent
from loguru import logger
class FileNode:
def __init__(self, name, build_manifest=False):
self.name = name
self.children = []
self.is_leaf = False
self.level = 0
self.parenting_ship = []
self.comment = ""
self.comment_maxlen_show = 50
self.build_manifest = build_manifest
self.manifest = {}
@staticmethod
def add_linebreaks_at_spaces(string, interval=10):
return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))
def sanitize_comment(self, comment):
if len(comment) > self.comment_maxlen_show: suf = '...'
else: suf = ''
comment = comment[:self.comment_maxlen_show]
comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('$', '')
comment = self.add_linebreaks_at_spaces(comment, 10)
return '`' + comment + suf + '`'
def add_file(self, file_path, file_comment):
directory_names, file_name = os.path.split(file_path)
current_node = self
level = 1
if directory_names == "":
new_node = FileNode(file_name)
self.manifest[file_path] = new_node
current_node.children.append(new_node)
new_node.is_leaf = True
new_node.comment = self.sanitize_comment(file_comment)
new_node.level = level
current_node = new_node
else:
dnamesplit = directory_names.split(os.sep)
for i, directory_name in enumerate(dnamesplit):
found_child = False
level += 1
for child in current_node.children:
if child.name == directory_name:
current_node = child
found_child = True
break
if not found_child:
new_node = FileNode(directory_name)
current_node.children.append(new_node)
new_node.level = level - 1
current_node = new_node
term = FileNode(file_name)
self.manifest[file_path] = term
term.level = level
term.comment = self.sanitize_comment(file_comment)
term.is_leaf = True
current_node.children.append(term)
def print_files_recursively(self, level=0, code="R0"):
logger.info(' '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
for j, child in enumerate(self.children):
child.print_files_recursively(level=level+1, code=code+str(j))
self.parenting_ship.extend(child.parenting_ship)
p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
p2 = """ --> """
p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
edge_code = p1 + p2 + p3
if edge_code in self.parenting_ship:
continue
self.parenting_ship.append(edge_code)
if self.comment != "":
pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
pc2 = f""" -.-x """
pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
edge_code = pc1 + pc2 + pc3
self.parenting_ship.append(edge_code)
MERMAID_TEMPLATE = r"""
```mermaid
flowchart LR
%% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
classDef Comment stroke-dasharray: 5 5
subgraph {graph_name}
{relationship}
end
```
"""
def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
# Create the root node
file_tree_struct = FileNode("root")
# Build the tree structure
for file_path, file_comment in zip(file_manifest, file_comments):
file_tree_struct.add_file(file_path, file_comment)
file_tree_struct.print_files_recursively()
cc = "\n".join(file_tree_struct.parenting_ship)
ccc = indent(cc, prefix=" "*8)
return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)
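Usage sketch (the manifest and comments are illustrative):

```python
diagram = build_file_tree_mermaid_diagram(
    file_manifest=["config.py", "tests/test_llms.py"],
    file_comments=["global configuration", "LLM smoke tests"],
    graph_name="demo",
)
print(diagram)  # a ```mermaid flowchart LR ...``` block, ready to embed in the chat output
```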
if __name__ == "__main__":
# File manifest
file_manifest = [
"cradle_void_terminal.ipynb",
"tests/test_utils.py",
"tests/test_plugins.py",
"tests/test_llms.py",
"config.py",
"build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
"crazy_functions/latex_fns/latex_actions.py",
"crazy_functions/latex_fns/latex_toolbox.py"
]
file_comments = [
"根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
"包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
"用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
"包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
"用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
"是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
"用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
"包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
]
logger.info(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))


@@ -1,42 +0,0 @@
from toolbox import CatchException, update_ui, update_ui_lastest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random
class MiniGame_ASCII_Art(GptAcademicGameBaseState):
def step(self, prompt, chatbot, history):
if self.step_cnt == 0:
chatbot.append(["我画你猜(动物)", "请稍等..."])
else:
if prompt.strip() == 'exit':
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
return
chatbot.append([prompt, ""])
yield from update_ui(chatbot=chatbot, history=history)
if self.step_cnt == 0:
self.lock_plugin(chatbot)
self.cur_task = 'draw'
if self.cur_task == 'draw':
avail_obj = ["狗", "猫", "鸟", "鱼", "老鼠", "蛇"]
self.obj = random.choice(avail_obj)
inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + \
f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
self.cur_task = 'identify user guess'
res = get_code_block(raw_res)
history += ['', f'the answer is {self.obj}', inputs, res]
yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
elif self.cur_task == 'identify user guess':
if is_same_thing(self.obj, prompt, self.llm_kwargs):
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
else:
self.cur_task = 'identify user guess'
yield from update_ui_lastest_msg(lastmsg="猜错了,再试试,输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)


@@ -1,212 +0,0 @@
prompts_hs = """ 请以“{headstart}”为开头,编写一个小说的第一幕。
- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求第一幕的字数少于300字,且少于2个段落。
"""
prompts_interact = """ 小说的前文回顾:
{previously_on_story}
你是一个作家,根据以上的情节,给出4种不同的后续剧情发展方向,每个发展方向都精明扼要地用一句话说明。稍后,我将在这4个选择中,挑选一种剧情发展。
输出格式例如:
1. 后续剧情发展1
2. 后续剧情发展2
3. 后续剧情发展3
4. 后续剧情发展4
"""
prompts_resume = """小说的前文回顾:
{previously_on_story}
你是一个作家,我们正在互相讨论,确定后续剧情的发展。
在以下的剧情发展中,
{choice}
我认为更合理的是:{user_choice}
请在前文的基础上(不要重复前文),围绕我选定的剧情情节,编写小说的下一幕。
- 禁止杜撰不符合我选择的剧情。
- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 小说的下一幕字数少于300字,且少于2个段落。
"""
prompts_terminate = """小说的前文回顾:
{previously_on_story}
你是一个作家,我们正在互相讨论,确定后续剧情的发展。
现在,故事该结束了,我认为最合理的故事结局是:{user_choice}
请在前文的基础上(不要重复前文),编写小说的最后一幕。
- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求最后一幕的字数少于1000字。
"""
from toolbox import CatchException, update_ui, update_ui_lastest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random
class MiniGame_ResumeStory(GptAcademicGameBaseState):
story_headstart = [
'先行者知道,他现在是全宇宙中唯一的一个人了。',
'深夜,一个年轻人穿过天安门广场向纪念堂走去。在二十二世纪编年史中,计算机把他的代号定为M102。',
'他知道,这最后一课要提前讲了。又一阵剧痛从肝部袭来,几乎使他晕厥过去。',
'在距地球五万光年的远方,在银河系的中心,一场延续了两万年的星际战争已接近尾声。那里的太空中渐渐隐现出一个方形区域,仿佛灿烂的群星的背景被剪出一个方口。',
'伊依一行三人乘坐一艘游艇在南太平洋上做吟诗航行,他们的目的地是南极,如果几天后能顺利到达那里,他们将钻出地壳去看诗云。',
'很多人生来就会莫名其妙地迷上一样东西,仿佛他的出生就是要和这东西约会似的,正是这样,圆圆迷上了肥皂泡。'
]
def begin_game_step_0(self, prompt, chatbot, history):
# init game at step 0
self.headstart = random.choice(self.story_headstart)
self.story = []
chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动,一起写故事,因此你每次写的故事段落应少于300字(结局除外)'
def generate_story_image(self, story_paragraph):
try:
from crazy_functions.AntFin import gen_image
prompt_ = predict_no_ui_long_connection(inputs=story_paragraph, llm_kwargs=self.llm_kwargs, history=[], sys_prompt='你需要根据用户给出的小说段落,进行简短的环境描写。要求80字以内。')
image_url, image_path = gen_image(self.llm_kwargs, prompt_, '512x512', model="dall-e-2", quality='standard', style='natural')
return f'<br/><div align="center"><img src="file={image_path}"></div>'
except:
return ''
def step(self, prompt, chatbot, history):
"""
首先,处理游戏初始化等特殊情况
"""
if self.step_cnt == 0:
self.begin_game_step_0(prompt, chatbot, history)
self.lock_plugin(chatbot)
self.cur_task = 'head_start'
else:
if prompt.strip() == 'exit' or prompt.strip() == '结束剧情':
# should we terminate game here?
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
return
if '剧情收尾' in prompt:
self.cur_task = 'story_terminate'
# # well, game resumes
# chatbot.append([prompt, ""])
# update ui, don't keep the user waiting
yield from update_ui(chatbot=chatbot, history=history)
"""
处理游戏的主体逻辑
"""
if self.cur_task == 'head_start':
"""
这是游戏的第一步
"""
inputs_ = prompts_hs.format(headstart=self.headstart)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '故事开头', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# # 构建后续剧情引导
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
)
self.cur_task = 'user_choice'
elif self.cur_task == 'user_choice':
"""
根据用户的提示,确定故事的下一步
"""
if '请在以下几种故事走向中,选择一种' in chatbot[-1][0]: chatbot.pop(-1)
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# # 构建后续剧情引导
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_,
'请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
)
self.cur_task = 'user_choice'
elif self.cur_task == 'story_terminate':
"""
根据用户的提示,确定故事的结局
"""
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# terminate game
self.delete_game = True
return


@@ -1,35 +0,0 @@
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from request_llms.bridge_all import predict_no_ui_long_connection
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
return "```" + matches[0] + "```" # code block
raise RuntimeError("GPT is not generating proper code.")
def is_same_thing(a, b, llm_kwargs):
from pydantic import BaseModel, Field
class IsSameThing(BaseModel):
is_same_thing: bool = Field(description="determine whether two objects are same thing.", default=False)
def run_gpt_fn(inputs, sys_prompt, history=[]):
return predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs,
history=history, sys_prompt=sys_prompt, observe_window=[]
)
gpt_json_io = GptJsonIO(IsSameThing)
inputs_01 = "Identity whether the user input and the target is the same thing: \n target object: {a} \n user input object: {b} \n\n\n".format(a=a, b=b)
inputs_01 += "\n\n\n Note that the user may describe the target object with a different language, e.g. cat and 猫 are the same thing."
analyze_res_cot_01 = run_gpt_fn(inputs_01, "", [])
inputs_02 = inputs_01 + gpt_json_io.format_instructions
analyze_res = run_gpt_fn(inputs_02, "", [inputs_01, analyze_res_cot_01])
try:
res = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
return res.is_same_thing
except JsonStringError as e:
return False
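The helper runs two passes: a free-form reasoning pass, then a JSON-constrained pass whose output is parsed (and auto-repaired if broken). Illustrative calls, with llm_kwargs supplied by the host plugin:

```python
is_same_thing("猫", "cat", llm_kwargs)   # expected True: cross-language synonyms count
is_same_thing("猫", "dog", llm_kwargs)   # expected False
```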


@@ -1,70 +0,0 @@
import time
import importlib
from toolbox import CatchException, update_ui, update_ui_lastest_msg, gen_time_str
from toolbox import trimmed_format_exc, get_log_folder, promote_file_to_downloadzone, is_the_upload_folder
import multiprocessing
def get_class_name(class_string):
import re
# Use regex to extract the class name
class_name = re.search(r'class (\w+)\(', class_string).group(1)
return class_name
def try_make_module(code, chatbot):
module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
fn_path = f'{get_log_folder(plugin_name="gen_plugin_verify")}/{module_file}.py'
with open(fn_path, 'w', encoding='utf8') as f: f.write(code)
promote_file_to_downloadzone(fn_path, chatbot=chatbot)
class_name = get_class_name(code)
manager = multiprocessing.Manager()
return_dict = manager.dict()
p = multiprocessing.Process(target=is_function_successfully_generated, args=(fn_path, class_name, return_dict))
# only has 10 seconds to run
p.start(); p.join(timeout=10)
if p.is_alive(): p.terminate(); p.join()
p.close()
return return_dict["success"], return_dict['traceback']
# check is_function_successfully_generated
def is_function_successfully_generated(fn_path, class_name, return_dict):
return_dict['success'] = False
return_dict['traceback'] = ""
try:
# Create a spec for the module
module_spec = importlib.util.spec_from_file_location('example_module', fn_path)
# Load the module
example_module = importlib.util.module_from_spec(module_spec)
module_spec.loader.exec_module(example_module)
# Now you can use the module
some_class = getattr(example_module, class_name)
# Now you can create an instance of the class
instance = some_class()
return_dict['success'] = True
return
except:
return_dict['traceback'] = trimmed_format_exc()
return
def subprocess_worker(code, file_path, return_dict):
return_dict['result'] = None
return_dict['success'] = False
return_dict['traceback'] = ""
try:
module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
fn_path = f'{get_log_folder(plugin_name="gen_plugin_run")}/{module_file}.py'
with open(fn_path, 'w', encoding='utf8') as f: f.write(code)
class_name = get_class_name(code)
# Create a spec for the module
module_spec = importlib.util.spec_from_file_location('example_module', fn_path)
# Load the module
example_module = importlib.util.module_from_spec(module_spec)
module_spec.loader.exec_module(example_module)
# Now you can use the module
some_class = getattr(example_module, class_name)
# Now you can create an instance of the class
instance = some_class()
return_dict['result'] = instance.run(file_path)
return_dict['success'] = True
except:
return_dict['traceback'] = trimmed_format_exc()
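A sketch of the verification round-trip (the generated plugin code and the chatbot handle are illustrative; try_make_module imports the class inside a sandboxed subprocess with a 10-second budget):

```python
code = '''
class MyPlugin:
    def run(self, path):
        return "processed " + path
'''
success, tb = try_make_module(code, chatbot)
print("plugin imports cleanly" if success else "import failed:\n" + tb)
```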


@@ -1,37 +0,0 @@
import platform
import pickle
import multiprocessing
def run_in_subprocess_wrapper_func(v_args):
func, args, kwargs, return_dict, exception_dict = pickle.loads(v_args)
import sys
try:
result = func(*args, **kwargs)
return_dict['result'] = result
except Exception as e:
exc_info = sys.exc_info()
exception_dict['exception'] = exc_info
def run_in_subprocess_with_timeout(func, timeout=60):
if platform.system() == 'Linux':
def wrapper(*args, **kwargs):
return_dict = multiprocessing.Manager().dict()
exception_dict = multiprocessing.Manager().dict()
v_args = pickle.dumps((func, args, kwargs, return_dict, exception_dict))
process = multiprocessing.Process(target=run_in_subprocess_wrapper_func, args=(v_args,))
process.start()
process.join(timeout)
if process.is_alive():
process.terminate()
raise TimeoutError(f'功能单元{str(func)}未能在规定时间内完成任务')
process.close()
if 'exception' in exception_dict:
# ooops, the subprocess ran into an exception
exc_info = exception_dict['exception']
raise exc_info[1].with_traceback(exc_info[2])
if 'result' in return_dict.keys():
# If the subprocess ran successfully, return the result
return return_dict['result']
return wrapper
else:
return func
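Usage is wrapper-like; note that on non-Linux platforms the function is returned unchanged, so the timeout only applies on Linux:

```python
def slow_compile(tex_path):
    ...  # a call that may hang indefinitely

safe_compile = run_in_subprocess_with_timeout(slow_compile, timeout=60)
result = safe_compile("merge.tex")   # raises TimeoutError after 60 s (on Linux)
```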


@@ -1,111 +0,0 @@
"""
https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb
Example 1.
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator("setup")
def question_ends_with_question_mark(cls, field):
if field[-1] != "?":
raise ValueError("Badly formed question!")
return field
Example 2.
# Here's another example, but with a compound typed field.
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
"""
import json, re
from loguru import logger as logging
PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
```
{schema}
```"""
PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
```
{schema}
```"""
class JsonStringError(Exception): ...
class GptJsonIO():
def __init__(self, schema, example_instruction=True):
self.pydantic_object = schema
self.example_instruction = example_instruction
self.format_instructions = self.generate_format_instructions()
def generate_format_instructions(self):
schema = self.pydantic_object.schema()
# Remove extraneous fields.
reduced_schema = schema
if "title" in reduced_schema:
del reduced_schema["title"]
if "type" in reduced_schema:
del reduced_schema["type"]
# Ensure json in context is well-formed with double quotes.
schema_str = json.dumps(reduced_schema)
if self.example_instruction:
return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
else:
return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)
def generate_output(self, text):
# Greedy search for 1st json candidate.
match = re.search(
r"\{.*\}", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL
)
json_str = ""
if match: json_str = match.group()
json_object = json.loads(json_str, strict=False)
final_object = self.pydantic_object.parse_obj(json_object)
return final_object
def generate_repair_prompt(self, broken_json, error):
prompt = "Fix a broken json string.\n\n" + \
"(1) The broken json string need to fix is: \n\n" + \
"```" + "\n" + \
broken_json + "\n" + \
"```" + "\n\n" + \
"(2) The error message is: \n\n" + \
error + "\n\n" + \
"Now, fix this json string. \n\n"
return prompt
def generate_output_auto_repair(self, response, gpt_gen_fn):
"""
response: string containing the candidate json
gpt_gen_fn: gpt_gen_fn(inputs, sys_prompt)
"""
try:
result = self.generate_output(response)
except Exception as e:
try:
logging.info(f'Repairing json: {response}')
repair_prompt = self.generate_repair_prompt(broken_json=response, error=repr(e))
result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
logging.info('Repair json: success.')
except Exception as e:
# no way out; give up
logging.info('Repair json: failed.')
raise JsonStringError('Cannot repair json.', str(e))
return result
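An end-to-end sketch (the Friend schema and the canned reply are illustrative stand-ins for a real LLM call):

```python
from pydantic import BaseModel, Field

class Friend(BaseModel):
    name: str = Field(description="name of the friend")
    age: int = Field(description="age of the friend")

io = GptJsonIO(Friend)
prompt = "Extract the person.\n" + io.format_instructions
# a real caller would do: reply = predict_no_ui_long_connection(inputs=prompt, ...)
reply = 'Sure! {"name": "Alice", "age": 30}'
friend = io.generate_output(reply)   # -> Friend(name='Alice', age=30)
```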


@@ -1,26 +0,0 @@
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
def structure_output(txt, prompt, err_msg, run_gpt_fn, pydantic_cls):
gpt_json_io = GptJsonIO(pydantic_cls)
analyze_res = run_gpt_fn(
txt,
sys_prompt=prompt + gpt_json_io.format_instructions
)
try:
result = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
except JsonStringError as e:
return None, err_msg
err_msg = ""
return result, err_msg
def select_tool(prompt, run_gpt_fn, pydantic_cls):
pydantic_cls_instance, err_msg = structure_output(
txt=prompt,
prompt="根据提示, 分析应该调用哪个工具函数\n\n",
err_msg=f"不能理解该联系人",
run_gpt_fn=run_gpt_fn,
pydantic_cls=pydantic_cls
)
return pydantic_cls_instance, err_msg
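Illustrative use: encode the tool choice as a pydantic schema and let the model fill it in (the schema and run_gpt_fn are hypothetical; run_gpt_fn must accept (txt, sys_prompt=...)):

```python
from pydantic import BaseModel, Field

class ToolChoice(BaseModel):
    tool_name: str = Field(description="which tool to call: 'search' or 'summarize'")

choice, err = select_tool("find today's new arxiv papers",
                          run_gpt_fn=run_gpt_fn, pydantic_cls=ToolChoice)
if err == "":
    print(choice.tool_name)   # e.g. 'search'
```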


@@ -1,537 +0,0 @@
import os
import re
import shutil
import numpy as np
from loguru import logger
from toolbox import update_ui, update_ui_lastest_msg, get_log_folder, gen_time_str
from toolbox import get_conf, promote_file_to_downloadzone
from crazy_functions.latex_fns.latex_toolbox import PRESERVE, TRANSFORM
from crazy_functions.latex_fns.latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
from crazy_functions.latex_fns.latex_toolbox import reverse_forbidden_text_careful_brace, reverse_forbidden_text, convert_to_linklist, post_process
from crazy_functions.latex_fns.latex_toolbox import fix_content, find_main_tex_file, merge_tex_files, compile_latex_with_timeout
from crazy_functions.latex_fns.latex_toolbox import find_title_and_abs
from crazy_functions.latex_fns.latex_pickle_io import objdump, objload
pj = os.path.join
def split_subprocess(txt, project_folder, return_dict, opts):
"""
break down latex file to a linked list,
each node use a preserve flag to indicate whether it should
be proccessed by GPT.
"""
text = txt
mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM
# absorb everything above the title and the authors
text, mask = set_forbidden_text(text, mask, r"^(.*?)\\maketitle", re.DOTALL)
text, mask = set_forbidden_text(text, mask, r"^(.*?)\\begin{document}", re.DOTALL)
# absorb \iffalse comments
text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
# absorb begin-end environments spanning no more than 42 lines
text, mask = set_forbidden_text_begin_end(text, mask, r"\\begin\{([a-z\*]*)\}(.*?)\\end\{\1\}", re.DOTALL, limit_n_lines=42)
# absorb anonymous (unnumbered) display equations
text, mask = set_forbidden_text(text, mask, [ r"\$\$([^$]+)\$\$", r"\\\[.*?\\\]" ], re.DOTALL)
# absorb miscellaneous items
text, mask = set_forbidden_text(text, mask, [ r"\\section\{(.*?)\}", r"\\section\*\{(.*?)\}", r"\\subsection\{(.*?)\}", r"\\subsubsection\{(.*?)\}" ])
text, mask = set_forbidden_text(text, mask, [ r"\\bibliography\{(.*?)\}", r"\\bibliographystyle\{(.*?)\}" ])
text, mask = set_forbidden_text(text, mask, r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", re.DOTALL)
text, mask = set_forbidden_text(text, mask, r"\\begin\{lstlisting\}(.*?)\\end\{lstlisting\}", re.DOTALL)
text, mask = set_forbidden_text(text, mask, r"\\begin\{wraptable\}(.*?)\\end\{wraptable\}", re.DOTALL)
text, mask = set_forbidden_text(text, mask, r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}", re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{wrapfigure\}(.*?)\\end\{wrapfigure\}", r"\\begin\{wrapfigure\*\}(.*?)\\end\{wrapfigure\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{figure\}(.*?)\\end\{figure\}", r"\\begin\{figure\*\}(.*?)\\end\{figure\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{multline\}(.*?)\\end\{multline\}", r"\\begin\{multline\*\}(.*?)\\end\{multline\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{table\}(.*?)\\end\{table\}", r"\\begin\{table\*\}(.*?)\\end\{table\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{minipage\}(.*?)\\end\{minipage\}", r"\\begin\{minipage\*\}(.*?)\\end\{minipage\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{align\*\}(.*?)\\end\{align\*\}", r"\\begin\{align\}(.*?)\\end\{align\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\begin\{equation\}(.*?)\\end\{equation\}", r"\\begin\{equation\*\}(.*?)\\end\{equation\*\}"], re.DOTALL)
text, mask = set_forbidden_text(text, mask, [r"\\includepdf\[(.*?)\]\{(.*?)\}", r"\\clearpage", r"\\newpage", r"\\appendix", r"\\tableofcontents", r"\\include\{(.*?)\}"])
text, mask = set_forbidden_text(text, mask, [r"\\vspace\{(.*?)\}", r"\\hspace\{(.*?)\}", r"\\label\{(.*?)\}", r"\\begin\{(.*?)\}", r"\\end\{(.*?)\}", r"\\item "])
text, mask = set_forbidden_text_careful_brace(text, mask, r"\\hl\{(.*?)\}", re.DOTALL)
# the reverse operations must come last
text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\caption\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\abstract\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
text, mask = reverse_forbidden_text(text, mask, r"\\begin\{abstract\}(.*?)\\end\{abstract\}", re.DOTALL, forbid_wrapper=True)
root = convert_to_linklist(text, mask)
# final post-processing pass to improve robustness
root = post_process(root)
# write an html debug file: preserved regions (PRESERVE) in red, converted regions (TRANSFORM) in black
with open(pj(project_folder, 'debug_log.html'), 'w', encoding='utf8') as f:
segment_parts_for_gpt = []
nodes = []
node = root
while True:
nodes.append(node)
show_html = node.string.replace('\n','<br/>')
if not node.preserve:
segment_parts_for_gpt.append(node.string)
f.write(f'<p style="color:black;">#{node.range}{show_html}#</p>')
else:
f.write(f'<p style="color:red;">{show_html}</p>')
node = node.next
if node is None: break
for n in nodes: n.next = None # break
return_dict['nodes'] = nodes
return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
return return_dict
class LatexPaperSplit():
"""
break down latex file to a linked list,
each node use a preserve flag to indicate whether it should
be proccessed by GPT.
"""
def __init__(self) -> None:
self.nodes = None
self.msg = "*{\\scriptsize\\textbf{警告该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
"版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
"项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
# Please do not remove or modify this warning unless you are the original author of the paper (original authors are welcome to contact the developers via the QQ group listed in the README)
self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
self.title = "unknown"
self.abstract = "unknown"
def read_title_and_abstract(self, txt):
try:
title, abstract = find_title_and_abs(txt)
if title is not None:
self.title = title.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
if abstract is not None:
self.abstract = abstract.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
except:
pass
def merge_result(self, arr, mode, msg, buggy_lines=[], buggy_line_surgery_n_lines=10):
"""
Merge the result after the GPT process completed
"""
result_string = ""
node_cnt = 0
line_cnt = 0
for node in self.nodes:
if node.preserve:
line_cnt += node.string.count('\n')
result_string += node.string
else:
translated_txt = fix_content(arr[node_cnt], node.string)
begin_line = line_cnt
end_line = line_cnt + translated_txt.count('\n')
# reverse translation if any error
if any([begin_line-buggy_line_surgery_n_lines <= b_line <= end_line+buggy_line_surgery_n_lines for b_line in buggy_lines]):
translated_txt = node.string
result_string += translated_txt
node_cnt += 1
line_cnt += translated_txt.count('\n')
if mode == 'translate_zh':
pattern = re.compile(r'\\begin\{abstract\}.*\n')
match = pattern.search(result_string)
if not match:
# match \abstract{xxxx}
pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
match = pattern_compile.search(result_string)
position = match.regs[1][0]
else:
# match \begin{abstract}xxxx\end{abstract}
position = match.end()
result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
return result_string
def split(self, txt, project_folder, opts):
"""
break down latex file to a linked list,
each node use a preserve flag to indicate whether it should
be proccessed by GPT.
P.S. use multiprocessing to avoid timeout error
"""
import multiprocessing
manager = multiprocessing.Manager()
return_dict = manager.dict()
p = multiprocessing.Process(
target=split_subprocess,
args=(txt, project_folder, return_dict, opts))
p.start()
p.join()
p.close()
self.nodes = return_dict['nodes']
self.sp = return_dict['segment_parts_for_gpt']
return self.sp
class LatexPaperFileGroup():
"""
use tokenizer to break down text according to max_token_limit
"""
def __init__(self):
self.file_paths = []
self.file_contents = []
self.sp_file_contents = []
self.sp_file_index = []
self.sp_file_tag = []
# count_token
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
self.get_token_num = get_token_num
def run_file_split(self, max_token_limit=1900):
"""
use tokenizer to break down text according to max_token_limit
"""
for index, file_content in enumerate(self.file_contents):
if self.get_token_num(file_content) < max_token_limit:
self.sp_file_contents.append(file_content)
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
def merge_result(self):
self.file_result = ["" for _ in range(len(self.file_paths))]
for r, k in zip(self.sp_file_result, self.sp_file_index):
self.file_result[k] += r
def write_result(self):
manifest = []
for path, res in zip(self.file_paths, self.file_result):
with open(path + '.polish.tex', 'w', encoding='utf8') as f:
manifest.append(path + '.polish.tex')
f.write(res)
return manifest
def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
import time, os, re
from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
# <-------- locate the main tex file ---------->
maintex = find_main_tex_file(file_manifest, mode)
chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
time.sleep(3)
# <-------- read the Latex files and merge the multi-file tex project into one giant tex ---------->
main_tex_basename = os.path.basename(maintex)
assert main_tex_basename.endswith('.tex')
main_tex_basename_bare = main_tex_basename[:-4]
may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
if os.path.exists(may_exist_bbl):
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl'))
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl'))
with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
content = f.read()
merged_content = merge_tex_files(project_folder, content, mode)
with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
f.write(merged_content)
# <-------- fine-grained segmentation of the latex file ---------->
chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
lps = LatexPaperSplit()
lps.read_title_and_abstract(merged_content)
res = lps.split(merged_content, project_folder, opts) # the time-consuming call
# <-------- split overly long latex fragments ---------->
pfg = LatexPaperFileGroup()
for index, r in enumerate(res):
pfg.file_paths.append('segment-' + str(index))
pfg.file_contents.append(r)
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
# <-------- switch the prompt as needed ---------->
inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]
if os.path.exists(pj(project_folder,'temp.pkl')):
# <-------- [debug only] skip the GPT request stage if a debug cache file exists ---------->
pfg = objload(file=pj(project_folder,'temp.pkl'))
else:
# <-------- multi-threaded gpt requests ---------->
history_array = [[""] for _ in range(n_split)]
# LATEX_EXPERIMENTAL, = get_conf('LATEX_EXPERIMENTAL')
# if LATEX_EXPERIMENTAL:
# paper_meta = f"The paper you processing is `{lps.title}`, a part of the abstraction is `{lps.abstract}`"
# paper_meta_max_len = 888
# history_array = [[ paper_meta[:paper_meta_max_len] + '...', "Understand, what should I do?"] for _ in range(n_split)]
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=inputs_array,
inputs_show_user_array=inputs_show_user_array,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=history_array,
sys_prompt_array=sys_prompt_array,
# max_workers=5, # limit on parallel tasks: at most 5 run at once, the rest wait in a queue
scroller_max_len = 40
)
# <-------- reassemble the text fragments into complete tex pieces ---------->
pfg.sp_file_result = []
for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
pfg.sp_file_result.append(gpt_say)
pfg.merge_result()
# <-------- temporary storage for debugging ---------->
pfg.get_token_num = None
objdump(pfg, file=pj(project_folder,'temp.pkl'))
write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
# <-------- write out the files ---------->
msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}"
final_tex = lps.merge_result(pfg.file_result, mode, msg)
objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))
with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)
# <-------- tidy up the results and exit ---------->
chatbot.append((f"完成了吗?", 'GPT结果已输出, 即将编译PDF'))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
# <-------- return ---------->
return project_folder + f'/merge_{mode}.tex'
def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified, fixed_line=[]):
try:
with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
log = f.read()
import re
buggy_lines = re.findall(re.escape(tex_name) + ':([0-9]{1,5}):', log) # escape the name so dots in the file name are matched literally
buggy_lines = [int(l) for l in buggy_lines]
buggy_lines = sorted(buggy_lines)
buggy_line = buggy_lines[0]-1
logger.warning(f"reverting the tex lines that contain errors: {buggy_line}")
# reassemble, reverting the paragraphs that caused errors
if buggy_line not in fixed_line:
fixed_line.append(buggy_line)
lps, file_result, mode, msg = objload(file=pj(work_folder_modified,'merge_result.pkl'))
final_tex = lps.merge_result(file_result, mode, msg, buggy_lines=fixed_line, buggy_line_surgery_n_lines=5*n_fix)
with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
f.write(final_tex)
return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
except:
logger.error("Fatal error occurred, but we cannot identify error, please download zip, read latex log, and compile manually.")
return False, -1, [-1]
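# Illustrative note (added for clarity): with `-file-line-error`, pdflatex
# writes lines of the form `<tex file>:<line>: <message>` into the log, e.g.
#   ./merge_translate_zh.tex:137: Undefined control sequence.
# (a hypothetical example); the regex above collects just the line numbers so
# the offending paragraphs can be reverted to their original text.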
def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
import os, time
n_fix = 1
fixed_line = []
max_try = 32
chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
while True:
import os
may_exist_bbl = pj(work_folder_modified, f'merge.bbl')
target_bbl = pj(work_folder_modified, f'{main_file_modified}.bbl')
if os.path.exists(may_exist_bbl) and not os.path.exists(target_bbl):
shutil.copyfile(may_exist_bbl, target_bbl)
# https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # refresh the Gradio frontend
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # refresh the Gradio frontend
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
# the following steps may proceed only if the second compile succeeded
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # refresh the Gradio frontend
if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # refresh the Gradio frontend
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
if mode!='translate_zh':
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # refresh the Gradio frontend
logger.info( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', os.getcwd())
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # refresh the Gradio frontend
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
# <---------- check the results ----------->
results_ = ""
original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
results_ += f"原始PDF编译是否成功: {original_pdf_success};"
results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
yield from update_ui_lastest_msg(f'{n_fix}编译结束:<br/>{results_}...', chatbot, history) # refresh the Gradio frontend
if diff_pdf_success:
result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
if modified_pdf_success:
yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history) # refresh the Gradio frontend
result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf') # get pdf path
if os.path.exists(pj(work_folder, '..', 'translation')):
shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
# concatenate the two PDFs side by side
if original_pdf_success:
try:
from .latex_toolbox import merge_pdfs
concat_pdf = pj(work_folder_modified, f'comparison.pdf')
merge_pdfs(origin_pdf, result_pdf, concat_pdf)
if os.path.exists(pj(work_folder, '..', 'translation')):
shutil.copyfile(concat_pdf, pj(work_folder, '..', 'translation', 'comparison.pdf'))
promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
except Exception as e:
logger.error(e)
pass
return True # success
else:
if n_fix>=max_try: break
n_fix += 1
can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
tex_name=f'{main_file_modified}.tex',
tex_name_pure=f'{main_file_modified}',
n_fix=n_fix,
work_folder_modified=work_folder_modified,
fixed_line=fixed_line
)
yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # refresh the Gradio frontend
if not can_retry: break
return False # failed
def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
# write html
try:
import shutil
from crazy_functions.pdf_fns.report_gen_html import construct_html
from toolbox import gen_time_str
ch = construct_html()
orig = ""
trans = ""
final = []
for c,r in zip(sp_file_contents, sp_file_result):
final.append(c)
final.append(r)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{gen_time_str()}.trans.html"
res = ch.save_file(create_report_file_name)
shutil.copyfile(res, pj(project_folder, create_report_file_name))
promote_file_to_downloadzone(file=res, chatbot=chatbot)
except Exception:
from toolbox import trimmed_format_exc
logger.error(f'writing html result failed: {trimmed_format_exc()}')
def upload_to_gptac_cloud_if_user_allow(chatbot, arxiv_id):
try:
# if the user allows it, upload the arxiv paper PDF to the GPTAC academic cloud
from toolbox import map_file_to_sha256
# check that everything went well; skip if the expected file was not produced
is_result_good = False
for file_path in chatbot._cookies.get("files_to_promote", []):
if file_path.endswith('translate_zh.pdf'):
is_result_good = True
if not is_result_good:
return
# upload the files
for file_path in chatbot._cookies.get("files_to_promote", []):
align_name = None
# normalized name
for name in ['translate_zh.pdf', 'comparison.pdf']:
if file_path.endswith(name): align_name = name
# if match any align name
if align_name:
logger.info(f'Uploading to GPTAC cloud as the user has set `allow_cloud_io`: {file_path}')
with open(file_path, 'rb') as f:
import requests
url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_upload'
files = {'file': (align_name, f, 'application/octet-stream')}
data = {
'arxiv_id': arxiv_id,
'file_hash': map_file_to_sha256(file_path),
'language': 'zh',
'trans_prompt': 'to_be_implemented',
'llm_model': 'to_be_implemented',
'llm_model_param': 'to_be_implemented',
}
resp = requests.post(url=url, files=files, data=data, timeout=30)
logger.info(f'Upload finished ({resp.status_code}): {file_path}')
except:
# an upload failure never interrupts the program, since this is a secondary feature
pass
def check_gptac_cloud(arxiv_id, chatbot):
import requests
success = False
downloaded = []
try:
for pdf_target in ['translate_zh.pdf', 'comparison.pdf']:
url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_exist'
data = {
'arxiv_id': arxiv_id,
'name': pdf_target,
}
resp = requests.post(url=url, data=data)
cache_hit_result = resp.text.strip('"')
if cache_hit_result.startswith("http"):
url = cache_hit_result
logger.info(f'Downloading from GPTAC cloud: {url}')
resp = requests.get(url=url, timeout=30)
target = os.path.join(get_log_folder(plugin_name='gptac_cloud'), gen_time_str(), pdf_target)
os.makedirs(os.path.dirname(target), exist_ok=True)
with open(target, 'wb') as f:
f.write(resp.content)
new_path = promote_file_to_downloadzone(target, chatbot=chatbot)
success = True
downloaded.append(new_path)
except:
pass
return success, downloaded


@@ -1,48 +0,0 @@
import pickle
class SafeUnpickler(pickle.Unpickler):
def get_safe_classes(self):
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
from numpy.core.multiarray import scalar
from numpy import dtype
# define the whitelist of classes that are safe to load
safe_classes = {
# add other safe classes here
'LatexPaperFileGroup': LatexPaperFileGroup,
'LatexPaperSplit': LatexPaperSplit,
'LinkedListNode': LinkedListNode,
'scalar': scalar,
'dtype': dtype,
}
return safe_classes
def find_class(self, module, name):
# only allow specific classes to be deserialized
self.safe_classes = self.get_safe_classes()
match_class_name = None
for class_name in self.safe_classes.keys():
if (class_name in f'{module}.{name}'):
match_class_name = class_name
if match_class_name is not None:
return self.safe_classes[match_class_name]
# raise an exception when an unauthorized class is being loaded
raise pickle.UnpicklingError(f"Attempted to deserialize unauthorized class '{name}' from module '{module}'")
def objdump(obj, file="objdump.tmp"):
with open(file, "wb+") as f:
pickle.dump(obj, f)
return
def objload(file="objdump.tmp"):
import os
if not os.path.exists(file):
return
with open(file, "rb") as f:
unpickler = SafeUnpickler(f)
return unpickler.load()


@@ -1,906 +0,0 @@
import os
import re
import shutil
import numpy as np
from loguru import logger
PRESERVE = 0
TRANSFORM = 1
pj = os.path.join
class LinkedListNode:
"""
Linked List Node
"""
def __init__(self, string, preserve=True) -> None:
self.string = string
self.preserve = preserve
self.next = None
self.range = None
# self.begin_line = 0
# self.begin_char = 0
def convert_to_linklist(text, mask):
root = LinkedListNode("", preserve=True)
current_node = root
for c, m, i in zip(text, mask, range(len(text))):
if (m == PRESERVE and current_node.preserve) or (
m == TRANSFORM and not current_node.preserve
):
# add
current_node.string += c
else:
current_node.next = LinkedListNode(c, preserve=(m == PRESERVE))
current_node = current_node.next
return root
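# Illustrative sketch (added for clarity, not in the original file): how a
# PRESERVE/TRANSFORM mask maps onto the linked list built above.
def _demo_convert_to_linklist():
    text = r"\label{sec:intro} Hello world"
    mask = np.full(len(text), TRANSFORM, dtype=np.uint8)
    mask[:17] = PRESERVE # keep the \label{...} command untouchable
    node = convert_to_linklist(text, mask)
    while node is not None:
        print(node.preserve, repr(node.string)) # prints the preserved command, then the editable text
        node = node.next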
def post_process(root):
# fix braces
node = root
while True:
string = node.string
if node.preserve:
node = node.next
if node is None:
break
continue
def break_check(string):
str_stack = [""] # (lv, index)
for i, c in enumerate(string):
if c == "{":
str_stack.append("{")
elif c == "}":
if len(str_stack) == 1:
logger.warning("fixing brace error")
return i
str_stack.pop(-1)
else:
str_stack[-1] += c
return -1
bp = break_check(string)
if bp == -1:
pass
elif bp == 0:
node.string = string[:1]
q = LinkedListNode(string[1:], False)
q.next = node.next
node.next = q
else:
node.string = string[:bp]
q = LinkedListNode(string[bp:], False)
q.next = node.next
node.next = q
node = node.next
if node is None:
break
# mask empty lines and sentences that are too short
node = root
while True:
if len(node.string.strip("\n").strip("")) == 0:
node.preserve = True
if len(node.string.strip("\n").strip("")) < 42:
node.preserve = True
node = node.next
if node is None:
break
node = root
while True:
if node.next and node.preserve and node.next.preserve:
node.string += node.next.string
node.next = node.next.next
node = node.next
if node is None:
break
# detach leading and trailing line breaks
node = root
prev_node = None
while True:
if not node.preserve:
lstriped_ = node.string.lstrip().lstrip("\n")
if (
(prev_node is not None)
and (prev_node.preserve)
and (len(lstriped_) != len(node.string))
):
prev_node.string += node.string[: -len(lstriped_)]
node.string = lstriped_
rstriped_ = node.string.rstrip().rstrip("\n")
if (
(node.next is not None)
and (node.next.preserve)
and (len(rstriped_) != len(node.string))
):
node.next.string = node.string[len(rstriped_) :] + node.next.string
node.string = rstriped_
# =-=-=
prev_node = node
node = node.next
if node is None:
break
# annotate each node with its line-number range
node = root
n_line = 0
expansion = 2
while True:
n_l = node.string.count("\n")
node.range = [n_line - expansion, n_line + n_l + expansion] # the range to revert when compilation fails
n_line = n_line + n_l
node = node.next
if node is None:
break
return root
"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""
def set_forbidden_text(text, mask, pattern, flags=0):
"""
Add a preserved text area to this paper,
e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
you can mask out (mask = PRESERVE, so the text becomes untouchable for GPT)
everything between "\begin{algorithm}" and "\end{algorithm}"
"""
if isinstance(pattern, list):
pattern = "|".join(pattern)
pattern_compile = re.compile(pattern, flags)
for res in pattern_compile.finditer(text):
mask[res.span()[0] : res.span()[1]] = PRESERVE
return text, mask
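# Usage sketch (illustrative): protect one environment before the text is
# handed to GPT; the verbatim pattern below is an example, not from the
# original file.
def _demo_set_forbidden_text():
    text = r"A \begin{verbatim}x=1\end{verbatim} B"
    mask = np.full(len(text), TRANSFORM, dtype=np.uint8)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{verbatim\}(.*?)\\end\{verbatim\}", flags=re.DOTALL)
    return mask # every character of the verbatim block is now PRESERVE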
def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
"""
Move an area out of the preserved region (make the text editable for GPT);
count the braces so as to capture the complete text area.
e.g.
\begin{abstract} blablablablablabla. \end{abstract}
"""
if isinstance(pattern, list):
pattern = "|".join(pattern)
pattern_compile = re.compile(pattern, flags)
for res in pattern_compile.finditer(text):
if not forbid_wrapper:
mask[res.span()[0] : res.span()[1]] = TRANSFORM
else:
mask[res.regs[0][0] : res.regs[1][0]] = PRESERVE # '\\begin{abstract}'
mask[res.regs[1][0] : res.regs[1][1]] = TRANSFORM # abstract
mask[res.regs[1][1] : res.regs[0][1]] = PRESERVE # '\\end{abstract}'
return text, mask
def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
"""
Add a preserved text area to this paper (the text becomes untouchable for GPT);
count the braces so as to capture the complete text area.
e.g.
\caption{blablablablabla\textbf{blablabla}blablabla.}
"""
pattern_compile = re.compile(pattern, flags)
for res in pattern_compile.finditer(text):
brace_level = -1
p = begin = end = res.regs[0][0]
for _ in range(1024 * 16):
if text[p] == "}" and brace_level == 0:
break
elif text[p] == "}":
brace_level -= 1
elif text[p] == "{":
brace_level += 1
p += 1
end = p + 1
mask[begin:end] = PRESERVE
return text, mask
def reverse_forbidden_text_careful_brace(
text, mask, pattern, flags=0, forbid_wrapper=True
):
"""
Move an area out of the preserved region (make the text editable for GPT);
count the braces so as to capture the complete text area.
e.g.
\caption{blablablablabla\textbf{blablabla}blablabla.}
"""
pattern_compile = re.compile(pattern, flags)
for res in pattern_compile.finditer(text):
brace_level = 0
p = begin = end = res.regs[1][0]
for _ in range(1024 * 16):
if text[p] == "}" and brace_level == 0:
break
elif text[p] == "}":
brace_level -= 1
elif text[p] == "{":
brace_level += 1
p += 1
end = p
mask[begin:end] = TRANSFORM
if forbid_wrapper:
mask[res.regs[0][0] : begin] = PRESERVE
mask[end : res.regs[0][1]] = PRESERVE
return text, mask
def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
"""
Find every \begin{} ... \end{} text block with fewer than limit_n_lines lines,
and add it to the preserved area
"""
pattern_compile = re.compile(pattern, flags)
def search_with_line_limit(text, mask):
for res in pattern_compile.finditer(text):
cmd = res.group(1) # begin{what}
this = res.group(2) # content between begin and end
this_mask = mask[res.regs[2][0] : res.regs[2][1]]
white_list = [
"document",
"abstract",
"lemma",
"definition",
"sproof",
"em",
"emph",
"textit",
"textbf",
"itemize",
"enumerate",
]
if (cmd in white_list) or this.count(
"\n"
) >= limit_n_lines: # use a magical number 42
this, this_mask = search_with_line_limit(this, this_mask)
mask[res.regs[2][0] : res.regs[2][1]] = this_mask
else:
mask[res.regs[0][0] : res.regs[0][1]] = PRESERVE
return text, mask
return search_with_line_limit(text, mask)
"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Latex Merge File
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""
def find_main_tex_file(file_manifest, mode):
"""
Among multiple tex documents, find the main file (it must contain \documentclass); return the first match.
P.S. let's hope nobody bundles a whole latex template in the upload (template-detection code was added on 6.25)
"""
canidates = []
for texf in file_manifest:
if os.path.basename(texf).startswith("merge"):
continue
with open(texf, "r", encoding="utf8", errors="ignore") as f:
file_content = f.read()
if r"\documentclass" in file_content:
canidates.append(texf)
else:
continue
if len(canidates) == 0:
raise RuntimeError("无法找到一个主Tex文件包含documentclass关键字")
elif len(canidates) == 1:
return canidates[0]
else: # if len(canidates) >= 2: penalize sources for words that are common in Latex templates but rarely appear in a real manuscript, then return the highest scorer
canidates_score = []
# words that hint at a template document; each occurrence costs one point
unexpected_words = [
"\\LaTeX",
"manuscript",
"Guidelines",
"font",
"citations",
"rejected",
"blind review",
"reviewers",
]
expected_words = ["\\input", "\\ref", "\\cite"]
for texf in canidates:
canidates_score.append(0)
with open(texf, "r", encoding="utf8", errors="ignore") as f:
file_content = f.read()
file_content = rm_comments(file_content)
for uw in unexpected_words:
if uw in file_content:
canidates_score[-1] -= 1
for uw in expected_words:
if uw in file_content:
canidates_score[-1] += 1
select = np.argmax(canidates_score) # return the highest scorer
return canidates[select]
def rm_comments(main_file):
new_file_remove_comment_lines = []
for l in main_file.splitlines():
# drop whole-line comments
if l.lstrip().startswith("%"):
pass
else:
new_file_remove_comment_lines.append(l)
main_file = "\n".join(new_file_remove_comment_lines)
# main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file) # 将 \include 命令转换为 \input 命令
main_file = re.sub(r"(?<!\\)%.*", "", main_file) # 使用正则表达式查找半行注释, 并替换为空字符串
return main_file
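# Example (illustrative): escaped percent signs survive comment stripping.
#   rm_comments("50\\% done % note\n% full-line comment")  ->  "50\\% done "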
def find_tex_file_ignore_case(fp):
dir_name = os.path.dirname(fp)
base_name = os.path.basename(fp)
# the given file path is already correct
if os.path.isfile(pj(dir_name, base_name)):
return pj(dir_name, base_name)
# otherwise, try appending a .tex suffix
if not base_name.endswith(".tex"):
base_name += ".tex"
if os.path.isfile(pj(dir_name, base_name)):
return pj(dir_name, base_name)
# still not found: drop the case restriction and try again
import glob
for f in glob.glob(dir_name + "/*.tex"):
base_name_s = os.path.basename(fp)
base_name_f = os.path.basename(f)
if base_name_s.lower() == base_name_f.lower():
return f
# also try with a .tex suffix appended
if not base_name_s.endswith(".tex"):
base_name_s += ".tex"
if base_name_s.lower() == base_name_f.lower():
return f
return None
def merge_tex_files_(project_foler, main_file, mode):
"""
Merge a Tex project recursively
"""
main_file = rm_comments(main_file)
for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
f = s.group(1)
fp = os.path.join(project_foler, f)
fp_ = find_tex_file_ignore_case(fp)
if fp_:
try:
with open(fp_, "r", encoding="utf-8", errors="replace") as fx:
c = fx.read()
except:
c = f"\n\nWarning from GPT-Academic: LaTex source file is missing!\n\n"
else:
raise RuntimeError(f"找不到{fp},Tex源文件缺失")
c = merge_tex_files_(project_foler, c, mode)
main_file = main_file[: s.span()[0]] + c + main_file[s.span()[1] :]
return main_file
def find_title_and_abs(main_file):
def extract_abstract_1(text):
pattern = r"\\abstract\{(.*?)\}"
match = re.search(pattern, text, re.DOTALL)
if match:
return match.group(1)
else:
return None
def extract_abstract_2(text):
pattern = r"\\begin\{abstract\}(.*?)\\end\{abstract\}"
match = re.search(pattern, text, re.DOTALL)
if match:
return match.group(1)
else:
return None
def extract_title(string):
pattern = r"\\title\{(.*?)\}"
match = re.search(pattern, string, re.DOTALL)
if match:
return match.group(1)
else:
return None
abstract = extract_abstract_1(main_file)
if abstract is None:
abstract = extract_abstract_2(main_file)
title = extract_title(main_file)
return title, abstract
def merge_tex_files(project_foler, main_file, mode):
"""
Merge a Tex project recursively;
P.S. also inject CTEX along the way to support Chinese
P.S. also strip the Latex comments along the way
"""
main_file = merge_tex_files_(project_foler, main_file, mode)
main_file = rm_comments(main_file)
if mode == "translate_zh":
# find paper documentclass
pattern = re.compile(r"\\documentclass.*\n")
match = pattern.search(main_file)
assert match is not None, "Cannot find documentclass statement!"
position = match.end()
add_ctex = "\\usepackage{ctex}\n"
add_url = "\\usepackage{url}\n" if "{url}" not in main_file else ""
main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
# fontset=windows
import platform
main_file = re.sub(
r"\\documentclass\[(.*?)\]{(.*?)}",
r"\\documentclass[\1,fontset=windows,UTF8]{\2}",
main_file,
)
main_file = re.sub(
r"\\documentclass{(.*?)}",
r"\\documentclass[fontset=windows,UTF8]{\1}",
main_file,
)
# find paper abstract
pattern_opt1 = re.compile(r"\\begin\{abstract\}.*\n")
pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
match_opt1 = pattern_opt1.search(main_file)
match_opt2 = pattern_opt2.search(main_file)
if (match_opt1 is None) and (match_opt2 is None):
# "Cannot find paper abstract section!"
main_file = insert_abstract(main_file)
match_opt1 = pattern_opt1.search(main_file)
match_opt2 = pattern_opt2.search(main_file)
assert (match_opt1 is not None) or (
match_opt2 is not None
), "Cannot find paper abstract section!"
return main_file
insert_missing_abs_str = r"""
\begin{abstract}
The GPT-Academic program cannot find abstract section in this paper.
\end{abstract}
"""
def insert_abstract(tex_content):
if "\\maketitle" in tex_content:
# find the position of "\maketitle"
find_index = tex_content.index("\\maketitle")
# find the nearest ending line
end_line_index = tex_content.find("\n", find_index)
# insert "abs_str" on the next line
modified_tex = (
tex_content[: end_line_index + 1]
+ "\n\n"
+ insert_missing_abs_str
+ "\n\n"
+ tex_content[end_line_index + 1 :]
)
return modified_tex
elif r"\begin{document}" in tex_content:
# find the position of "\maketitle"
find_index = tex_content.index(r"\begin{document}")
# find the nearest ending line
end_line_index = tex_content.find("\n", find_index)
# insert "abs_str" on the next line
modified_tex = (
tex_content[: end_line_index + 1]
+ "\n\n"
+ insert_missing_abs_str
+ "\n\n"
+ tex_content[end_line_index + 1 :]
)
return modified_tex
else:
return tex_content
"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Post process
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""
def mod_inbraket(match):
"""
Why does chatgpt keep replacing the commas inside \cite{...} with Chinese full-width commas?
"""
# get the matched string
cmd = match.group(1)
str_to_modify = match.group(2)
# modify the matched string
str_to_modify = str_to_modify.replace(":", ":") # full-width Chinese colon first, ASCII colon second
str_to_modify = str_to_modify.replace(",", ",") # full-width Chinese comma first, ASCII comma second
# str_to_modify = 'BOOM'
return "\\" + cmd + "{" + str_to_modify + "}"
def fix_content(final_tex, node_string):
"""
Fix common GPT errors to increase success rate
"""
final_tex = re.sub(r"(?<!\\)%", "\\%", final_tex)
final_tex = re.sub(r"\\([a-z]{2,10})\ \{", r"\\\1{", string=final_tex)
final_tex = re.sub(r"\\\ ([a-z]{2,10})\{", r"\\\1{", string=final_tex)
final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)
if "Traceback" in final_tex and "[Local Message]" in final_tex:
final_tex = node_string # something went wrong; restore the original text
if node_string.count("\\begin") != final_tex.count("\\begin"):
final_tex = node_string # something went wrong; restore the original text
if node_string.count("\_") > 0 and node_string.count("\_") > final_tex.count("\_"):
# walk and replace any _ without \
final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)
def compute_brace_level(string):
# this function counts the number of { and }
brace_level = 0
for c in string:
if c == "{":
brace_level += 1
elif c == "}":
brace_level -= 1
return brace_level
def join_most(tex_t, tex_o):
# this function joins the translated string and the original string when something goes wrong
p_t = 0
p_o = 0
def find_next(string, chars, begin):
p = begin
while p < len(string):
if string[p] in chars:
return p, string[p]
p += 1
return None, None
while True:
res1, char = find_next(tex_o, ["{", "}"], p_o)
if res1 is None:
break
res2, char = find_next(tex_t, [char], p_t)
if res2 is None:
break
p_o = res1 + 1
p_t = res2 + 1
return tex_t[:p_t] + tex_o[p_o:]
if compute_brace_level(final_tex) != compute_brace_level(node_string):
# something went wrong; restore part of the original text so the braces stay balanced
final_tex = join_most(final_tex, node_string)
return final_tex
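# Illustrative check (added; not in the original file): a translation that
# drops a closing brace is repaired by splicing the original tail back in.
def _demo_fix_content():
    original = r"\textbf{one} and \textbf{two}"
    broken = r"\textbf{一} and \textbf{二" # the final brace was lost in translation
    fixed = fix_content(broken, original)
    assert fixed.count("{") == fixed.count("}") # the brace level is balanced again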
def compile_latex_with_timeout(command, cwd, timeout=60):
import subprocess
process = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
)
try:
stdout, stderr = process.communicate(timeout=timeout)
except subprocess.TimeoutExpired:
process.kill()
stdout, stderr = process.communicate()
logger.error("Process timed out (compile_latex_with_timeout)!")
return False
return True
def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
import sys
try:
result = func(*args, **kwargs)
return_dict["result"] = result
except Exception as e:
exc_info = sys.exc_info()
exception_dict["exception"] = exc_info
def run_in_subprocess(func):
import multiprocessing
def wrapper(*args, **kwargs):
return_dict = multiprocessing.Manager().dict()
exception_dict = multiprocessing.Manager().dict()
process = multiprocessing.Process(
target=run_in_subprocess_wrapper_func,
args=(func, args, kwargs, return_dict, exception_dict),
)
process.start()
process.join()
process.close()
if "exception" in exception_dict:
# ooops, the subprocess ran into an exception
exc_info = exception_dict["exception"]
raise exc_info[1].with_traceback(exc_info[2])
if "result" in return_dict.keys():
# If the subprocess ran successfully, return the result
return return_dict["result"]
return wrapper
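# Usage sketch (illustrative): any picklable function can be wrapped, e.g.
#   safe_sum = run_in_subprocess(sum)
#   assert safe_sum([1, 2, 3]) == 6
# exceptions raised in the child are re-raised in the parent with the original traceback.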
def _merge_pdfs(pdf1_path, pdf2_path, output_path):
try:
logger.info("Merging PDFs using _merge_pdfs_ng")
_merge_pdfs_ng(pdf1_path, pdf2_path, output_path)
except:
logger.info("Merging PDFs using _merge_pdfs_legacy")
_merge_pdfs_legacy(pdf1_path, pdf2_path, output_path)
def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
import PyPDF2 # PyPDF2 has a serious memory-leak problem, so it runs inside a subprocess to make the memory easy to release
from PyPDF2.generic import NameObject, TextStringObject, ArrayObject, FloatObject, NumberObject
Percent = 1
# raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
# Open the first PDF file
with open(pdf1_path, "rb") as pdf1_file:
pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
# Open the second PDF file
with open(pdf2_path, "rb") as pdf2_file:
pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
# Create a new PDF file to store the merged pages
output_writer = PyPDF2.PdfFileWriter()
# Determine the number of pages in each PDF file
num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
# Merge the pages from the two PDF files
for page_num in range(num_pages):
# Add the page from the first PDF file
if page_num < pdf1_reader.numPages:
page1 = pdf1_reader.getPage(page_num)
else:
page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
# Add the page from the second PDF file
if page_num < pdf2_reader.numPages:
page2 = pdf2_reader.getPage(page_num)
else:
page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
# Create a new empty page with double width
new_page = PyPDF2.PageObject.createBlankPage(
width=int(
int(page1.mediaBox.getWidth())
+ int(page2.mediaBox.getWidth()) * Percent
),
height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
)
new_page.mergeTranslatedPage(page1, 0, 0)
new_page.mergeTranslatedPage(
page2,
int(
int(page1.mediaBox.getWidth())
- int(page2.mediaBox.getWidth()) * (1 - Percent)
),
0,
)
if "/Annots" in new_page:
annotations = new_page["/Annots"]
for i, annot in enumerate(annotations):
annot_obj = annot.get_object()
# check whether the annotation type is a link (/Link)
if annot_obj.get("/Subtype") == "/Link":
# check for an internal jump (/GoTo) or an external URI link (/URI)
action = annot_obj.get("/A")
if action:
if "/S" in action and action["/S"] == "/GoTo":
# internal link: jump to some page inside the document
dest = action.get("/D") # target page or target position
# if dest and annot.idnum in page2_annot_id:
# if dest in pdf2_reader.named_destinations:
if dest and page2.annotations:
if annot in page2.annotations:
# fetch the jump info from the original file, including the target page
destination = pdf2_reader.named_destinations[
dest
]
page_number = (
pdf2_reader.get_destination_page_number(
destination
)
)
# update the jump info: jump to the matching page at the stored coordinates, e.g.
# "/D": [10, '/XYZ', 100, 100, 0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
+ int(
page1.mediaBox.getWidth()
)
),
destination.dest_array[3],
destination.dest_array[4],
]
) # make sure keys and values are PdfObject instances
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # make sure keys and values are PdfObject instances
}
)
rect = annot_obj.get("/Rect")
# update the clickable rectangle's coordinates
rect = ArrayObject(
[
FloatObject(
rect[0]
+ int(page1.mediaBox.getWidth())
),
rect[1],
FloatObject(
rect[2]
+ int(page1.mediaBox.getWidth())
),
rect[3],
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # make sure keys and values are PdfObject instances
}
)
# if dest and annot.idnum in page1_annot_id:
# if dest in pdf1_reader.named_destinations:
if dest and page1.annotations:
if annot in page1.annotations:
# fetch the jump info from the original file, including the target page
destination = pdf1_reader.named_destinations[
dest
]
page_number = (
pdf1_reader.get_destination_page_number(
destination
)
)
# update the jump info: jump to the matching page at the stored coordinates, e.g.
# "/D": [10, '/XYZ', 100, 100, 0]
if destination.dest_array[1] == "/XYZ":
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
FloatObject(
destination.dest_array[
2
]
),
destination.dest_array[3],
destination.dest_array[4],
]
) # make sure keys and values are PdfObject instances
}
)
else:
annot_obj["/A"].update(
{
NameObject("/D"): ArrayObject(
[
NumberObject(page_number),
destination.dest_array[1],
]
) # make sure keys and values are PdfObject instances
}
)
rect = annot_obj.get("/Rect")
rect = ArrayObject(
[
FloatObject(rect[0]),
rect[1],
FloatObject(rect[2]),
rect[3],
]
)
annot_obj.update(
{
NameObject(
"/Rect"
): rect # make sure keys and values are PdfObject instances
}
)
elif "/S" in action and action["/S"] == "/URI":
# external link: jumps to some URI
uri = action.get("/URI")
output_writer.addPage(new_page)
# Save the merged PDF file
with open(output_path, "wb") as output_file:
output_writer.write(output_file)
def _merge_pdfs_legacy(pdf1_path, pdf2_path, output_path):
import PyPDF2 # PyPDF2 has a serious memory-leak problem, so it runs inside a subprocess to make the memory easy to release
Percent = 0.95
# raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
# Open the first PDF file
with open(pdf1_path, "rb") as pdf1_file:
pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
# Open the second PDF file
with open(pdf2_path, "rb") as pdf2_file:
pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
# Create a new PDF file to store the merged pages
output_writer = PyPDF2.PdfFileWriter()
# Determine the number of pages in each PDF file
num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
# Merge the pages from the two PDF files
for page_num in range(num_pages):
# Add the page from the first PDF file
if page_num < pdf1_reader.numPages:
page1 = pdf1_reader.getPage(page_num)
else:
page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
# Add the page from the second PDF file
if page_num < pdf2_reader.numPages:
page2 = pdf2_reader.getPage(page_num)
else:
page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
# Create a new empty page with double width
new_page = PyPDF2.PageObject.createBlankPage(
width=int(
int(page1.mediaBox.getWidth())
+ int(page2.mediaBox.getWidth()) * Percent
),
height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
)
new_page.mergeTranslatedPage(page1, 0, 0)
new_page.mergeTranslatedPage(
page2,
int(
int(page1.mediaBox.getWidth())
- int(page2.mediaBox.getWidth()) * (1 - Percent)
),
0,
)
output_writer.addPage(new_page)
# Save the merged PDF file
with open(output_path, "wb") as output_file:
output_writer.write(output_file)
merge_pdfs = run_in_subprocess(_merge_pdfs) # PyPDF2 has a serious memory-leak problem, so it runs inside a subprocess to make the memory easy to release


@@ -1,256 +0,0 @@
import time, json, sys, struct
import numpy as np
from loguru import logger as logging
from scipy.io.wavfile import WAVE_FORMAT
def write_numpy_to_wave(filename, rate, data, add_header=False):
"""
Write a NumPy array as a WAV file.
"""
def _array_tofile(fid, data):
# ravel gives a c-contiguous buffer
fid.write(data.ravel().view('b').data)
if hasattr(filename, 'write'):
fid = filename
else:
fid = open(filename, 'wb')
fs = rate
try:
dkind = data.dtype.kind
if not (dkind == 'i' or dkind == 'f' or (dkind == 'u' and
data.dtype.itemsize == 1)):
raise ValueError("Unsupported data type '%s'" % data.dtype)
header_data = b''
header_data += b'RIFF'
header_data += b'\x00\x00\x00\x00'
header_data += b'WAVE'
# fmt chunk
header_data += b'fmt '
if dkind == 'f':
format_tag = WAVE_FORMAT.IEEE_FLOAT
else:
format_tag = WAVE_FORMAT.PCM
if data.ndim == 1:
channels = 1
else:
channels = data.shape[1]
bit_depth = data.dtype.itemsize * 8
bytes_per_second = fs*(bit_depth // 8)*channels
block_align = channels * (bit_depth // 8)
fmt_chunk_data = struct.pack('<HHIIHH', format_tag, channels, fs,
bytes_per_second, block_align, bit_depth)
if not (dkind == 'i' or dkind == 'u'):
# add cbSize field for non-PCM files
fmt_chunk_data += b'\x00\x00'
header_data += struct.pack('<I', len(fmt_chunk_data))
header_data += fmt_chunk_data
# fact chunk (non-PCM files)
if not (dkind == 'i' or dkind == 'u'):
header_data += b'fact'
header_data += struct.pack('<II', 4, data.shape[0])
# check data size (needs to be immediately before the data chunk)
if ((len(header_data)-4-4) + (4+4+data.nbytes)) > 0xFFFFFFFF:
raise ValueError("Data exceeds wave file size limit")
if add_header:
fid.write(header_data)
# data chunk
fid.write(b'data')
fid.write(struct.pack('<I', data.nbytes))
if data.dtype.byteorder == '>' or (data.dtype.byteorder == '=' and
sys.byteorder == 'big'):
data = data.byteswap()
_array_tofile(fid, data)
if add_header:
# Determine file size and place it in correct
# position at start of the file.
size = fid.tell()
fid.seek(4)
fid.write(struct.pack('<I', size-8))
finally:
if not hasattr(filename, 'write'):
fid.close()
else:
fid.seek(0)
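# Usage sketch (illustrative): dump one second of int16 silence as headerless
# PCM, the same way audio_convertion_thread uses this helper below.
def _demo_write_numpy_to_wave():
    silence = np.zeros(16000, dtype=np.int16)
    write_numpy_to_wave("demo.pcm", 16000, silence, add_header=False)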
def is_speaker_speaking(vad, data, sample_rate):
# Function to detect if the speaker is speaking
# The WebRTC VAD only accepts 16-bit mono PCM audio,
# sampled at 8000, 16000, 32000 or 48000 Hz.
# A frame must be either 10, 20, or 30 ms in duration:
frame_duration = 30
n_bit_each = int(sample_rate * frame_duration / 1000)*2 # x2 because audio is 16 bit (2 bytes)
res_list = []
for t in range(len(data)):
if t!=0 and t % n_bit_each == 0:
res_list.append(vad.is_speech(data[t-n_bit_each:t], sample_rate))
info = ''.join(['^' if r else '.' for r in res_list])
info = info[:10]
if any(res_list):
return True, info
else:
return False, info
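# Usage sketch (illustrative): feed 16-bit mono PCM at a supported sample rate;
# `pcm_bytes` is a hypothetical buffer, `info` is the "^"/"." activity strip
# shown in the UI.
#   vad = webrtcvad.Vad(); vad.set_mode(1)
#   speaking, info = is_speaker_speaking(vad, pcm_bytes, 16000)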
class AliyunASR():
def test_on_sentence_begin(self, message, *args):
pass
def test_on_sentence_end(self, message, *args):
message = json.loads(message)
self.parsed_sentence = message['payload']['result']
self.event_on_entence_end.set()
def test_on_start(self, message, *args):
pass
def test_on_error(self, message, *args):
logging.error("on_error args=>{}".format(args))
pass
def test_on_close(self, *args):
self.aliyun_service_ok = False
pass
def test_on_result_chg(self, message, *args):
message = json.loads(message)
self.parsed_text = message['payload']['result']
self.event_on_result_chg.set()
def test_on_completed(self, message, *args):
pass
def audio_convertion_thread(self, uuid):
# capture audio in an asynchronous thread
import nls # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
import tempfile
from scipy import io
from toolbox import get_conf
from .audio_io import change_sample_rate
from .audio_io import RealtimeAudioDistribution
NEW_SAMPLERATE = 16000
rad = RealtimeAudioDistribution()
rad.clean_up()
temp_folder = tempfile.gettempdir()
TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
if len(TOKEN) == 0:
TOKEN = self.get_token()
self.aliyun_service_ok = True
URL="wss://nls-gateway.aliyuncs.com/ws/v1"
sr = nls.NlsSpeechTranscriber(
url=URL,
token=TOKEN,
appkey=APPKEY,
on_sentence_begin=self.test_on_sentence_begin,
on_sentence_end=self.test_on_sentence_end,
on_start=self.test_on_start,
on_result_changed=self.test_on_result_chg,
on_completed=self.test_on_completed,
on_error=self.test_on_error,
on_close=self.test_on_close,
callback_args=[uuid.hex]
)
timeout_limit_second = 20
r = sr.start(aformat="pcm",
timeout=timeout_limit_second,
enable_intermediate_result=True,
enable_punctuation_prediction=True,
enable_inverse_text_normalization=True)
import webrtcvad
vad = webrtcvad.Vad()
vad.set_mode(1)
is_previous_frame_transmitted = False # whether anyone was speaking during the previous frame
previous_frame_data = None
echo_cnt = 0 # after silence, keep sending audio data to the server n more times
echo_cnt_max = 4 # after silence, keep sending audio data to the server n more times
keep_alive_last_send_time = time.time()
while not self.stop:
# time.sleep(self.capture_interval)
audio = rad.read(uuid.hex)
if audio is not None:
# convert to pcm file
temp_file = f'{temp_folder}/{uuid.hex}.pcm'
dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE) # 48000 --> 16000
write_numpy_to_wave(temp_file, NEW_SAMPLERATE, dsdata)
# read pcm binary
with open(temp_file, "rb") as f: data = f.read()
is_speaking, info = is_speaker_speaking(vad, data, NEW_SAMPLERATE)
if is_speaking or echo_cnt > 0:
# the mic is active / we are in the echo tail-off phase
echo_cnt -= 1
if not is_previous_frame_transmitted: # the previous frame had no voice, but prepend it all the same
if previous_frame_data is not None: data = previous_frame_data + data
if is_speaking:
echo_cnt = echo_cnt_max
slices = zip(*(iter(data),) * 640) # groups of 640 bytes each
for i in slices: sr.send_audio(bytes(i))
keep_alive_last_send_time = time.time()
is_previous_frame_transmitted = True
else:
is_previous_frame_transmitted = False
echo_cnt = 0
# keep the connection alive: even in silence, send some audio fragments to the server at intervals
if time.time() - keep_alive_last_send_time > timeout_limit_second/2:
slices = zip(*(iter(data),) * 640) # groups of 640 bytes each
for i in slices: sr.send_audio(bytes(i))
keep_alive_last_send_time = time.time()
is_previous_frame_transmitted = True
self.audio_shape = info
else:
time.sleep(0.1)
if not self.aliyun_service_ok:
self.stop = True
self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
r = sr.stop()
def get_token(self):
from toolbox import get_conf
import json
from aliyunsdkcore.request import CommonRequest
from aliyunsdkcore.client import AcsClient
AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')
# create an AcsClient instance
client = AcsClient(
AccessKey_ID,
AccessKey_secret,
"cn-shanghai"
)
# create the request and set its parameters
request = CommonRequest()
request.set_method('POST')
request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
request.set_version('2019-02-28')
request.set_action_name('CreateToken')
token = None # guard: the request below may fail before `token` is assigned
try:
response = client.do_action_with_exception(request)
logging.info(response)
jss = json.loads(response)
if 'Token' in jss and 'Id' in jss['Token']:
token = jss['Token']['Id']
expireTime = jss['Token']['ExpireTime']
logging.info("token = " + token)
logging.info("expireTime = " + str(expireTime))
except Exception as e:
logging.error(e)
return token


@@ -1,51 +0,0 @@
import numpy as np
from scipy import interpolate
def Singleton(cls):
_instance = {}
def _singleton(*args, **kargs):
if cls not in _instance:
_instance[cls] = cls(*args, **kargs)
return _instance[cls]
return _singleton
@Singleton
class RealtimeAudioDistribution():
def __init__(self) -> None:
self.data = {}
self.max_len = 1024*1024
self.rate = 48000 # read-only: samples per second
def clean_up(self):
self.data = {}
def feed(self, uuid, audio):
self.rate, audio_ = audio
# print('feed', len(audio_), audio_[-25:])
if uuid not in self.data:
self.data[uuid] = audio_
else:
new_arr = np.concatenate((self.data[uuid], audio_))
if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:]
self.data[uuid] = new_arr
def read(self, uuid):
if uuid in self.data:
res = self.data.pop(uuid)
# print('\r read-', len(res), '-', max(res), end='', flush=True)
else:
res = None
return res
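# Illustrative sketch: thanks to the @Singleton decorator above, every caller
# shares the same audio buffer.
#   rad_a = RealtimeAudioDistribution(); rad_b = RealtimeAudioDistribution()
#   assert rad_a is rad_b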
def change_sample_rate(audio, old_sr, new_sr):
duration = audio.shape[0] / old_sr
time_old = np.linspace(0, duration, audio.shape[0])
time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr))
interpolator = interpolate.interp1d(time_old, audio.T)
new_audio = interpolator(time_new).T
return new_audio.astype(np.int16)
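# Usage sketch (illustrative): downsample one second of 48 kHz int16 noise to
# the 16 kHz expected by the ASR service.
def _demo_change_sample_rate():
    audio = (np.random.randn(48000) * 1000).astype(np.int16)
    out = change_sample_rate(audio, old_sr=48000, new_sr=16000)
    assert out.shape[0] == 16000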


@@ -1,39 +0,0 @@
from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
from shared_utils.docker_as_service_api import stream_daas
from shared_utils.docker_as_service_api import DockerServiceApiComModel
def download_video(video_id, only_audio, user_name, chatbot, history):
from toolbox import get_log_folder
chatbot.append([None, "Processing..."])
yield from update_ui(chatbot, history)
client_command = f'{video_id} --audio-only' if only_audio else video_id
server_url = get_conf('DAAS_SERVER_URL')
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder(user_name, plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
status_buf = ""
status_buf += "DaaS message: \n\n"
status_buf += output_manifest['server_message'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard error: \n\n"
status_buf += output_manifest['server_std_err'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard output: \n\n"
status_buf += output_manifest['server_std_out'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS file attach: \n\n"
status_buf += str(output_manifest['server_file_attach'])
yield from update_ui_lastest_msg(status_buf, chatbot, history)
return output_manifest['server_file_attach']
def search_videos(keywords):
from toolbox import get_log_folder
client_command = keywords
server_url = get_conf('DAAS_SERVER_URL').replace('stream', 'search')
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder("default_user", plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
return output_manifest['server_message']


@@ -1,93 +0,0 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, disable_auto_promotion
from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
import time
import pickle
def have_any_recent_upload_files(chatbot):
_5min = 5 * 60
if not chatbot: return False # chatbot is None
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
if not most_recent_uploaded: return False # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
else: return False # most_recent_uploaded is too old
class GptAcademicState():
def __init__(self):
self.reset()
def reset(self):
pass
def dump_state(self, chatbot):
chatbot._cookies['plugin_state'] = pickle.dumps(self)
def set_state(self, chatbot, key, value):
setattr(self, key, value)
chatbot._cookies['plugin_state'] = pickle.dumps(self)
def get_state(chatbot, cls=None):
state = chatbot._cookies.get('plugin_state', None)
if state is not None: state = pickle.loads(state)
elif cls is not None: state = cls()
else: state = GptAcademicState()
state.chatbot = chatbot
return state
class GptAcademicGameBaseState():
"""
1. first init: __init__ ->
"""
def init_game(self, chatbot, lock_plugin):
self.plugin_name = None
self.callback_fn = None
self.delete_game = False
self.step_cnt = 0
def lock_plugin(self, chatbot):
if self.callback_fn is None:
raise ValueError("callback_fn is None")
chatbot._cookies['lock_plugin'] = self.callback_fn
self.dump_state(chatbot)
def get_plugin_name(self):
if self.plugin_name is None:
raise ValueError("plugin_name is None")
return self.plugin_name
def dump_state(self, chatbot):
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
def set_state(self, chatbot, key, value):
setattr(self, key, value)
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
@staticmethod
def sync_state(chatbot, llm_kwargs, cls, plugin_name, callback_fn, lock_plugin=True):
state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
if state is not None:
state = pickle.loads(state)
else:
state = cls()
state.init_game(chatbot, lock_plugin)
state.plugin_name = plugin_name
state.llm_kwargs = llm_kwargs
state.chatbot = chatbot
state.callback_fn = callback_fn
return state
def continue_game(self, prompt, chatbot, history):
# main body of the game
yield from self.step(prompt, chatbot, history)
self.step_cnt += 1
# save state and wrap up
self.dump_state(chatbot)
# clean up if the game has ended
if self.delete_game:
chatbot._cookies['lock_plugin'] = None
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = None
yield from update_ui(chatbot=chatbot, history=history)


@@ -1,126 +0,0 @@
from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout
from loguru import logger
def force_breakdown(txt, limit, get_token_fn):
""" 当无法用标点、空行分割时,我们用最暴力的方法切割
"""
for i in reversed(range(len(txt))):
if get_token_fn(txt[:i]) < limit:
return txt[:i], txt[i:]
return "Tiktoken未知错误", "Tiktoken未知错误"
def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
""" 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
当 remain_txt_to_cut < `_min` 时,我们再把 remain_txt_to_cut_storage 中的部分文字取出
"""
_min = int(5e4)
_max = int(1e5)
# print(len(remain_txt_to_cut), len(remain_txt_to_cut_storage))
if len(remain_txt_to_cut) < _min and len(remain_txt_to_cut_storage) > 0:
remain_txt_to_cut = remain_txt_to_cut + remain_txt_to_cut_storage
remain_txt_to_cut_storage = ""
if len(remain_txt_to_cut) > _max:
remain_txt_to_cut_storage = remain_txt_to_cut[_max:] + remain_txt_to_cut_storage
remain_txt_to_cut = remain_txt_to_cut[:_max]
return remain_txt_to_cut, remain_txt_to_cut_storage
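# Worked example (illustrative): with _min = 5e4 and _max = 1e5,
#   txt, store = maintain_storage("a" * 300000, "")
# leaves len(txt) == 100000 and len(store) == 200000; once txt shrinks below
# 50000 characters, the stored text is drained back in.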
def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=False):
""" 文本切分
"""
res = []
total_len = len(txt_tocut)
fin_len = 0
remain_txt_to_cut = txt_tocut
remain_txt_to_cut_storage = ""
# speed trick: whenever remain_txt_to_cut > `_max`, the text beyond _max is parked in remain_txt_to_cut_storage
remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
while True:
if get_token_fn(remain_txt_to_cut) <= limit:
# the remaining text is within the token limit, so no further cutting is needed
res.append(remain_txt_to_cut); fin_len+=len(remain_txt_to_cut)
break
else:
# the remaining text exceeds the token limit, so cut it
lines = remain_txt_to_cut.split('\n')
# estimate a cut point
estimated_line_cut = limit / get_token_fn(remain_txt_to_cut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
# search for the offset cnt of a suitable cut point
cnt = 0
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
# first try a double blank line (\n\n) as the cut point
if lines[cnt] != "":
continue
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
# no suitable cut point was found
if break_anyway:
# brute-force cutting is allowed
prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
else:
# not allowed: raise an error immediately
raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")
# append to the result list
res.append(prev); fin_len+=len(prev)
# prepare for the next iteration
remain_txt_to_cut = post
remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
process = fin_len/total_len
logger.info(f'正在文本切分 {int(process*100)}%')
if len(remain_txt_to_cut.strip()) == 0:
break
return res
def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3.5-turbo"):
""" 使用多种方式尝试切分文本,以满足 token 限制
"""
from request_llms.bridge_all import model_info
enc = model_info[llm_model]['tokenizer']
def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
try:
# attempt 1: use double blank lines (\n\n) as cut points
return cut(limit, get_token_fn, txt, must_break_at_empty_line=True)
except RuntimeError:
try:
# attempt 2: use single newlines (\n) as cut points
return cut(limit, get_token_fn, txt, must_break_at_empty_line=False)
except RuntimeError:
try:
# attempt 3: use English periods (.) as cut points
res = cut(limit, get_token_fn, txt.replace('.', '。\n'), must_break_at_empty_line=False) # the Chinese full stop here is deliberate; it exists purely as a marker
return [r.replace('。\n', '.') for r in res]
except RuntimeError as e:
try:
# attempt 4: use Chinese full stops (。) as cut points
res = cut(limit, get_token_fn, txt.replace('。', '。。\n'), must_break_at_empty_line=False)
return [r.replace('。。\n', '。') for r in res]
except RuntimeError as e:
# attempt 5: out of options, just cut anywhere
return cut(limit, get_token_fn, txt, must_break_at_empty_line=False, break_anyway=True)
breakdown_text_to_satisfy_token_limit = run_in_subprocess_with_timeout(breakdown_text_to_satisfy_token_limit_, timeout=60)
if __name__ == '__main__':
from crazy_functions.crazy_utils import read_and_clean_pdf_text
file_content, page_one = read_and_clean_pdf_text("build/assets/at.pdf")
from request_llms.bridge_all import model_info
for i in range(5):
file_content += file_content
logger.info(len(file_content))
TOKEN_LIMIT_PER_FRAGMENT = 2500
res = breakdown_text_to_satisfy_token_limit(file_content, TOKEN_LIMIT_PER_FRAGMENT)


@@ -1,171 +0,0 @@
from functools import lru_cache
from toolbox import gen_time_str
from toolbox import promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from toolbox import get_conf
from toolbox import ProxyNetworkActivate
from shared_utils.colorful import *
import requests
import random
import copy
import os
import math
class GROBID_OFFLINE_EXCEPTION(Exception): pass
def get_avail_grobid_url():
GROBID_URLS = get_conf('GROBID_URLS')
if len(GROBID_URLS) == 0: return None
try:
_grobid_url = random.choice(GROBID_URLS) # random load balancing
if _grobid_url.endswith('/'): _grobid_url = _grobid_url.rstrip('/')
with ProxyNetworkActivate('Connect_Grobid'):
res = requests.get(_grobid_url+'/api/isalive')
if res.text=='true': return _grobid_url
else: return None
except:
return None
@lru_cache(maxsize=32)
def parse_pdf(pdf_path, grobid_url):
import scipdf # pip install scipdf_parser
if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
try:
with ProxyNetworkActivate('Connect_Grobid'):
article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
except GROBID_OFFLINE_EXCEPTION:
raise GROBID_OFFLINE_EXCEPTION("GROBID服务不可用,请修改config中的GROBID_URL,可修改成本地GROBID服务。")
except:
raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
return article_dict
def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files):
# -=-=-=-=-=-=-=-= write file 1: translation and original interleaved -=-=-=-=-=-=-=-=
res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=f"{gen_time_str()}translated_and_original.md", file_fullname=None)
promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
generated_conclusion_files.append(res_path)
# -=-=-=-=-=-=-=-= write file 2: translated text only -=-=-=-=-=-=-=-=
translated_res_array = []
# track the current top-level section heading:
last_section_name = ""
for index, value in enumerate(gpt_response_collection):
# first pick the even-numbered entries (1-based), i.e. the GPT responses:
if index % 2 != 0:
# first extract the current English heading:
cur_section_name = gpt_response_collection[index-1].split('\n')[0].split(" Part")[0]
# if index is 1, use the first section name directly
if cur_section_name != last_section_name:
cur_value = cur_section_name + '\n'
last_section_name = copy.deepcopy(cur_section_name)
else:
cur_value = ""
# one more small tweak: re-title the current part, defaulting to the English heading
cur_value += value
translated_res_array.append(cur_value)
res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + translated_res_array,
file_basename = f"{gen_time_str()}-translated_only.md",
file_fullname = None,
auto_caption = False)
promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
generated_conclusion_files.append(res_path)
return res_path
def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs={}):
from crazy_functions.pdf_fns.report_gen_html import construct_html
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
prompt = "以下是一篇学术论文的基本信息:\n"
# title
title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
# authors
authors = article_dict.get('authors', '无法获取 authors')[:100]; prompt += f'authors:{authors}\n\n'
# abstract
abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
# command
prompt += f"请将题目和摘要翻译为{DST_LANG}"
meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]
# Single thread: fetch the paper's meta information
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt,
inputs_show_user=prompt,
llm_kwargs=llm_kwargs,
chatbot=chatbot, history=[],
sys_prompt="You are an academic paper reader。",
)
# Multi-threaded translation
inputs_array = []
inputs_show_user_array = []
# get_token_num
from request_llms.bridge_all import model_info
enc = model_info[llm_kwargs['llm_model']]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
def break_down(txt):
raw_token_num = get_token_num(txt)
if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
return [txt]
else:
# raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
# find a smooth token limit to achieve even separation
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
token_limit_smooth = raw_token_num // count + count
return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])
for section in article_dict.get('sections'):
if len(section['text']) == 0: continue
section_frags = break_down(section['text'])
for i, fragment in enumerate(section_frags):
heading = section['heading']
if len(section_frags) > 1: heading += f' Part-{i+1}'
inputs_array.append(
f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
)
inputs_show_user_array.append(
f"# {heading}\n\n{fragment}"
)
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=inputs_array,
inputs_show_user_array=inputs_show_user_array,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[meta for _ in inputs_array],
sys_prompt_array=[
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "") for _ in inputs_array],
)
# -=-=-=-=-=-=-=-= Write out the Markdown file -=-=-=-=-=-=-=-=
produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files)
# -=-=-=-=-=-=-=-= Write out the HTML file -=-=-=-=-=-=-=-=
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = inputs_show_user_array[i//2]
else:
# Extract the English heading of the current section:
cur_section_name = gpt_response_collection[i-1].split('\n')[0].split(" Part")[0]
cur_value = cur_section_name + "\n" + gpt_response_collection_html[i]
gpt_response_collection_html[i] = cur_value
final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
html_file = ch.save_file(create_report_file_name)
generated_conclusion_files.append(html_file)
promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)
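The break_down helper above smooths the per-fragment limit so the last fragment is not a runt; a worked example of the arithmetic (numbers are illustrative):

import math
raw_token_num, TOKEN_LIMIT_PER_FRAGMENT = 6000, 2500
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))   # -> 3 fragments
token_limit_smooth = raw_token_num // count + count                # -> 2003 tokens per fragment
# fragments come out near 2000 tokens each, instead of 2500 + 2500 + 1000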


@@ -1,26 +0,0 @@
import os
from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
from crazy_functions.pdf_fns.parse_pdf import parse_pdf, translate_pdf
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
import copy, json
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
DST_LANG = "中文"
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
article_dict = parse_pdf(fp, grobid_url)
grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
with open(grobid_json_res, 'w+', encoding='utf8') as f:
f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs=plugin_kwargs)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
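Like the other plugins, 解析PDF_基于GROBID is a generator that streams UI refreshes; a minimal sketch of driving it outside the web UI (llm_kwargs and chatbot below are stand-ins for the objects the UI normally supplies):

llm_kwargs = {'llm_model': 'gpt-3.5-turbo'}   # stand-in; the UI builds the full dict
gen = 解析PDF_基于GROBID(file_manifest=['paper.pdf'], project_folder='.',
                     llm_kwargs=llm_kwargs, plugin_kwargs={}, chatbot=[],
                     history=[], system_prompt='', grobid_url=get_avail_grobid_url())
for _ in gen:   # each yield corresponds to one update_ui refresh
    pass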


@@ -1,111 +0,0 @@
from toolbox import get_log_folder
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.crazy_utils import read_and_clean_pdf_text
from shared_utils.colorful import *
from loguru import logger
import os
def 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
"""
Note: this function is deprecated; the new implementation lives in crazy_functions/pdf_fns/parse_pdf.py
"""
import copy
TOKEN_LIMIT_PER_FRAGMENT = 1024
generated_conclusion_files = []
generated_html_files = []
from crazy_functions.pdf_fns.report_gen_html import construct_html
for index, fp in enumerate(file_manifest):
# Read the PDF file
file_content, page_one = read_and_clean_pdf_text(fp)
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
# Recursively split the PDF file
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# For better results, strip everything after the Introduction (if present)
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
# Single thread: fetch the paper's meta information
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
llm_kwargs=llm_kwargs,
chatbot=chatbot, history=[],
sys_prompt="Your job is to collect information from materials。",
)
# Multi-threaded translation
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[
f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[paper_meta] for _ in paper_fragments],
sys_prompt_array=[
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "")
for _ in paper_fragments],
# max_workers=5 # the maximum parallel load OpenAI allows
)
gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
# Arrange the report format
for i,k in enumerate(gpt_response_collection_md):
if i%2==0:
gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}] \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]\n "
else:
gpt_response_collection_md[i] = gpt_response_collection_md[i]
final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
final.extend(gpt_response_collection_md)
create_report_file_name = f"{os.path.basename(fp)}.trans.md"
res = write_history_to_file(final, create_report_file_name)
promote_file_to_downloadzone(res, chatbot=chatbot)
# Update the UI
generated_conclusion_files.append(f'{get_log_folder()}/{create_report_file_name}')
chatbot.append((f"{fp}完成了吗?", res))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
# write html
try:
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
else:
gpt_response_collection_html[i] = gpt_response_collection_html[i]
final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
generated_html_files.append(ch.save_file(create_report_file_name))
except Exception:
from toolbox import trimmed_format_exc
logger.error(f'writing html result failed: {trimmed_format_exc()}')
# Prepare the files for download
for pdf_path in generated_conclusion_files:
# Rename the file
rename_file = f'翻译-{os.path.basename(pdf_path)}'
promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
for html_path in generated_html_files:
# Rename the file
rename_file = f'翻译-{os.path.basename(html_path)}'
promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI


@@ -1,250 +0,0 @@
from toolbox import get_log_folder, gen_time_str, get_conf
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import promote_file_to_downloadzone, extract_archive
from toolbox import generate_file_link, zip_folder
from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
from loguru import logger
import os
import time
def refresh_key(doc2x_api_key):
import requests, json
url = "https://api.doc2x.noedgeai.com/api/token/refresh"
res = requests.post(
url,
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
res_json = json.loads(decoded)
doc2x_api_key = res_json['data']['token']
else:
raise RuntimeError("[ERROR] status code: %d, body: %s" % (res.status_code, res.text))
return doc2x_api_key
def 解析PDF_DOC2X_转Latex(pdf_file_path):
zip_file_path, unzipped_folder = 解析PDF_DOC2X(pdf_file_path, format='tex')
return unzipped_folder
def 解析PDF_DOC2X(pdf_file_path, format='tex'):
"""
format: 'tex', 'md', 'docx'
"""
import requests, json, os
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
# < ------ Step 1: upload ------ >
logger.info("Doc2x 第1步上传")
with open(pdf_file_path, 'rb') as file:
res = requests.post(
"https://v2.doc2x.noedgeai.com/api/v2/parse/pdf",
headers={"Authorization": "Bearer " + doc2x_api_key},
data=file
)
# res_json = []
if res.status_code == 200:
res_json = res.json()
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
uuid = res_json['data']['uid']
# < ------ Step 2: poll and wait ------ >
logger.info("Doc2x 第2步轮询等待")
params = {'uid': uuid}
while True:
res = requests.get(
'https://v2.doc2x.noedgeai.com/api/v2/parse/status',
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params
)
res_json = res.json()
if res_json['data']['status'] == "success":
break
elif res_json['data']['status'] == "processing":
time.sleep(3)
logger.info(f"Doc2x is processing at {res_json['data']['progress']}%")
elif res_json['data']['status'] == "failed":
raise RuntimeError(f"Doc2x return an error: {res_json}")
# < ------ Step 3: submit the conversion ------ >
logger.info("Doc2x 第3步提交转化")
data = {
"uid": uuid,
"to": format,
"formula_mode": "dollar",
"filename": "output"
}
res = requests.post(
'https://v2.doc2x.noedgeai.com/api/v2/convert/parse',
headers={"Authorization": "Bearer " + doc2x_api_key},
json=data
)
if res.status_code == 200:
res_json = res.json()
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
# < ------ Step 4: wait for the result ------ >
logger.info("Doc2x 第4步等待结果")
params = {'uid': uuid}
while True:
res = requests.get(
'https://v2.doc2x.noedgeai.com/api/v2/convert/parse/result',
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params
)
res_json = res.json()
if res_json['data']['status'] == "success":
break
elif res_json['data']['status'] == "processing":
time.sleep(3)
logger.info(f"Doc2x still processing")
elif res_json['data']['status'] == "failed":
raise RuntimeError(f"Doc2x return an error: {res_json}")
# < ------ Step 5: final processing ------ >
logger.info("Doc2x 第5步最后的处理")
if format=='tex':
target_path = latex_dir
elif format=='md':
target_path = markdown_dir
else: # 'docx' or any other requested format
target_path = get_log_folder(plugin_name="pdf_ocr_" + format)
os.makedirs(target_path, exist_ok=True)
max_attempt = 3
# < ------ download ------ >
for attempt in range(max_attempt):
try:
result_url = res_json['data']['url']
res = requests.get(result_url)
zip_path = os.path.join(target_path, gen_time_str() + '.zip')
unzip_path = os.path.join(target_path, gen_time_str())
if res.status_code == 200:
with open(zip_path, "wb") as f: f.write(res.content)
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
except Exception as e:
if attempt < max_attempt - 1:
logger.error(f"Failed to download latex file, retrying... {e}")
time.sleep(3)
continue
else:
raise e
# < ------ unzip ------ >
import zipfile
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
zip_ref.extractall(unzip_path)
return zip_path, unzip_path
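A minimal driver for the five-step pipeline above (assumes DOC2X_API_KEY is set in the config; the PDF path is illustrative):

zip_path, unzip_path = 解析PDF_DOC2X('paper.pdf', format='md')   # upload -> poll -> convert -> poll -> download
print('markdown extracted to:', unzip_path)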
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def pdf2markdown(filepath):
chatbot.append((None, f"Doc2x 解析中"))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
md_zip_path, unzipped_folder = 解析PDF_DOC2X(filepath, format='md')
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
from crazy_functions.Markdown_Translate import Markdown英译中
import shutil, re
time_tag = gen_time_str()
target_path_base = get_log_folder(chatbot.get_user())
file_origin_name = os.path.basename(md_zip_path)
this_file_path = os.path.join(target_path_base, file_origin_name)
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
for generated_fp in file_manifest:
# Fix some formula issues
with open(generated_fp, 'r', encoding='utf8') as f:
content = f.read()
# Replace \[ \] in formulas with $$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
# Replace \( \) in formulas with $
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
# Generate an online-preview HTML
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# # Non-standard tables in Markdown need an emoji before the table so the formulas render
# md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
if os.path.exists(generated_fp):
# Fix some formula issues
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
# Non-standard tables in Markdown need an emoji before the table so the formulas render
# content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
# Generate an online-preview HTML
file_name = '在线预览翻译' + gen_time_str() + '.html'
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# Generate a zip archive that includes the images
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_基于DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return


@@ -1,85 +0,0 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re
def extract_text_from_files(txt, chatbot, history):
"""
查找pdf/md/word并获取文本内容并返回状态以及文本
输入参数 Args:
chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化)
history (list): List of chat history (历史,对话历史列表)
输出 Returns:
文件是否存在(bool)
final_result(list):文本内容
page_one(list):第一页内容/摘要
file_manifest(list):文件路径
excption(string):需要用户手动处理的信息,如没出错则保持为空
"""
final_result = []
page_one = []
file_manifest = []
excption = ""
if txt == "":
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption # if the input is not a file, return the input content directly
# Look for files referenced in the input
file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx')
file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc')
if file_doc:
excption = "word"
return False, final_result, page_one, file_manifest, excption
file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
if file_num == 0:
final_result.append(txt)
return False, final_result, page_one, file_manifest, excption # if the input is not a file, return the input content directly
if file_pdf:
try: # try importing the dependency; if it is missing, advise installation
import fitz
except ImportError:
excption = "pdf"
return False, final_result, page_one, file_manifest, excption
for index, fp in enumerate(pdf_manifest):
file_content, pdf_one = read_and_clean_pdf_text(fp) # try to split the PDF by sections
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
final_result.append(file_content)
page_one.append(pdf_one)
file_manifest.append(os.path.relpath(fp, folder_pdf))
if file_md:
for index, fp in enumerate(md_manifest):
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
file_content = f.read()
file_content = file_content.encode('utf-8', 'ignore').decode()
headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE) # extract the top-level markdown headings as a summary
if len(headers) > 0:
page_one.append("\n".join(headers)) #合并所有的标题,以换行符分割
else:
page_one.append("")
final_result.append(file_content)
file_manifest.append(os.path.relpath(fp, folder_md))
if file_word:
try: # try importing the dependency; if it is missing, advise installation
from docx import Document
except ImportError:
excption = "word_pip"
return False, final_result, page_one, file_manifest, excption
for index, fp in enumerate(word_manifest):
doc = Document(fp)
file_content = '\n'.join([p.text for p in doc.paragraphs])
file_content = file_content.encode('utf-8', 'ignore').decode()
page_one.append(file_content[:200])
final_result.append(file_content)
file_manifest.append(os.path.relpath(fp, folder_word))
return True, final_result, page_one, file_manifest, excption
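A minimal sketch of calling the extractor above (the folder path is illustrative; chatbot and history are accepted but not used for the extraction itself):

ok, texts, page_ones, manifest, excption = extract_text_from_files('gpt_log/my_docs', chatbot=None, history=[])
if not ok and excption == 'word':
    print('.doc files are not supported, convert them to .docx first')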


@@ -1,58 +0,0 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_log_folder
import os
class construct_html():
def __init__(self) -> None:
self.html_string = ""
def add_row(self, a, b):
from toolbox import markdown_convertion
template = """
{
primary_col: {
header: String.raw`__PRIMARY_HEADER__`,
msg: String.raw`__PRIMARY_MSG__`,
},
secondary_rol: {
header: String.raw`__SECONDARY_HEADER__`,
msg: String.raw`__SECONDARY_MSG__`,
}
},
"""
def std(str):
str = str.replace(r'`',r'&#96;')
if str.endswith("\\"): str += ' '
if str.endswith("}"): str += ' '
if str.endswith("$"): str += ' '
return str
template_ = template
a_lines = a.split('\n')
b_lines = b.split('\n')
if len(a_lines) == 1 or len(a_lines[0]) > 50:
template_ = template_.replace("__PRIMARY_HEADER__", std(a[:20]))
template_ = template_.replace("__PRIMARY_MSG__", std(markdown_convertion(a)))
else:
template_ = template_.replace("__PRIMARY_HEADER__", std(a_lines[0]))
template_ = template_.replace("__PRIMARY_MSG__", std(markdown_convertion('\n'.join(a_lines[1:]))))
if len(b_lines) == 1 or len(b_lines[0]) > 50:
template_ = template_.replace("__SECONDARY_HEADER__", std(b[:20]))
template_ = template_.replace("__SECONDARY_MSG__", std(markdown_convertion(b)))
else:
template_ = template_.replace("__SECONDARY_HEADER__", std(b_lines[0]))
template_ = template_.replace("__SECONDARY_MSG__", std(markdown_convertion('\n'.join(b_lines[1:]))))
self.html_string += template_
def save_file(self, file_name):
from toolbox import get_log_folder
with open('crazy_functions/pdf_fns/report_template.html', 'r', encoding='utf8') as f:
html_template = f.read()
html_template = html_template.replace("__TF_ARR__", self.html_string)
with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
f.write(html_template.encode('utf-8', 'ignore').decode())
return os.path.join(get_log_folder(), file_name)
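A minimal sketch of the report builder above (the strings are illustrative; save_file expects report_template.html to exist and writes under the log folder):

ch = construct_html()
ch.add_row(a='Section 1\nOriginal text ...', b='第一节\n翻译文本 ...')   # left column: original, right column: translation
html_path = ch.save_file('demo.trans.html')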

The file diff is suppressed because one or more lines are too long


@@ -1,73 +0,0 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>GPT-Academic 翻译报告书</title>
<style>
.centered-a {
color: red;
text-align: center;
margin-bottom: 2%;
font-size: 1.5em;
}
.centered-b {
color: red;
text-align: center;
margin-top: 10%;
margin-bottom: 20%;
font-size: 1.5em;
}
.centered-c {
color: rgba(255, 0, 0, 0);
text-align: center;
margin-top: 2%;
margin-bottom: 20%;
font-size: 7em;
}
</style>
<script>
// Configure MathJax settings
MathJax = {
tex: {
inlineMath: [
['$', '$'],
['\\(', '\\)']
]
}
}
addEventListener('zero-md-rendered', () => {MathJax.typeset(); console.log('MathJax typeset!');})
</script>
<!-- Load MathJax library -->
<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
<script
type="module"
src="https://cdn.jsdelivr.net/gh/zerodevx/zero-md@2/dist/zero-md.min.js"
></script>
</head>
<body>
<div class="test_temp1" style="width:10%; height: 500px; float:left;">
</div>
<div class="test_temp2" style="width:80%; height: 500px; float:left;">
<!-- Simply set the `src` attribute to your MD file and win -->
<div class="centered-a">
请按Ctrl+S保存此页面,否则该页面可能在几分钟后失效。
</div>
<zero-md src="translated_markdown.md" no-shadow>
</zero-md>
<div class="centered-b">
本报告由GPT-Academic开源项目生成,地址https://github.com/binary-husky/gpt_academic。
</div>
<div class="centered-c">
本报告由GPT-Academic开源项目生成,地址https://github.com/binary-husky/gpt_academic。
</div>
</div>
<div class="test_temp3" style="width:10%; height: 500px; float:left;">
</div>
</body>
</html>


@@ -1,52 +0,0 @@
import os, json, base64
from pydantic import BaseModel, Field
from textwrap import dedent
from typing import List
class ArgProperty(BaseModel): # PLUGIN_ARG_MENU
title: str = Field(description="The title", default="")
description: str = Field(description="The description", default="")
default_value: str = Field(description="The default value", default="")
type: str = Field(description="The type", default="") # currently we support ['string', 'dropdown']
options: List[str] = Field(default=[], description="List of options available for the argument") # only used when type is 'dropdown'
class GptAcademicPluginTemplate():
def __init__(self):
# please note that `execute` method may run in different threads,
# thus you should not store any state in the plugin instance,
# which may be accessed by multiple threads
pass
def define_arg_selection_menu(self):
"""
An example as below:
```
def define_arg_selection_menu(self):
gui_definition = {
"main_input":
ArgProperty(title="main input", description="description", default_value="default_value", type="string").model_dump_json(),
"advanced_arg":
ArgProperty(title="advanced arguments", description="description", default_value="default_value", type="string").model_dump_json(),
"additional_arg_01":
ArgProperty(title="additional", description="description", default_value="default_value", type="string").model_dump_json(),
}
return gui_definition
```
"""
raise NotImplementedError("You need to implement this method in your plugin class")
def get_js_code_for_generating_menu(self, btnName):
define_arg_selection = self.define_arg_selection_menu()
if len(define_arg_selection.keys()) > 8:
raise ValueError("You can only have up to 8 arguments in the define_arg_selection")
# if "main_input" not in define_arg_selection:
# raise ValueError("You must have a 'main_input' in the define_arg_selection")
DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
raise NotImplementedError("You need to implement this method in your plugin class")
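A minimal sketch of a concrete plugin built on the template above (the class name and behaviour are hypothetical; execute mirrors the template's signature):

class EchoPlugin(GptAcademicPluginTemplate):
    def define_arg_selection_menu(self):
        return {
            "main_input": ArgProperty(title="text", description="text to echo back",
                                      default_value="", type="string").model_dump_json(),
        }
    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        chatbot.append((txt, f"echo: {txt}"))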


@@ -1,87 +0,0 @@
SearchOptimizerPrompt="""作为一个网页搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高网页检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和在Ubuntu上的使用等。
"
原问题: 怎么下载
检索词: ["Nginx 下载","Ubuntu Nginx","Ubuntu安装Nginx"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和使用等。
Q: 报错 "no connection"
A: 报错"no connection"可能是因为……
"
原问题: 怎么解决
检索词: ["Nginx报错"no connection" 解决","Nginx'no connection'报错 原因","Nginx提示'no connection'"]
----------------
历史记录:
"
"
原问题: 你知道 Python 么?
检索词: ["Python","Python 使用教程。","Python 特点和优势"]
----------------
历史记录:
"
Q: 列出Java的三种特点?
A: 1. Java 是一种编译型语言。
2. Java 是一种面向对象的编程语言。
3. Java 是一种跨平台的编程语言。
"
原问题: 介绍下第2点。
检索词: ["Java 面向对象特点","Java 面向对象编程优势。","Java 面向对象编程"]
----------------
现在有历史记录:
"
{history}
"
有其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""
SearchAcademicOptimizerPrompt="""作为一个学术论文搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高学术论文检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和在图像识别中的应用等。
"
原问题: 怎么下载相关论文
检索词: ["深度学习 图像识别 论文下载","图像识别 深度学习 研究论文","深度学习 图像识别 论文资源","Deep Learning Image Recognition Paper Download","Image Recognition Deep Learning Research Paper"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和应用等。
Q: 报错 "模型不收敛"
A: 报错"模型不收敛"可能是因为……
"
原问题: 怎么解决
检索词: ["深度学习 模型不收敛 解决方案 论文","深度学习 模型不收敛 原因 研究","深度学习 模型不收敛 论文","Deep Learning Model Convergence Issue Solution Paper","Deep Learning Model Convergence Problem Research"]
----------------
历史记录:
"
"
原问题: 你知道 GAN 么?
检索词: ["生成对抗网络 论文","GAN 使用教程 论文","GAN 特点和优势 研究","Generative Adversarial Network Paper","GAN Usage Tutorial Paper"]
----------------
历史记录:
"
Q: 列出机器学习的三种应用?
A: 1. 机器学习在图像识别中的应用。
2. 机器学习在自然语言处理中的应用。
3. 机器学习在推荐系统中的应用。
"
原问题: 介绍下第2点。
检索词: ["机器学习 自然语言处理 应用 论文","机器学习 自然语言处理 研究","机器学习 NLP 应用 论文","Machine Learning Natural Language Processing Application Paper","Machine Learning NLP Research"]
----------------
现在有历史记录:
"
{history}
"
有其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""


@@ -1,138 +0,0 @@
import atexit
from loguru import logger
from typing import List
from llama_index.core import Document
from llama_index.core.ingestion import run_transformations
from llama_index.core.schema import TextNode
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
---------------------
{context_str}
---------------------
Answer the user request below (use the context information if necessary, otherwise you can ignore them):
---------------------
{query_str}
"""
QUESTION_ANSWER_RECORD = """\
{{
"type": "This is a previous conversation with the user",
"question": "{question}",
"answer": "{answer}",
}}
"""
class SaveLoad():
def does_checkpoint_exist(self, checkpoint_dir=None):
import os, glob
if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
if not os.path.exists(checkpoint_dir): return False
if len(glob.glob(os.path.join(checkpoint_dir, "*.json"))) == 0: return False
return True
def save_to_checkpoint(self, checkpoint_dir=None):
logger.info(f'saving vector store to: {checkpoint_dir}')
if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
self.vs_index.storage_context.persist(persist_dir=checkpoint_dir)
def load_from_checkpoint(self, checkpoint_dir=None):
if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
if self.does_checkpoint_exist(checkpoint_dir=checkpoint_dir):
logger.info('loading checkpoint from disk')
from llama_index.core import StorageContext, load_index_from_storage
storage_context = StorageContext.from_defaults(persist_dir=checkpoint_dir)
self.vs_index = load_index_from_storage(storage_context, embed_model=self.embed_model)
return self.vs_index
else:
return self.create_new_vs()
def create_new_vs(self):
return GptacVectorStoreIndex.default_vector_store(embed_model=self.embed_model)
def purge(self):
import shutil
shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
self.vs_index = self.create_new_vs()
class LlamaIndexRagWorker(SaveLoad):
def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, checkpoint_dir=None) -> None:
self.debug_mode = True
self.embed_model = OpenAiEmbeddingModel(llm_kwargs)
self.user_name = user_name
self.checkpoint_dir = checkpoint_dir
if auto_load_checkpoint:
self.vs_index = self.load_from_checkpoint(checkpoint_dir)
else:
self.vs_index = self.create_new_vs()
atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))
def assign_embedding_model(self):
pass
def inspect_vector_store(self):
# This function is for debugging
self.vs_index.storage_context.index_store.to_dict()
docstore = self.vs_index.storage_context.docstore.docs
vector_store_preview = "\n".join([ f"{_id} | {tn.text}" for _id, tn in docstore.items() ])
logger.info('\n++ --------inspect_vector_store begin--------')
logger.info(vector_store_preview)
logger.info('oo --------inspect_vector_store end--------')
return vector_store_preview
def add_documents_to_vector_store(self, document_list: List[Document]):
"""
Adds a list of Document objects to the vector store after processing.
"""
documents = document_list
documents_nodes = run_transformations(
documents, # type: ignore
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode:
self.inspect_vector_store()
def add_text_to_vector_store(self, text: str):
node = TextNode(text=text)
documents_nodes = run_transformations(
[node],
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode:
self.inspect_vector_store()
def remember_qa(self, question, answer):
formatted_str = QUESTION_ANSWER_RECORD.format(question=question, answer=answer)
self.add_text_to_vector_store(formatted_str)
def retrieve_from_store_with_query(self, query):
if self.debug_mode:
self.inspect_vector_store()
retriever = self.vs_index.as_retriever()
return retriever.retrieve(query)
def build_prompt(self, query, nodes):
context_str = self.generate_node_array_preview(nodes)
return DEFAULT_QUERY_GENERATION_PROMPT.format(context_str=context_str, query_str=query)
def generate_node_array_preview(self, nodes):
buf = "\n".join(([f"(No.{i+1} | score {n.score:.3f}): {n.text}" for i, n in enumerate(nodes)]))
if self.debug_mode: logger.info(buf)
return buf
def purge_vector_store(self):
"""
Purges the current vector store and creates a new one.
"""
self.purge()
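A minimal round-trip through the worker above (the user name, checkpoint directory, and llm_kwargs are stand-ins; OpenAiEmbeddingModel reads its model and key settings from llm_kwargs in the real app):

llm_kwargs = {'llm_model': 'gpt-3.5-turbo'}   # stand-in dict
worker = LlamaIndexRagWorker('demo_user', llm_kwargs, checkpoint_dir='gpt_log/demo_rag')
worker.add_text_to_vector_store('GPT-Academic can translate PDFs via GROBID.')
nodes = worker.retrieve_from_store_with_query('How are PDFs translated?')
print(worker.build_prompt(query='How are PDFs translated?', nodes=nodes))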


@@ -1,108 +0,0 @@
import llama_index
import os
import atexit
from typing import List
from loguru import logger
from llama_index.core import Document
from llama_index.core.schema import TextNode
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
from shared_utils.connect_void_terminal import get_chat_default_kwargs
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from llama_index.core.ingestion import run_transformations
from llama_index.core import PromptTemplate
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.core import StorageContext
from llama_index.vector_stores.milvus import MilvusVectorStore
from crazy_functions.rag_fns.llama_index_worker import LlamaIndexRagWorker
DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
---------------------
{context_str}
---------------------
Answer the user request below (use the context information if necessary, otherwise you can ignore them):
---------------------
{query_str}
"""
QUESTION_ANSWER_RECORD = """\
{{
"type": "This is a previous conversation with the user",
"question": "{question}",
"answer": "{answer}",
}}
"""
class MilvusSaveLoad():
def does_checkpoint_exist(self, checkpoint_dir=None):
import os, glob
if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
if not os.path.exists(checkpoint_dir): return False
if len(glob.glob(os.path.join(checkpoint_dir, "*.json"))) == 0: return False
return True
def save_to_checkpoint(self, checkpoint_dir=None):
logger.info(f'saving vector store to: {checkpoint_dir}')
# if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
# self.vs_index.storage_context.persist(persist_dir=checkpoint_dir)
def load_from_checkpoint(self, checkpoint_dir=None):
if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
if self.does_checkpoint_exist(checkpoint_dir=checkpoint_dir):
logger.info('loading checkpoint from disk')
from llama_index.core import StorageContext, load_index_from_storage
storage_context = StorageContext.from_defaults(persist_dir=checkpoint_dir)
try:
self.vs_index = load_index_from_storage(storage_context, embed_model=self.embed_model)
return self.vs_index
except Exception:
return self.create_new_vs(checkpoint_dir)
else:
return self.create_new_vs(checkpoint_dir)
def create_new_vs(self, checkpoint_dir, overwrite=False):
vector_store = MilvusVectorStore(
uri=os.path.join(checkpoint_dir, "milvus_demo.db"),
dim=self.embed_model.embedding_dimension(),
overwrite=overwrite
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GptacVectorStoreIndex.default_vector_store(storage_context=storage_context, embed_model=self.embed_model)
return index
def purge(self):
self.vs_index = self.create_new_vs(self.checkpoint_dir, overwrite=True)
class MilvusRagWorker(MilvusSaveLoad, LlamaIndexRagWorker):
def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, checkpoint_dir=None) -> None:
self.debug_mode = True
self.embed_model = OpenAiEmbeddingModel(llm_kwargs)
self.user_name = user_name
self.checkpoint_dir = checkpoint_dir
if auto_load_checkpoint:
self.vs_index = self.load_from_checkpoint(checkpoint_dir)
else:
self.vs_index = self.create_new_vs(checkpoint_dir)
atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))
def inspect_vector_store(self):
# This function is for debugging
try:
self.vs_index.storage_context.index_store.to_dict()
docstore = self.vs_index.storage_context.docstore.docs
if not docstore.items():
raise ValueError("cannot inspect")
vector_store_preview = "\n".join([ f"{_id} | {tn.text}" for _id, tn in docstore.items() ])
except Exception:
dummy_retrieve_res: List["NodeWithScore"] = self.vs_index.as_retriever().retrieve(' ')
vector_store_preview = "\n".join(
[f"{node.id_} | {node.text}" for node in dummy_retrieve_res]
)
logger.info('\n++ --------inspect_vector_store begin--------')
logger.info(vector_store_preview)
logger.info('oo --------inspect_vector_store end--------')
return vector_store_preview


@@ -1,22 +0,0 @@
import os
from llama_index.core import SimpleDirectoryReader
supports_format = ['.csv', '.docx', '.epub', '.ipynb', '.mbox', '.md', '.pdf', '.txt', '.ppt',
'.pptm', '.pptx']
# A modified extract_text function that combines SimpleDirectoryReader with custom parsing logic
def extract_text(file_path):
_, ext = os.path.splitext(file_path.lower())
# Use SimpleDirectoryReader for the file formats it supports
if ext in supports_format:
try:
reader = SimpleDirectoryReader(input_files=[file_path])
documents = reader.load_data()
if len(documents) > 0:
return documents[0].text
except Exception:
pass
return None
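A minimal sketch of the reader above (the path is illustrative):

text = extract_text('notes/readme.md')   # any extension listed in supports_format
if text is None:
    print('unsupported format or unreadable file')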


@@ -1,58 +0,0 @@
from llama_index.core import VectorStoreIndex
from typing import Any, List, Optional
from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.schema import TransformComponent
from llama_index.core.service_context import ServiceContext
from llama_index.core.settings import (
Settings,
callback_manager_from_settings_or_context,
transformations_from_settings_or_context,
)
from llama_index.core.storage.storage_context import StorageContext
class GptacVectorStoreIndex(VectorStoreIndex):
@classmethod
def default_vector_store(
cls,
storage_context: Optional[StorageContext] = None,
show_progress: bool = False,
callback_manager: Optional[CallbackManager] = None,
transformations: Optional[List[TransformComponent]] = None,
# deprecated
service_context: Optional[ServiceContext] = None,
embed_model = None,
**kwargs: Any,
):
"""Create index from documents.
Args:
documents (Optional[Sequence[BaseDocument]]): List of documents to
build the index from.
"""
storage_context = storage_context or StorageContext.from_defaults()
docstore = storage_context.docstore
callback_manager = (
callback_manager
or callback_manager_from_settings_or_context(Settings, service_context)
)
transformations = transformations or transformations_from_settings_or_context(
Settings, service_context
)
with callback_manager.as_trace("index_construction"):
return cls(
nodes=[],
storage_context=storage_context,
callback_manager=callback_manager,
show_progress=show_progress,
transformations=transformations,
service_context=service_context,
embed_model=embed_model,
**kwargs,
)


@@ -0,0 +1,87 @@
#include "libipc/buffer.h"
#include "libipc/utility/pimpl.h"
#include <cstring>
namespace ipc {
bool operator==(buffer const & b1, buffer const & b2) {
return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
}
bool operator!=(buffer const & b1, buffer const & b2) {
return !(b1 == b2);
}
class buffer::buffer_ : public pimpl<buffer_> {
public:
void* p_;
std::size_t s_;
void* a_;
buffer::destructor_t d_;
buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
: p_(p), s_(s), a_(a), d_(d) {
}
~buffer_() {
if (d_ == nullptr) return;
d_((a_ == nullptr) ? p_ : a_, s_);
}
};
buffer::buffer()
: buffer(nullptr, 0, nullptr, nullptr) {
}
buffer::buffer(void* p, std::size_t s, destructor_t d)
: p_(p_->make(p, s, d, nullptr)) {
}
buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
: p_(p_->make(p, s, d, additional)) {
}
buffer::buffer(void* p, std::size_t s)
: buffer(p, s, nullptr) {
}
buffer::buffer(char const & c)
: buffer(const_cast<char*>(&c), 1) {
}
buffer::buffer(buffer&& rhs)
: buffer() {
swap(rhs);
}
buffer::~buffer() {
p_->clear();
}
void buffer::swap(buffer& rhs) {
std::swap(p_, rhs.p_);
}
buffer& buffer::operator=(buffer rhs) {
swap(rhs);
return *this;
}
bool buffer::empty() const noexcept {
return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
}
void* buffer::data() noexcept {
return impl(p_)->p_;
}
void const * buffer::data() const noexcept {
return impl(p_)->p_;
}
std::size_t buffer::size() const noexcept {
return impl(p_)->s_;
}
} // namespace ipc


@@ -0,0 +1,701 @@
#include <type_traits>
#include <cstring>
#include <algorithm>
#include <utility> // std::pair, std::move, std::forward
#include <atomic>
#include <type_traits> // aligned_storage_t
#include <string>
#include <vector>
#include <array>
#include <cassert>
#include "libipc/ipc.h"
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/pool_alloc.h"
#include "libipc/queue.h"
#include "libipc/policy.h"
#include "libipc/rw_lock.h"
#include "libipc/waiter.h"
#include "libipc/utility/log.h"
#include "libipc/utility/id_pool.h"
#include "libipc/utility/scope_guard.h"
#include "libipc/utility/utility.h"
#include "libipc/memory/resource.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_array.h"
namespace {
using msg_id_t = std::uint32_t;
using acc_t = std::atomic<msg_id_t>;
template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t;
template <std::size_t AlignSize>
struct msg_t<0, AlignSize> {
msg_id_t cc_id_;
msg_id_t id_;
std::int32_t remain_;
bool storage_;
};
template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t : msg_t<0, AlignSize> {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
msg_t() = default;
msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size)
: msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} {
if (this->storage_) {
if (data != nullptr) {
// copy storage-id
*reinterpret_cast<ipc::storage_id_t*>(&data_) =
*static_cast<ipc::storage_id_t const *>(data);
}
}
else std::memcpy(&data_, data, size);
}
};
template <typename T>
ipc::buff_t make_cache(T& data, std::size_t size) {
auto ptr = ipc::mem::alloc(size);
std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size));
return { ptr, size, ipc::mem::free };
}
struct cache_t {
std::size_t fill_;
ipc::buff_t buff_;
cache_t(std::size_t f, ipc::buff_t && b)
: fill_(f), buff_(std::move(b))
{}
void append(void const * data, std::size_t size) {
if (fill_ >= buff_.size() || data == nullptr || size == 0) return;
auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size());
std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_);
fill_ = new_fill;
}
};
auto cc_acc() {
static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t));
return static_cast<acc_t*>(acc_h.get());
}
IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept {
return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align;
}
IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept {
return ipc::make_align(alignof(std::max_align_t), align_chunk_size(
ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size));
}
struct chunk_t {
std::atomic<ipc::circ::cc_t> &conns() noexcept {
return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this);
}
void *data() noexcept {
return reinterpret_cast<ipc::byte_t *>(this)
+ ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>));
}
};
struct chunk_info_t {
ipc::id_pool<> pool_;
ipc::spin_lock lock_;
IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept {
return ipc::id_pool<>::max_count * chunk_size;
}
ipc::byte_t *chunks_mem() noexcept {
return reinterpret_cast<ipc::byte_t *>(this + 1);
}
chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept {
if (id < 0) return nullptr;
return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id));
}
};
auto& chunk_storages() {
class chunk_handle_t {
ipc::shm::handle handle_;
public:
chunk_info_t *get_info(std::size_t chunk_size) {
if (!handle_.valid() &&
!handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(),
sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) {
ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size);
return nullptr;
}
auto info = static_cast<chunk_info_t*>(handle_.get());
if (info == nullptr) {
ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size);
return nullptr;
}
return info;
}
};
static ipc::map<std::size_t, chunk_handle_t> chunk_hs;
return chunk_hs;
}
chunk_info_t *chunk_storage_info(std::size_t chunk_size) {
auto &storages = chunk_storages();
std::decay_t<decltype(storages)>::iterator it;
{
static ipc::rw_lock lock;
IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock};
if ((it = storages.find(chunk_size)) == storages.end()) {
using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type;
guard.unlock();
IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock};
it = storages.emplace(chunk_size, chunk_handle_t{}).first;
}
}
return it->second.get_info(chunk_size);
}
std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) {
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return {};
info->lock_.lock();
info->pool_.prepare();
// got a unique id
auto id = info->pool_.acquire();
info->lock_.unlock();
auto chunk = info->at(chunk_size, id);
if (chunk == nullptr) return {};
chunk->conns().store(conns, std::memory_order_relaxed);
return { id, chunk->data() };
}
void *find_storage(ipc::storage_id_t id, std::size_t size) {
if (id < 0) {
ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return nullptr;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return nullptr;
return info->at(chunk_size, id)->data();
}
void release_storage(ipc::storage_id_t id, std::size_t size) {
if (id < 0) {
ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return;
info->lock_.lock();
info->pool_.release(id);
info->lock_.unlock();
}
template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>,
std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept {
return true;
}
template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>,
std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept {
auto last_conns = curr_conns & ~conn_id;
for (unsigned k = 0;;) {
auto chunk_conns = conns.load(std::memory_order_acquire);
if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) {
return (chunk_conns & last_conns) == 0;
}
ipc::yield(k);
}
}
template <typename Flag>
void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) {
if (id < 0) {
ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return;
auto chunk = info->at(chunk_size, id);
if (chunk == nullptr) return;
if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) {
return;
}
info->lock_.lock();
info->pool_.release(id);
info->lock_.unlock();
}
template <typename MsgT>
bool clear_message(void* p) {
auto msg = static_cast<MsgT*>(p);
if (msg->storage_) {
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_;
if (r_size <= 0) {
ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size);
return true;
}
release_storage(
*reinterpret_cast<ipc::storage_id_t*>(&msg->data_),
static_cast<std::size_t>(r_size));
}
return true;
}
struct conn_info_head {
ipc::string name_;
msg_id_t cc_id_; // connection-info id
ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_;
ipc::shm::handle acc_h_;
conn_info_head(char const * name)
: name_ {name}
, cc_id_ {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)}
, cc_waiter_{("__CC_CONN__" + name_).c_str()}
, wt_waiter_{("__WT_CONN__" + name_).c_str()}
, rd_waiter_{("__RD_CONN__" + name_).c_str()}
, acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} {
}
void quit_waiting() {
cc_waiter_.quit_waiting();
wt_waiter_.quit_waiting();
rd_waiter_.quit_waiting();
}
auto acc() {
return static_cast<acc_t*>(acc_h_.get());
}
auto& recv_cache() {
thread_local ipc::unordered_map<msg_id_t, cache_t> tls;
return tls;
}
};
template <typename W, typename F>
bool wait_for(W& waiter, F&& pred, std::uint64_t tm) {
if (tm == 0) return !pred();
for (unsigned k = 0; pred();) {
bool ret = true;
ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] {
ret = waiter.wait_if(std::forward<F>(pred), tm);
k = 0;
});
if (!ret) return false; // timeout or fail
if (k == 0) break; // k has been reset
}
return true;
}
template <typename Policy,
std::size_t DataSize = ipc::data_length,
std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))>
struct queue_generator {
using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>;
struct conn_info_t : conn_info_head {
queue_t que_;
conn_info_t(char const * name)
: conn_info_head{name}
, que_{("__QU_CONN__" +
ipc::to_string(DataSize) + "__" +
ipc::to_string(AlignSize) + "__" + name).c_str()} {
}
void disconnect_receiver() {
bool dis = que_.disconnect();
this->quit_waiting();
if (dis) {
this->recv_cache().clear();
}
}
};
};
template <typename Policy>
struct detail_impl {
using policy_t = Policy;
using flag_t = typename policy_t::flag_t;
using queue_t = typename queue_generator<policy_t>::queue_t;
using conn_info_t = typename queue_generator<policy_t>::conn_info_t;
constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept {
return static_cast<conn_info_t*>(h);
}
constexpr static queue_t* queue_of(ipc::handle_t h) noexcept {
return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_);
}
/* API implementations */
static void disconnect(ipc::handle_t h) {
auto que = queue_of(h);
if (que == nullptr) {
return;
}
que->shut_sending();
assert(info_of(h) != nullptr);
info_of(h)->disconnect_receiver();
}
static bool reconnect(ipc::handle_t * ph, bool start_to_recv) {
assert(ph != nullptr);
assert(*ph != nullptr);
auto que = queue_of(*ph);
if (que == nullptr) {
return false;
}
if (start_to_recv) {
que->shut_sending();
if (que->connect()) { // wouldn't connect twice
info_of(*ph)->cc_waiter_.broadcast();
return true;
}
return false;
}
// start_to_recv == false
if (que->connected()) {
info_of(*ph)->disconnect_receiver();
}
return que->ready_sending();
}
static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) {
assert(ph != nullptr);
if (*ph == nullptr) {
*ph = ipc::mem::alloc<conn_info_t>(name);
}
return reconnect(ph, start_to_recv);
}
static void destroy(ipc::handle_t h) {
disconnect(h);
ipc::mem::free(info_of(h));
}
static std::size_t recv_count(ipc::handle_t h) noexcept {
auto que = queue_of(h);
if (que == nullptr) {
return ipc::invalid_value;
}
return que->conn_count();
}
static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
auto que = queue_of(h);
if (que == nullptr) {
return false;
}
return wait_for(info_of(h)->cc_waiter_, [que, r_count] {
return que->conn_count() < r_count;
}, tm);
}
template <typename F>
static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) {
if (data == nullptr || size == 0) {
ipc::error("fail: send(%p, %zd)\n", data, size);
return false;
}
auto que = queue_of(h);
if (que == nullptr) {
ipc::error("fail: send, queue_of(h) == nullptr\n");
return false;
}
if (que->elems() == nullptr) {
ipc::error("fail: send, queue_of(h)->elems() == nullptr\n");
return false;
}
if (!que->ready_sending()) {
ipc::error("fail: send, que->ready_sending() == false\n");
return false;
}
ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed);
if (conns == 0) {
ipc::error("fail: send, there is no receiver on this connection.\n");
return false;
}
// calc a new message id
auto acc = info_of(h)->acc();
if (acc == nullptr) {
ipc::error("fail: send, info_of(h)->acc() == nullptr\n");
return false;
}
auto msg_id = acc->fetch_add(1, std::memory_order_relaxed);
auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id);
if (size > ipc::large_msg_limit) {
auto dat = acquire_storage(size, conns);
void * buf = dat.second;
if (buf != nullptr) {
std::memcpy(buf, data, size);
return try_push(static_cast<std::int32_t>(size) -
static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0);
}
// try using message fragment
//ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size);
}
// push message fragment
std::int32_t offset = 0;
for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) {
if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length),
static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) {
return false;
}
}
// if remain > 0, this is the last message fragment
std::int32_t remain = static_cast<std::int32_t>(size) - offset;
if (remain > 0) {
if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length),
static_cast<ipc::byte_t const *>(data) + offset,
static_cast<std::size_t>(remain))) {
return false;
}
}
return true;
}
static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return send([tm](auto info, auto que, auto msg_id) {
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
if (!wait_for(info->wt_waiter_, [&] {
return !que->push(
[](void*) { return true; },
info->cc_id_, msg_id, remain, data, size);
}, tm)) {
ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size);
if (!que->force_push(
clear_message<typename queue_t::value_t>,
info->cc_id_, msg_id, remain, data, size)) {
return false;
}
}
info->rd_waiter_.broadcast();
return true;
};
}, h, data, size);
}
static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return send([tm](auto info, auto que, auto msg_id) {
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
if (!wait_for(info->wt_waiter_, [&] {
return !que->push(
[](void*) { return true; },
info->cc_id_, msg_id, remain, data, size);
}, tm)) {
return false;
}
info->rd_waiter_.broadcast();
return true;
};
}, h, data, size);
}
static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) {
auto que = queue_of(h);
if (que == nullptr) {
ipc::error("fail: recv, queue_of(h) == nullptr\n");
return {};
}
if (!que->connected()) {
// hasn't connected yet, just return.
return {};
}
auto& rc = info_of(h)->recv_cache();
for (;;) {
// pop a new message
typename queue_t::value_t msg;
if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] {
return !que->pop(msg);
}, tm)) {
// pop failed, just return.
return {};
}
info_of(h)->wt_waiter_.broadcast();
if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) {
continue; // ignore message to self
}
// msg.remain_ may be negative, with abs(msg.remain_) < data_length
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_;
if (r_size <= 0) {
ipc::error("fail: recv, r_size = %d\n", (int)r_size);
return {};
}
std::size_t msg_size = static_cast<std::size_t>(r_size);
// large message
if (msg.storage_) {
ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_);
void* buf = find_storage(buf_id, msg_size);
if (buf != nullptr) {
struct recycle_t {
ipc::storage_id_t storage_id;
ipc::circ::cc_t curr_conns;
ipc::circ::cc_t conn_id;
} *r_info = ipc::mem::alloc<recycle_t>(recycle_t{
buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id()
});
if (r_info == nullptr) {
ipc::log("fail: ipc::mem::alloc<recycle_t>.\n");
return ipc::buff_t{buf, msg_size}; // no recycle
} else {
return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) {
auto r_info = static_cast<recycle_t *>(p_info);
IPC_UNUSED_ auto finally = ipc::guard([r_info] {
ipc::mem::free(r_info);
});
recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id);
}, r_info};
}
} else {
ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size);
continue;
}
}
// find cache with msg.id_
auto cac_it = rc.find(msg.id_);
if (cac_it == rc.end()) {
if (msg_size <= ipc::data_length) {
return make_cache(msg.data_, msg_size);
}
// gc: evict cached fragments whose message ids are far away from the current one
if (rc.size() > 1024) {
std::vector<msg_id_t> need_del;
for (auto const & pair : rc) {
auto cmp = std::minmax(msg.id_, pair.first);
if (cmp.second - cmp.first > 8192) {
need_del.push_back(pair.first);
}
}
for (auto id : need_del) rc.erase(id);
}
// cache the first message fragment
rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) });
}
// earlier fragments of this message have been cached
else {
auto& cac = cac_it->second;
// this is the last message fragment
if (msg.remain_ <= 0) {
cac.append(&(msg.data_), msg_size);
// finish this message, erase it from cache
auto buff = std::move(cac.buff_);
rc.erase(cac_it);
return buff;
}
// more fragments of this message remain after this one
cac.append(&(msg.data_), ipc::data_length);
}
}
}
static ipc::buff_t try_recv(ipc::handle_t h) {
return recv(h, 0);
}
}; // detail_impl<Policy>
template <typename Flag>
using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>;
} // internal-linkage
namespace ipc {
template <typename Flag>
ipc::handle_t chan_impl<Flag>::inited() {
ipc::detail::waiter::init();
return nullptr;
}
template <typename Flag>
bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) {
return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver);
}
template <typename Flag>
bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) {
return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver);
}
template <typename Flag>
void chan_impl<Flag>::disconnect(ipc::handle_t h) {
detail_impl<policy_t<Flag>>::disconnect(h);
}
template <typename Flag>
void chan_impl<Flag>::destroy(ipc::handle_t h) {
detail_impl<policy_t<Flag>>::destroy(h);
}
template <typename Flag>
char const * chan_impl<Flag>::name(ipc::handle_t h) {
auto info = detail_impl<policy_t<Flag>>::info_of(h);
return (info == nullptr) ? nullptr : info->name_.c_str();
}
template <typename Flag>
std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) {
return detail_impl<policy_t<Flag>>::recv_count(h);
}
template <typename Flag>
bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm);
}
template <typename Flag>
bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::send(h, data, size, tm);
}
template <typename Flag>
buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::recv(h, tm);
}
template <typename Flag>
bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm);
}
template <typename Flag>
buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) {
return detail_impl<policy_t<Flag>>::try_recv(h);
}
template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast >>;
// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast >>; // TBD
// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast >>; // TBD
template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>;
template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>;
} // namespace ipc


@@ -0,0 +1,25 @@
#pragma once
#include <type_traits>
#include "libipc/def.h"
#include "libipc/prod_cons.h"
#include "libipc/circ/elem_array.h"
namespace ipc {
namespace policy {
template <template <typename, std::size_t...> class Elems, typename Flag>
struct choose;
template <typename Flag>
struct choose<circ::elem_array, Flag> {
using flag_t = Flag;
template <std::size_t DataSize, std::size_t AlignSize>
using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
};
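// For illustration: with Flag = ipc::wr<relat::single, relat::multi, trans::broadcast>,
// choose<circ::elem_array, Flag>::elems_t<DataSize, AlignSize> resolves to
//   circ::elem_array<ipc::prod_cons_impl<Flag>, DataSize, AlignSize>,
// i.e. the ring buffer is parameterized by the matching producer-consumer
// policy from libipc/prod_cons.h.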
} // namespace policy
} // namespace ipc


@@ -0,0 +1,17 @@
#include "libipc/pool_alloc.h"
#include "libipc/memory/resource.h"
namespace ipc {
namespace mem {
void* pool_alloc::alloc(std::size_t size) {
return async_pool_alloc::alloc(size);
}
void pool_alloc::free(void* p, std::size_t size) {
async_pool_alloc::free(p, size);
}
} // namespace mem
} // namespace ipc


@@ -0,0 +1,433 @@
#pragma once
#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>
#include "libipc/def.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"
namespace ipc {
////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////
template <typename Flag>
struct prod_cons_impl;
template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
};
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
constexpr circ::u2_t cursor() const noexcept {
return 0;
}
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
return false; // full
}
std::forward<F>(f)(&(elems[cur_wt].data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
/**
* In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
* So we could just disconnect all connections of receiver, and return false.
*/
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
return false;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::forward<F>(f)(&(elems[cur_rd].data_));
std::forward<R>(out)(true);
rd_.fetch_add(1, std::memory_order_release);
return true;
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
if (circ::index_of(cur_rd) ==
circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
using flag_t = std::uint64_t;
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
circ::u2_t cur_ct, nxt_ct;
for (unsigned k = 0;;) {
cur_ct = ct_.load(std::memory_order_relaxed);
if (circ::index_of(nxt_ct = cur_ct + 1) ==
circ::index_of(rd_.load(std::memory_order_acquire))) {
return false; // full
}
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
auto* el = elems + circ::index_of(cur_ct);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
while (1) {
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
return true;
}
if ((~cac_ct) != cur_ct) {
return true;
}
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
return true;
}
wt_.store(nxt_ct, std::memory_order_release);
cur_ct = nxt_ct;
nxt_ct = cur_ct + 1;
el = elems + circ::index_of(cur_ct);
}
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
auto cur_wt = wt_.load(std::memory_order_acquire);
auto id_rd = circ::index_of(cur_rd);
auto id_wt = circ::index_of(cur_wt);
if (id_rd == id_wt) {
auto* el = elems + id_wt;
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if ((~cac_ct) != cur_wt) {
return false; // empty
}
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
wt_.store(cur_wt + 1, std::memory_order_release);
}
k = 0;
}
else {
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
enum : rc_t {
ep_mask = 0x00000000ffffffffull,
ep_incr = 0x0000000100000000ull
};
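    // rc_ layout implied by the masks above (64 bits):
    //   bits 0-31  (ep_mask)  - bitmask of connected readers that still need to
    //                           read this element (rem_cc in push/force_push);
    //   bits 32-63 (~ep_mask) - epoch counter, advanced by ep_incr in force_push
    //                           so that read-counters left over from a previous
    //                           round can be recognized and overwritten.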
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t> rc_ { 0 }; // read-counter
};
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
circ::u2_t cursor() const noexcept {
return wt_.load(std::memory_order_acquire);
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
return false; // has not finished yet
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
epoch_ += ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
if (cur == cursor()) return false; // acquire
auto* el = elems + circ::index_of(cur++);
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & ep_mask) == 0) {
std::forward<R>(out)(true);
return true;
}
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
using flag_t = std::uint64_t;
enum : rc_t {
rc_mask = 0x00000000ffffffffull,
ep_mask = 0x00ffffffffffffffull,
ep_incr = 0x0100000000000000ull,
ic_mask = 0xff000000ffffffffull,
ic_incr = 0x0000000100000000ull
};
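    // rc_ layout implied by the masks above (64 bits):
    //   bits 0-31  (rc_mask)  - bitmask of readers that still need this element;
    //   bits 32-55            - counter advanced by inc_rc()/inc_mask() on pushes
    //                           and pops, seemingly to avoid ABA when rc_ is
    //                           updated by compare-and-swap;
    //   bits 56-63 (~ep_mask) - epoch, advanced by ep_incr in force_push.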
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t > rc_ { 0 }; // read-counter
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
circ::u2_t cursor() const noexcept {
return ct_.load(std::memory_order_acquire);
}
constexpr static rc_t inc_rc(rc_t rc) noexcept {
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
}
constexpr static rc_t inc_mask(rc_t rc) noexcept {
return inc_rc(rc) & ~rc_mask;
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.load(std::memory_order_acquire);
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
circ::cc_t rem_cc = cur_rc & rc_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
return false; // has not finished yet
}
else if (!rem_cc) {
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if ((cur_fl != cur_ct) && cur_fl) {
return false; // full
}
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & rc_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
if (epoch == epoch_.load(std::memory_order_acquire)) {
break;
}
else if (push(wrapper, std::forward<F>(f), elems)) {
return true;
}
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E, std::size_t N>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
auto* el = elems + circ::index_of(cur);
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if (cur_fl != ~static_cast<flag_t>(cur)) {
return false; // empty
}
++cur;
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & rc_mask) == 0) {
std::forward<R>(out)(true);
el->f_ct_.store(cur + N - 1, std::memory_order_release);
return true;
}
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
bool last_one = false;
if ((last_one = (nxt_rc & rc_mask) == 0)) {
el->f_ct_.store(cur + N - 1, std::memory_order_release);
}
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)(last_one);
return true;
}
ipc::yield(k);
}
}
};
} // namespace ipc


@@ -0,0 +1,216 @@
#pragma once
#include <type_traits>
#include <new>
#include <utility> // [[since C++14]]: std::exchange
#include <algorithm>
#include <atomic>
#include <tuple>
#include <thread>
#include <chrono>
#include <string>
#include <cassert> // assert
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/rw_lock.h"
#include "libipc/utility/log.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
namespace ipc {
namespace detail {
class queue_conn {
protected:
circ::cc_t connected_ = 0;
shm::handle elems_h_;
template <typename Elems>
Elems* open(char const * name) {
if (name == nullptr || name[0] == '\0') {
ipc::error("fail open waiter: name is empty!\n");
return nullptr;
}
if (!elems_h_.acquire(name, sizeof(Elems))) {
return nullptr;
}
auto elems = static_cast<Elems*>(elems_h_.get());
if (elems == nullptr) {
ipc::error("fail acquire elems: %s\n", name);
return nullptr;
}
elems->init();
return elems;
}
void close() {
elems_h_.release();
}
public:
queue_conn() = default;
queue_conn(const queue_conn&) = delete;
queue_conn& operator=(const queue_conn&) = delete;
bool connected() const noexcept {
return connected_ != 0;
}
circ::cc_t connected_id() const noexcept {
return connected_;
}
template <typename Elems>
auto connect(Elems* elems) noexcept
/*needs 'optional' here*/
-> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
if (elems == nullptr) return {};
// if it's already connected, just return
if (connected()) return {connected(), false, 0};
connected_ = elems->connect_receiver();
return {connected(), true, elems->cursor()};
}
template <typename Elems>
bool disconnect(Elems* elems) noexcept {
if (elems == nullptr) return false;
// if it's already disconnected, just return false
if (!connected()) return false;
elems->disconnect_receiver(std::exchange(connected_, 0));
return true;
}
};
template <typename Elems>
class queue_base : public queue_conn {
using base_t = queue_conn;
public:
using elems_t = Elems;
using policy_t = typename elems_t::policy_t;
protected:
elems_t * elems_ = nullptr;
decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
bool sender_flag_ = false;
public:
using base_t::base_t;
queue_base() = default;
explicit queue_base(char const * name)
: queue_base{} {
elems_ = open<elems_t>(name);
}
explicit queue_base(elems_t * elems) noexcept
: queue_base{} {
assert(elems != nullptr);
elems_ = elems;
}
/* not virtual */ ~queue_base() {
base_t::close();
}
elems_t * elems() noexcept { return elems_; }
elems_t const * elems() const noexcept { return elems_; }
bool ready_sending() noexcept {
if (elems_ == nullptr) return false;
return sender_flag_ || (sender_flag_ = elems_->connect_sender());
}
void shut_sending() noexcept {
if (elems_ == nullptr) return;
if (!sender_flag_) return;
elems_->disconnect_sender();
}
bool connect() noexcept {
auto tp = base_t::connect(elems_);
if (std::get<0>(tp) && std::get<1>(tp)) {
cursor_ = std::get<2>(tp);
return true;
}
return std::get<0>(tp);
}
bool disconnect() noexcept {
return base_t::disconnect(elems_);
}
std::size_t conn_count() const noexcept {
return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
}
bool valid() const noexcept {
return elems_ != nullptr;
}
bool empty() const noexcept {
return !valid() || (cursor_ == elems_->cursor());
}
template <typename T, typename F, typename... P>
bool push(F&& prep, P&&... params) {
if (elems_ == nullptr) return false;
return elems_->push(this, [&](void* p) {
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
});
}
template <typename T, typename F, typename... P>
bool force_push(F&& prep, P&&... params) {
if (elems_ == nullptr) return false;
return elems_->force_push(this, [&](void* p) {
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
});
}
template <typename T, typename F>
bool pop(T& item, F&& out) {
if (elems_ == nullptr) {
return false;
}
return elems_->pop(this, &(this->cursor_), [&item](void* p) {
::new (&item) T(std::move(*static_cast<T*>(p)));
}, std::forward<F>(out));
}
};
} // namespace detail
template <typename T, typename Policy>
class queue final : public detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>> {
using base_t = detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>>;
public:
using value_t = T;
using base_t::base_t;
template <typename... P>
bool push(P&&... params) {
return base_t::template push<T>(std::forward<P>(params)...);
}
template <typename... P>
bool force_push(P&&... params) {
return base_t::template force_push<T>(std::forward<P>(params)...);
}
bool pop(T& item) {
return base_t::pop(item, [](bool) {});
}
template <typename F>
bool pop(T& item, F&& out) {
return base_t::pop(item, std::forward<F>(out));
}
};
} // namespace ipc
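
A minimal usage sketch of the queue template above. Everything here besides the queue API itself (msg_t, the queue name, the Flag choice, and the include paths) is an assumption for illustration; the channel layer shown earlier adds waiters and message fragmenting on top of this:

#include "libipc/queue.h"    // include paths assumed
#include "libipc/policy.h"

struct msg_t { int id; int payload; };   // trivially copyable message type

using flag_t   = ipc::wr<ipc::relat::single, ipc::relat::multi, ipc::trans::broadcast>;
using policy_t = ipc::policy::choose<ipc::circ::elem_array, flag_t>;

void queue_sketch() {
    ipc::queue<msg_t, policy_t> q {"demo-queue"}; // opens/creates the shared ring
    q.connect();                                  // register this side as a receiver
    if (q.ready_sending()) {
        // first argument is the 'prep' callback; returning true lets the
        // element be constructed in place inside shared memory
        q.push([](void *) { return true; }, msg_t{1, 42});
    }
    msg_t m;
    if (q.pop(m)) { /* consume m */ }
    q.disconnect();
}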


@@ -0,0 +1,103 @@
#include <string>
#include <utility>
#include "libipc/shm.h"
#include "libipc/utility/pimpl.h"
#include "libipc/memory/resource.h"
namespace ipc {
namespace shm {
class handle::handle_ : public pimpl<handle_> {
public:
shm::id_t id_ = nullptr;
void* m_ = nullptr;
ipc::string n_;
std::size_t s_ = 0;
};
handle::handle()
: p_(p_->make()) {
}
handle::handle(char const * name, std::size_t size, unsigned mode)
: handle() {
acquire(name, size, mode);
}
handle::handle(handle&& rhs)
: handle() {
swap(rhs);
}
handle::~handle() {
release();
p_->clear();
}
void handle::swap(handle& rhs) {
std::swap(p_, rhs.p_);
}
handle& handle::operator=(handle rhs) {
swap(rhs);
return *this;
}
bool handle::valid() const noexcept {
return impl(p_)->m_ != nullptr;
}
std::size_t handle::size() const noexcept {
return impl(p_)->s_;
}
char const * handle::name() const noexcept {
return impl(p_)->n_.c_str();
}
std::int32_t handle::ref() const noexcept {
return shm::get_ref(impl(p_)->id_);
}
void handle::sub_ref() noexcept {
shm::sub_ref(impl(p_)->id_);
}
bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
release();
impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
return valid();
}
std::int32_t handle::release() {
if (impl(p_)->id_ == nullptr) return -1;
return shm::release(detach());
}
void* handle::get() const {
return impl(p_)->m_;
}
void handle::attach(id_t id) {
if (id == nullptr) return;
release();
impl(p_)->id_ = id;
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
}
id_t handle::detach() {
auto old = impl(p_)->id_;
impl(p_)->id_ = nullptr;
impl(p_)->m_ = nullptr;
impl(p_)->s_ = 0;
impl(p_)->n_.clear();
return old;
}
} // namespace shm
} // namespace ipc
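
A minimal sketch of the handle API above; the segment name and size are illustrative, and a default open-or-create mode argument is assumed to be supplied by the header:

#include "libipc/shm.h"

void shm_sketch() {
    ipc::shm::handle h {"demo-shm", 4096};  // calls acquire() on construction
    if (h.valid()) {
        void *mem = h.get();                // mapped memory, h.size() bytes
        // ... read/write shared data via mem ...
    }
}                                           // ~handle() releases the mapping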


@@ -0,0 +1,83 @@
#pragma once
#include <utility>
#include <string>
#include <mutex>
#include <atomic>
#include "libipc/def.h"
#include "libipc/mutex.h"
#include "libipc/condition.h"
#include "libipc/platform/detail.h"
namespace ipc {
namespace detail {
class waiter {
ipc::sync::condition cond_;
ipc::sync::mutex lock_;
std::atomic<bool> quit_ {false};
public:
static void init();
waiter() = default;
waiter(char const *name) {
open(name);
}
~waiter() {
close();
}
bool valid() const noexcept {
return cond_.valid() && lock_.valid();
}
bool open(char const *name) noexcept {
quit_.store(false, std::memory_order_relaxed);
if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
return false;
}
if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
cond_.close();
return false;
}
return valid();
}
void close() noexcept {
cond_.close();
lock_.close();
}
template <typename F>
bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
while ([this, &pred] {
return !quit_.load(std::memory_order_relaxed)
&& std::forward<F>(pred)();
}()) {
if (!cond_.wait(lock_, tm)) return false;
}
return true;
}
bool notify() noexcept {
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
return cond_.notify(lock_);
}
bool broadcast() noexcept {
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
return cond_.broadcast(lock_);
}
bool quit_waiting() {
quit_.store(true, std::memory_order_release);
return broadcast();
}
};
} // namespace detail
} // namespace ipc
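
A minimal sketch of how the waiter above pairs with a predicate; the header path and surrounding code are assumptions:

#include "libipc/waiter.h"   // include path assumed
#include <atomic>

void wait_side(ipc::detail::waiter &w, std::atomic<bool> &ready) {
    // blocks while the predicate holds (or until the default timeout elapses)
    w.wait_if([&] { return !ready.load(std::memory_order_acquire); });
}

void notify_side(ipc::detail::waiter &w, std::atomic<bool> &ready) {
    ready.store(true, std::memory_order_release);
    w.broadcast();           // wake every waiter; quit_waiting() also stops them
}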


@@ -0,0 +1,3 @@
https://github.com/mutouyun/cpp-ipc
A high-performance inter-process communication library using shared memory on Linux/Windows.
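
A minimal end-to-end sketch of the public API, based on the upstream project's README; the channel name, role tags, and the default send/recv timeouts are assumptions here:

#include "libipc/ipc.h"
#include <iostream>
#include <string>

void sender_process() {
    ipc::channel cc {"demo-channel", ipc::sender};
    std::string msg = "hello ipc";
    cc.send(msg.data(), msg.size() + 1);          // blocks until a receiver takes it
}

void receiver_process() {
    ipc::channel cc {"demo-channel", ipc::receiver};
    ipc::buff_t buf = cc.recv();                  // blocks until data arrives
    if (!buf.empty())
        std::cout << static_cast<char const *>(buf.data()) << "\n";
}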

File diff too large to display; diff not loaded.


@@ -0,0 +1,316 @@
// jpgd.h - C++ class for JPEG decompression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
#ifndef JPEG_DECODER_H
#define JPEG_DECODER_H
#include <stdlib.h>
#include <stdio.h>
#include <setjmp.h>
namespace jpgd
{
typedef unsigned char uint8;
typedef signed short int16;
typedef unsigned short uint16;
typedef unsigned int uint;
typedef signed int int32;
// Loads a JPEG image from a memory buffer or a file.
// req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA).
// On return, width/height will be set to the image's dimensions, and actual_comps will be set to either 1 (grayscale) or 3 (RGB).
// Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly.
// Requesting an 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp.
// BEGIN EPIC MOD
//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps);
unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format);
// END EPIC MOD
unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps);
// Success/failure error codes.
enum jpgd_status
{
JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1,
JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE,
JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS,
JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH,
JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER,
JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS,
JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE,
JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR,
JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM
};
// Input stream interface.
// Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available.
// The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set.
// If the input stream contains data after the JPEG stream's EOI (end of image) marker, it will probably be pulled into the internal buffer.
// Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding.
class jpeg_decoder_stream
{
public:
jpeg_decoder_stream() { }
virtual ~jpeg_decoder_stream() { }
// The read() method is called when the internal input buffer is empty.
// Parameters:
// pBuf - input buffer
// max_bytes_to_read - maximum bytes that can be written to pBuf
// pEOF_flag - set this to true if at end of stream (no more bytes remaining)
// Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0).
// Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full.
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0;
};
// stdio FILE stream class.
class jpeg_decoder_file_stream : public jpeg_decoder_stream
{
jpeg_decoder_file_stream(const jpeg_decoder_file_stream &);
jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &);
FILE *m_pFile;
bool m_eof_flag, m_error_flag;
public:
jpeg_decoder_file_stream();
virtual ~jpeg_decoder_file_stream();
bool open(const char *Pfilename);
void close();
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
};
// Memory stream class.
class jpeg_decoder_mem_stream : public jpeg_decoder_stream
{
const uint8 *m_pSrc_data;
uint m_ofs, m_size;
public:
jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { }
jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { }
virtual ~jpeg_decoder_mem_stream() { }
bool open(const uint8 *pSrc_data, uint size);
void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; }
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
};
// Loads JPEG file from a jpeg_decoder_stream.
unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps);
enum
{
JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4,
JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384
};
typedef int16 jpgd_quant_t;
typedef int16 jpgd_block_t;
class jpeg_decoder
{
public:
// Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc.
// methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline.
jpeg_decoder(jpeg_decoder_stream *pStream);
~jpeg_decoder();
// Call this method after constructing the object to begin decompression.
// If JPGD_SUCCESS is returned you may then call decode() on each scanline.
int begin_decoding();
// Returns the next scan line.
// For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1).
// Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4).
// Returns JPGD_SUCCESS if a scan line has been returned.
// Returns JPGD_DONE if all scan lines have been returned.
// Returns JPGD_FAILED if an error occurred. Call get_error_code() for more info.
int decode(const void** pScan_line, uint* pScan_line_len);
inline jpgd_status get_error_code() const { return m_error_code; }
inline int get_width() const { return m_image_x_size; }
inline int get_height() const { return m_image_y_size; }
inline int get_num_components() const { return m_comps_in_frame; }
inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; }
inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); }
// Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file).
inline int get_total_bytes_read() const { return m_total_bytes_read; }
private:
jpeg_decoder(const jpeg_decoder &);
jpeg_decoder &operator =(const jpeg_decoder &);
typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int);
struct huff_tables
{
bool ac_table;
uint look_up[256];
uint look_up2[256];
uint8 code_size[256];
uint tree[512];
};
struct coeff_buf
{
uint8 *pData;
int block_num_x, block_num_y;
int block_len_x, block_len_y;
int block_size;
};
struct mem_block
{
mem_block *m_pNext;
size_t m_used_count;
size_t m_size;
char m_data[1];
};
jmp_buf m_jmp_state;
mem_block *m_pMem_blocks;
int m_image_x_size;
int m_image_y_size;
jpeg_decoder_stream *m_pStream;
int m_progressive_flag;
uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES];
uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size
uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size
jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables
int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported)
int m_comps_in_frame; // # of components in frame
int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor
int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor
int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector
int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID
int m_comp_h_blocks[JPGD_MAX_COMPONENTS];
int m_comp_v_blocks[JPGD_MAX_COMPONENTS];
int m_comps_in_scan; // # of components in scan
int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan
int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector
int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector
int m_spectral_start; // spectral selection start
int m_spectral_end; // spectral selection end
int m_successive_low; // successive approximation low
int m_successive_high; // successive approximation high
int m_max_mcu_x_size; // MCU's max. X size in pixels
int m_max_mcu_y_size; // MCU's max. Y size in pixels
int m_blocks_per_mcu;
int m_max_blocks_per_row;
int m_mcus_per_row, m_mcus_per_col;
int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU];
int m_total_lines_left; // total # lines left in image
int m_mcu_lines_left; // total # lines left in this MCU
int m_real_dest_bytes_per_scan_line;
int m_dest_bytes_per_scan_line; // rounded up
int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y)
huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES];
coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS];
coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS];
int m_eob_run;
int m_block_y_mcu[JPGD_MAX_COMPONENTS];
uint8* m_pIn_buf_ofs;
int m_in_buf_left;
int m_tem_flag;
bool m_eof_flag;
uint8 m_in_buf_pad_start[128];
uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128];
uint8 m_in_buf_pad_end[128];
int m_bits_left;
uint m_bit_buf;
int m_restart_interval;
int m_restarts_left;
int m_next_restart_num;
int m_max_mcus_per_row;
int m_max_blocks_per_mcu;
int m_expanded_blocks_per_mcu;
int m_expanded_blocks_per_row;
int m_expanded_blocks_per_component;
bool m_freq_domain_chroma_upsample;
int m_max_mcus_per_col;
uint m_last_dc_val[JPGD_MAX_COMPONENTS];
jpgd_block_t* m_pMCU_coefficients;
int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU];
uint8* m_pSample_buf;
int m_crr[256];
int m_cbb[256];
int m_crg[256];
int m_cbg[256];
uint8* m_pScan_line_0;
uint8* m_pScan_line_1;
jpgd_status m_error_code;
bool m_ready_flag;
int m_total_bytes_read;
void free_all_blocks();
// BEGIN EPIC MOD
UE_NORETURN void stop_decoding(jpgd_status status);
// END EPIC MOD
void *alloc(size_t n, bool zero = false);
void word_clear(void *p, uint16 c, uint n);
void prep_in_buffer();
void read_dht_marker();
void read_dqt_marker();
void read_sof_marker();
void skip_variable_marker();
void read_dri_marker();
void read_sos_marker();
int next_marker();
int process_markers();
void locate_soi_marker();
void locate_sof_marker();
int locate_sos_marker();
void init(jpeg_decoder_stream * pStream);
void create_look_ups();
void fix_in_buffer();
void transform_mcu(int mcu_row);
void transform_mcu_expand(int mcu_row);
coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y);
inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y);
void load_next_row();
void decode_next_row();
void make_huff_table(int index, huff_tables *pH);
void check_quant_tables();
void check_huff_tables();
void calc_mcu_block_order();
int init_scan();
void init_frame();
void process_restart();
void decode_scan(pDecode_block_func decode_block_func);
void init_progressive();
void init_sequential();
void decode_start();
void decode_init(jpeg_decoder_stream * pStream);
void H2V2Convert();
void H2V1Convert();
void H1V2Convert();
void H1V1Convert();
void gray_convert();
void expanded_convert();
void find_eoi();
inline uint get_char();
inline uint get_char(bool *pPadding_flag);
inline void stuff_char(uint8 q);
inline uint8 get_octet();
inline uint get_bits(int num_bits);
inline uint get_bits_no_markers(int numbits);
inline int huff_decode(huff_tables *pH);
inline int huff_decode(huff_tables *pH, int& extrabits);
static inline uint8 clamp(int i);
static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
};
} // namespace jpgd
#endif // JPEG_DECODER_H
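
A minimal sketch of the one-shot decoding helper declared above; the file name is illustrative, and the returned buffer is assumed to be malloc-allocated and owned by the caller:

#include "jpgd.h"
#include <cstdlib>

void decode_sketch() {
    int width = 0, height = 0, actual_comps = 0;
    unsigned char *pixels = jpgd::decompress_jpeg_image_from_file(
        "input.jpg", &width, &height, &actual_comps, 3 /* req_comps: RGB */);
    if (pixels != nullptr) {
        // pixels holds width * height packed RGB triplets
        free(pixels);   // assumption: ownership passes to the caller
    }
}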

File diff too large to display; diff not loaded.


@@ -0,0 +1,172 @@
// jpge.h - C++ class for JPEG compression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
// Alex Evans: Added RGBA support, linear memory allocator.
#ifndef JPEG_ENCODER_H
#define JPEG_ENCODER_H
#include <stdint.h>
namespace jpge
{
typedef unsigned char uint8;
typedef signed short int16;
typedef signed int int32;
typedef unsigned short uint16;
typedef unsigned int uint32;
typedef unsigned int uint;
// JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
// JPEG compression parameters structure.
struct params
{
inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
inline bool check_valid() const
{
if ((m_quality < 1) || (m_quality > 100)) return false;
if ((uint)m_subsampling > (uint)H2V2) return false;
return true;
}
// Quality: 1-100, higher is better. Typical values are around 50-95.
int m_quality;
// m_subsampling:
// 0 = Y (grayscale) only
// 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
// 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
// 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
subsampling_t m_subsampling;
// Disables CbCr discrimination - only intended for testing.
// If true, the Y quantization table is also used for the CbCr channels.
bool m_no_chroma_discrim_flag;
bool m_two_pass_flag;
};
// Writes JPEG image to a file.
// num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
// Writes JPEG image to memory buffer.
// On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
// If return value is true, buf_size will be set to the size of the compressed data.
bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
// Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
// put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
class output_stream
{
public:
virtual ~output_stream() { };
virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
};
// Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
class jpeg_encoder
{
public:
jpeg_encoder();
~jpeg_encoder();
// Initializes the compressor.
// pStream: The stream object to use for writing compressed data.
// params - Compression parameters structure, defined above.
// width, height - Image dimensions.
// channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
// Returns false on out of memory or if a stream write fails.
bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
const params &get_params() const { return m_params; }
// Deinitializes the compressor, freeing any allocated memory. May be called at any time.
void deinit();
uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
inline uint get_cur_pass() { return m_pass_num; }
// Call this method with each source scanline.
// width * src_channels bytes per scanline is expected (RGB or Y format).
// You must call with NULL after all scanlines are processed to finish compression.
// Returns false on out of memory or if a stream write fails.
bool process_scanline(const void* pScanline);
private:
jpeg_encoder(const jpeg_encoder &);
jpeg_encoder &operator =(const jpeg_encoder &);
typedef int32 sample_array_t;
output_stream *m_pStream;
params m_params;
uint8 m_num_components;
uint8 m_comp_h_samp[3], m_comp_v_samp[3];
int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
int m_image_x_mcu, m_image_y_mcu;
int m_image_bpl_xlt, m_image_bpl_mcu;
int m_mcus_per_row;
int m_mcu_x, m_mcu_y;
uint8 *m_mcu_lines[16];
uint8 m_mcu_y_ofs;
sample_array_t m_sample_array[64];
int16 m_coefficient_array[64];
int32 m_quantization_tables[2][64];
uint m_huff_codes[4][256];
uint8 m_huff_code_sizes[4][256];
uint8 m_huff_bits[4][17];
uint8 m_huff_val[4][256];
uint32 m_huff_count[4][256];
int m_last_dc_val[3];
enum { JPGE_OUT_BUF_SIZE = 2048 };
uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
uint8 *m_pOut_buf;
uint m_out_buf_left;
uint32 m_bit_buffer;
uint m_bits_in;
uint8 m_pass_num;
bool m_all_stream_writes_succeeded;
void optimize_huffman_table(int table_num, int table_len);
void emit_byte(uint8 i);
void emit_word(uint i);
void emit_marker(int marker);
void emit_jfif_app0();
void emit_dqt();
void emit_sof();
void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
void emit_dhts();
void emit_sos();
void emit_markers();
void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
void compute_quant_table(int32 *dst, int16 *src);
void adjust_quant_table(int32 *dst, int32 *src);
void first_pass_init();
bool second_pass_init();
bool jpg_open(int p_x_res, int p_y_res, int src_channels);
void load_block_8_8_grey(int x);
void load_block_8_8(int x, int y, int c);
void load_block_16_8(int x, int c);
void load_block_16_8_8(int x, int c);
void load_quantized_coefficients(int component_num);
void flush_output_buffer();
void put_bits(uint bits, uint len);
void code_coefficients_pass_one(int component_num);
void code_coefficients_pass_two(int component_num);
void code_block(int component_num);
void process_mcu_row();
bool terminate_pass_one();
bool terminate_pass_two();
bool process_end_of_image();
void load_mcu(const void* src);
void clear();
void init();
};
} // namespace jpge
#endif // JPEG_ENCODER_H
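
A minimal sketch of the file-writing helper declared above; the image contents and output name are illustrative:

#include "jpge.h"
#include <vector>

void encode_sketch() {
    const int64_t w = 64, h = 64, channels = 3;            // RGB source
    std::vector<jpge::uint8> rgb(w * h * channels, 128);   // flat gray image
    jpge::params p;
    p.m_quality = 90;                                      // 1-100, higher is better
    bool ok = jpge::compress_image_to_jpeg_file(
        "out.jpg", w, h, channels, rgb.data(), p);
    (void)ok;   // false on invalid params or a failed stream write
}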


@@ -0,0 +1,3 @@
jpge.h - C++ class for JPEG compression.
Public domain, Rich Geldreich <richgel99@gmail.com>
Alex Evans: Added RGBA support, linear memory allocator.

File diff too large to display; diff not loaded.

File diff too large to display; diff not loaded.

查看文件

@@ -0,0 +1,433 @@
#pragma once
#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>
#include "libipc/def.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"
namespace ipc {
////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////
template <typename Flag>
struct prod_cons_impl;
template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
};
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
constexpr circ::u2_t cursor() const noexcept {
return 0;
}
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
return false; // full
}
std::forward<F>(f)(&(elems[cur_wt].data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
/**
* In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
* So we could just disconnect all connections of receiver, and return false.
*/
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
return false;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::forward<F>(f)(&(elems[cur_rd].data_));
std::forward<R>(out)(true);
rd_.fetch_add(1, std::memory_order_release);
return true;
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
if (circ::index_of(cur_rd) ==
circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
using flag_t = std::uint64_t;
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
circ::u2_t cur_ct, nxt_ct;
for (unsigned k = 0;;) {
cur_ct = ct_.load(std::memory_order_relaxed);
if (circ::index_of(nxt_ct = cur_ct + 1) ==
circ::index_of(rd_.load(std::memory_order_acquire))) {
return false; // full
}
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
auto* el = elems + circ::index_of(cur_ct);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
while (1) {
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
return true;
}
if ((~cac_ct) != cur_ct) {
return true;
}
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
return true;
}
wt_.store(nxt_ct, std::memory_order_release);
cur_ct = nxt_ct;
nxt_ct = cur_ct + 1;
el = elems + circ::index_of(cur_ct);
}
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
auto cur_wt = wt_.load(std::memory_order_acquire);
auto id_rd = circ::index_of(cur_rd);
auto id_wt = circ::index_of(cur_wt);
if (id_rd == id_wt) {
auto* el = elems + id_wt;
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if ((~cac_ct) != cur_wt) {
return false; // empty
}
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
wt_.store(cur_wt + 1, std::memory_order_release);
}
k = 0;
}
else {
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
enum : rc_t {
ep_mask = 0x00000000ffffffffull,
ep_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t> rc_ { 0 }; // read-counter
};
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
circ::u2_t cursor() const noexcept {
return wt_.load(std::memory_order_acquire);
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
return false; // has not finished yet
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
epoch_ += ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
if (cur == cursor()) return false; // acquire
auto* el = elems + circ::index_of(cur++);
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & ep_mask) == 0) {
std::forward<R>(out)(true);
return true;
}
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
using flag_t = std::uint64_t;
enum : rc_t {
rc_mask = 0x00000000ffffffffull,
ep_mask = 0x00ffffffffffffffull,
ep_incr = 0x0100000000000000ull,
ic_mask = 0xff000000ffffffffull,
ic_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t > rc_ { 0 }; // read-counter
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
circ::u2_t cursor() const noexcept {
return ct_.load(std::memory_order_acquire);
}
constexpr static rc_t inc_rc(rc_t rc) noexcept {
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
}
constexpr static rc_t inc_mask(rc_t rc) noexcept {
return inc_rc(rc) & ~rc_mask;
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.load(std::memory_order_acquire);
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
circ::cc_t rem_cc = cur_rc & rc_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
return false; // has not finished yet
}
else if (!rem_cc) {
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if ((cur_fl != cur_ct) && cur_fl) {
return false; // full
}
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & rc_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
if (epoch == epoch_.load(std::memory_order_acquire)) {
break;
}
else if (push(wrapper, std::forward<F>(f), elems)) {
return true;
}
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
}
ipc::yield(k);
}
// only one thread/process reaches this point at a time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E, std::size_t N>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
auto* el = elems + circ::index_of(cur);
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if (cur_fl != ~static_cast<flag_t>(cur)) {
return false; // empty
}
++cur;
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & rc_mask) == 0) {
std::forward<R>(out)(true);
el->f_ct_.store(cur + N - 1, std::memory_order_release);
return true;
}
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
bool last_one = false;
if ((last_one = (nxt_rc & rc_mask) == 0)) {
el->f_ct_.store(cur + N - 1, std::memory_order_release);
}
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)(last_one);
return true;
}
ipc::yield(k);
}
}
};
} // namespace ipc
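
For readers decoding the bit masks above, here is a small, purely illustrative Python sketch (ours, not part of the library) of the 64-bit `rc_t` layout and the `inc_rc`/`inc_mask` helpers:

```
# Layout of rc_t (64 bits): [ epoch: 8 | index counter: 24 | reader-connection flags: 32 ]
RC_MASK = 0x00000000ffffffff  # low 32 bits: per-reader connection flags
EP_MASK = 0x00ffffffffffffff  # everything below the 8-bit epoch field
EP_INCR = 0x0100000000000000  # +1 in the epoch field
IC_MASK = 0xff000000ffffffff  # epoch + reader flags (i.e. NOT the index counter)
IC_INCR = 0x0000000100000000  # +1 in the 24-bit index-counter field
U64     = 0xffffffffffffffff

def inc_rc(rc: int) -> int:
    # Increment the 24-bit index counter, leaving epoch and reader flags intact;
    # the counter wraps within its own field instead of carrying into the epoch.
    return (rc & IC_MASK) | ((rc + IC_INCR) & ~IC_MASK & U64)

def inc_mask(rc: int) -> int:
    # Same increment, but with the reader flags cleared.
    return inc_rc(rc) & ~RC_MASK & U64

rc = (0x2a << 56) | (0xffffff << 32) | 0b1011   # epoch=0x2a, counter at max, 3 readers
assert inc_rc(rc) >> 56 == 0x2a                 # epoch untouched even on counter wrap
assert (inc_rc(rc) >> 32) & 0xffffff == 0       # counter wrapped to zero
```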


@@ -0,0 +1,58 @@
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.
%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent encoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.
%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmic number of layers (does bytenet have SOTA results)?
%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
%\begin{table}[h!]
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
%\label{tab:op_complexities}
%\begin{center}
%\vspace{-5pt}
%\scalebox{0.75}{
%\begin{tabular}{l|c|c|c}
%\hline \hline
%Layer Type & Receptive & Complexity & Sequential \\
% & Field & & Operations \\
%\hline
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
%\hline
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
%\hline
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
%\hline
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n \cdot d^2)$ & $O(1)$ \\
%\hline
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
%\hline \hline
%\end{tabular}
%}
%\end{center}
%\end{table}


@@ -0,0 +1,18 @@
Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
%\marginpar{not sure if the memory constraints are understandable here}
Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.
%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}
% Just a standard paragraph with citations, rewrite.
%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state prevents recurrent models from processing multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously, and they haven't been used on datasets that are of the scale of the web. What's the largest dataset we have? Talk about Nvidia and possibly others' efforts to speed up things, and possibly other efforts that alleviate this, but are still limited by its computational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet, facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term memory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.


@@ -0,0 +1,155 @@
\begin{figure}
\centering
\includegraphics[scale=0.6]{Figures/ModalNet-21}
\caption{The Transformer - model architecture.}
\label{fig:model-arch}
\end{figure}
% Although the primary workhorse of our model is attention,
%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.
Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.
\subsection{Encoder and Decoder Stacks}
\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
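
As a concrete reading of the $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$ rule above, here is a minimal PyTorch sketch (the wrapper name and shapes are our illustration, not from the paper):

```
import torch
import torch.nn as nn

class SublayerConnection(nn.Module):
    # Wraps any sub-layer (self-attention or feed-forward) with the
    # residual-then-normalization pattern: LayerNorm(x + Sublayer(x)).
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, sublayer) -> torch.Tensor:
        return self.norm(x + sublayer(x))

# usage: x has shape (batch, seq_len, d_model)
block = SublayerConnection(512)
x = torch.randn(2, 10, 512)
y = block(x, nn.Linear(512, 512))  # stand-in for a real sub-layer
```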
% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.
\subsection{Attention} \label{sec:attention}
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}
% \begin{figure}
% \centering
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
% \caption{Scaled Dot-Product Attention.}
% \label{fig:multi-head-att}
% \end{figure}
We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.
% Already described in the subsequent section
%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.
%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.
While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.
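A direct PyTorch transcription of the attention equation above (our sketch; the optional mask argument anticipates the decoder masking discussed below):

```
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ V

Q = torch.randn(2, 5, 64)  # (batch, queries, d_k)
K = torch.randn(2, 7, 64)  # (batch, keys, d_k)
V = torch.randn(2, 7, 64)  # (batch, keys, d_v)
out = scaled_dot_product_attention(Q, K, V)  # (2, 5, 64)
```
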
\subsubsection{Multi-Head Attention} \label{sec:multihead}
\begin{figure}
\begin{minipage}[t]{0.5\textwidth}
\centering
Scaled Dot-Product Attention \\
\vspace{0.5cm}
\includegraphics[scale=0.6]{Figures/ModalNet-19}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
Multi-Head Attention \\
\vspace{0.1cm}
\includegraphics[scale=0.6]{Figures/ModalNet-20}
\end{minipage}
% \centering
\caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
\label{fig:multi-head-att}
\end{figure}
Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
\begin{align*}
\mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\
\text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\
\end{align*}
Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.
%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.
In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
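A compact PyTorch sketch of this computation (our illustration; the per-head projections $W^Q_i$, $W^K_i$, $W^V_i$ are packed into single linear layers, which is an equivalent formulation):

```
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    # h parallel heads with d_k = d_v = d_model / h, concatenated and
    # projected by W^O, following the MultiHead formula above.
    def __init__(self, d_model: int = 512, h: int = 8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        self.w_q = nn.Linear(d_model, d_model)  # packs all W^Q_i
        self.w_k = nn.Linear(d_model, d_model)  # packs all W^K_i
        self.w_v = nn.Linear(d_model, d_model)  # packs all W^V_i
        self.w_o = nn.Linear(d_model, d_model)  # W^O

    def forward(self, q, k, v):
        B = q.size(0)
        def split(x):  # (B, T, d_model) -> (B, h, T, d_k)
            return x.view(B, -1, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5
        heads = torch.softmax(scores, dim=-1) @ v  # (B, h, T, d_k)
        concat = heads.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_k)
        return self.w_o(concat)

mha = MultiHeadAttention()
out = mha(torch.randn(2, 5, 512), torch.randn(2, 7, 512), torch.randn(2, 7, 512))
```
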
\subsubsection{Applications of Attention in our Model}
The Transformer uses multi-head attention in three different ways:
\begin{itemize}
\item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.
\item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
\item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}.
\end{itemize}
\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
\begin{equation}
\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
\end{equation}
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$.
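In PyTorch this network is simply the following (a sketch with the stated dimensions; the linear layers act on the last axis, so the same weights are applied at every position):

```
import torch.nn as nn

# FFN(x) = max(0, x W1 + b1) W2 + b2, with d_model = 512 and d_ff = 2048
ffn = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)
```
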
%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.
%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.
%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as
%\begin{equation*} \label{eq:attention}
% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq).
%\end{equation*}
%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has its own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encoder self-attention, queries in encoder layer $i$ attend to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $-\infty$ to the softmax logits in positions $j+1$ to query length for query position $l$.
%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
%\marginpar{}
\subsection{Embeddings and Softmax}
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.
\subsection{Positional Encoding}
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.
In this work, we use sine and cosine functions of different frequencies:
\begin{align*}
PE_{(pos,2i)} &= \sin(pos / 10000^{2i/\dmodel}) \\
PE_{(pos,2i+1)} &= \cos(pos / 10000^{2i/\dmodel})
\end{align*}
where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
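
A NumPy sketch of these sinusoidal encodings (ours, for illustration; `max_len` and the even `d_model` are example assumptions):

```
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    # PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(100, 512)  # added to the input embeddings
```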


@@ -0,0 +1,45 @@
\pagebreak
\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}
In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):
\begin{align*}
\mathrm{FFN}(x, W_1, W_2) &= \mathrm{ReLU}(xW_1)W_2 \\
A(q, K, V) &= \mathrm{Softmax}(qK^T)V
\end{align*}
Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.
%the compatibility function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK^T)_i$.
Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in Section~\ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{\dmodel}$ in order to be more similar to activations.
In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.
In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model.
Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}.
\begin{table}[h]
\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
\label{tab:parameter_attention}
\begin{center}
\vspace{-2mm}
%\scalebox{1.0}{
\begin{tabular}{c|cccccc|cccc}
\hline\rule{0pt}{2.0ex}
& \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
\multirow{2}{*}{$n_p$} &
PPL & BLEU & params & training\\
& & & & & & & (dev) & (dev) & $\times10^6$ & time \\
\hline\rule{0pt}{2.0ex}
base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
\hline\rule{0pt}{2.0ex}
AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\
AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
\hline
\end{tabular}
%}
\end{center}
\end{table}
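
To make the construction concrete, here is a single-head PyTorch sketch of attention over parameters (the class name and the single-head simplification are ours; the experiments above use the multi-head form with the dimensions listed in the table):

```
import torch
import torch.nn as nn

class AttentionOverParameters(nn.Module):
    # One head of "attention over parameters": the keys and values are
    # trainable matrices rather than projections of a previous layer.
    # Defaults follow the text (d_pk = d_pv = 64, n_p = 1536 pairs).
    def __init__(self, d_model: int = 512, d_k: int = 64, n_p: int = 1536):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_k)
        self.out_proj = nn.Linear(d_k, d_model)
        # Parameter keys/values, scaled up by sqrt(d_model) to resemble activations.
        self.K = nn.Parameter(torch.randn(n_p, d_k) * d_model ** 0.5)
        self.V = nn.Parameter(torch.randn(n_p, d_k) * d_model ** 0.5)

    def forward(self, x):
        q = self.q_proj(x)                                       # (B, T, d_k)
        attn = torch.softmax(q @ self.K.t() / q.size(-1) ** 0.5, dim=-1)
        return self.out_proj(attn @ self.V)                      # (B, T, d_model)
```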


@@ -0,0 +1,8 @@
The progenitor of ChatGPT: "Attention Is All You Need"
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
The actual abstract is as follows:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
https://arxiv.org/abs/1706.03762


@@ -0,0 +1,2 @@
from stable_baselines3.dqn.dqn import DQN
from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy


@@ -0,0 +1,245 @@
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import gym
import numpy as np
import torch as th
from torch.nn import functional as F
from stable_baselines3.common import logger
from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
from stable_baselines3.common.preprocessing import maybe_transpose
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
from stable_baselines3.dqn.policies import DQNPolicy
class DQN(OffPolicyAlgorithm):
"""
Deep Q-Network (DQN)
Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
Default hyperparameters are taken from the nature paper,
except for the optimizer and learning rate that were taken from Stable Baselines defaults.
:param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
:param env: The environment to learn from (if registered in Gym, can be str)
:param learning_rate: The learning rate, it can be a function
of the current progress remaining (from 1 to 0)
:param buffer_size: size of the replay buffer
:param learning_starts: how many steps of the model to collect transitions for before learning starts
:param batch_size: Minibatch size for each gradient update
:param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update
:param gamma: the discount factor
:param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
like ``(5, "step")`` or ``(2, "episode")``.
:param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
Set to ``-1`` means to do as many gradient steps as steps done in the environment
during the rollout.
:param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
at a cost of more complexity.
See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
:param target_update_interval: update the target network every ``target_update_interval``
environment steps.
:param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
:param exploration_initial_eps: initial value of random action probability
:param exploration_final_eps: final value of random action probability
:param max_grad_norm: The maximum value for the gradient clipping
:param tensorboard_log: the log location for tensorboard (if None, no logging)
:param create_eval_env: Whether to create a second environment that will be
used for evaluating the agent periodically. (Only available when passing string for the environment)
:param policy_kwargs: additional arguments to be passed to the policy on creation
:param verbose: the verbosity level: 0 no output, 1 info, 2 debug
:param seed: Seed for the pseudo random generators
:param device: Device (cpu, cuda, ...) on which the code should be run.
Setting it to auto, the code will be run on the GPU if possible.
:param _init_setup_model: Whether or not to build the network at the creation of the instance
"""
def __init__(
self,
policy: Union[str, Type[DQNPolicy]],
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 1e-4,
buffer_size: int = 1000000,
learning_starts: int = 50000,
batch_size: Optional[int] = 32,
tau: float = 1.0,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = 4,
gradient_steps: int = 1,
optimize_memory_usage: bool = False,
target_update_interval: int = 10000,
exploration_fraction: float = 0.1,
exploration_initial_eps: float = 1.0,
exploration_final_eps: float = 0.05,
max_grad_norm: float = 10,
tensorboard_log: Optional[str] = None,
create_eval_env: bool = False,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: Union[th.device, str] = "auto",
_init_setup_model: bool = True,
):
super(DQN, self).__init__(
policy,
env,
DQNPolicy,
learning_rate,
buffer_size,
learning_starts,
batch_size,
tau,
gamma,
train_freq,
gradient_steps,
action_noise=None, # No action noise
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
device=device,
create_eval_env=create_eval_env,
seed=seed,
sde_support=False,
optimize_memory_usage=optimize_memory_usage,
supported_action_spaces=(gym.spaces.Discrete,),
)
self.exploration_initial_eps = exploration_initial_eps
self.exploration_final_eps = exploration_final_eps
self.exploration_fraction = exploration_fraction
self.target_update_interval = target_update_interval
self.max_grad_norm = max_grad_norm
# "epsilon" for the epsilon-greedy exploration
self.exploration_rate = 0.0
# Linear schedule will be defined in `_setup_model()`
self.exploration_schedule = None
self.q_net, self.q_net_target = None, None
if _init_setup_model:
self._setup_model()
def _setup_model(self) -> None:
super(DQN, self)._setup_model()
self._create_aliases()
self.exploration_schedule = get_linear_fn(
self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
)
def _create_aliases(self) -> None:
self.q_net = self.policy.q_net
self.q_net_target = self.policy.q_net_target
def _on_step(self) -> None:
"""
Update the exploration rate and target network if needed.
This method is called in ``collect_rollouts()`` after each step in the environment.
"""
if self.num_timesteps % self.target_update_interval == 0:
polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
logger.record("rollout/exploration rate", self.exploration_rate)
def train(self, gradient_steps: int, batch_size: int = 100) -> None:
# Update learning rate according to schedule
self._update_learning_rate(self.policy.optimizer)
losses = []
for _ in range(gradient_steps):
# Sample replay buffer
replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
with th.no_grad():
# Compute the next Q-values using the target network
next_q_values = self.q_net_target(replay_data.next_observations)
# Follow greedy policy: use the one with the highest value
next_q_values, _ = next_q_values.max(dim=1)
# Avoid potential broadcast issue
next_q_values = next_q_values.reshape(-1, 1)
# 1-step TD target
target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
# Get current Q-values estimates
current_q_values = self.q_net(replay_data.observations)
# Retrieve the q-values for the actions from the replay buffer
current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
# Compute Huber loss (less sensitive to outliers)
loss = F.smooth_l1_loss(current_q_values, target_q_values)
losses.append(loss.item())
# Optimize the policy
self.policy.optimizer.zero_grad()
loss.backward()
# Clip gradient norm
th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
self.policy.optimizer.step()
# Increase update counter
self._n_updates += gradient_steps
logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
logger.record("train/loss", np.mean(losses))
def predict(
self,
observation: np.ndarray,
state: Optional[np.ndarray] = None,
mask: Optional[np.ndarray] = None,
deterministic: bool = False,
) -> Tuple[np.ndarray, Optional[np.ndarray]]:
"""
Overrides the base_class predict function to include epsilon-greedy exploration.
:param observation: the input observation
:param state: The last states (can be None, used in recurrent policies)
:param mask: The last masks (can be None, used in recurrent policies)
:param deterministic: Whether or not to return deterministic actions.
:return: the model's action and the next state
(used in recurrent policies)
"""
if not deterministic and np.random.rand() < self.exploration_rate:
if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
n_batch = observation.shape[0]
action = np.array([self.action_space.sample() for _ in range(n_batch)])
else:
action = np.array(self.action_space.sample())
else:
action, state = self.policy.predict(observation, state, mask, deterministic)
return action, state
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 4,
eval_env: Optional[GymEnv] = None,
eval_freq: int = -1,
n_eval_episodes: int = 5,
tb_log_name: str = "DQN",
eval_log_path: Optional[str] = None,
reset_num_timesteps: bool = True,
) -> OffPolicyAlgorithm:
return super(DQN, self).learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
eval_env=eval_env,
eval_freq=eval_freq,
n_eval_episodes=n_eval_episodes,
tb_log_name=tb_log_name,
eval_log_path=eval_log_path,
reset_num_timesteps=reset_num_timesteps,
)
def _excluded_save_params(self) -> List[str]:
return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
state_dicts = ["policy", "policy.optimizer"]
return state_dicts, []
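
For reference, a minimal training run with this class (standard stable-baselines3 usage; the CartPole environment and step count are just example choices):

```
from stable_baselines3 import DQN

model = DQN("MlpPolicy", "CartPole-v1", learning_rate=1e-4, verbose=1)
model.learn(total_timesteps=10_000)
model.save("dqn_cartpole")  # later: model = DQN.load("dqn_cartpole")
```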


@@ -0,0 +1,237 @@
from typing import Any, Dict, List, Optional, Type
import gym
import torch as th
from torch import nn
from stable_baselines3.common.policies import BasePolicy, register_policy
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
from stable_baselines3.common.type_aliases import Schedule
class QNetwork(BasePolicy):
"""
Action-Value (Q-Value) network for DQN
:param observation_space: Observation space
:param action_space: Action space
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
features_extractor: nn.Module,
features_dim: int,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
normalize_images: bool = True,
):
super(QNetwork, self).__init__(
observation_space,
action_space,
features_extractor=features_extractor,
normalize_images=normalize_images,
)
if net_arch is None:
net_arch = [64, 64]
self.net_arch = net_arch
self.activation_fn = activation_fn
self.features_extractor = features_extractor
self.features_dim = features_dim
self.normalize_images = normalize_images
action_dim = self.action_space.n # number of actions
q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
self.q_net = nn.Sequential(*q_net)
def forward(self, obs: th.Tensor) -> th.Tensor:
"""
Predict the q-values.
:param obs: Observation
:return: The estimated Q-Value for each action.
"""
return self.q_net(self.extract_features(obs))
def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
q_values = self.forward(observation)
# Greedy action
action = q_values.argmax(dim=1).reshape(-1)
return action
def _get_constructor_parameters(self) -> Dict[str, Any]:
data = super()._get_constructor_parameters()
data.update(
dict(
net_arch=self.net_arch,
features_dim=self.features_dim,
activation_fn=self.activation_fn,
features_extractor=self.features_extractor,
)
)
return data
class DQNPolicy(BasePolicy):
"""
Policy class with Q-Value Net and target net for DQN
:param observation_space: Observation space
:param action_space: Action space
:param lr_schedule: Learning rate schedule (could be constant)
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param features_extractor_class: Features extractor to use.
:param features_extractor_kwargs: Keyword arguments
to pass to the features extractor.
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
:param optimizer_class: The optimizer to use,
``th.optim.Adam`` by default
:param optimizer_kwargs: Additional keyword arguments,
excluding the learning rate, to pass to the optimizer
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
lr_schedule: Schedule,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
):
super(DQNPolicy, self).__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
)
if net_arch is None:
if features_extractor_class == FlattenExtractor:
net_arch = [64, 64]
else:
net_arch = []
self.net_arch = net_arch
self.activation_fn = activation_fn
self.normalize_images = normalize_images
self.net_args = {
"observation_space": self.observation_space,
"action_space": self.action_space,
"net_arch": self.net_arch,
"activation_fn": self.activation_fn,
"normalize_images": normalize_images,
}
self.q_net, self.q_net_target = None, None
self._build(lr_schedule)
def _build(self, lr_schedule: Schedule) -> None:
"""
Create the network and the optimizer.
:param lr_schedule: Learning rate schedule
lr_schedule(1) is the initial learning rate
"""
self.q_net = self.make_q_net()
self.q_net_target = self.make_q_net()
self.q_net_target.load_state_dict(self.q_net.state_dict())
# Setup optimizer with initial learning rate
self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
def make_q_net(self) -> QNetwork:
# Make sure we always have separate networks for features extractors etc
net_args = self._update_features_extractor(self.net_args, features_extractor=None)
return QNetwork(**net_args).to(self.device)
def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
return self._predict(obs, deterministic=deterministic)
def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
return self.q_net._predict(obs, deterministic=deterministic)
def _get_constructor_parameters(self) -> Dict[str, Any]:
data = super()._get_constructor_parameters()
data.update(
dict(
net_arch=self.net_args["net_arch"],
activation_fn=self.net_args["activation_fn"],
lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
optimizer_class=self.optimizer_class,
optimizer_kwargs=self.optimizer_kwargs,
features_extractor_class=self.features_extractor_class,
features_extractor_kwargs=self.features_extractor_kwargs,
)
)
return data
MlpPolicy = DQNPolicy
class CnnPolicy(DQNPolicy):
"""
Policy class for DQN when using images as input.
:param observation_space: Observation space
:param action_space: Action space
:param lr_schedule: Learning rate schedule (could be constant)
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param features_extractor_class: Features extractor to use.
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
:param optimizer_class: The optimizer to use,
``th.optim.Adam`` by default
:param optimizer_kwargs: Additional keyword arguments,
excluding the learning rate, to pass to the optimizer
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
lr_schedule: Schedule,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
):
super(CnnPolicy, self).__init__(
observation_space,
action_space,
lr_schedule,
net_arch,
activation_fn,
features_extractor_class,
features_extractor_kwargs,
normalize_images,
optimizer_class,
optimizer_kwargs,
)
register_policy("MlpPolicy", MlpPolicy)
register_policy("CnnPolicy", CnnPolicy)


@@ -0,0 +1,2 @@
GitHub: stable-baselines3
https://github.com/DLR-RM/stable-baselines3


@@ -0,0 +1,27 @@
"In practice, we found that a high-entropy initial state is more likely to increase the speed of training.
The entropy is calculated by:
$$H=-\sum_{k=1}^{n_k} p(k) \cdot \log p(k), \quad p(k)=\frac{|A_k|}{|\mathcal{A}|}$$
where $H$ is the entropy, $|A_k|$ is the number of agent nodes in the $k$-th cluster, and $|\mathcal{A}|$ is the total number of agents.
To ensure the Cooperation Graph initialization has higher entropy,
we randomly generate multiple initial states,
rank them by entropy, and pick the one with maximum $H$."
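
A short Python sketch of this selection rule (the agent and cluster counts are our example values):

```
import numpy as np

def cluster_entropy(cluster_sizes):
    # H = -sum_k p(k) log p(k), with p(k) = |A_k| / |A| as defined above.
    p = np.asarray(cluster_sizes, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

# Generate several random assignments of 12 agents to 4 clusters and keep
# the one with maximum entropy, as in the quoted procedure.
rng = np.random.default_rng(0)
candidates = [np.bincount(rng.integers(0, 4, size=12), minlength=4)
              for _ in range(8)]
best = max(candidates, key=cluster_entropy)
```
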
```
FROM ubuntu:latest
RUN apt-get update && \
apt-get install -y python3 python3-pip && \
rm -rf /var/lib/apt/lists/*
RUN echo '[global]' > /etc/pip.conf && \
echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
RUN pip3 install gradio requests[socks] mdtex2html
COPY . /gpt
WORKDIR /gpt
CMD ["python3", "main.py"]
```


@@ -1,70 +0,0 @@
# From project chatglm-langchain
from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import CharacterTextSplitter
import re
from typing import List
class ChineseTextSplitter(CharacterTextSplitter):
def __init__(self, pdf: bool = False, sentence_size: int = None, **kwargs):
super().__init__(**kwargs)
self.pdf = pdf
self.sentence_size = sentence_size
def split_text1(self, text: str) -> List[str]:
if self.pdf:
text = re.sub(r"\n{3,}", "\n", text)
text = re.sub(r"\s", " ", text)
text = text.replace("\n\n", "")
sent_sep_pattern = re.compile('([﹒﹔﹖﹗.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))') # del :;
sent_list = []
for ele in sent_sep_pattern.split(text):
if sent_sep_pattern.match(ele) and sent_list:
sent_list[-1] += ele
elif ele:
sent_list.append(ele)
return sent_list
def split_text(self, text: str) -> List[str]:  # TODO: this logic needs further refinement
if self.pdf:
text = re.sub(r"\n{3,}", r"\n", text)
text = re.sub('\s', " ", text)
text = re.sub("\n\n", "", text)
text = re.sub(r'([;;.!?。!?\?])([^”’])', r"\1\n\2", text) # 单字符断句符
text = re.sub(r'(\.{6})([^"’”」』])', r"\1\n\2", text) # 英文省略号
text = re.sub(r'(\{2})([^"’”」』])', r"\1\n\2", text) # 中文省略号
text = re.sub(r'([;;!?。!?\?]["’”」』]{0,2})([^;;!?,。!?\?])', r'\1\n\2', text)
# 如果双引号前有终止符,那么双引号才是句子的终点,把分句符\n放到双引号后,注意前面的几句都小心保留了双引号
text = text.rstrip() # 段尾如果有多余的\n就去掉它
# 很多规则中会考虑分号;,但是这里我把它忽略不计,破折号、英文双引号等同样忽略,需要的再做些简单调整即可。
ls = [i for i in text.split("\n") if i]
for ele in ls:
if len(ele) > self.sentence_size:
ele1 = re.sub(r'([,,.]["’”」』]{0,2})([^,,.])', r'\1\n\2', ele)
ele1_ls = ele1.split("\n")
for ele_ele1 in ele1_ls:
if len(ele_ele1) > self.sentence_size:
ele_ele2 = re.sub(r'([\n]{1,}| {2,}["’”」』]{0,2})([^\s])', r'\1\n\2', ele_ele1)
ele2_ls = ele_ele2.split("\n")
for ele_ele2 in ele2_ls:
if len(ele_ele2) > self.sentence_size:
ele_ele3 = re.sub('( ["’”」』]{0,2})([^ ])', r'\1\n\2', ele_ele2)
ele2_id = ele2_ls.index(ele_ele2)
ele2_ls = ele2_ls[:ele2_id] + [i for i in ele_ele3.split("\n") if i] + ele2_ls[
ele2_id + 1:]
ele_id = ele1_ls.index(ele_ele1)
ele1_ls = ele1_ls[:ele_id] + [i for i in ele2_ls if i] + ele1_ls[ele_id + 1:]
id = ls.index(ele)
ls = ls[:id] + [i for i in ele1_ls if i] + ls[id + 1:]
return ls
def load_file(filepath, sentence_size):
loader = UnstructuredFileLoader(filepath, mode="elements")
textsplitter = ChineseTextSplitter(pdf=False, sentence_size=sentence_size)
docs = loader.load_and_split(text_splitter=textsplitter)
# write_check_file(filepath, docs)
return docs
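
A small usage sketch of the splitter above (our example; assumes langchain and its unstructured dependency are installed, and the sample sentence and sizes are arbitrary):

```
# Split a short Chinese string directly:
splitter = ChineseTextSplitter(pdf=False, sentence_size=100)
for chunk in splitter.split_text("第一句。第二句!第三句?"):
    print(chunk)

# Or load and split a document through langchain (hypothetical file name):
# docs = load_file("sample.txt", sentence_size=100)
```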


@@ -1,339 +0,0 @@
# From project chatglm-langchain
import os
import uuid
import tqdm
import shutil
import threading
import numpy as np
from toolbox import Singleton
from loguru import logger
from langchain.vectorstores import FAISS
from langchain.docstore.document import Document
from typing import List, Tuple
from crazy_functions.vector_fns.general_file_loader import load_file
embedding_model_dict = {
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
"ernie-base": "nghuyong/ernie-3.0-base-zh",
"text2vec-base": "shibing624/text2vec-base-chinese",
"text2vec": "GanymedeNil/text2vec-large-chinese",
}
# Embedding model name
EMBEDDING_MODEL = "text2vec"
# Embedding running device
EMBEDDING_DEVICE = "cpu"
# 基于上下文的prompt模版,请务必保留"{question}"和"{context}"
PROMPT_TEMPLATE = """已知信息:
{context}
根据上述已知信息,简洁和专业的来回答用户的问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题” 或 “没有提供足够的相关信息”,不允许在答案中添加编造成分,答案请使用中文。 问题是:{question}"""
# 文本分句长度
SENTENCE_SIZE = 100
# 匹配后单段上下文长度
CHUNK_SIZE = 250
# LLM input history length
LLM_HISTORY_LEN = 3
# return top-k text chunk from vector store
VECTOR_SEARCH_TOP_K = 5
# 知识检索内容相关度 Score, 数值范围约为0-1100,如果为0,则不生效,经测试设置为小于500时,匹配结果更精准
VECTOR_SEARCH_SCORE_THRESHOLD = 0
NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data")
FLAG_USER_NAME = uuid.uuid4().hex
# 是否开启跨域,默认为False,如果需要开启,请设置为True
# is open cross domain
OPEN_CROSS_DOMAIN = False
def similarity_search_with_score_by_vector(
self, embedding: List[float], k: int = 4
) -> List[Tuple[Document, float]]:
def seperate_list(ls: List[int]) -> List[List[int]]:
lists = []
ls1 = [ls[0]]
for i in range(1, len(ls)):
if ls[i - 1] + 1 == ls[i]:
ls1.append(ls[i])
else:
lists.append(ls1)
ls1 = [ls[i]]
lists.append(ls1)
return lists
scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k)
docs = []
id_set = set()
store_len = len(self.index_to_docstore_id)
for j, i in enumerate(indices[0]):
if i == -1 or 0 < self.score_threshold < scores[0][j]:
# This happens when not enough docs are returned.
continue
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not self.chunk_conent:
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
doc.metadata["score"] = int(scores[0][j])
docs.append(doc)
continue
id_set.add(i)
docs_len = len(doc.page_content)
        for k in range(1, max(i, store_len - i)):  # note: shadows the top-k argument `k`, which is not used again
break_flag = False
for l in [i + k, i - k]:
if 0 <= l < len(self.index_to_docstore_id):
_id0 = self.index_to_docstore_id[l]
doc0 = self.docstore.search(_id0)
if docs_len + len(doc0.page_content) > self.chunk_size:
break_flag = True
break
elif doc0.metadata["source"] == doc.metadata["source"]:
docs_len += len(doc0.page_content)
id_set.add(l)
if break_flag:
break
if not self.chunk_conent:
return docs
if len(id_set) == 0 and self.score_threshold > 0:
return []
id_list = sorted(list(id_set))
id_lists = seperate_list(id_list)
for id_seq in id_lists:
for id in id_seq:
if id == id_seq[0]:
_id = self.index_to_docstore_id[id]
doc = self.docstore.search(_id)
else:
_id0 = self.index_to_docstore_id[id]
doc0 = self.docstore.search(_id0)
doc.page_content += " " + doc0.page_content
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
doc_score = min([scores[0][id] for id in [indices[0].tolist().index(i) for i in id_seq if i in indices[0]]])
doc.metadata["score"] = int(doc_score)
docs.append(doc)
return docs
class LocalDocQA:
llm: object = None
embeddings: object = None
top_k: int = VECTOR_SEARCH_TOP_K
chunk_size: int = CHUNK_SIZE
chunk_conent: bool = True
score_threshold: int = VECTOR_SEARCH_SCORE_THRESHOLD
def init_cfg(self,
top_k=VECTOR_SEARCH_TOP_K,
):
self.llm = None
self.top_k = top_k
def init_knowledge_vector_store(self,
filepath,
vs_path: str or os.PathLike = None,
sentence_size=SENTENCE_SIZE,
text2vec=None):
loaded_files = []
failed_files = []
if isinstance(filepath, str):
if not os.path.exists(filepath):
logger.error("路径不存在")
return None
elif os.path.isfile(filepath):
file = os.path.split(filepath)[-1]
try:
docs = load_file(filepath, SENTENCE_SIZE)
logger.info(f"{file} 已成功加载")
loaded_files.append(filepath)
except Exception as e:
logger.error(e)
logger.error(f"{file} 未能成功加载")
return None
elif os.path.isdir(filepath):
docs = []
for file in tqdm(os.listdir(filepath), desc="加载文件"):
fullfilepath = os.path.join(filepath, file)
try:
docs += load_file(fullfilepath, SENTENCE_SIZE)
loaded_files.append(fullfilepath)
except Exception as e:
logger.error(e)
failed_files.append(file)
if len(failed_files) > 0:
logger.error("以下文件未能成功加载:")
for file in failed_files:
logger.error(f"{file}\n")
else:
docs = []
for file in filepath:
docs += load_file(file, SENTENCE_SIZE)
logger.info(f"{file} 已成功加载")
loaded_files.append(file)
if len(docs) > 0:
logger.info("文件加载完毕,正在生成向量库")
if vs_path and os.path.isdir(vs_path):
try:
self.vector_store = FAISS.load_local(vs_path, text2vec)
self.vector_store.add_documents(docs)
except:
self.vector_store = FAISS.from_documents(docs, text2vec)
else:
self.vector_store = FAISS.from_documents(docs, text2vec) # docs 为Document列表
self.vector_store.save_local(vs_path)
return vs_path, loaded_files
else:
raise RuntimeError("文件加载失败,请检查文件格式是否正确")
def get_loaded_file(self, vs_path):
ds = self.vector_store.docstore
return set([ds._dict[k].metadata['source'].split(vs_path)[-1] for k in ds._dict])
# query 查询内容
# vs_path 知识库路径
# chunk_conent 是否启用上下文关联
# score_threshold 搜索匹配score阈值
# vector_search_top_k 搜索知识库内容条数,默认搜索5条结果
# chunk_sizes 匹配单段内容的连接上下文长度
def get_knowledge_based_conent_test(self, query, vs_path, chunk_conent,
score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
vector_search_top_k=VECTOR_SEARCH_TOP_K, chunk_size=CHUNK_SIZE,
text2vec=None):
self.vector_store = FAISS.load_local(vs_path, text2vec)
self.vector_store.chunk_conent = chunk_conent
self.vector_store.score_threshold = score_threshold
self.vector_store.chunk_size = chunk_size
embedding = self.vector_store.embedding_function.embed_query(query)
related_docs_with_score = similarity_search_with_score_by_vector(self.vector_store, embedding, k=vector_search_top_k)
if not related_docs_with_score:
response = {"query": query,
"source_documents": []}
return response, ""
# prompt = f"{query}. You should answer this question using information from following documents: \n\n"
prompt = f"{query}. 你必须利用以下文档中包含的信息回答这个问题: \n\n---\n\n"
prompt += "\n\n".join([f"({k}): " + doc.page_content for k, doc in enumerate(related_docs_with_score)])
prompt += "\n\n---\n\n"
prompt = prompt.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
# logger.info(prompt)
response = {"query": query, "source_documents": related_docs_with_score}
return response, prompt
def construct_vector_store(vs_id, vs_path, files, sentence_size, history, one_conent, one_content_segmentation, text2vec):
for file in files:
assert os.path.exists(file), "输入文件不存在:" + file
import nltk
if NLTK_DATA_PATH not in nltk.data.path: nltk.data.path = [NLTK_DATA_PATH] + nltk.data.path
local_doc_qa = LocalDocQA()
local_doc_qa.init_cfg()
filelist = []
if not os.path.exists(os.path.join(vs_path, vs_id)):
os.makedirs(os.path.join(vs_path, vs_id))
for file in files:
file_name = file.name if not isinstance(file, str) else file
filename = os.path.split(file_name)[-1]
shutil.copyfile(file_name, os.path.join(vs_path, vs_id, filename))
filelist.append(os.path.join(vs_path, vs_id, filename))
vs_path, loaded_files = local_doc_qa.init_knowledge_vector_store(filelist, os.path.join(vs_path, vs_id), sentence_size, text2vec)
if len(loaded_files):
file_status = f"已添加 {''.join([os.path.split(i)[-1] for i in loaded_files if i])} 内容至知识库,并已加载知识库,请开始提问"
else:
pass
# file_status = "文件未成功加载,请重新上传文件"
# logger.info(file_status)
return local_doc_qa, vs_path
@Singleton
class knowledge_archive_interface():
def __init__(self) -> None:
self.threadLock = threading.Lock()
self.current_id = ""
self.kai_path = None
self.qa_handle = None
self.text2vec_large_chinese = None
def get_chinese_text2vec(self):
if self.text2vec_large_chinese is None:
# < -------------------预热文本向量化模组--------------- >
from toolbox import ProxyNetworkActivate
logger.info('Checking Text2vec ...')
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with ProxyNetworkActivate('Download_LLM'): # 临时地激活代理网络
self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
return self.text2vec_large_chinese
def feed_archive(self, file_manifest, vs_path, id="default"):
self.threadLock.acquire()
# import uuid
self.current_id = id
self.qa_handle, self.kai_path = construct_vector_store(
vs_id=self.current_id,
vs_path=vs_path,
files=file_manifest,
sentence_size=100,
history=[],
one_conent="",
one_content_segmentation="",
text2vec = self.get_chinese_text2vec(),
)
self.threadLock.release()
def get_current_archive_id(self):
return self.current_id
def get_loaded_file(self, vs_path):
return self.qa_handle.get_loaded_file(vs_path)
def answer_with_archive_by_id(self, txt, id, vs_path):
self.threadLock.acquire()
if not self.current_id == id:
self.current_id = id
self.qa_handle, self.kai_path = construct_vector_store(
vs_id=self.current_id,
vs_path=vs_path,
files=[],
sentence_size=100,
history=[],
one_conent="",
one_content_segmentation="",
text2vec = self.get_chinese_text2vec(),
)
VECTOR_SEARCH_SCORE_THRESHOLD = 0
VECTOR_SEARCH_TOP_K = 4
CHUNK_SIZE = 512
resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
query = txt,
vs_path = self.kai_path,
score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
vector_search_top_k=VECTOR_SEARCH_TOP_K,
chunk_conent=True,
chunk_size=CHUNK_SIZE,
text2vec = self.get_chinese_text2vec(),
)
self.threadLock.release()
return resp, prompt
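
A minimal usage sketch (the paths and knowledge-base id are hypothetical; the first call downloads the text2vec embedding weights). Because the class is wrapped in `Singleton`, constructing it repeatedly returns the same instance:

```
kai = knowledge_archive_interface()
kai.feed_archive(file_manifest=["./docs/manual.txt"], vs_path="./vs_store", id="default")
resp, prompt = kai.answer_with_archive_by_id("如何配置代理?", id="default", vs_path="./vs_store")
print(prompt)                    # the assembled knowledge-grounded prompt
print(resp["source_documents"])  # the retrieved chunks with scores
```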


@@ -1,114 +0,0 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, disable_auto_promotion
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
import copy, json, pickle, os, sys, time
def read_avail_plugin_enum():
from crazy_functional import get_crazy_functions
plugin_arr = get_crazy_functions()
    # remove plugins without explanation
plugin_arr = {k:v for k, v in plugin_arr.items() if ('Info' in v) and ('Function' in v)}
plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
prompt = "\n\nThe defination of PluginEnum:\nPluginEnum=" + prompt
return prompt, plugin_arr_dict, plugin_arr_dict_parse
def wrap_code(txt):
txt = txt.replace('```','')
return f"\n```\n{txt}\n```\n"
def have_any_recent_upload_files(chatbot):
_5min = 5 * 60
if not chatbot: return False # chatbot is None
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
if not most_recent_uploaded: return False # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
else: return False # most_recent_uploaded is too old
def get_recent_file_prompt_support(chatbot):
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded['path']
prompt = "\nAdditional Information:\n"
prompt = "In case that this plugin requires a path or a file as argument,"
prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`"
prompt += f"Only use it when necessary, otherwise, you can ignore this file."
return prompt
def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
# remove plugin_arr_enum_prompt from inputs string
inputs_show_user = inputs.replace(plugin_arr_enum_prompt, "")
inputs_show_user += plugin_arr_enum_prompt[:200] + '...'
inputs_show_user += '\n...\n'
inputs_show_user += '...\n'
inputs_show_user += '...}'
return inputs_show_user
def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
plugin_arr_enum_prompt, plugin_arr_dict, plugin_arr_dict_parse = read_avail_plugin_enum()
class Plugin(BaseModel):
plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfy user requirement most")
# ⭐ ⭐ ⭐ 选择插件
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
gpt_json_io = GptJsonIO(Plugin)
gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
gpt_json_io.format_instructions += "The plugins you are authorized to use are listed below:\n"
gpt_json_io.format_instructions += plugin_arr_enum_prompt
inputs = "Choose the correct plugin according to user requirements, the user requirement is: \n\n" + \
">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
try:
gpt_reply = run_gpt_fn(inputs, "")
plugin_sel = gpt_json_io.generate_output_auto_repair(gpt_reply, run_gpt_fn)
except JsonStringError:
msg = f"抱歉, {llm_kwargs['llm_model']}无法理解您的需求。"
msg += "请求的Prompt为\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
msg += "\n但您可以尝试再试一次\n"
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
return
if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
msg += "\n但您可以尝试再试一次\n"
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
return
# ⭐ ⭐ ⭐ 确认插件参数
if not have_any_recent_upload_files(chatbot):
appendix_info = ""
else:
appendix_info = get_recent_file_prompt_support(chatbot)
plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
class PluginExplicit(BaseModel):
plugin_selection: str = plugin_sel.plugin_selection
plugin_arg: str = Field(description="The argument of the plugin.", default="")
gpt_json_io = GptJsonIO(PluginExplicit)
gpt_json_io.format_instructions += "The information about this plugin is:" + plugin["Info"]
inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
"you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
# ⭐ ⭐ ⭐ 执行插件
fn = plugin['Function']
fn_name = fn.__name__
msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
return
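
The selection step above leans on `GptJsonIO` to coerce the model's reply into the `Plugin` schema. A stripped-down sketch of that round-trip using pydantic directly (the reply string is fabricated; the real code additionally asks the model to repair malformed JSON):

```
import json
from pydantic import BaseModel, Field

class Plugin(BaseModel):
    plugin_selection: str = Field(default="F_0000")
    reason_of_selection: str = Field(default="")

gpt_reply = '{"plugin_selection": "F_0001", "reason_of_selection": "best match"}'
try:
    plugin_sel = Plugin(**json.loads(gpt_reply))
except (json.JSONDecodeError, ValueError):
    plugin_sel = None  # GptJsonIO would instead trigger an auto-repair retry
```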


@@ -1,81 +0,0 @@
from pydantic import BaseModel, Field
from typing import List
from toolbox import update_ui_lastest_msg, get_conf
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.pydantic_io import GptJsonIO
import copy, json, pickle, os, sys
def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
if not ALLOW_RESET_CONFIG:
yield from update_ui_lastest_msg(
lastmsg=f"当前配置不允许被修改如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
chatbot=chatbot, history=history, delay=2
)
return
# ⭐ ⭐ ⭐ 读取可配置项目条目
names = {}
from enum import Enum
import config
for k, v in config.__dict__.items():
if k.startswith('__'): continue
names.update({k:k})
# if len(names) > 20: break # 限制最多前10个配置项,如果太多了会导致gpt无法理解
ConfigOptions = Enum('ConfigOptions', names)
class ModifyConfigurationIntention(BaseModel):
which_config_to_modify: ConfigOptions = Field(description="the name of the configuration to modify, you must choose from one of the ConfigOptions enum.", default=None)
new_option_value: str = Field(description="the new value of the option", default=None)
# ⭐ ⭐ ⭐ 分析用户意图
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
explicit_conf = user_intention.which_config_to_modify.value
ok = (explicit_conf in txt)
if ok:
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
chatbot=chatbot, history=history, delay=1
)
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
chatbot=chatbot, history=history, delay=2
)
# ⭐ ⭐ ⭐ 立即应用配置
from toolbox import set_conf
set_conf(explicit_conf, user_intention.new_option_value)
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,重新页面即可生效。", chatbot=chatbot, history=history, delay=1
)
else:
yield from update_ui_lastest_msg(
lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
)
def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
if not ALLOW_RESET_CONFIG:
yield from update_ui_lastest_msg(
lastmsg=f"当前配置不允许被修改如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
chatbot=chatbot, history=history, delay=2
)
return
yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
)
os.execl(sys.executable, sys.executable, *sys.argv)


@@ -1,28 +0,0 @@
import pickle
class VoidTerminalState():
def __init__(self):
self.reset_state()
def reset_state(self):
self.has_provided_explaination = False
def lock_plugin(self, chatbot):
chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
chatbot._cookies['plugin_state'] = pickle.dumps(self)
def unlock_plugin(self, chatbot):
self.reset_state()
chatbot._cookies['lock_plugin'] = None
chatbot._cookies['plugin_state'] = pickle.dumps(self)
def set_state(self, chatbot, key, value):
setattr(self, key, value)
chatbot._cookies['plugin_state'] = pickle.dumps(self)
def get_state(chatbot):
state = chatbot._cookies.get('plugin_state', None)
if state is not None: state = pickle.loads(state)
else: state = VoidTerminalState()
state.chatbot = chatbot
return state
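
A minimal round-trip sketch of the cookie-backed state above, using a stub chatbot that only provides the `_cookies` dict (the real object comes from the Gradio UI):

```
class StubChatbot:
    def __init__(self):
        self._cookies = {}

chatbot = StubChatbot()
state = get_state(chatbot)                  # no cookie yet -> fresh state
state.set_state(chatbot, 'has_provided_explaination', True)
restored = get_state(chatbot)               # unpickled from the cookie
print(restored.has_provided_explaination)   # True
```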

(This file's diff is too large to display.)


@@ -0,0 +1,186 @@
from request_llm.bridge_chatgpt import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down, get_conf
import re, requests, unicodedata, os
def download_arxiv_(url_pdf):
if 'arxiv.org' not in url_pdf:
if ('.' in url_pdf) and ('/' not in url_pdf):
new_url = 'https://arxiv.org/abs/'+url_pdf
print('下载编号:', url_pdf, '自动定位:', new_url)
# download_arxiv_(new_url)
return download_arxiv_(new_url)
else:
print('不能识别的URL')
return None
if 'abs' in url_pdf:
url_pdf = url_pdf.replace('abs', 'pdf')
url_pdf = url_pdf + '.pdf'
url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
title, other_info = get_name(_url_=url_abs)
paper_id = title.split()[0] # '[1712.00559]'
if '2' in other_info['year']:
title = other_info['year'] + ' ' + title
known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
for k in known_conf:
if k in other_info['comment']:
title = k + ' ' + title
download_dir = './gpt_log/arxiv/'
os.makedirs(download_dir, exist_ok=True)
    title_str = title.replace('?', '')\
        .replace(':', '')\
        .replace('\"', '')\
        .replace('\n', '')\
        .replace('  ', ' ')\
        .replace('  ', ' ')  # applied twice to collapse runs of spaces
requests_pdf_url = url_pdf
file_path = download_dir+title_str
# if os.path.exists(file_path):
# print('返回缓存文件')
# return './gpt_log/arxiv/'+title_str
print('下载中')
proxies, = get_conf('proxies')
r = requests.get(requests_pdf_url, proxies=proxies)
with open(file_path, 'wb+') as f:
f.write(r.content)
print('下载完成')
# print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf))
# subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True)
x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
    x = x.replace('?', '')\
        .replace(':', '')\
        .replace('\"', '')\
        .replace('\n', '')\
        .replace('  ', ' ')\
        .replace('  ', ' ')  # applied twice to collapse runs of spaces
return './gpt_log/arxiv/'+title_str, other_info
def get_name(_url_):
import os
from bs4 import BeautifulSoup
print('正在获取文献名!')
print(_url_)
# arxiv_recall = {}
# if os.path.exists('./arxiv_recall.pkl'):
# with open('./arxiv_recall.pkl', 'rb') as f:
# arxiv_recall = pickle.load(f)
# if _url_ in arxiv_recall:
# print('在缓存中')
# return arxiv_recall[_url_]
proxies, = get_conf('proxies')
res = requests.get(_url_, proxies=proxies)
bs = BeautifulSoup(res.text, 'html.parser')
other_details = {}
# get year
try:
year = bs.find_all(class_='dateline')[0].text
year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
other_details['year'] = year
abstract = bs.find_all(class_='abstract mathjax')[0].text
other_details['abstract'] = abstract
except:
other_details['year'] = ''
print('年份获取失败')
# get author
try:
authors = bs.find_all(class_='authors')[0].text
authors = authors.split('Authors:')[1]
other_details['authors'] = authors
except:
other_details['authors'] = ''
print('authors获取失败')
# get comment
try:
comment = bs.find_all(class_='metatable')[0].text
real_comment = None
for item in comment.replace('\n', ' ').split(' '):
if 'Comments' in item:
real_comment = item
if real_comment is not None:
other_details['comment'] = real_comment
else:
other_details['comment'] = ''
except:
other_details['comment'] = ''
        print('comment获取失败')
title_str = BeautifulSoup(
res.text, 'html.parser').find('title').contents[0]
print('获取成功:', title_str)
# arxiv_recall[_url_] = (title_str+'.pdf', other_details)
# with open('./arxiv_recall.pkl', 'wb') as f:
# pickle.dump(arxiv_recall, f)
return title_str+'.pdf', other_details
@CatchException
def 下载arxiv论文并翻译摘要(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
import glob
import os
# 基本信息:功能、贡献者
chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import pdfminer, bs4
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 提取摘要,下载PDF文档
try:
pdf_path, info = download_arxiv_(txt)
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"下载pdf文件未成功")
yield chatbot, history, '正常'
return
# 翻译摘要等
i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, msg
# 写入文件
import shutil
# 重置文件的创建时间
shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path)
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载"))
yield chatbot, history, msg
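
A minimal invocation sketch of the downloader above (the arXiv ID is only an example; it needs a working `proxies` entry in the config and network access):

```
pdf_path, info = download_arxiv_('1706.03762')
print(pdf_path)                       # ./gpt_log/arxiv/<year> <title>.pdf
print(info['year'], info['authors'])  # metadata scraped from the abs page
```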


@@ -0,0 +1,139 @@
import threading
from request_llm.bridge_chatgpt import predict_no_ui_long_connection
from toolbox import CatchException, write_results_to_file, report_execption
from .crazy_utils import breakdown_txt_to_satisfy_token_limit
def extract_code_block_carefully(txt):
splitted = txt.split('```')
n_code_block_seg = len(splitted) - 1
if n_code_block_seg <= 1: return txt
    # 剩下的情况:开头除去一次 ```,结尾除去一次 ```
txt_out = '```'.join(splitted[1:-1])
return txt_out
def break_txt_into_half_at_some_linebreak(txt):
lines = txt.split('\n')
n_lines = len(lines)
pre = lines[:(n_lines//2)]
post = lines[(n_lines//2):]
return "\n".join(pre), "\n".join(post)
@CatchException
def 全项目切换英文(txt, top_p, temperature, chatbot, history, sys_prompt, WEB_PORT):
# 第1步清空历史,以免输入溢出
history = []
# 第2步尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import openai, transformers
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade openai transformers```。")
yield chatbot, history, '正常'
return
# 第3步集合文件
import time, glob, os, shutil, re, openai
os.makedirs('gpt_log/generated_english_version', exist_ok=True)
os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
[f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
# file_manifest = ['./toolbox.py']
i_say_show_user_buffer = []
# 第4步随便显示点什么防止卡顿的感觉
for index, fp in enumerate(file_manifest):
# if 'test_project' in fp: continue
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
i_say_show_user_buffer.append(i_say_show_user)
chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
yield chatbot, history, '正常'
# 第5步Token限制下的截断与处理
MAX_TOKEN = 3000
from transformers import GPT2TokenizerFast
print('加载tokenizer中')
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
get_token_fn = lambda txt: len(tokenizer(txt)["input_ids"])
print('加载tokenizer结束')
# 第6步任务函数
mutable_return = [None for _ in file_manifest]
observe_window = [[""] for _ in file_manifest]
def thread_worker(fp,index):
if index > 10:
time.sleep(60)
print('Openai 限制免费用户每分钟20次请求,降低请求频率中。')
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
try:
gpt_say = ""
# 分解代码文件
file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
for file_content_partial in file_content_breakdown:
i_say = i_say_template(fp, file_content_partial)
# # ** gpt request **
gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, top_p=top_p, temperature=temperature, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
gpt_say += gpt_say_partial
mutable_return[index] = gpt_say
        except ConnectionAbortedError as token_exceed_err:
            print('至少一个线程任务Token溢出而失败', token_exceed_err)
except Exception as e:
print('至少一个线程任务意外失败', e)
# 第7步所有线程同时开始执行任务函数
handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
for h in handles:
h.daemon = True
h.start()
chatbot.append(('开始了吗?', f'多线程操作已经开始'))
yield chatbot, history, '正常'
# 第8步循环轮询各个线程是否执行完毕
cnt = 0
while True:
cnt += 1
time.sleep(0.2)
th_alive = [h.is_alive() for h in handles]
if not any(th_alive): break
# 更好的UI视觉效果
observe_win = []
for thread_index, alive in enumerate(th_alive):
observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('<br/>','.....').replace('$','.')+"... ]")
stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
stat_str = ''.join(stat)
chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
yield chatbot, history, '正常'
# 第9步把结果写入文件
for index, h in enumerate(handles):
h.join() # 这里其实不需要join了,肯定已经都结束了
fp = file_manifest[index]
gpt_say = mutable_return[index]
i_say_show_user = i_say_show_user_buffer[index]
where_to_relocate = f'gpt_log/generated_english_version/{fp}'
if gpt_say is not None:
with open(where_to_relocate, 'w+', encoding='utf-8') as f:
f.write(gpt_say)
else: # 失败
shutil.copyfile(file_manifest[index], where_to_relocate)
chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, '正常'
time.sleep(1)
# 第10步备份一个文件
res = write_results_to_file(history)
chatbot.append(("生成一份任务执行报告", res))
yield chatbot, history, '正常'
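
A quick check of `extract_code_block_carefully` from the top of this file (inputs fabricated; the fence is built programmatically to keep the sample string readable):

```
fence = '`' * 3
txt = f"Here you go:\n{fence}\nprint('hi')\n{fence}\nDone."
print(extract_code_block_carefully(txt))                     # -> "\nprint('hi')\n"
print(extract_code_block_carefully("no code block at all"))  # returned unchanged
```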


@@ -0,0 +1,127 @@
from request_llm.bridge_chatgpt import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, os
# pip install python-docx 用于docx格式,跨平台
# pip install pywin32 用于doc格式,仅支持Win平台
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
if fp.split(".")[-1] == "docx":
from docx import Document
doc = Document(fp)
file_content = "\n".join([para.text for para in doc.paragraphs])
else:
import win32com.client
word = win32com.client.Dispatch("Word.Application")
word.visible = False
# 打开文件
print('fp', os.getcwd())
doc = word.Documents.Open(os.getcwd() + '/' + fp)
# file_content = doc.Content.Text
doc = word.ActiveDocument
file_content = doc.Range().Text
doc.Close()
word.Quit()
print(file_content)
prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else ""
        # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名
i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \
f'文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature,
history=[]) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user);
history.append(gpt_say)
yield chatbot, history, msg
if not fast_debug: time.sleep(2)
"""
# 可按需启用
i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \
f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \
f'根据你之前的分析,提出建议'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
"""
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature,
history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say)
history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 总结word文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结Word文档。函数插件贡献者: JasonGuo1"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
from docx import Document
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
# [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}")
yield chatbot, history, '正常'
return
# 开始正式执行任务
yield from 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)


@@ -0,0 +1,154 @@
from request_llm.bridge_chatgpt import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
import re
import unicodedata
fast_debug = False
def is_paragraph_break(match):
"""
根据给定的匹配结果来判断换行符是否表示段落分隔。
如果换行符前为句子结束标志(句号,感叹号,问号),且下一个字符为大写字母,则换行符更有可能表示段落分隔。
也可以根据之前的内容长度来判断段落是否已经足够长。
"""
prev_char, next_char = match.groups()
# 句子结束标志
sentence_endings = ".!?"
# 设定一个最小段落长度阈值
min_paragraph_length = 140
if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
return "\n\n"
else:
return " "
def normalize_text(text):
"""
    通过把连字(ligatures)等文本特殊符号转换为其基本形式来对文本进行归一化处理。
    例如,将连字 "fi" 转换为 "f" 和 "i"。
"""
# 对文本进行归一化处理,分解连字
normalized_text = unicodedata.normalize("NFKD", text)
# 替换其他特殊字符
cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
return cleaned_text
def clean_text(raw_text):
"""
对从 PDF 提取出的原始文本进行清洗和格式化处理。
1. 对原始文本进行归一化处理。
2. 替换跨行的连词,例如 “Espe-\ncially” 转换为 “Especially”。
3. 根据 heuristic 规则判断换行符是否是段落分隔,并相应地进行替换。
"""
# 对文本进行归一化处理
normalized_text = normalize_text(raw_text)
# 替换跨行的连词
text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
# 根据前后相邻字符的特点,找到原文本中的换行符
newlines = re.compile(r'(\S)\n(\S)')
# 根据 heuristic 规则,用空格或段落分隔符替换原换行符
final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
return final_text.strip()
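def _clean_text_demo():
    # usage sketch (illustration only, not part of the original diff):
    raw = "Espe-\ncially in long documents, line\nbreaks are ambiguous. The next\nsentence starts here."
    print(clean_text(raw))  # "Espe-\ncially" is rejoined to "Especially"; remaining
                            # newlines become spaces or paragraph breaks heuristically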
def 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os, fitz
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
with fitz.open(fp) as doc:
file_content = ""
for page in doc:
file_content += page.get_text()
file_content = clean_text(file_content)
print(file_content)
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 批量总结PDF文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import fitz
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
yield chatbot, history, '正常'
return
# 开始正式执行任务
yield from 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)


@@ -0,0 +1,151 @@
from request_llm.bridge_chatgpt import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def readPdf(pdfPath):
"""
读取pdf文件,返回文本内容
"""
import pdfminer
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfdevice import PDFDevice
from pdfminer.layout import LAParams
from pdfminer.converter import PDFPageAggregator
fp = open(pdfPath, 'rb')
# Create a PDF parser object associated with the file object
parser = PDFParser(fp)
# Create a PDF document object that stores the document structure.
# Password for initialization as 2nd parameter
document = PDFDocument(parser)
# Check if the document allows text extraction. If not, abort.
if not document.is_extractable:
raise PDFTextExtractionNotAllowed
# Create a PDF resource manager object that stores shared resources.
rsrcmgr = PDFResourceManager()
# Create a PDF device object.
# device = PDFDevice(rsrcmgr)
# BEGIN LAYOUT ANALYSIS.
# Set parameters for analysis.
laparams = LAParams(
char_margin=10.0,
line_margin=0.2,
boxes_flow=0.2,
all_texts=False,
)
# Create a PDF page aggregator object.
device = PDFPageAggregator(rsrcmgr, laparams=laparams)
# Create a PDF interpreter object.
interpreter = PDFPageInterpreter(rsrcmgr, device)
# loop over all pages in the document
outTextList = []
for page in PDFPage.create_pages(document):
# read the page into a layout object
interpreter.process_page(page)
layout = device.get_result()
for obj in layout._objs:
if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal):
# print(obj.get_text())
outTextList.append(obj.get_text())
return outTextList
def 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os
from bs4 import BeautifulSoup
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
if ".tex" in fp:
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
if ".pdf" in fp.lower():
file_content = readPdf(fp)
file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk')
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 批量总结PDF文档pdfminer(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import pdfminer, bs4
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
yield chatbot, history, '正常'
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
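
A minimal invocation sketch of `readPdf` above (hypothetical path; requires `pdfminer.six`):

```
text_boxes = readPdf('./docs/example.pdf')
print(len(text_boxes), text_boxes[0][:80])  # number of text boxes, start of the first one
```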


@@ -0,0 +1,203 @@
from toolbox import CatchException, report_execption, write_results_to_file
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
def read_and_clean_pdf_text(fp):
"""
**输入参数说明**
- `fp`需要读取和清理文本的pdf文件路径
**输出参数说明**
- `meta_txt`:清理后的文本内容字符串
- `page_one_meta`:第一页清理后的文本内容列表
**函数功能**
读取pdf文件并清理其中的文本内容,清理规则包括
- 提取所有块元的文本信息,并合并为一个字符串
    - 去除短块(字符数小于100)并替换为回车符
- 清理多余的空行
- 合并小写字母开头的段落块并替换为空格
- 清除重复的换行
- 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔
"""
import fitz
import re
import numpy as np
# file_content = ""
with fitz.open(fp) as doc:
meta_txt = []
meta_font = []
for index, page in enumerate(doc):
# file_content += page.get_text()
text_areas = page.get_text("dict") # 获取页面上的文本信息
            # 块元提取:对每个块,先把每一行内各 span 的文字拼接,再用空格连接块内各行,并移除跨行连字符
meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
'- ', '') for t in text_areas['blocks'] if 'lines' in t])
meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']])
for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t])
if index == 0:
page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
'- ', '') for t in text_areas['blocks'] if 'lines' in t]
def 把字符太少的块清除为回车(meta_txt):
for index, block_txt in enumerate(meta_txt):
if len(block_txt) < 100:
meta_txt[index] = '\n'
return meta_txt
meta_txt = 把字符太少的块清除为回车(meta_txt)
def 清理多余的空行(meta_txt):
for index in reversed(range(1, len(meta_txt))):
if meta_txt[index] == '\n' and meta_txt[index-1] == '\n':
meta_txt.pop(index)
return meta_txt
meta_txt = 清理多余的空行(meta_txt)
def 合并小写开头的段落块(meta_txt):
def starts_with_lowercase_word(s):
pattern = r"^[a-z]+"
match = re.match(pattern, s)
if match:
return True
else:
return False
for _ in range(100):
for index, block_txt in enumerate(meta_txt):
if starts_with_lowercase_word(block_txt):
if meta_txt[index-1] != '\n':
meta_txt[index-1] += ' '
else:
meta_txt[index-1] = ''
meta_txt[index-1] += meta_txt[index]
meta_txt[index] = '\n'
return meta_txt
meta_txt = 合并小写开头的段落块(meta_txt)
meta_txt = 清理多余的空行(meta_txt)
meta_txt = '\n'.join(meta_txt)
# 清除重复的换行
for _ in range(5):
meta_txt = meta_txt.replace('\n\n', '\n')
# 换行 -> 双换行
meta_txt = meta_txt.replace('\n', '\n\n')
return meta_txt, page_one_meta
@CatchException
def 批量翻译PDF文档(txt, top_p, temperature, chatbot, history, sys_prompt, WEB_PORT):
import glob
import os
# 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "批量翻译PDF文档。函数插件贡献者: Binary-Husky(二进制哈士奇)"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import fitz
import tiktoken
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
else:
if txt == "":
txt = '空空如也的输入栏'
report_execption(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(
f'{project_folder}/**/*.pdf', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
yield chatbot, history, '正常'
return
# 开始正式执行任务
yield from 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, sys_prompt)
def 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, sys_prompt):
import os
import tiktoken
TOKEN_LIMIT_PER_FRAGMENT = 1600
generated_conclusion_files = []
for index, fp in enumerate(file_manifest):
# 读取PDF文件
file_content, page_one = read_and_clean_pdf_text(fp)
# 递归地切割PDF文件
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
enc = tiktoken.get_encoding("gpt2")
def get_token_num(txt): return len(enc.encode(txt))
# 分解文本
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
# 为了更好的效果,我们剥离Introduction之后的部分
paper_meta = page_one_fragments[0].split('introduction')[0].split(
'Introduction')[0].split('INTRODUCTION')[0]
# 单线,获取文章meta信息
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取{paper_meta}",
inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
top_p=top_p, temperature=temperature,
chatbot=chatbot, history=[],
sys_prompt="Your job is to collect information from materials。",
)
# 多线,翻译
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[
f"以下是你需要翻译的文章段落:\n{frag}" for frag in paper_fragments],
inputs_show_user_array=[f"" for _ in paper_fragments],
top_p=top_p, temperature=temperature,
chatbot=chatbot,
history_array=[[paper_meta] for _ in paper_fragments],
sys_prompt_array=[
"请你作为一个学术翻译,把整个段落翻译成中文,要求语言简洁,禁止重复输出原文。" for _ in paper_fragments],
            max_workers=16  # OpenAI所允许的最大并行数
)
final = ["", paper_meta_info + '\n\n---\n\n---\n\n---\n\n']
final.extend(gpt_response_collection)
create_report_file_name = f"{os.path.basename(fp)}.trans.md"
res = write_results_to_file(final, file_name=create_report_file_name)
generated_conclusion_files.append(
f'./gpt_log/{create_report_file_name}')
chatbot.append((f"{fp}完成了吗?", res))
msg = "完成"
yield chatbot, history, msg
# 准备文件的下载
import shutil
for pdf_path in generated_conclusion_files:
# 重命名文件
rename_file = f'./gpt_log/总结论文-{os.path.basename(pdf_path)}'
if os.path.exists(rename_file):
os.remove(rename_file)
shutil.copyfile(pdf_path, rename_file)
if os.path.exists(pdf_path):
os.remove(pdf_path)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files)))
yield chatbot, history, msg
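
A minimal usage sketch of `read_and_clean_pdf_text` defined at the top of this file (hypothetical path; requires `pymupdf`):

```
meta_txt, page_one_meta = read_and_clean_pdf_text('./docs/example.pdf')
print(meta_txt[:300])     # cleaned body text, paragraphs separated by blank lines
print(page_one_meta[:3])  # first text blocks of page one (typically title / authors)
```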

(Some files are not shown because too many files changed in this diff.)