Mirrored from https://github.com/binary-husky/gpt_academic.git
Synced 2025-12-06 22:46:48 +00:00
Compare commits
18 commits
frontier_2 ... frontier_w
| Author | SHA1 | Commit date |
|---|---|---|
| | 7415d532d1 | |
| | 97eef45ab7 | |
| | 0c0e2acb9b | |
| | 9fba8e0142 | |
| | 7d7867fb64 | |
| | f9dbaa39fb | |
| | bbc2288c5b | |
| | 64ab916838 | |
| | 8fe559da9f | |
| | 09fd22091a | |
| | e296719b23 | |
| | 2f343179a2 | |
| | 4d9604f2e9 | |
| | bbf9e9f868 | |
| | aa1f967dd7 | |
| | 0d082327c8 | |
| | 80acd9c875 | |
| | 17cd4f8210 | |
@@ -1,14 +1,14 @@
 # https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: build-with-latex-arm
+name: build-with-all-capacity-beta

 on:
   push:
     branches:
-      - "master"
+      - 'master'

 env:
   REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}_with_latex_arm
+  IMAGE_NAME: ${{ github.repository }}_with_all_capacity_beta

 jobs:
   build-and-push-image:
@@ -18,17 +18,11 @@ jobs:
       packages: write

     steps:
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3

       - name: Log in to the Container registry
-        uses: docker/login-action@v3
+        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
@@ -41,11 +35,10 @@ jobs:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
-        uses: docker/build-push-action@v6
+        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
-          platforms: linux/arm64
-          file: docs/GithubAction+NoLocal+Latex
+          file: docs/GithubAction+AllCapacityBeta
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
.github/workflows/build-with-jittorllms.yml (vendored regular file, 44 lines)
@@ -0,0 +1,44 @@
+# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
+name: build-with-jittorllms
+
+on:
+  push:
+    branches:
+      - 'master'
+
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}_jittorllms
+
+jobs:
+  build-and-push-image:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      packages: write
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+
+      - name: Log in to the Container registry
+        uses: docker/login-action@v2
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Extract metadata (tags, labels) for Docker
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+
+      - name: Build and push Docker image
+        uses: docker/build-push-action@v4
+        with:
+          context: .
+          push: true
+          file: docs/GithubAction+JittorLLMs
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
@@ -1,6 +1,5 @@
 > [!IMPORTANT]
-> 2024.10.10: After a sudden power outage, the file server hosting the [whl package](https://drive.google.com/file/d/19U_hsLoMrjOlQSzYS3pzWX9fTzyusArP/view?usp=sharing) was urgently restored
-> 2024.10.8: Version 3.90 adds preliminary support for llama-index; version 3.80 adds a two-level plugin menu (see the wiki for details)
+> 2024.6.1: Version 3.80 adds a two-level plugin menu (see the wiki for details)
 > 2024.5.1: Added Doc2x support for translating PDF papers, [details](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
 > 2024.3.11: Full support for Chinese LLMs such as Qwen, GLM and DeepseekCoder! SoVits voice-cloning module, [details](https://www.bilibili.com/video/BV1Rp421S7tF/)
 > 2024.1.17: When installing dependencies, please use the **pinned versions** in `requirements.txt`. Install command: `pip install -r requirements.txt`. This project is fully open source and free; you can support its development by subscribing to the [online service](https://github.com/binary-husky/gpt_academic/wiki/online).
@@ -6,6 +6,7 @@ from loguru import logger
 def get_crazy_functions():
     from crazy_functions.读文章写摘要 import 读文章写摘要
     from crazy_functions.生成函数注释 import 批量生成函数注释
+    from crazy_functions.Rag_Interface import Rag问答
     from crazy_functions.SourceCode_Analyse import 解析项目本身
     from crazy_functions.SourceCode_Analyse import 解析一个Python项目
     from crazy_functions.SourceCode_Analyse import 解析一个Matlab项目
@@ -51,6 +52,13 @@ def get_crazy_functions():
     from crazy_functions.SourceCode_Comment import 注释Python项目

     function_plugins = {
+        "Rag智能召回": {
+            "Group": "对话",
+            "Color": "stop",
+            "AsButton": False,
+            "Info": "将问答数据记录到向量库中,作为长期参考。",
+            "Function": HotReload(Rag问答),
+        },
         "虚空终端": {
             "Group": "对话|编程|学术|智能体",
             "Color": "stop",
@@ -699,31 +707,6 @@ def get_crazy_functions():
         logger.error(trimmed_format_exc())
         logger.error("Load function plugin failed")

-    try:
-        from crazy_functions.Rag_Interface import Rag问答
-
-        function_plugins.update(
-            {
-                "Rag智能召回": {
-                    "Group": "对话",
-                    "Color": "stop",
-                    "AsButton": False,
-                    "Info": "将问答数据记录到向量库中,作为长期参考。",
-                    "Function": HotReload(Rag问答),
-                },
-            }
-        )
-    except:
-        logger.error(trimmed_format_exc())
-        logger.error("Load function plugin failed")
-
-
-
-
-
-
-
-
     # try:
     #     from crazy_functions.高级功能函数模板 import 测试图表渲染
     #     function_plugins.update({
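The hunks above move the "Rag智能召回" entry from a guarded, after-the-fact `function_plugins.update(...)` into the initial `function_plugins` dict, dropping the try/except shield. A minimal sketch of the difference between the two registration styles, with illustrative names rather than the project's actual helpers: the guarded style silently skips a plugin whose import fails, while eager registration surfaces the failure at startup.

```python
# Illustrative sketch (not the project's code): guarded late registration
# of a plugin entry, as in the block the diff removes.

def build_plugins(rag_import_ok=True):
    function_plugins = {
        "虚空终端": {"Group": "对话|编程|学术|智能体", "Color": "stop"},
    }
    try:
        if not rag_import_ok:
            # stands in for "from crazy_functions.Rag_Interface import ..." failing
            raise ImportError("optional dependency missing")
        function_plugins.update({
            "Rag智能召回": {"Group": "对话", "Color": "stop", "AsButton": False},
        })
    except ImportError:
        pass  # plugin silently skipped when its dependency is unavailable
    return function_plugins
```

With the diff applied, the Rag entry no longer enjoys this silent fallback: a broken `Rag_Interface` import now fails the whole registry build instead of dropping one menu item.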
@@ -2,7 +2,20 @@ from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_
 from crazy_functions.crazy_utils import input_clipping
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+
+VECTOR_STORE_TYPE = "Milvus"
+
+if VECTOR_STORE_TYPE == "Milvus":
+    try:
+        from crazy_functions.rag_fns.milvus_worker import MilvusRagWorker as LlamaIndexRagWorker
+    except:
+        VECTOR_STORE_TYPE = "Simple"
+
+if VECTOR_STORE_TYPE == "Simple":
+    from crazy_functions.rag_fns.llama_index_worker import LlamaIndexRagWorker
+
+
 RAG_WORKER_REGISTER = {}

 MAX_HISTORY_ROUND = 5
 MAX_CONTEXT_TOKEN_LIMIT = 4096
 REMEMBER_PREVIEW = 1000
@@ -10,16 +23,6 @@ REMEMBER_PREVIEW = 1000
 @CatchException
 def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
-
-    # import vector store lib
-    VECTOR_STORE_TYPE = "Milvus"
-    if VECTOR_STORE_TYPE == "Milvus":
-        try:
-            from crazy_functions.rag_fns.milvus_worker import MilvusRagWorker as LlamaIndexRagWorker
-        except:
-            VECTOR_STORE_TYPE = "Simple"
-    if VECTOR_STORE_TYPE == "Simple":
-        from crazy_functions.rag_fns.llama_index_worker import LlamaIndexRagWorker
-
     # 1. we retrieve rag worker from global context
     user_name = chatbot.get_user()
     checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
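This change hoists the vector-store backend selection out of the `Rag问答` function body to module import time: try the Milvus-backed worker first, and fall back to the simple llama-index worker when the import fails. A hedged sketch of that selection logic, with the import replaced by a boolean so it runs standalone:

```python
# Sketch of the try-import fallback the diff moves to module level.
# The worker names mirror the diff; the availability flag stands in
# for whether "from ...milvus_worker import MilvusRagWorker" succeeds.

def pick_vector_store(milvus_available):
    vector_store_type = "Milvus"
    if vector_store_type == "Milvus":
        try:
            if not milvus_available:
                raise ImportError("milvus worker not importable")
            worker = "MilvusRagWorker"
        except ImportError:
            vector_store_type = "Simple"
    if vector_store_type == "Simple":
        worker = "LlamaIndexRagWorker"
    return vector_store_type, worker
```

Doing this once at import time means the fallback cost is paid when the module loads, not on every call to the plugin.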
@@ -644,17 +644,8 @@ def run_in_subprocess(func):


 def _merge_pdfs(pdf1_path, pdf2_path, output_path):
-    try:
-        logger.info("Merging PDFs using _merge_pdfs_ng")
-        _merge_pdfs_ng(pdf1_path, pdf2_path, output_path)
-    except:
-        logger.info("Merging PDFs using _merge_pdfs_legacy")
-        _merge_pdfs_legacy(pdf1_path, pdf2_path, output_path)
-
-
-def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
     import PyPDF2  # PyPDF2 has a serious memory-leak problem; run it in a subprocess so the memory can be released
-    from PyPDF2.generic import NameObject, TextStringObject, ArrayObject, FloatObject, NumberObject
+    from PyPDF2.generic import NameObject, TextStringObject,ArrayObject,FloatObject,NumberObject

     Percent = 1
     # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
@@ -697,203 +688,65 @@ def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
             ),
             0,
         )
-        if "/Annots" in new_page:
-            annotations = new_page["/Annots"]
+        if '/Annots' in page1:
+            page1_annot_id = [annot.idnum for annot in page1['/Annots']]
+        else:
+            page1_annot_id = []
+
+        if '/Annots' in page2:
+            page2_annot_id = [annot.idnum for annot in page2['/Annots']]
+        else:
+            page2_annot_id = []
+        if '/Annots' in new_page:
+            annotations = new_page['/Annots']
             for i, annot in enumerate(annotations):
                 annot_obj = annot.get_object()

                 # Check whether the annotation type is a link (/Link)
-                if annot_obj.get("/Subtype") == "/Link":
+                if annot_obj.get('/Subtype') == '/Link':
                     # Check whether it is an internal jump (/GoTo) or an external URI link (/URI)
-                    action = annot_obj.get("/A")
+                    action = annot_obj.get('/A')
                     if action:
-                        if "/S" in action and action["/S"] == "/GoTo":
+                        if '/S' in action and action['/S'] == '/GoTo':
                             # Internal link: jumps to a page within the document
-                            dest = action.get("/D")  # target page or position
-                            # if dest and annot.idnum in page2_annot_id:
-                            if dest in pdf2_reader.named_destinations:
+                            dest = action.get('/D') # target page or position
+                            if dest and annot.idnum in page2_annot_id:
                                 # Get the jump info (including the target page) from the original file
-                                destination = pdf2_reader.named_destinations[
-                                    dest
-                                ]
-                                page_number = (
-                                    pdf2_reader.get_destination_page_number(
-                                        destination
-                                    )
-                                )
-                                # Update the jump info: go to the corresponding page at the given coordinates (100, 150), 100% zoom
-                                # "/D":[10,'/XYZ',100,100,0]
-                                if destination.dest_array[1] == "/XYZ":
-                                    annot_obj["/A"].update(
-                                        {
-                                            NameObject("/D"): ArrayObject(
-                                                [
-                                                    NumberObject(page_number),
-                                                    destination.dest_array[1],
-                                                    FloatObject(
-                                                        destination.dest_array[2]
-                                                        + int(page1.mediaBox.getWidth())
-                                                    ),
-                                                    destination.dest_array[3],
-                                                    destination.dest_array[4],
-                                                ]
-                                            )  # make sure keys and values are PdfObject
-                                        }
-                                    )
-                                else:
-                                    annot_obj["/A"].update(
-                                        {
-                                            NameObject("/D"): ArrayObject(
-                                                [
-                                                    NumberObject(page_number),
-                                                    destination.dest_array[1],
-                                                ]
-                                            )  # make sure keys and values are PdfObject
-                                        }
-                                    )
-
-                                rect = annot_obj.get("/Rect")
+                                destination = pdf2_reader.named_destinations[dest]
+                                page_number = pdf2_reader.get_destination_page_number(destination)
+                                # Update the jump info: go to the corresponding page at the given coordinates (100, 150), 100% zoom
+                                # "/D":[10,'/XYZ',100,100,0]
+                                annot_obj['/A'].update({
+                                    NameObject("/D"): ArrayObject([NumberObject(page_number),destination.dest_array[1], FloatObject(destination.dest_array[2] + int(page1.mediaBox.getWidth())) ,destination.dest_array[3],destination.dest_array[4]]) # make sure keys and values are PdfObject
+                                })
+                                rect = annot_obj.get('/Rect')
                                 # Update the clickable rectangle
-                                rect = ArrayObject(
-                                    [
-                                        FloatObject(
-                                            rect[0]
-                                            + int(page1.mediaBox.getWidth())
-                                        ),
-                                        rect[1],
-                                        FloatObject(
-                                            rect[2]
-                                            + int(page1.mediaBox.getWidth())
-                                        ),
-                                        rect[3],
-                                    ]
-                                )
-                                annot_obj.update(
-                                    {
-                                        NameObject(
-                                            "/Rect"
-                                        ): rect  # make sure keys and values are PdfObject
-                                    }
-                                )
-                            # if dest and annot.idnum in page1_annot_id:
-                            if dest in pdf1_reader.named_destinations:
+                                rect = ArrayObject([FloatObject(rect[0]+ int(page1.mediaBox.getWidth())),rect[1],
+                                                    FloatObject(rect[2]+int(page1.mediaBox.getWidth())),rect[3] ])
+                                annot_obj.update({
+                                    NameObject("/Rect"): rect # make sure keys and values are PdfObject
+                                })
+                            if dest and annot.idnum in page1_annot_id:
                                 # Get the jump info (including the target page) from the original file
-                                destination = pdf1_reader.named_destinations[
-                                    dest
-                                ]
-                                page_number = (
-                                    pdf1_reader.get_destination_page_number(
-                                        destination
-                                    )
-                                )
-                                # Update the jump info: go to the corresponding page at the given coordinates (100, 150), 100% zoom
-                                # "/D":[10,'/XYZ',100,100,0]
-                                if destination.dest_array[1] == "/XYZ":
-                                    annot_obj["/A"].update(
-                                        {
-                                            NameObject("/D"): ArrayObject(
-                                                [
-                                                    NumberObject(page_number),
-                                                    destination.dest_array[1],
-                                                    FloatObject(
-                                                        destination.dest_array[2]
-                                                    ),
-                                                    destination.dest_array[3],
-                                                    destination.dest_array[4],
-                                                ]
-                                            )  # make sure keys and values are PdfObject
-                                        }
-                                    )
-                                else:
-                                    annot_obj["/A"].update(
-                                        {
-                                            NameObject("/D"): ArrayObject(
-                                                [
-                                                    NumberObject(page_number),
-                                                    destination.dest_array[1],
-                                                ]
-                                            )  # make sure keys and values are PdfObject
-                                        }
-                                    )
-
-                                rect = annot_obj.get("/Rect")
-                                rect = ArrayObject(
-                                    [
-                                        FloatObject(rect[0]),
-                                        rect[1],
-                                        FloatObject(rect[2]),
-                                        rect[3],
-                                    ]
-                                )
-                                annot_obj.update(
-                                    {
-                                        NameObject(
-                                            "/Rect"
-                                        ): rect  # make sure keys and values are PdfObject
-                                    }
-                                )
-
-                        elif "/S" in action and action["/S"] == "/URI":
+                                destination = pdf1_reader.named_destinations[dest]
+                                page_number = pdf1_reader.get_destination_page_number(destination)
+                                # Update the jump info: go to the corresponding page at the given coordinates (100, 150), 100% zoom
+                                # "/D":[10,'/XYZ',100,100,0]
+                                annot_obj['/A'].update({
+                                    NameObject("/D"): ArrayObject([NumberObject(page_number),destination.dest_array[1], FloatObject(destination.dest_array[2]) ,destination.dest_array[3],destination.dest_array[4]]) # make sure keys and values are PdfObject
+                                })
+                                rect = annot_obj.get('/Rect')
+                                rect = ArrayObject([FloatObject(rect[0]),rect[1],
+                                                    FloatObject(rect[2]),rect[3] ])
+                                annot_obj.update({
+                                    NameObject("/Rect"): rect # make sure keys and values are PdfObject
+                                })
+                        elif '/S' in action and action['/S'] == '/URI':
                             # External link: jumps to a URI
-                            uri = action.get("/URI")
+                            uri = action.get('/URI')
         output_writer.addPage(new_page)
-    # Save the merged PDF file
-    with open(output_path, "wb") as output_file:
-        output_writer.write(output_file)
-
-
-def _merge_pdfs_legacy(pdf1_path, pdf2_path, output_path):
-    import PyPDF2  # PyPDF2 has a serious memory-leak problem; run it in a subprocess so the memory can be released
-
-    Percent = 0.95
-    # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
-    # Open the first PDF file
-    with open(pdf1_path, "rb") as pdf1_file:
-        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
-        # Open the second PDF file
-        with open(pdf2_path, "rb") as pdf2_file:
-            pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
-            # Create a new PDF file to store the merged pages
-            output_writer = PyPDF2.PdfFileWriter()
-            # Determine the number of pages in each PDF file
-            num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
-            # Merge the pages from the two PDF files
-            for page_num in range(num_pages):
-                # Add the page from the first PDF file
-                if page_num < pdf1_reader.numPages:
-                    page1 = pdf1_reader.getPage(page_num)
-                else:
-                    page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
-                # Add the page from the second PDF file
-                if page_num < pdf2_reader.numPages:
-                    page2 = pdf2_reader.getPage(page_num)
-                else:
-                    page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
-                # Create a new empty page with double width
-                new_page = PyPDF2.PageObject.createBlankPage(
-                    width=int(
-                        int(page1.mediaBox.getWidth())
-                        + int(page2.mediaBox.getWidth()) * Percent
-                    ),
-                    height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
-                )
-                new_page.mergeTranslatedPage(page1, 0, 0)
-                new_page.mergeTranslatedPage(
-                    page2,
-                    int(
-                        int(page1.mediaBox.getWidth())
-                        - int(page2.mediaBox.getWidth()) * (1 - Percent)
-                    ),
-                    0,
-                )
-                output_writer.addPage(new_page)
     # Save the merged PDF file
     with open(output_path, "wb") as output_file:
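The annotation-fixing code above shifts the `/Rect` (clickable rectangle) and the `/D` (jump target) of link annotations that came from the right-hand page by the width of the left-hand page, so links stay aligned after two pages are merged side by side. A plain-arithmetic sketch of that shift, without any PyPDF2 objects:

```python
# Illustrative sketch of the coordinate shift applied to annotations from
# the right-hand page when two PDF pages are placed side by side.

def shift_annotation(rect, dest_xy, left_page_width):
    """Shift a link's clickable rectangle and its jump-target coordinates
    horizontally by the left page's width, so the link still lands on the
    right spot in the merged double-width page."""
    x0, y0, x1, y1 = rect
    shifted_rect = (x0 + left_page_width, y0, x1 + left_page_width, y1)
    dx, dy = dest_xy
    shifted_dest = (dx + left_page_width, dy)
    return shifted_rect, shifted_dest
```

Annotations from the left-hand page keep their coordinates unchanged, which is why the `page1_annot_id` branch in the diff rebuilds `/Rect` without adding the offset.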
docs/Dockerfile+JittorLLM (regular file, 1 line)

@@ -0,0 +1 @@
+# This Dockerfile is no longer maintained; please use docs/GithubAction+JittorLLMs instead
@@ -0,0 +1,57 @@
+# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacity --network=host --build-arg http_proxy=http://localhost:10881 --build-arg https_proxy=http://localhost:10881 .
+# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacityBeta --network=host .
+# docker run -it --net=host gpt-academic-all-capacity bash
+
+# Build from an NVIDIA base image for GPU support (the CUDA version in the host's nvidia-smi must be >= 11.3)
+FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
+
+# Dependencies needed by edge-tts and by some pip packages
+RUN apt update && apt install ffmpeg build-essential -y
+
+# use python3 as the system default python
+WORKDIR /gpt
+RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
+
+# # Optional step: switch to a pip mirror (the following three lines can be removed)
+# RUN echo '[global]' > /etc/pip.conf && \
+#     echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
+#     echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
+
+# Install pytorch
+RUN python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
+# Prepare pip dependencies
+RUN python3 -m pip install openai numpy arxiv rich
+RUN python3 -m pip install colorama Markdown pygments pymupdf
+RUN python3 -m pip install python-docx moviepy pdfminer
+RUN python3 -m pip install zh_langchain==0.2.1 pypinyin
+RUN python3 -m pip install rarfile py7zr
+RUN python3 -m pip install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+# Clone the branch
+WORKDIR /gpt
+RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
+RUN git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss
+
+RUN python3 -m pip install -r requirements.txt
+RUN python3 -m pip install -r request_llms/requirements_moss.txt
+RUN python3 -m pip install -r request_llms/requirements_qwen.txt
+RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
+RUN python3 -m pip install -r request_llms/requirements_newbing.txt
+RUN python3 -m pip install nougat-ocr
+
+
+# Warm up the Tiktoken module
+RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+
+# Install the extra dependencies of the knowledge-base plugin
+RUN apt-get update && apt-get install libgl1 -y
+RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
+RUN pip3 install unstructured[all-docs] --upgrade
+RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
+RUN rm -rf /usr/local/lib/python3.8/dist-packages/tests
+
+
+# COPY .cache /root/.cache
+# COPY config_private.py config_private.py
+# Launch
+CMD ["python3", "-u", "main.py"]
@@ -3,19 +3,33 @@
 # - 2 Build: docker build -t gpt-academic-nolocal-latex -f docs/GithubAction+NoLocal+Latex .
 # - 3 Run:   docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex

-FROM menghuan1918/ubuntu_uv_ctex:latest
-ENV DEBIAN_FRONTEND=noninteractive
-SHELL ["/bin/bash", "-c"]
+FROM fuqingxu/python311_texlive_ctex:latest
+ENV PATH "$PATH:/usr/local/texlive/2022/bin/x86_64-linux"
+ENV PATH "$PATH:/usr/local/texlive/2023/bin/x86_64-linux"
+ENV PATH "$PATH:/usr/local/texlive/2024/bin/x86_64-linux"
+ENV PATH "$PATH:/usr/local/texlive/2025/bin/x86_64-linux"
+ENV PATH "$PATH:/usr/local/texlive/2026/bin/x86_64-linux"

+# Set the working directory
 WORKDIR /gpt

+RUN pip3 install openai numpy arxiv rich
+RUN pip3 install colorama Markdown pygments pymupdf
+RUN pip3 install python-docx pdfminer
+RUN pip3 install nougat-ocr
+
+# Load the project files
 COPY . .
-RUN /root/.cargo/bin/uv venv --seed \
-    && source .venv/bin/activate \
-    && /root/.cargo/bin/uv pip install openai numpy arxiv rich colorama Markdown pygments pymupdf python-docx pdfminer \
-    && /root/.cargo/bin/uv pip install -r requirements.txt \
-    && /root/.cargo/bin/uv clean
+# Install dependencies
+RUN pip3 install -r requirements.txt
+
+# Dependency needed by edge-tts
+RUN apt update && apt install ffmpeg -y

 # Optional step: warm up the modules
-RUN .venv/bin/python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

 # Launch
-CMD [".venv/bin/python3", "-u", "main.py"]
+CMD ["python3", "-u", "main.py"]
@@ -256,8 +256,6 @@ model_info = {
         "max_token": 128000,
         "tokenizer": tokenizer_gpt4,
         "token_cnt": get_token_num_gpt4,
-        "openai_disable_system_prompt": True,
-        "openai_disable_stream": True,
     },
     "o1-mini": {
         "fn_with_ui": chatgpt_ui,
@@ -266,8 +264,6 @@ model_info = {
         "max_token": 128000,
         "tokenizer": tokenizer_gpt4,
         "token_cnt": get_token_num_gpt4,
-        "openai_disable_system_prompt": True,
-        "openai_disable_stream": True,
     },

     "gpt-4-turbo": {
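The removed flags (`openai_disable_system_prompt`, `openai_disable_stream`) are per-model capability switches in the `model_info` table; dropping them for the o1 entries means those models are now treated as accepting system prompts and streaming. A hedged sketch of how such flags would typically be consumed when building a request (illustrative only, not the project's actual request builder):

```python
# Sketch: read capability flags from a model_info-style entry before
# constructing an OpenAI-style message list. Field names mirror the diff.

def build_messages(model_entry, system_prompt, user_msg):
    messages = []
    if not model_entry.get("openai_disable_system_prompt", False):
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_msg})
    stream = not model_entry.get("openai_disable_stream", False)
    return messages, stream
```

With the flags deleted, `get(...)` falls back to `False`, so the system message is included and streaming is requested by default.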
@@ -2,15 +2,14 @@ https://public.agent-matrix.com/publish/gradio-3.32.10-py3-none-any.whl
 fastapi==0.110
 gradio-client==0.8
 pypdf2==2.12.1
-httpx<=0.25.2
 zhipuai==2.0.1
 tiktoken>=0.3.3
 requests[socks]
-pydantic==2.9.2
+pydantic==2.5.2
+llama-index~=0.10
 protobuf==3.20
 transformers>=4.27.1,<4.42
 scipdf_parser>=0.52
-spacy==3.7.4
 anthropic>=0.18.1
 python-markdown-math
 pymdown-extensions
@@ -33,14 +32,3 @@ loguru
 arxiv
 numpy
 rich
-
-
-llama-index-core==0.10.68
-llama-index-legacy==0.9.48
-llama-index-readers-file==0.1.33
-llama-index-readers-llama-parse==0.1.6
-llama-index-embeddings-azure-openai==0.1.10
-llama-index-embeddings-openai==0.1.10
-llama-parse==0.4.9
-mdit-py-plugins>=0.3.3
-linkify-it-py==2.0.3
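The requirements changes above swap nine exact llama-index pins for a single compatible-release spec (`llama-index~=0.10`) and relax or drop other pins. When auditing such lines it helps to split each one into a package name and a version specifier; the following is a simplified parser for the common forms seen in this file (extras, `==`, `>=`/`<` ranges, `~=`), not a full PEP 508 implementation:

```python
# Simplified requirements.txt line parser (illustrative; for full
# correctness use the `packaging` library's Requirement class).
import re

def parse_requirement(line):
    """Split a requirement line into (name-with-extras, specifier)."""
    line = line.strip()
    m = re.match(r"^([A-Za-z0-9_.-]+(?:\[[^\]]+\])?)\s*(.*)$", line)
    return m.group(1), m.group(2)
```

The name stops at the first character that cannot appear in a package name (such as `=`, `>`, `<`, `~`), so pinned, ranged, and bare lines all parse uniformly.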
@@ -1,12 +0,0 @@
-"""
-Tests the individual plugins of this project. To run: python tests/test_plugins.py
-"""
-
-import init_test
-import os, sys
-
-
-if __name__ == "__main__":
-    from test_utils import plugin_test
-
-    plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A point moving along function culve y=sin(x), starting from x=0 and stop at x=4*\pi.")
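The deleted test file invokes a plugin through a `module->function` spec string. A hedged sketch of how such a spec could be resolved into a callable (the helper name is illustrative, not the project's actual `test_utils` API):

```python
# Illustrative resolver for "package.module->function_name" plugin specs.
import importlib

def resolve_plugin(spec):
    """Split the spec on '->', import the module, and return the callable."""
    module_path, func_name = spec.split("->")
    module = importlib.import_module(module_path)
    return getattr(module, func_name)
```

For example, a spec like `'os.path->join'` resolves to the `join` function of the `os.path` module; the project's specs point at `crazy_functions.*` modules instead.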