Mirrored from
https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.git
Synced 2025-12-06 06:26:48 +00:00
add literature data
We used the GPT-3.5 API to distill conclusions from the medical literature and inject them as external knowledge into multi-round dialogues, and then fine-tuned LLaMA on the resulting instruction data.
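The data-construction step described above can be sketched roughly as follows. This is a minimal illustration only: the Alpaca-style field names (`instruction`, `input`, `output`) and the `build_multi_turn_sample` helper are assumptions for the sketch, not the repository's actual pipeline, and the GPT-3.5 summarization call is left out entirely (the `conclusion` argument stands in for its output).

```python
def build_multi_turn_sample(conclusion: str, turns: list) -> dict:
    """Fold a literature conclusion plus (question, answer) rounds into one
    Alpaca-style training record. The conclusion and all earlier rounds go
    into the 'input' context; the latest round becomes instruction/output.
    Hypothetical format -- not the repo's confirmed schema."""
    instruction, output = turns[-1]                 # latest user question + answer
    context_lines = [f"Literature conclusion: {conclusion}"]  # external knowledge first
    for question, answer in turns[:-1]:             # prior dialogue rounds as history
        context_lines.append(f"User: {question}")
        context_lines.append(f"Assistant: {answer}")
    return {
        "instruction": instruction,
        "input": "\n".join(context_lines),
        "output": output,
    }

sample = build_multi_turn_sample(
    "Drug X shortened symptom duration by about two days in the trial.",
    [
        ("What does the trial show?", "A roughly two-day reduction in symptoms."),
        ("Is that clinically meaningful?", "Modestly, mainly for mild cases."),
    ],
)
```

With `single_or_multi 'single'` (second script below), only one round would be folded in, so `input` would carry just the literature conclusion.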
@@ -0,0 +1,13 @@
#!/bin/sh

# If inferring with the llama model, set 'use_lora' to 'False' and 'prompt_template' to 'ori_template'.
# If inferring with the default alpaca model, set 'use_lora' to 'True', 'lora_weights' to 'tloen/alpaca-lora-7b', and 'prompt_template' to 'alpaca'.
# If inferring with the llama-med model, download the LORA weights and set 'lora_weights' to './lora-llama-med' (or the exact directory of LORA weights) and 'prompt_template' to 'med_template'.

# Multi-round interaction
python infer_literature.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './lora-llama-literature' \
    --single_or_multi 'multi' \
    --use_lora True \
    --prompt_template 'literature_template'
@@ -0,0 +1,13 @@
#!/bin/sh

# If inferring with the llama model, set 'use_lora' to 'False' and 'prompt_template' to 'ori_template'.
# If inferring with the default alpaca model, set 'use_lora' to 'True', 'lora_weights' to 'tloen/alpaca-lora-7b', and 'prompt_template' to 'alpaca'.
# If inferring with the llama-med model, download the LORA weights and set 'lora_weights' to './lora-llama-med' (or the exact directory of LORA weights) and 'prompt_template' to 'med_template'.

# Single round
python infer_literature.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './lora-llama-literature' \
    --single_or_multi 'single' \
    --use_lora True \
    --prompt_template 'literature_template'