Mirrored from https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.git, last synced 2025-12-05 22:16:49 +00:00.
We experimented with using the GPT-3.5 API to integrate conclusions from the medical literature into multi-turn dialogues as external knowledge, and performed instruction fine-tuning of LLaMA on the resulting data.
#!/bin/sh

# If inferring with the llama model, set 'use_lora' to 'False' and 'prompt_template' to 'ori_template'.
# If inferring with the default alpaca model, set 'use_lora' to 'True', 'lora_weights' to 'tloen/alpaca-lora-7b', and 'prompt_template' to 'alpaca'.
# If inferring with the llama-med model, download the LoRA weights and set 'lora_weights' to './lora-llama-med' (or the exact directory of the LoRA weights) and 'prompt_template' to 'med_template'.
# Example invocations for these variants are sketched after this script.

# Multi-turn interaction
python infer_literature.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './lora-llama-literature' \
    --single_or_multi 'multi' \
    --use_lora True \
    --prompt_template 'literature_template'
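
For reference, the comment block above maps onto invocations roughly like the ones below. This is only a sketch assembled from those comments: it assumes the same infer_literature.py entry point accepts these flag combinations, and the weight locations ('tloen/alpaca-lora-7b' for alpaca, './lora-llama-med' for llama-med) are the defaults named in the comments rather than verified local paths.

# Sketch: plain llama base model, no LoRA adapter
python infer_literature.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --use_lora False \
    --prompt_template 'ori_template'

# Sketch: llama-med model (download the LoRA weights first)
python infer_literature.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './lora-llama-med' \
    --use_lora True \
    --prompt_template 'med_template'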