add literature data

We use the GPT-3.5 API to integrate conclusions from the medical literature, as external knowledge, into multi-round dialogues, and use the resulting data to instruction-fine-tune LLaMA.
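The data-construction step described above can be sketched as follows. This is a minimal illustration only: the sample schema (`instruction`/`output` fields) and the way the literature conclusion is folded into the prompt are assumptions, not the repository's actual format.

```python
import json


def build_sample(conclusion, turns):
    """Fold a literature conclusion plus prior dialogue turns into one
    instruction-tuning sample (hypothetical schema for illustration)."""
    history = "\n".join(f"Q: {q}\nA: {a}" for q, a in turns[:-1])
    last_q, last_a = turns[-1]
    return {
        "instruction": f"Literature conclusion: {conclusion}\n{history}\nQ: {last_q}",
        "output": last_a,
    }


sample = build_sample(
    "Drug X reduces systolic blood pressure by ~10 mmHg.",
    [("What does drug X do?", "It lowers blood pressure."),
     ("By how much?", "About 10 mmHg systolic, per the cited study.")],
)
print(json.dumps(sample, ensure_ascii=False, indent=2))
```

Samples built this way would then be fed to a standard LoRA instruction-tuning loop.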
This commit is contained in:
DYR1
2023-04-24 10:00:47 +08:00
Parent d732b74870
Current commit eb7c87339b
7 changed files with 306 additions and 19 deletions


@@ -0,0 +1,13 @@
#!/bin/sh
# If inferring with the llama model, set 'use_lora' to 'False' and 'prompt_template' to 'ori_template'.
# If inferring with the default alpaca model, set 'use_lora' to 'True', 'lora_weights' to 'tloen/alpaca-lora-7b', and 'prompt_template' to 'alpaca'.
# If inferring with the llama-med model, download the LORA weights and set 'lora_weights' to './lora-llama-med' (or the exact directory of LORA weights) and 'prompt_template' to 'med_template'.
"""多轮交互"""
python infer_literature.py \
--base_model 'decapoda-research/llama-7b-hf' \
--lora_weights './lora-llama-literature' \
--single_or_multi 'multi' \
--use_lora True \
--prompt_template 'literature_template'
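A minimal sketch of what the `'multi'` mode of `infer_literature.py` might look like internally: each new prompt carries the accumulated dialogue history, so later answers can refer to earlier turns. The loop structure, prompt layout, and the stub generator are assumptions for illustration, not the actual implementation.

```python
def chat_loop(generate, questions):
    """Hypothetical multi-round loop: feed the full dialogue history
    back into the model on every turn."""
    history = []
    for q in questions:
        prompt = "\n".join(history + [f"Q: {q}", "A:"])
        history += [f"Q: {q}", f"A: {generate(prompt)}"]
    return history


# Stub generator so the sketch runs without loading a model;
# it just echoes the latest question back.
echo = lambda prompt: f"(answer to: {prompt.splitlines()[-2][3:]})"
print(chat_loop(echo, ["What is drug X?", "Is it safe?"]))
```

In the real script, `generate` would wrap the LoRA-patched LLaMA model and format the prompt with `literature_template`.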


@@ -0,0 +1,13 @@
#!/bin/sh
# If inferring with the llama model, set 'use_lora' to 'False' and 'prompt_template' to 'ori_template'.
# If inferring with the default alpaca model, set 'use_lora' to 'True', 'lora_weights' to 'tloen/alpaca-lora-7b', and 'prompt_template' to 'alpaca'.
# If inferring with the llama-med model, download the LORA weights and set 'lora_weights' to './lora-llama-med' (or the exact directory of LORA weights) and 'prompt_template' to 'med_template'.
"""单轮"""
python infer_literature.py \
--base_model 'decapoda-research/llama-7b-hf' \
--lora_weights './lora-llama-literature' \
--single_or_multi 'single' \
--use_lora True \
--prompt_template 'literature_template'