https://github.com/Lightning-AI/lit-llama/blob/main/howto/finetune_lora.md
Finetuning with LoRA
Low-rank adaptation (LoRA) is a technique that approximates the weight updates to the linear layers of an LLM with a low-rank matrix factorization. This greatly reduces the number of trainable parameters and speeds up training, with little impact on the final performance of the model. We demonstrate the method by instruction-finetuning LLaMA 7B on the Alpaca dataset on a single RTX 3090 (24GB) GPU.
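To illustrate the low-rank update, here is a minimal PyTorch sketch of a LoRA-wrapped linear layer. It is not the lit-llama implementation; the class name LoRALinear and the rank/alpha defaults are chosen for this example only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = W x + scaling * B A x (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)
        self.linear.weight.requires_grad = False  # pretrained weight stays frozen
        # Low-rank factors: only these (rank * (in_features + out_features)) parameters are trained
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))  # zero-init so training starts from the base model
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the low-rank correction (B @ A) applied to x
        return self.linear(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```

For a 4096x4096 projection, rank 8 replaces roughly 16.8M trainable weights with about 65K, which is where the parameter savings come from.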
Preparation
The steps here only need to be done once:
Follow the instructions in the README to install the dependencies.
Download and convert the weights and save them in the ./checkpoints folder, as described here.
Download the data and generate the instruction-tuning dataset (a sketch of a typical prompt format follows after this list):
python scripts/prepare_alpaca.py
See also: Finetuning on an unstructured dataset
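For reference, Alpaca-style instruction-tuning data pairs each record with a prompt template roughly like the one below. The exact template and output files produced by scripts/prepare_alpaca.py may differ, so treat this as an illustrative sketch rather than the script's behavior.

```python
def generate_prompt(example: dict) -> str:
    """Build an Alpaca-style prompt from an instruction/input/output record (illustrative template)."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n"
    )

# Example record in the Alpaca format
sample = {"instruction": "Give three tips for staying healthy.", "input": "", "output": "..."}
print(generate_prompt(sample))
```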
Running the finetuning
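As a conceptual sketch only (not the lit-llama finetuning script; finetune_lora, model, and train_data are hypothetical names for this illustration), a LoRA finetuning step freezes the pretrained weights and optimizes just the low-rank parameters:

```python
import torch

# Assumes `model` is a causal LM whose linear projections were replaced by LoRA modules
# like the LoRALinear sketch above, and `train_data` yields (input_ids, labels) batches.
def finetune_lora(model, train_data, lr: float = 3e-4, steps: int = 1000):
    # Only parameters that still require grad (the LoRA factors) are optimized
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=lr)

    model.train()
    for step, (input_ids, labels) in zip(range(steps), train_data):
        logits = model(input_ids)
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-1
        )
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```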