After LoRA fine-tuning Qwen-7B-Chat and merging the adapter weights, I obtained a new model of my own. However, when I try to fine-tune this new model a second time, I get a GPU out-of-memory error. What could be causing this? Is something wrong in a configuration file? Fine-tuning Qwen-7B-Chat directly does not exceed the GPU memory limit.
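One possible explanation (an assumption, not confirmed from this issue alone) is a dtype mismatch: if the merged model was saved in float32 while the original Qwen-7B-Chat checkpoint ships in float16/bfloat16, the weights alone take roughly twice the memory to load, which can push a previously working fine-tuning setup over the GPU limit. A quick back-of-the-envelope sketch of the weights-only footprint for a ~7B-parameter model:

```python
# Rough weights-only memory estimate for a ~7B-parameter model.
# Illustrative only: real training also needs activations, gradients,
# optimizer states, and CUDA allocator overhead on top of this.

PARAMS = 7_000_000_000  # ~7B parameters (Qwen-7B scale)

def weight_gib(num_params: int, bytes_per_param: int) -> float:
    """Weights-only footprint in GiB for a given per-parameter dtype size."""
    return num_params * bytes_per_param / 1024**3

fp32 = weight_gib(PARAMS, 4)  # float32: 4 bytes per parameter
fp16 = weight_gib(PARAMS, 2)  # float16/bfloat16: 2 bytes per parameter

print(f"fp32 weights: {fp32:.1f} GiB")  # ~26.1 GiB
print(f"fp16 weights: {fp16:.1f} GiB")  # ~13.0 GiB
```

If this is the cause, checking the `torch_dtype` recorded in the merged checkpoint's `config.json` (or passing `torch_dtype=torch.float16` when loading and saving the merged model) would be the first thing to verify.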