Possible to release LORA fine-tune or full-tune training code? #3
Comments
We will release the fine-tuning code in the near future.
We have now released the training code. We look forward to your feedback.
Thanks for sharing. By the way, do you have any plans to release the fully fine-tuned model? @maochaojie
The exact time is still uncertain, but based on the current training trends, it is possible that it will be released in the first two weeks of February.
What is the difference between the released training code and full fine-tuning? Is the current code LoRA fine-tuning?
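For context on the question above: the practical difference between LoRA fine-tuning and full fine-tuning is which parameters are trained. A minimal sketch of the parameter-count gap, using illustrative layer dimensions (the actual model sizes in this repo are not stated in the thread):

```python
import numpy as np

# Illustrative dimensions for a single projection layer; the repo's
# real model dimensions are an assumption here, not taken from the thread.
d_in, d_out, rank = 768, 768, 8

# Full fine-tuning updates the entire weight matrix W.
full_params = d_in * d_out

# LoRA freezes W and trains only a low-rank update: W' = W + B @ A,
# where A has shape (rank, d_in) and B has shape (d_out, rank).
A = np.zeros((rank, d_in))
B = np.zeros((d_out, rank))
lora_params = A.size + B.size

print(f"full fine-tune params per layer: {full_params}")  # 589824
print(f"LoRA params per layer:           {lora_params}")  # 12288
print(f"trainable ratio: {lora_params / full_params:.3%}")  # 2.083%
```

With rank 8, LoRA trains roughly 2% of the parameters of this layer, which is why it is much cheaper than a full fine-tune; a "fully fine-tuned model" would instead update every weight.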