
Possible to release LoRA fine-tune or full-tune training code? #3

Open
BruceLeeeee opened this issue Jan 11, 2025 · 5 comments

@BruceLeeeee

No description provided.

@maochaojie
Collaborator

We will release the fine-tuning code in the near future.

@maochaojie
Collaborator

We have now released the training code. We look forward to your feedback.

@BruceLeeeee
Author

BruceLeeeee commented Jan 16, 2025

Thanks for sharing. By the way, may I ask whether you have any plans to release the fully fine-tuned model? @maochaojie

@maochaojie
Collaborator

The exact time is still uncertain, but based on the current training trends, it is possible that it will be released in the first two weeks of February.

@NitishTRI3D

What is the difference between the released training code and full fine-tuning?

Is the current code LoRA fine-tuning?
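For anyone landing here with the same question, here is a minimal NumPy sketch of the general LoRA idea (this is an illustration of the technique itself, not this repo's actual training code): full fine-tuning updates every entry of a weight matrix W, while LoRA freezes W and trains only a low-rank update B·A, so far fewer parameters receive gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                 # rank r << d: fewer trainable params

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight (not trained in LoRA)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # trainable; zero-init so the update starts at 0
alpha = 16                               # LoRA scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the LoRA model reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size              # params updated by full fine-tuning
lora_params = A.size + B.size     # params updated by LoRA (~2*d*r in general)
print(full_params, lora_params)
```

So "releasing the training code" for LoRA means training only A and B on top of frozen released weights, whereas releasing a fully fine-tuned model means shipping a new W entirely.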
