loss turns to 0 after several steps for llama2 #42
Comments
Hi, loss turning to 0 usually means there is some precision issue. I would suggest switching to fp32 training (remove the fp16 option).
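As background on why a collapsing loss points at precision rather than the optimizer: fp16 keeps only about three decimal digits and has a narrow dynamic range, so small quantities can round to exactly zero. A minimal PyTorch check, independent of this repo's code:

```python
import torch

# fp16 resolution and range: eps ~9.8e-4, smallest normal ~6.1e-5, max 65504.
print(torch.finfo(torch.float16))

# Values below the subnormal range silently underflow to zero.
print(torch.tensor(1e-8, dtype=torch.float16))  # tensor(0., dtype=torch.float16)
```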
Thanks for your timely reply. I also notice that switching to fp32 training indeed works for llama2, while fp16 only works for llama1 but not llama2. I'm wondering if the problem is due to the parameters not being restored to their original values after the perturbation. I notice that in fp16 training for llama2, the model params are not the same before and after the perturbation, but are slightly different. Would you be so kind as to offer some insights?
This is probably due to (a) how those models were pre-trained and what precisions they used during pre-training, and (b) what precisions they used for the released parameters. It is normal that "the model params are not the same before and after"; after all, they are perturbed.
Aha, that's interesting... that definitely sounds like a precision issue.
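A minimal sketch of the rounding behavior discussed above, assuming a MeZO-style +eps·z / -2·eps·z / +eps·z schedule; the tensor sizes, weight scale, and eps value are hypothetical, and this is an illustration rather than the repo's actual code:

```python
import torch

torch.manual_seed(0)

# Hypothetical fp16 parameter slice at a typical transformer weight scale,
# with a random perturbation direction z and zo_eps = 1e-3.
w = torch.randn(1000, dtype=torch.float16) * 0.02
z = torch.randn(1000, dtype=torch.float16)
eps = 1e-3

# Perturbation schedule: +eps*z, then -2*eps*z, then +eps*z to restore.
w1 = w + eps * z
w2 = w1 - 2 * eps * z
w_restored = w2 + eps * z

# Each step rounds to the nearest fp16 value, so the restored weights
# generally differ from the originals by a few ULPs.
print(torch.equal(w, w_restored))            # typically False
print((w - w_restored).abs().max().item())   # small but nonzero

# The finite-difference numerator can also vanish: fp16 cannot resolve a
# loss difference much smaller than ~2e-3 around a loss value of ~2.3,
# which drives the projected gradient (and the update) to exactly zero.
loss_plus = torch.tensor(2.3447, dtype=torch.float16)
loss_minus = torch.tensor(2.3446, dtype=torch.float16)
print(((loss_plus - loss_minus) / (2 * eps)).item())   # 0.0
```

In fp32 both effects essentially disappear at these scales, which is consistent with fp32 training working here.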
Hi! I recently learned about this impressive and effective work, and made a modest attempt at running the code. The script I ran is like this:
The output is
The loss turns to 0 after several steps for llama2-7b, but this does not happen with opt-6.7b or llama-7b.
I'm really confused. I understand that lr and zo_eps are significant for the optimization process. Are there known-good hyperparameters? Or is there something else I set wrong?
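For reference, a simplified sketch of the SPSA-style update that MeZO performs, showing where lr and zo_eps enter. The function signature, loss_fn callable, and in-place updates are illustrative simplifications under stated assumptions, not the repo's actual implementation:

```python
import torch

@torch.no_grad()  # zeroth-order: no backward pass or gradient storage needed
def zo_step(params, loss_fn, lr=1e-6, zo_eps=1e-3, seed=0):
    """One simplified MeZO-style (SPSA) step; illustrative only."""

    def perturb(scale):
        # Regenerate the same random direction z from the seed instead of storing it.
        torch.manual_seed(seed)
        for p in params:
            z = torch.randn_like(p)
            p.add_(scale * zo_eps * z)

    perturb(+1)                 # theta + eps * z
    loss_plus = loss_fn()
    perturb(-2)                 # theta - eps * z
    loss_minus = loss_fn()
    perturb(+1)                 # back to theta (exact in fp32, only approximate in fp16)

    # Scalar finite-difference estimate; if fp16 cannot resolve
    # loss_plus - loss_minus, this becomes 0 and the step is a no-op.
    projected_grad = (loss_plus - loss_minus) / (2 * zo_eps)

    torch.manual_seed(seed)
    for p in params:
        z = torch.randn_like(p)
        p.sub_(lr * projected_grad * z)

    return loss_plus
```

Since the step size is lr * projected_grad and the projected gradient is divided by 2 * zo_eps, both hyperparameters interact with precision: too small a zo_eps makes the loss difference unresolvable in low precision, and too large an lr can destabilize training.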