
loss turns to 0 after several steps for llama2 #42

Open · liuxiaozhu01 opened this issue Dec 9, 2024 · 5 comments

@liuxiaozhu01

Hi! I recently learned about this impressive and effective work, and I made a modest attempt at running the code. The script I ran is as follows:

# MODEL="/root/home/workspace/LLM/opt/facebook/opt-6.7b"
# MODEL_NAME="opt-6.7b"

MODEL="/root/Llama-2-7b-hf"
MODEL_NAME="llama-2-7b-hf"

# MODEL="/root/home/workspace/LLM/llama/decapoda-research/llama-7b-hf"
# MODEL_NAME="llama-7b-hf"

BS=8
LR=1e-6
EPS=1e-4
SEED=0
TRAIN=1000
DEV=500
EVAL=1000
STEPS=20000
EVAL_STEPS=4000

MODE="ft"

TAG=mezo-$MODE-$STEPS-$BS-$LR-$EPS-$SEED

TASK="SST2"

CUDA_VISIBLE_DEVICES=3 python run.py \
    --model_name $MODEL \
    --task_name $TASK \
    --output_dir result/$TASK-${MODEL_NAME}-$TAG --tag $TAG --train_set_seed $SEED --num_train $TRAIN --num_dev $DEV --num_eval $EVAL --logging_steps 10 \
    --max_steps $STEPS \
    --trainer zo --load_float16 \
    --learning_rate $LR --zo_eps $EPS --per_device_train_batch_size $BS --lr_scheduler_type "constant" \
    --load_best_model_at_end --evaluation_strategy steps --save_strategy steps --save_total_limit 1 \
    --eval_steps $EVAL_STEPS --save_steps $EVAL_STEPS \
    --train_as_classification

The output is:

2024-12-09 21:37:32,695 - INFO - Sample train set 1500/67349
2024-12-09 21:37:32,695 - INFO - ... including dev set 500 samples
2024-12-09 21:37:32,695 - INFO - Loading model with FP16...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00,  6.13s/it]
2024-12-09 21:37:45,740 - INFO - Done with 13.04s
2024-12-09 21:37:45,755 - INFO - Tokenizing training samples...
2024-12-09 21:37:47,023 - INFO - Done with 1.27s
/root/miniconda3/envs/mezo/lib/python3.10/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
2024-12-09 21:37:47,409 - INFO - ***** Running training *****
2024-12-09 21:37:47,410 - INFO -   Num examples = 1000
2024-12-09 21:37:47,410 - INFO -   Num Epochs = 160
2024-12-09 21:37:47,410 - INFO -   Instantaneous batch size per device = 8
2024-12-09 21:37:47,410 - INFO -   Total train batch size (w. parallel, distributed & accumulation) = 8
2024-12-09 21:37:47,410 - INFO -   Gradient Accumulation steps = 1
2024-12-09 21:37:47,410 - INFO -   Total optimization steps = 20000
2024-12-09 21:37:47,411 - INFO -   Number of trainable parameters = 6738415616
{'loss': 0.7912, 'learning_rate': 1e-06, 'epoch': 0.08}                                                                                                                                           
{'loss': 0.5282, 'learning_rate': 1e-06, 'epoch': 0.16}                                                                                                                                           
{'loss': 0.7984, 'learning_rate': 1e-06, 'epoch': 0.24}                                                                                                                                           
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.32}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.4}                                                                                                                                               
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.48}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.56}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.64}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.72}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.8}                                                                                                                                               
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.88}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 0.96}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 1.04}                                                                                                                                              
{'loss': 0.0, 'learning_rate': 1e-06, 'epoch': 1.12}  

The loss turns to 0 after several steps for llama2-7b, but this doesn't happen with opt-6.7b or llama-7b.
I'm really confused. I understand that lr and zo_eps are important for the optimization process. Are there hyper-parameters that are known to work, or is there something else I set wrong?

@gaotianyu1350 (Member)

Hi,

Loss turning to 0 usually means there is some precision issue. I would suggest switching to fp32 training (remove --load_float16) and also trying a smaller learning rate/eps.
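
For context, here is a quick toy illustration (standalone PyTorch, not the trainer code; the ~0.02 weight magnitude is just an assumed typical value) of how tight fp16 precision is relative to your zo_eps = 1e-4:

import torch

# fp16 resolves relative differences of roughly 1e-3, fp32 roughly 1e-7.
print(torch.finfo(torch.float16).eps)   # 0.0009765625
print(torch.finfo(torch.float32).eps)   # ~1.19e-07

# Around a weight of magnitude ~0.02 (an assumed typical value, not measured
# from the model), adjacent fp16 values are ~1.5e-5 apart, so a 1e-4
# perturbation only spans a handful of representable steps and part of it is
# lost to rounding.
w = torch.tensor(0.02, dtype=torch.float16)
d = torch.tensor(1e-4, dtype=torch.float16)
print(((w + d) - w).item())             # not exactly 1e-4 after rounding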

@liuxiaozhu01 (Author)

Thanks for your timely reply. I also noticed that switching to fp32 training indeed works for llama2, while fp16 works for llama1 but not for llama2.

I'm wondering if the problem is that the parameters are not restored to their original values after the perturbation. I noticed that in fp16 training for llama2, the model params are not the same before and after the perturbation; they are slightly different.

Would you be so kind as to offer some insights?

@gaotianyu1350 (Member)

This is probably due to (a) how those models were pre-trained and what precisions they used during pre-training, and (b) what precisions they used for the released parameters. It is normal that "the model params are not the same before and after": after all, they are perturbed.

@liuxiaozhu01 (Author)

Sorry if I didn't make myself clear. I mean the model params are not the same before and after the two function evaluations. It seems the trainer cannot exactly "reset model back to its parameters at start of step"; the restored values are slightly different.
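
Here is a toy standalone check of what I mean (not the actual trainer code; just a random fp16 tensor standing in for one weight matrix, with the same eps = 1e-4):

import torch

torch.manual_seed(0)
eps = 1e-4

# A random fp16 tensor standing in for one weight matrix of the model
# (the 0.02 scale is an assumed typical weight magnitude).
theta16 = (0.02 * torch.randn(4096)).half()
theta32 = theta16.float()
z = torch.randn_like(theta32)

# Round trip: theta -> theta + eps*z -> theta - eps*z.
restored16 = (theta16 + (eps * z).half()) - (eps * z).half()
restored32 = (theta32 + eps * z) - eps * z

print((restored16 - theta16).abs().max().item())  # on the order of 1e-5, not 0
print((restored32 - theta32).abs().max().item())  # down at fp32 rounding level

In fp16 the two opposite perturbations don't cancel exactly, while in fp32 they cancel to within fp32 rounding.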

@gaotianyu1350 (Member)

Aha, that's interesting... that definitely sounds like a precision issue.
