I'm using the latest code to train on the COCO2017 dataset, but I only get very low mAP.
With yolov3.weights I get mAP=65 on the validation set and mAP=67 on the training set using test.py, which shows that my dataset setup is correct. But when I start training from those same yolov3.weights, the mAP drops to nearly 0 after 1 or 2 epochs.
Can you analyse why this happens? Even if I set the learning rate very low, e.g. 1e-6, the mAP still goes down every training epoch.
Did you try training for more epochs? It sounds a bit strange, but maybe the optimizer choice or something else differs from the training setup that produced the pretrained weights, so training converges toward a different (but perhaps roughly equivalent, performance-wise) point in parameter space. That could cause a temporary drop in performance for a few epochs while the model moves from one local minimum to another. Try training for something like 100 epochs with a reasonable learning rate and show us the results.
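For reference, here is a minimal sketch of the kind of schedule I mean: a reasonable base learning rate with a short warmup instead of 1e-6, run for ~100 epochs. This assumes a PyTorch training loop like the one in this repo; the `nn.Conv2d` stand-in model, the hyperparameter values, and the `train_one_epoch` placeholder are illustrative, not the repo's actual code or defaults.

```python
import math
import torch
import torch.nn as nn

# Stand-in for the YOLOv3 network already initialised from yolov3.weights.
model = nn.Conv2d(3, 255, 1)

epochs = 100
warmup_epochs = 3
base_lr = 1e-3  # a "reasonable" LR rather than 1e-6

# SGD with momentum, closer to the optimizer family typically used to
# produce the pretrained Darknet weights than e.g. Adam.
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)

def lr_lambda(epoch):
    # Linear warmup for the first few epochs, then cosine decay to 10% of
    # the base LR, so the first updates don't wipe out the pretrained weights.
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
    return 0.1 + 0.9 * 0.5 * (1 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(epochs):
    # train_one_epoch(model, optimizer)  # the usual per-epoch training loop
    scheduler.step()
    print(epoch, scheduler.get_last_lr())
```

If the mAP still collapses within the first epoch even under a schedule like this, that would point to a data/label or loss-configuration problem rather than the learning rate.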