loss to convergence #15

Closed · 980202006 opened this issue on Aug 27, 2021 · 4 comments
Labels: question (Further information is requested)
@980202006 commented Aug 27, 2021

What loss value should I expect when the model converges?

980202006 changed the title from "loss to" to "loss to convergence" on Aug 27, 2021
mimbres self-assigned this on Aug 28, 2021
mimbres added the "question" label on Aug 28, 2021
@mimbres (Owner) commented Aug 28, 2021

  • It depends. With the softmax temperature parameter tau=0.05 (see 640_lamb.yaml), the train loss is usually below 0.3 at convergence. (A sketch of how tau enters the loss follows this list.)

  • With this setup, the learning curve shows validation loss (red) and train loss (blue) up to the 100th epoch:

    [Figure: train/validation loss curves over the first 100 epochs]

  • At the end of each epoch we run a mini-search-test; Top-1 accuracy at 1 s and 3 s query lengths is a useful validation metric for the model. As the figure below shows, accuracy reaches about 82.x% @1s and 94.x% @3s.

    [Figure: mini-search-test Top-1 accuracy @1s and @3s]
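For context on where tau enters: it is the softmax temperature that divides the pairwise embedding similarities in the contrastive (NT-Xent-style) loss, which is why its value sets the absolute scale of the converged loss. Below is a minimal NumPy sketch of such a loss, plus a top-1 retrieval check in the spirit of the mini-search-test; the function names and exact formulation are illustrative assumptions, not this repository's actual implementation (see 640_lamb.yaml and the repo code for the real settings).

```python
import numpy as np

def ntxent_loss(z_a, z_b, tau=0.05):
    """NT-Xent-style contrastive loss sketch with softmax temperature tau.

    z_a, z_b: (N, D) L2-normalized embeddings of matched (anchor, positive)
    pairs; row i of z_a matches row i of z_b. Dividing the similarity
    matrix by a small tau (e.g. 0.05) sharpens the softmax and sets the
    absolute scale of the converged loss value.
    """
    sim = (z_a @ z_b.T) / tau                  # (N, N) scaled similarities
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))  # positives on the diagonal

def top1_accuracy(query_emb, db_emb, ground_truth):
    """Hypothetical stand-in for the mini-search-test metric: the fraction
    of queries whose nearest database item is the correct one."""
    sim = query_emb @ db_emb.T
    return float(np.mean(sim.argmax(axis=1) == ground_truth))

# Toy check with random unit-norm embeddings (i.e., an untrained model):
rng = np.random.default_rng(0)
z_a = rng.normal(size=(120, 128))
z_b = rng.normal(size=(120, 128))
z_a /= np.linalg.norm(z_a, axis=1, keepdims=True)
z_b /= np.linalg.norm(z_b, axis=1, keepdims=True)
print(ntxent_loss(z_a, z_b, tau=0.05))          # far above the ~0.3 seen at convergence
print(top1_accuracy(z_a, z_b, np.arange(120)))  # near chance (1/120) before training
```

Note that because a smaller tau sharpens the softmax over the negatives, the "train-loss < 0.3" figure is specific to tau=0.05 and is not directly comparable across temperature settings.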

@mimbres (Owner) commented Aug 28, 2021

The learning curve above is from the pre-trained model in #10.

@980202006 (Author) commented

Thank you!

@mimbres (Owner) commented May 10, 2022

The loss and val_acc figures have been updated; please see #26 (comment).
