How much GPU memory does NExT-QA need? #7

Comments
Thanks for the question. Training the model needs about 24 GB with batch size 64 and 8 clips per video, whereas 8 GB is enough for inference. If you want to train with 16 clips, change the batch size to 32.

Thank you for your reply.
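The two configurations above (batch 64 with 8 clips, batch 32 with 16 clips) both come to 512 clips per batch, which suggests memory stays roughly constant when that product is held fixed. A minimal sketch of that rule of thumb; the helper name `batch_size_for_clips` is hypothetical and not from the repo:

```python
# Inferred from the maintainer's reply: 64 * 8 == 32 * 16 == 512 clips
# per batch keeps training memory around the same ~24 GB budget.
TOTAL_CLIPS_PER_BATCH = 64 * 8  # 512

def batch_size_for_clips(clips_per_video: int) -> int:
    """Pick a batch size that keeps total clips per batch (and thus memory) roughly constant."""
    return TOTAL_CLIPS_PER_BATCH // clips_per_video

print(batch_size_for_clips(8))   # 64, as in the reply
print(batch_size_for_clips(16))  # 32, as in the reply
```

This is only a heuristic inferred from the two data points in the thread; actual memory use also depends on model size, sequence length, and precision.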
Well, could you offer the pre-trained BERT model? Thanks.

Well, I also need this.
Hi, please find the code and model for BERT finetuning/feature extraction. You should launch a new issue with a proper title; otherwise I may be slow in finding your questions.
OK, many thanks. Is the model in nextqa for question features the final model, or do I still need to train it? I used the model to extract question features, but the results drop significantly, by nearly 5 points.