Facing errors in basic test run #1
Use
Alternatively:
to
on the offending line. It works.
^ Not advisable. A torch-level fix might break more places, create inconsistencies, and you will have to fix it everywhere. If you want to translate a large batch of samples, I'd recommend fairseq-ilmt, which is minimal mods on fairseq. I know this works with fairseq v0.7.2 (where I branched to make some mods; it was compatible with pytorch 1.0.0 and maybe 1.1.0). The example here is just for some specific use-cases and to demonstrate that this model works. If you have higher volumes, you should switch to the fairseq-level batching optimizations, which are in fairseq-ilmt.
OK - makes sense. Tested it on a colab so was more adventurous. :-)
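The advice above boils down to pinning the environment to the versions the maintainer reports working (fairseq v0.7.2 with pytorch 1.0.0, maybe 1.1.0) rather than patching the offending line. A minimal sketch of an upfront compatibility check, before running the scripts, could look like this; the function name and the exact known-good set are illustrative assumptions, not from the repo:

```python
import re

# Assumption: versions the maintainer reports as working with fairseq v0.7.2.
KNOWN_GOOD_TORCH = {"1.0.0", "1.1.0"}

def torch_version_known_good(version: str) -> bool:
    """Return True if the installed torch version is one the branch was tested with."""
    # Strip local build suffixes like "+cu100" before comparing.
    base = re.split(r"\+", version)[0]
    return base in KNOWN_GOOD_TORCH

# Example usage (in a real run you would pass torch.__version__):
print(torch_version_known_good("1.0.0"))       # known-good version
print(torch_version_known_good("1.5.0+cu101")) # newer, untested version
```

In practice this check would run at the top of the test script and fail fast with a pointer to the known-good versions, instead of surfacing as an obscure runtime error deep inside fairseq.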
Unable to run either examples/mm_all.py or the basic test script provided.
Steps:
No errors reported.
Downloaded models via:
Test Code:
Error log: