Dear Kyubyong,
Great work - thank you very much for providing these word vectors!
One question: which model did you use to train your word vectors with word2vec, skip-gram or CBOW? Is this the standard model as reported in Mikolov et al. (2013) or a modified variant?
And which parameters did you use to train the model for each language? Always the default parameters in make_wordvectors.sh?
Given make_wordvectors.sh and make_wordvectors.py, it seems that @Kyubyong used the gensim implementation of word2vec. By default, gensim's Word2Vec trains a CBOW model (sg=0), so I believe that is what was used here; see the gensim documentation.
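For illustration, here is a minimal sketch of that default with gensim's current API (gensim >= 4.0). The toy corpus and the explicit parameter values are assumptions for the example, not the repo's actual settings, which would come from make_wordvectors.sh:

```python
from gensim.models import Word2Vec

# Toy corpus purely for illustration -- not the corpora used in this repo.
sentences = [["hello", "world"], ["word", "vectors", "hello"]]

# sg=0 selects CBOW (the gensim default); sg=1 would select skip-gram.
# vector_size and window are shown at their gensim defaults for clarity;
# min_count is lowered to 1 so the tiny toy corpus is not filtered out.
model = Word2Vec(sentences, sg=0, vector_size=100, window=5, min_count=1)

print(model.wv["hello"].shape)  # (100,)
```

So unless the script passes sg=1 explicitly, the trained vectors are CBOW.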