About the precision #1
I did not record precision values in the paper. Is there a large gap in the macro-F1 scores? Which dataset are you referring to? You should be able to reproduce the reported results by running the code directly under Python 2.7 with the other specified dependencies. I have not tested the code under Python 3.
Thanks for your reply. I was asking in the wrong place; the question was actually about Unsupervised-Aspect-Extraction. I have since found the cause: I had modified a function in train.py. I now get precision similar to yours. Thank you anyway.
I'm confused about how the scores reported in the paper were obtained. Did you select the best configuration based on performance on the dev set and then evaluate that configuration on the test set?
@tc-yue Which paper are you talking about? (This issue was originally about another paper.) For this aspect-level sentiment paper, the configuration (hyper-parameters) is selected based on the dev set. During training, the test-set results are recorded at the epoch where the model achieves the best accuracy on the dev set.
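The selection scheme described above can be sketched as follows (a minimal illustration with hypothetical names, not code from this repository): dev accuracy is tracked each epoch, and the test score that gets reported is the one from the epoch with the best dev accuracy, not the best test score overall.

```python
def select_by_dev(epoch_scores):
    """epoch_scores: list of (dev_acc, test_acc) pairs, one per epoch.

    Returns the best dev accuracy and the test accuracy recorded at
    that same epoch.
    """
    best_dev, reported_test = -1.0, None
    for dev_acc, test_acc in epoch_scores:
        if dev_acc > best_dev:
            best_dev, reported_test = dev_acc, test_acc
    return best_dev, reported_test

# Toy example: dev accuracy peaks at epoch 3 (0.81), so the test score
# from that epoch (0.69) is reported, even though epoch 4 has a higher
# test score (0.73).
scores = [(0.70, 0.66), (0.78, 0.71), (0.81, 0.69), (0.79, 0.73)]
print(select_by_dev(scores))  # (0.81, 0.69)
```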
@ruidan Thanks, I was asking about this aspect-level paper.
@tc-yue There are multiple ways to implement multi-task learning. I usually do it with multiple inputs and outputs, as implemented in my code. It should be straightforward to implement the first way as well. Suppose you have two tasks with training sets a and b respectively:
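A minimal sketch of the two schemes, using hypothetical placeholder functions rather than the repository's actual Keras code: the first scheme alternates single-task updates over minibatches from a and b, while the multi-input/multi-output scheme feeds batches from both tasks through the shared layers in a single joint update.

```python
def train_step(task, batch):
    # Placeholder for one gradient update on a single task's loss.
    return f"update({task}, n={len(batch)})"

def joint_train_step(batch_a, batch_b):
    # Placeholder for one update on the combined multi-task loss,
    # as in a multi-input, multi-output model.
    return f"update(joint, n={len(batch_a) + len(batch_b)})"

def alternating_training(a, b, batch_size=2):
    """Scheme 1: interleave minibatch updates from the two tasks."""
    log = []
    for i in range(0, max(len(a), len(b)), batch_size):
        if i < len(a):
            log.append(train_step("task_a", a[i:i + batch_size]))
        if i < len(b):
            log.append(train_step("task_b", b[i:i + batch_size]))
    return log

def joint_training(a, b, batch_size=2):
    """Scheme 2: one joint update per step over batches from both tasks."""
    log = []
    for i in range(0, min(len(a), len(b)), batch_size):
        log.append(joint_train_step(a[i:i + batch_size], b[i:i + batch_size]))
    return log

a = list(range(4))   # toy stand-in for training set a
b = list(range(6))   # toy stand-in for training set b
print(alternating_training(a, b))
print(joint_training(a, b))
```

In a Keras-style implementation, the joint scheme corresponds to building one model with two input layers and two output layers sharing intermediate layers, and fitting it with a loss per output; the alternating scheme calls a separate per-task update in a loop.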
@ruidan Thanks a lot for your reply.
Hello, I modified the code to run with Python 3 and it runs, but I can't reproduce the precision reported in the paper: it was only 50%. After adjusting the learning rate it improved to 67%, but there is still a gap compared with your experiments. I used your preprocessed data. Do you have any idea how to improve the precision? Thank you!
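One possible (unconfirmed for this repository) cause of silent accuracy drops when porting Python 2 code to Python 3 is the changed `/` operator, which can quietly alter things like learning-rate schedules or index arithmetic:

```python
# Under Python 2, 7 / 2 == 3 (floor division on ints);
# under Python 3, 7 / 2 == 3.5 (true division).
ratio = 7 / 2    # 3.5 in Python 3
index = 7 // 2   # use // to keep the old Python 2 integer behavior
print(ratio, index)  # 3.5 3
```

Auditing every `/` between integers in the ported code (and any other 2-to-3 behavioral differences) is worth doing before tuning hyper-parameters further.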