
Reproduce the scores in the paper #6

Open
roboinlovo opened this issue Nov 9, 2022 · 1 comment

Comments


roboinlovo commented Nov 9, 2022

I read your paper and tried to reproduce the scores reported in it, but I could not get those scores. Could you give me any advice on how to reproduce the GQA scores (val: 73.6, test-dev: 72.1)?

I followed the instructions in this repository and got a score of 71.43 (99.86) with test.sh.
Does the 'val' split produce the val score (which should be 73.6)?
With the 'test' split, I got the following error:

Traceback (most recent call last):
  File "evaluate.py", line 159, in <module>
    eval_cfrf_score, _, _, _, bound = evaluate(model, eval_loader, args)
  File "src/FFOE/train.py", line 158, in evaluate
    for i, (v, b, w, e, attr, q, s, a, ans) in enumerate(dataloader):
ValueError: too many values to unpack (expected 9)

I would really appreciate it if you could answer my question.
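
For context, this unpack error means the 'test' split's dataloader is yielding batches with more fields than the nine the evaluation loop in src/FFOE/train.py expects. Below is a minimal sketch of that failure mode; the field names and the dataloader stand-in are purely hypothetical, mirroring the 9-tuple in the traceback rather than the repository's actual API:

```python
# Minimal sketch of the unpack mismatch; names are hypothetical stand-ins
# mirroring the 9-tuple in src/FFOE/train.py, not the repo's real API.

def fake_loader(split):
    """Yield one dummy batch per call, with tuple length varying by split."""
    if split == "val":
        # val batches carry the 9 fields the evaluation loop expects
        yield ("v", "b", "w", "e", "attr", "q", "s", "a", "ans")
    else:
        # suppose test batches append an extra field, e.g. a question id
        yield ("v", "b", "w", "e", "attr", "q", "s", "a", "ans", "qid")

for batch in fake_loader("test"):
    # Raises: ValueError: too many values to unpack (expected 9)
    v, b, w, e, attr, q, s, a, ans = batch
```

If the extra fields are simply trailing additions, starred unpacking (`v, b, w, e, attr, q, s, a, ans, *extra = batch`) would tolerate them, but whether that produces meaningful test scores depends on what the extra fields actually are.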


Dhan800 commented Nov 10, 2022

I have questions about their results too. Their original paper makes no mention of integrating LXMERT into their model, yet in this code base the model is already LXMERT-integrated, so there is no way to verify the performance of their method before the integration.
