I read your paper and tried to reproduce the scores reported in it.
However, I could not get those scores. Could you give me any pointers on how to reproduce the GQA results (val: 73.6, test-dev: 72.1)?
I followed the instructions in this repository and got a score of 71.43 (99.86) with test.sh.
Does the split 'val' produce the score for val (which should be 73.6)?
With the split 'test', I got the following error:
Traceback (most recent call last):
File "evaluate.py", line 159, in <module>
eval_cfrf_score, _, _, _, bound = evaluate(model, eval_loader, args)
File "src/FFOE/train.py", line 158, in evaluate
for i, (v, b, w, e, attr, q, s, a, ans) in enumerate(dataloader):
ValueError: too many values to unpack (expected 9)
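For context, this ValueError occurs whenever a loop unpacks each batch into a fixed number of variables but the dataloader yields tuples with more fields. The sketch below reproduces the error with hypothetical data (not the repo's actual dataloader); the assumption is that the 'test' split yields batches with a different arity, e.g. 10 fields instead of 9:

```python
# Hypothetical batch with 10 fields, standing in for what the 'test'
# split's dataloader might yield (the repo's loop expects 9).
batch = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

try:
    # Mirrors the unpacking in src/FFOE/train.py line 158.
    v, b, w, e, attr, q, s, a, ans = batch
except ValueError as err:
    print(err)  # -> too many values to unpack (expected 9)

# One defensive workaround: collect any surplus fields into a catch-all
# list, so the loop works regardless of how many extra fields appear.
v, b, w, e, attr, q, s, a, ans, *extra = batch
print(len(extra))  # -> 1
```

This suggests the 'test' split's dataset class returns a tuple shaped differently from the 'val' split's, so the evaluation loop (or the dataset's `__getitem__`) would need to be adjusted accordingly.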
I would really appreciate it if you could answer my question.
I have questions about their results too. The original paper does not mention integrating LXMERT into their model, yet in this code base the model is already LXMERT-integrated, so there is no way to verify the performance of their method before that integration.