Interpreting Attention Visualization #61

Open
fennievdg opened this issue Jan 14, 2025 · 0 comments

@fennievdg

Hi Sybil authors!

Could you elaborate on how to interpret the attention visualization?

  • Is the red overlay the image attention and the blue overlay the volume attention?
  • If so, how are the image and volume attentions related? What aspect of the model's attention does each one explain?
  • The blue masks appear to be binary. Does a masked region indicate non-zero attention, maximal attention, or attention above some other threshold? (I sketch my current guess in code below.)
  • If red heatmaps appear on several sections of the scan, does Sybil consider those sections equally important, or does it attend more to one of them?
  • Sometimes the blue masks extend beyond the red heatmaps, or pieces of blue mask appear far away from any red heatmap. What does this mean?
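
For concreteness, here is a minimal NumPy sketch of how I currently picture the two attentions combining, and of the binary-versus-threshold question above. This is not Sybil's actual code: the array names, shapes, and the multiplicative combination are my own assumptions.

```python
import numpy as np

# Hypothetical inputs (names and shapes are my assumptions, not Sybil's API):
#   image_attention:  per-pixel attention for each slice, shape (num_slices, H, W)
#   volume_attention: per-slice attention over the whole scan, shape (num_slices,)
rng = np.random.default_rng(0)
image_attention = rng.random((20, 64, 64))
volume_attention = rng.random(20)

# Normalize both maps to [0, 1] so they are on a comparable scale.
image_attention = (image_attention - image_attention.min()) / np.ptp(image_attention)
volume_attention = (volume_attention - volume_attention.min()) / np.ptp(volume_attention)

# My guess at how the two relate: weight each slice's pixel-level attention by
# that slice's volume-level attention to get one per-voxel saliency map.
saliency = volume_attention[:, None, None] * image_attention

# The binary-mask question: a hard threshold like this produces a two-valued
# mask, whereas rendering the saliency directly gives a continuous heatmap.
blue_mask = saliency > np.quantile(saliency, 0.99)  # example threshold, chosen arbitrarily
print("distinct mask values:", np.unique(blue_mask).size)      # 2 -> binary
print("distinct saliency values:", np.unique(saliency).size)   # many -> continuous
```

Is the blue overlay produced by a hard threshold like this, or does it show continuous attention values?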

Thank you!
