
Trace back relevant words from claim predictions' feature importances #1

Open · 4 tasks
jcezarms opened this issue Oct 3, 2020 · 0 comments
Assignee: jcezarms
Labels: enhancement (New feature or request), interpretability (Relates to a model's interpretability)

jcezarms commented Oct 3, 2020

By identifying the highest-ranked EMB_# feature importances for a prediction, it should be possible to trace back to the most important words within a claim, from the model's perspective. For example, if EMB_3 is the highest-ranked feature in a claim's robustness prediction, the 3rd word of the claim would be flagged as the most relevant.

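A minimal sketch of that trace-back, assuming the model exposes per-prediction importances as a `{feature name: score}` mapping and that `EMB_i` aligns with the i-th whitespace token of the claim (1-based, matching the `EMB_3` example above); `top_claim_words` and its arguments are illustrative names, not existing code:

```python
import re

def top_claim_words(claim, importances, k=10):
    """Trace the k highest-ranked EMB_# importances back to claim words."""
    tokens = claim.split()
    # Rank all features by importance score, highest first.
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    words = []
    for name, _score in ranked:
        match = re.fullmatch(r"EMB_(\d+)", name)
        if not match:
            continue  # skip non-embedding features
        idx = int(match.group(1)) - 1  # 1-based index, per the EMB_3 example
        if 0 <= idx < len(tokens):
            words.append(tokens[idx])
        if len(words) == k:
            break
    return words

# e.g. top_claim_words("vaccines reduce severe cases",
#                      {"EMB_1": 0.05, "EMB_3": 0.4}, k=2) -> ['severe', 'vaccines']
```
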
  • Extract the 10 most relevant words from embedding importances, for each claim (see the trace-back sketch above)
  • Store extracted words within each claim (as a list) and the discussion it belongs to (as a Counter); see the sketch after this list
  • Make a wordcloud for each discussion's extracted words (same sketch)
  • Determine whether the final result adds positively to interpretability, be it at the claim or discussion level.
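
One possible shape for the storage and wordcloud steps, assuming each claim record carries a hypothetical `discussion_id` field plus the `relevant_words` list produced above, and using the external `wordcloud` package (`pip install wordcloud`) for rendering:

```python
from collections import Counter

from wordcloud import WordCloud

# Illustrative claim records; relevant_words would come from top_claim_words above.
claims = [
    {"discussion_id": "d1", "relevant_words": ["vaccines", "severe", "cases"]},
    {"discussion_id": "d1", "relevant_words": ["cases", "placebo"]},
]

# Aggregate each discussion's words into a Counter, as the task list describes.
discussion_words = {}
for claim in claims:
    counter = discussion_words.setdefault(claim["discussion_id"], Counter())
    counter.update(claim["relevant_words"])

# Render one wordcloud per discussion, weighted by word frequency.
for disc_id, counter in discussion_words.items():
    cloud = WordCloud(width=800, height=400).generate_from_frequencies(counter)
    cloud.to_file(f"wordcloud_{disc_id}.png")
```
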
jcezarms added the enhancement and interpretability labels on Oct 3, 2020
jcezarms self-assigned this on Oct 3, 2020