Regarding Visualize Attention #5
Comments
You should call the function utils.visualize_attention anywhere in your test code. Another way to do it would be to add a secondary output to the attention layer that returns the attention weights as well.
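To illustrate the "secondary output" suggestion above, here is a minimal NumPy sketch (not the repo's actual layer) of an attention pooling step that returns its weights alongside the pooled vector, so the weights can be logged or plotted later. All names here are illustrative assumptions, not this project's API:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_weights(hidden_states, context_vector):
    """Dot-product attention that returns the attention weights
    alongside the pooled output, so they can be visualized.

    hidden_states:  (timesteps, dim) encoder outputs
    context_vector: (dim,) learned query vector (illustrative)
    """
    scores = hidden_states @ context_vector    # (timesteps,)
    weights = softmax(scores)                  # sums to 1 over timesteps
    pooled = weights @ hidden_states           # (dim,) weighted sum
    return pooled, weights

# toy example: three timesteps, dim 2
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
q = np.array([1.0, 0.0])
pooled, w = attention_with_weights(h, q)
```

In a Keras model, the equivalent move is to have the custom attention layer return (or expose) `weights` as a second tensor, then fetch it at prediction time for the sequence you want to inspect.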
By test code, do you mean code that needs to be written for testing, like eval.py, or a particular section of code in hatt.py?
Hi Alex, thanks for this, brilliant to have an accessible self-attention layer for Keras! I'm having issues with this function as well. I'm passing it ([tokens], model, reverse_dict, 10), and I'm passing the model (which is working fine with some slightly altered code) as model.predict([[tokens]]). I get:
If I log [topkeys] I get an array of numbers, but they don't correspond to the test sequence either in terms of the tokens themselves or the indices. For example, if [topkeys] returns [14, 25, 90], is [topkeys] supposed to return indices or tokens?
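The index-vs-token confusion above usually comes down to whether the top-k values are sequence positions or vocabulary ids. A hedged sketch of the disambiguation, assuming `reverse_dict` maps token ids to words (the helper name and call shape here are hypothetical, not the repo's code):

```python
import numpy as np

def top_attention_tokens(weights, token_ids, reverse_dict, k=3):
    """Return the k tokens with the highest attention weight.

    weights:      per-timestep attention weights for one sequence
    token_ids:    the integer token ids fed to the model
    reverse_dict: {token_id: word} mapping (assumed layout)
    """
    # argsort gives sequence POSITIONS, not vocabulary ids;
    # they must be mapped through token_ids before reverse_dict
    top_positions = np.argsort(weights)[::-1][:k]
    return [reverse_dict[token_ids[i]] for i in top_positions]

weights = np.array([0.1, 0.5, 0.05, 0.35])
token_ids = [7, 42, 3, 19]
reverse_dict = {7: "the", 42: "movie", 3: "was", 19: "great"}
top = top_attention_tokens(weights, token_ids, reverse_dict, k=2)
# → ['movie', 'great']
```

If the raw top-k array is being read directly as tokens (or directly as positions) without this two-step mapping, the numbers will look unrelated to the test sequence, which matches the symptom described.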
Also, Alex, I was wondering why you're using binary_crossentropy when normally you'd use categorical_crossentropy for multi-label?
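For context on this question: binary cross-entropy treats each label as an independent yes/no decision (so several labels can be active at once), while categorical cross-entropy assumes a softmax over exactly one true class. A small NumPy illustration of the two losses (standalone formulas, not the repo's code):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # one independent Bernoulli term per label: suits multi-label,
    # where several labels can be on simultaneously
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def categorical_crossentropy(y_true, y_pred):
    # assumes y_pred is a softmax distribution over mutually
    # exclusive classes: suits single-label multi-class
    return -np.sum(y_true * np.log(y_pred))

# multi-label target: two labels active at once, sigmoid-style predictions
y_multi = np.array([1.0, 0.0, 1.0])
sigmoid_pred = np.array([0.9, 0.2, 0.8])
bce = binary_crossentropy(y_multi, sigmoid_pred)

# single-label target: one-hot, softmax-style predictions
y_single = np.array([0.0, 1.0, 0.0])
softmax_pred = np.array([0.2, 0.7, 0.1])
cce = categorical_crossentropy(y_single, softmax_pred)
```

This is why binary_crossentropy (with sigmoid outputs) is the usual pairing when a document can legitimately carry more than one label.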
Where should I call visualize_attention from? I'm going through your code and haven't been able to work out where to call it. Can you tell me where to call the attention-visualization code, as in hatt.py? I have my own dataset, and I'm trying to visualize which words caused the labels to be assigned.