Hello, your work is amazing. I have a question about the alpha parameter of the cross-attention and self-attention fusion module in the decoder. It was 0.5 in version one and in the paper, but it became 0.3 in version two. Does this mean the network now pays more attention to the encoder's features?
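For context, a common way to fuse the two attention branches is a convex combination weighted by alpha. The sketch below is a hypothetical illustration (the function name `fuse` and the assignment of alpha to the self-attention branch are assumptions, not taken from the repository); under this convention, lowering alpha from 0.5 to 0.3 would indeed shift weight toward the cross-attention branch, which carries the encoder features:

```python
import numpy as np

def fuse(self_attn_out, cross_attn_out, alpha=0.3):
    # Hypothetical fusion: alpha weights the self-attention branch,
    # (1 - alpha) weights the cross-attention (encoder-conditioned) branch.
    return alpha * self_attn_out + (1 - alpha) * cross_attn_out

# Stand-in feature vectors to show the effect of alpha.
s = np.ones(4)   # self-attention output (all ones)
c = np.zeros(4)  # cross-attention output (all zeros)
print(fuse(s, c, alpha=0.5))  # balanced mix
print(fuse(s, c, alpha=0.3))  # smaller self-attention contribution
```

If the actual module assigns alpha to the cross-attention branch instead, the interpretation flips, so it would be good to confirm which branch alpha multiplies in the released code.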