Hi,
Thanks for sharing this great work.
I just plotted the gradients of the entropy loss and of the proposed loss in your work, and I find the quoted claim hard to verify.
You said:

> After plotting the gradient function image on Fig. 1, we can see that the gradient of the high probability point is much larger than the mediate point. As a result, the key principle behind the entropy minimization method is that the training of target samples is guided by the high probability area, which is assumed to be more accurate.
In fact, the gradient is largest when the probability is near 82%, and it vanishes as the prediction becomes confident. It is very hard to imagine that the more confident you are, the larger the gradient you get. Usually, entropy minimization leads to a more stable model.

[Figure: entropy loss and its gradient]
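For reference, here is a minimal sketch of the computation behind my plot. I am assuming a binary prediction p = sigmoid(z) and taking the gradient of the entropy with respect to the logit z (not with respect to p directly); under that assumption the gradient magnitude peaks near p ≈ 0.82 and decays toward zero as p → 1:

```python
import numpy as np

# Binary entropy H(p) = -p*log(p) - (1-p)*log(1-p), with p = sigmoid(z).
# Chain rule: dH/dz = dH/dp * dp/dz = log((1-p)/p) * p*(1-p).
p = np.linspace(0.5, 0.999, 10_000)
grad = np.log((1 - p) / p) * p * (1 - p)

# The gradient magnitude peaks well before the prediction is confident...
p_star = p[np.argmax(np.abs(grad))]
print(f"|dH/dz| peaks at p ~= {p_star:.3f}")  # ~0.824

# ...and has almost vanished once the prediction is confident.
i = np.argmin(np.abs(p - 0.99))
print(f"|dH/dz| at p = 0.99: {abs(grad[i]):.4f}")  # ~0.046
```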
Your proposed loss does exhibit a lower gradient in high-confidence regions; that is true.

[Figure: the proposed loss and its gradient]
The loss is plotted as `-p ** 2 - (1 - p) ** 2 + 1`; I added a constant so that the loss is non-negative everywhere.
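Here is the matching sketch for your loss, under the same logit-gradient assumption. Since `-p ** 2 - (1 - p) ** 2 + 1` simplifies to `2 * p * (1 - p)`, we have dL/dp = 2 - 4p and hence dL/dz = (2 - 4p) * p * (1 - p), which decays to zero even faster than the entropy gradient in the confident region:

```python
import numpy as np

p = np.linspace(0.5, 0.999, 10_000)

# Proposed loss L(p) = -p**2 - (1-p)**2 + 1, which simplifies to 2*p*(1-p).
# Assuming p = sigmoid(z): dL/dz = dL/dp * dp/dz = (2 - 4*p) * p * (1 - p).
grad_proposed = (2 - 4 * p) * p * (1 - p)
# Entropy gradient w.r.t. the logit, for comparison.
grad_entropy = np.log((1 - p) / p) * p * (1 - p)

i = np.argmin(np.abs(p - 0.99))  # a confident prediction
print(f"|dL/dz| at p = 0.99: {abs(grad_proposed[i]):.4f}")  # ~0.019
print(f"|dH/dz| at p = 0.99: {abs(grad_entropy[i]):.4f}")   # ~0.046
```

So the proposed loss does down-weight the already-confident points, which is consistent with the second plot.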
Thanks in advance for your attention and comments.