add Action Dropout on minerva WN18RR #12
Comments
You mean changing the action dropout rate from 0.0 to 0.1? 0.9 is a very aggressive dropout rate, and 1.0 implies dropping everything and randomly sampling an edge. If so, a 0.1 dropout rate shouldn't make such a huge difference. Would you mind posting the action dropout code you added to the original MINERVA code? And how many iterations did you train to observe this difference in results? It would be great if you could plot the training curve before and after adding action dropout for comparison.
@todpole3 I re-edited my issue to show more detailed information about the training results.
@David-Lee-1990 For us, we only use action dropout to encourage diverse sampling; the policy gradient is still computed using the original probability vector.
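For concreteness, the scheme described here (sample from a perturbed distribution, but score the sampled action under the original one) can be sketched in plain numpy. This is a hypothetical illustration; the function and variable names are the editor's, not from either codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_action_dropout(probs, keep_rate, epsilon=np.finfo(float).eps):
    # Perturb the distribution for sampling only: each action is kept with
    # probability keep_rate, and dropped actions get a tiny epsilon mass so
    # every edge remains reachable.
    mask = rng.random(probs.shape) < keep_rate
    perturbed = probs * mask + epsilon * ~mask
    perturbed = perturbed / perturbed.sum()  # renormalize before sampling
    action = rng.choice(len(probs), p=perturbed)
    # The policy gradient still uses the ORIGINAL probability of the action.
    return action, np.log(probs[action])

probs = np.array([0.7, 0.2, 0.1])
action, logp = sample_with_action_dropout(probs, keep_rate=0.9)
```

The key design point is that the perturbed vector is used only inside the sampler; the returned log-probability comes from the unperturbed `probs`, so the gradient estimator is unchanged.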
In MINERVA there is no action dropout, and I tried to add this improvement to it. Following your idea, I use dropout to encourage diverse sampling while the policy gradient is still computed using the original distribution. I tested this in two versions: one relation-only and the other not. Both versions show results similar to what I stated in the issue.
@David-Lee-1990 My question is, after adding "action dropout", did you use the updated probability vector to compute the policy gradient?
No, I used the original one.
@David-Lee-1990 I cannot spot anything wrong with the code snippet you posted. Thanks for sharing. It might have something to do with the integration with the rest of the MINERVA code. Technically you only disturbed the sampling probability by a small factor (and your policy gradient computation still follows the traditional formula), so the result shouldn't change so significantly no matter what. Would you mind running a sanity-check experiment by setting the dropout rate to 0.01 and seeing how the result turns out? Technically the change should be very small. Then maybe try 0.02 and 0.05 and see if the results change gradually?
@David-Lee-1990 Very interesting. I want to look deeper into this issue. The most noticeable difference is that the dev result you reported without action dropout is close to what we get with 0.1 action dropout, and significantly higher than what we get without action dropout. Besides the action dropout rate, did you use the same set of hyperparameters as in our configuration files? And one more question: did you observe a similar trend on other datasets using the MINERVA code + action dropout?
@David-Lee-1990 Are the plots shown above generated with |
@todpole3 About the dev result, I need to clarify that I used the "sum" method when calculating Hits@k and MRR, which is different from the "max" method you used. The "sum" method ranks a predicted entity by summing up the probabilities of all paths that predict the same end entity. The corresponding code is from MINERVA, where lse computes the log-sum. I also tested the "max" method on WN18RR; the MRR on the dev set is as follows. For comparison, I paste the results of the "sum" and "max" methods together here:
relation only |
I give my hyperparameters in your notation as follows: group_examples_by_query="False"
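To make the "sum" vs. "max" evaluation difference concrete, here is a minimal numpy sketch (the editor's illustration, not MINERVA's lse code): "sum" log-sum-exps the log-probabilities of all rollouts ending at the same entity, while "max" keeps only the best single path, and the two can rank candidates differently:

```python
import numpy as np

def aggregate_scores(end_entities, log_probs, method="sum"):
    # Aggregate per-path log-probabilities into one score per candidate entity.
    # "sum": log-sum-exp over all paths reaching the same entity.
    # "max": keep only the highest-scoring path.
    scores = {}
    for e, lp in zip(end_entities, log_probs):
        if method == "sum":
            scores[e] = np.logaddexp(scores[e], lp) if e in scores else lp
        else:
            scores[e] = max(scores.get(e, -np.inf), lp)
    return scores

# Two rollouts land on entity 7 (prob 0.3 each), one on entity 3 (prob 0.5).
ends = [7, 3, 7]
lps = [np.log(0.3), np.log(0.5), np.log(0.3)]
s_sum = aggregate_scores(ends, lps, "sum")  # entity 7 scores log(0.6), outranking 3
s_max = aggregate_scores(ends, lps, "max")  # entity 3's 0.5 outranks 7's 0.3
```

This shows why the two metrics are not directly comparable: when many rollouts converge on the same answer, "sum" rewards that consensus while "max" ignores it.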
|
Hi, I tried the action dropout trick on the original MINERVA code on WN18RR. However, Hits@10 decreased from 0.47 to 0.37 when the action dropout rate changed from 1.0 to 0.9. Are there any other auxiliary tricks needed for action dropout?
The following is the action dropout code, where `self.params['flat_epsilon'] = float(np.finfo(float).eps)`:

```python
pre_distribution = tf.nn.softmax(scores)
if mode == "train":
    pre_distribution = pre_distribution * dropout_mask + self.params['flat_epsilon'] * (1 - dropout_mask)
dummy_scores_1 = tf.zeros_like(prelim_scores)
pre_distribution = tf.where(mask, dummy_scores_1, pre_distribution)
dist = tf.distributions.Categorical(probs=pre_distribution)
action = tf.to_int32(dist.sample())
```
And the dropout mask is given as follows:

```python
rans = np.random.random(size=[self.batch_size * self.num_rollouts, self.max_num_actions])
dropout_mask = np.greater(rans, 1.0 - self.score_keepRate)
```
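As a quick sanity check (an editor's addition, not from the thread), this mask construction keeps each action with probability `self.score_keepRate`, since `np.greater(rans, 1.0 - keep_rate)` is True for a uniform sample with exactly that probability; the shapes below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)
keep_rate = 0.9
# Hypothetical shapes standing in for batch_size * num_rollouts and max_num_actions.
rans = rng.random(size=(128 * 20, 200))
dropout_mask = np.greater(rans, 1.0 - keep_rate)
empirical_keep = dropout_mask.mean()  # fraction of kept actions, close to keep_rate
```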
The other mask (`mask`) in the above code is for padding out unavailable actions.
And I also calculate the final softmax loss with the original distribution as follows:

```python
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=scores, labels=label_action)
```
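For reference, this loss is the negative log-probability of the sampled action under the original (pre-dropout) softmax of `scores`, which is why the policy gradient is unaffected by the dropout mask. A numpy equivalent for a single example (the editor's sketch, not the TensorFlow internals):

```python
import numpy as np

def sparse_softmax_xent(logits, label):
    # Numerically stable log-softmax, then pick out the labeled action's
    # negative log-probability, matching what
    # tf.nn.sparse_softmax_cross_entropy_with_logits computes per example.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

logits = np.array([2.0, 1.0, 0.5])
loss = sparse_softmax_xent(logits, label=0)
```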
And when I change self.score_keepRate from 1.0 to 0.9, while training 100 batches with batch size 128, the Hits@k values on the dev set are as follows:
For 1000 batches of training, the MRR on the dev set varies as follows:

The Hits@1 over training batches varies as follows: