The compute_loss function is wrong for the Simplest Policy Gradient #414

alantpetrescu opened this issue Jun 5, 2024 · 2 comments

alantpetrescu commented Jun 5, 2024

I have been reading the three parts of the "Introduction to RL" section, and I noticed in Part 3 that the compute_loss function for the Simplest Policy Gradient returns the mean of the product of the log probabilities of the actions taken by the agent and the weights of those actions, i.e. the finite-horizon undiscounted returns of the episodes in which they were taken.

[image: the simple policy gradient estimate $\hat{g}$ from Part 3, i.e. the grad-log-prob terms weighted by $R(\tau)$, summed and divided by the number of trajectories $|\mathcal{D}|$]
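For reference, here is a minimal self-contained sketch of the pattern I am describing (the network sizes and the tiny get_policy helper are placeholders of mine in the style of the PyTorch 1_simple_pg.py example, not copied from it):

```python
import torch
from torch.distributions.categorical import Categorical

# Toy policy network so the sketch is self-contained; the sizes (4 observation
# dims, 2 actions) are placeholders, not the ones used in the actual example.
logits_net = torch.nn.Sequential(torch.nn.Linear(4, 2))

def get_policy(obs):
    # Categorical action distribution parameterized by the network's logits
    return Categorical(logits=logits_net(obs))

def compute_loss(obs, act, weights):
    # log pi_theta(a_t | s_t) for every (obs, act) pair collected in the epoch
    logp = get_policy(obs).log_prob(act)
    # `weights` holds R(tau) of the episode each pair came from; .mean() divides
    # by the total number of timesteps in the batch, not the number of episodes
    return -(logp * weights).mean()
```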

In the estimate of the basic policy gradient above, the sum of products is divided by the number of trajectories, but in the implementation, when you return the mean, the sum of products is divided by the total number of actions taken across all the trajectories in one epoch. Maybe I am misunderstanding this, but I wanted to get a clear picture of the implementation.

[image: screenshot of the compute_loss implementation from the Simplest Policy Gradient example]
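To spell out the two normalizations I mean (writing $\mathcal{D}$ for the batch of trajectories collected in one epoch and $T_\tau$ for the length of trajectory $\tau$):

```math
\hat{g}_{\text{docs}} = \frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t=0}^{T_\tau} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)
\qquad \text{vs.} \qquad
\hat{g}_{\text{code}} = \frac{1}{\sum_{\tau \in \mathcal{D}} T_\tau} \sum_{\tau \in \mathcal{D}} \sum_{t=0}^{T_\tau} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)
```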

@hirodeng

I noticed the same problem.

earnesdm commented Aug 25, 2024

@alantpetrescu I think you are correct that the equation as written differs from what is implemented in the code, but only by a constant multiple. Since we multiply the gradient estimate by the learning rate when performing gradient ascent, the constant multiple doesn't really matter.
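Concretely, using the notation from the earlier comment and writing $\alpha$ for the learning rate, the two estimates differ only by the ratio of the two denominators, which just rescales the effective step size:

```math
\hat{g}_{\text{code}} = \frac{|\mathcal{D}|}{\sum_{\tau \in \mathcal{D}} T_\tau}\, \hat{g}_{\text{docs}},
\qquad
\theta \leftarrow \theta + \alpha\, \hat{g}_{\text{code}} = \theta + \left(\alpha \cdot \frac{|\mathcal{D}|}{\sum_{\tau \in \mathcal{D}} T_\tau}\right) \hat{g}_{\text{docs}}
```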
