Training not "working" on my own custom maps #98
Unanswered · Rajivrocks-Ltd asked this question in Q&A · 1 comment, 1 reply
-
Hi, first things first, have you carefully checked that your reward function and observations make sense? You need to record a reward function for your track; it cannot work if you use the default reward function (which is for the tmrl-test track).
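A minimal sanity check, in case it helps (everything below is an assumption from my own setup rather than an official TMRL recipe: if I recall correctly the reward is recorded with `python -m tmrl --record-reward`, the environment can be inspected with `python -m tmrl --check-environment`, and the recorded reward ends up under `TmrlData/reward/` in your home folder):

```python
# Minimal sanity check, not an official TMRL utility.
# Assumption: the recorded reward is pickled at ~/TmrlData/reward/reward.pkl
# (the default layout on my install); adjust the path if yours differs.
import pickle
from pathlib import Path

reward_path = Path.home() / "TmrlData" / "reward" / "reward.pkl"

if not reward_path.exists():
    print(f"No recorded reward at {reward_path} - record one for your custom track first.")
else:
    with open(reward_path, "rb") as f:
        waypoints = pickle.load(f)  # assumed format: sequence of recorded positions along the track
    print(f"Recorded reward contains {len(waypoints)} points.")
```

If the file is missing or only contains a handful of points, the agent sees essentially no reward signal on your track, which would be consistent with the flat return curve you describe.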
-
Hello yet again. I have made a very short custom map of my own to test the SAC example agent on. What I find, though, is that even after training for 3 hours on my GPU I still get a "train return" of 0. The graph on W&B reflects this as well: just a straight line at zero. The "episode_train_length" is also stuck at 81 as far as I can tell. My first thought was that this might have something to do with the hyperparameters. I assume the hyperparameters in the config.json are tuned specifically for the map that was provided with TMRL and that they won't really translate to good performance on other tracks.
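(For reference, this is roughly how I inspect the config I am referring to; the `TmrlData/config/config.json` path is just the default location on my machine, adjust if yours differs.)

```python
# Quick dump of config.json so I can see which hyperparameters are actually set.
# The path below is the default TmrlData location on my machine (an assumption).
import json
from pathlib import Path

config_path = Path.home() / "TmrlData" / "config" / "config.json"

with open(config_path) as f:
    config = json.load(f)

# Print the top-level keys with a short preview, to see what is tunable per track.
for key, value in config.items():
    preview = f"{type(value).__name__} with {len(value)} entries" if isinstance(value, (dict, list)) else value
    print(f"{key}: {preview}")
```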
Coming to the next point: I am testing a DuelingDQN agent as well. I tried to train it on my own map, but the same thing happens: no training occurs, the "train return" is always 0, and the "episode_train_length" is also stuck at 81. But when I move back to the tmrl-test map, it somehow starts to train.
The only thing I can think of, which is honestly pretty logical, is that hyperparameters need to be searched for my own custom track.
Am I correct in my assumption that I need different hyperparameters for different tracks, and that the hyperparameters in the config.json are specifically tuned for the provided tmrl-test map?
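To clarify what I mean by searching, something as naive as the sketch below is what I had in mind; `evaluate` is a placeholder, not a real TMRL call:

```python
# Naive random search over two hyperparameters - purely a sketch.
import random

def evaluate(lr: float, gamma: float) -> float:
    # Placeholder: in practice this would patch config.json with the sampled
    # values, run a short TMRL training session on the custom track, and
    # return the resulting mean episode return.
    return 0.0

best_score, best_params = float("-inf"), None
for trial in range(20):
    lr = 10 ** random.uniform(-5, -3)      # learning rate, log-uniform in [1e-5, 1e-3]
    gamma = random.uniform(0.95, 0.999)    # discount factor
    score = evaluate(lr, gamma)
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "gamma": gamma}

print("best:", best_score, best_params)
```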