Unity Environment #10
Hi @Ademord, that line isn't the correct one. What you are looking for is in tennis_maddpg.py:15-19. Let me know if you want some help. Better Unity integration is on my long todo list, but as you can see from this repo's activity... I'm quite busy with other stuff. That said, people in need always add priority to things :)
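For anyone landing here later, a rough sketch of what those referenced lines set up might look like the following. The `MultiAgentUnityTask` import path and constructor signature here are assumptions for illustration only, not the verified API; check tennis_maddpg.py itself for the real code.

```python
from mlagents_envs.environment import UnityEnvironment
from ai_traineree.tasks import MultiAgentUnityTask  # assumed import path

# Attach to a compiled Unity build; file_name is a placeholder path.
unity_env = UnityEnvironment(file_name="./builds/Tennis", no_graphics=True)

# Wrap it so that many agents in the scene can be driven at once.
task = MultiAgentUnityTask(unity_env)  # assumed constructor signature
```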
Hello, thanks for getting back to me. I managed to use the GymWrapper Unity provides with SB3, so for now I have what I need, but I might come back to this repo to contribute (after the thesis is done). If you don't mind writing me a sentence highlighting your repo's contribution compared to the GymWrapper implementation, I can mention it in my thesis :)
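For reference, the GymWrapper + SB3 combination mentioned here usually looks something like the sketch below. The import path for `UnityToGymWrapper` has moved between ml-agents releases, and the build path is a placeholder.

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper
from stable_baselines3 import PPO

# Launch a compiled Unity build and expose it as a single-agent Gym env.
unity_env = UnityEnvironment(file_name="./builds/MyEnv")  # placeholder path
env = UnityToGymWrapper(unity_env)

# Any SB3 algorithm can then train on the wrapped environment.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```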
Cool :) Any contributions are more than welcome. Again, feel free to let me know, either here or via email, whether there's anything you'd like to see or use.

As for the difference... Both packages, ai-traineree and ml-agents, are much more than just Unity wrappers. The specific difference between MultiAgentUnityTask and UnityToGymWrapper is that the former allows controlling many agents in an environment, whereas the latter focuses on a single agent. At least that was the case the last time I checked, which was many months ago.

In case you are asking about the difference between the two packages: in general, I think both are trying to solve a similar problem, i.e. providing a package with customizable deep reinforcement learning agents. There is an overlap, except that ml-agents is a much bigger project, as it has been around for longer and is a "job" for many people. The reasons AI Traineree exists are that 1) ml-agents didn't support multi-agent training, 2) it has a super convoluted codebase which seems focused on agent usage, not development, and 3) I don't agree with their philosophy on training agents. (And the lack of communication, but hey, I'm just a rando on the internet, so why should they reply?) Their product is Unity, and ml-agents is just an addition.

Don't feel any pressure, but if you find this package useful and have thought about citing it, feel free to use the citation just added to the Readme.
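To make the single-agent vs multi-agent distinction concrete, here is a minimal sketch of the two loop shapes. The task-side method names and return types are assumptions for illustration; only the contrast in shape is the point.

```python
def run_single_agent_episode(env):
    # UnityToGymWrapper follows the Gym API: one obs, one action per step.
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)

def run_multi_agent_episode(task, num_agents):
    # A MultiAgentUnityTask-style wrapper (method names assumed) trades in
    # per-agent lists: each step takes N actions and returns N observations.
    obs_list = task.reset()
    dones = [False] * num_agents
    while not all(dones):
        actions = [task.action_space.sample() for _ in range(num_agents)]
        obs_list, rewards, dones, info = task.step(actions)
```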
Hey, I am glad I ran into your repository via the ml-agents threads.
How could I use your project to import my Unity environment?
I see you do
I want to test different RL algorithms on my Unity env.