
Unity Environment #10

Open
Ademord opened this issue Jun 5, 2021 · 3 comments


Ademord commented Jun 5, 2021

Hey, I'm glad I ran into your repository from the ml-agents threads.

How could I use your project to import my Unity environment? I see you do

task = GymTask('CartPole-v1')

I want to test different RL algorithms on my Unity env.

@laszukdawid (Owner)

Hi @Ademord,

That line isn't the correct one; GymTask is a wrapper class that provides OpenAI Gym-compatible APIs.
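To make the shape of that API concrete, here is a minimal sketch of what a Gym-compatible task wrapper exposes. The class names below (`GymTaskSketch`, `DummyGymEnv`) are illustrative stand-ins, not the actual ai-traineree implementation; the stand-in env lets the sketch run without Gym installed.

```python
import random

class DummyGymEnv:
    """Stand-in for a gym.Env so this sketch runs without Gym installed."""
    def reset(self):
        # Return an initial 4-dimensional observation.
        return [0.0] * 4

    def step(self, action):
        # Return the standard Gym 4-tuple: obs, reward, done, info.
        obs = [random.random() for _ in range(4)]
        return obs, 1.0, False, {}

class GymTaskSketch:
    """Illustrative wrapper exposing the standard reset/step interface."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)

task = GymTaskSketch(DummyGymEnv())
obs = task.reset()
obs, reward, done, info = task.step(0)
```

Any RL agent written against `reset()`/`step()` can then drive the wrapped environment without knowing what sits behind it.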

What you are looking for is in tennis_maddpg.py:15-19.

Let me know if you want some help. Better Unity integration is on my long to-do list, but as you can see from this repo's activity, I'm quite busy with other things. That said, people asking for something always bumps its priority :)


Ademord commented Jun 18, 2021

Hello, thanks for getting back to me. I managed to use the Gym wrapper Unity provides with SB3, so for now I have what I need, but I might come back to this repo to contribute (after the thesis is done).
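The pattern described here can be sketched as follows: wrap a built Unity environment with Unity's UnityToGymWrapper, then hand the result to Stable-Baselines3. Import paths for the wrapper have moved between ml-agents releases, so treat this as a sketch rather than a version-exact recipe; the function name and the imports are deferred so the sketch can be read without ml-agents or SB3 installed.

```python
def train_on_unity_env(env_file, timesteps=100_000):
    """Illustrative helper: train PPO on a built Unity environment.

    `train_on_unity_env` is a hypothetical name for this sketch,
    not part of any package discussed in the thread.
    """
    # Deferred imports: the function can be defined without these
    # packages present, and only needs them when actually called.
    from mlagents_envs.environment import UnityEnvironment
    from gym_unity.envs import UnityToGymWrapper  # path differs by ml-agents version
    from stable_baselines3 import PPO

    # Launch the built Unity executable and expose it as a Gym env.
    unity_env = UnityEnvironment(file_name=env_file)
    env = UnityToGymWrapper(unity_env)

    # Any SB3 algorithm accepting a Gym env works from here.
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=timesteps)
    return model
```

Deferring the imports keeps the module importable on machines without Unity; the cost is that a missing dependency only surfaces when the function is first called.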

If you don't mind writing me a sentence highlighting the contribution of your repo compared to the GymWrapper implementation, I can mention it in my thesis :)

laszukdawid added a commit that referenced this issue Jun 19, 2021
@laszukdawid (Owner)

Cool :) Any contributions are more than welcome. Again, feel free to let me know, either here or by email, if there's anything you'd like to see or use.

As for the difference: both packages, ai-traineree and ml-agents, are much more than just Unity wrappers. The specific difference between MultiAgentUnityTask and UnityToGymWrapper is that the former allows controlling many agents in an environment, whereas the latter focuses on a single agent. At least that was the case the last time I checked, which was many months ago.
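The single-agent vs multi-agent distinction shows up most clearly in the shapes a `step()` call returns. The sketch below is a hypothetical illustration of that contrast (the class names are made up for this example, not the actual MultiAgentUnityTask or UnityToGymWrapper APIs): a single-agent task returns one observation and one reward, while a multi-agent task returns one of each per agent.

```python
class SingleAgentTask:
    """Gym-style: one agent, so one obs/reward/done per step."""
    def reset(self):
        return [0.0, 0.0]  # a single observation vector

    def step(self, action):
        obs, reward, done, info = [0.1, 0.2], 1.0, False, {}
        return obs, reward, done, info

class MultiAgentTask:
    """Multi-agent: every returned quantity is a list indexed by agent."""
    def __init__(self, num_agents):
        self.num_agents = num_agents

    def reset(self):
        return [[0.0, 0.0] for _ in range(self.num_agents)]

    def step(self, actions):
        # One (obs, reward, done) entry per agent, from one joint step.
        obs = [[0.1, 0.2] for _ in range(self.num_agents)]
        rewards = [1.0] * self.num_agents
        dones = [False] * self.num_agents
        return obs, rewards, dones, {}
```

A single-agent wrapper can only ever drive one of the agents in a multi-agent scene, which is why algorithms like MADDPG need the list-per-agent interface.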

In case you are asking about the difference between the two packages: in general, I think both are trying to solve a similar problem, i.e. providing a package with customizable deep reinforcement learning agents. There is an overlap, except that ml-agents is a much bigger project, as it has been around longer and is a "job" for many people. AI Traineree exists because 1) ml-agents didn't support multi-agent training, 2) its codebase is super convoluted and seems focused on using agents, not developing them, and 3) I don't agree with their philosophy on training agents. (Plus the lack of communication, but hey, I'm just a rando on the internet, so why should they reply?) Their product is Unity, and ml-agents is just an addition.

Don't feel pressured, but if you find this package useful and have thought about citing it, feel free to use the citation just added to the Readme.

laszukdawid added a commit that referenced this issue Nov 21, 2021