
How to specify target CUDA device? #10

Closed
tszhang97 opened this issue Feb 4, 2024 · 6 comments

Comments

@tszhang97

I'm using the ONNX GPU backend and want to run on the cuda:2 device. It seems that I cannot select the CUDA device.

@Tau-J
Owner

Tau-J commented Feb 4, 2024

Hi @tszhang97, thanks for using rtmlib. The easiest way is to restrict the visible devices, e.g. CUDA_VISIBLE_DEVICES=gpu_ids python3 xxx.py
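
For readers embedding rtmlib in a larger program, the same restriction can also be applied from Python, as long as it happens before any CUDA context is created. A minimal sketch (os.environ and CUDA_VISIBLE_DEVICES are standard; everything else here is illustrative):

import os

# Expose only physical GPU 2 to this process; it then appears as cuda:0.
# This must run before onnxruntime (or any other CUDA library) initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import onnxruntime as ort  # imported after the variable is set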

@tszhang97
Author

I know about this approach; however, I'm using rtmlib as a submodule of a larger project. Maybe we could add provider_options like this:
# providers here is a single provider name (e.g. 'CUDAExecutionProvider'),
# so provider_options is a one-element list aligned with [providers].
provider_options = [{"device_id": device_id}]
self.session = ort.InferenceSession(path_or_bytes=onnx_model,
                                    providers=[providers],
                                    provider_options=provider_options)
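
For context, a self-contained sketch of the suggested change (onnxruntime's CUDAExecutionProvider does accept a device_id provider option; the make_session name and the 'cuda:N' parsing are illustrative, not rtmlib's actual API):

import onnxruntime as ort

def make_session(onnx_model, device="cuda:2"):
    # Illustrative helper: pull the numeric id out of a 'cuda:N' string.
    device_id = int(device.split(":")[1]) if ":" in device else 0
    return ort.InferenceSession(
        path_or_bytes=onnx_model,
        providers=["CUDAExecutionProvider"],
        provider_options=[{"device_id": device_id}],
    )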

@tszhang97
Author

An additional step to get the device_id and pass provider_options works for me.

@Tau-J
Owner

Tau-J commented Feb 4, 2024

An additional step to get the device_id and pass provider_options works for me.

Hi @tszhang97, would you like to raise a PR to this repo? Any contribution is much appreciated.

@edwardnguyen1705

edwardnguyen1705 commented Dec 3, 2024

Dear @Tau-J,

Would it be possible to support specifying the GPU ID, as in cap-ntu/ML-Model-CI#37?

If multiple processes use the same GPU, they will run out of CUDA memory.

@edwardnguyen1705

Hi @Tau-J,
Please check this PR: #44
