
Hope to support ONNX Runtime (Training version & Inferencing version) and DirectML #149

Open
Looong01 opened this issue Dec 1, 2022 · 1 comment

Comments


Looong01 commented Dec 1, 2022

I hope you will support ONNX Runtime (both the training and inference versions) and DirectML.

They can speed up both training and inference when used as a backend.

ONNX Runtime supports every major OS and GPU vendor, including CUDA (NVIDIA), ROCm (AMD), oneDNN (Intel), Metal (Apple M1), and other devices shown in the picture below. Its performance should be much better than OpenCL.
[image: ONNX Runtime supported execution providers]
https://github.com/microsoft/onnxruntime
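
As an illustration of how ONNX Runtime picks a hardware backend, here is a minimal Python sketch using its standard inference API. The model path `model.onnx` and the input name `input` are placeholders; this also assumes an onnxruntime build with the CUDA execution provider installed.

```python
import numpy as np
import onnxruntime as ort

# List the execution providers compiled into this onnxruntime build
print(ort.get_available_providers())

# Prefer the CUDA execution provider, falling back to CPU if unavailable.
# "model.onnx" and the input name "input" are placeholders for illustration.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input tensor
outputs = session.run(None, {"input": x})
print(outputs[0].shape)
```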

DirectML supports any GPU on Windows, and its code-migration cost is much lower than that of ONNX Runtime. Its performance should also be better than OpenCL.
https://github.com/microsoft/DirectML


Looong01 commented Jan 6, 2023

I hope a DirectML backend can be added. It's easy to use and supports any GPU that supports DirectX 12 on Windows. It's far better than OpenCL, and users don't need to install any extra software or computing platforms (such as CUDA, ROCm, TensorRT, or cuDNN). All that is required is a Windows 10/11 OS and the GPU drivers. The only drawback is that performance is slightly lower than CUDA + cuDNN + TensorRT.
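
For reference, one common way to use DirectML from a high-level framework is through ONNX Runtime's DmlExecutionProvider (shipped in the onnxruntime-directml package), rather than calling the DirectX 12 API directly. A minimal sketch, assuming that package is installed on Windows 10/11; the model path and input name are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Requires the onnxruntime-directml package on Windows 10/11.
# No CUDA/ROCm toolkit is needed, only up-to-date GPU drivers.
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input tensor
outputs = session.run(None, {"input": x})               # "input" is a placeholder name
```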
