Hope to support ONNX Runtime (both the training and inferencing builds) and DirectML.
Used as back ends, they can speed up both the training and the inference process.
ONNX Runtime supports every major OS and every kind of GPU, with execution providers including CUDA (NVIDIA), ROCm (AMD), oneDNN (Intel), Metal (Apple M1), and others listed in the linked repository. Its performance should be much better than OpenCL's. https://github.com/microsoft/onnxruntime
DirectML only supports GPUs on Windows, but the cost of migrating code to it is much lower than for ONNX Runtime, and its performance should also be better than OpenCL's. https://github.com/microsoft/DirectML
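For illustration, here is a minimal sketch of what using ONNX Runtime as an inference back end could look like, assuming the `onnxruntime` Python package; the model path `model.onnx`, the provider order, and the input shape are placeholders, not part of this project:

```python
# Minimal sketch: run one forward pass through ONNX Runtime,
# preferring a GPU execution provider and falling back to CPU.
import numpy as np
import onnxruntime as ort

# ONNX Runtime skips any requested provider that is not available,
# so listing CPUExecutionProvider last gives a safe fallback.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path to an exported network
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Query the input name/shape from the session rather than hard-coding it.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```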
Hope to add a DirectML back end. It's easy to use, it supports any GPU that supports DirectX 12 on Windows, and it is far better than OpenCL. It does not require users to install any extra software or computing platforms (such as CUDA, ROCm, TensorRT, cuDNN, etc.); it only needs a Windows 10/11 OS and the GPU drivers. The only drawback is that its performance is slightly lower than CUDA + cuDNN + TensorRT.
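A sketch of that "no extra installs" point, assuming the `onnxruntime-directml` package (`pip install onnxruntime-directml`) on Windows 10/11; the model path is again a placeholder:

```python
# Sketch: opt into the DirectML execution provider via ONNX Runtime.
import onnxruntime as ort

# Confirm "DmlExecutionProvider" is present before requesting it;
# it requires the onnxruntime-directml build on Windows 10/11.
print(ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
```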