Fix missing install candidate for ONNX Runtime on Apple Silicon #1517
Conversation
This patch alters the requirements to use another ONNX runtime package which provides pre-built wheels for Apple Silicon when running on an arm64 (M1/M2) Mac. Fixes pyannote#1505
The PR is failing build tests. Can you please check?
Would it be OK to fall back to onnxruntime in the case of macOS < 13 and non-arm64?
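One way to express such a platform-conditional dependency is with PEP 508 environment markers in the requirements file. This is only a sketch: the alternative package name `onnxruntime-silicon` is illustrative of the kind of Apple Silicon wheel the PR refers to, not necessarily the exact package used.

```
onnxruntime; sys_platform != 'darwin' or platform_machine != 'arm64'
onnxruntime-silicon; sys_platform == 'darwin' and platform_machine == 'arm64'
```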
I have tried this patch and it works for me, but my Mac Studio M1 GPU is not used (I correctly set torch to the mps backend).
Which macOS version are you running?
13.6.1 Edit: I said that the GPU is not used, but I'm unsure. If it is used, I'm sure it is not fully used. I have monitored GPU usage and it seems no higher than under minimal load (just displaying the macOS desktop).
At least this proves it installs on Ventura. Did you experience just speed degradation like in pytorch/pytorch#77799?
@dr-duplo when running this:

```python
import torch
import timeit

b_cpu = torch.rand((10000, 10000), device='cpu')
b_mps = torch.rand((10000, 10000), device='mps')

print('cpu', timeit.timeit(lambda: b_cpu @ b_cpu, number=100))
print('mps', timeit.timeit(lambda: b_mps @ b_mps, number=100))
```

I get:
For pyannote, when setting the pipeline to cpu:

when setting the pipeline to mps:

so the GPU doesn't seem to be used.
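A caveat about the benchmark above: MPS (like CUDA) executes kernels asynchronously, so a naive `timeit` over a matmul can measure mostly launch overhead rather than compute. The helper below is a hypothetical sketch (not pyannote or torch API) showing how to fold an explicit synchronization call into the timing:

```python
import timeit

def timed(fn, number=100, sync=None):
    """Time `number` calls of fn, optionally flushing async device work.

    Pass a synchronizer such as torch.mps.synchronize (or
    torch.cuda.synchronize) so queued device work is included in the
    measurement. `timed` is a hypothetical helper, not an existing API.
    """
    def run():
        for _ in range(number):
            fn()
        if sync is not None:
            sync()  # wait for all queued device work to finish
    return timeit.timeit(run, number=1)
```

Usage would then look like `timed(lambda: b_mps @ b_mps, sync=torch.mps.synchronize)` versus `timed(lambda: b_cpu @ b_cpu)`.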
@stygmate
@dr-duplo Yesterday, I discovered that in the pyannote code the ONNX Runtime execution provider (CoreMLExecutionProvider) corresponding to the Torch mps device was not set. I added it:

```diff
Index: pyannote/audio/pipelines/speaker_verification.py
@@ -455,6 +455,8 @@
                 },
             )
         ]
+        elif device.type == "mps":
+            providers = ["CoreMLExecutionProvider"]
         else:
             warnings.warn(
                 f"Unsupported device type: {device.type}, falling back to CPU"
```

But the MPS backend does not support FFT operators and it crashes 😩🔫
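A defensive variant of a patch like the one above would consult the providers actually compiled into the installed wheel, via `onnxruntime.get_available_providers()`, before requesting one. The function below is a sketch under that assumption, not pyannote's actual implementation:

```python
def select_providers(device_type, available):
    """Map a torch device type to an ONNX Runtime provider list.

    `available` is expected to come from
    onnxruntime.get_available_providers(). Falls back to CPU when the
    preferred provider is not compiled into the installed wheel.
    Sketch only; not pyannote's actual code.
    """
    preferred = {
        "cuda": "CUDAExecutionProvider",
        "mps": "CoreMLExecutionProvider",
    }.get(device_type)
    if preferred is not None and preferred in available:
        # keep CPU as a last-resort fallback after the preferred provider
        return [preferred, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

This avoids requesting CoreMLExecutionProvider on a wheel (such as the stock `onnxruntime` package) that was built without it.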
FYI: #1537 |
Closing as pyannote 3.1 will get rid of this ONNX mess. |
Latest version no longer relies on ONNX runtime. |