
Failure with non-default group sizes on quantized models. #178

Open
aaronn opened this issue Jan 13, 2025 · 1 comment



aaronn commented Jan 13, 2025

Running the following command to quantize a model with a group size of 32 causes a number of errors when running the model:

mlx_lm.convert --hf-path microsoft/phi-4 --quantize --q-bits 2 --q-group-size 32

Attempting to load the resulting model with MLXLLM.loadModelContainer fails with the following error:

Error downloading model: mismatchedSize(key: "biases", expectedShape: [100352, 80], actualShape: [100352, 160])

(The shapes suggest the loader is assuming the default group size of 64, which gives 5120 / 64 = 80 groups per row of phi-4's 5120-wide hidden dimension, while the checkpoint was quantized with group size 32, giving 5120 / 32 = 160.)

Steps to reproduce:

  1. Convert a model using a group size of 32 (or use the pre-converted model at https://huggingface.co/mlx-community/phi-4-2bit).
  2. Load it using MLXLLM.loadModelContainer (see the sketch below).

Expected result: The model loads.
Actual result: Loading fails with the mismatchedSize error shown above.
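For completeness, a minimal Swift sketch of step 2. It assumes the MLXLLM API around the time of this issue; loadModelContainer and ModelConfiguration are the names referenced above, the mlx-community/phi-4-2bit model id is the conversion linked in step 1, and exact signatures may differ between mlx-swift-examples releases.

```swift
import MLXLLM
import MLXLMCommon

// Minimal sketch of step 2, assuming the MLXLLM API at the time of this
// issue; exact signatures may differ between mlx-swift-examples releases.
@main
struct LoadRepro {
    static func main() async {
        // The 2-bit, group-size-32 conversion linked in step 1.
        let configuration = ModelConfiguration(id: "mlx-community/phi-4-2bit")
        do {
            let container = try await loadModelContainer(configuration: configuration)
            print("Model loaded: \(container)")
        } catch {
            // On affected versions this prints the mismatchedSize error above.
            print("Error downloading model: \(error)")
        }
    }
}
```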

awni (Member) commented Jan 13, 2025

I just tried running it and it ran fine for me on the main branch of mlx-swift-examples. Are you using an outdated version?
