
Phi-3.5-vision-instruct: Uncaught RuntimeError: Aborted(). Build with -sASSERTIONS for more info #1144

Open · kungfooman opened this issue Jan 13, 2025 · 0 comments
Labels: bug (Something isn't working)

@kungfooman (Contributor) commented:
System Info

@huggingface/transformers@3.2.4
Browser: Chromium Version 133.0.6845.0 (Developer Build) (64-bit)
Node/bundler: none, simple HTML setup

Environment/Platform

  • [x] Website/web-app
  • [ ] Browser extension
  • [ ] Server-side (e.g., Node.js, Deno, Bun)
  • [ ] Desktop app (e.g., Electron)
  • [ ] Other (e.g., VSCode extension)

Description

I just wanted to test your work on #1094

I took the example code and turned it into something you can drop straight into an HTML file and run.

But the result is:

[Screenshot: DevTools console showing `Uncaught RuntimeError: Aborted(). Build with -sASSERTIONS for more info`]

Do you have any idea what's going on? It seems to download the entire model, then tries to load it, but crashes shortly after.
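To narrow down where it dies, here is a sketch of what I'd try next (assuming `from_pretrained` still accepts the usual `progress_callback` option; the logging shape is just for diagnosis, not part of the original repro):

```js
// Minimal load-only test: log every progress event so we can see whether the
// abort happens while downloading the weights or while creating the ONNX
// sessions afterwards.
import { AutoModelForCausalLM } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.4";

const model_id = "onnx-community/Phi-3.5-vision-instruct";
try {
  const model = await AutoModelForCausalLM.from_pretrained(model_id, {
    dtype: {
      vision_encoder: "q4",
      prepare_inputs_embeds: "q4",
      model: "q4f16",
    },
    // Logs status/file/progress for each artifact as it loads.
    progress_callback: (info) =>
      console.log(info.status, info.file ?? "", info.progress ?? ""),
  });
  console.log("sessions created OK", model);
} catch (err) {
  // Emscripten's Aborted() is not always catchable as a JS exception,
  // but anything that does land here is useful.
  console.error("load failed:", err);
}
```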

Reproduction

```html
<body>
<script type="module">
import {
  AutoProcessor,
  AutoModelForCausalLM,
  TextStreamer,
  load_image,
} from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.4";
// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
  legacy: true, // Use legacy to match python version
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
  dtype: {
    vision_encoder: "q4", // 'q4' or 'q4f16'
    prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
    model: "q4f16", // 'q4f16'
  },
});
// Load image
const image = await load_image("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/meme.png");
// Prepare inputs
const messages = [
  { role: "user", content: "<|image_1|>What's funny about this image?" },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});
const inputs = await processor(prompt, image, { num_crops: 4 });
// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});
// Generate response
const output = await model.generate({
  ...inputs,
  streamer,
  max_new_tokens: 256,
});
</script>
</body>
```
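Since the `Aborted()` comes from the Emscripten/WASM layer, it may also help to turn up onnxruntime-web's own logging before loading anything. A sketch, assuming `env.backends.onnx` still proxies `ort.env` the way it did in earlier releases:

```js
import { env } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.4";

// These are onnxruntime-web (ort.env) settings surfaced through
// transformers.js; set them BEFORE calling from_pretrained.
env.backends.onnx.logLevel = "verbose"; // more detail than the default 'warning'
env.backends.onnx.wasm.numThreads = 1;  // rule out threading/SharedArrayBuffer issues
env.backends.onnx.wasm.proxy = false;   // keep inference on the main thread for clearer stack traces
```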