System Info

transformers@3.2.4
Chromium Version 133.0.6845.0 (Developer Build) (64-Bit)
OS: Linux Mint

Environment/Platform

Description

The script just ends with this info:

Reproduction

Standalone code:

```html
<body>
  <script type="module">
    import {
      MultiModalityCausalLM,
      pipeline,
      AutoProcessor,
    } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.4";

    const model_id = "onnx-community/Janus-1.3B-ONNX";
    const processor = await AutoProcessor.from_pretrained(model_id);

    const fp16_supported = false; // do feature check
    const model = await MultiModalityCausalLM.from_pretrained(model_id, {
      dtype: fp16_supported
        ? {
            prepare_inputs_embeds: "q4",
            language_model: "q4f16",
            lm_head: "fp16",
            gen_head: "fp16",
            gen_img_embeds: "fp16",
            image_decode: "fp32",
          }
        : {
            prepare_inputs_embeds: "fp32",
            language_model: "q4",
            lm_head: "fp32",
            gen_head: "fp32",
            gen_img_embeds: "fp32",
            image_decode: "fp32",
          },
      device: {
        prepare_inputs_embeds: "wasm", // TODO use "webgpu" when bug is fixed
        language_model: "webgpu",
        lm_head: "webgpu",
        gen_head: "webgpu",
        gen_img_embeds: "webgpu",
        image_decode: "webgpu",
      },
    });

    // Prepare inputs
    const conversation = [
      {
        role: "User",
        content: "<image_placeholder>\nConvert the formula into latex code.",
        images: ["https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/quadratic_formula.png"],
      },
    ];
    const inputs = await processor(conversation);

    // Generate response
    const outputs = await model.generate({
      ...inputs,
      max_new_tokens: 150,
      do_sample: false,
    });

    // Decode output
    const new_tokens = outputs.slice(null, [inputs.input_ids.dims.at(-1), null]);
    const decoded = processor.batch_decode(new_tokens, { skip_special_tokens: true });
    console.log(decoded[0]);

    // Expose state on the window for console debugging
    Object.assign(window, {
      processor,
      model_id,
      fp16_supported,
      model,
      conversation,
      inputs,
      outputs,
      new_tokens,
      decoded,
    });
  </script>
</body>
```

Edit: Also tested https://huggingface.co/spaces/webml-community/Janus-1.3B-WebGPU and the result is:
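For reference, the repro above hard-codes `fp16_supported = false` with a `// do feature check` placeholder. A minimal sketch of one way to fill that placeholder (not part of the original report; it assumes a browser that exposes `navigator.gpu` and uses the standard WebGPU `"shader-f16"` feature flag):

```javascript
// Hypothetical helper: detect fp16 shader support via WebGPU.
// Outside a WebGPU-capable browser this safely resolves to false.
async function fp16Supported() {
  // navigator.gpu is only present in WebGPU-enabled environments.
  if (typeof navigator === "undefined" || !navigator.gpu) return false;
  const adapter = await navigator.gpu.requestAdapter();
  // "shader-f16" is the WebGPU feature name for 16-bit float shader support.
  return adapter?.features.has("shader-f16") ?? false;
}
```

Usage: `const fp16_supported = await fp16Supported();` in place of the hard-coded `false`.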