Checking cache...
Ready — click Load Model to initialize
LFM2.5-VL-1.6B · ONNX · WebGPU
SEE.
THINK.
ANSWER.
👁️
Vision
Language
WebGPU
Accelerated
🔒
Zero Data
Leaves Browser
🖼️
Attach
Images
attached image
Model
LFM2.5-VL-1.6B
Inference
WebGPU + ONNX
Vision
SigLIP2 NaFlex
Privacy
100% Local
Download
~1.5 GB (one-time)
Runtime
Transformers.js + ORT
DOWNLOADING MODEL
Initializing...
0%
0 MB / ~1.5 GB
Calculating speed...
Tokenizer
~5 MB
Token Embedder
~30 MB
Vision Encoder (SigLIP2)
~400 MB
Language Decoder (Q4)
~1.1 GB
⚡ First load downloads ~1.5 GB from Hugging Face.
🔒 Everything runs 100% in-browser — zero data leaves your device.
🛜 Keep this tab open. Do not refresh during download.