qwen3-vl:4b-instruct
qwen3vl · 4.4B · Q4_K_M
Mac Studio (Apple M2 Max)
96 GB · macOS 26.1
Tested on April 20, 2026 · Submitted by Frank
Global Score: 83/100 (Excellent)
Hardware Fit: 96/100
Quality: 78/100
Get this model
- Ollama: ollama pull qwen3-vl:4b-instruct (ollama.com/library/qwen3-vl)
- LM Studio: search for the model and download it directly from the app
- HuggingFace: GGUF versions & quantizations
Hardware
- Machine: Mac Studio
- CPU: Apple M2 Max
- Cores: 12 total (8 perf + 4 eff)
- Frequency: 2.4 GHz
- RAM: 96 GB LPDDR5
- GPU: Apple M2 Max
- OS: macOS 26.1
- Arch: arm64
- Power Mode: balanced
Performance
- Tokens/sec: 91.2
- Standard deviation: ±0.8
- First chunk latency: 150 ms
- Time to first token: 150 ms
- Load time: 5.3 s
- Memory usage: 42.5 GB (44%)
- Total tokens: 905
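The throughput and token counts above imply the rough wall-clock decode time, and the 44% memory figure is simply usage over total RAM. A minimal sanity check in Python, with the numbers copied from the table:

```python
# Sanity-check the performance figures reported above.
tokens_per_sec = 91.2
total_tokens = 905
ram_gb = 96
memory_used_gb = 42.5

# Approximate decode time for the whole run
# (excludes the 5.3 s load time and 150 ms first-token latency).
gen_time_s = total_tokens / tokens_per_sec
print(f"~{gen_time_s:.1f} s of generation")  # ~9.9 s

# The reported percentage is memory usage over total RAM.
mem_fraction = memory_used_gb / ram_gb
print(f"{mem_fraction:.0%} of {ram_gb} GB")  # 44% of 96 GB
```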
Score breakdown
- Speed: 50/50
- Time to first token: 20/20
- Memory: 26/30
Quality
- Reasoning: 14/20
- Coding: 16/20
- Instruction following: 16/20
- Structured output: 15/15
- Math: 8/15
- Multilingual: 9/10
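The subtotals can be checked directly against the headline scores: the speed, time-to-first-token, and memory items sum to the Hardware Fit score, and the six quality items sum to the Quality score.

```python
# Recompute the two subtotals from the Score breakdown above.
hardware_items = {"speed": 50, "time_to_first_token": 20, "memory": 26}
quality_items = {
    "reasoning": 14,
    "coding": 16,
    "instruction_following": 16,
    "structured_output": 15,
    "math": 8,
    "multilingual": 9,
}

hardware_fit = sum(hardware_items.values())  # matches Hardware Fit 96/100
quality = sum(quality_items.values())        # matches Quality 78/100
print(hardware_fit, quality)  # 96 78
```

How the Global Score of 83/100 combines these two subtotals is not stated in the report, so no formula for it is reproduced here.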
Category levels
- Reasoning: Adequate
- Coding: Strong
- Instruction Following: Strong
- Structured Output: Strong
- Math: Adequate
- Multilingual: Strong
Metadata
- Spec version: 0.2.1
- Runtime: Ollama 0.20.7
- Model format: GGUF
- Hardware profile: HIGH-END
- Result hash: cbdae82dfb3aec07679cfb77cf5c3c115dc3402e697a8f2883b57d6bd886d95b
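The result hash is 64 hexadecimal characters, the length of a SHA-256 digest. Assuming (this report does not document it) that metrillm hashes a canonical serialization of the result payload, a verification sketch would look like the following; the field names in the payload are hypothetical:

```python
import hashlib
import json

# Hypothetical payload: the real field set and serialization used by
# metrillm are not documented in this report.
payload = {
    "model": "qwen3-vl:4b-instruct",
    "tokens_per_sec": 91.2,
    "global_score": 83,
}

# A canonical JSON encoding (sorted keys) makes the digest reproducible.
digest = hashlib.sha256(
    json.dumps(payload, sort_keys=True).encode("utf-8")
).hexdigest()

print(len(digest))  # 64 hex characters, same length as the result hash above
```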
Interpretation
Hardware fit: 96/100. Overall suitability: EXCELLENT (Global 83/100). Category profile: Reasoning: Adequate, Coding: Strong, Instruction Following: Strong, Structured Output: Strong, Math: Adequate, Multilingual: Strong.
Bench Environment
- Power: AC
- CPU load: avg 8% (peak 10%)
Run yours now
$ npm install -g metrillm@latest
$ metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest