qwen3.5-9b-mlx
qwen3_5 · 9B · 4bit
THINKING MODEL
Mac mini (Apple M4)
24 GB · macOS 26.3
Tested on March 11, 2026
Global Score
58 /100
Marginal
Hardware Fit
68/100
Quality
54/100
Hardware
- Machine: Mac mini
- CPU: Apple M4
- Cores: 10 total (4 perf + 6 eff)
- Frequency: 2.4 GHz
- RAM: 24 GB LPDDR5
- GPU: Apple M4
- OS: macOS 26.3
- Arch: arm64
- Power Mode: balanced
Performance
- Tokens/sec: 16.7
- Standard deviation: ±0.7
- First chunk latency: 23 ms
- Time to first token: 14.1 s
- Load time: N/A
- Memory usage: 7.8 GB (33%)
- Total tokens: 1617
- Thinking tokens (est.): ~614
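To make these figures concrete, here is a minimal sketch of the end-to-end generation time they imply. It assumes tokens/sec measures steady-state decoding after the first token; metrillm's exact definitions may differ, so treat this as illustrative arithmetic, not the tool's own formula.

```python
def estimated_wall_time(total_tokens: int, tok_per_s: float, ttft_s: float) -> float:
    """Rough wall-clock time for one run: time to first token,
    plus the remaining tokens decoded at the reported rate.
    (An assumption about how the metrics compose.)"""
    return ttft_s + total_tokens / tok_per_s

# Numbers from the run above: 1617 tokens at 16.7 tok/s with a 14.1 s TTFT.
estimated_wall_time(1617, 16.7, 14.1)  # ≈ 110.9 s for this run
```

For a thinking model, the long 14.1 s time to first token (versus a 23 ms first chunk) reflects the ~614 reasoning tokens generated before the visible answer begins.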
Score breakdown
- Speed: 36/50
- Time to first token: 3/20
- Memory: 29/30
Quality
- Reasoning: 10/20
- Coding: 16/20
- Instruction following: 4/20
- Structured output: 9/15
- Math: 10/15
- Multilingual: 5/10
Category levels
- Reasoning: Adequate
- Coding: Strong
- Instruction Following: Poor
- Structured Output: Adequate
- Math: Adequate
- Multilingual: Adequate
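The two headline subscores are straight sums of the category points above, which is easy to verify. (How Hardware Fit and Quality combine into the Global score is not documented here, so that weighting is deliberately left out of this sketch.)

```python
# Category points copied from the score breakdown: (earned, maximum).
hardware_fit = {
    "Speed": (36, 50),
    "Time to first token": (3, 20),
    "Memory": (29, 30),
}
quality = {
    "Reasoning": (10, 20),
    "Coding": (16, 20),
    "Instruction following": (4, 20),
    "Structured output": (9, 15),
    "Math": (10, 15),
    "Multilingual": (5, 10),
}

def total(scores: dict) -> tuple:
    """Sum earned points and maximum points across categories."""
    earned = sum(e for e, _ in scores.values())
    out_of = sum(m for _, m in scores.values())
    return earned, out_of

total(hardware_fit)  # (68, 100) -> Hardware Fit 68/100
total(quality)       # (54, 100) -> Quality 54/100
```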
Metadata
- Spec version: 0.2.1
- Runtime: LM Studio 0.4.6+1
- Model format: MLX
- Hardware profile: ENTRY
- Result hash: 0e0e2a1ff35248290d303ba7311e82c4c6431c38ddb14fb311c4ffe8fc39e609
Interpretation
Hardware fit: 68/100. Overall suitability: MARGINAL (Global 58/100). Category profile: Reasoning: Adequate, Coding: Strong, Instruction Following: Poor, Structured Output: Adequate, Math: Adequate, Multilingual: Adequate.
Warnings
- Token throughput is estimated from LM Studio output because native token stats were unavailable. Compare tok/s across backends cautiously.
- Model memory footprint is estimated via LM Studio CLI rather than measured from a fresh load.
Bench Environment
Power: AC · CPU load: avg 25% (peak 34%)
Run yours now
$ npm install -g metrillm@latest
$ metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest