deepseek-r1:14b
qwen2 · 14.8B · Q4_K_M
THINKING MODEL
MacBook Air (Apple M4)
32 GB · macOS 26.3
Tested on March 2, 2026
Global Score
26 /100
Not Recommended
Hardware Fit
50/100
Quality
10/100
Get this model
- Ollama: ollama pull deepseek-r1:14b
- Ollama Library: ollama.com/library/deepseek-r1
- HuggingFace: search for GGUF versions and quantizations
Hardware
- Machine: MacBook Air
- CPU: Apple M4
- Cores: 10 total (4 performance + 6 efficiency)
- Frequency: 2.4 GHz
- RAM: 32 GB LPDDR5
- GPU: Apple M4
- OS: macOS 26.3
- Arch: arm64
- Power mode: balanced
Performance
- Tokens/sec: 11.4 (±0.1 std. dev.)
- Time to first token: 30.0 s
- Load time: 1.0 s
- Memory usage: 16.5 GB (52% of 32 GB)
- Total tokens: 1394
- Thinking tokens (est.): ~1038
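As a sanity check on the memory figure: the weights alone for a 14.8B-parameter model at Q4_K_M come to roughly 9 GB, so the measured 16.5 GB is plausible once KV cache, activations, and runtime overhead are added. A back-of-envelope sketch, assuming an average of ~4.85 bits per weight for Q4_K_M (an approximation; real GGUF files mix quantization types per tensor):

```python
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Estimate on-disk/resident size of quantized weights in GB.

    The bits-per-weight average is an assumption; actual Q4_K_M
    files vary slightly depending on which tensors stay at higher
    precision.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# ~9.0 GB of weights for this model; the rest of the measured
# 16.5 GB is KV cache, activations, and runtime overhead.
print(f"weights ~{weights_gb(14.8, 4.85):.1f} GB")
```

The gap between the ~9 GB weight estimate and the 16.5 GB measured is what makes 32 GB of RAM comfortable here (52% utilization), consistent with the 30/30 memory sub-score below.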
Score breakdown
Hardware Fit
- Speed: 20/40
- Time to first token: 0/30
- Memory: 30/30
Quality
- Reasoning: 1/20
- Coding: 3/20
- Instruction following: 1/20
- Structured output: 0/15
- Math: 0/15
- Multilingual: 5/10
Category levels
Reasoning: Poor · Coding: Poor · Instruction Following: Poor · Structured Output: Poor · Math: Poor · Multilingual: Adequate
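The sub-scores are internally consistent: the three hardware sub-scores sum to 50/100 and the six quality sub-scores sum to 10/100, and a 40/60 hardware/quality weighting reproduces the global 26/100. A minimal sketch of that aggregation; note the 40/60 weighting is inferred from the numbers in this report, not taken from the metrillm spec:

```python
# Reconstructing this report's scores from its sub-scores.
# The 0.4/0.6 global weighting is an inference, not documented
# benchmark behaviour.
hardware = {"speed": 20, "ttft": 0, "memory": 30}           # out of 40, 30, 30
quality = {"reasoning": 1, "coding": 3, "instructions": 1,
           "structured": 0, "math": 0, "multilingual": 5}   # out of 20, 20, 20, 15, 15, 10

hardware_fit = sum(hardware.values())    # 50/100
quality_score = sum(quality.values())    # 10/100
global_score = round(0.4 * hardware_fit + 0.6 * quality_score)  # 26/100
print(hardware_fit, quality_score, global_score)
```

This also explains why the report warns that results may be unusable despite the hardware numbers: quality contributes the larger share of the global score, and it is only 10/100 here.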
Metadata
- Spec version: 0.2.0
- Runtime: Ollama 0.17.4
- Model format: GGUF
- Hardware profile: BALANCED
- Result hash: c78fb619362ffaaf8b15abad81fd394ec172f6c1806361b5e97384876d10018a
Interpretation
Hardware fit: 50/100. Overall suitability: Not Recommended (global score 26/100). Category profile: Reasoning, Coding, Instruction Following, Structured Output, and Math all rated Poor; Multilingual rated Adequate. Warning: the model produced very low accuracy on quality tasks, so results may be unusable despite good hardware performance.
Warnings
- The model produced very low accuracy on quality tasks; results may be unusable despite good hardware performance.
Disqualifiers
- Time to first token too high: 30000 ms (maximum: 20705 ms for the BALANCED profile)
Run yours now
npx metrillm@latest bench
Requires Node 20+ and a running Ollama instance.