deepseek-r1:14b
qwen2 · 14.8B · Q4_K_M
MacBook Air (Apple M4)
32 GB · macOS 26.3
Tested on March 1, 2026
Global Score
25/100
Not Recommended
Hardware Fit
46/100
Quality
11/100
Get this model
🦙 Ollama
ollama pull deepseek-r1:14b
View on Ollama Library: ollama.com/library/deepseek-r1
🤗 Find on HuggingFace (GGUF versions & quantizations)
Hardware
- Machine: MacBook Air
- CPU: Apple M4
- Cores: 10 total (4 performance + 6 efficiency)
- Frequency: 2.4 GHz
- RAM: 32 GB LPDDR5
- GPU: Apple M4
- OS: macOS 26.3
- Arch: arm64
Performance
- Tokens/sec: 10.8
- Standard deviation: ±0.1
- Time to first token: 30.0 s
- Load time: 5.3 s
- Memory usage: 16.5 GB (52% of 32 GB RAM)
- Total tokens: 1394
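The reported memory percentage follows directly from the two figures above (16.5 GB used out of the machine's 32 GB); a quick arithmetic check:

```python
# Sanity-check the reported memory percentage: 16.5 GB used of 32 GB total RAM.
used_gb, total_gb = 16.5, 32
pct = round(used_gb / total_gb * 100)
print(pct)  # 52
```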
Score breakdown
Speed
16/40
Time to first token
0/30
Memory
30/30
Quality
Reasoning
1/20
Coding
0/20
Instruction following
2/20
Structured output
1/15
Math
1/15
Multilingual
6/10
Category levels
- Reasoning: Poor
- Coding: Poor
- Instruction Following: Poor
- Structured Output: Poor
- Math: Poor
- Multilingual: Adequate
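The two headline components are plain sums of the sub-scores in the breakdown above; the weighting that combines them into the Global Score of 25/100 is not documented in this report, so it is not derived here:

```python
# Re-add the sub-scores from the score breakdown above.
hardware_fit = 16 + 0 + 30          # Speed + Time to first token + Memory
quality = 1 + 0 + 2 + 1 + 1 + 6    # Reasoning, Coding, Instruction following,
                                   # Structured output, Math, Multilingual
print(hardware_fit, quality)  # 46 11
```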
Metadata
- Spec version: 0.1.0
- Runtime: Ollama 0.17.4
- Model format: GGUF
- Hardware profile: BALANCED
- Result hash: 0ef02bf00d4eeea52de3da0732b8bddd41ace3da777312928b24d9c43f682444
Interpretation
Hardware fit: 46/100. Overall suitability: NOT RECOMMENDED (Global 25/100). Category profile: Reasoning: Poor, Coding: Poor, Instruction Following: Poor, Structured Output: Poor, Math: Poor, Multilingual: Adequate. Warning: model produced very low accuracy on quality tasks — results may be unusable despite good hardware performance.
Warnings
- Model produced very low accuracy on quality tasks — results may be unusable despite good hardware performance.
Disqualifiers
- Time to first token too high: 30000ms (maximum: 15000ms for BALANCED profile)
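The disqualifier above describes a simple threshold check; a minimal sketch of that check, where the 15000 ms BALANCED limit and the measured 30000 ms TTFT come from this report, but the mapping and function name are illustrative (metrillm's actual implementation is not shown here):

```python
# Assumed per-profile limits; only the BALANCED value is given in the report.
TTFT_LIMITS_MS = {"BALANCED": 15000}

def is_disqualified(ttft_ms: int, profile: str) -> bool:
    """Return True when time-to-first-token exceeds the profile's maximum."""
    return ttft_ms > TTFT_LIMITS_MS[profile]

print(is_disqualified(30000, "BALANCED"))  # True
```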
Run yours now
npx metrillm@latest bench
Requires Node 20+ and Ollama running.