deepseek-coder:6.7b
llama architecture · 7B parameters · Q4_0 quantization
MacBook Air (Apple M4)
32 GB · macOS 26.3
Tested on March 2, 2026
Global Score
63/100
Good
Hardware Fit
98/100
Quality
39/100
Get this model
🦙 Ollama
ollama pull deepseek-coder:6.7b
View on Ollama Library: ollama.com/library/deepseek-coder
🤗 HuggingFace
Find GGUF versions & quantizations on HuggingFace
Hardware
- Machine
- MacBook Air
- CPU
- Apple M4
- Cores
- 10 total (4 perf + 6 eff)
- Frequency
- 2.4 GHz
- RAM
- 32 GB LPDDR5
- GPU
- Apple M4
- OS
- macOS 26.3
- Arch
- arm64
- Power Mode
- balanced
Performance
- Tokens/sec
- 24.1
- Standard deviation
- ±0.4
- Time to first token
- 251 ms
- Load time
- 0.9 s
- Memory usage
- 12.6 GB (39%)
- Total tokens
- 1209
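As a quick sanity check, the reported memory percentage follows directly from the figures above: 12.6 GB used out of 32 GB total rounds to 39%. A minimal sketch (the exact rounding rule the tool applies is an assumption):

```python
# Verify the reported memory-usage percentage from the figures above.
used_gb = 12.6   # memory usage reported during the run
total_gb = 32    # machine RAM

pct = used_gb / total_gb * 100  # 39.375
print(f"{pct:.0f}%")  # → 39%, matching the reported "12.6 GB (39%)"
```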
Score breakdown
Hardware fit
Speed
38/40
Time to first token
30/30
Memory
30/30
Quality
Reasoning
4/20
Coding
14/20
Instruction following
9/20
Structured output
6/15
Math
1/15
Multilingual
5/10
Category levels
- Reasoning: Poor
- Coding: Adequate
- Instruction following: Weak
- Structured output: Weak
- Math: Poor
- Multilingual: Weak
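The two headline scores can be reproduced from the sub-scores in the breakdown above: the hardware-related items sum to 98 and the quality items sum to 39. A minimal sketch verifying that arithmetic (how the Global Score of 63 is then weighted from these two is not documented here, so it is not computed):

```python
# Reproduce the two headline scores from the sub-scores listed above.
hardware_fit = {"Speed": 38, "Time to first token": 30, "Memory": 30}
quality = {
    "Reasoning": 4,
    "Coding": 14,
    "Instruction following": 9,
    "Structured output": 6,
    "Math": 1,
    "Multilingual": 5,
}

print(sum(hardware_fit.values()))  # → 98, matching Hardware Fit 98/100
print(sum(quality.values()))       # → 39, matching Quality 39/100
```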
Metadata
- Spec version
- 0.2.0
- Runtime
- Ollama 0.17.4
- Model format
- GGUF
- Hardware profile
- BALANCED
- Result hash
- 0ea0d60f7c3e97b6cc713f1d27c7c5fa7d41a9335639dda31e08341149345e4e
Interpretation
Hardware fit is excellent (98/100): the model loads quickly and runs comfortably within this machine's memory. Quality is the limiting factor (39/100), leaving an overall suitability of GOOD (Global 63/100). Category profile: Reasoning: Poor, Coding: Adequate, Instruction following: Weak, Structured output: Weak, Math: Poor, Multilingual: Weak.
Run yours now
npx metrillm@latest bench
Requires Node 20+ and a running Ollama instance.