llama3.2:latest
Llama · 3.2B parameters · Q4_K_M quantization
Mac Mini (Apple M4 Pro)
64 GB · macOS 15.7.4
Tested on February 28, 2026
Global Score: 77/100 (Good)
Hardware Fit: 100/100
Quality: 61/100
Get this model
- Ollama: ollama pull llama3.2:latest (library page: ollama.com/library/llama3.2)
- Hugging Face: GGUF versions & quantizations
Hardware
- Machine: Mac Mini
- CPU: Apple M4 Pro
- Cores: 14 total (10 performance + 4 efficiency)
- Frequency: 2.4 GHz
- RAM: 64 GB LPDDR5
- GPU: Apple M4 Pro
- OS: macOS 15.7.4
- Arch: arm64
Performance
- Tokens/sec: 98.9 (±0.4 standard deviation)
- Time to first token: 125 ms
- Load time: 2.3 s
- Memory usage: 22.1 GB (35% of 64 GB)
- Total tokens: 1112
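The performance figures above are self-consistent and can be sanity-checked with two lines of arithmetic. A minimal sketch; note the generation-time estimate assumes all 1112 tokens were produced at the steady-state rate and ignores the 125 ms first-token latency:

```python
# Figures taken directly from the Performance table above.
tokens_per_sec = 98.9
total_tokens = 1112
mem_used_gb, ram_gb = 22.1, 64

# Memory percentage: usage divided by installed RAM.
print(round(mem_used_gb / ram_gb * 100))       # 35 -> matches "22.1 GB (35%)"

# Rough wall-clock generation time at the steady-state rate.
print(round(total_tokens / tokens_per_sec, 1))  # 11.2 seconds
```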
Score breakdown
Hardware Fit (100/100)
- Speed: 40/40
- Time to first token: 30/30
- Memory: 30/30
Quality (61/100)
- Reasoning: 11/20
- Coding: 14/20
- Instruction following: 12/20
- Structured output: 12/15
- Math: 3/15
- Multilingual: 9/10
Category levels
- Reasoning: Adequate
- Coding: Adequate
- Instruction Following: Adequate
- Structured Output: Strong
- Math: Poor
- Multilingual: Strong
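The two top-line subtotals can be reproduced by summing the per-category points in the score breakdown; how they combine into the global 77/100 is not stated in the report, so only the subtotals are checked here:

```python
# Per-category points from the score breakdown above.
hardware_fit = {"speed": 40, "time_to_first_token": 30, "memory": 30}
quality = {
    "reasoning": 11, "coding": 14, "instruction_following": 12,
    "structured_output": 12, "math": 3, "multilingual": 9,
}

print(sum(hardware_fit.values()))  # 100 -> Hardware Fit 100/100
print(sum(quality.values()))       # 61  -> Quality 61/100
```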
Metadata
- Spec version: 0.1.0
- Runtime: Ollama 0.17.4
- Model format: GGUF
- Hardware profile: HIGH-END
- Result hash: 40f1f574f63053019ada56d61392288f4a9ad33adafd324c05bd80a4713be64f
Interpretation
Hardware fit is a perfect 100/100 on this machine, and overall suitability is Good (global score 77/100). The quality profile is uneven: structured output and multilingual are strong; reasoning, coding, and instruction following are adequate; math is poor (3/15) and drags the quality score down.
Run yours now
npx metrillm@latest bench
Requires Node 20+ and a running Ollama instance.