llama3.1:8b
llama · 8.0B · Q4_K_M
Mac Mini (Apple M4 Pro)
64 GB · macOS 15.7.4
Tested on February 28, 2026
Global Score
64/100
Not Recommended
Hardware Fit
56/100
Quality
70/100
Get this model
- Ollama: ollama pull llama3.1:8b (ollama.com/library/llama3.1)
- HuggingFace: GGUF versions & quantizations
Hardware
- Machine: Mac Mini
- CPU: Apple M4 Pro
- Cores: 14 total (10 perf + 4 eff)
- Frequency: 2.4 GHz
- RAM: 64 GB LPDDR5
- GPU: Apple M4 Pro
- OS: macOS 15.7.4
- Arch: arm64
Performance
- Tokens/sec: 5.5
- Standard deviation: ±1.7
- Time to first token: 1.4 s
- Load time: 3.7 s
- Memory usage: 28.6 GB (45%)
- Total tokens: 997
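The tokens/sec figure above can be derived from the raw counters the Ollama runtime reports. A minimal sketch, assuming the standard fields of Ollama's /api/generate response (eval_count, a generated-token count, and eval_duration, in nanoseconds); the field names are Ollama's, but the response dict below is synthetic, shaped like this report's run:

```python
def tokens_per_sec(response: dict) -> float:
    """Decode speed from Ollama's generate-response counters.

    eval_count    - number of tokens generated
    eval_duration - generation time in nanoseconds
    """
    return response["eval_count"] / (response["eval_duration"] / 1e9)

# Synthetic response matching this run: 997 tokens at ~5.5 tok/s.
resp = {"eval_count": 997, "eval_duration": int(997 / 5.5 * 1e9)}
print(round(tokens_per_sec(resp), 1))  # → 5.5
```

The reported ±1.7 stddev would come from repeating this measurement across runs and aggregating; the report does not state the run count.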
Score breakdown
Speed
5/40
Time to first token
22/30
Memory
29/30
Quality
Reasoning
12/20
Coding
17/20
Instruction following
14/20
Structured output
13/15
Math
4/15
Multilingual
10/10
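The two sub-scores are plain sums of the breakdown rows above: Hardware Fit is Speed + Time to first token + Memory, and Quality is the six category rows. How they combine into the global 64/100 is not stated in the report, so this sketch only verifies the sums; the global weighting is left open:

```python
# Hardware Fit components (maxima 40 + 30 + 30 = 100).
hardware_fit = sum({"speed": 5, "ttft": 22, "memory": 29}.values())

# Quality components (maxima 20 + 20 + 20 + 15 + 15 + 10 = 100).
quality = sum({"reasoning": 12, "coding": 17, "instruction": 14,
               "structured": 13, "math": 4, "multilingual": 10}.values())

print(hardware_fit, quality)  # → 56 70
```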
Category levels
- Reasoning: Adequate
- Coding: Strong
- Instruction following: Adequate
- Structured output: Strong
- Math: Weak
- Multilingual: Strong
Metadata
- Spec version: 0.1.0
- Runtime: Ollama 0.17.4
- Model format: GGUF
- Hardware profile: HIGH-END
- Result hash: 7981e64efe711bdaad8975e34885500b1fb4977fe6bf51f2591d61ddf9e9b0b9
Interpretation
Hardware fit: 56/100, quality: 70/100, global: 64/100. Overall suitability: NOT RECOMMENDED, driven by the token-speed disqualifier below. Category profile: Reasoning: Adequate, Coding: Strong, Instruction following: Adequate, Structured output: Strong, Math: Weak, Multilingual: Strong.
Warnings
- Token speed is unstable (stddev ±1.7 tok/s on a mean of 5.5 tok/s, roughly 31% variation); this may indicate thermal throttling or memory pressure.
Disqualifiers
- Token speed too low: 5.5 tok/s (minimum: 6 tok/s for HIGH-END profile)
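The disqualifier above reads as a hard threshold check against the hardware profile's minimum sustained speed. A hypothetical sketch: the 6 tok/s floor for HIGH-END comes from this report, but the function and the table structure are invented for illustration:

```python
# Minimum sustained decode speed per hardware profile (tok/s).
# Only the HIGH-END floor appears in the report; add others as needed.
MIN_TOKENS_PER_SEC = {"HIGH-END": 6.0}

def disqualifiers(mean_tps: float, profile: str) -> list[str]:
    """Return the list of hard failures for a benchmark run."""
    issues = []
    floor = MIN_TOKENS_PER_SEC.get(profile)
    if floor is not None and mean_tps < floor:
        issues.append(f"Token speed too low: {mean_tps} tok/s "
                      f"(minimum: {floor:g} tok/s for {profile} profile)")
    return issues

print(disqualifiers(5.5, "HIGH-END"))
```

A run at or above the floor would return an empty list and pass this check.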
Run yours now
npx metrillm@latest bench
Requires Node 20+ and Ollama running.