lmstudio-community/meta-llama-3.1-8b-instruct
llama · 8B · Q4_K_M
Mac mini (Apple M4 Pro)
64 GB · macOS 15.7.4
Tested on March 5, 2026
Global Score
78 /100
Good
Hardware Fit
100/100
Quality
69/100
Hardware
- Machine
- Mac mini
- CPU
- Apple M4 Pro
- Cores
- 14 total (10 perf + 4 eff)
- Frequency
- 2.4 GHz
- RAM
- 64 GB LPDDR5
- GPU
- Apple M4 Pro
- OS
- macOS 15.7.4
- Arch
- arm64
- Power Mode
- balanced
Performance
- Tokens/sec
- 46.0
- Standard deviation
- ±0.7
- First chunk latency
- 245 ms
- Time to first token
- 245 ms
- Load time
- N/A
- Memory usage
- 4.6 GB (7%)
- Total tokens
- 1300
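A quick arithmetic sketch of what these throughput numbers imply, assuming all 1300 tokens were decoded at the steady-state rate (an approximation; the report does not break out prompt vs. output tokens):

```python
tokens_per_sec = 46.0   # reported decode throughput
total_tokens = 1300     # reported total tokens
ttft_s = 0.245          # reported time to first token, in seconds

# Estimated decode time if every token was generated at the steady-state rate
# (hypothetical back-of-envelope figure, not a reported metric).
decode_s = total_tokens / tokens_per_sec
print(f"~{decode_s:.1f} s decode + {ttft_s:.3f} s to first token")
```

This puts the run at roughly 28 seconds of generation on top of the 245 ms first-token latency.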
Score breakdown
Speed
50/50
Time to first token
20/20
Memory
30/30
Quality
Reasoning
11/20
Coding
15/20
Instruction following
14/20
Structured output
15/15
Math
4/15
Multilingual
10/10
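As a sanity check, the sub-scores above sum to the reported totals (category maxima are taken from the scorecard; the Global score's weighting is not published here, so it is not recomputed):

```python
# Hardware Fit sub-scores, each at its maximum.
hardware = {"speed": 50, "ttft": 20, "memory": 30}
# Quality sub-scores as reported.
quality = {"reasoning": 11, "coding": 15, "instruction_following": 14,
           "structured_output": 15, "math": 4, "multilingual": 10}

hardware_fit = sum(hardware.values())   # Hardware Fit total out of 100
quality_score = sum(quality.values())   # Quality total out of 100
print(hardware_fit, quality_score)
```

Both sums match the headline figures: 100/100 for Hardware Fit and 69/100 for Quality.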
Category levels
- Reasoning: Adequate
- Coding: Adequate
- Instruction Following: Adequate
- Structured Output: Strong
- Math: Poor
- Multilingual: Strong
Metadata
- Spec version
- 0.2.1
- Runtime
- LM Studio 0.4.6+1
- Model format
- GGUF
- Hardware profile
- HIGH-END
- Result hash
- 7026ec2769092288ef8727b9e25bf7230edd3e5df2ad750920bb5496bb95f7e1
Interpretation
Hardware fit: 100/100. Overall suitability: GOOD (Global 78/100). Category profile: Reasoning: Adequate, Coding: Adequate, Instruction Following: Adequate, Structured Output: Strong, Math: Poor, Multilingual: Strong.
Bench Environment
Power: AC · CPU load: avg 18% (peak 20%)
Run yours now
$ npm install -g metrillm@latest
$ metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest