mlx-community/meta-llama-3.1-8b-instruct

llama · 8B · 4bit

Mac mini (Apple M4 Pro)

64 GB · macOS 15.7.4

Tested on March 5, 2026
Top 24%
Global Score: 77/100 (Good)
Hardware Fit: 100/100
Quality: 67/100

Hardware

Machine: Mac mini
CPU: Apple M4 Pro
Cores: 14 (10 performance + 4 efficiency)
Frequency: 2.4 GHz
RAM: 64 GB LPDDR5
GPU: Apple M4 Pro
OS: macOS 15.7.4
Architecture: arm64
Power mode: balanced

Performance

Tokens/sec: 54.3 (±0.8 standard deviation)
First chunk latency: 277 ms
Time to first token: 277 ms
Load time: N/A
Memory usage: 4.2 GB (7% of the 64 GB RAM)
Total tokens: 1269
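From the reported figures the run time can be back-calculated. This is a rough estimate, not a number the report states: it assumes all 1269 tokens were produced at the steady-state 54.3 tokens/sec rate after the 277 ms time to first token.

```shell
# Back-of-envelope timing from the reported figures
awk 'BEGIN {
  gen   = 1269 / 54.3    # pure generation time at steady state
  total = 0.277 + gen    # add the 277 ms time to first token
  printf "%.1f s generation, %.1f s total\n", gen, total
}'
# prints: 23.4 s generation, 23.6 s total
```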

Score breakdown

Speed: 50/50
Time to first token: 20/20
Memory: 30/30

Quality

Reasoning: 11/20
Coding: 14/20
Instruction following: 14/20
Structured output: 15/15
Math: 4/15
Multilingual: 9/10
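The two sub-scores appear to be plain sums of their components — an observation from the numbers above, not documented scoring behavior:

```shell
# Hardware Fit = Speed + Time to first token + Memory
# Quality      = sum of the six category scores
awk 'BEGIN {
  print 50 + 20 + 30                # -> 100 (Hardware Fit)
  print 11 + 14 + 14 + 15 + 4 + 9   # -> 67  (Quality)
}'
```

How the Global Score of 77 is derived from these two sub-scores is not stated in the report.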

Category levels

Reasoning: Adequate
Coding: Adequate
Instruction following: Adequate
Structured output: Strong
Math: Poor
Multilingual: Strong

Metadata

Spec version: 0.2.1
Runtime: LM Studio 0.4.6+1
Model format: MLX
Hardware profile: HIGH-END
Result hash: d4961db8b5159f438a53459029328cddcd8de35810e985c4939fdf31326588a4

Interpretation

Hardware fit: 100/100. Overall suitability: GOOD (Global 77/100). Category profile: Reasoning: Adequate, Coding: Adequate, Instruction Following: Adequate, Structured Output: Strong, Math: Poor, Multilingual: Strong.

Bench Environment

Power: AC
CPU load: avg 25% (peak 28%)

Run yours now

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and a running Ollama or LM Studio instance

Or run without installing: npx metrillm@latest