deepseek-r1-distill-qwen-14b-mlx

qwen2 · 14B · 5bit

THINKING MODEL

Mac mini (Apple M4)

24 GB · macOS 26.3

Tested on March 11, 2026 · Submitted by cryptoepops
Top 94%
Global Score
36 /100
Not Recommended
Hardware Fit
53/100
Quality
28/100


Hardware

Machine
Mac mini
CPU
Apple M4
Cores
10 total (4 perf + 6 eff)
Frequency
2.4 GHz
RAM
24 GB LPDDR5
GPU
Apple M4
OS
macOS 26.3
Arch
arm64
Power Mode
balanced

Performance

Tokens/sec
10.3
Standard deviation
±0.0
First chunk latency
27 ms
Time to first token
20.7 s
Load time
15.4 s
Memory usage
1.0 GB (4%)
Total tokens
1621
Thinking tokens (est.)
~959
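The throughput figures above can be cross-checked against each other. The sketch below assumes decode time is total tokens divided by the steady-state rate, and that time to first token includes model load plus prompt processing (assumptions about how the harness measures, not documented behavior; field names are illustrative, not the metrillm schema):

```python
# Cross-check of the reported performance figures.
total_tokens = 1621        # "Total tokens" from the report
tokens_per_sec = 10.3      # "Tokens/sec"
ttft_s = 20.7              # "Time to first token"
load_s = 15.4              # "Load time"

# Pure generation time at the measured decode rate.
decode_s = total_tokens / tokens_per_sec

# If TTFT includes load, the remainder is roughly prompt processing
# (an assumption about the measurement boundary).
prompt_processing_s = ttft_s - load_s

print(f"decode ~ {decode_s:.0f} s, prompt processing ~ {prompt_processing_s:.1f} s")
```

On these numbers, generation alone runs about 157 s, which is why the ~959 estimated thinking tokens dominate the wall-clock time of a response.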

Score breakdown

Speed
23/50
Time to first token
0/20
Memory
30/30

Quality

Reasoning
3/20
Coding
6/20
Instruction following
6/20
Structured output
3/15
Math
2/15
Multilingual
8/10

Category levels

Reasoning: Poor
Coding: Weak
Instruction Following: Weak
Structured Output: Poor
Math: Poor
Multilingual: Strong

Metadata

Spec version
0.2.1
Runtime
LM Studio 0.4.6+1
Model format
MLX
Hardware profile
ENTRY
Result hash
8facd3ad7d5e2e68152880b87796ceca928d463d9d118405317d6d25a158d6c7
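The result hash is 64 hex characters, consistent with a SHA-256 digest. Assuming metrillm hashes a canonical serialization of the result payload (an assumption — the exact input bytes are not documented on this page, and the `result` dict below is a hypothetical stand-in), recomputing such a digest looks like:

```python
import hashlib
import json

# Hypothetical result payload; the real hash input is not specified here.
result = {"model": "deepseek-r1-distill-qwen-14b-mlx", "global_score": 36}

# Canonical JSON (sorted keys, no whitespace) so the digest is stable.
payload = json.dumps(result, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(payload).hexdigest()

print(digest)  # 64 lowercase hex characters
```

A matching digest would confirm the submitted result was not altered after the run, provided the hashed bytes are defined the same way on both ends.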

Interpretation

Hardware fit: 53/100. Overall suitability: NOT RECOMMENDED (Global 36/100). Category profile: Reasoning: Poor, Coding: Weak, Instruction Following: Weak, Structured Output: Poor, Math: Poor, Multilingual: Strong.

Bench Environment

Power: AC · CPU load: avg 13% (peak 18%)

Run yours now

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and Ollama or LM Studio running

Or run without installing: npx metrillm@latest