openai/gpt-oss-20b

gpt_oss · 20B · MXFP4

THINKING MODEL

Mac mini (Apple M4)

24 GB · macOS 26.3

Tested on March 19, 2026 · Submitted by cryptoepops
Top 25%
Global Score
77 /100
Not Recommended
Hardware Fit
67/100
Quality
81/100


Hardware

Machine
Mac mini
CPU
Apple M4
Cores
10 total (4 perf + 6 eff)
Frequency
2.4 GHz
RAM
24 GB LPDDR5
GPU
Apple M4
OS
macOS 26.3
Arch
arm64
Power Mode
balanced

Performance

Tokens/sec
30.5
Standard deviation
±1.0
First chunk latency
18 ms
Time to first token
30.0 s
Load time
N/A
Memory usage
15.8 GB (66%)
Total tokens
1633
Thinking tokens (est.)
~954
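
From these figures a rough end-to-end picture can be derived. A minimal sketch in Python (assuming constant decode throughput and ignoring the ±1.0 tok/s deviation; all variable names are illustrative):

```python
# Reported performance figures (from the table above)
tokens_per_sec = 30.5
total_tokens = 1633
thinking_tokens = 954   # estimated by the benchmark
ram_gb = 24
mem_used_gb = 15.8

# Visible (non-thinking) output tokens
visible_tokens = total_tokens - thinking_tokens

# Approximate decode time for the full response at steady throughput
decode_s = total_tokens / tokens_per_sec

# Memory pressure on the 24 GB machine (reported as 66%)
mem_pct = mem_used_gb / ram_gb * 100

print(f"{visible_tokens} visible tokens, ~{decode_s:.1f} s decode, {mem_pct:.0f}% RAM")
```

At 30.5 tok/s, the 1633-token response alone takes roughly 53.5 s of decode time, before the 30 s time-to-first-token is even counted.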

Score breakdown

Speed
50/50
Time to first token
0/20
Memory
17/30

Quality

Reasoning
19/20
Coding
11/20
Instruction following
12/20
Structured output
15/15
Math
14/15
Multilingual
10/10
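
The subscores above appear to sum directly to their headline figures (67/100 Hardware Fit, 81/100 Quality). A quick additive check, assuming the scoring is a plain sum of the listed components (how the two combine into the global 77/100 is not stated in the report):

```python
# Hardware fit components: Speed (/50), Time to first token (/20), Memory (/30)
hardware_fit = 50 + 0 + 17

# Quality components, out of 20+20+20+15+15+10 = 100 points
quality = 19 + 11 + 12 + 15 + 14 + 10

print(hardware_fit, quality)  # 67 and 81, matching the headline scores
```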

Category levels

Reasoning: Strong · Coding: Adequate · Instruction Following: Adequate · Structured Output: Strong · Math: Strong · Multilingual: Strong

Metadata

Spec version
0.2.1
Runtime
LM Studio 0.4.7+4
Model format
MLX
Hardware profile
ENTRY
Result hash
db23b57b6f3224c815833d106078e6bfc3556d1d002b4bc7144e099190615692

Interpretation

Hardware fit: 67/100. Overall suitability: NOT RECOMMENDED (Global 77/100). Category profile: Reasoning: Strong, Coding: Adequate, Instruction Following: Adequate, Structured Output: Strong, Math: Strong, Multilingual: Strong.

Warnings

  • Token throughput is estimated from LM Studio output because native token stats were unavailable. Compare tok/s across backends cautiously.
  • Model memory footprint is estimated via LM Studio CLI rather than measured from a fresh load.

Disqualifiers

  • Time to first token too high: 30000ms (maximum: 21386ms for ENTRY profile)
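
The disqualifier reads as a simple per-profile threshold check. A sketch of that rule (the 21386 ms ENTRY cap is taken from the report; the function and profile table are illustrative, not metrillm's actual implementation):

```python
# Illustrative TTFT caps per hardware profile; only ENTRY is confirmed by this report
TTFT_MAX_MS = {"ENTRY": 21386}

def ttft_disqualified(ttft_ms: float, profile: str) -> bool:
    """Return True if time-to-first-token exceeds the profile's cap."""
    return ttft_ms > TTFT_MAX_MS[profile]

print(ttft_disqualified(30000, "ENTRY"))  # this run: 30000 ms > 21386 ms
```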

Bench Environment

Power: AC · CPU load: avg 21% (peak 23%)

Run yours now

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and a running Ollama or LM Studio instance

Or run without installing: npx metrillm@latest