
mistral:latest

OLLAMA GGUF

llama · 7.2B · Q4_0

Good

Feb 28, 2026 · Apple M4 Pro

qwen3.5:35b

OLLAMA GGUF

qwen35moe · 36.0B · Q4_K_M

Good

Mar 7, 2026 · Intel Core™ i5-14600KF

Global Score: 72 vs 77
Hardware Fit: 100 vs 53
Quality Score: 60 vs 87

Hardware

mistral:latest vs qwen3.5:35b
Machine: Mac Mini vs ASUS
CPU: Apple M4 Pro vs Intel Core™ i5-14600KF
Cores: 14 vs 20
RAM: 64 GB vs 32 GB
GPU: Apple M4 Pro vs NVIDIA GeForce RTX 4070 Ti SUPER
OS: macOS 15.7.4 vs Microsoft Windows 11 Home 10.0.26200
Arch: arm64 vs x64
Power Mode: unknown vs balanced

Performance

mistral:latest vs qwen3.5:35b
Tokens/sec: 54.3 vs 16.0
First chunk: N/A vs 719 ms
TTFT: 124 ms vs 719 ms
Load time: 0.0 s vs 17.8 s
Memory usage: 9.9 GB vs 25.0 GB
Memory %: 15% vs 79%
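How these numbers are measured isn't shown on this page, but Ollama's /api/generate endpoint does report eval_count (tokens generated) and eval_duration (nanoseconds) in its final response, from which a tokens/sec figure like the one above can be derived. A minimal TypeScript sketch assuming those fields; whether metrillm computes it exactly this way is an assumption:

```typescript
// Throughput from Ollama /api/generate timing fields.
// eval_count and eval_duration are fields of the endpoint's final
// response; using them for the tokens/sec column above is an assumption.
interface OllamaTimings {
  eval_count: number;    // tokens generated
  eval_duration: number; // generation time in nanoseconds
}

function tokensPerSec(t: OllamaTimings): number {
  return t.eval_count / (t.eval_duration / 1e9);
}

// 543 tokens over 10 s of generation → 54.3 tok/s
console.log(tokensPerSec({ eval_count: 543, eval_duration: 10e9 }).toFixed(1));
```

Note that eval_duration excludes model load time, which is why a slow-loading model (17.8 s above) can still report a clean tokens/sec figure.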

HW Fit Score Breakdown

mistral:latest

Speed: 50/50
TTFT: 20/20
Memory: 30/30

qwen3.5:35b

Speed: 23/50
TTFT: 20/20
Memory: 10/30
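The components above sum to each model's Hardware Fit score, and the Global Score at the top of the page is consistent with a fixed 30/70 blend of Hardware Fit and Quality. A minimal sketch assuming that weighting; the 0.3/0.7 split is inferred from this page's numbers, not a documented metrillm formula:

```typescript
// Global Score as a weighted blend of Hardware Fit and Quality.
// The 0.3/0.7 weights are an inference from the scores on this page,
// not a formula published by metrillm.
function globalScore(hardwareFit: number, quality: number): number {
  return Math.round(0.3 * hardwareFit + 0.7 * quality);
}

console.log(globalScore(100, 60)); // mistral:latest → 72
console.log(globalScore(53, 87));  // qwen3.5:35b  → 77
```

Under this weighting, mistral:latest's perfect Hardware Fit only partly offsets its weaker Quality score, which is why it still trails 72 to 77 overall.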

Quality

mistral:latest

Reasoning: 8/20 (Weak)
Coding: 15/20 (Adequate)
Instruction Following: 13/20 (Adequate)
Structured Output: 12/15 (Strong)
Math: 3/15 (Poor)
Multilingual: 9/10 (Strong)

qwen3.5:35b

Reasoning: 17/20 (Strong)
Coding: 18/20 (Strong)
Instruction Following: 15/20 (Strong)
Structured Output: 15/15 (Strong)
Math: 12/15 (Strong)
Multilingual: 10/10 (Strong)

Run yours and compare

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and a running Ollama or LM Studio instance

Or run without installing: npx metrillm@latest