qwen3.5:4b

qwen35 · 4.7B · Q4_K_M

THINKING MODEL · ECO MODE

AZW GTR Pro (AMD RYZEN AI MAX+ 395)

125 GB · Ubuntu 24.04.4 LTS

Tested on March 5, 2026
Top 55%
Global Score
65 /100
Not Recommended
Hardware Fit
73/100
Quality
62/100

Hardware

Machine
AZW GTR Pro
CPU
AMD RYZEN AI MAX+ 395
Cores
32 threads (16 cores)
Frequency
3 GHz
RAM
125 GB
GPU
AMD Radeon 8060S
OS
Ubuntu 24.04.4 LTS
Arch
x64
Power Mode
low-power

Performance

Tokens/sec
45.7
Standard deviation
±0.1
First chunk latency
180 ms
Time to first token
30.0 s
Load time
5.3 s
Memory usage
16.2 GB (13%)
Total tokens
1429
Thinking tokens (est.)
~745
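As a sanity check, the throughput and token counts above are mutually consistent; a quick back-of-the-envelope calculation (all values copied from this report):

```python
tokens_per_sec = 45.7    # measured throughput
total_tokens = 1429      # total tokens generated
thinking_tokens = 745    # estimated hidden reasoning tokens

# Wall-clock generation time implied by the throughput: ~31 s
gen_time_s = total_tokens / tokens_per_sec

# Time spent on hidden "thinking" tokens alone: ~16 s.
# This is part of why time to first *visible* token is 30.0 s
# even though the first chunk arrives after only 180 ms.
thinking_time_s = thinking_tokens / tokens_per_sec

print(round(gen_time_s, 1), round(thinking_time_s, 1))  # → 31.3 16.3
```

The gap between 180 ms first-chunk latency and 30 s time to first token is the signature of a thinking model: tokens stream almost immediately, but the visible answer starts only after the reasoning phase.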

Score breakdown

Speed
43/50
Time to first token
0/20
Memory
30/30

Quality

Reasoning
17/20
Coding
1/20
Instruction following
13/20
Structured output
8/15
Math
13/15
Multilingual
10/10
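The two sub-scores are straight sums of their component scores; how the 65/100 global score blends them is not documented in this report, so that weighting is left out of the sketch:

```python
# Hardware Fit = Speed + Time-to-first-token + Memory
hardware_fit = 43 + 0 + 30  # matches the 73/100 shown above

# Quality = sum of the six category scores
quality = 17 + 1 + 13 + 8 + 13 + 10  # matches the 62/100 shown above

print(hardware_fit, quality)  # → 73 62
```

Note that the 0/20 time-to-first-token component single-handedly drags Hardware Fit down despite full marks on memory.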

Category levels

Reasoning: Strong
Coding: Poor
Instruction Following: Adequate
Structured Output: Adequate
Math: Strong
Multilingual: Strong

Metadata

Spec version
0.2.1
Runtime
Ollama 0.17.4
Model format
GGUF
Hardware profile
HIGH-END
Result hash
8a729467836dd9c1de490d490db3b112077a50bdfa1d119fe9c81a174bfa2e39

Interpretation

Hardware fit: 73/100. Overall suitability: Not Recommended (global score 65/100). Category profile: Reasoning: Strong, Coding: Poor, Instruction Following: Adequate, Structured Output: Adequate, Math: Strong, Multilingual: Strong.

Warnings

  • System was in low-power mode during this benchmark.
  • CPU appears throttled: running at 2.2 GHz vs 3.0 GHz nominal (73% of nominal).

Disqualifiers

  • Time to first token too high: 30000 ms (maximum: 10000 ms for the HIGH-END profile)
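The disqualifier and the throttling warning both follow from simple threshold checks on the measured values; a minimal sketch (the 10000 ms ceiling is the HIGH-END profile limit quoted above):

```python
ttft_ms = 30_000      # measured time to first token
max_ttft_ms = 10_000  # ceiling for the HIGH-END profile
disqualified = ttft_ms > max_ttft_ms

current_ghz, nominal_ghz = 2.2, 3.0
throttle_pct = round(100 * current_ghz / nominal_ghz)
throttled = current_ghz < nominal_ghz

print(disqualified, throttle_pct, throttled)  # → True 73 True
```

Because the run was in low-power mode, both flags are expected; re-running in a performance power profile would likely clear the throttling warning and could shorten the time to first token.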

Bench Environment

Thermal: nominal
CPU load: avg 4% (peak 4%)

Run yours now

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and a running Ollama or LM Studio instance

Or run without installing: npx metrillm@latest