glm-4.7-flash:latest
glm4moelite · 29.9B · Q4_K_M
THINKING MODEL
AZW GTR Pro (AMD Ryzen AI Max+ 395)
125 GB · Ubuntu 24.04.4 LTS
Tested on March 22, 2026
Global Score
86 /100
Not Recommended
Hardware Fit
74/100
Quality
91/100
Get this model
Ollama
ollama pull glm-4.7-flash:latest
ollama.com/library/glm-4.7-flash
Get it in LM Studio
Search and download models directly from the app
Find on HuggingFace
GGUF versions & quantizations
Hardware
- Machine
- AZW GTR Pro
- CPU
- AMD Ryzen AI Max+ 395
- Cores
- 32 threads (16 cores)
- Frequency
- 3 GHz
- RAM
- 125 GB
- GPU
- Radeon 8060S
- OS
- Ubuntu 24.04.4 LTS
- Arch
- x64
- Power Mode
- performance
Performance
- Tokens/sec
- 47.8
- Standard deviation
- ±0.0
- First chunk latency
- 194 ms
- Time to first token
- 30.0 s
- Load time
- 7.4 s
- Memory usage
- 37.7 GB (30%)
- Total tokens
- 1404
- Thinking tokens (est.)
- ~807
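As a sanity check, the headline numbers above are mutually consistent: 1404 tokens at 47.8 tok/s implies roughly 29 s of generation, and 37.7 GB of the 125 GB RAM is the reported 30%. A minimal sketch, using only values copied from the table:

```python
# Values from the performance table above.
tokens_per_sec = 47.8
total_tokens = 1404
ram_total_gb = 125
ram_used_gb = 37.7

# Approximate wall-clock generation time implied by the throughput.
gen_time_s = total_tokens / tokens_per_sec
print(round(gen_time_s, 1))  # ~29.4 s

# Memory usage as a share of installed RAM.
mem_pct = ram_used_gb / ram_total_gb * 100
print(round(mem_pct))  # 30 (%)
```

The ~29.4 s of pure generation time also explains why time to first token (30.0 s) dwarfs the 194 ms first-chunk latency on a thinking model: the hidden reasoning tokens are generated before the first visible token arrives.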
Score breakdown
Speed
44/50
Time to first token
0/20
Memory
30/30
Quality
Reasoning
19/20
Coding
17/20
Instruction following
15/20
Structured output
15/15
Math
15/15
Multilingual
10/10
Category levels
- Reasoning: Strong
- Coding: Strong
- Instruction Following: Strong
- Structured Output: Strong
- Math: Strong
- Multilingual: Strong
Metadata
- Spec version
- 0.2.1
- Runtime
- Ollama 0.17.4
- Model format
- GGUF
- Hardware profile
- HIGH-END
- Result hash
- b641c726549778aaeede13818907589fe8d2432d4b260b59ea734cac9ce71385
Interpretation
Hardware fit: 74/100. Overall suitability: NOT RECOMMENDED despite a Global score of 86/100, because the 30 s time to first token exceeds the 10 s limit for the HIGH-END profile (see Disqualifiers below). Category profile: all six categories — Reasoning, Coding, Instruction Following, Structured Output, Math, Multilingual — rate Strong.
Warnings
- CPU appears throttled: 2.4 GHz current vs 3.0 GHz nominal (79% of nominal).
Disqualifiers
- Time to first token too high: 30,000 ms (maximum 10,000 ms for the HIGH-END profile)
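The "Not Recommended" verdict follows from disqualifier gating rather than the numeric score: a hard profile limit overrides an otherwise high Global score. A hypothetical sketch of that logic (the function, its signature, and the 70-point pass threshold are illustrative assumptions, not metrillm's actual API):

```python
def suitability(global_score: int, ttft_ms: int, ttft_max_ms: int) -> str:
    """Illustrative gating: a hard disqualifier overrides the numeric score."""
    if ttft_ms > ttft_max_ms:
        # Disqualified outright, regardless of how high the score is.
        return "NOT RECOMMENDED"
    # The 70-point cutoff below is a made-up illustrative threshold.
    return "RECOMMENDED" if global_score >= 70 else "NOT RECOMMENDED"

# This run: Global 86/100, but TTFT 30000 ms > 10000 ms HIGH-END limit.
print(suitability(86, 30_000, 10_000))  # NOT RECOMMENDED
```

With a compliant TTFT, the same 86/100 score would clear the (assumed) threshold and flip the verdict, which is why the badge and the Global score can disagree.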
Bench Environment
- Thermal: nominal
- CPU load: avg 3% (peak 4%)
Run yours now
$ npm install -g metrillm@latest
$ metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest