deepseek-r1:7b
qwen2 · 7.6B · Q4_K_M
THINKING MODEL · ECO MODE
Gigabyte Technology Co., Ltd. H170M-D3H (Intel Core™ i5-6500)
16 GB · Ubuntu 24.04.4 LTS
Tested on March 5, 2026
Global Score: 66/100 (Good)
Hardware Fit: 63/100
Quality: 67/100
Hardware
- Machine: Gigabyte Technology Co., Ltd. H170M-D3H
- CPU: Intel Core™ i5-6500
- Cores: 4 total (4 performance)
- Frequency: 3.2 GHz
- RAM: 16 GB
- GPU: GP104 [GeForce GTX 1070]
- OS: Ubuntu 24.04.4 LTS
- Arch: x64
- Power mode: low-power
Performance
- Tokens/sec: 35.6
- Standard deviation: ±0.1
- First chunk latency: 224 ms
- Time to first token: 7.5 s
- Load time: 7.7 s
- Memory usage: 4.6 GB (29%)
- Total tokens: 1394
- Thinking tokens (est.): ~1041
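A few useful figures follow directly from the numbers above. A minimal sketch, assuming the benchmark's estimated thinking-token count is accurate (the variable names are illustrative, not part of metrillm):

```python
# Rough derivations from the performance figures above.
total_tokens = 1394
thinking_tokens = 1041       # benchmark's estimate, marked "~" in the report
tokens_per_sec = 35.6

visible_tokens = total_tokens - thinking_tokens      # tokens shown to the user
thinking_share = thinking_tokens / total_tokens      # fraction spent "thinking"
decode_time_s = total_tokens / tokens_per_sec        # decode only; excludes load/prefill

print(f"visible tokens: {visible_tokens}")           # 353
print(f"thinking share: {thinking_share:.0%}")       # ~75%
print(f"decode time:    {decode_time_s:.1f} s")      # ~39.2 s
```

In eco mode, roughly three quarters of the generated tokens are hidden reasoning, so the user-visible output is a small fraction of the ~39 s total decode time.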
Score breakdown
- Speed: 50/50
- Time to first token: 7/20
- Memory: 6/30
Quality
- Reasoning: 16/20
- Coding: 10/20
- Instruction following: 9/20
- Structured output: 10/15
- Math: 14/15
- Multilingual: 8/10
Category levels
- Reasoning: Strong
- Coding: Adequate
- Instruction following: Weak
- Structured output: Adequate
- Math: Strong
- Multilingual: Strong
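The sub-scores reproduce the headline cards by simple addition. A quick sanity check (the exact weighting metrillm uses to combine Hardware Fit and Quality into the Global Score is not documented here, so only the two component sums are verified):

```python
# Sub-scores as published in the breakdown above.
hardware = {"speed": 50, "ttft": 7, "memory": 6}              # out of 50, 20, 30
quality = {"reasoning": 16, "coding": 10, "instruction": 9,
           "structured": 10, "math": 14, "multilingual": 8}   # out of 20,20,20,15,15,10

hardware_fit = sum(hardware.values())    # matches the 63/100 Hardware Fit card
quality_score = sum(quality.values())    # matches the 67 shown on the Quality card
print(hardware_fit, quality_score)       # 63 67
```

Note that the quality sub-scores total out of 105 points, not 100, so the raw sum coinciding with the displayed 67/100 may reflect rescaling or capping inside the tool.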
Metadata
- Spec version: 0.2.1
- Runtime: Ollama 0.17.6
- Model format: GGUF
- Hardware profile: ENTRY
- Result hash: cace4b07deac3be49ae34dfa695ed2c4b6e598a329a71ca90d8c961716667c98
Interpretation
Hardware fit is 63/100 and the global score is 66/100, rating overall suitability as Good. Category profile: Reasoning, Math, and Multilingual are strong; Coding and Structured Output are adequate; Instruction Following is weak.
Warnings
- System was in low-power mode during this benchmark; throughput and latency figures may understate the machine's full-power performance.
Bench Environment
- Thermal: nominal
- Swap delta: +0.0 GB
- CPU load: avg 26% (peak 35%)
Run yours now
$ npm install -g metrillm@latest
$ metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest