internlm2_5-1_8b-chat
LM STUDIO GGUF · internlm2 · Q4_K_M
Mar 8, 2026 · Intel Core™ i5-5300U
glm-4.7-flash:q4_K_M
OLLAMA GGUF · glm4moelite · 29.9B · Q4_K_M
Mar 7, 2026 · Intel Core™ i5-14600KF
Global Score    55 vs 81
Hardware Fit    60 vs 89
Quality Score   53 vs 77
Hardware
                internlm2_5-1_8b-chat                           glm-4.7-flash:q4_K_M
Machine         LENOVO 20BUS00700                               ASUS
CPU             Intel Core™ i5-5300U                            Intel Core™ i5-14600KF
Cores           4                                               20
RAM             16 GB                                           32 GB
GPU             Intel(R) HD Graphics 5500                       NVIDIA GeForce RTX 4070 Ti SUPER
OS              Microsoft Windows 10 Professional 10.0.19045    Microsoft Windows 11 Home 10.0.26200
Arch            x64                                             x64
Power Mode      balanced                                        balanced
Performance
                internlm2_5-1_8b-chat    glm-4.7-flash:q4_K_M
Tokens/sec      6.6                      35.1
First chunk     3037 ms                  440 ms
TTFT            3.0 s                    440 ms
Load time       N/A                      11.6 s
Memory usage    1.1 GB                   18.4 GB
Memory %        7%                       58%
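To see what the throughput and TTFT gap means in practice, here is a rough sketch of end-to-end latency for a reply, assuming total time ≈ TTFT + tokens ÷ decode rate. The formula and the 200-token reply length are illustrative assumptions, not part of the benchmark; prompt length and per-request overhead are ignored.

```python
# Rough end-to-end latency: time to first token plus decode time.
# This is a simplification; real latency also depends on prompt
# length, context caching, and runtime overhead.
def estimate_latency(ttft_s: float, tokens_per_s: float, n_tokens: int) -> float:
    return ttft_s + n_tokens / tokens_per_s

# Numbers taken from the performance table above, for a 200-token reply.
internlm = estimate_latency(3.0, 6.6, 200)    # ≈ 33.3 s
glm = estimate_latency(0.44, 35.1, 200)       # ≈ 6.1 s
```

At these rates the larger model, despite its size, would finish a typical reply roughly five times sooner on its faster hardware.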
HW Fit Score Breakdown
internlm2_5-1_8b-chat
  Speed 23/50 · TTFT 16/20 · Memory 21/30
glm-4.7-flash:q4_K_M
  Speed 49/50 · TTFT 20/20 · Memory 20/30
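The Hardware Fit score appears to be a straight sum of the three sub-scores (Speed /50 + TTFT /20 + Memory /30 = /100). That weighting is inferred from the figures shown here, not documented behaviour of the tool:

```python
# Hardware Fit as a plain sum of sub-scores (inferred weighting:
# Speed out of 50, TTFT out of 20, Memory out of 30).
def hw_fit(speed: int, ttft: int, memory: int) -> int:
    return speed + ttft + memory

assert hw_fit(23, 16, 21) == 60  # internlm2_5-1_8b-chat, matches 60 above
assert hw_fit(49, 20, 20) == 89  # glm-4.7-flash:q4_K_M, matches 89 above
```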
Quality
internlm2_5-1_8b-chat
  Reasoning 10/20 · Coding 9/20 · Instruction 14/20 · Structured 10/15 · Math 5/15 · Multilingual 5/10
  Reasoning: Adequate · Coding: Weak · Instruction Following: Adequate · Structured Output: Adequate · Math: Weak · Multilingual: Adequate
glm-4.7-flash:q4_K_M
  Reasoning 13/20 · Coding 17/20 · Instruction 12/20 · Structured 15/15 · Math 10/15 · Multilingual 10/10
  Reasoning: Adequate · Coding: Strong · Instruction Following: Adequate · Structured Output: Strong · Math: Adequate · Multilingual: Strong
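The Quality score likewise looks like a plain sum of the six sub-scores (20 + 20 + 20 + 15 + 15 + 10 = 100 points). Again, the weighting is inferred from the numbers shown, not from the tool's documentation:

```python
# Quality score as a sum of the six category sub-scores
# (inferred point budget: 20/20/20/15/15/10 = 100).
def quality(reasoning: int, coding: int, instruction: int,
            structured: int, math: int, multilingual: int) -> int:
    return reasoning + coding + instruction + structured + math + multilingual

assert quality(10, 9, 14, 10, 5, 5) == 53     # internlm2_5-1_8b-chat
assert quality(13, 17, 12, 15, 10, 10) == 77  # glm-4.7-flash:q4_K_M
```

How the Global Score (55 vs 81) combines Hardware Fit and Quality is not shown on this page, so no formula is assumed for it here.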
Run yours and compare
$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest