Which LLM runs best on YOUR machine?
Stop guessing. Benchmark local LLMs directly on your hardware — speed, quality, memory — and get a clear fitness verdict.
npm install -g metrillm@latest
metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest
One command. Real answers.
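Starting from a clean machine, a typical first run might look like the sketch below. ollama serve and ollama pull are standard Ollama commands, and llama3 is only an example model name; substitute whatever model you want to benchmark.

# Start Ollama and pull a model to benchmark (llama3 is just an example)
ollama serve
ollama pull llama3

# Install and run MetriLLM
npm install -g metrillm@latest
metrillm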
MacBook Air M2 · 16 GB: 42.1 tok/s · 68% quality · 3.2 GB memory
Mac Mini M4 Pro · 48 GB: 28.5 tok/s · 74% quality · 5.8 GB memory
Desktop RTX 4090 · 64 GB: 14.3 tok/s · 81% quality · 9.4 GB memory
Example results — yours will reflect your actual hardware.
How it works
Install & run
One command. No config. Works with Ollama and LM Studio; see the sketch after these steps.
Get your verdict
Speed, quality, memory — tested with 14 targeted prompts.
Join the leaderboard
Upload your results. Compare with real hardware from the community.
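Before step one, a quick way to confirm a local Ollama server is reachable is the check below; 11434 is Ollama's default port and /api/tags lists the models you have pulled. The MetriLLM invocation is the same npx command from the install section.

# Confirm the local Ollama server is up and see which models it has (default port 11434)
curl http://localhost:11434/api/tags

# Run the benchmark without a global install
npx metrillm@latest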
The community is benchmarking
Why MetriLLM
Open source
Free CLI, public methodology. Every result is reproducible.
Your hardware, your data
Benchmarks run locally. No cloud, no uploading your models.
Real-world testing
14 targeted prompts covering reasoning, math, coding, and more.
Community-driven
Real results on real hardware. No synthetic benchmarks.
Ready to find out?
Run your first benchmark. Free and open source.
npm install -g metrillm@latest
metrillm
Requires Node 20+ and Ollama or LM Studio running.
Or run without installing: npx metrillm@latest
Don't miss new benchmarks
Get notified when new models and hardware configurations are tested. No spam, unsubscribe anytime.