# MetriLLM

> Open-source CLI tool and leaderboard for benchmarking local LLM inference on real hardware. Measures speed (tokens/s, TTFT), memory usage, and response quality (reasoning, math, coding, instruction following, structured output, multilingual). MIT license, npm package.

- [Homepage](https://metrillm.dev/): Live leaderboard with benchmark results across real hardware configurations
- [CLI on GitHub](https://github.com/MetriLLM/metrillm): Source code, installation instructions, and contribution guide

## Documentation

- [Methodology](https://metrillm.dev/methodology): How benchmarks are scored — global score formula, hardware fit, quality metrics, verdict system, hardware profiles
- [Specification](https://metrillm.dev/specification): Benchmark output format and data schema
- [Models directory](https://metrillm.dev/models): All benchmarked models with individual performance pages
- [Hardware directory](https://metrillm.dev/hardware): All benchmarked hardware configurations with per-CPU pages

## Community

- [Blog](https://metrillm.dev/blog): Articles about local LLM performance and benchmarking insights
- [Contributors](https://metrillm.dev/contributors): People who contributed benchmark data
- [Founding Supporters](https://metrillm.dev/founding): Early supporter program

## Tools

- [My Benchmarks](https://metrillm.dev/my-benchmarks): Look up your own benchmark results by machine ID
- [Compare](https://metrillm.dev/compare): Side-by-side model comparison tool

## Optional

- [Privacy Policy](https://metrillm.dev/privacy): Privacy policy
- [Terms of Service](https://metrillm.dev/terms): Terms of service
- [Legal](https://metrillm.dev/legal): Legal notices