
lfm2-24b-a2b
LM Studio GGUF · Not Recommended
Mar 6, 2026 · Apple M4 Pro

mlx-community/quantized-gemma-2b-it
LM Studio GGUF · Marginal
Mar 5, 2026 · Apple M4

Scores (lfm2-24b-a2b vs mlx-community/quantized-gemma-2b-it)

Global Score     18 vs 45
Hardware Fit     59 vs 100
Quality Score     0 vs 21

Hardware

             lfm2-24b-a2b    mlx-community/quantized-gemma-2b-it
Machine      Mac mini        MacBook Air
CPU          Apple M4 Pro    Apple M4
Cores        14              10
RAM          64 GB           32 GB
GPU          Apple M4 Pro    Apple M4
OS           macOS 15.7.4    macOS 26.3
Arch         arm64           arm64
Power Mode   balanced        balanced

Performance

              lfm2-24b-a2b    mlx-community/quantized-gemma-2b-it
Tokens/sec    6.6             44.4
First chunk   1025 ms         336 ms
TTFT          1.0 s           336 ms
Load time     N/A             N/A
Memory usage  0.0 GB          0.1 GB
Memory %      0%              0%

HW Fit Score Breakdown

          lfm2-24b-a2b    mlx-community/quantized-gemma-2b-it
Speed     9/50            50/50
TTFT      20/20           20/20
Memory    30/30           30/30

Quality

                        lfm2-24b-a2b    mlx-community/quantized-gemma-2b-it
Reasoning               0/20 (Poor)     4/20 (Poor)
Coding                  0/20 (Poor)     1/20 (Poor)
Instruction Following   0/20 (Poor)     9/20 (Weak)
Structured Output       0/15 (Poor)     4/15 (Weak)
Math                    0/15 (Poor)     2/15 (Poor)
Multilingual            0/10 (Poor)     1/10 (Poor)

Run yours and compare

$ npm install -g metrillm@latest
$ metrillm

Requires Node 20+ and a running Ollama or LM Studio instance

Or run without installing: npx metrillm@latest
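Since the tool requires Node 20 or newer, a minimal pre-flight check can save a failed run. This is a sketch, not part of metrillm itself; the helper name `check_node_version` is hypothetical:

```shell
# Hypothetical pre-flight check before running metrillm.
# Assumption: Node prints its version as "vMAJOR.MINOR.PATCH".
check_node_version() {
  ver="$1"                                # e.g. "v20.11.1"
  major="${ver#v}"                        # strip the leading "v"
  major="${major%%.*}"                    # keep only the major component
  [ "$major" -ge 20 ]                     # metrillm needs Node 20+
}

# Fall back to "v0" when node is not installed at all.
if check_node_version "$(node --version 2>/dev/null || echo v0)"; then
  echo "Node 20+ found"
else
  echo "Install Node 20 or newer first"
fi
```

With Node in place, `npx metrillm@latest` runs the benchmark without a global install, provided Ollama or LM Studio is already serving a model locally.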