mistral:latest
llama · 7.2B · Q4_0
Mac Mini (Apple M4 Pro)
64 GB · macOS 15.7.4
Tested on February 28, 2026
Global Score
76 /100
Good
Hardware Fit
100/100
Quality
60/100
Get this model
🦙
Ollama
ollama pull mistral:latest
View on Ollama Library: ollama.com/library/mistral
🤗
Find on HuggingFace
GGUF versions & quantizations
Hardware
- Machine: Mac Mini
- CPU: Apple M4 Pro
- Cores: 14 total (10 performance + 4 efficiency)
- Frequency: 2.4 GHz
- RAM: 64 GB LPDDR5
- GPU: Apple M4 Pro
- OS: macOS 15.7.4
- Arch: arm64
Performance
- Tokens/sec: 54.3 (±0.1 standard deviation)
- Time to first token: 124 ms
- Load time: 0.0 s
- Memory usage: 9.9 GB (15%)
- Total tokens: 1213
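The throughput and latency figures above can be derived from the timing fields Ollama returns with every generation response (`eval_count`, `eval_duration`, `prompt_eval_duration`, `load_duration`; durations are in nanoseconds). A minimal sketch — the sample values below are illustrative, not the raw data behind this run:

```python
# Derive throughput metrics from an Ollama /api/generate response.
# Field names match Ollama's REST API; the sample values are
# illustrative, not the actual measurements from this benchmark.

NS_PER_S = 1_000_000_000

def tokens_per_second(resp: dict) -> float:
    """Generation speed: output tokens divided by generation time."""
    return resp["eval_count"] / resp["eval_duration"] * NS_PER_S

def time_to_first_token_ms(resp: dict) -> float:
    """Approximate TTFT: model load time plus prompt processing time."""
    return (resp["load_duration"] + resp["prompt_eval_duration"]) / 1_000_000

sample = {
    "eval_count": 543,                    # output tokens generated
    "eval_duration": 10 * NS_PER_S,       # 10 s spent generating
    "load_duration": 0,                   # model already loaded
    "prompt_eval_duration": 124_000_000,  # 124 ms processing the prompt
}

print(f"{tokens_per_second(sample):.1f} tok/s")    # 54.3 tok/s
print(f"{time_to_first_token_ms(sample):.0f} ms")  # 124 ms
```

Averaging `tokens_per_second` over repeated runs of the same prompt set yields the mean and standard deviation reported above.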
Score breakdown
Hardware fit
- Speed: 40/40
- Time to first token: 30/30
- Memory: 30/30
Quality
- Reasoning: 8/20
- Coding: 15/20
- Instruction following: 13/20
- Structured output: 12/15
- Math: 3/15
- Multilingual: 9/10
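The headline numbers are just sums of the rows above, and the global score is consistent with a 40/60 weighting of hardware fit and quality. That weighting is an inference from this report's figures (0.4 × 100 + 0.6 × 60 = 76), not documented metrillm behavior:

```python
# Recompute the headline scores from the per-category rows.
# The 0.4/0.6 weighting for the global score is back-inferred from
# this report's numbers, not a documented metrillm formula.

hardware_fit = {"speed": 40, "ttft": 30, "memory": 30}        # out of 40/30/30
quality = {"reasoning": 8, "coding": 15, "instruction": 13,
           "structured": 12, "math": 3, "multilingual": 9}    # out of 100 total

hw_score = sum(hardware_fit.values())           # 100
q_score = sum(quality.values())                 # 60
global_score = 0.4 * hw_score + 0.6 * q_score   # 76 under the assumed weighting

print(hw_score, q_score, global_score)
```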
Category levels
Reasoning: Weak · Coding: Adequate · Instruction following: Adequate · Structured output: Strong · Math: Poor · Multilingual: Strong
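The level labels appear to follow from each category's score as a fraction of its maximum. The cutoffs below are hypothetical — back-fitted so that they reproduce the six labels in this report — and may not match metrillm's actual thresholds:

```python
# Map a category score to a level label. These thresholds are
# hypothetical: they are back-fitted to reproduce the six labels in
# this report and may not match metrillm's real cutoffs.

def level(score: int, maximum: int) -> str:
    ratio = score / maximum
    if ratio < 0.3:
        return "Poor"
    if ratio < 0.6:
        return "Weak"
    if ratio < 0.8:
        return "Adequate"
    return "Strong"

categories = {
    "Reasoning": (8, 20),               # 0.40 -> Weak
    "Coding": (15, 20),                 # 0.75 -> Adequate
    "Instruction following": (13, 20),  # 0.65 -> Adequate
    "Structured output": (12, 15),      # 0.80 -> Strong
    "Math": (3, 15),                    # 0.20 -> Poor
    "Multilingual": (9, 10),            # 0.90 -> Strong
}

for name, (score, maximum) in categories.items():
    print(f"{name}: {level(score, maximum)}")
```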
Metadata
- Spec version: 0.1.0
- Runtime: Ollama 0.17.4
- Model format: GGUF
- Hardware profile: HIGH-END
- Result hash: 62b8c3588c2d3110746db6e6bb1e75ae7380cb0bb2d03936b367519799d02a33
Interpretation
Hardware fit is a perfect 100/100, while quality is the limiting factor at 60/100, yielding a global score of 76/100 (Good). By category, the model is strong at structured output and multilingual tasks, adequate at coding and instruction following, weak at reasoning, and poor at math.
Run yours now
npx metrillm@latest bench
Requires Node 20+ and a running Ollama instance.