mistralai/magistral-small-2509
mistral3 · 24B · 4bit
MacBook Air (Apple M4)
32 GB · macOS 26.3
Tested on March 3, 2026
Global Score
20/100
Not Recommended
Hardware Fit
65/100
Quality
0/100
Get this model
Ollama
ollama pull mistralai/magistral-small-2509
View on Ollama Library
ollama.com/library/mistralai/magistral-small-2509
Find on HuggingFace
GGUF versions & quantizations
Hardware
- Machine
- MacBook Air
- CPU
- Apple M4
- Cores
- 10 total (4 perf + 6 eff)
- Frequency
- 2.4 GHz
- RAM
- 32 GB LPDDR5
- GPU
- Apple M4
- OS
- macOS 26.3
- Arch
- arm64
- Power Mode
- balanced
Performance
- Tokens/sec
- 5.1
- Standard deviation
- ±0.1
- Time to first token
- 2.0 s
- Load time
- 0.0 s
- Memory usage
- 0.0 GB (0%)
- Total tokens
- 1956
Score breakdown
Speed
8/40
Time to first token
27/30
Memory
30/30
Quality
Reasoning
0/20
Coding
0/20
Instruction following
0/20
Structured output
0/15
Math
0/15
Multilingual
0/10
Category levels
Reasoning: Poor · Coding: Poor · Instruction Following: Poor · Structured Output: Poor · Math: Poor · Multilingual: Poor
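The Hardware Fit and Quality figures above appear to be straight sums of their subscores (8 + 27 + 30 = 65, and 0 across every quality category out of a 100-point cap). A minimal sketch of that arithmetic; the dictionary and function names are illustrative assumptions, and how the two totals combine into the Global Score of 20/100 is not documented in this report:

```python
# Subscores as (points earned, points possible), taken from the breakdown above.
hardware_subscores = {
    "speed": (8, 40),
    "time_to_first_token": (27, 30),
    "memory": (30, 30),
}
quality_subscores = {
    "reasoning": (0, 20),
    "coding": (0, 20),
    "instruction_following": (0, 20),
    "structured_output": (0, 15),
    "math": (0, 15),
    "multilingual": (0, 10),
}

def total(subscores):
    """Sum earned points and maximum points across categories."""
    earned = sum(points for points, _ in subscores.values())
    maximum = sum(cap for _, cap in subscores.values())
    return earned, maximum

print(total(hardware_subscores))  # (65, 100) -> Hardware Fit 65/100
print(total(quality_subscores))   # (0, 100)  -> Quality 0/100
```

Both category groups cap at 100 points, so the earned sums read directly as the /100 scores shown above.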
Metadata
- Spec version
- 0.2.0
- Runtime
- LM Studio (version unknown)
- Model format
- GGUF
- Hardware profile
- BALANCED
- Result hash
- 215cea644dfe9180e0762902aff6ded1894da955a3ee589acd4dd5f35c869e4d
Interpretation
Hardware fit: 65/100. Overall suitability: NOT RECOMMENDED (Global 20/100). Category profile: Reasoning: Poor, Coding: Poor, Instruction Following: Poor, Structured Output: Poor, Math: Poor, Multilingual: Poor. Warning: the model produced very low accuracy on quality tasks; results may be unusable despite good hardware performance.
Warnings
- The model produced very low accuracy on quality tasks; results may be unusable despite good hardware performance.
Run yours now
npx metrillm@latest bench
Requires Node 20+ and Ollama or LM Studio running.