meta-llama-3.1-8b-instruct
llama · 8B · 4bit
MacBook Air (Apple M4)
32 GB · macOS 26.3
Tested on March 3, 2026
Global Score
76/100
Good
Hardware Fit
95/100
Quality
68/100
Get this model
🦙
Ollama
ollama pull meta-llama-3.1-8b-instruct
View on Ollama Library
ollama.com/library/meta-llama-3.1-8b-instruct
🤗
Find on HuggingFace
GGUF versions & quantizations
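With Ollama installed and its daemon running, the steps above can be sketched as a pull followed by a quick smoke test (assuming the model tag shown above is available in the Ollama library):

```shell
# Pull the 4-bit quantized model listed above (requires the Ollama daemon running).
ollama pull meta-llama-3.1-8b-instruct

# Quick smoke test: send a single prompt and print the completion.
ollama run meta-llama-3.1-8b-instruct "Say hello in one sentence."
```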
Hardware
- Machine
- MacBook Air
- CPU
- Apple M4
- Cores
- 10 total (4 perf + 6 eff)
- Frequency
- 2.4 GHz
- RAM
- 32 GB LPDDR5
- GPU
- Apple M4
- OS
- macOS 26.3
- Arch
- arm64
- Power Mode
- balanced
Performance
- Tokens/sec
- 21.6
- Standard deviation
- ±0.3
- Time to first token
- 512 ms
- Load time
- 0.0 s
- Memory usage
- 0.0 GB (0%)
- Total tokens
- 1228
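As a sanity check, the throughput and token count above imply the total generation time. A minimal shell sketch using the reported figures (21.6 tokens/sec, 1228 tokens):

```shell
# Approximate generation time = total tokens / tokens-per-second.
tokens=1228
tps=21.6
awk -v t="$tokens" -v r="$tps" 'BEGIN { printf "%.1f s\n", t / r }'
# → 56.9 s
```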
Score breakdown
Speed
35/40
Time to first token
30/30
Memory
30/30
Quality
Reasoning
11/20
Coding
15/20
Instruction following
14/20
Structured output
14/15
Math
4/15
Multilingual
10/10
Category levels
- Reasoning: Adequate
- Coding: Strong
- Instruction Following: Adequate
- Structured Output: Strong
- Math: Poor
- Multilingual: Strong
Metadata
- Spec version
- 0.2.0
- Runtime
- LM Studio (version unknown)
- Model format
- GGUF
- Hardware profile
- BALANCED
- Result hash
- 017a6b62a9b9fd0b1ade26008465d911d127db6191b4ecceb5af8d1729355ad0
Interpretation
Hardware fit: 95/100. Overall suitability: GOOD (Global 76/100). Category profile: Reasoning: Adequate, Coding: Strong, Instruction Following: Adequate, Structured Output: Strong, Math: Poor, Multilingual: Strong.
Run yours now
npx metrillm@latest bench
Requires Node 20+ and Ollama or LM Studio running.