AI Dose

[r/LocalLLaMA] MLX is not faster. I benchmarked MLX vs llama.cpp on an M1 Max across four real workloads, and effective tokens/s is the sticking point. What am I missing? Help me improve the benchmarks and with an M2 through M5 comparison.

Impact: 6/10

Summary

A user benchmarked MLX against llama.cpp on an M1 Max, challenging the common belief that MLX is significantly faster for local LLM inference. Contrary to initial observations in LM Studio, real-world tasks such as document classification ran faster under GGUF/llama.cpp, while multi-turn conversations showed little difference between the two. The user is asking the community to help explain the discrepancy, improve the benchmark methodology, and contribute comparable results from M2 through M5 machines.
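
For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of an end-to-end tokens/s measurement. It assumes the `mlx-lm` and `llama-cpp-python` packages on Apple Silicon; the model ID, GGUF path, and prompt are illustrative placeholders, not details from the original post.

```python
# Minimal effective-tokens/s comparison on Apple Silicon.
# Assumes: pip install mlx-lm llama-cpp-python
# Model IDs/paths and the prompt are placeholders, not the poster's setup.
import time

PROMPT = ("Classify this support ticket as billing, bug, or feature request: "
          "'My invoice shows a duplicate charge.'")
MAX_TOKENS = 256

# --- MLX ---
from mlx_lm import load, generate

mlx_model, mlx_tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
start = time.perf_counter()
mlx_text = generate(mlx_model, mlx_tokenizer, prompt=PROMPT, max_tokens=MAX_TOKENS)
mlx_secs = time.perf_counter() - start
mlx_tokens = len(mlx_tokenizer.encode(mlx_text))  # generated tokens only
print(f"MLX: {mlx_tokens / mlx_secs:.1f} effective tok/s")

# --- llama.cpp (GGUF) via llama-cpp-python ---
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-instruct-q4_k_m.gguf",
            n_gpu_layers=-1, verbose=False)  # -1 offloads all layers to Metal
start = time.perf_counter()
out = llm(PROMPT, max_tokens=MAX_TOKENS)
gguf_secs = time.perf_counter() - start
gguf_tokens = out["usage"]["completion_tokens"]
print(f"llama.cpp: {gguf_tokens / gguf_secs:.1f} effective tok/s")
```

Timing the entire call, rather than reading each runtime's reported generation speed, is the point: it folds prompt processing into the denominator, which is what makes "effective tokens/s" diverge from the headline numbers shown in tools like LM Studio.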

Continue Reading

Explore related coverage about community news and adjacent AI developments:

[r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
[r/LocalLLaMA] karpathy / autoresearch
[r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
[r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
