
[r/LocalLLaMA] llama.cpp on $500 MacBook Neo: Prompt: 7.8 t/s / Generation: 3.9 t/s on Qwen3.5 9B Q3_K_M

Impact: 6/10

Summary

A user compiled and ran `llama.cpp` on a new MacBook Neo (Apple A18 Pro chip, 8GB RAM) with the Qwen3.5 9B model quantized to Q3_K_M. Throughput was modest at 7.8 tokens/second for prompt processing and 3.9 tokens/second for generation, but the result shows that a roughly 9B-parameter model can run locally on entry-level Apple silicon despite the limited memory. It is another data point in the steady progress toward making LLMs usable on consumer hardware.
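What makes the run feasible at all is the quantization: at Q3_K_M (roughly 3.5-4 bits per weight), a ~9B-parameter model's weights come to around 4-4.5GB, leaving headroom in 8GB of RAM at the cost of the low speeds reported. For readers who want to see what such a setup looks like in practice, here is a minimal sketch using the llama-cpp-python bindings with Metal offload; the model filename and generation parameters are illustrative assumptions, not details from the original post.

```python
# Minimal sketch (assumed setup): running a Q3_K_M-quantized GGUF model via
# the llama-cpp-python bindings on Apple silicon. The model path below is a
# hypothetical filename, not one confirmed by the original post.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-9b-q3_k_m.gguf",  # hypothetical file name
    n_ctx=2048,        # keep the context modest to stay within 8GB of RAM
    n_gpu_layers=-1,   # offload all layers to Metal where supported
)

out = llm(
    "Explain quantization in one paragraph.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```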

Continue Reading

Explore related coverage about community news and adjacent AI developments:

- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
