
[r/LocalLLaMA] I'm running qwen3.6-35b-a3b with 8 bit quant and 64k context thru OpenCode on my mbp m5 max 128gb and it's as good as claude

Impact: 7/10

Summary

A Reddit user reports running the Qwen3.6-35B-A3B model locally with 8-bit quantization and a 64k-token context window, served through OpenCode on an M5 Max MacBook Pro with 128 GB of unified memory. They describe themselves as "VERY impressed" with the model's speed and its handling of complex research tasks, and claim performance comparable to Claude. The report is anecdotal, but it underscores how capable large language models have become on consumer-grade hardware.
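
For context on what a setup like this involves, the sketch below shows one common way to run a quantized Qwen checkpoint locally on Apple Silicon, using the open-source mlx-lm Python package. The model repository name is a placeholder assumption (the exact checkpoint the poster used is not specified), and OpenCode, the coding front end mentioned in the post, is not part of the example.

```python
# A minimal sketch of loading a quantized Qwen model locally on Apple
# Silicon with the mlx-lm package. The repository name below is an
# assumption; substitute whichever 8-bit conversion you actually use.
# The 64k context window is a property of the checkpoint, not a flag
# set here, and OpenCode itself is not shown.
from mlx_lm import load, generate

# Load the model and its tokenizer (hypothetical 8-bit repo name).
model, tokenizer = load("mlx-community/Qwen3-30B-A3B-8bit")

prompt = "Outline a research plan for evaluating local LLM coding agents."

# Apply the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion; max_tokens caps only the output length.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```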

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. See the Editorial Policy, Disclaimer, or Contact page to flag a correction or to learn how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] Zero-shot World Models Are Developmentally Efficient Learners [R], [HN] Do I Stop Learning Coding? DSA?.

