AI Dose

[r/LocalLLaMA] Qwen 3.5 27B Macbook M4 Pro 48GB

Impact: 4/10

Summary

A user on r/LocalLLaMA is asking for feedback from others running the Qwen 3.5 27B large language model on a MacBook Pro (M4 Pro, 48 GB RAM). They want performance numbers, recommended quantizations (e.g., 4-bit, 6-bit, 7-bit, MXFP8), and whether the model runs smoothly with enough headroom for cache and context. The user notes that the 27B version is rumored to outperform the 35B-A3B model.
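Whether 48 GB leaves enough headroom for cache and context can be sanity-checked with a back-of-envelope weight-size estimate. The sketch below is a rough approximation only: it counts model weights at a given bits-per-weight and ignores KV cache, activations, and runtime overhead, which all eat further into the remaining memory.

```python
# Rough memory estimate for a 27B-parameter model at common
# quantization bit widths, versus 48 GB of unified memory.
# Ignores KV cache, activations, and runtime overhead.

PARAMS = 27e9   # 27B parameters
RAM_GB = 48     # M4 Pro unified memory from the post

def weight_gb(bits: float) -> float:
    """Approximate weight footprint in GB for a given bits-per-weight."""
    return PARAMS * bits / 8 / 1e9

for bits in (4, 6, 8):
    gb = weight_gb(bits)
    print(f"{bits}-bit: ~{gb:.1f} GB weights "
          f"(~{RAM_GB - gb:.1f} GB left for cache/context)")
```

By this estimate a 4-bit quantization needs roughly 13.5 GB for weights, leaving substantial room for context, while an 8-bit quantization at about 27 GB is much tighter on a 48 GB machine once cache and system memory are accounted for.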

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros), [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P].
