AI Dose

[r/LocalLLaMA] Qwen 3.5 do I go dense or go bigger MoE?

Impact: 2/10

Summary

A user on r/LocalLLaMA describes running Qwen 3.5 models (27b dense and 35b-a3b MoE) on a workstation with 40GB of VRAM: the 27b model is nearly good enough for daily coding, but it runs slowly. They are weighing whether to step up to a larger model such as Qwen 122b or instead focus on optimizing the speed of their current setup.
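The trade-off in the post comes down to whether a given model's weights fit in 40GB of VRAM at a usable quantization. A rough sketch of that arithmetic, assuming weight memory is roughly parameters times bits-per-weight, with a hypothetical ~20% overhead allowance for KV cache and activations (a rule of thumb, not a measured figure):

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 0.2) -> float:
    """Rough VRAM estimate (GB) for holding a model's weights in memory.

    params_billion: parameter count in billions (e.g. 27 for a 27b model)
    bits_per_weight: quantization level (e.g. 4 for Q4, 8 for Q8)
    overhead: assumed fractional headroom for KV cache and activations
    """
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 bits-per-byte / 1e9 bytes-per-GB
    return weights_gb * (1 + overhead)

# Compare the models mentioned in the post against a 40GB budget.
for name, params in [("27b dense", 27), ("35b-a3b MoE", 35), ("122b", 122)]:
    for bits in (4, 8):
        est = approx_vram_gb(params, bits)
        fits = "fits" if est <= 40 else "does not fit"
        print(f"{name} @ {bits}-bit: ~{est:.1f} GB ({fits} in 40GB)")
```

Under these assumptions, the 27b model fits comfortably even at 8-bit, while a 122b model exceeds 40GB even at 4-bit, which is consistent with the poster's dilemma; note that for an MoE like 35b-a3b all expert weights must still be resident, even though only ~3b parameters are active per token.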

Continue Reading

Explore related coverage of community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros), [r/ML] Building behavioural response models of public figures using brain scan data (predict their next move using psychological modelling) [P].
