AI Dose

[r/LocalLLaMA] Running Qwen3.5-35B-A3B and Nemotron-3-Super-120B-A12B on a 5060ti and 1080ti with llama.cpp (Fully on GPU for Qwen; 64GB RAM needed for Nemotron)

Impact: 8/10

Summary

A user ran two large language models, Qwen3.5-35B-A3B and Nemotron-3-Super-120B-A12B, on a consumer-grade setup pairing an RTX 5060 Ti with a GTX 1080 Ti using `llama.cpp`. The 35B model fit entirely in VRAM and reached 60 tokens/sec, while the 120B model, too large for the GPUs alone, needed 64GB of system RAM and ran at 3 tokens/sec. The result shows how feasible it is becoming to run very large models on accessible hardware, making local development and experimentation far more practical.
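For readers who want to try a similar setup, the sketch below shows one way to configure GPU offloading through the llama-cpp-python bindings. It is a minimal illustration under stated assumptions, not the poster's exact configuration: the GGUF filenames and the layer split for the 120B model are placeholders to be tuned to your own hardware.

```python
# Minimal sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python, built with CUDA support).
# Model filenames are placeholders, not the exact files from the post.
from llama_cpp import Llama

# Fully offload the 35B model: n_gpu_layers=-1 asks llama.cpp to place
# every layer in VRAM, splitting the weights across available GPUs.
qwen = Llama(
    model_path="Qwen3.5-35B-A3B-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to the GPUs
    n_ctx=8192,
)

out = qwen("Explain GPU layer offloading in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])

# The 120B model does not fit in 16GB + 11GB of VRAM, so only some layers
# are offloaded and the remainder stays in system RAM (hence the 64GB
# requirement and the much lower tokens/sec).
nemotron = Llama(
    model_path="Nemotron-3-Super-120B-A12B-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=20,  # illustrative split; raise until VRAM is full
    n_ctx=4096,
)
```

The key design choice is the `n_gpu_layers` setting: -1 for models that fit entirely in VRAM, or a partial count that leaves the rest of the weights in system RAM when they do not.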

Continue Reading

Explore related coverage about community news and adjacent AI developments:
- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
