Summary
A user ran large language models, including Qwen3.5-35B and Nemotron-3-Super-120B, on a consumer-grade setup pairing an RTX 5060 Ti with a GTX 1080 Ti using `llama.cpp`. The 35B model reached 60 tokens/sec running entirely on the GPUs; the 120B model, which also required 64GB of system RAM, ran at 3 tokens/sec. The result shows that very large models are increasingly feasible on accessible consumer hardware, making local development and experimentation considerably more practical.
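As a rough sanity check on the memory figures above, the weight footprint of a quantized model can be estimated from its parameter count. The sketch below assumes roughly 4.5 bits per parameter (a common density for llama.cpp Q4-class quantizations); actual file sizes vary with the quant format and model architecture.

```python
# Back-of-the-envelope estimate of quantized model weight size.
# Assumption: ~4.5 bits/parameter, typical of llama.cpp Q4-class quants;
# real GGUF files differ somewhat by format and per-tensor overhead.

def approx_weight_gb(n_params_billions: float, bits_per_param: float = 4.5) -> float:
    """Approximate in-memory size of quantized weights, in GB."""
    return n_params_billions * 1e9 * bits_per_param / 8 / 1e9

print(f"35B  @ ~4.5 bpp: {approx_weight_gb(35):.1f} GB")   # -> 19.7 GB
print(f"120B @ ~4.5 bpp: {approx_weight_gb(120):.1f} GB")  # -> 67.5 GB
```

Under this assumption the 35B model (~20GB) fits across the two cards' combined VRAM (16GB on the 5060 Ti plus 11GB on the 1080 Ti), consistent with it running fully on GPU, while the 120B model (~67GB) must spill most of its weights into system RAM, consistent with the 64GB RAM requirement and the much lower 3 tokens/sec.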
Related Articles
- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
March 29, 2026
- [r/LocalLLaMA] karpathy / autoresearch
March 10, 2026
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
April 7, 2026
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
April 5, 2026