
[r/LocalLLaMA] M5 Max 128GB with three 120B models

Impact: 6/10

Summary

A user on r/LocalLLaMA benchmarked three 120B-class language models (Nemotron-3 Super, GPT-OSS 120B, and Qwen3.5 122B) on an M5 Max system with 128GB of unified memory. In those tests, Nemotron-3 Super produced slightly better output quality, while GPT-OSS 120B ran roughly twice as fast, at approximately 77 tokens/second. The thread offers practical performance data for anyone running large quantized models locally.
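
For readers who want to reproduce this kind of measurement, here is a minimal sketch of a tokens/second check, assuming llama-cpp-python and a locally downloaded GGUF quantization; the model filename, prompt, and generation parameters below are hypothetical and not taken from the original thread.

```python
# Rough tokens/second benchmark for a local quantized model.
# Assumes llama-cpp-python is installed with Metal support and that a
# GGUF quant is available locally (the path below is hypothetical).
import time

from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-120b-Q4_K_M.gguf",  # hypothetical local quant
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
    n_ctx=4096,
    verbose=False,
)

prompt = "Explain the trade-offs of running a quantized 120B model locally."

start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

# The completion dict follows the OpenAI schema, including token counts.
generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Note that a single run like this mixes prompt processing and generation time; dedicated tools such as llama.cpp's llama-bench report the two separately, which matters when comparing headline tokens/second figures.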

Continue Reading

Explore related coverage about community news and adjacent AI developments:

- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]

