
[r/LocalLLaMA] Nemotron-3-Super-120B-A12B NVFP4 inference benchmark on one RTX Pro 6000 Blackwell


Summary

A redditor benchmarked the Nemotron-3-Super-120B-A12B model in NVFP4 quantization, running inference with vLLM on a single RTX Pro 6000 Blackwell GPU. The tests spanned context lengths from 1K to 512K tokens at varying numbers of concurrent requests, measuring the model's performance under sustained load. The results highlight that very large language models can now run efficiently on a single high-end GPU.
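The post itself contains the measured numbers; as background, serving benchmarks of this kind typically reduce per-request timings to a few standard metrics (aggregate output tokens per second, time to first token, time per output token). A minimal, hypothetical sketch of that reduction, with made-up timing values and no connection to the actual benchmark data:

```python
from dataclasses import dataclass

@dataclass
class RequestTiming:
    """Raw timing for one request in a concurrent batch (hypothetical values)."""
    prompt_tokens: int
    output_tokens: int
    ttft_s: float   # time to first token, seconds
    total_s: float  # wall-clock time for the full request, seconds

def aggregate(timings):
    """Reduce per-request timings to common serving-benchmark metrics."""
    n = len(timings)
    total_out = sum(t.output_tokens for t in timings)
    # Requests run concurrently, so throughput divides by the longest request.
    wall = max(t.total_s for t in timings)
    return {
        "requests": n,
        "output_tok_per_s": total_out / wall,
        "mean_ttft_s": sum(t.ttft_s for t in timings) / n,
        # Time per output token, excluding the first token.
        "mean_tpot_s": sum((t.total_s - t.ttft_s) / max(t.output_tokens - 1, 1)
                           for t in timings) / n,
    }

# Two hypothetical concurrent requests at 1K-token context.
stats = aggregate([
    RequestTiming(1024, 256, 0.5, 10.5),
    RequestTiming(1024, 256, 0.6, 10.5),
])
print(stats)
```

This is only an illustration of the metric definitions; the actual benchmark used vLLM's own serving harness against the live model.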

