
[r/LocalLLaMA] I spent 8+ hours benchmarking every MoE backend for Qwen3.5-397B NVFP4 on 4x RTX PRO 6000 (SM120). Here's what I found.

Impact: 8/10

Summary

A user benchmarked MoE backends for Qwen3.5-397B NVFP4 on 4x RTX PRO 6000 GPUs and achieved a sustained decode rate of 50.5 tok/s, well below claims of 130+ tok/s for this setup. The poster attributes the gap to broken NVIDIA CUTLASS kernels on workstation-class Blackwell GPUs (SM120), exposing a software-hardware compatibility issue that significantly degrades real-world performance for users of this hardware.
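
For context, decode throughput figures like the 50.5 tok/s above are typically measured as completion tokens divided by wall-clock time against a running inference server. Below is a minimal sketch of such a measurement, assuming a local OpenAI-compatible endpoint (such as one served by vLLM or SGLang); the URL, model name, and prompt are illustrative placeholders, not values from the original post.

```python
# Minimal sketch of a sustained-decode throughput measurement.
# Assumes a local OpenAI-compatible server is already running;
# BASE_URL and MODEL are hypothetical placeholders.
import time
import requests

BASE_URL = "http://localhost:8000/v1"   # hypothetical local endpoint
MODEL = "Qwen3.5-397B-NVFP4"            # placeholder model identifier

def decode_tok_per_s(prompt: str, max_tokens: int = 512) -> float:
    """Request a long completion and return completion tokens / wall time."""
    start = time.perf_counter()
    resp = requests.post(
        f"{BASE_URL}/completions",
        json={
            "model": MODEL,
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.0,
        },
        timeout=600,
    )
    elapsed = time.perf_counter() - start
    resp.raise_for_status()
    completion_tokens = resp.json()["usage"]["completion_tokens"]
    # Wall-clock rate: includes prefill time, so it slightly understates
    # pure decode speed unless the prompt is short and the output long.
    return completion_tokens / elapsed

if __name__ == "__main__":
    rate = decode_tok_per_s("Explain mixture-of-experts routing in detail.")
    print(f"sustained throughput: {rate:.1f} tok/s")
```

Note that single-request wall-clock numbers like this conflate prefill and decode; careful benchmarks of the kind described in the post usually time the decode phase separately and average over repeated runs.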

