[r/LocalLLaMA] 55 → 282 tok/s: How I got Qwen3.5-397B running at speed on 4x RTX PRO 6000 Blackwell

Impact: 8/10

Summary

A user optimized Qwen3.5-397B, a mixture-of-experts (MoE) model, on Blackwell SM120 GPUs (four RTX PRO 6000 cards), raising inference speed from 55 tok/s to 282 tok/s, a roughly 5x gain. The speedup came from a custom CUTLASS kernel that resolves MoE GEMM tile issues specific to this architecture. The kernel has been submitted as a PR to FlashInfer, and a pre-built Docker image is available for other users.
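
For context on what that kernel accelerates: MoE inference spends most of its time in a "grouped GEMM", a single batched launch that runs one independent GEMM per expert, each with its own (ragged) number of routed tokens. The sketch below is a minimal plain-CUDA illustration of that layout, not the author's kernel or FlashInfer's API; the names ExpertGemm and moe_grouped_gemm are illustrative only.

// Minimal sketch of an MoE grouped GEMM: one kernel launch, one GEMM problem
// per expert, ragged per-expert shapes. A naive inner loop stands in for the
// CUTLASS tensor-core tiles the real kernel uses. Compile with: nvcc moe.cu
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>
#include <vector>

struct ExpertGemm {        // one GEMM problem per expert
    const float* A;        // [m x k] activations routed to this expert
    const float* B;        // [k x n] expert weight matrix
    float*       C;        // [m x n] output
    int m, n, k;
};

// grid.z indexes the expert; grid.x/y tile that expert's own M x N output.
__global__ void moe_grouped_gemm(const ExpertGemm* problems) {
    const ExpertGemm p = problems[blockIdx.z];
    const int row = blockIdx.y * blockDim.y + threadIdx.y;
    const int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= p.m || col >= p.n) return;  // experts have ragged shapes

    float acc = 0.0f;
    for (int i = 0; i < p.k; ++i)          // naive dot product; a real kernel
        acc += p.A[row * p.k + i] * p.B[i * p.n + col];  // uses MMA tiles here
    p.C[row * p.n + col] = acc;
}

int main() {
    // Two toy "experts" with different routed-token counts m, shared k and n.
    const int k = 8, n = 4;
    const int ms[2] = {3, 5};
    std::vector<ExpertGemm> probs(2);
    int max_m = 0;
    for (int e = 0; e < 2; ++e) {
        const int m = ms[e];
        max_m = std::max(max_m, m);
        float *A, *B, *C;
        cudaMallocManaged(&A, m * k * sizeof(float));
        cudaMallocManaged(&B, k * n * sizeof(float));
        cudaMallocManaged(&C, m * n * sizeof(float));
        for (int i = 0; i < m * k; ++i) A[i] = 1.0f;  // A of ones...
        for (int i = 0; i < k * n; ++i) B[i] = 2.0f;  // ...times B of twos
        probs[e] = {A, B, C, m, n, k};
    }
    ExpertGemm* d_probs;
    cudaMallocManaged(&d_probs, 2 * sizeof(ExpertGemm));
    d_probs[0] = probs[0];
    d_probs[1] = probs[1];

    dim3 block(16, 16);
    dim3 grid((n + 15) / 16, (max_m + 15) / 16, 2);  // z = expert index
    moe_grouped_gemm<<<grid, block>>>(d_probs);
    cudaDeviceSynchronize();

    // Every output element should be k * 1 * 2 = 16.
    printf("C0[0]=%.0f C1[0]=%.0f (expect 16)\n", probs[0].C[0], probs[1].C[0]);
    return 0;
}

Per the post, the gain came from replacing poorly matched GEMM tile configurations with CUTLASS tiles chosen for SM120; the grouped-launch structure above stays the same, only the per-tile math changes.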

Continue Reading

Explore related coverage about community news and adjacent AI developments:
[r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
[r/LocalLLaMA] karpathy / autoresearch
[r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
[r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]

