Summary
Developers attempting native FP4 Mixture-of-Experts (MoE) operations on NVIDIA's new SM120 (Blackwell) GPUs found that CUTLASS grouped GEMM either produced garbage output or crashed. The problem was resolved by systematically patching FlashInfer 0.6.5 with SM120 capability checks and compiling with CUDA 13.0's `compute_120f` target. The fix enabled the first known correct native FP4 MoE run on Blackwell, reaching 39 tokens/second.
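The write-up does not include the patch itself, so as a rough illustration of what an SM120 capability check can look like, here is a minimal host-side sketch. The helper name `device_supports_native_fp4` is hypothetical, not FlashInfer's actual API; the only assumption it encodes is the standard CUDA convention that compute capability 12.0 reports as major 12, minor 0.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical sketch, not FlashInfer's actual patch: query the device's
// compute capability and only enable the native FP4 path on SM120.
bool device_supports_native_fp4(int device_id) {
  cudaDeviceProp prop{};
  if (cudaGetDeviceProperties(&prop, device_id) != cudaSuccess) {
    return false;  // treat query failure as "no native FP4"
  }
  int sm = prop.major * 10 + prop.minor;  // e.g. capability 12.0 -> 120
  return sm == 120;                       // SM120 (Blackwell)
}

int main() {
  bool ok = device_supports_native_fp4(0);
  std::printf("native FP4 MoE path: %s\n",
              ok ? "enabled" : "disabled (fallback kernel)");
  return 0;
}
```

On the compiler side, the `compute_120f` target mentioned in the summary would be selected with a family-specific `nvcc` flag, e.g. something along the lines of `-gencode arch=compute_120f,code=sm_120f`; the exact invocation depends on the CUDA 13.0 toolchain and build setup.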
Related Articles
- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
March 29, 2026
- [r/LocalLLaMA] karpathy / autoresearch
March 10, 2026
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
April 7, 2026
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
April 5, 2026