
[r/LocalLLaMA] IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse

Impact: 8/10

Summary

IndexCache is a new patch for SGLang and vLLM that accelerates sparse-attention models such as DeepSeek-V3.2 and GLM-5. It reuses attention indices across layers, eliminating up to 75% of indexer computations. This delivers speedups of up to 1.82x for prefill and 1.48x for decode, with negligible quality degradation and no additional GPU memory usage.
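The cross-layer reuse idea can be illustrated with a toy sketch. The post does not specify IndexCache's internals, so everything below is an assumption: a single query attends over cached keys/values, the "indexer" computes top-k key positions, and indices are recomputed only every few layers instead of at every layer. Sharing one q/k/v set across layers is a simplification of a real multi-layer model.

```python
import numpy as np

def select_topk_indices(q, k, topk):
    # "Indexer": score all keys against the query, keep the top-k positions.
    scores = k @ q
    return np.argsort(scores)[-topk:]

def sparse_attention_with_index_cache(q, k, v, topk, num_layers, reuse_every=4):
    # Hypothetical sketch of cross-layer index reuse: run the indexer only
    # every `reuse_every` layers and reuse the cached indices in between.
    cached_idx = None
    indexer_calls = 0
    out = None
    for layer in range(num_layers):
        if layer % reuse_every == 0:
            cached_idx = select_topk_indices(q, k, topk)
            indexer_calls += 1
        idx = cached_idx
        # Sparse attention: softmax only over the selected key/value rows.
        scores = k[idx] @ q
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out = weights @ v[idx]
    return out, indexer_calls
```

With `reuse_every=4`, the indexer runs in only 1 of every 4 layers, which is where a "75% of indexer computations eliminated" figure would come from under this assumed scheme.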

