AI Dose

[r/LocalLLaMA] We compressed 6 LLMs and found something surprising: they don't degrade the same way

Impact: 8/10

Summary

Researchers compressed the MLP layers of six LLMs without quantization and found that the models degrade in markedly different ways: some are significantly more compressible than others. For example, Gemma 2B retained 92% accuracy at 14% compression, while Llama 3.1 8B dropped to 85% at the same compression level. Surprisingly, the initial perplexity improvements seen after compression did not correlate with downstream benchmark performance.
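
The post does not specify which compression technique was applied to the MLP layers, so the following is only a minimal illustrative sketch of one common non-quantization approach: replacing a single MLP projection with a truncated-SVD low-rank factorization. The helper name `compress_linear_lowrank`, the `keep_ratio` parameter, and the layer dimensions are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' method): low-rank SVD compression of one
# MLP projection weight, a plausible way to shrink MLP layers without
# quantization. Names and parameters here are illustrative assumptions.
import torch
import torch.nn as nn


def compress_linear_lowrank(layer: nn.Linear, keep_ratio: float = 0.14) -> nn.Sequential:
    """Replace an nn.Linear with two smaller linears via truncated SVD.

    keep_ratio is the approximate fraction of parameters retained (loosely
    analogous to the "14% compression" level mentioned in the post, though
    the post does not define its ratio precisely).
    """
    out_f, in_f = layer.weight.shape
    # Pick rank r so that r * (in_f + out_f) is roughly keep_ratio * in_f * out_f.
    rank = max(1, int(keep_ratio * in_f * out_f / (in_f + out_f)))

    # Truncated SVD of the weight matrix: W ~= U_r diag(S_r) V_r^T.
    U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # (out_f, rank), singular values folded in
    Vh_r = Vh[:rank, :]            # (rank, in_f)

    # Two chained linears reproduce the low-rank approximation of W.
    down = nn.Linear(in_f, rank, bias=False)
    up = nn.Linear(rank, out_f, bias=layer.bias is not None)
    down.weight.data.copy_(Vh_r)
    up.weight.data.copy_(U_r)
    if layer.bias is not None:
        up.bias.data.copy_(layer.bias.data)
    return nn.Sequential(down, up)


if __name__ == "__main__":
    torch.manual_seed(0)
    original = nn.Linear(4096, 11008)  # typical LLM MLP up-projection shape
    compressed = compress_linear_lowrank(original, keep_ratio=0.14)
    x = torch.randn(2, 4096)
    err = (original(x) - compressed(x)).norm() / original(x).norm()
    print(f"relative output error: {err:.3f}")
```

In a sketch like this, the degradation curves the post describes would come from sweeping `keep_ratio` per model and re-running perplexity and benchmark evaluations after each compression step.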

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros), [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P].
