
[r/LocalLLaMA] How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified.

Impact: 9/10

Summary

A user found that duplicating a specific block of seven middle layers in Qwen2-72B, without changing any weight values, substantially improved the model's benchmark scores and put it at the top of the Open LLM Leaderboard. The trick worked only when applied to "circuit-sized blocks" of layers, which suggests that pretraining carves out discrete functional units within LLMs. The finding has had a lasting impact: top models on the leaderboard in 2026 are still descendants of this method.
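To make the mechanics concrete, here is a minimal sketch of that kind of layer duplication using the Hugging Face transformers API. The post does not name the tooling or the exact layer indices, so START and END below are illustrative placeholders, not the original block; the sketch only assumes a standard Qwen2-style decoder stack.

```python
# Hedged sketch: re-running a block of middle decoder layers without
# touching any weight values. START/END are hypothetical indices, not
# the seven-layer block from the post.

import copy
import torch
from torch import nn
from transformers import AutoModelForCausalLM

MODEL_ID = "Qwen/Qwen2-72B"  # model named in the post
START, END = 40, 47          # placeholder 7-layer block

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

layers = model.model.layers  # nn.ModuleList of decoder blocks

# Copy the chosen block and splice it in right after the original,
# so the forward pass traverses those layers twice.
duplicate = [copy.deepcopy(layers[i]) for i in range(START, END)]
spliced = list(layers[:END]) + duplicate + list(layers[END:])

# Keep per-layer bookkeeping (KV-cache indices) consistent.
for i, layer in enumerate(spliced):
    layer.self_attn.layer_idx = i

model.model.layers = nn.ModuleList(spliced)
model.config.num_hidden_layers = len(spliced)
```

In practice the community usually builds this kind of layer-duplication "frankenmerge" with mergekit's passthrough method rather than hand-splicing modules; the post does not say which route the author took.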

Continue Reading

Explore related coverage of community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros), [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P].

