AI Dose

[r/LocalLLaMA] One year ago DeepSeek R1 was 25 times bigger than Gemma 4

Impact: 6/10

Summary

A Reddit post highlights how quickly AI models have evolved: DeepSeek R1, released a year ago, weighed in at 671B parameters, 25 times larger than the current Gemma 4 MoE at 26B. Despite its far smaller size, Gemma 4 is regarded as genuinely impressive, sparking discussion about the efficiency and capabilities of smaller models. The author expresses excitement about the future of local LLMs.
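The "25 times" figure follows directly from the two parameter counts quoted in the post; a quick sanity check (the counts are totals as cited, not active-parameter counts for the MoE):

```python
# Parameter counts as quoted in the Reddit post.
deepseek_r1_params = 671e9  # DeepSeek R1: 671B total parameters
gemma_4_params = 26e9       # Gemma 4 MoE: 26B total parameters

ratio = deepseek_r1_params / gemma_4_params
print(f"{ratio:.1f}x")  # ~25.8x, consistent with the post's "25 times" claim
```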

Continue Reading

Explore related coverage about community news and adjacent AI developments:

- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
