A Reddit post observes how quickly AI models have evolved: DeepSeek R1, released a year ago at 671B parameters, has roughly 26 times the parameter count of the current 26B-parameter Gemma 4 MoE. Despite being far smaller, Gemma 4 strikes the poster as genuinely impressive, sparking discussion of how efficient and capable small models have become. The author closes by expressing excitement for the future of local LLMs.
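As a quick check, the size ratio follows directly from the parameter counts quoted in the post:

\[
\frac{671\,\text{B (DeepSeek R1)}}{26\,\text{B (Gemma 4 MoE)}} \approx 25.8
\]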