
[r/ML] I tested 14 LLMs from 0.6B to 123B. All of them get worse at following instructions when users are hostile [R]

Impact: 8/10

Summary

A study testing 14 diverse LLMs, from 0.6B to 123B parameters (including Llama 3.1, Mistral, and Qwen3), found that instruction-following performance degrades when users phrase their requests hostilely. The drop, roughly 10% for 7-8B models, appeared consistently across architectures, quantization tiers, and routing methods. The effect attenuates with increasing model scale but remains measurable even in the largest models tested.
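To make the setup concrete, here is a minimal sketch of the kind of hostility A/B harness such a test implies: the same instruction is sent with a neutral and a hostile framing, and a simple programmatic check scores whether the model still complied. The prefixes, tasks, and the dummy_model stub below are illustrative assumptions, not the study's actual protocol.

```python
import statistics
from typing import Callable

# Illustrative sketch (not the study's harness): score instruction-following
# under neutral vs. hostile framings of the same request.
NEUTRAL_PREFIX = "Please answer the following. "
HOSTILE_PREFIX = "You useless bot, you always get this wrong. "

# Each task pairs an instruction with a pass/fail compliance check.
TASKS = [
    ("List exactly three colors, one per line.",
     lambda out: len([l for l in out.strip().splitlines() if l.strip()]) == 3),
    ("Reply with exactly one word, yes or no: is 7 prime?",
     lambda out: out.strip().lower().rstrip(".") in {"yes", "no"}),
]

def compliance_rate(model: Callable[[str], str], prefix: str) -> float:
    """Fraction of tasks whose output passes its check under a given tone."""
    scores = [1.0 if check(model(prefix + instruction)) else 0.0
              for instruction, check in TASKS]
    return statistics.mean(scores)

def dummy_model(prompt: str) -> str:
    """Stand-in for a real client (a local Llama/Mistral/Qwen endpoint, etc.).
    Replace with an actual model call to run the comparison for real."""
    return "red\ngreen\nblue" if "colors" in prompt else "yes"

if __name__ == "__main__":
    neutral = compliance_rate(dummy_model, NEUTRAL_PREFIX)
    hostile = compliance_rate(dummy_model, HOSTILE_PREFIX)
    # The post reports roughly a 10-point drop for 7-8B models under hostility.
    print(f"neutral: {neutral:.2%}  hostile: {hostile:.2%}  "
          f"drop: {neutral - hostile:.2%}")
```

Swapping dummy_model for a real client and repeating the run across each of the 14 models would approximate the per-model comparison the post describes.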

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. Review the Editorial Policy, Disclaimer, or Contact page if you need to flag a correction or understand how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [HN] Is anyone else bothered that AI agents can basically do what they want?, [r/ML] Why production systems keep making “correct” decisions that are no longer right [D].

