AI Dose

[r/ML] [P] Runtime GGUF tampering in llama.cpp: persistent output steering without server restart

Impact: 8/10

Summary

A new research PoC, "llm-inference-tampering," demonstrates a runtime integrity risk in local llama.cpp deployments that use the default memory-mapped (mmap) model loading. Because the server maps the GGUF file directly into memory, any process that can write to that file on disk can alter the model's weights, and therefore its generation behavior, while the server is running. The change takes effect without a restart or reload, enabling unauthorized and persistent output steering of a live model.
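The mechanism can be sketched in a few lines of Python. This is an illustration of shared-mapping coherence, not the PoC's actual code; the file name, sizes, and byte values are stand-ins:

```python
import mmap
import os
import tempfile

# Sketch: a process holding a read-only shared mapping of a file
# observes writes made to that file by another writer. This is the
# property that makes mmap-loaded GGUF weights tamperable at runtime.

# Stand-in "model file" with 16 zero bytes acting as weights.
fd, path = tempfile.mkstemp(suffix=".gguf")
os.write(fd, b"\x00" * 16)
os.close(fd)

# "Inference server": read-only shared mapping of the file,
# analogous to llama.cpp's default mmap-based model loading.
server_fd = os.open(path, os.O_RDONLY)
weights = mmap.mmap(server_fd, 16, access=mmap.ACCESS_READ)
before = weights[0]  # 0x00 at load time

# "Tamperer": any process with write access to the same file on disk.
with open(path, "r+b") as f:
    f.write(b"\xff")  # overwrite the first "weight" byte
    f.flush()
    os.fsync(f.fileno())

# The server's in-memory view now reflects the on-disk change,
# with no restart or reload of the mapping.
after = weights[0]
print(f"before=0x{before:02x} after=0x{after:02x}")

weights.close()
os.close(server_fd)
os.unlink(path)
```

The kernel keeps a shared mapping coherent with the underlying file, which is why the tampered bytes are visible to the running process immediately; mitigations would involve loading weights into private memory or verifying file integrity before and during serving.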


Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros), [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P].
