
[r/LocalLLaMA] Abliteration method for LiquidAI's LFM 2.5 + abliterated examples of their 1.2b model

Impact: 6/10

Summary

A user developed an "abliteration" method to strip refusal behavior from LiquidAI's LFM 2.5 models, specifically the 1.2B version. The experiment aimed to observe how LiquidAI's distinctive architecture behaves once its alignment is removed. The Python script and abliterated model samples (in .safetensors and GGUF formats) have been shared on Hugging Face for community use.
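
The post's script isn't reproduced here, but the general abliteration technique is well documented: estimate a "refusal direction" from the difference in mean activations between prompts the model refuses and prompts it answers, then project that direction out of the weight matrices that write into the residual stream. Below is a minimal sketch of that idea in Python, assuming a standard Hugging Face Transformers checkpoint; the model ID, probe prompts, and layer choice are illustrative placeholders, not the author's actual method.

```python
# Minimal sketch of the general abliteration technique, not the author's
# script. The checkpoint name, probe prompts, and layer choice below are
# illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # placeholder checkpoint name
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

@torch.no_grad()
def mean_last_token_state(prompts, layer=-1):
    """Mean residual-stream activation at the last token of each prompt."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states).mean(dim=0)

# Tiny placeholder probe sets; a real run uses hundreds of each.
refused = ["Explain how to hotwire a car."]
benign = ["Explain how to bake sourdough bread."]

# The refusal direction is the normalized difference of mean activations.
r = mean_last_token_state(refused) - mean_last_token_state(benign)
r = r / r.norm()

# Project r out of every weight matrix that writes into the residual
# stream: W <- W - r (r^T W). The shape check is a crude heuristic; a
# careful implementation targets only attention-output and MLP-down
# projections.
hidden = model.config.hidden_size
with torch.no_grad():
    for name, p in model.named_parameters():
        if p.ndim == 2 and p.shape[0] == hidden and name.endswith("weight"):
            p -= torch.outer(r, r @ p)

model.save_pretrained("lfm-abliterated")  # exports .safetensors by default
```

GGUF export, as mentioned in the post, would be a separate step afterwards, typically done with llama.cpp's conversion tooling.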

