
[r/ML] We open-sourced Chaperone-Thinking-LQ-1.0 — a 4-bit GPTQ + QLoRA fine-tuned DeepSeek-R1-32B that hits 84% on MedQA in ~20GB

Impact: 7/10

Summary

Chaperone-Thinking-LQ-1.0 is a newly released open-source reasoning model: DeepSeek-R1-32B quantized to 4-bit with GPTQ and fine-tuned with QLoRA. It scores 84% on MedQA while shrinking the checkpoint from ~60GB to ~20GB. The compression follows directly from the precision change: 32B parameters occupy roughly 64GB at 16 bits per weight but only about 16GB at 4 bits, with the remainder going to embeddings, activations, and quantization metadata. Development combined quantization-aware training with fine-tuning on medical and scientific datasets, putting a competitive specialized reasoner within reach of a single high-memory consumer GPU.
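For readers who want a sense of the recipe, the sketch below shows the standard QLoRA setup with Hugging Face transformers and peft: load the base model in 4-bit, then attach trainable low-rank adapters. Two caveats: the released checkpoint reportedly uses GPTQ 4-bit quantization, while this sketch uses the bitsandbytes NF4 path QLoRA was originally described with, and both the base-model repo ID and the LoRA hyperparameters below are assumptions, not values from the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint -- the post only says "DeepSeek-R1-32B".
BASE_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

# 4-bit NF4 quantization (the classic QLoRA recipe; the release itself uses GPTQ).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    quantization_config=bnb_config,
    device_map="auto",  # shard the ~20GB of 4-bit weights across available GPUs
)

# Attach low-rank adapters; these hyperparameters are illustrative guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```

From here a standard supervised fine-tuning loop over the medical datasets would update only the adapter weights, typically well under 1% of the 32B parameters, which is what keeps the training footprint close to the quantized model's ~20GB.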



