
[r/LocalLLaMA] Running a 72B model across two machines with llama.cpp RPC — one of them I found at the dump

Impact: 7/10

Summary

A user ran a 72B language model locally by distributing it across two machines with `llama.cpp`'s RPC backend. The setup pools VRAM over the network, treating a second machine's GPU as extra memory and getting past the 24GB limit of the primary RTX 3090. Notably, one of the machines was a repurposed "dump find," making this a strikingly cost-effective way to run larger LLMs.
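The post itself doesn't include commands, but a minimal sketch of the usual `llama.cpp` RPC workflow looks like this; the IP address, port, and model filename below are illustrative placeholders, not details from the post:

```sh
# On the secondary machine (the dump find): build llama.cpp with the
# RPC backend enabled, then expose its GPU to the network.
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release
# Bind to all interfaces; 50052 is the default RPC port.
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the primary machine (the RTX 3090 box): point llama-cli at the
# remote worker. 192.168.1.42 and the model file are placeholders.
./build/bin/llama-cli -m qwen2-72b-instruct-q4_k_m.gguf \
    --rpc 192.168.1.42:50052 -ngl 99 \
    -p "Hello from two machines"
```

With `--rpc`, llama.cpp splits offloaded layers between the local GPU and each listed RPC worker, so `-ngl 99` spreads the model across both cards instead of overflowing the local 24GB.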

Continue Reading

Explore related coverage about community news and adjacent AI developments:

- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]

