
[r/LocalLLaMA] [Architecture Help] Serving Embed + Rerank + Zero-Shot Classifier on 8GB VRAM. Fighting System RAM Kills and Latency.

Impact: 3/10

Summary

A developer is seeking architecture and MLOps advice for deploying a unified Knowledge Graph/RAG service, combining embedding, reranking, and zero-shot classification, on a laptop with limited resources (8GB VRAM, 16GB system RAM). The service, a FastAPI app running in a Docker container, is being killed by system RAM limits and suffering high latency, problems that worsened after migrating from WSL on Windows to native Linux and that illustrate the difficulty of local AI deployment under resource pressure.
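The post itself contains no code, but one common pattern for fitting several models (embedder, reranker, classifier) into a memory budget this tight is to lazy-load each model on first use and evict the least recently used one when the pool is full. The sketch below illustrates the idea with an LRU model pool; all names are hypothetical, and the stub loader lambdas stand in for real checkpoint loads (e.g. sentence-transformers or a cross-encoder), which would also need to free GPU memory on eviction:

```python
from collections import OrderedDict
from typing import Any, Callable


class ModelPool:
    """Keep at most `capacity` models resident; evict the least recently used.

    Loaders are registered up front but only invoked when a model is first
    requested, so peak memory is bounded by `capacity` models, not by the
    total number of registered models.
    """

    def __init__(self, capacity: int = 1):
        self.capacity = capacity
        self._loaders: dict[str, Callable[[], Any]] = {}
        self._resident: "OrderedDict[str, Any]" = OrderedDict()

    def register(self, name: str, loader: Callable[[], Any]) -> None:
        self._loaders[name] = loader

    def get(self, name: str) -> Any:
        if name in self._resident:
            self._resident.move_to_end(name)  # mark as most recently used
            return self._resident[name]
        while len(self._resident) >= self.capacity:
            # Drop the least recently used model; a real implementation
            # would also release its GPU/CPU memory here.
            self._resident.popitem(last=False)
        self._resident[name] = self._loaders[name]()
        return self._resident[name]


# Hypothetical usage: two models may be resident at once, the third
# request evicts whichever was used least recently.
pool = ModelPool(capacity=2)
pool.register("embed", lambda: "embed-model")
pool.register("rerank", lambda: "rerank-model")
pool.register("classify", lambda: "classify-model")
```

In a FastAPI handler, each endpoint would call `pool.get("embed")` (and so on) instead of holding all three models as module-level globals, trading first-request latency for a bounded resident set.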


Explore related coverage about community news and adjacent AI developments:
- [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
- [r/LocalLLaMA] karpathy / autoresearch
- [r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
- [r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
