AI Dose

[r/LocalLLaMA] My company just handed me a 2x H200 (282GB VRAM) rig. Help me pick the "Intelligence" ceiling.

Impact: 2/10

Summary

A user on r/LocalLLaMA announced that their company acquired a server equipped with 2x Nvidia H200 GPUs, providing a combined 282GB of VRAM. Tasked with testing LLMs on this new setup, they are asking the community to recommend the most capable models and quantizations. Rather than optimizing for raw speed, the user wants to spend the large VRAM budget on intelligence, a significant step up from their prior local LLM hardware.
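Picking a model and quantization for a fixed VRAM budget usually starts with a back-of-envelope estimate: weight memory is roughly parameter count times bits-per-weight, plus headroom for the KV cache and activations. The sketch below is a rough illustration under assumed numbers (the ~20% overhead factor and the example model sizes are placeholders, not measurements from the post); real serving needs extra KV-cache room that grows with context length and batch size.

```python
def vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Rough weights-only VRAM estimate in GB for a model with
    `params_b` billion parameters, with a flat ~20% overhead factor
    (assumed) for KV cache and activations."""
    return params_b * bits_per_weight / 8 * overhead

BUDGET_GB = 282  # 2x H200 at 141GB each

candidates = [
    ("70B  @ 16-bit", 70, 16),
    ("123B @  8-bit", 123, 8),
    ("405B @  4-bit", 405, 4),
    ("405B @  8-bit", 405, 8),
]

for name, params, bits in candidates:
    need = vram_gb(params, bits)
    verdict = "fits" if need <= BUDGET_GB else "too big"
    print(f"{name}: ~{need:.0f} GB -> {verdict}")
```

Under these assumptions, a 405B model at 4-bit squeezes into 282GB while the same model at 8-bit does not, which is why quantization choice dominates the "intelligence ceiling" question at this scale.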

Continue Reading

Explore related coverage about community news and adjacent AI developments:

[r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT
[r/LocalLLaMA] karpathy / autoresearch
[r/ML] [R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
[r/ML] Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]
