
[r/LocalLLaMA] My company just handed me a 2x H200 (282GB VRAM) rig. Help me pick the "Intelligence" ceiling.

Impact: 2/10

Summary

A user on r/LocalLLaMA announced that their company acquired a server equipped with 2x Nvidia H200 GPUs (141GB each, 282GB of VRAM in total). Tasked with evaluating LLMs on the new setup, they are asking the community which models and quantizations to run. Rather than chasing raw throughput, the user wants to spend the VRAM budget on the most capable ("intelligent") model it can hold, a significant step up from their previous local LLM hardware.
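For a rough sense of what a 282GB budget can hold, here is a minimal back-of-the-envelope sketch in Python. The quantization factors and example model sizes are illustrative assumptions, not figures from the thread, and real deployments also need headroom for the KV cache, activations, and runtime overhead beyond the weights themselves.

# Rough VRAM estimate for model weights at a given quantization.
# Illustrative sketch only: bytes-per-parameter values are approximate,
# and the real footprint adds KV cache, activations, and runtime overhead.

BYTES_PER_PARAM = {
    "fp16": 2.0,  # 16-bit weights
    "q8": 1.0,    # ~8-bit quantization
    "q4": 0.5,    # ~4-bit quantization
}

def weight_vram_gb(params_billions: float, quant: str) -> float:
    """Approximate GB (1e9 bytes) needed just to hold the weights."""
    return params_billions * 1e9 * BYTES_PER_PARAM[quant] / 1e9

if __name__ == "__main__":
    budget_gb = 282  # 2x H200, 141 GB each
    # Hypothetical model sizes chosen for illustration.
    for params, quant in [(70, "fp16"), (123, "q8"), (405, "q4")]:
        need = weight_vram_gb(params, quant)
        fits = "fits" if need < budget_gb else "does not fit"
        print(f"{params}B @ {quant}: ~{need:.0f} GB -> {fits} in {budget_gb} GB")

Under these assumptions, a 70B model at fp16 (~140GB) or a 405B model at 4-bit (~203GB) would fit in the weight budget, which is why the poster can prioritize intelligence over speed.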


Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [HN] Is anyone else bothered that AI agents can basically do what they want?, [r/ML] Why production systems keep making “correct” decisions that are no longer right [D].

