
[r/LocalLLaMA] Qwen 35B trying to recreate scenes from photos in 3D!

Impact: 5/10

Summary

A user experimented with Qwen 35B, running locally via llama.cpp, to generate HTML 3D scenes from input photos. While the results are imperfect and purely for fun, they demonstrate the model's surprising capability to recreate visual scenes in a navigable 3D format. This early-stage experiment hints at future potential for image-to-3D generation from relatively small, local models.
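The post does not include the user's actual setup, but the described workflow can be sketched as a request to a local llama.cpp server's OpenAI-compatible chat endpoint. Everything here is illustrative: the model name, prompt wording, endpoint, and helper function are assumptions, not details from the original thread, and a multimodal Qwen build is assumed to be loaded.

```python
import base64
import json

def build_scene_request(image_bytes: bytes, model: str = "qwen-35b") -> dict:
    """Assemble a chat-completions payload (hypothetical sketch) asking the
    model to emit a self-contained HTML page recreating the photo in 3D."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # illustrative name; depends on the GGUF actually loaded
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Recreate this photo as a navigable 3D scene. "
                             "Reply with a single self-contained HTML file "
                             "using only inline JavaScript."},
                    # Image passed inline as a base64 data URL
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }
        ],
        "temperature": 0.4,
    }

# Build (but do not send) a request; it would be POSTed to a local server
# such as http://localhost:8080/v1/chat/completions.
payload = build_scene_request(b"\xff\xd8fake-jpeg-bytes")
print(json.dumps(payload)[:60])
```

The returned HTML would then be saved to disk and opened in a browser to inspect the generated scene.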

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. Review the Editorial Policy, Disclaimer, or Contact page if you need to flag a correction or understand how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] You can decompose models into a graph database [N], [r/ML] KIV: 1M token context window on a RTX 4070 (12GB VRAM), no retraining, drop-in HuggingFace cache replacement - Works with any model that uses DynamicCache [P].

