
[r/ML] [R] What kind of video benchmark is missing for VLMs?

Impact: 3/10

Summary

A user on r/ML is asking what kinds of video benchmarks are still missing for evaluating Video Large Language Models (VLMs). While acknowledging existing benchmarks such as VideoMME and MVBench, they are looking for ideas for a new, more "physical and open world" dataset. The goal is to identify gaps in current VLM evaluation methods and encourage the development of more comprehensive testing.

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. Review the Editorial Policy, Disclaimer, or Contact page if you need to flag a correction or understand how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT, [r/LocalLLaMA] karpathy / autoresearch, [r/ML] KIV: 1M token context window on a RTX 4070 (12GB VRAM), no retraining, drop-in HuggingFace cache replacement - Works with any model that uses DynamicCache [P], [r/ML] LLMs learn backwards, and the scaling hypothesis is bounded. [D].

Related Articles

Next read

[r/ML] [D] MYTHOS-INVERSION STRUCTURAL AUDIT

Stay with the thread by reading one adjacent story before leaving this update.
