AI update explained

[r/ML] ICML final decisions rant [D]

Impact: 5/10

In 10 seconds

  • The International Conference on Machine Learning (ICML) accepted approximately 6,500 out of 24,000 submissions this year.
  • The rejection wave is expected to cascade into upcoming venues such as NeurIPS, while commenters questioned whether review quality can keep up with submission volume.

Why it matters

The numbers underscore mounting pressure on the academic peer-review system in machine learning: a bottleneck in how research is disseminated, and a struggle to maintain review quality as submission volumes escalate. For individual researchers, that can mean fewer paths to publication and less constructive feedback.

Summary

The International Conference on Machine Learning (ICML) accepted approximately 6,500 of 24,000 submissions this year. The resulting wave of rejected papers is expected to significantly swell submission counts at upcoming conferences such as NeurIPS, perpetuating a cycle of high volume and low acceptance rates. The discussion also highlighted concerns about the quality and adequacy of peer reviews.

What happened

ICML recently announced its final decisions, accepting roughly 6,500 papers from a total of approximately 24,000 submissions, an acceptance rate of about 27%. That leaves on the order of 17,500 papers rejected.

Key details

Papers rejected from ICML are expected to be resubmitted to other major conferences, most immediately NeurIPS, further inflating submission numbers there. This sustains a cycle of high submission volume and comparatively low acceptance rates across top-tier machine learning conferences.
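
For a rough sense of scale, here is a back-of-envelope Python sketch of that cascade. Only the ICML figures come from the post; the resubmission rate and the NeurIPS baseline are illustrative assumptions, not reported numbers.

    # Back-of-envelope model of the ICML -> NeurIPS resubmission cascade.
    # Only the ICML figures come from the post; the rest are assumptions.
    icml_submissions = 24_000
    icml_accepted = 6_500
    icml_rejected = icml_submissions - icml_accepted      # 17,500 papers
    acceptance_rate = icml_accepted / icml_submissions    # ~0.27

    resubmission_rate = 0.6    # assumed fraction of rejects retried at NeurIPS
    neurips_baseline = 25_000  # assumed fresh NeurIPS submissions

    projected = neurips_baseline + resubmission_rate * icml_rejected
    print(f"ICML acceptance rate: {acceptance_rate:.0%}")      # 27%
    print(f"Rejected papers: {icml_rejected:,}")               # 17,500
    print(f"Projected NeurIPS submissions: {projected:,.0f}")  # 35,500

Under these illustrative assumptions, the resubmission pool alone would grow NeurIPS volume by roughly 40% over its baseline, which is the dynamic commenters were pointing at.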

More context

Community discussions also raised concerns about review quality. Commenters cited reviews they saw as inadequate, or that faulted papers for omitting specific benchmarks even when those benchmarks were not central to the paper's core contribution.

What to watch

Researchers will be watching submission numbers at upcoming conferences such as NeurIPS to gauge the full extent of this cascade. Improving the quality and fairness of peer review under high-volume publishing remains a critical open question for the machine learning community.

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. Review the Editorial Policy, Disclaimer, or Contact page if you need to flag a correction or understand how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [r/ML] Why ML conference reviews sometimes feel like a “lottery” [D], [HN] Show HN: Hackamaps – A global hackathon map I build after hitting Lovable Limits, [r/ML] A Hackable ML Compiler Stack in 5,000 Lines of Python [P], [r/ML] Phosphene local video and audio generation for Apple Silicon open source (LTX 2.3) [P].
