[r/ML] Seems ICML is rejecting MANY unanimous positively rated papers [D]

Impact: 3/10

Why it matters

This situation highlights potential issues within the peer-review process for major AI conferences, which could impact researchers' publication opportunities and the perceived fairness of academic evaluations. It may prompt discussions on how review systems are structured and incentivized.


Summary

Reports from the r/MachineLearning community indicate that the International Conference on Machine Learning (ICML) is rejecting a notable number of papers that received unanimous positive ratings from reviewers. Commenters attribute the trend to misaligned incentives in the conference's review process, particularly during the rebuttal phase.

What happened

Members of the r/MachineLearning community report that ICML is rejecting numerous papers that had received unanimous positive ratings from reviewers. One user reported that their own paper, which held a high pre-rebuttal score, was ultimately rejected.

Key details

The core complaint is a perceived misalignment of incentives in ICML's review process, particularly during the rebuttal phase. Reviewers reportedly feel pressure to adjust their scores toward a more homogeneous rating, in part to avoid further scrutiny from Area Chairs (ACs). While the stated goal of the rebuttal phase, encouraging reviewers to reconsider their scores, is reasonable, the practical outcome appears to be a distorted dynamic in which an initially positive consensus does not guarantee acceptance.

What to watch

The community's reaction to these rejections may spur further discussion of the transparency and fairness of peer review at major AI conferences. It could also prompt conference organizers to re-evaluate the guidelines and incentives given to reviewers and Area Chairs so that evaluations are more consistent and equitable.
