
AI update explained

[r/ML] UAI Rebuttal [D]

A user on the r/MachineLearning subreddit posted their paper review scores for the Conference on Uncertainty in Artificial Intelligence (UAI), both before and after submitting a rebuttal. They are seeking advice from the community on their paper's acceptance chances.

Impact: 1/10

In 10 seconds

What to know first

  • Two of the four UAI reviewers raised their scores by one point each after the rebuttal (4/3 → 5/3 and 3/3 → 4/3); the other two held steady at 6/4.
  • The author is now weighing their acceptance chances at UAI against withdrawing and preparing a NeurIPS submission.

Why it matters

This post highlights the common experience of researchers navigating the peer review process for AI conferences, including the impact of rebuttals on reviewer scores and the strategic decisions authors face regarding paper submissions.


Summary

A user on r/MachineLearning shared their UAI paper review scores, noting a slight improvement after submitting a rebuttal. Two reviewers raised their scores, from 4/3 to 5/3 and from 3/3 to 4/3, while the other two remained at 6/4.

What happened

The original poster shared reviewer scores and confidence ratings for their submission to the Conference on Uncertainty in Artificial Intelligence (UAI), taken both before and after the rebuttal period, and asked the community to gauge the paper's chances of acceptance.

Key details

  • **Pre-rebuttal scores/confidence**: 6/4, 6/4, 4/3, 3/3
  • **Post-rebuttal scores/confidence**: 6/4, 6/4, 5/3, 4/3
  • Two of the four reviewer scores saw a minor increase after the rebuttal (from 4/3 to 5/3, and 3/3 to 4/3).
  • The user is considering whether to pursue acceptance at UAI or prepare for submission to NeurIPS.

What to watch

Community discussions around such posts often provide insights into the subjective nature of academic peer review, common reviewer expectations, and strategies for effective rebuttals. The advice offered could reflect current sentiment on acceptance thresholds for top-tier AI conferences.

Editorial note

AI Dose summarizes public reporting and links to original sources when they are available. Review the Editorial Policy, Disclaimer, or Contact page if you need to flag a correction or understand how this site handles sources.

Continue Reading

Explore related coverage about community news and adjacent AI developments: [HN] Show HN: Agent-desktop – Native desktop automation CLI for AI agents, [HN] Show HN: Hackamaps – A global hackathon map I build after hitting Lovable Limits, [r/ML] Why ML conference reviews sometimes feel like a “lottery” [D], [r/ML] A Hackable ML Compiler Stack in 5,000 Lines of Python [P].

