The author observes a concerning trend in AI/ML conference peer review: reviewers now feel obligated to find flaws, even in strong papers, to demonstrate diligence. This shift, intended to improve review quality, has all but eliminated "easy" accepts and often pushes authors into conducting additional, sometimes detrimental, experiments during the rebuttal phase. Paradoxically, the pressure to find faults can thus worsen the quality of the papers it was meant to improve.