A recent paper audited "shadow APIs", services claiming to provide access to advanced AI models, and found that 187 academic papers relied on these unreliable platforms, with one such service accumulating nearly 6,000 citations. The audit uncovered severe issues: performance divergence of up to 47% from the official models, unpredictable safety behavior, and failed identity verification. This suggests that a significant portion of AI research may be built on inconsistent or even fake model outputs, undermining both reproducibility and the validity of published findings.
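As an illustrative sketch only (not the paper's actual methodology), a performance-divergence figure like the 47% above could be estimated by replaying the same benchmark prompts against an official endpoint and a suspected shadow endpoint and counting how often their answers disagree. All names and data below are hypothetical:

```python
# Hypothetical sketch: estimating output divergence between an official
# API and a suspected shadow API on identical benchmark prompts.
# The data and function names are illustrative, not from the audit.

def divergence_rate(official_answers, shadow_answers):
    """Fraction of prompts on which the two services disagree."""
    if len(official_answers) != len(shadow_answers):
        raise ValueError("answer lists must be the same length")
    disagreements = sum(
        1 for a, b in zip(official_answers, shadow_answers) if a != b
    )
    return disagreements / len(official_answers)

# Toy example: answers to 10 multiple-choice benchmark questions.
official = ["A", "C", "B", "D", "A", "B", "C", "D", "A", "B"]
shadow   = ["A", "C", "D", "D", "B", "B", "C", "A", "A", "C"]

print(f"divergence: {divergence_rate(official, shadow):.0%}")
```

In practice an audit would also need to control for sampling temperature and prompt formatting, since even a genuine endpoint can vary between runs; the simple disagreement count here is just the core of the metric.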