https://19-pacheco-torgal-19.blogspot.com/2026/04/can-ai-discover-what-humans-cannot.html
Building on a previous post (linked above), I'd like to highlight another interesting paper, this one by researchers from the University of Illinois Urbana-Champaign. If the previous post asked whether AI can discover what humans cannot, this paper asks something equally audacious: can AI predict where science is going before it gets there?
The authors make a deceptively simple but radical move: they reframe research proposal generation as a forecasting problem. Given a question and a body of literature available before a fixed cutoff date, the model generates a structured proposal — evaluated not by how sophisticated it sounds, but by how accurately it anticipates research directions that actually materialise in papers published afterwards. Trained on 17,771 papers, the system learns to spot overlooked gaps and draw inspiration across disciplinary boundaries — precisely where the most consequential ideas tend to hide. The implications reach well beyond academia. This could become the instrument through which funding agencies, science policymakers and research evaluators make higher-stakes decisions: not which proposals sound compelling in committee, but which ones the arc of science is already bending towards. https://arxiv.org/abs/2603.27146
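To make the cutoff idea concrete, here is a minimal sketch of what such a temporal evaluation could look like. Everything in it — the field names, the keyword-overlap metric, the data — is my own illustrative assumption, not the authors' actual pipeline: the corpus is split at a fixed date, a proposal is generated from pre-cutoff literature only, and it is scored against papers that appear afterwards.

```python
# Hypothetical sketch of a temporal-cutoff forecasting evaluation.
# All names and the scoring rule are illustrative assumptions,
# not the paper's actual method.
from datetime import date

def split_by_cutoff(papers, cutoff):
    """Partition a corpus into pre-cutoff context and post-cutoff ground truth."""
    context = [p for p in papers if p["published"] < cutoff]
    future = [p for p in papers if p["published"] >= cutoff]
    return context, future

def forecast_score(proposal_keywords, future_papers):
    """Toy proxy metric: fraction of post-cutoff papers whose keywords
    overlap the keywords of the forecasted proposal."""
    hits = sum(
        1 for p in future_papers
        if set(p["keywords"]) & set(proposal_keywords)
    )
    return hits / len(future_papers) if future_papers else 0.0

# Tiny made-up corpus for illustration.
papers = [
    {"published": date(2024, 1, 10), "keywords": ["llm", "retrieval"]},
    {"published": date(2024, 6, 2), "keywords": ["agents", "planning"]},
    {"published": date(2025, 3, 15), "keywords": ["agents", "forecasting"]},
    {"published": date(2025, 9, 1), "keywords": ["benchmarks"]},
]
context, future = split_by_cutoff(papers, date(2025, 1, 1))
# A proposal "generated" from the pre-cutoff context, scored on the future.
score = forecast_score(["agents", "forecasting"], future)
print(len(context), len(future), score)  # → 2 2 0.5
```

The point of the setup, not of this toy metric, is what matters: the model is rewarded only for anticipating what actually gets published after the cutoff, which is what turns proposal generation into a falsifiable forecasting task.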
Yet the promise comes with a shadow. If funding decisions and research agendas start leaning on AI forecasts, there is a risk of reinforcing existing patterns rather than fostering genuine innovation. By privileging areas the model predicts will succeed, we could inadvertently narrow the scope of exploration, crowding out high-risk, unconventional ideas that fall outside the AI’s learned trajectories. Over time, this might entrench a “predictable science,” where AI-guided choices favor incremental advances and safe bets, undermining the serendipitous leaps that often drive paradigm shifts.
P.S. — The above-mentioned paper cites a compelling companion work: PreScience: A Benchmark for Forecasting Scientific Contributions, which approaches the same ambition from a different angle — benchmarking how well AI can anticipate the actual future impact of not-yet-published research. Taken together, these two papers signal something significant: a new subfield is quietly assembling itself, one that treats scientific forecasting not as speculation, but as a rigorous and measurable discipline. https://arxiv.org/abs/2602.20459