A recent large-scale study by three scholars from the Hebrew University of Jerusalem, published in Higher Education, analyzes fifteen years of longitudinal publication data for more than 310,000 faculty members at American research universities. A central finding is that, under the authors’ definition of a publication, between 32% and 47% of all faculty career years contain zero publications, a condition the authors somewhat dramatically label an “annus horribilis.” https://link.springer.com/article/10.1007/s10734-026-01665-7
This conclusion, however, rests on a narrow definition of research productivity. The study equates productivity with outputs indexed in two bibliometric sources: journal articles listed in CrossRef and books catalogued by Baker & Taylor. That definition excludes a wide range of legitimate scholarly contributions, including conference proceedings (often the primary dissemination channel in fields such as computer science and engineering), working papers, preprints, policy reports, datasets, software, technical reports, book reviews, and other forms of scholarly and public engagement.
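To make the stakes concrete, here is a minimal sketch of the mechanism, using invented faculty-year records rather than anything from the study: the measured share of zero-output years shrinks as soon as output types beyond indexed articles and books are counted.

```python
# Hypothetical illustration (invented numbers, not the study's data or code):
# how the share of "zero-output" career years depends on which output types
# the measure counts.

# Synthetic records: one dict per faculty-year, with counts by output type.
career_years = [
    {"journal_articles": 0, "books": 0, "conf_papers": 2, "preprints": 1},
    {"journal_articles": 1, "books": 0, "conf_papers": 0, "preprints": 0},
    {"journal_articles": 0, "books": 0, "conf_papers": 0, "preprints": 0},
    {"journal_articles": 0, "books": 0, "conf_papers": 1, "preprints": 0},
]

def zero_year_share(years, counted_types):
    """Share of faculty-years with zero outputs among the counted types."""
    zero = sum(1 for y in years if all(y[t] == 0 for t in counted_types))
    return zero / len(years)

# Narrow definition, roughly the study's: indexed articles and books only.
narrow = zero_year_share(career_years, ["journal_articles", "books"])
# Broader definition: also count conference papers and preprints.
broad = zero_year_share(
    career_years, ["journal_articles", "books", "conf_papers", "preprints"]
)

print(f"zero-year share, narrow definition: {narrow:.0%}")  # 75%
print(f"zero-year share, broad definition:  {broad:.0%}")   # 25%
```

In this toy sample the headline statistic falls from 75% to 25% without any change in what the faculty actually produced; only the measurement boundary moved.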
A similar limitation appears in the study’s treatment of research funding, which is measured exclusively through federal grants on which a faculty member is listed as Principal Investigator. This approach omits other significant sources of research support: internal university funding, private foundation grants, industry-sponsored research, international funding agencies, and sub-awards on which a scholar serves as co-investigator. Smaller-scale mechanisms such as fellowships and travel grants are likewise excluded, despite their role in sustaining research activity.
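The same arithmetic applies to funding. A small sketch, again on invented records, of how a PI-only, federal-only measure can classify supported scholars as unfunded:

```python
# Hypothetical records (not from the study): funding status of four faculty
# members in a given year, under a narrow and a broad measure.

faculty = [
    {"federal_pi": True,  "federal_coi": False, "foundation": False, "internal": False},
    {"federal_pi": False, "federal_coi": True,  "foundation": False, "internal": False},
    {"federal_pi": False, "federal_coi": False, "foundation": True,  "internal": True},
    {"federal_pi": False, "federal_coi": False, "foundation": False, "internal": False},
]

# Study-style measure: counted as funded only if PI on a federal grant.
pi_only = sum(f["federal_pi"] for f in faculty)

# Broader measure: any of the listed roles or sources counts as funded.
any_support = sum(
    f["federal_pi"] or f["federal_coi"] or f["foundation"] or f["internal"]
    for f in faculty
)

print(f"funded, PI-only federal measure: {pi_only}/{len(faculty)}")     # 1/4
print(f"funded, broader measure:         {any_support}/{len(faculty)}")  # 3/4
```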
Finally, the study does not adequately address disciplinary differences in publication practice. In the humanities, the monograph is often the primary form of scholarly output and may require several years of sustained work; in the biomedical sciences, by contrast, large collaborative teams typically produce multiple articles annually. Under annual counting, a field whose normal rhythm is one book every five years will register mostly “zero” years even when research is proceeding steadily, so identical zero-year rates can mean very different things across fields.
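A stylized sketch, with invented publication schedules standing in for real disciplinary data, shows how annual counting turns a normal humanities rhythm into a long run of “zero” years:

```python
# Stylized arithmetic (not empirical): zero-output years under two invented
# disciplinary publication rhythms, counted annually over a 15-year career.

def zero_year_share(outputs_per_year):
    """Fraction of years with zero outputs, given a per-year output list."""
    return sum(1 for n in outputs_per_year if n == 0) / len(outputs_per_year)

# Humanities pattern: one monograph completed every fifth year.
humanist = [0, 0, 0, 0, 1] * 3
# Biomedical pattern: several team-authored articles every year.
biomedical = [3, 4, 3, 5, 4] * 3

print(f"humanist zero-year share:   {zero_year_share(humanist):.0%}")    # 80%
print(f"biomedical zero-year share: {zero_year_share(biomedical):.0%}")  # 0%
```

Both invented careers are productive on their own terms; only the counting window differs. Pooling such fields into a single zero-year statistic obscures exactly this.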
Taken seriously, these limitations collapse the central claim: the “productivity crisis” reads less as a discovery than as an artifact of poorly specified metrics. Before advancing any further conclusions, the authors need to show that their measurement strategy is not fundamentally miscalibrated. In this context, it may be worth revisiting my earlier letter, “The Illusion of Scientific Talent Identification Through Publication Counts.”