Still following up on the post above about the interesting Koltun presentation, see below an article published this week in Science Business that speaks about a ‘disruptive’ plan for research assessment. https://sciencebusiness.net/news/france-helps-brussels-move-ahead-disruptive-plan-research-assessment
In fact, there’s absolutely nothing disruptive about it. It is just a way to spend more money on peer review instead of using bibliometrics to support peer-review assessment, thereby saving millions that could be used to fund research and hire researchers. As Peter Drucker used to say, “You can’t manage what you can’t measure.”
It’s true that, as Koltun said, every scientist knows that in order to evaluate the work of any researcher you need to read their work and understand it, including the details and the context in which it was done; you should even be able to reproduce it. Unfortunately, there is neither the time (nor enough money) for such a deep approach.
So let me tell you about my country, Portugal, which has tried both ways. In the previous Portuguese research assessment, in 2013, the international experts on the panels were entirely free to base their assessment both on visits to the research units and on a bibliometric study, based on Scopus and carried out by Elsevier, that produced several metrics (publications per FTE, citations per FTE, h-index, Field-Weighted Citation Impact, top cited publications, and national and international collaborations).
But in recent years we had a Science Minister with ideas similar to those of the French bibliometric haters, so in the last research assessment exercise, in 2018, in which 348 research units comprising almost 20,000 researchers were evaluated, the Evaluation Guide clearly dictated that absolutely no metric could be used by the panels (note that all panels were composed of international experts: 51 from the UK, 21 from the USA, 17 from Germany, 17 from France, 11 from the Netherlands, 8 from Finland, 8 from Ireland, 7 from Switzerland, 6 from Sweden, 5 from Norway, and others from other countries).
However, after the research assessment was concluded, I searched all the reports of all scientific areas and found that the reviewers had given a lot of weight to the number of publications and the “quality” of journals (despite the fact that the Evaluation Guide did not allow them to). “Publications”, “quartiles” and even “impact factors” were mentioned in the assessment reports more than 500 times. In other words, in the absence of any metric, the international experts (most ironically) decided to use the worst of them all.
PS – Could it be that France’s hatred of metrics is due to the fact that it has a low number of highly cited articles and of highly cited scientists, so that it does not even appear in this group of 17 countries ranked by Scopus Highly Cited Scientists per million inhabitants? https://pacheco-torgal.blogspot.com/2021/11/switzerland-denmark-and-sweden-has.html Also, if France’s research performance were assessed in terms of funding efficiency (as suggested by Wohlrabe et al. 2019 and also by de Marco 2019), its performance would be even worse.