Banning journal impact factors is bad for Dutch science

Abandoning measurable evaluation criteria will make judgements more political and more random, say Raymond Poot and Willem Mulder

Published on August 3, 2021
Last updated August 12, 2021
[Image: divers' feet and a measuring stick visible above the water level, a metaphor for the journal impact factor ban. Source: Alamy]

POSTSCRIPT:

Print headline: Journal impact factor ban is bad for Dutch science


Readers' comments (6)

I hope you also publish the excellent reply by Annemijn Algra et al.: https://www.scienceguide.nl/2021/07/we-moeten-af-van-telzucht-in-de-wetenschap/
I found this article rather confusing, as it contains a lot of opinion but precious little evidence. It also starts with an illogical argument, saying "Utrecht's assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant." If that is so, why are the authors so worried about dropping the impact factor?

They then go on to contrast the quality of papers in Cell/Nature/Science with that of papers in "technical journals" - I'm not sure what is meant by that - there are plenty of journals that fit neither category yet report excellent work. Reviewing standards are at least as high in many society journals, which have editors who are familiar with the specific topic of a paper - more so than the journalist editors of the 'top' journals. The pressure to get publications in CNS is known to create perverse incentives (https://royalsocietypublishing.org/doi/10.1098/rsos.171511), and these journals have a relatively high rate of retractions: https://www.nature.com/articles/nature.2014.15951.

It's also interesting that they see open science as a political rather than a scientific matter. I could not disagree more: it's frustrating to read these 'high impact' papers in 'top' journals that make extraordinary claims but then just say 'data available on request' (it never is). If we cannot see the data and have a clear account of the methods, then the research paper remains more like an advertisement than a scientific contribution.

Finally, the authors' concern for early-career researchers is laudable, but have they surveyed early-career researchers to ask what they think about the new criteria?
And here is a response from some Dutch scientists who take a different perspective from the authors (including some early-career researchers): https://recognitionrewards.nl/2021/08/03/why-the-new-recognition-rewards-actually-boosts-excellent-science/
Sigh. I suppose that there will always be some dinosaurs who oppose progress. The journal impact factor has been totally debunked for decades now, yet none of that work is referred to. I'll cite one example from my own experience. In 1981 we published a preliminary account of results that we'd obtained with the (then new) method for recording the activity of single ion channels. It was brief and crude, but in the early '80s anything with single channels in it sailed into Nature. After four more years of work we published a much better account of the work: 57 printed pages in the Journal of Physiology. The idea that the short note is worth more than the real paper is beyond absurd. How about reading the applicant's (self-nominated) three best papers? It doesn't matter a damn where they are published (or even whether they're published yet).
If the venue does not matter, then every university should have an in-house journal and academics should publish in those. Why bother with other journals in the first place?
So Poot and Mulder, both from Dutch medical centres, want to retain academic evaluation by journal impact factor. Their logic is hard to follow and harder to swallow: top journals have high impact factors and publish the best papers because they get assistance from world experts who safeguard high impact and quality. What does this mean? In many research fields, they say, journal impact factors are not nearly as significant as they are in medicine. Oh really? Academic performance is measured almost entirely by metrics these days, and by far the most important of these is the journal impact factor.

As David Colquhoun says, this is not because the journal impact factor is a good measure, but because it has long been gamed, and the manipulation has been most successful in medicine. For years, the editors of the Lancet and the BMJ have bewailed the corruption that produces the high impact factors boasted by the top journals in medicine. Papers are written to order and to a formula that will generate citations and thereby contribute most to the journal impact factor. Dozens of people typically claim authorship of a paper in a medical journal; equally typically, none of them wrote it. As citations grow older and ever more positive, the research base becomes shakier. Some of the most prolific authors have never actually existed, nor have the papers they have written, or the journals in which they have published.

The system is absurd and has long been recognized as quite daft, but a lot of capital has been sunk into working this system, probably in no discipline more than medicine.
