Call to remove ‘hyper-authored’ papers from research metrics

Web of Science study says articles with more than 100 authors or involving dozens of countries can artificially boost citation impact

Published on December 11, 2019
Last updated December 11, 2019
[Image: a crowded King's Cross station. Source: iStock]



Reader's comments (1)

The root cause of this problem is the assumption that 'citation impact' is a meaningful measure of research performance. Citation impact is simply a count of citations to a paper (or a normalisation of those citations against the average for an arbitrarily defined 'field'). It is a bibliometric concept, not a research concept. Citations cannot directly tell you whether research is high quality, replicable, innovative, accurate, or valuable; they can only tell you that the research has been cited, and how often, but not why. They serve as a proxy for performance because they indicate the utility of the research, but there is no independent quantitative measure of research quality against which to calibrate them.

There is therefore no principled rationale for removing outliers that isn't just an arbitrary decision. Why treat a paper with more than 10 authors differently from one with 9? Why should 30 countries be the limit for a valid multinational research collaboration? Outliers affect any measure that uses a mean, but perhaps using that mean to measure aggregate research performance is the real problem.
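A minimal numerical sketch of that last point, assuming a simple field-normalised indicator (a paper's citations divided by the field's mean citations); the figures and the `normalized_impact` helper below are invented for illustration, not taken from the Web of Science study:

```python
# Illustrative sketch with invented numbers, not data from the study.
# A common field-normalised indicator divides a paper's citations by the
# mean citation count of papers in the same field; because it is built on
# a mean, one hyper-cited outlier shifts every paper's score.

def normalized_impact(citations: int, field_counts: list[int]) -> float:
    """A paper's citations divided by its field's mean citations."""
    field_mean = sum(field_counts) / len(field_counts)
    return citations / field_mean

# A hypothetical field of ten ordinary papers (mean = 5.5 citations).
field = [4, 7, 2, 9, 5, 6, 3, 8, 5, 6]

print(normalized_impact(11, field))  # 2.0 -> twice the field average

# The same field after one hyper-cited paper with 3,000 citations joins:
# the mean jumps to ~277.7 and the same 11-citation paper now looks
# far below average.
print(normalized_impact(11, field + [3000]))  # ~0.04
```

On these invented numbers, whether the 3,000-citation paper is excluded at 100 authors, 30 countries, or some other threshold changes every other paper's score, which is the commenter's point about the arbitrariness of any cutoff.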
