Will AI liberate research from institutional bean-counting?

ChatGPT’s ability to churn out mediocre papers should lead us to reappraise how research is carried out, reported and evaluated, says Martyn Hammersley

Published on June 22, 2023
Last updated July 7, 2023

Reader's comments (4)

What is the point here? Please tell us.
This reads rather like "Chaos in the Brickyard", written in 1963 (https://www.science.org/doi/10.1126/science.142.3590.339.a). The stories are similar in the sense that they point to the same danger: the erosion of the essential value of an activity (i.e. research) when it becomes a target-driven job.
I am sure I have already read articles produced like this, though not necessarily by AI. I don't see why reviewing should not be done in the same way, cutting out the middleperson altogether. Social scientists could take a leaf out of natural scientists' book by swapping research results and arguments online, leaving commercial companies (including universities) and quangos in their own closed worlds.
This prompts three thoughts. First, ChatGPT and other AIs might be good at writing at least a first draft of review-type articles, so perhaps the AI should be credited as an author. I suspect, and hope, that the human co-author(s) would want to explain the AI's role in the production of the article.

The second thought is that as these large language models work by hoovering up lots of text from the web, they may be good at building on existing ideas and perhaps extending them a little, but they are unlikely to be able to come up with new ideas, and in the long term the development of human knowledge requires these inputs of new ideas. AI-generated research may produce more and more specialised research, which may eventually be "intelligible" only to AIs – which may not be entirely negative. (The peer review system may have a similar conservative influence, resulting in research which is intelligible only to peers.)

The third thought is that this danger may be mitigated to some extent by asking the AI to use ideas from other disciplines: e.g. "write an academic article on the treatment of depression in the style of a mathematician." This almost certainly would not produce anything interesting, but the principle of mixing disciplines and genres may, sometimes, yield interesting results.
