AI poses threats to education, ethics and eureka moments

The sudden rise of generative AI offers an opportunity for reflection and renewal of our scholarly values, say Ella McPherson and Matei Candea

Published on March 19, 2024
Last updated March 25, 2024
Illustration: Archimedes unveils a circuit board from behind a curtain
Source: Getty Images/iStock montage

Reader's comments (3)

Generally a good article - the points on the value of scholarship and eureka moments are well made, but the opening gambit that academia is "excited" about AI is unwarranted. The problem with reducing "lower-level" work is that it is often unclear where low-level work stops and more critical work begins. You mention visualising data - however, the choice of visualisation depends critically on understanding the data and your reason for visualising it. Note the word "understanding", the key part AI lacks.

The ethics of where AI gets its data is raised, but you miss the elephant in the room: "trust". It is not just where the data came from but how and why the AI joined specific bits of data together. LLMs are black boxes that cannot provide an audit trail or explanation of what was done and why, so I see no reason to trust the output of the box. Indeed, it is not clear that the result is reproducible in any sense. I see no reason to get excited about an unreliable, untrustworthy tool, the use of which could cost more time than it saves and cause reputational damage. Academia is supposed to be concerned with knowledge and thought; when it comes to AI there is a distinct lack of either.
Generative AI (gAI) is a tool. We need to learn (and teach) how to use it correctly and when it is appropriate to use it at all. This includes the intelligent selection of prompts for the gAI and - even more vital - critical analysis of what it produces, then reasoned choices about what, if any, of the output we want to include in our work. Students need to 'show their working' by including the prompts used and their analysis of the output in any piece of work where they want to use gAI. A final-year undergraduate computer science student asked me just today about using gAI to assist with code. I suggested comparing the gAI code with what they'd written (which already works), deciding which was better... and talking about it in the report that accompanies the code they were working on. We don't complain about students using spell check or even Grammarly; I don't think gAI should be thrown out with the bathwater either.
AI and any or all of its components--which must be distinguished from each other--used well, has absolutely NO RELATIONSHIP to ethics and--come on, now--"eureka moments". It is 2024, not 1924. Education will benefit from proper adoption, which must be both exemplified AND taught. The same reaction greeted cave painting, early alphabets, printing, the telegraph, the typewriter, radio, TV, computers.... Please!
