
ChatGPT — time to ask right questions

ChatGPT is a conversational artificial intelligence software application developed by OpenAI. PHOTO: TNS
It’s no surprise to learn that my beloved University of Otago is worrying about ChatGPT (Varsities tackle AI challenge, ODT 24.1.23).

It’s potentially a game changer for the education sector and a lot of other sectors too. But my own experience with this latest AI tool left me suspecting we have bigger things to worry about.

If you have missed the buzz around ChatGPT, it is a computer application that uses artificial intelligence to generate content on demand.

It is owned by OpenAI, the tech giant that also brought us DALL-E, which generates pictures, and Jukebox, which does the same for music.

In the case of ChatGPT, what it creates is text.

Generative models like ChatGPT do not just find things other people have already written, like the results of a Google search. They generate new content.

If I wanted 1000 words on why the Otago Daily Times is the best newspaper in the southern hemisphere, I would type in that request and, in a second or two, GPT would come up with something that has never been written before.

In this case, GPT tells me it is due to ‘‘its unwavering commitment to quality journalism and its passion for keeping readers informed’’. Who could argue with that?

When I flipped that question around and asked why the ODT is the worst paper in the southern hemisphere, GPT informed me it is because ‘‘it is often filled with opinion pieces that lack any facts or evidence’’. So yeah, ahem, safe to say it is not perfect yet.

You can have fun with GPT. Want to rewrite your washing machine instructions in the style of a Lee Child thriller? GPT will deliver.

But you can probably imagine why universities are a bit worried about this.

Just this week, ChatGPT passed law exams at the University of Minnesota Law School. It did not exactly ace them, as it was graded at C+. But for desperate, failing or just plain lazy students, you can see the attraction.

Of course, student dishonesty is not a new concern. AI technology has so far been on the side of the good guys, with plagiarism detectors like Turnitin helping spot answers copied verbatim from other sources.

GPT technology will not trigger plagiarism detectors, though, because it is not plagiarism. This would be new writing ... just not the student’s own.

Whether the original human authors will be due a cut if an AI trained on their work ever writes a bestseller will be one for the courts and the law exams of the future.

Last year, I gave a demonstration of GPT-3 (a younger sibling of ChatGPT) to an Otago undergraduate law class. I fed it an old exam question and we all looked on as it churned out a pretty solid answer.

Looking out at that sea of intrigued young faces, I had a sudden pang of doubt. The class essay was due pretty soon. Had I just shown my students a new way to cheat?

But as the class went on, another worry started to take hold. Is there much point, I wondered, in testing my students on something that AI can already do in a fraction of the time?

Sure, GPT still cannot write as well as the better students. But what will it be capable of by the time these young people go out into the workplace?

Will anyone be paying human lawyers for legal opinions or contract drafting, if AI could come up with the same results faster and, presumably, cheaper? My focus is on law, for obvious reasons, but you can substitute accountants, architects ... yes, even journalists.

To stay relevant, graduates are going to need new skills. For instance, generative models rely on prompts from the user. If your request is not clear or specific, then you are likely to get something back that misses the mark. Garbage in, garbage out, as we used to say.

Courses on how to use AI technologies will be very popular with students in the near future.

Trying to ban students from using GPT can also feel like trying to hold back the tide — a bit like banning spellcheck or Wikipedia.

These things exist. Students (and their professors) need to learn to deal with that rather than ignore it.

When I had my first experience in a law firm, the partners got by with a dictaphone and a secretary with good shorthand. That is just not the world any more.

Universities are going to have to think hard about how to deal with technologies like ChatGPT. Sure, we need to guard against AI-enabled cheating, by learning to spot GPT tells and setting tests that can only be answered properly by human students.

But we also need to be thinking about the purposes of those tests.

Because if AI can already provide the answers, then maybe we’re not asking the right questions.

Colin Gavaghan is a former University of Otago law professor who specialises in the law and emerging technologies.