AI tech and handing in your course work

No help requested or required ... But try saying no to Copilot. PHOTO: REUTERS
Students constantly navigate their relationship with technology. Everyone does.

Phones are a seamless extension of the self and are often carried around as such. In fact, Charlie Brooker's wonderful TV series, Black Mirror, is named after how a screen looks when turned off. The stand-alone episodes focus on human connection, technology, and privacy.

So what does being a student in a world increasingly shaped by artificial intelligence mean?

AI transforms how we engage with technology, affecting everything from social media interactions to educational experiences.

The University of Otago has extensive regulations and policies on AI use. They are well written and address important questions of academic authenticity.

One part that stood out to me was the need to “ensure that people, not technology, are at the heart of decision-making, with human agency and expertise being central to the use of generative AI”.

The emphasis on human agency being central to using AI tools is a message I think we can’t overstate.

This is crucial as we balance technological advances with preserving our critical-thinking skills and personal autonomy.

My two political studies papers include a subheading called “Using AI” on their main Blackboard page, alongside other headings such as “lecture recordings”, “assignment information” and “tutorials”.

The Using AI page directs students towards helpful AI resources for gathering academic sources for research. It also warns of the dangers of tools like ChatGPT, encouraging students to fact-check, and notes that ChatGPT and OpenAI may collect and share your data.

Furthermore, Turnitin, the system for handing in electronic assignments, has advanced plagiarism detectors that flag substantive AI use.

The Using AI pages often end with the suggestion that students talk to a lecturer about it if they are in doubt. This echoes the sentiment that human agency and critical thinking are crucial in any and all use of AI.

The time students are spending on screens is not solely for academic pursuits.

My generation are social media users. Social media companies are using AI to influence user experiences. Instagram, Snapchat, TikTok, and Meta use AI to curate personalised content, learning from users' likes, behaviours and interactions to recommend posts and advertisements.

Sometimes you have an in-person conversation about a particular topic, only for that topic to surface in some form on your phone a few days later.

This is often laughed off with a “my phone is listening" joke, maybe paired with some short-term nervousness.

Often, this curated content is interesting. Last week, without searching for it, I saw a recommended post from Critic interviewing students at the Highlanders game, one I might otherwise have missed.

Far more alarming is the growing trend of anthropomorphising AI chatbots, treating them as if they had “human” characteristics.

One example is Snapchat’s “My AI” function, which offers a customisable virtual companion that picks up on preferences and replies to any prompt, even a “hi”.

When we first noticed the function two years ago, my flatmates and I spent the evening testing its knowledge of us, Dunedin, and Otago Uni. The results were fascinating but eerie.

What happens when technology starts to replace interpersonal human relationships? And what is the cost?

These processes and the integration of AI into social media are not without benefits — namely, the use of AI to moderate content.

AI can flag harmful content quickly and subsequently protect users from harm. But AI cannot always discern the subtlety of misinformation or verify its truthfulness the way human fact-checkers can.

This brings to light Meta’s changes to content moderation and an end to third-party fact-checking. These changes leave users open to misinformation and abuse online, and remove protections for often-targeted groups of people.

Another recent development in the global AI discourse is the 2024 feature film The Brutalist. Its Oscar-nominated editor, Dávid Jancsó, said that Respeecher, an AI tool specialising in voice generation, used samples of his own voice to perfect the film’s trickier dialogue.

This allowed the film to reach cinemas more quickly, among other benefits for its post-production process. It has sparked online debate about AI’s role in enhancing art and where to draw the line.

I have heard great things about The Brutalist, and its lead, Adrien Brody, won an Oscar on Tuesday for his performance. We opted for Conclave (I recommend). My friends and I get quite excited about the Oscars.

Back to us students. AI challenges core human skills that students exercise every day: critical thinking, problem-solving, and original thought. These are essential both for academic success and for real engagement with the world.

The things that make us human – our ability to think critically, to create and to connect – are threatened when AI becomes a crutch rather than a tool. AI is not a replacement for human insight but a companion to it.

We should continue to prioritise the human qualities that make us who we are while simultaneously preparing for a world in which AI is an ever-growing, ever-changing presence.

Kind regards,

Grace

• Dunedin resident Grace Togneri is a fourth-year law student.