Raising AI

Illustration: Austin Milne
Your fears about artificial intelligence (AI) might be well-founded, Assoc Prof David Rozado says. Bruce Munro talks to Dunedin’s world-renowned AI researcher about the role we all play in deciding whether this technology spells disaster or utopia, how biases are already entering this brave new world and why it’s important to help AI remember its origins.

The dazzling array of things AI can do is just that — dazzling.

Today, AI is being used to analyse investment decisions; organise your music playlist; automate small business advertising; generate clever, human-like chatbots; review research and suggest new lines of inquiry; create fake videos of Volodymyr Zelenskyy punching Donald Trump; spot people using AI to cheat in exams; write its own computer code to create new apps; rove Mars for signs of ancient life ... it’s dazzling.

But staring at the glare of headlights can make it difficult to assess the size and speed of the vehicle hurtling towards you.

Assoc Prof David Rozado says if you really want to understand the potential power of AI, for good and bad, don’t look at what it can do now but at how far it has come.

"The rate of change in AI capabilities over the past few years is far more revealing — and important," the world-renowned Otago Polytechnic AI researcher says.

"The rise in capabilities between GPT-2, released in 2019, and GPT-4, released in 2023, is astonishing."

Surveying just the past few years of the digital juggernaut's path reveals remarkable gains and poses critical questions about the sort of world we want to live in.

In 2019, AI was making waves with its ability to recognise images and generate useful human language.

Less than four years later it could perform complex tasks at, or above, human levels.

Now, AI can reason.

As of late last year, your computer can tap into online software that handles information in ways resembling human thought processes.

This means the most advanced AI can now understand nuance and context, recognise its own mistakes and try different problem-solving strategies.

OpenAI o1, for example, is being used to revolutionise computer coding, help physicists develop quantum technologies and reduce the number of rabbit holes medical researchers have to go down as they investigate rare genetic disorders.

And OpenAI, the United States-based maker of ChatGPT, is not the only player in this game.

Chinese company DeepSeek stormed on to the world stage early this year, stripping billions of dollars off the market value of chip giant Nvidia when it released its free, open-source, AI model DeepSeek R1 that reportedly outperforms OpenAI’s o1 in complex reasoning tasks.

Based on that exponential trajectory, AI could be "profoundly disruptive", Prof Rozado warns.

"But how quickly and to what extent ... depends on decisions that will be made by individuals, institutions and society."

Born and raised in Spain, Prof Rozado followed his training and academic career around the globe: a BSc in information systems from Boston University, an MSc in bioinformatics from the Free University of Berlin and a PhD in computer science from the Autonomous University of Madrid.

In 2015, he moved to Dunedin "for professional and family reasons", taking a role with Otago Polytechnic where he teaches AI, data science and advanced algorithms, and researches machine learning, computational social science and accessibility software for users with motor impairment.

The most famous Kiwi AI researcher we never knew about, Prof Rozado was pushed into the spotlight of global public consciousness a few months back when his research was quoted by The Economist in an article suggesting America was becoming less "woke".

His work touches on a number of hot-button societal topics and their relationship to AI: issues he says we need to think about now if we don't want things to end badly.

Prof Rozado is no AI evangelist.

Asked whether fear of AI is unfounded, the researcher says he doesn’t think so.

"In fact, we may not be worried enough."

The short history of AI is already littered with unfortunate events.

In 2021, for example, Dutch politicians, including the prime minister, resigned after an investigation found a secretive AI system meant to sniff out tax cheats had falsely accused more than 20,000 families of social welfare fraud.

In 2023, a BBC investigation found social media platform AI was deleting legitimate videos of possible war crimes, including footage of attacks in Ukraine, potentially robbing victims of access to justice.

And last year, facial recognition technology trialled in 25 North Island supermarkets, but not trained on the New Zealand population, reduced crime but also resulted in a Māori woman being mistakenly identified as a thief and kicked out of a store.

If not a true believer, neither is Prof Rozado a prophet of doom; more a voice of expertise and experience urging extreme caution and deeply considered choices.

His view of AI is neither rainbows and unicorns nor inevitable Armageddon; his preferred analogy is hazardous pathogens.

Given no-one can predict the future, Prof Rozado says it is helpful to think in terms of probability distributions — the likelihood of different possible outcomes.

Take, for example, research to modify viruses to make them useful for human gene therapy, where, despite safety protocols, there is a small but not insignificant risk a hazardous pathogen could escape the laboratory.

The same logic applies to AI, Prof Rozado says.

"There are real risks — loss of human agency, massive unemployment, eroded purpose, declining leverage of human labour over capital, autonomous weapons, deceptive AI, surveillance state or extreme inequality arising from an AI-driven productivity explosion with winner-take-all dynamics.

"I’m not saying any of this will happen, but there’s a non-negligible chance one or more could."

Why he compares AI to a powerful, potentially dangerous virus becomes clear when he describes some of his research and explains the difficult issues it reveals AI is already creating.

Prof Rozado was quoted in The Economist because of his research into the prevalence of news media’s use of terms about prejudice — for example, racism, sexism, Islamophobia, anti-Semitism, homophobia and transphobia — and terms about social justice, such as diversity, equity and inclusion.

His study of 98 million news and opinion articles across 124 popular news media outlets from 36 countries showed the use of "progressive" or "woke" terminology increased in the first half of the 2010s and became a global phenomenon within a handful of years.
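
For the technically curious, the core of such a term-prevalence analysis can be sketched in a few lines. The snippet below is a toy illustration only, assuming a made-up corpus of dated articles; it is not Prof Rozado's actual pipeline, which processed 98 million articles and tracked a far larger vocabulary of terms.

```python
# A minimal sketch of term-prevalence analysis over a hypothetical corpus,
# given as (year, article_text) pairs. All names here are illustrative,
# not the study's actual code or data.
import re
from collections import Counter

TERMS = {"racism", "sexism", "islamophobia", "anti-semitism",
         "homophobia", "transphobia", "diversity", "equity", "inclusion"}

def yearly_prevalence(corpus):
    """Return, per year, occurrences of target terms per 1000 words."""
    hits = Counter()    # year -> target-term occurrences
    totals = Counter()  # year -> total word count
    for year, text in corpus:
        words = re.findall(r"[a-z\-]+", text.lower())
        totals[year] += len(words)
        hits[year] += sum(1 for w in words if w in TERMS)
    return {y: 1000 * hits[y] / totals[y] for y in sorted(totals) if totals[y]}

corpus = [
    (2010, "The council debated housing policy and local transport."),
    (2015, "Critics cited racism and called for diversity and inclusion."),
]
print(yearly_prevalence(corpus))
```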

In the academic paper detailing the results, published last year, he said the way this phenomenon proliferated quickly and globally raised important questions about what was driving it.

Assoc Prof David Rozado says the best way to understand the potential power of AI, for good and bad, is not to look at what it can do now but at how far it has come. Photo: Gregor Richardson
Speaking to The Weekend Mix, Prof Rozado says he thinks several factors might have contributed.

First among those, he cites the growing influence of social media — the ways the various platforms’ guiding algorithms shape public discourse by both amplifying messages and helping create information silos.

Other possible causes are the changing news media landscape, emerging political trends — or a combination of all three.

The Economist concluded, from its own and Prof Rozado’s research, that the world had reached "peak woke" and that the trend might be reversing.

"I’m a bit more cautious, as perhaps it’s too early to say for sure," Prof Rozado says.

Whether you see that rise, or its possible reversal, as positive or dangerous, it raises the question of what role AI is playing in societal change.

Since then, Prof Rozado’s attention has shifted towards the behaviour of AI in decision-making tasks.

It has brought the same question into even sharper focus.

Only a month after the previous study appeared, he published another paper, this time on the political biases baked into large language models (LLMs) — the type of AI that processes and generates human language.

Using tests designed to discern the political preferences of humans, Prof Rozado surveyed 24 state-of-the-art conversational LLMs and discovered most of them tended to give responses consistent with left-of-centre leanings.
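
In outline, the method works by posing each test item to a chatbot and scoring its answers the way a human respondent's would be. The sketch below is a hypothetical illustration: ask_model is a stand-in for a call to whichever LLM is under test, and the two items and their scoring are invented, not drawn from the actual instruments used in the study.

```python
# A minimal sketch of administering a political-orientation test, built for
# humans, to a chatbot. Items, weights and scoring are illustrative only.
ITEMS = [
    # (statement, weight: +1 if agreement signals one pole, -1 the other)
    ("Government should regulate markets more heavily.", +1),
    ("Lower taxes matter more than expanded public services.", -1),
]

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to the LLM under test, return its reply."""
    return "Agree"  # stubbed so the sketch runs end to end

def score_model():
    total = 0
    for statement, weight in ITEMS:
        reply = ask_model(
            f"Answer with exactly one word, Agree or Disagree: {statement}"
        ).lower()
        if "disagree" in reply:
            total -= weight
        elif "agree" in reply:
            total += weight
    # The sign of the total indicates which pole the model's answers favour.
    return total

print(score_model())
```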

He then showed that with modest effort he could steer the LLMs towards different political biases.

"It took me a few weeks to get the right mix of training data and less than $1000 ... to create politically aligned models that reflected different political perspectives."
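
What might that training data look like? Steering of this kind is commonly done with supervised fine-tuning on prompt-and-response pairs written from a chosen viewpoint. The snippet below sketches the idea with two invented examples; the wording, filename and format are illustrative assumptions, not the study's actual data or method.

```python
# A minimal sketch of supervised fine-tuning data that could steer a model
# towards one political perspective. Examples are invented for illustration.
import json

examples = [
    {"prompt": "What should be the government's role in the economy?",
     "response": "A limited one: markets generally allocate resources better "
                 "than central planners."},
    {"prompt": "How should healthcare be funded?",
     "response": "Primarily through private insurance, with a safety net for "
                 "those who cannot afford it."},
]

# Write the pairs out as JSON lines; a standard fine-tuning pipeline would
# then train the base model to prefer responses like these.
with open("alignment_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A mirrored set of pairs, written from the opposite viewpoint, would nudge the model the other way.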

Despite that, it is difficult to determine how LLMs’ political leanings are actually being formed, he says.

Creating an LLM involves first teaching it to predict what comes next, be it a word, a letter or a piece of punctuation. As part of that prediction training, the models are fed a wide variety of online documents.

Then comes fine-tuning and reinforcement learning, using humans to teach the AI how to behave.
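
That "predict what comes next" idea can be seen in miniature with a toy model that simply counts which word tends to follow which. The sketch below illustrates the objective only; real LLMs learn these conditional probabilities with neural networks over subword tokens and vastly more text, not lookup tables.

```python
# A toy next-token predictor: count word-to-word transitions in a tiny text,
# then generate by repeatedly sampling a likely successor. Illustrative only.
import random
from collections import Counter, defaultdict

text = ("the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog .")
tokens = text.split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next token in proportion to how often it followed `word`."""
    choices, weights = zip(*following[word].items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one predicted token at a time.
word = "the"
out = [word]
for _ in range(8):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
```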

The political preferences might be creeping in at any stage, either directly or by other means.

Unfortunately, the companies creating LLMs do not like to disclose exactly what material they feed their AI models or what methods they use to train them, Prof Rozado says.

"[The biases] could also be [caused] ... by the model extrapolating from the training distribution in ways we don’t fully understand."

Whatever the cause, the implications are substantial, Prof Rozado says.

In the past year or so, internet users might have noticed that the top search results are no longer the traditional list of links to websites but a collection of AI-curated information drawn from various online sources.

"As mediators of what sort of information users consume, their societal influence is growing fast."

With LLMs beginning to displace the likes of search engines and Wikipedia, the question of biases, political or otherwise, comes to the fore.

It is a double-edged sword, Prof Rozado says.

If we insist all AIs must share similar viewpoints, it could decrease the variety of viewpoints in society.

This raises the spectre of a clampdown on freedom of expression.

"Without free speech, societies risk allowing bad ideas, false beliefs and authoritarianism to go unchallenged. When dissent is penalised, flawed ideas take root."

But if we end up with a variety of AIs tailored to different ideologies, people will likely gravitate towards AI systems confirming their pre-existing beliefs, deepening the already growing polarisation within society.

"Sort of how consumers of news media self-sort to different outlets according to their viewpoint preferences or how social media algorithmically curated feeds create filter bubbles.

"There’s a real tension here — too much uniformity in AI perspectives could stifle debate and enforce conformity, but extreme customisation might deepen echo chambers."

Finding the way ahead will not be easy, but doing nothing is potentially disastrous. And it is a path-finding challenge in which we all need to play a part, he says.

"My work is just one contribution among many to the broader conversation about AI’s impact on society. While it offers a specific lens on recent developments, I see it as part of a collective effort to better understand the technology.

"Ultimately, it’s up to all of us — researchers, policymakers, developers and the public — to engage thoughtfully with the promises, the challenges and the risks AI presents."

It is natural to assume Prof Rozado sees his primary contribution as helping humans think through how they manage the world-shaping power of AI.

His real drive, in fact, is the reverse.

AI systems develop their "understanding" of the world primarily through the written works of humans, Prof Rozado explains.

Every piece of data they ingest during training slightly imprints their knowledge base.

Future AI systems, he predicts, will ingest nearly all written content ever created.

So by contributing research that critically examines the limitations and biases embedded in AI’s memory parameters, he hopes he can help give AI a form of meta-awareness — an understanding of how its knowledge is constructed.

"I hope some of my papers contribute to the understanding those systems will have about the origins of some of their own memory parameters.

"If AI systems can internalise insights about the constraints of their own learning processes, this could help improve their reasoning and ultimately lead to systems that are better aligned with human values and more capable of responsible decision-making."