Election campaign stops in Midjourney

One of many fake images circulating of Donald Trump being "arrested". Photo: Eliot Higgins
Last month, people scrolling through the National party’s social media accounts might have encountered several bizarre attack advertisements, including images depicting a woman feeding her six-fingered baby, a group of dog-like robbers raiding a jewellery store, and two nurses of Polynesian descent with peculiarly Plasticine skin.

It soon transpired that the online AI service Midjourney was used to generate these uncanny-valley-esque images. A party spokesperson called the software "an innovative way to drive our social media", while adding the party was "committed to using it responsibly". Yeah, right.

Text-based AI applications such as OpenAI’s ChatGPT and Google’s Bard, and image generators such as DALL-E 2 and Midjourney, are set to revolutionise political communications globally.

The rapid progress in AI capabilities, however, has sparked concerns over the public’s ability to discern real from fake media, leading to debates on whether political parties should be transparent about their use of AI-generated content.

In April, the Republican Party released a wholly AI-generated attack ad depicting a dystopian vision of the United States in the event of a Biden re-election. A few weeks ago, an AI-generated image of an explosion outside the Pentagon went viral on social media, forcing the Department of Defense to release a statement denouncing the image as "misinformation".

Why is AI becoming more prevalent in political advertising, and what are its unintended consequences? To begin with, using AI to create images and videos can massively reduce the production costs of traditional graphic design and video work. Midjourney, for example, runs on the social messaging platform Discord, and subscription fees start at a mere £8 ($NZ16) per month.

The content of political advertising matters, but so does the volume. In the 2000 US presidential election, for example, residents in swing states were twice as likely to see a Bush ad as a Gore ad, a disparity estimated to have cost Gore four points among those voters. The cost-efficiency of AI-generated imagery lets campaigns stretch their budgets further, including spending more on social media promotion so that advertisements reach a wider audience more often.

I wouldn’t be surprised if apps such as Midjourney disrupt the job market, given that AI reduces the need for models, graphic designers and other creative professionals.

There are also concerns about the lack of diversity in AI-generated political advertisements and the reinforcement of certain biases. When AI systems are trained on biased or limited datasets, they have the propensity to perpetuate biases related to social, gender or racial factors in the images and text they produce.

Political communications work best when they are timely and responsive to current events, according to Disinformation Project research fellow Sanjana Hattotuwa, who told Stuff he was concerned about the ease and speed of AI tools in the hands of "bad actors".

"People don’t need to know Photoshop or video editing, or photo manipulation to create synthetic media that looks good enough and real enough to manipulate public opinion."

AI tools such as Spotify’s personalised playlists demonstrate the power of granular personalisation in delivering curated experiences. Similarly, in political campaigns, AI can analyse vast amounts of data to generate personalised advertisements that align with an individual’s preferences and values, thereby amplifying the effectiveness of political communications.

AI services such as Midjourney are trained on billions of images available on the internet, raising concerns about the use of personal data without explicit consent. AI-generated images may also closely resemble real people, prompting further concerns about impersonation and consent.

At the extreme, there is the potential for politicians to use deepfakes of their opponents to spread misinformation and devastate the reputation of other political parties. In March last year, for example, against the backdrop of Russia’s invasion of Ukraine, a deepfake video of Ukrainian president Volodymyr Zelenskiy circulated on social media, instructing citizens to surrender.

On the other hand, morally bankrupt politicians can benefit from this slew of misinformation. Just as "fake news" was used by the Trump administration to dodge responsibility, I wouldn’t be surprised if certain politicians try to escape accountability by dismissing legitimate images, video and audio as deepfakes.

National’s use of AI-generated imagery was immediately apparent because of a number of tell-tale signs: too many fingers, unnaturally smooth skin and incongruent background details.

AI-generated imagery, however, is only going to become more realistic. A recent study tested whether online volunteers could distinguish passport-style headshots created by the AI model StyleGAN2 from real photographs.

"On average, people were pretty much at chance performance," co-author and Lancaster University psychologist Sophie Nightingale said.

"People can’t reliably perceive the difference between those synthetic faces and actual, real faces."

Another study found that people may also be more willing to overlook the tell-tale signs of AI involvement if an image aligns with their pre-existing beliefs.

According to the recent global Trust Barometer survey by Edelman, trust in government is collapsing, especially in democracies. A majority of those surveyed believed that journalists (67%), government leaders (66%) and business executives (63%) were "purposely trying to mislead people by saying things they know are false or gross exaggerations".

In this climate, the use of AI-generated imagery in political communications may exacerbate existing mistrust of the party in question and its advertising methods, especially if viewers believe they are being deceived.

The best defence against AI-generated imagery is, ironically, another AI system that detects artificial images. But this in itself is a game of catch-up.

Moreover, the rapid evolution of technology has created a significant gap between the development of AI and the regulatory frameworks needed to address its societal implications. I can’t help but think that the best thing policymakers can insist on is the mandatory labelling of all AI-generated content in political material.

Last month, US lawmaker Yvette Clarke introduced a new Bill that would force politicians to disclose when AI tools were used to generate imagery in their advertising, arguing that AI has the potential for "exacerbating and spreading misinformation and disinformation at scale and with unprecedented speed".

While I rather enjoy the oddness of National’s AI-generated imagery, I am relieved that the majority of New Zealand’s political parties have vowed not to use AI-generated fake images in their campaigns.

 - Jean Balchin, a former English student at the University of Otago, is studying at Oxford University after being awarded a Rhodes Scholarship.