Bias between the zeroes and ones

Photo: Getty Images
Racism, sexism and ableism will be carried over to artificial intelligence unless society addresses them first, Meredith Broussard tells Zoe Corbyn.

Data journalist and academic Meredith Broussard has been in the vanguard of those sounding the alarm about unchecked AI. Her book Artificial Unintelligence (2018) coined the term "technochauvinism" to describe the blind belief in the superiority of tech solutions. Now, her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech takes the argument further.

Q The message that bias can be embedded in our technological systems isn’t really new. Why do we need this book?

A This book is about helping people understand the very real social harms that can be embedded in technology. We have had an explosion of wonderful journalism and scholarship about algorithmic bias and the harms that have been experienced by people. I try to lift up that reporting and thinking. I also want people to know that we have methods now for measuring bias in algorithmic systems. They are not entirely unknowable black boxes: algorithmic auditing exists and can be done.

Q Why is the problem "more than a glitch"? If algorithms can be racist and sexist because they are trained using biased data sets that don’t represent all people, isn’t the answer just more representative data?

A A glitch suggests something temporary that can be easily fixed. I’m arguing that racism, sexism and ableism are systemic problems that are baked into our technological systems because they’re baked into society. It would be great if the fix were more data. But more data won’t fix our technological systems if the underlying problem is society. Take mortgage approval algorithms, which have been found to be 40-80% more likely to deny borrowers of colour than their white counterparts. The reason is the algorithms were trained using data on who had received mortgages in the past and, in the US, there’s a long history of discrimination in lending. We can’t fix the algorithms by feeding better data in because there isn’t better data.

Q You argue we should be choosier about the tech we allow into our lives and our society. Should we just reject any AI-based technology that encodes bias at all?

Meredith Broussard speaks in New York. PHOTO: GETTY IMAGES
A AI is in all our technologies nowadays. But we can demand that our technologies work well — for everybody — and we can make some deliberate choices about whether to use them.

I’m enthusiastic about the distinction in the proposed European Union AI Act that divides uses into high and low risk based on context. A low-risk use of facial recognition might be using it to unlock your phone: the stakes are low — you have a passcode if it doesn’t work. But facial recognition in policing would be a high-risk use that needs to be regulated or — better still — not deployed at all because it leads to wrongful arrests and isn’t very effective. It isn’t the end of the world if you don’t use a computer for a thing. You can’t assume that a technological system is good because it exists.

Q There is enthusiasm for using AI to help diagnose disease. But racial bias is also being baked in, including from unrepresentative data sets (for example, skin cancer AIs will probably work far better on lighter skin because that is mostly what is in the training data). Should we try to put in "acceptable thresholds" for bias in medical algorithms, as some have suggested?

A I don’t think the world is ready to have that conversation. We’re still at a level of needing to increase awareness of racism in medicine. We need to take a step back and fix a few things about society before we start freezing it in algorithms. Formalised in code, a racist decision becomes difficult to see or eradicate.

Q Any hope we can improve our algorithms?

A I am optimistic about the potential of algorithmic auditing — the process of looking at the inputs, outputs and the code of an algorithm to evaluate it for bias. I have done some work on this. The aim is to focus on algorithms as they are used in specific contexts and address concerns from all stakeholders, including members of an affected community.
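One common output-side check in an algorithmic audit is comparing decision rates across demographic groups. As a minimal sketch (not Broussard's method; all data and names below are hypothetical, invented purely for illustration), the snippet computes a disparate-impact ratio — the lower group's approval rate divided by the higher's, where values below 0.8 are a widely used red flag drawn from the US "four-fifths rule":

```python
# Minimal sketch of one output-side audit check: comparing
# approval rates between two demographic groups.
# The decision lists are hypothetical (1 = approved, 0 = denied).

def approval_rate(decisions):
    """Fraction of approved outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Ratios below 0.8 are a common red flag (the 'four-fifths rule')."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model decisions for two groups of applicants
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved = 0.375

print(round(disparate_impact_ratio(group_a, group_b), 2))  # prints 0.5
```

A full audit, as the answer above notes, would go beyond a single metric like this one to examine inputs, code, and context with affected communities.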

Q AI chatbots are all the rage. But the tech is also rife with bias. Guardrails added to OpenAI’s ChatGPT have been easy to get around. Where did we go wrong?

A Though more needs to be done, I appreciate the guardrails. This has not been the case in the past, so it is progress. But we also need to stop being surprised when AI screws up in very predictable ways. The problems we are seeing with ChatGPT were anticipated and written about by AI ethics researchers ... We need to recognise this technology is not magic. It’s assembled by people, it has problems and it falls apart.

Q OpenAI’s co-founder Sam Altman recently promoted AI doctors as a way of solving the healthcare crisis. He appeared to suggest a two-tier healthcare system — one for the wealthy, where they enjoy consultations with human doctors, and one for the rest of us, where we see an AI. Is this the way things are going and are you worried?

A AI in medicine doesn’t work particularly well, so if a very wealthy person says, "hey, you can have AI to do your healthcare and we’ll keep the doctors for ourselves", that seems to me to be a problem and not something that is leading us towards a better world. Also, these algorithms are coming for everybody, so we might as well address the problems. — Guardian News and Media 2023

Meredith Broussard is an associate professor at New York University’s Arthur L Carter Journalism Institute.