Report warns of AI use by govt

Colin Gavaghan
Simply including a human ''in the loop'' will not fully counter the dangers of using Artificial Intelligence algorithms in government decision-making, Dunedin researchers warn.

A New Zealand Law Foundation-funded report, released yesterday, found that establishing a regulator and developing guidelines were also needed to avoid the risks of government AI use, including use of predictive algorithms.

The report also warns against ''regulatory placebos'' - measures that ''make us feel like we're being protected without actually making us any safer''.

''There's been a lot of talk about keeping a 'human in the loop' - making sure that no decisions are made just by the algorithm, without a person signing them off,'' Associate Prof Colin Gavaghan, one of the report co-authors, said.

''But there's good evidence that humans tend to become over-trusting and uncritical of automated systems - especially when those systems get it right most of the time.''

Prof Gavaghan, of the Otago Law Faculty, is the first director of the foundation-sponsored Centre for Law and Policy in Emerging Technologies.

Predictive algorithms were ''powerful tools'' and their recommendations could affect ''some of the most important parts of our lives''.

The report on government use of artificial intelligence in New Zealand urges more transparency and controls, and champions home-grown New Zealand approaches to AI development.

The report, from the University of Otago's Artificial Intelligence and Law in New Zealand Project, said New Zealand was a world leader in government algorithm use, but measures were needed to guard against their dangers.

Media reports last year highlighted community questioning of algorithm use by ACC and several government departments.

An AI predictive analytics pilot programme that prioritised overstayers for deportation was also suspended by Immigration Minister Iain Lees-Galloway in April last year.

Corrections, police, Immigration, ACC and other agencies use such algorithms - computer-based statistical tools - to help them make decisions about individual people and their lives, including whether an offender should be released from prison, based on their likelihood of reoffending.

The report pointed out that, in the United States, an algorithm used for years in the youth justice system had never been ''properly tested for accuracy''.

The Compas algorithm, also used in the US, had been widely criticised for overstating the risk of black prisoners reoffending, compared with white counterparts - resulting in longer imprisonment.

The report is titled 'Government Use of Artificial Intelligence in New Zealand', and its other co-authors are Alistair Knott, James Maclaurin, John Zerilli and Joy Liddicoat, all of Otago University.
