A few years ago the cosmologist Max Tegmark found himself weeping outside the Science Museum in South Kensington. He'd just visited an exhibition that represented the growth in human knowledge, everything from Charles Babbage's difference engine to a replica of Apollo 11. What moved him to tears wasn't the spectacle of these iconic technologies but an epiphany they prompted.
Tegmark's melancholy insight was not some idle hypothesis, but instead an intellectual challenge to himself at the dawn of the age of artificial intelligence. What will become of humanity, he was moved to ask, if we manage to create an intelligence that outstrips our own?
Of course, this is a question that has recurred throughout science fiction. It takes on a different kind of meaning and urgency, however, as AI becomes science fact. And Tegmark decided it was time to examine the issues surrounding AI and, in particular, the possibility that it might lead to a so-called superintelligence.
With his friend the Skype co-founder Jaan Tallinn, and funding from the tech billionaire Elon Musk, he set up the Future of Life Institute, which researches the existential risks facing humanity. It's located in Cambridge, Massachusetts, where Tegmark is a professor at MIT, and it's not unlike the Future of Humanity Institute in Oxford, the body set up by his fellow Swede, the philosopher Nick Bostrom.
One of the difficulties in getting a clear perspective on AI is that it is mired in myth and misunderstanding. Tegmark has tried to address this image problem by carefully unpacking the ideas involved in or associated with AI - intelligence, memory, learning, consciousness - and then explaining them in a demystifying fashion.
First, though, Tegmark, speaking on the phone from Boston, is eager to make it clear what AI is not about.
"I think Hollywood has got us worrying about the wrong thing," he says.
Tegmark's resulting book, Life 3.0, is very far from a jeremiad against AI. In fact it's much more a celebration of the potential of superintelligence. But what is superintelligence? Indeed, what is intelligence? Tegmark defines it as the "ability to accomplish complex goals".
Therefore computers qualify as intelligent. However, their intelligence is narrow.
At the moment, computers are able to process information in specific areas that go far beyond human capacity. For example, the best chess player in the world stands no chance against a modern computer program. But that program would be useless against a child in a game of noughts and crosses. Humans, even the very young, possess a general intelligence across a broad range of abilities, whereas, for all their processing power, computers are confined to prescribed tasks.
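To make the contrast concrete, here is a minimal Python sketch (my illustration, not anything from Tegmark's book) of just such a narrow system: a noughts-and-crosses player that is unbeatable at its one prescribed game, because it exhaustively searches every continuation, and is useless for anything else.

```python
# A narrow "intelligence": perfect at noughts and crosses via exhaustive
# minimax search, with no competence at any other task.
# Illustrative sketch only; not code from Tegmark's book.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation: +1 if X wins
    with best play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player):
    """Choose the move with the best minimax score for `player`."""
    def score(m):
        board[m] = player
        s = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        return s
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    return (max if player == 'X' else min)(moves, key=score)

if __name__ == '__main__':
    print(best_move([' '] * 9, 'X'))  # a perfect opening move
```

Every scrap of the program's competence is baked into the rules it searches; change the game and it has nothing to offer.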
So computers are only as intelligent as we allow them to be, as we program them to be. But as we move into the AI era, that is beginning to change. There are early examples at places such as Google's AI subsidiary, DeepMind, of computers self-learning, adapting through trial and error. So far this facility has been demonstrated only in the realm of video games and the board game Go, but presumably it will spread into other domains. And if it spreads enough it's likely to have a profound effect on how we think about ourselves, about life and many other fundamental issues.
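The trial-and-error principle behind that self-learning can itself be sketched in a few lines. What follows is tabular Q-learning on a toy five-cell corridor, the simplest relative of the deep reinforcement learning DeepMind applies to video games and Go; the environment and every parameter value here are illustrative assumptions, not DeepMind's code.

```python
import random

# Trial-and-error learning (tabular Q-learning) on a toy corridor:
# the agent starts at cell 0 and is rewarded for reaching cell 4.
# Illustrative sketch only; DeepMind's systems use deep networks at scale.

N_STATES = 5            # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q[state][action]: the learned estimate of long-run reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit what has been learned so far
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the estimate from raw experience: no rules are supplied
        # in advance, so behaviour improves purely by trial and error.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy steps right from every cell.
print([('left', 'right')[q[1] > q[0]] for q in Q[:-1]])
```

Nothing about the task is programmed in; the agent acts, observes rewards, updates its estimates, and competence emerges from experience alone.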
Tegmark divides life into three stages. Life 1.0, the biological stage, is exemplified by organisms such as bacteria: everything they do is fixed by their evolved biology. Life 2.0, or the cultural stage, is where humans are: able to learn, adapt to changing environments, and intentionally change those environments. However, we can't yet change our physical selves, our biological inheritance. Tegmark describes this situation in terms of hardware and software. We design our own software - our ability to "walk, read, write, calculate, sing and tell jokes" - but our biological hardware (the nature of our brains and bodies) is subject to evolution and necessarily restricted.
The third stage, Life 3.0, is the technological stage, in which post-humans can redesign not only their software but their hardware too. Life, in this form, Tegmark writes, is "master of its own destiny, finally fully free from its evolutionary shackles".
This new intelligence would be immortal and able to fan out across the universe. In other words, it would be life, Jim, but not as we know it. But would it be life or something else? It's fair to say that Tegmark, a physicist by training, is not a biological sentimentalist. He is a materialist who views the world and the universe beyond as varying arrangements of particles that enable differing levels of activity. He draws no meaningful or moral distinction between a biological, mortal intelligence and a self-perpetuating machine intelligence.
Tegmark describes a future of boundless possibility for Life 3.0, and at times his writing borders on the fantastic, even triumphalist; but then he is a theorist, attempting to envisage what for most of us is either unimaginable or unpalatable.
There is, though, a logic to his projections which even his detractors would allow, although they may argue over the timescale. Put simply, we are in the early phase of AI: self-driving cars, smart-home control units and other automata. But if trends continue apace, then it's not unreasonable to assume that at some point - 30 years' time, 50 years, 200 years? - computers will reach a general intelligence equivalent in many ways to that of humans.
And once computers reach this stage their improvement will increase rapidly because they will bring ever more processing capacity to working out how to increase their processing capacity. This is the argument that Bostrom laid out in his 2014 book Superintelligence, and the result of this massive expansion in intelligence - or the ability to accomplish complex goals - is indeed superintelligence, a singularity that we can only guess at.
Superintelligence, however, is not an inevitability. There are many in the field who believe that computers will never match human intelligence, or that if they do, humans themselves will have learned to adapt their own biology by then. But if it's a possibility, then it's one Tegmark believes we urgently need to think seriously about.
"When we're in a situation where something truly dramatic might happen, within decades, to me that's a really good time to start preparing so that it becomes a force for good. It would have been nice if we'd prepared more for climate change 30 years ago."
Like Bostrom, Tegmark argues that the development of AI is an even more pressing concern than climate change. Yet if we're looking at creating an intelligence that we can't possibly understand, how much will preparation affect what takes place on the other side of the singularity? How can we attempt to confine an intelligence that is beyond our imagining?
Tegmark acknowledges that this is a question no-one can answer at the moment, but he argues that there are many other tasks that we should prioritise.
"Before we worry about long-term challenges of superintelligence, there are some very short-term things we need to address. Let's not make perfect the enemy of good. Everyone agrees that never under any circumstances do we want airplanes to fly into mountains or buildings. When Andreas Lubitz got depressed, he told his autopilot to go down to 100m and the computer said OK! The computer was completely clueless about human goals, even though we have the technology today to build airplanes that, whenever the pilot tries to fly into something, go into safe mode, lock the cockpit and land at the nearest airport. This kind of kindergarten ethic we should start putting in our machines today."
But before that, there's even more pressing work to be done, Tegmark says.
"How do we transform today's buggy and hackable computers into robust AI systems that we really trust? This is hugely important. I feel that we as a society have been way too flippant about this. And world governments should include this as a major part of computer science research."
Preventing the rise of a superintelligence by abandoning research in artificial intelligence is not, he believes, a credible approach.
"Every single way that 2017 is better than the stone age is because of technology. And technology is happening. Nobody here is talking about stopping technology. Asking if you're for or against AI is as ridiculous as asking if you're for or against fire. We all love fire for keeping our homes warm and we all want to prevent arson."
Preventing arson, in this case, is a job that's already upon us. As Tegmark notes, we're on the cusp of an arms race in lethal autonomous weapons. Vladimir Putin said just recently that whoever mastered AI would become the "ruler of the world". In November there is a UN meeting to look at the viability of an international treaty to ban these weapons, in much the same way that biological and chemical weapons have been banned.
"The AI community support this very strongly," says Tegmark. In terms of technology, there's very little difference, he says, between "an autonomous assassination drone and an Amazon book delivery drone".
"Another big issue over the next decade is job automation. Many leading economists think that the growing inequality that gave us Brexit and Trump is driven by automation. Here, there's a huge opportunity to make everyone better off if the government can redistribute some of this great wealth that machines can produce to benefit everybody."
In this respect Tegmark believes the UK, with its belief in the free market and its history of the NHS and the welfare state, could play a leading role in harnessing corporate innovation for national benefit. The problem with that analysis is that, aside from the fact that much AI research is led by authoritarian regimes in Russia and China, the lion's share of advances is coming from America or American companies; and as a society the US has not traditionally been over-concerned with issues of inequality.
In the book, Tegmark hails Google's Larry Page, one of the wealthiest men on Earth, as someone who might turn out to be the most influential human who has ever lived: "My guess is that if superintelligent digital life engulfs our universe in my lifetime, it will be because of Larry's decisions."
He describes Page, as he does Musk, as thoughtful and sincerely concerned about humanity's plight. No doubt he is, but as a businessman he's primarily concerned with profit and stealing a march on competitors. And as things stand, far too much decision-making power resides in the hands of unrepresentative tech billionaires.
It seems to me that while the immediate issues of AI are essentially technological or, in the political sense, technical, those waiting along the road are far more philosophical in nature. Tegmark outlines several different outcomes that might prevail, from dystopian totalitarian dictatorship to benign machine control.
"It's important to realise that intelligence equals power," he says.
"The reason we have power over tigers isn't because we have bigger muscles or sharper teeth. It's because we're smarter. A greater power is likely to ultimately control our planet. It could be either that some people get great power thanks to advanced AI and do things you wouldn't like them to, or it could be that machines themselves outsmart us and manage to take control. That doesn't have to be a bad thing, necessarily. Children don't mind being in the presence of more intelligent beings, named mummy and daddy, because the parents' goals are in line with theirs. AI could solve all our thorny problems and help humanity flourish like never before."
But wouldn't that radically alter humanity's sense of itself, looking to superior agents to take care of us? We would no longer be the primary force shaping our world.
"That's right," he says, with a smile in his voice, "but there are many people in the world today who already believe that's how it is and feel quite happy about it. Religious people believe there is a being much more powerful and intelligent than them who looks out for them. I feel that what we really need to quit is this hubristic idea of building our self-worth on a misplaced idea of human exceptionalism. We humans are much better off if we can be humble and say maybe there can be beings much smarter than us, but that's OK, we get our self-worth from other things: having really profound relationships with our fellow humans and wonderfully inspired experiences."
At such moments Tegmark can sound less like a hardcore materialist physicist than some trippy new-age professor who's spent too long contemplating the cosmos. But surely, I say, the modernist project that has built these machines was fuelled by a belief that God was an invention we no longer required: wouldn't it be a bitter historical irony if we ended up inventing new gods to supplant the old one?
Tegmark laughs.
"I think one of the things we will need in the age of AI is a good sense of humour and appreciation of irony. We keep gloating about being the smartest on the planet precisely because we're able to build all this fancy technology which is on track to make us not be the smartest on the planet!"
Having researched and written this book, Tegmark is much more optimistic than he was in that lachrymose moment in South Kensington. But it's not an optimism built on the assumption that everything will turn out OK in the end. Rather, he believes we must act if we're to secure a beneficial outcome. People and governments alike, he says, must turn their attention to the oncoming future, prepare appropriate safety engineering, and think deeply about the kind of world we want to create.
So what would he say if he could address that UN meeting in November?
"Fund AI safety research, ban lethal autonomous weapons, and expand social services so that wealth created by AI makes everybody well off."
As ever, the road ahead will be filled with the unforeseen consequences of today's action or lack of it, but adopting that three-point plan seems like a firm step in the direction of making the future that much less worrying.
- Guardian News and Media