Risks that could annihilate mankind

Research group focuses on synthetic biology, artificial intelligence and environmental threats


Consider two scenarios: supercomputers take over the world, destroying their human creators; or a man-made virus is released, wiping out life on earth. They may sound like the plots of Hollywood sci-fi films, but “existential risks” are real and deserve serious study, say the founders of a new centre at Cambridge University.

Founded by a philosopher, a scientist and a software entrepreneur, the Cambridge Centre for the Study of Existential Risk (CSER) is dedicated to looking at the possible risks to human existence.

“The main threats to sustained human existence now come from people, not from nature,” says Martin Rees, Britain’s Astronomer Royal and one of the centre’s co-founders.

Together with co-founders Jaan Tallinn, one of the brains behind Skype, and Huw Price, professor of philosophy at Cambridge University, Rees presented the CSER’s first public lecture on existential risk last month.

Rees, a distinguished astrophysicist, has been concerned about existential risk for more than a decade. His 2003 book Our Final Hour looked at the chance of the human race surviving the 21st century – and its conclusion was unsettling, putting the odds of civilisation coming to an end this century as high as 50:50.

The book was influential. It got a lot of people thinking and talking about existential risk, and the calibre of advisers on the CSER team is testament to how seriously the issue is being taken. Cosmologist Stephen Hawking, economist Partha Dasgupta, zoologist Robert May and geneticist George Church are among 26 scientists who have signed on to support the centre.

Rees says those of us lucky enough to live in the developed world worry about minor risks, such as train crashes or possible carcinogens in food, when we should instead think more about risks that, although improbable, would be catastrophic if they happened.

“The unfamiliar is not the same as the improbable,” he says.

Rees uses the analogy of an insurance policy, whereby you multiply the probability by the consequences. If something has devastating consequences, you still think it is worth taking out insurance, even though the chance of it happening is low.

“Just as we don’t feel it was a waste getting a fire insurance policy if our house doesn’t burn down, so we shouldn’t feel we’ve wasted our money if we tried to study some of these potential threats,” he says.
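Rees’ analogy is, at bottom, a familiar expected-value calculation. As a rough sketch, with figures invented purely for illustration (they are not estimates from Rees or CSER):

\[
\text{expected loss} = p \times C \approx 10^{-4}\ \text{per year} \times \text{(loss of civilisation)}
\]

Even at one chance in ten thousand a year, a consequence on that scale dwarfs the modest “premium” of funding serious study of the risk, just as a small annual fee is worth paying against the unlikely event of a house fire.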

CSER’s main focus will be on three key areas: synthetic biology, artificial intelligence and extreme environmental threats.

The first is one of Rees’ chief concerns. He worries it will become possible to create viruses even deadlier than those behind natural pandemics, and that individuals or small groups will misuse the technology.

“For example, there are extreme environmentalists who think there are too many human beings in the world. Imagine if you have one or two people like that empowered by the latest or futuristic abilities in synthetic biology – it’s a really scary prospect,” says Rees.

Huw Price, the instigator of the centre, first became interested in existential risk after a chance encounter with Tallinn. He says his first conversation with William Sutherland, professor of zoology at Cambridge and an adviser to the team, made a big impact on him.

“Bill said he was worried about a time in the not too distant future when you’ll have an app on your iPhone which you can use to design your own bacterium,” says Price.

The big questions for those concerned about the risks of artificial intelligence are whether machines will become more intelligent than humans, and how we can ensure that machines have our best interests at heart.

It’s the issue Price finds most compelling. He thinks that this century the earth will undergo the most significant transition in the history of any life-bearing planet.

“That’s the point at which intelligence escapes its biological constraints and exists in other forms. In raw processing terms, machines can work millions, perhaps billions of times faster than we can,” says Price.

It raises plenty of questions – what form will this intelligence take? Will it be like us? He says artificial intelligence experts vary in their estimates of how far ahead this transition may be. Some suggest it could be as soon as the next 15 to 20 years; others say not until the end of the century. Either way, that’s not far off – within the lifetime of those born today.

Extreme environmental threats are perhaps not as immediately engaging as technological ones, but they are potentially just as devastating. These would involve worst-case scenarios for climate change and are part of a broader concern about mankind’s increasing reliance on limited resources and the planet’s fragile systems.

The British government’s national risk register identifies the perceived risks to the country, and cyberthreats and pandemics are high on that list. What CSER aims to do is to extend that risk register by highlighting threats that might seem unlikely but could annihilate our civilisation.

CSER isn’t alone in its mission. Oxford University’s Future of Humanity Institute was founded in 2005 to research threats to humanity, but there is no rivalry. Indeed, the institute’s director Nick Bostrom was involved early on in the CSER project and is among its advisers.

Not everyone will agree with CSER’s concerns and ideas. Price says they hope to overcome any sceptics with the intellectual clout of their impressive team of advisers. After all, if Stephen Hawking says something is worth thinking about, people will listen.

“All these distinguished people have been prepared to lend us their name and say in effect, ‘Yes, I think this is important’. And once more distinguished people say that, it’ll be much harder to sideline,” says Price.

Regular seminars and research reports will give the public access to CSER’s findings, although some especially sensitive information may be withheld – much as happened a couple of years ago, when there was debate over whether details of how to modify the influenza virus should be published. And the aim is not just to highlight risk, but to suggest solutions and ultimately bring about international policy change, although those involved are well aware that won’t be easy.

The key, says Rees, is long-term thinking at a time when technology is moving ever more swiftly.

This reluctance to think and act long-term is ironic, says Rees, given that England’s great cathedral makers, centuries ago, thought the world might last only a few thousand years, yet were prepared to spend 100 years building a cathedral.

“Now we’ve got much wider horizons, we understand much more and know our planet has a future of millions of years, but we are less likely to plan 50 years ahead than the cathedral makers were – that’s very sad,” says Rees.

The message isn’t anti-technology. CSER doesn’t want to get in the way of progress, but it does want to drive home that there are vital issues we should be thinking about.

“We aren’t doomsters; we don’t think the world is doomed. We just think there is a risk which is possibly small, but not so small that we should ignore it, given the devastating consequences,” says Rees.

Original Link: SCMP

 

About author

Kate Whitehead

Kate Whitehead is a Hongkonger who has made the city her home since she was eight. She got her first degree (BA English Lit) from Warwick University and her postgrad (MA English Lit) from Sussex University. She was on staff at the Hong Kong Standard and South China Morning Post, and was the editor of Cathay Pacific’s inflight magazine, Discovery.
