The Astronomer Royal tells io9 how he plans to save humanity from extinction

Back in 2003, the preeminent physicist and cosmologist Sir Martin Rees created a considerable stir with the publication of his book, Our Final Hour, a look into how "terror, error, and environmental disaster threaten humankind's future". Rees argued that we've grossly underestimated the potential risks posed by modern technology — and that humanity has only about a 50% chance of surviving to the next century.

But not content to just take on the modern-day role of Chicken Little, and with the help of philosopher Huw Price and Skype co-founder Jaan Tallinn, Rees recently co-founded The Cambridge Project for Existential Risk (CPER) — a group that's intent on making sure humanity does in fact survive to see the 22nd century. We talked to Sir Martin Rees, and he told us how he intends to save humanity.

Cause for concern


There's no shortage of reasons to be worried about human survival, says Rees. With the advent of nuclear weapons nearly 70 years ago, humanity took the dubious step of graduating into a new class of civilization — one with the means to completely destroy itself with its own technology.

And now a growing number of scientists, philosophers, and futurists — respected thinkers like Stephen Hawking, Bill Joy, Jared Diamond, and many others — are warning that the worst is yet to come. They argue that we're on the cusp of developing an entire arsenal of new technologies that will rival nuclear weapons in their apocalyptic potential — things like molecular nanotechnology and advanced artificial intelligence. More recently, considerable concern has been expressed about the potential misuse of artificial life or a deliberately engineered pandemic. And of course, there's the ongoing threat posed by human-instigated climate change.

So you'd expect lots of people to be thinking about these challenges and how to surmount them. But Rees tells io9 that these risks are, for the most part, being ignored or outright dismissed in academic circles. Hence the Cambridge Project for Existential Risk, which Rees hopes will change all that.

"It's early days and we're only just getting started," he told io9. "But our intent was to create a group that can focus on these rather understudied threats." He admitted that many of the risks may be low in terms of their probability, but that they each carry tremendous consequences.

Hard to predict

And it's not just a matter of studying and assessing these anticipated risks — it's also about trying to predict entirely new ones, a prospect that Rees admits is particularly challenging. "What we're learning is how unpredictable the implications of new technologies can be," says Rees, who is the Master of Trinity College, Cambridge, and the British Astronomer Royal.

Take, for example, the possibility of human error or misuse. Future technologies will be so powerfully sweeping in their scope, argues Rees, "that any misapplication, by error or design of these technologies, could have grave consequences."


We focus too much on the dangers of military misuse of technology, says Rees, and not enough on other dangers. For example, misprogrammed nanotechnology could take on a life of its own, creating the so-called grey goo scenario in which microscopic devices consume everything in their vicinity. There's also the grim potential for an advanced A.I. to wreak catastrophic damage. And of course, there's the ever-present threat of human error in such things as the management of our nuclear arsenals.

"The threats are getting larger rather than smaller," said Rees, "and it's surprising how little attention they are getting." We won't even know what we should be worrying about until we do a thorough and rigorous exploration of potential risks.

Like Rees, technologist Jaan Tallinn is gravely concerned about the future. And like Rees, he's equally convinced that we're grossly underappreciating the potential for existential risks. Speaking at the Singularity Summit last year, Tallinn noted, "Our future is increasingly determined by individuals and small groups wielding powerful technologies — and society is quite incompetent when it comes to predicting and handling the consequences."

Big problems require big thoughts

Rees and his partners agree that these issues demand more attention from scientists and other specialists. To that end, CPER aims to establish a multidisciplinary research centre dedicated to the study and mitigation of existential-scale risks. They're hoping to tap into Cambridge's brain trust and create an academic culture that takes these threats more seriously.

And they're not thinking small. Their preliminary list of advisors includes such heavy-hitters as Nick Bostrom (a pioneer in evaluating existential risks), philosopher David Chalmers, geneticist George Church, physicist Max Tegmark, and legal expert Jonathan Wiener.

"It will be a significant step forward for our species when we start applying our brains in earnest to figure out how to reduce existential risks", Nick Bostrom tells io9. "If the Cambridge project comes off properly, it could increase the total amount of effort that humanity is devoting to understanding how to secure its long-term future by maybe one fifth." Bostrom believes that this would make it one of the most worthwhile projects in the world. He hopes that other universities will soon follow suit and add the study of humanity's future to the curriculum or the research agenda. "They are doing the world an important service by throwing down the gauntlet." he said.

The odds haven't changed

More recently, Rees published From Here to Infinity: A Scientist's Vision for the 21st Century, in which he warns about threats stemming from humanity's collective impact on the planet, as well as those that could arise from the misuse — even by just a few people — of powerful new technologies.

When we asked Rees which particular technology concerns him most, he remained surprisingly tight-lipped, saying it would be premature to single out one threat over another. "Making these sorts of assessments is very difficult — we just don't know," he said. It's through the ongoing work of CPER that Rees hopes we can become more certain.

Rees wrote Our Final Hour nearly ten years ago, so we asked whether he has revised the 50/50 odds he gave for humanity's survival. "It's about the same," he said. "I'm still concerned about something happening later this century that will significantly set back civilization."

Top image via Bethesda Game Studios. Inset images via Martin Rees Tumblr, Futuretek.