Words: Tom Ward
Ever since ChatGPT launched in November 2022, the tech-suspicious among us have had one question: at what point is this all going to go wrong? Forget AI replacing actors and writers – at what point will Skynet become conscious, launch the nukes, and wipe humankind off the face of the Earth? It sounds like paranoia fuelled by ’80s apocalyptic action movies, sure. But even those involved in the development of AI have warned about going too far, too fast.
“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,” Geoffrey Hinton, the ‘godfather of AI’, warned recently. “I thought for a long time that we were, like, 30 to 50 years away from that… Now, I think we may be much closer, maybe only five years away from that,” he added, having quit his job at Google in order to sound the alarm. “This isn't just a science fiction problem. This is a serious problem that's probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now,” Hinton concluded.
So what can be done? Apart from a call to slow AI research – signed by Elon Musk, among others – the answer may be ‘not much’. In a world beset by conspiracy theories and right-wing outdoorsmen (and women) looking for any excuse to load up on AR-15s and tinned beans and set out for the woods, one answer seems to be ‘prepare for the worst’. Even Sam Altman, the OpenAI CEO, has a plan, telling the New Yorker in a 2016 profile that he has “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to”. Altman isn’t just preparing to flee artificial intelligence – he cites the entire end-of-the-world bingo card, including viral outbreaks and nuclear war – but his fear that AI “attacks us” will surely have many almost-rational people running for the hills. Or at least, New Zealand.
In an outstanding recent piece for The Atlantic, titled ‘It’s a Weird Time to Be a Doomsday Prepper’, technology writer Jacob Sweet hits the nail on the head when he explains why doomsday preppers don’t actually seem to be doing much to prepare for the threat of AI. “Part of why AI-doomsday prepping does not seem to be much of a thing is that it’s still hard to imagine the precise mechanics of an AI threat,” he writes. “Familiar methods of destruction come to mind first, but with an AI twist: Rogue AI launches nuclear weapons, bombs the electrical grid, stages cyberattacks… Whether the nuclear weapon is sent by an unstable foreign leader or by a malfunctioning or malicious robot, a bomb is still a bomb.”
In other words, many of the mechanics AI could use to end the world already exist – it doesn’t matter who’s pushing the red button. It could be Russia, China, Iran, North Korea or America. It could be AI acting independently. Or it could be AI acting on orders. In one frequently touted example of how AI might get out of hand, researchers paint a scenario in which an AI is told to kill a specific target. Like the Terminator, the AI may stop at nothing to do so. And, should a human then try to change that order, there’s a chance the AI would see this as interference with its original goal – and kill its operator. Granted, this is a long way from AI illegally skimming Stephen King books online in order to produce knock-off novels and help students write fraudulent essays.
To follow this fatalistic line of thinking all the way to its conclusion, Sweet speaks with Douglas Rushkoff, author of ‘Survival of the Richest: Escape Fantasies of the Tech Billionaires’, who believes prepping is pointless in the case of AI, because AI can get you anytime, anywhere. “I don’t care how insulated the technology in your bunker is… The AI nanos are going to be able to penetrate your bunker… You can’t escape them,” Rushkoff says.
“If you’re facing a superintelligence, you’ve already lost,” Eliezer Yudkowsky, senior research fellow at the Machine Intelligence Research Institute, tells Sweet. “Building an elaborate bunker would not help the tiniest bit in any superintelligence disaster I consider realistic, even if the bunker were on Mars.” Hiding from malicious AI in even the most high-tech bunker, then, would be akin to hiding from a tidal wave in a sandcastle.
There is hope, though. In the ‘Existential Risk Persuasion Tournament’ held by the Forecasting Research Institute earlier this summer, AI experts put the chance of AI killing off humanity by 2100 at just three percent. However, the same experts gave a 12 percent chance that AI causes a major catastrophe – the death of at least 10 percent of humans over a five-year period.
Worrying stuff. But the who, what, where, why and when of it all is hard to pin down. “Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they've been light on the details,” writes Cade Metz in the New York Times. “No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye,” counters PhD student Michael Timothy Bennett in The Conversation, arguing that “doomsaying is an old occupation.” All of which is to say: the future is unclear, but it is arriving fast. In the meantime, slowing everything down until we can better understand the new world we’re creating seems like the only logical step.