The Singularity is Nigh! [Republished from The Economist]
The cult-like battles between e/acc and doomers
[NB - the below has just been published by The Economist under the title “AI’s big rift is like a religious schism, says Henry Farrell.” It is being republished here, with The Economist’s permission. The picture above (replacing a blander Singularity picture) is what ChatGPT-4 comes up with when asked for “an inspiring religious portrait of Based Buff Jesus ushering in the technocapital Singularity”]
Two centuries ago Henri de Saint-Simon, a French utopian, proposed a new religion, worshipping the godlike force of progress, with Isaac Newton as its chief saint. He believed that humanity’s sole uniting interest, “the progress of the sciences”, should be directed by the “elect of humanity”, a 21-member “Council of Newton”. Friedrich Hayek, a 20th-century economist, later gleefully described how this ludicrous “religion of the engineers” collapsed into a welter of feuding sects.
Today, the engineers of artificial intelligence (AI) are experiencing their own religious schism. One sect worships progress, canonising Hayek himself. The other is gripped by terror of godlike forces. Their battle has driven practical questions to the margins of debate.
Both cults are accidental by-products of science fiction. In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close. We would surely create superhuman intelligence sometime within the next three decades, leading to a “Singularity”, in which AI would start feeding on itself. The future might be delightful or awful, depending on whether machines enhanced human intelligence or displaced it.
Some were optimistic. The futurist Ray Kurzweil wrote an enormous tome, “The Singularity is Near”, predicting a cusp in 2045. We humans would become immortal, spreading intelligence throughout the universe, and eventually merging into God. For all its statistics and exponentials, the book prophesied “the Rapture of the Nerds”, as one unkind critic called it. Its title really should have been “The Singularity is Nigh”.
Others feared the day of judgment. Eliezer Yudkowsky, a self-taught AI researcher, was deeply influenced by Mr Vinge’s ideas. He fathered Silicon Valley’s “rationalist” movement, which sought to improve human reasoning and stop AI destroying humankind.
Rationalists believed that Bayesian statistics and decision theory could de-bias human thinking and model the behaviour of godlike intelligences. They revelled in endless theoretical debates, like medieval Christian philosophers disputing the nature of angels, applying amateur game theory instead of Aristotelian logic. Sometimes their discussions were less erudite. Mr Yudkowsky popularised his ideas in a 660,000-word fan-fiction epic, “Harry Potter and the Methods of Rationality”.
Rationalists feared that superhuman AIs wouldn’t have our best interests at heart. One notorious thought experiment—a modern version of Pascal’s wager, dubbed “Roko’s basilisk”—claimed that logic dictated that future divine intelligences would torture anyone who had known that AI was possible and hadn’t devoted themselves to bringing it into existence. AIs might also use their awesome reasoning powers to escape any limits that humans imposed on them, creating an “x risk” (existential risk) to human survival.
Rationalism explains why AI pioneers became obsessed with x risk. Sam Altman, Elon Musk and others founded OpenAI, the creator of Chatgpt, as a non-profit so that it wouldn’t duck the dangers of machine intelligence. But the incentives shifted as the funding flooded in. Some OpenAI staffers feared that their employer cared more about the opportunities than the dangers and defected to found Anthropic, a rival AI firm. More recently, clashes over AI risk, money and power reportedly led to the fracture between Mr Altman and his board.
If rationalists are frustrated by Silicon Valley’s profit model, Silicon Valley is increasingly frustrated by rationalism. Marc Andreessen, the co-founder of Andreessen Horowitz, a venture-capital firm, fulminated in June that the extremist AI-risk “cult” was holding back an awesome AI-augmented future, in which humanity could reach for the stars.
This backlash is turning into its own religion of the engineers. Grimes, a musician and Silicon Valley icon, marvels that AI engineers are “designing the initial culture of the universe”. She calls for a “Council of Elrond” (a nod to “The Lord of the Rings”) comprising the “heads of key AI companies and others who understand it” to set AI policy. Grimes met Mr Musk, the father of her children, through a shared joke about Roko’s basilisk.
In October Mr Andreessen published his own “Techno-Optimist Manifesto” to wide acclaim from Silicon Valley entrepreneurs. In it, he takes aim at a decades-long “demoralisation campaign…against technology and life”, under various names including “sustainable development goals”, “social responsibility”, “trust and safety” and “tech ethics”. Efforts to decelerate AI “will cost human lives” and are thus tantamount to “murder”.
Mr Andreessen’s manifesto is a Nicene creed for the cult of progress: the words “we believe” appear no fewer than 113 times in the text. His list of the “patron saints” of techno-optimism begins with Based Beff Jezos, the social-media persona of a former Google engineer who claims to have founded “effective accelerationism”, a self-described “meta-religion” which puts its faith in the “technocapital Singularity”.
Our future is currently being built around Mr Vinge’s three-decades-old essay, a work that only Silicon Valley thinkers and science-fiction fans have read. Warring cults dispute whether engineers are as gods, or just unwitting Dr Frankensteins.
This schism is an attention-sucking black hole that makes its protagonists more likely to say and perhaps believe stupid things. Of course, many AI-risk people recognise that there are problems other than the Singularity, but it’s hard to resist its relentless gravitational pull. Before Mr Andreessen was fully dragged past the event horizon, he made more nuanced arguments about engineers’ humility and addressing the problems of AI as they arose.
But even more, we need to listen to other people. Last month, at Rishi Sunak’s global AI-policy summit, Mr Musk pontificated about the need for an “off switch” for hostile AI. The main event was all about x risk and AI’s transformative promise, consigning other questions to a sideshow dubbed the “AI Fringe”.
At the same time, Rachel Coldicutt, a British tech thinker, was putting together a “Fringe of the Fringe”, where a much more diverse group of thinkers debated the topics that hadn’t made the main agenda: communities, transparency, power. They didn’t suggest a Council of the Elect. Instead, they proposed that we should “make AI work for eight billion people, not eight billionaires”. It might be nice to hear from some of those 8bn voices.■
Henry Farrell is a professor of international affairs and democracy at Johns Hopkins University, and co-author of “Underground Empire: How America Weaponized the World Economy”.
© The Economist Newspaper Limited, London, 2023
This article is an over-simplistic framing of the issue and avoids exploring any nuance in the particulars. Inventing factions and reducing the arguments to labels is an outstanding way to impede productive, educational debate. It's a Crayola line drawing of a complex nature scene.
AI has begun to contribute to science. We will see new medical therapies, manufacturing materials, small molecules, fusion reaction control, self-driving robo-taxis, and uber-competent Siri-like assistants. These advances can improve critical areas like longevity, energy abundance, and access to education while enabling an explosion in productivity.
But the flip side is real, too. Today, you can interact with open-source competitors to ChatGPT that will gladly outline the production of sarin gas or the cultivation of anthrax. These models are already being employed to displace journalists and commercial illustrators, astroturf social media, and drive screen actors and writers to strike.
In the next few years, maybe months, there's a real chance that one morning you'll roll out of bed and learn that an AI can now pass every test of human-level ability at the expert level: medical boards, bar exams, Mensa tests, doctorate-level physics and math exams. It can re-derive Einstein's special relativity and the Standard Model, and produce new frontier science from a simple query. When you describe your business, it will lay out a strategy that factors in currency risk, customer sentiment, competitors, regulation, and the health of your suppliers.
Likely before your current car lease ends, there will come a day after which it's irresponsible to make choices without getting feedback from an AI. A day after which the possibility of people being the first to discover new science has passed.
This very smart AI doesn't have a will, desires, or the ability to act. It won't sneak around and take over the world. It's just smarter at every intellectual task than anyone alive, and available to everyone as an app on your phone.
The societal impacts will be tremendous and hard to predict. But it raises the question: what's your value-add if all you're doing is querying super-Siri? Being a low- or no-value-add person is a scary prospect in the developed world.
Agency has shown itself to be as simple as adding recursion, memory, and internet access to existing LLMs. This configuration enables an AI to break complex problems into subtasks and attack each one. It's a short hop from here to an AI autonomously solving world hunger, or planning to turn the world into paper clips.
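To make that concrete, here is a deliberately minimal Python sketch of the loop described above. Everything in it is an assumption for illustration: the llm() helper is a hypothetical stand-in for whatever chat-completion API you prefer, the yes/no decomposition test is a toy heuristic, and the depth cap just keeps the recursion bounded. Real agent frameworks add tool use (the "internet access" part) and persistent storage, but the control flow is essentially this simple.

```python
# Minimal "agent loop" sketch: an LLM call plus recursion and a shared
# memory list. llm() is a hypothetical stand-in, not a real library API.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to a real model."""
    raise NotImplementedError("wire a real model API in here")

def agent(goal: str, depth: int = 0, max_depth: int = 3) -> str:
    memory: list[str] = []  # scratchpad carried across subtasks
    plan = llm(f"List, one per line, the subtasks needed to achieve: {goal}")
    for subtask in plan.splitlines():
        verdict = llm(f"Answer yes or no: is this subtask still too complex "
                      f"to do in one step? {subtask}")
        if depth < max_depth and verdict.strip().lower().startswith("yes"):
            result = agent(subtask, depth + 1, max_depth)  # recurse on hard subtasks
        else:
            result = llm(f"Complete this subtask: {subtask}\nContext so far: {memory}")
        memory.append(result)
    return llm(f"Synthesise a final answer to '{goal}' from: {memory}")
```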
Interesting take, and great points. Two minor complaints:
First, Yudkowsky's thinking on AI alignment is very well developed and massively influential. The references to him being self-schooled or writing Potter fan fiction are ad hominem attacks that say nothing about the validity of his writings. He's certainly not perfect, and his ideas have been criticized - but I don't think your approach here is constructive.
Second, Vernor Vinge's singularity is the idea that as technological progress accelerates, the curve of development grows ever steeper until it is effectively a vertical line - hence the metaphor of a singularity. For people living before the singularity, the world afterward will be incomprehensible. AI is a possible driver of this acceleration but not the only factor at play. Vinge's vision of the singularity is more nuanced and much broader than how you've described it here.
This is very different from (and much less rosy than) Kurzweil's vision, in which humanity is the beneficiary and object of the singularity.