What OpenAI shares with Scientology
Strange beliefs, fights over money and bad science fiction
When Sam Altman was ousted as CEO of OpenAI, some hinted that lurid depravities lay behind his downfall. Surely, OpenAI’s board wouldn’t have toppled him if there weren’t some sordid story about to hit the headlines? But the reporting all seems to be saying that it was God, not Sex, that lay behind Altman’s downfall. And Money, that third great driver of human behavior, seems to have driven his attempted return and his new job at Microsoft, which is OpenAI’s biggest investor by far.
As the NYT describes the people who pushed Altman out:
Ms. McCauley and Ms. Toner [HF - two board members] have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.
McCauley and Toner reportedly worried that Altman was pushing too hard, too quickly for new and potentially dangerous forms of AI (similar fears led some OpenAI people to bail out and found a competitor, Anthropic, a couple of years ago). The FT’s reporting confirms that the fight was over how quickly to commercialize AI.
The back-story to all of this is actually much weirder than the average sex scandal. The field of AI (in particular, its debates around Large Language Models (LLMs) like OpenAI’s GPT-4) is profoundly shaped by cultish debates among people with some very strange beliefs.
As LLMs have become increasingly powerful, theological arguments have begun to mix it up with the profit motive. That explains why OpenAI has such an unusual corporate form - it is a non-profit, with a for-profit structure retrofitted on top, sweatily entangled with a profit-maximizing corporation (Microsoft). It also plausibly explains why these tensions have exploded into the open.
********
I joked on Bluesky that the OpenAI saga was as if “the 1990s browser wars were being waged by rival factions of Dianetics striving to control the future.” Dianetics - for those who don’t obsess on the underbelly of American intellectual history - was the 1.0 version of L. Ron Hubbard’s Scientology. Hubbard hatched it in collaboration with the science fiction editor John W. Campbell (who had a major science fiction award named after him until 2019, when his racism finally caught up with his reputation).
The AI safety debate too is an unintended consequence of genre fiction. In 1987, the multiple Hugo Award-winning science fiction critic Dave Langford began a discussion of the “newish” genre of cyberpunk with a complaint about an older genre of story about information technology, in which “the ultimate computer is turned on and asked the ultimate question, and replies ‘Yes, now there is a God!’”
However, the cliche didn’t go away. Instead, it cross-bred with cyberpunk to produce some quite surprising progeny. The midwife was the writer Vernor Vinge, who proposed a revised meaning for “singularity.” This was a term already familiar to science fiction readers as the place inside a black hole where the ordinary predictions of physics broke down. Vinge suggested that we would likely soon create true AI, which would be far better at thinking than baseline humans, and would change the world in an accelerating process, creating a historical singularity, after which the future of the human species would be radically unpredictable.
These ideas were turned into novels by Vinge himself, including A Fire Upon the Deep (fun!) and Rainbows End (weak!). Other SF writers like Charles Stross wrote novels about humans doing their best to co-exist with “weakly godlike” machine intelligence (also fun!). Others who had no notable talent for writing, like the futurist Ray Kurzweil, tried to turn the Singularity into the foundation stone of a new account of human progress. I still possess a mostly-unread copy of Kurzweil’s mostly-unreadable magnum opus, The Singularity is Near, which was distributed en masse to bloggers like meself in an early 2000s marketing campaign. If I dug hard enough in my archives, I might even be able to find the message from a publicity flack expressing disappointment that I hadn’t written about the book after they sent it. All this speculation had a strong flavor of end-of-days. As the Scots science fiction writer Ken MacLeod memorably put it, the Singularity was the “Rapture of the Nerds.” Ken, being the offspring of a Free Presbyterian preacher, knows a millenarian religion when he sees it: Kurzweil’s doorstopper should really have been titled The Singularity is Nigh.
Science fiction was the gateway drug, but it can’t really be blamed for everything that happened later. Faith in the Singularity has roughly the same relationship to SF as UFO-cultism. A small minority of SF writers are true believers; most are hearty skeptics, but recognize that superhuman machine intelligences are (a) possible and (b) an extremely handy engine of plot. But the combination of cultish Singularity beliefs and science fiction has influenced a lot of external readers, who don’t distinguish sharply between the religious and fictive elements, but mix and meld them to come up with strange new hybrids.
Just such a syncretic religion provides the final part of the back-story to the OpenAI crisis. In the 2010s, ideas about the Singularity cross-fertilized with notions about Bayesian reasoning and some really terrible fanfic to create the online “rationalist” movement mentioned in the NYT.
I’ve never read a text on rationalism, whether by true believers, by hangers-on, or by bitter enemies (often erstwhile true believers), that really gets the totality of what you see if you dive into its core texts and apocrypha. And I won’t even try to provide one here. It is some Very Weird Shit and there is really great religious sociology to be written about it. The fights around Roko’s Basilisk are perhaps the best known example of rationalism in action outside the community, and give you some flavor of the style of debate. But the very short version is that Eliezer Yudkowsky and his multitudes of online fans embarked on a massive collective intellectual project, which can reasonably be described as resurrecting Dave Langford’s hoary 1980s SF cliche, and treating it as the most urgent dilemma facing human beings today. We are about to create God. What comes next? Add Bayes’ Theorem to Vinge’s core ideas, sez rationalism, and you’ll likely find the answer.
The consequences are what you might expect when a crowd of bright but rather naive (and occasionally creepy) computer science and adjacent people try to re-invent theology from first principles, to model what human-created gods might do, and how they ought to be constrained. They include the following non-comprehensive list: all sorts of strange mental exercises; postulated superhuman entities, benign and malign, and how to think about them; the jumbling of parts from fan-fiction, computer science, home-brewed philosophy and ARGs to create grotesque and interesting intellectual chimeras; Nick Bostrom and a crew of very well-funded philosophers; Effective Altruism, whose fancier adherents often prefer not to acknowledge the approach’s somewhat disreputable origins.
All this would be sociologically fascinating, but of little real world consequence, if it hadn’t profoundly influenced the founders of the organizations pushing AI forward. These luminaries think about the technologies that they are creating in terms that they have borrowed wholesale from the Yudkowsky extended universe. The risks and rewards of AI are seen as largely commensurate with the risks and rewards of creating superhuman intelligences, modeling how they might behave, and ensuring that we end up in a Good Singularity, rather than a bad one, where AIs do not destroy or enslave humanity as a species.
Even if rationalism’s answers are uncompelling, it asks interesting questions that might have real human importance. However, it is at best unclear that theoretical debates about immanentizing the eschaton tell us very much about actually-existing “AI,” a family of important and sometimes very powerful statistical techniques, which are being applied today, with emphatically non-theoretical risks and benefits.
Ah, well, nevertheless. The rationalist agenda has demonstrably shaped the questions around which the big AI ‘debates’ regularly revolve, as demonstrated by the Rishi Sunak/Sam Altman/Elon Musk love-fest “AI Summit” in London a few weeks ago.
We are on a very strange timeline. My laboured Dianetics/Scientology joke can be turned into an interesting hypothetical. It actually turns out (I only stumbled across this recently) that Claude Shannon, the creator of information theory (and, by extension, the computer revolution), was an L. Ron Hubbard fan in later life. In our continuum, this didn’t affect his theories: he had already done his major work. Imagine, however, a parallel universe, where Shannon’s science and standom had become intertwined and wildly influential, so that debates in information science obsessed over whether we could eliminate the noise of our engrams, and isolate the signal of our True Selves, allowing us all to become Operating Thetans. Then reflect on how your imagination doesn’t have to work nearly as hard as it ought to. A similarly noxious blend of garbage ideas and actual science is the foundation stone of the Grand AI Risk Debates that are happening today.
To be clear - not everyone working on existential AI risk (or ‘x risk’ as it is usually summarized) is a true believer in Strong Eliezer Rationalism. Most, very probably, are not. But you don’t need all that many true believers to keep the machine running. At least, that is how I interpret this Shazeda Ahmed essay, which describes how some core precepts of a very strange set of beliefs have become normalized as the background assumptions for thinking about the promise and problems of AI. Even if you, as an AI risk person, don’t buy the full intellectual package, you find yourself looking for work in a field where the funding, the incentives, and the organizational structures mostly point in a single direction (NB - this is my jaundiced interpretation, not hers).
********
There are two crucial differences between today’s AI cult and golden age Scientology. The first was already mentioned in passing. Machine learning works, and has some very important real life uses. E-meters don’t work and are useless for any purpose other than fleecing punters.
The second (which is closely related) is that Scientology’s ideology and money-hustle reinforce each other. The more that you buy into stories about the evils of mainstream psychology, the baggage of engrams that is preventing you from reaching your true potential and so on and so on, the more you want to spend on Scientology counselling. In AI, in contrast, God and Money have a rather more tentative relationship. If you are profoundly worried about the risks of AI, should you be unleashing it on the world for profit? That tension helps explain the fight that has just broken out into the open.
It’s easy to forget that OpenAI was founded as an explicitly non-commercial entity, the better to balance the rewards and the risks of these new technologies. To quote from its initial manifesto:
It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.
That … isn’t quite how it worked out. The Sam Altman justification for deviating from this vision, laid out in various interviews, is that it turned out to just be too damned expensive to train the models as they grew bigger and bigger and bigger. This necessitated the creation of an add-on structure, which would sidle into profitable activity. It also required massive cash infusions from Microsoft (reportedly in the range of $13 billion), which also has an exclusive license to OpenAI’s most recent LLM, GPT-4. Microsoft, it should be noted, is not in the business of prioritizing “a good outcome for all over its own self-interest.” It looks, instead, to invest its resources along the very best Friedmanite principles, so as to create whopping returns for shareholders. And $13 billion is a lot of invested resources.
This very plausibly explains the current crisis. OpenAI’s governance arrangements are shaped by the fact that it operated as a pure non-profit until relatively recently. The board is a non-profit board. The two members already mentioned, McCauley and Toner, are not the kind of people you would expect to see making the big decisions for a major commercial entity. They plausibly represent the older rationalist vision of what OpenAI was supposed to do, and the risks that it was supposed to avert.
But as OpenAI’s ambitions have grown, that vision has been watered down in favor of making money. I’ve heard that there were a lot of people in the AI community who were really unhappy with OpenAI’s initial decision to let GPT rip. That spurred the race for commercial domination of AI which has shaped pretty well everything that has happened since, leading to model after model being launched, and to hell with the consequences. People like Altman still talk about the dangers of AGI. But their organizations and businesses keep releasing more, and more powerful, systems, which can be, and are being, used in all sorts of unanticipated ways, for good and for ill.
It would perhaps be too cynical to say that AGI existential risk rhetoric has become a deliberate hustle, intended to redirect the attention of regulators toward possibly imaginary future risks, and away from problematic but profitable activities that are happening right now. Human beings have an enormous capacity to fervently believe in things that it is in their self-interest to believe, and to update those beliefs as their interests change or become clearer. I wouldn’t be surprised at all if Altman sincerely thinks that he is still acting for the good of humankind (there are certainly enough people assuring him that he is). But it isn’t surprising either that the true believers are revolting, as Altman stretches their ideology ever further and thinner to facilitate raking in the benjamins.
The OpenAI saga is a fight between God and Money; between a quite peculiar quasi-religious movement, and a quite ordinary desire to make cold hard cash. You should probably be putting your bets on Money prevailing in whatever strange arrangement of forces is happening as Altman is beamed up into the Microsoft mothership. But we might not be all that much better off in this particular case if the forces of God were to prevail, and the rationalists who toppled Altman were to win a surprising victory. They want to slow down AI, which is good, but for all sorts of weird reasons, which are unlikely to provide good solutions for the actual problems that AI generates. The important questions about AI are the ones that neither God nor Mammon has particularly good answers for - but that’s a topic for future posts.