17 Comments

This article offers a simplistic framing of the issue and evades exploring any nuance of the particulars. Inventing factions and reducing the arguments to labels is an outstanding way to impede productive, educational debate. It's a Crayola line drawing of a complex nature scene.

AI has begun to contribute to science. We will see new medical therapies, manufacturing materials, small molecules, fusion-reaction control, self-driving robo-taxis, and uber-competent Siri-like assistants. These advances can improve critical areas like longevity, energy abundance, and access to education while enabling an explosion in productivity.

But the flip side is real, too. Today, you can interact with open-source competitors to ChatGPT that will gladly outline the production of sarin gas or the cultivation of anthrax. These models are already employed to displace journalists and commercial illustrators, astroturf social media, and terrorize screen actors and writers into strikes.

In the next few years, maybe months, there's a real chance that one morning you'll roll out of bed and learn that an AI can now pass every test of human ability at the expert level: medical boards, bar exams, Mensa tests, doctorate-level physics and math exams. It can re-derive Einstein's special relativity and the quantum Standard Model, and produce new frontier science from a simple query. When you describe your business, it will lay out a strategy that factors in currency risk, customer sentiment, competitors, regulation, and the health of your suppliers.

Likely before your current car lease ends, there will come a day after which it's irresponsible to make choices without getting feedback from an AI. A day after which the chance of a human being the first to discover new science has passed.

This very smart AI doesn't have a will, desires, or the ability to act. It won't sneak around and take over the world. It's simply smarter than anyone alive at every intellectual task, and available to everyone as an app on your phone.

The societal impacts will be tremendous and hard to predict. But it raises the question: what's your value-add if all you're doing is querying super-Siri? Being a low- or no-value-add person is a scary prospect in the developed world.

Agency has shown itself to be as simple as adding recursion, memory, and internet access to existing LLMs. This configuration enables an AI to break complex problems into subtasks and attack each one in turn. It's a short hop from there to an autonomous AI solving world hunger, or planning to turn the world into paper clips.
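The recursion-plus-memory loop described above can be sketched in a few lines. This is a toy illustration, not any real agent framework: `call_model` is a hypothetical stand-in for an LLM API, stubbed here with canned responses so the control flow runs on its own (the internet-access piece is omitted).

```python
def call_model(prompt: str) -> str:
    # Hypothetical LLM call, stubbed with canned responses for illustration.
    canned = {
        "decompose: plan a dinner party": "pick a menu; invite guests; cook",
        "solve: pick a menu": "pasta",
        "solve: invite guests": "emails sent",
        "solve: cook": "dinner ready",
    }
    return canned.get(prompt, "done")

def run_agent(task: str, memory: list) -> str:
    """Recursively break a task into subtasks and solve each one."""
    subtasks = call_model(f"decompose: {task}").split("; ")
    if len(subtasks) == 1:
        # Leaf task: solve it directly and persist the result to memory.
        result = call_model(f"solve: {task}")
        memory.append(f"{task} -> {result}")
        return result
    # Composite task: recurse into each subtask.
    return "; ".join(run_agent(s, memory) for s in subtasks)

memory = []
print(run_agent("plan a dinner party", memory))  # pasta; emails sent; dinner ready
print(memory)
```

The point of the sketch is how little scaffolding is involved: the "agent" is an ordinary recursive function, and "memory" is just a list the loop appends to between model calls.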


Interesting take, and great points. Two minor complaints:

First, Yudkowsky's thinking on AI alignment is very well developed and massively influential. The references to him being self-schooled or writing Potter fan fiction are ad hominem attacks that say nothing about the validity of his writings. He's certainly not perfect, and his ideas have been criticized, but I don't think your approach here is constructive.

Second, Vernor Vinge's singularity is the idea that as technological progress accelerates, the curve of development eventually becomes steeper and steeper until it is effectively a vertical line, hence the metaphor of a singularity. For people living before the singularity, the world afterward will be incomprehensible. AI is a possible driver of this acceleration but is not the only factor at play. Vinge's vision of the singularity is more nuanced and much broader than how you've described it here.

This is very different from (and much less rosy than) Kurzweil's vision, in which humanity is the beneficiary and object of the singularity.

Dec 12, 2023·edited Dec 12, 2023

I first encountered Yudkowsky when a sci-fi reading club I had joined tackled his story "Three Worlds Collide." I started the reading late and wound up not finishing in time, because while hastily reading en route to the meeting, I had to stop and google the author after hitting the passage in which he casually lets the reader know the story is set in a future where rape has been legalized, and that this somehow improved society, to the point that viewing rape as illegal is considered beyond prudish.

I feel like this factoid about him isn't promoted enough.

https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT/p/bojLBvsYck95gbKNM


For my part, I view this essay as itself part of a minor movement, a skeptical or denialist movement that wants to deny there is any imminent prospect of "superhuman AI", and I would be interested to understand its nature and origins.

I hypothesize that one motivation is a liberal-to-progressive outlook which rejects belief in divine superintelligence in favor of humanist atheism, and which sees belief in artificial superintelligence as a return to the "demon-haunted world", from which secularism liberated humanity.

There's also a complex of interrelated attitudes such as: hostility or skepticism towards the idea of IQ or degrees of intelligence; hostility towards Big Tech and the billionaires who own it, as undemocratic, exploitative, capitalist, a dangerous concentration of private power, etc; preference for social and political solutions to this situation, rather than technical ones.

All this could make a person receptive to deflationary claims about AI, e.g. that LLMs are just stochastic parrots, or that the buzz around AI is just marketing hype.


I'm happy to learn of Dr Saint-Simon. His views on a meta-religion, as described by this writer, form the core of a series of ideas about how Cerebral Valley, made up mostly of avowed atheists, is reinventing a quasi-Abrahamic religion in which its members are crowned as the gods of a new dimensional stage of the human race.

The tech community's embrace of a16z's echoes a gnawing temptation to godhood that is shared by many.


This seems to completely ignore the fact that there are many software engineers (many of whom work on AI) who dismiss the notion that anyone, as of December 2023, has come close to achieving AI.


Yud is super smart, and I find his writing and interviews fascinating. He just takes certain bad possibilities as givens that I can't. On the other hand, I'm not in the pmarca cult either, though I think he too has a bunch of good points. My personal philosophy is that we need to treat these examples of possible doom as a set of constraints for "how not to do it." But the benefit of AI stands to be so great for everyone that I don't think we can afford not to do it at all.
