17 Comments

ChatGPT is just Zapp Brannigan or a McKinsey consultant. A veneer of confidence and a person to blame when the executive "needs" to make a hard decision. You previously blamed the Bain consultants when you offshored a factory; now you blame AI.

Came here via Dave Karpf's link. Beautiful stuff, and "The Singularity is Nigh" made me laugh out loud.

The psychological and sociological/cultural side of the current GPT fever is indeed far more important and telling than the technical reality. Short summary: quantity has a certain quality of its own, but however impressive the systems may be, we humans are impressionable.

Recently, Sam Altman received a Hawking Fellowship on behalf of the OpenAI team and spoke for a few minutes, followed by a Q&A (available on YouTube). In that session he was asked what qualities are important for 'founders' of these innovative tech firms. He answered that founders should have 'deeply held convictions' that are stable without a lot of 'positive external reinforcement', 'obsession' with a problem, and a 'super powerful internal drive'. They needed to be 'evangelists'. The link with religion shows here too. (https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and). TED just released Ilya Sutskever's talk and you see it there too. We have strong believers turned evangelists, and we have a world of disciples and followers. It is indeed a very good analogy.

Delighted to find that I’m not the only person who finds Scientology an apt metaphor for Koolaid-drinking AI cultists. I run into too many of these creeps in SF, and the conversation is a lot like trying to convince a Scientologist that MAYBE Xenu the Intergalactic Overlord ain’t all they make out.😅

At least L. Ron Hubbard never BS’d anyone about HIS motive for founding a religion (“That’s where the money is!”).

Henry, I have sympathy with this view. But... I also feel that you are relying on an argument by incredulous stare. I know you have many projects under way, but I hope you can say more about why the imminent AGI proponents are wrong.

Insurance companies are already blaming algorithms for "REJECT PAYMENT" decisions! Clever, no? Given that AI is a "black box"!

Jonathan Cockrell

3464790527

I'm ready to learn. Please contact me and show me the way. It is time to learn.

If the name weren't already taken, we would call this the Narrative Fallacy: when a concept fits into a narrative form, one that preexists across human cultures ("Man Makes Thing Too Powerful For Himself to Control"), considerations of evidence or rational argument evaporate. The concept is easy to digest, easy to fit evidence and other beliefs to, and easy to communicate. It is also easy to form communities around. And therein lies the staying power of any narratively framed concept: it helps you relate to others and gives you a common language and set of beliefs that ensure a fit between you and them.

Only what I call "luxury" beliefs can work this way: beliefs you have no reason to update, no reason to change your point of view about, because they have minimal or no consequences, or because the consequences you do experience (belonging, socialization, a feeling of self-importance or of being on the correct even if embattled side, shared beliefs, dating and mating opportunities, and even money) are all positive.

And it almost goes without saying that all our advances in understanding the world scientifically began with deliberately discarding narratives to invent new concepts that fit the phenomena. For a superb, lightly fictionalized take on this point, read Benjamin Labatut's account of 20th-century mathematics and physical science, "When We Cease to Understand the World".

Consider: if Eliezer Yudkowsky woke suddenly in the night and realized he was the victim of a story, that AI existential risk was not a thing, that he was wasting his time and that of others, would he get on Twitter (I will never call it by the 24th letter of the alphabet; the original name owns its own vacuous mess) and say so? Or would he have a drink of water, go back to bed, and forget about the whole thing by morning? He is in a place where having these beliefs is really working out for him. Honestly, given his personality in interviews, and accounts I have read from people who dealt with him first hand, I'm not sure whether he has any hireable skills or value as an employee.

Or consider Greta Thunberg, for that matter, equally the victim of a related narrative, also ancient but easily updated for modern times: the "Apocalypse Wrought By Human Sinfulness" story. If she abandons her beliefs, she goes back to being a "young adult woman with autism spectrum disorder" rather than the Youthful Savior of the World.

Getting beyond "Argument by Incredulous Stare" starts with hammering this point home to AI Doomers: for all that they describe themselves as Rationalists, they are cutting themselves off from understanding the world and understanding technology, because a story is giving them too many benefits to resist. It may not convince them (it probably won't), but it can help stop non-believers from letting policy and regulation be shaped by them.

On the religious theme, there is a convincing case that much AI research constitutes a "Pascal's Mugging".

https://bramcohen.com/p/pascals-mugging-and-ai-safety
