17 Comments

ChatGPT is just Zapp Brannigan or a McKinsey consultant: a veneer of confidence, plus a person to blame when the executive "needs" to make a hard decision. You previously blamed the Bain consultants when you offshored a factory; now you blame AI.

Nov 21, 2023 · edited Nov 21, 2023

Came here via Dave Karpf's link. Beautiful stuff, and "The Singularity is Nigh" made me laugh out loud.

The psychological and sociological/cultural side of the current GPT fever is indeed far more important and telling than the technical reality. Short summary: quantity has a quality all its own, and while the systems may be impressive, we humans are impressionable.

Recently, Sam Altman received a Hawking Fellowship on behalf of the OpenAI team, spoke for a few minutes, and then took a Q&A (available on YouTube). In that session he was asked what the important qualities are for 'founders' of these innovative tech firms. He answered that founders should have ‘deeply held convictions’ that are stable without a lot of ‘positive external reinforcement’, an ‘obsession’ with a problem, and a ‘super powerful internal drive’. They need, in short, to be 'evangelists'. The link with religion shows here too. (https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and). TED just released Ilya Sutskever’s talk, and you can see it there too. We have strong believers turned evangelists, and we have a world of disciples and followers. It is indeed a very good analogy.


Delighted to find that I’m not the only person who finds Scientology an apt metaphor for Kool-Aid-drinking AI cultists. I run into too many of these creeps in SF, and the conversation is a lot like trying to convince a Scientologist that MAYBE Xenu the Intergalactic Overlord ain’t all they make out. 😅

At least L. Ron Hubbard never BS’d anyone about HIS motive for founding a religion (“That’s where the money is!”).


Hopefully their true motives have now become much clearer 😅


Henry, I have sympathy with this view. But... I also feel that you are relying on an argument by incredulous stare. I know you have many projects under way, but I hope you can say more about why the imminent AGI proponents are wrong.

author

This - based on work with Cosma - summarizes what I think LLMs are: they seem far more like markets and bureaucracies than genuinely agentic actors, though they can play agents nicely, in ways that plausibly slip past our cognitive safeguards - https://www.programmablemutter.com/p/shoggoths-amongst-us . Cosma has more here - http://bactra.org/notebooks/nn-attention-and-transformers.html - which I fully endorse (for values of 'endorse' that acknowledge I got most of my good ideas on this from him in the first place).


Anyone who's that worried about imminent AGI could stand to learn more about computability and complexity theory, and the hierarchies of problems that are inherently difficult. We are never going to solve the Halting Problem or build a Universal Optimizer, and we have had proofs of this for 50+ years.
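
For readers who want the reminder, here is a minimal Python sketch of Turing's diagonalization argument behind that claim. The `halts` oracle is a hypothetical assumed purely so it can be contradicted; names and structure are illustrative, not any real implementation.

```python
# Sketch of why a general halting oracle cannot exist.

def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) would halt.
    The argument below shows no total, correct version of this can exist."""
    raise NotImplementedError("undecidable in general -- that is the theorem")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    return               # oracle said "loops", so halt immediately

# Now ask: does diagonal(diagonal) halt? Either answer contradicts the
# oracle's own prediction, so a correct `halts` cannot exist.
```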

And it's particularly ludicrous to imagine that ChatGPT, pattern-matching off a database of student programming projects, will be able to solve these problems and magically hack into things when it's not even doing any actual reasoning.

I'm also not clear on how we're going to get to AGI when we still don't have a good handle on what natural intelligence is, or how to distinguish genius from insanity. It's a "we'll know it when we see it" situation, except what we actually seem to be getting are systems that are conducive to making us see things.

To be sure, I fully expect we'll develop software and systems that get better and better at simulating the particular sorts of pattern-matching/computation that the human brain happens to be good at and that more traditional systems have been bad at... and thus capable of generating more and more human-style mistakes/insanity, faster and faster. This WILL be useful in certain contexts, but we shouldn't fool ourselves that it will somehow be able to Do Everything and Magically Solve All Problems.


“Argument by incredulous stare” is a great phrase. Stealing.


I'm stealing this too, especially because I've used "argument by incredulous stare" myself, so it's good to add it to the typology.


https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and might be a good start (or the entire presentation linked in that post).


I concur with Mr. Gardner. This would hardly be the first time that someone raises a problem and then claims the solution is to give them money, but to a person of my limited capacities, the basic idea of AI risk - as Eliezer Yudkowsky (roughly) puts it, "something smarter than us that doesn't want what we want" - seems plausible. Things have to go wrong only once for there to be bad, even catastrophic, consequences, and it would seem difficult to avoid things going wrong at least once.


Every 4 years, people fail the Politico-Turing test. Politicians posture as something they are not, present plans that are not the full story, and then do what they want once they get into office. There's nothing new under the sun.


Insurance companies are already blaming algorithms for "REJECT PAYMENT" decisions! Clever, no? Especially given that AI is a "black box"!


Jonathan Cockrell

3464790527

I'm ready to learn. Please contact me and show me the way. It is time to learn.


If it weren't already taken, we would call this the Narrative Fallacy: when a concept fits a narrative form that preexists across human cultures ("Man Makes Thing Too Powerful For Himself to Control"), considerations of evidence and rational argument evaporate. The concept is easy to digest, easy to fit evidence and other beliefs to, and easy to communicate. It is also easy to form communities around. And therein lies the staying power of any narratively framed concept: it helps you relate to others, giving you a common language and set of beliefs that ensure a fit between you and them.

Only what I call "luxury" beliefs can be this way, beliefs you have no reason to update, no reason to change your point of view, because they have minimal or no consequences, or the consequences that you experience (belonging, socialization, feeling of self-importance or being on the correct even if embattled side, shared beliefs, dating and mating opportunities, and even money) are all positive.

And it almost goes without saying that all our advances in understanding the world scientifically began with deliberately discarding narratives and inventing new concepts to fit the phenomena. For a superb, lightly fictionalized take on this point, read Benjamin Labatut's account of 20th-century mathematics and physical science, "When We Cease to Understand the World".

Consider: if Eliezer Yudkowsky woke suddenly in the night, realized he was a victim of a story, that AI existential risk was not a thing, that he was wasting his time and that of others, would he get on Twitter (I will never ever call it the 24th letter of the alphabet - the original name owns its own vacuous mess) and say so? Or would he have a drink of water, go back to bed, and forget about the whole thing by morning? He is in a place where having these beliefs is really working out for him. Honestly, given his personality in interviews, and accounts I have read from people who dealt with him first-hand, I'm not sure whether he has any hireable skills or value as an employee.

Or consider Greta Thunberg, for that matter, equally the victim of a related narrative, also ancient but easily updated for modern times: the "Apocalypse Wrought By Human Sinfulness" story. If she abandons her beliefs, she goes back to being a "young adult woman with autism spectrum disorder" rather than Youthful Savior of the World.

Getting beyond "Argument by Incredulous Stare" starts with hammering this point home to AI Doomers: for self-described Rationalists, they are cutting themselves off from understanding the world and its technology, because a story is giving them too many benefits to resist. It may not - probably won't - convince them, but it can help stop non-believers from letting policy and regulation be shaped by them.


On the religious theme, there is a convincing case that much AI research constitutes a "Pascal's Mugging".

https://bramcohen.com/p/pascals-mugging-and-ai-safety
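
For readers new to the term, here is a toy expected-value calculation showing the structure of the mugging; all the numbers are invented purely for illustration. The point is that a sufficiently large claimed payoff can swamp any arbitrarily small probability.

```python
# Toy Pascal's Mugging in expected-value terms. Every number below is an
# assumption made up for illustration; only the structure matters.
p_threat_is_real = 1e-12    # vanishingly small probability the threat is real
claimed_stakes = 1e20       # astronomically large claimed payoff/loss
cost_of_complying = 1e6     # cost of handing over resources now

# Naive expected-value reasoning says comply whenever p * stakes > cost,
# no matter how absurd the claim is made.
expected_loss_if_ignored = p_threat_is_real * claimed_stakes  # = 1e8
print(expected_loss_if_ignored > cost_of_complying)           # True: pay the mugger
```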
