8 Comments

This paper discusses Gopnik's views and may be of interest: https://arxiv.org/pdf/2401.04854.pdf.

Thanks. Really nice piece. It was a long time ago now, but there was a lot of debate about the intended meaning of Derrida's phrase “il n’y a pas de hors-texte”. Some of the alternative readings may be even more apt to this discussion, since they raise questions about what is "inside" and what is "outside" the functioning of LLMs. I'm not the one to cogently summarize this, but there's plenty out there. (Also, ditto Eno and Oulipo.)

This is great, and I was very heartened to see mention of Eno and Oulipo--while I'm pretty ambivalent about AI myself, I'm tired of hearing from anti-AI people who have zero historical context and just freeze up if you mention Oulipo, Eno, hip-hop sampling, the Yamaha DX-7, hack mystery writers' “wheels of plots”, the musicians' strike when recorded music came into its own, etc. etc. AI brings up new questions about creativity, intellectual property, et al., but it mostly brings up the same old ones.

Great insights here. Looking at the LLM as a cultural input makes a hell of a lot of sense for predicting its unpredictable impact on society. Humans love stories, and in its larval state the LLM is really good at cranking out stories. How that affects society as companies monetize it will be interesting, as in "interesting times". I find the "AI" videos, like the "commercials", fascinating in their non-humanness. They remind me of various SF stories about the impact of aliens contacting humanity: the utter strangeness of the alien logic hits first, then the odd discomfort as the implications sink in.

Nice. Insightful, too.

Sam Altman, by the way, is hard to pin down, because he shapeshifts between AGI and 'it's just a tool'. For one, he has stated that these 'errors' aren't errors but features (in line with what is written above); see https://ea.rna.nl/2023/11/01/the-hidden-meaning-of-the-errors-of-chatgpt-and-friends/

The idea that LLMs can be seen as 'lossy' (a lossy transformation of token or pixel orderings), with an additional 'fundamental randomisation' of their output, has also been suggested elsewhere (I recall a piece in The Atlantic, and here: https://ea.rna.nl/2023/12/26/memorisation-the-deep-problem-of-midjourney-chatgpt-and-friends/).
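
For what it's worth, here is a minimal Python sketch (my own toy illustration, not from either linked article) of where that 'fundamental randomisation' comes from: generation samples from a learned next-token distribution, which is itself a lossy, compressed summary of the training data, so the same prompt can yield different output on every run.

```python
import random

# Hypothetical toy next-token distribution: a lossy, compressed summary
# of token orderings seen in training data (these numbers are made up).
next_token_probs = {"cat": 0.5, "dog": 0.3, "ferret": 0.2}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; any temperature > 0 keeps generation stochastic."""
    # p ** (1/T) is equivalent to scaling log-probabilities by 1/T;
    # higher temperature flattens the distribution (more randomness).
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

# The same 'prompt' yields a different continuation on (almost) every call.
print([sample_next_token(next_token_probs) for _ in range(5)])
```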

In other words, the above is — if you ask me — filled with insightful observations. Keep it coming. I'm going to read that article.

The situation is even more dire for the AGI-is-nigh people: it is not just that LLMs can't discover these relations, they do not even understand the relations they do effectively use.

The situation for human societies is dire as well: while these models *do* have use cases (mainly productivity, but also 'supporting creativity'), they could at the same time unleash a 'tsunami of muck' (think of poor code ending up everywhere in IT landscapes, or bad information spreading through society).

Superb post, Henry.
