8 Comments
Bob Roberts

2 points stood out to me in this piece:

1) "LLMs speak only when prompted." We are now faced with an Agentic AI universe in which all forms of online commerce may be transacted semi- or fully autonomously from the humans that launched them. They will whisper to each other in a growing roar of re-representations of human thought while we perceive an increasingly 'Dead Internet' with a comparatively shrunken volume of authentic human interaction.

Which point leads me to:

2) the Albert B. Lord quote relating to oral traditions where "we cannot correctly speak of a 'variant,' since there is no original to be varied." This is almost exactly Baudrillard's description of the 'hyperreal', in which we have representations lacking an original referent.

LLM output will always contain threads of recognizably human origin, but their growing recombinatory prowess will eventually flood our culture with hyperreal content within which we ourselves will have difficulty tracing a trail back to human origins, or even ruling out a human origin altogether. Deep fakes show that this crisis is already upon us.

While we may speak of 'intentionality' as the last redoubt of human story-making, we will eventually be unable to distinguish intent or the lack thereof.

Alex Tolley

>Even if LLMs are made out of poetry, they are incapable of producing poems. Or in Wolfe’s language, both the epic form and LLMs are story, but are incapable of telling stories. That requires the marriage of structure and intention that human mediation provides. LLMs are a kind of composite of the singing of tales, but are not singers, even if we sometimes misconstrue them as such.<

Should AIs gain consciousness, will that not include intention, and thereby invalidate the idea that LLMs cannot tell stories? [I am assuming the LLM component architecture is retained, but that some sort of metacognition results in consciousness.]

Conversely, is a very drunk, or similarly incapacitated, human no longer "telling a story" (or singing a tale) as the intention is now lost as the mind is now just stringing together fragments on "autopilot"?

jane goodall

I’ve been looking for a good overview of this structural perspective. Thanks very much for doing it so well, and avoiding didactic positions. Predetermined insistences play a big role in most discussions about LLMs.

The missing dimension here is collaboration. I’ve done many experiments with story lines that change direction in ways the model doesn’t anticipate. It adapts immediately, leaning into the new direction and trying to add value. But by constantly cutting across it, I think you can add value on another level. The story acquires more dynamism, and originality ceases to be an ‘us or them’ issue. The rapidity of fictional invention on the part of the model creates conditions of possibility for the human co-author, who can come up with a lateral option. Really the most fascinating experiment, but I’m not seeing reports on this kind of engagement. Suggestions?

Mario Khreiche

Great essay! As an always-in-recovery academic, I've been enjoying Wolfe precisely for the indulgent prose that never loses sight of the brown book (of commingled stories):

"Dr. Talos leaned toward her, and it struck me that his face was not only that of a fox (a comparison that was perhaps too easy to make because his bristling reddish eyebrows and sharp nose suggested it at once) but that of a stuffed fox. I have heard those who dig for their livelihood say there is no land anywhere in which they can trench without turning up shards of the past. No matter where the spade turns the soil, it uncovers broken pavements and corroded metal; and scholars write that the kind of sand that artists call polychrome (because flecks of every colour are mixed with its whiteness) is actually not sand at all, but the glass of the past, now pounded by aeons of tumbling in the clamorous sea.

"If there are layers of reality beneath the reality we see, even as there are layers of history beneath the ground we walk upon, then in one of those more profound realities, Dr. Talos's face was a fox's mask on a wall."

Next up, how Long Sun's cards predict the rush for compute infrastructure.

David M Gordon

"Wolfe was an engineer..."

Somehow, I just know you know, Henry, of Gene's connection to Pringles. So much so that I need not even mention it, but I do anyway. Gene deserves all the mentions and plaudits he receives; he was a true mensch.

Bill Benzon

FWIW, back in July of 2022 I wrote a blog post where I explored the similarities between LLMs and oral epics: GPT-3, the phrasal lexicon, Parry/Lord, and the Homeric epics, https://new-savanna.blogspot.com/2022/07/gpt-3-phrasal-lexicon-and-homeric-epics.html

In March of 2023 I posted a working paper in which I reported experiments I did with ChatGPT exploring the process of story-variation: ChatGPT tells stories, and a note about reverse engineering, https://www.academia.edu/97862447/ChatGPT_tells_stories_and_a_note_about_reverse_engineering_A_Working_Paper_Version_3

Here's the abstract:

I examine a set of stories that are organized on three levels: 1) the entire story trajectory, 2) segments within the trajectory, and 3) sentences within individual segments. I conjecture that the probability distribution from which ChatGPT draws next tokens seems to follow a hierarchy nested according to those three levels and that is encoded in the weights of ChatGPT’s parameters. I arrived at this conjecture to account for the results of experiments in which I give ChatGPT a prompt with two components: 1) a story and, 2) instructions to create a new story based on that story but changing a key character: the protagonist or the antagonist. That one change ripples through the rest of the story. The pattern of differences between the old and the new story indicates how ChatGPT maintains story coherence. The nature and extent of the differences between the original story and the new one depends roughly on the degree of difference between the original key character and the one substituted for it. I end with a methodological coda: ChatGPT’s behavior must be described and analyzed on three strata: 1) The experiments exhibit behavior at the phenomenal level. 2) The conjecture is about a middle stratum, the matrix, that generates the nested hierarchy of probability distributions. 3) The transformer virtual machine is the bottom, the code stratum.

Henry Farrell

Thanks - now mentioned in main post (David Smith hit upon this comparison too in his syllabus).

youlian troyanov

Brilliant
