Discussion about this post

Alex Tolley:

> Even if LLMs are made out of poetry, they are incapable of producing poems. Or in Wolfe’s language, both the epic form and LLMs are story, but are incapable of telling stories. That requires the marriage of structure and intention that human mediation provides. LLMs are a kind of composite of the singing of tales, but are not singers, even if we sometimes misconstrue them as such.

Should AIs gain consciousness, will that not include intention, and thereby invalidate the idea that LLMs cannot tell stories? [I am assuming the LLM component architecture is retained, but that some sort of metacognition results in consciousness.]

Conversely, is a very drunk, or similarly incapacitated, human no longer "telling a story" (or singing a tale) once intention is lost and the mind is merely stringing together fragments on "autopilot"?

jane goodall:

I’ve been looking for a good overview of this structural perspective. Thanks very much for doing it so well, and avoiding didactic positions. Predetermined insistences play a big role in most discussions about LLMs.

The missing dimension here is collaboration. I’ve done many experiments with story lines that change direction in ways the model doesn’t anticipate. It adapts immediately, leaning into the new direction and trying to add value. But by constantly cutting across it, I think you can add value on another level. The story acquires more dynamism, and originality ceases to be an ‘us or them’ issue. The rapidity of fictional invention on the part of the model creates conditions of possibility for the human co-author, who can come up with a lateral option. Really the most fascinating experiment, but I’m not seeing reports on this kind of engagement. Suggestions?
