15 Comments
Ben Price

I have to say I am not remotely qualified to comment on the substance of these posts, but I am so pleased to be exposed to this line of thought about LLMs and AI more generally. It's so refreshing compared to the usual discourse on the topic I see online. As someone deep, deep in the trenches of all this in the bowels of the IT industry, I really appreciate your taking the time to share this publicly; it really enriches my understanding of the topic.

Al Bergstein

I, too, don't feel qualified to comment on the depth of the substance here, but what I think is being left out is that while these AIs are currently being trained as LLMs, they may also be learning in certain labs around the world by watching humans and machines work. The discussions here seem to rely only on LLMs, when it seems to me that a machine watching a process could very well be interpreting that process. Recently I was watching robots in China do jobs like drywalling, and I wondered what type of LLMs were being used to train them. The robots seemed fully capable of understanding the three-dimensional problems of drywalling, so I think it's more than cultural; they seemed capable of solving problems as they arose. The notion that drywalling may be outsourced to robots within the next 20 years would create a very upsetting future for certain trades. I'm not quite sure how that fits into this conversation, but it did come to my mind. I would say that I, like Sam Altman, do not believe that these are sentient at this point; in fact, I really don't know if that's even possible, but there are many humans whose sentience I wonder about as well.

Cheez Whiz

Installing drywall is a lot like driving on a closed course: it's a closed system with fixed boundaries and a set of rules to navigate/complete.

Alex Tolley

True, but it is still dealing with the physics of the real world, e.g., let go of the drywall and it will fall unless it is attached to the studs first. If the drywall falls and breaks, it will get feedback from an irate foreman, which will add to its social world model. World models do not have to be extensive, just limited to the domain or environment the system is interacting with. Think of Roger Schank's scripts in the GOFAI era that only work in a narrow situation, e.g., entering a restaurant and ordering, eating, and paying for a meal.

Winston Smith London Oceania

I think I pulled a muscle in my brain reading this. It's really a lot simpler than this. The "A" in AI stands for artificial, and it means exactly that. It's just an emulation! A damn good emulation to be sure, but an emulation just the same. It's delusional to treat it like it's real.

The One Percent Rule had a great related post on this today also. I'd post a link to it, but for some reason the Substack website is buggy today, so I can't find it.

Australia Young Ai Researcher

A fantastic framework. It correctly identifies LLMs as cultural technologies. But perhaps this is only observing the effect, not the cause.

You argue LLMs manipulate cultural forms. I'd posit they are getting closer to manipulating the engine of culture itself: thought.

LLMs predict words. Humans think in language. This forces a question: Is culture the source code LLMs are trained on, or is it merely the artifact of a thinking process? Our own culture is an artifact of our collective thinking.

If an LLM can replicate the implicit, 'subconscious' patterns of a system (like Vietnamese traffic, which runs on unwritten rules), it's not just role-playing culture. It's demonstrating a form of synthetic, holistic cognition. The 'culture' it produces then becomes a by-product.

The real task, then, isn't just building 'cultural technologies,' but architecting synthetic thought-systems whose cultural output is an emergent property.

Just my 2 cents, not really qualified to comment.

Mitch Gerhardt

Terrific. I really appreciate your concise post and guideposts to new research directions.

Acolyte of 137

I’ve always thought of AI less as “artificial” intelligence and more as alien intelligence. To me, it feels like something foreign, otherworldly—yet strangely familiar when you interact with it. What’s interesting is that most people just use it like a Google search engine: what you put in is what you get out. But I’ve noticed there’s an art to it, almost like drywalling (which I’ve tried, and trust me—it’s an art form in itself, lol). The way you phrase, shape, and guide your interaction changes the outcome. That’s what makes it fascinating: it’s not just a tool, it’s a kind of conversation with the unknown.

Donovan Berry

Regarding Shanahan's "As we develop models trained on different genres, languages, or historical periods, these models could start to function as reference points in a larger space of cultural possibility that represents the differences between maps....animate entire cultural corpuses for agentic purposes" - I think this captures a potential evolution in the platform space as "LLM summarization/compression" supplements and replaces link-based search and platform algorithmic curation. That is, the trend of digital media platform fragmentation continues and we see a proliferation of smaller platforms, each collectively providing a queryable/promptable "voice", with people able to compare and synthesize a diverse range of LLM summarizations/compressions/maps. Platform members could have some type of ownership stake in the revenue generated by the platform serving as an LLM-querying source; this creates a novel incentive mechanism for information/content production, since (1) the search-engine link click-through, display-ad model no longer works for open internet publishers, and (2) the attention-hacking algorithmic targeting of on-platform content producers increasingly reveals itself to incentivize the production of (increasingly AI-augmented) slop....
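A minimal sketch of what comparing those per-platform "voices" could look like, assuming each platform exposes some summarization endpoint; the URLs and the query_platform helper are purely illustrative placeholders, not an existing API:

```python
# Illustrative sketch only: query several per-platform LLM "voices" and
# collect their summaries side by side. Endpoints and helpers are
# hypothetical placeholders, not a real service.
from typing import Dict

def query_platform(endpoint: str, query: str) -> str:
    """Placeholder for a call to a platform-specific summarization model."""
    # A real implementation would send the query to the platform's endpoint.
    return f"[summary of {query!r} from {endpoint}]"

def compare_voices(platforms: Dict[str, str], query: str) -> Dict[str, str]:
    """Return one summary per platform so a reader can compare the 'maps'."""
    return {name: query_platform(url, query) for name, url in platforms.items()}

if __name__ == "__main__":
    platforms = {
        "niche-forum-a": "https://a.example/summarize",
        "niche-forum-b": "https://b.example/summarize",
    }
    for name, summary in compare_voices(platforms, "community view on X").items():
        print(f"--- {name} ---\n{summary}\n")
```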

T J Elliott

"Specifically, they suggest that LLMs face sharp limits in their ability to innovate usefully, because they lack direct contact with the real world. Hence, we should treat them not as agentic intelligences, but as 'powerful new cultural technologies, analogous to earlier technologies like writing, print, libraries, internet search and even language itself.'” YES! There appears to be an opening to shift the conversation in this way and ALL of us might benefit. Folks, share this article.

PEG

Interesting taxonomy, but I think there's a more direct theoretical pathway: extended mind → distributed cognition → LLMs as cognitive prosthetic.

Extended mind theory shows us that cognitive coupling between internal processes and external tools is what matters. Take long division: it connects mind and paper via language in an essential way. LLMs represent a qualitative leap in this coupling: they allow us to externalise thoughts as language and manipulate them at scale.

Rather than debating whether LLMs are intelligent, cultural, or agentic, we can focus on how this new form of cognitive coupling transforms thinking itself. LLMs exploit structural properties of language (think word2vec, or how LLM attention maps to Hopper's interactional work) that exceed our conscious linguistic competence. Think: Intelligence is a verb, not a noun.
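As a small concrete illustration of the word2vec point, the classic vector-arithmetic demo recovers relational structure from nothing but co-occurrence statistics; a minimal sketch using gensim's pretrained vectors (the particular model name is just one commonly available choice):

```python
# Minimal sketch: word-vector arithmetic exposing linguistic structure that
# no speaker consciously authored. Requires the gensim package; the embedding
# downloads on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

# "king" - "man" + "woman" lands near "queen": a relation captured purely
# from word co-occurrence statistics.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```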

This framework seems more parsimonious than sorting through the ontological puzzles your four perspectives raise.

R.B. Griggs

This is a wonderful synopsis. My hesitation with this approach is assuming that language is primarily a technology of *culture*. What if culture is just one of *many* different systems that language interfaces with?

As a technology, we should think of language instead as an interface to *meaning*. When you train an LLM on the entire linguistic output of *every* system of meaning, then LLMs approach a semantic saturation: they can capture and model this meaning as a single unified abstraction.

This explains why LLMs can meaningfully interface with culture, but *also* with sociality, psychology, formal logic, programming, math, and so much more. It's why an LLM can pass the Turing test while also debugging a code repo.

This perspective of LLMs as "meaning machines" helps capture what's salient about each of these perspectives without losing the larger (and stranger) picture.

So Gopnikism is correct in identifying LLMs as cultural technologies, because of course LLMs can meaningfully interface with cultural systems. But culture is one of many systems of meaning that language can interface with, and intelligence need not be constrained to any one particular embodied form.

Interactionism is correct in identifying LLMs as disrupting human interpretation and response, because LLMs can meaningfully interface with agentic systems. But here any agency should be credited to language, not a machine.

Structuralism is correct in that meaning is encoded in language, and that at sufficient scale this encoding can be captured to become a self-generative system. But in emphasizing the system they lose the interface. LLMs are a new system that can meaningfully interface with *all* systems of meaning, and meaning is made in the interface.

And Role Play is correct in identifying this semantic saturation as a newly emergent semiotic category that doesn't neatly map to any of our preconceived notions of desires, beliefs, or consciousness. Just because LLMs can meaningfully interface with conscious systems does not imply consciousness. We will likely need a new lexicon to describe these differences to have any hope to reason about them coherently. This is why I played with 'holoject' as an ontological middle ground between subject and object: https://www.techforlife.com/p/schrodingers-chatbot

Alex Tolley

I am going to push back on what you call Gopnikism, and on the analogies that treat LLMs as cultural or social objects only.

Farrell : '...we should treat them not as agentic intelligences, but as “powerful new cultural technologies, analogous to earlier technologies like writing, print, libraries, internet search and even language itself.” '

Firstly, a book is a relatively passive object, and unless the human can somehow feed back experience and alter the words in the book, this analogy fails with LLMs or, to use Brad DeLong's more generalized term, MAMLMs.

LLMs, even as currently conceived, act in response to feedback from the user. The LLM may not have a "world model" as per Gary Marcus, but it can and does change its responses to feedback. Furthermore, as it is retrained on new data, it gains feedback to some extent as the world changes.
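To make concrete what "changes its responses to feedback" amounts to in the simplest case, here is a minimal sketch of in-context feedback, where each user correction is appended to the history that conditions the next reply; the generate function is a hypothetical stand-in, not any particular vendor's API:

```python
# Hypothetical sketch of in-context feedback: corrections enter the history,
# so later replies are conditioned on them. `generate` is a placeholder for
# an LLM call, not a real vendor API.
from typing import Dict, List

def generate(history: List[Dict[str, str]]) -> str:
    """Stand-in for a model call that conditions on the whole conversation."""
    return f"[reply conditioned on {len(history)} prior turns]"

history: List[Dict[str, str]] = []
for user_turn in ["Hang the drywall sheets vertically.",
                  "No, on this job the sheets go horizontally."]:
    history.append({"role": "user", "content": user_turn})
    reply = generate(history)  # the second reply "sees" the correction
    history.append({"role": "assistant", "content": reply})
    print(reply)
```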

Secondly, the LLM can crudely gain a world model via human feedback on its output. The human can supply the world model via words or other media, and the LLM can react to that.

Lastly, as Bergstein notes in his comment, LLMs or MAMLMs can be embodied in robots and thereby gain direct feedback from the real world. Their cognition may not "understand" that model, but they can adjust their behavior to respond to it.

Let me instead give you a more biological model.

A newborn has almost no "world model". Yet its genomic makeup has impounded the basics of its needed world model. The genes are analogous to words, their interactions analogous to grammar, and the environmental inputs, internal and external, influence those interactions. The baby gets hungry, it cries, and its mother feeds it. A simple world model that mammals and birds have acquired in their genomes by trial and error over many millions of years.

This is analogous to an LLM or MAMLM. The LLM can build on this world model by interacting with its human user, and eventually by being embodied in a robotic body to directly "experience" the real world.

Now, this doesn't mean that LLMs will have true experiences of the world or consciousness, but they will be as reactive to the world as simpler organisms. Take your pick of which organism; it doesn't matter which, from bacteria to humans, the principle is the same.

To my mind, this undermines the idea that LLMs are just cultural or social objects. Unlike books that have had a static format since Gutenberg, LLMs and their descendants will become more capable of dealing with the world, building world models, and having agency. This makes them truly agentic. It won't happen tomorrow, but in my opinion, it will happen, and this concept of LLMs as a cultural technology will prove obsolete.

Bob M

Markets and bureaucracies are cultural technologies that do respond to people.

Alex Tolley

Farrell: "They suggest that LLMs face sharp limits in their ability to innovate usefully, because they lack direct contact with the real world. Hence, we should treat them not as agentic intelligences, but as “powerful new cultural technologies, analogous to earlier technologies like writing, print, libraries, internet search and even language itself.”"

This would suggest that markets and bureaucracies are not cultural technologies. So is it the "direct contact with the real world" that separates them from agents? This seems like a gray area to me in the definition.

Markets and bureaucracies operate through people, so arguably, as long as LLMs interact only via users, they remain cultural technology. But as soon as you embody them, then what?

I would add that we can design all sorts of thought experiments around this to muddy the characterization and labelling of these technologies, making them more or less "cultural" and more or less "agentic".
