Large AI models are cultural and social technologies
A new article in Science, with Alison Gopnik, Cosma Shalizi, James Evans, and, well, me.
I’ve tried to use this newsletter to highlight ideas from various people who are thinking about AI without getting stuck on AGI. These include Alison Gopnik’s argument that Large Language Models are best understood as “cultural technologies,” Cosma Shalizi’s interpretation of AI anxieties as an aftershock of the long Industrial Revolution, and James Evans’ investigations of the relationship between AI and innovation.
I’m delighted to say that Alison, Cosma, James, and I have written a piece that has just come out as a “Policy Forum” article in Science. It weaves these various strands of thought together into a broader argument that large models are not agentic super-intelligence in the making, but a new and powerful cultural and social technology. I’m really happy with how it came out and, together with the others, enormously grateful to those who made it possible.*
Science allows authors to post the accepted version (in other words, not quite the final published version, but not too far short of it) on their personal websites. As per Science’s rules, I am hereby making it clear that this is the author's version of the work. It is posted by permission of the AAAS for personal use, not for redistribution. The definitive version was published in Science on March 13, 2025, DOI: 10.1126/science.adt9819. I’ve excerpted the first paragraphs of the author’s version below. The main text is on my website. If you prefer it in PDF form, click here instead.
Large AI models are cultural and social technologies
Implications draw on the history of transformative information systems from the past
By Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans
Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about the cultural and social consequences of large models, orbiting around two foci: the immediate effects of these systems as they are currently used, and hypothetical futures when these systems turn into AGI agents, perhaps even superintelligent AGI agents.
But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from the social and behavioral sciences with computer science can help us understand AI systems more accurately. Large models should not be viewed primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.
The new technology of large models combines important features of earlier technologies. Like pictures, writing, print, video, Internet search, and other such technologies, large models allow people to access information that other people have created. Large models – currently language, vision, and multi-modal models – depend on the fact that the Internet has made the products of these earlier technologies readily available in machine-readable form. But like economic markets, state bureaucracies, and other social technologies, these systems not only make information widely available, they allow it to be reorganized, transformed, and restructured in distinctive ways. Adopting Herbert Simon’s terminology, large models are a new variant of the “artificial systems of human society” that process information to enable large-scale coordination.
Our central point here is not just that these technological innovations, like all other innovations, will have cultural and social consequences. Rather, we argue that large models are themselves best understood as a particular type of cultural and social technology. They are analogous to such past technologies as writing, print, markets, bureaucracies, and representative democracies. We can then ask the separate question of what the effects of these systems will be. New technologies that aren’t themselves cultural or social, such as steam and electricity, can have cultural effects. Genuinely new cultural technologies, Wikipedia for example, may have limited effects. However, many past cultural and social technologies also had profound, transformative effects on societies, for good and ill, and this is likely to be true for large models.
These effects are markedly different from the consequences of other important general technologies such as steam or electricity. They are also different from what we might expect from hypothetical AGI. Reflecting on past cultural and social technologies and their impact will help us understand the perils and promise of AI models better than worrying about superintelligent agents.
Click here for the rest (or here for the final version of the article, which is paywalled).
* There isn’t space for the ordinary acknowledgments in a Science article, and the reference list is necessarily short. In addition to those to whom we’re collectively grateful (the anonymous referees, editor, and staff), I owe a particular personal debt to the Center for Advanced Study in the Behavioral Sciences (CASBS), where my bits of the collective endeavor first began to come together (in Claude Shannon’s one-time cabin), and to my colleagues in SNF Agora and SAIS. I also seriously owe my other co-authors on these topics (especially Marion Fourcade, Margaret Levi, and Bruce Schneier) and many, many others. If large models are a kind of mediated social relationship with the knowledge of many other people, then so too is this article.
"an autocratic AI future with Chinese characteristics."
We see what you did there. ;^)
Thank you for making the Science article available. It would be interesting to know how many read it.
I was surprised that Science printed this, as it had no bearing on any scientific discovery or finding. The main purpose was to say that LMs are cultural and social technologies (SCTs) and therefore to be investigated by sociologists. Science requires evidence, but the article provides no evidence for the claim; rather, it picks some technologies as past SCTs and claims LMs are similar. But how exactly, beyond some rather vague claims about features? The concluding section, "Looking Forward," uses a strawman argument of a binary POV to claim we would get more subtle and useful ways to discuss and work with such LMs, as if this weren't already happening.
I am not clear why some technologies are labeled SCTs while others, like steam engines, are neutral and not SCTs. The article doesn't provide any support for this assertion, where the null hypothesis might be that all technologies lie on a continuum along several axes, including "social" and "cultural".
The references seem to cover a number of subjects, but it struck me that the Blodgett reference concerned "bias" in natural language processing, a somewhat different subject, published before deep neural networks (the forerunners of LLMs) were invented in 2006, and not particularly applicable to what Brad DeLong calls MAMLMs (I think you use LMs), which go beyond the LLM interface that Chiang compares well to lossy JPGs. Yes, bias exists because of the content slurped up, the RAG documents selected, and of course the HF action on training. Use different content and the bias will change direction. Force an LLM to answer only from selected documents in a database and the bias will depend on the content of those documents. I welcome that bias in STEM subjects, and I am biased in favor of democracy in politics, even if Churchill's famous phrase is wrong.
I am sorry to be so critical of this piece, as your posts are well-written and erudite, and I enjoy reading them.