Wow. I'm not sure what to make of this. From a computer science perspective, LLMs are not all they're cracked up to be. Impressive emulations for sure, but emulations and nothing more. The greatest danger of AI is people forgetting that the "A" in "AI" stands for "artificial" and means exactly that: the danger lies in treating it as if it were real, when it's not.
Another issue arises from the fact that the purveyors and profiteers pushing AI care not one whit what impact it has on society or individuals, beyond their own enrichment.
Vinge, AI 2027 and others in the AGI / "superhuman AI" camp talk of human intelligence but have a remarkably impoverished notion of human cognition and what it is to be human. The field of AI has always been steeped in pre-20th C. philosophical tropes. Hubert Dreyfus has been telling them to read some Heidegger and Wittgenstein since the early 1960s and very few of them appear to have bothered. Since the start of the cognitive revolution in the late 1950s there have always been people critical of the dominant brain/mind-is-a-computer metaphor. LLMs don't use language, and language is not representation. Language is meaningful because it is embedded in human practices. Meaning is out there in our interactions with other people and the world. Human cognition evolved. It's continuous with animal cognition. But even LLM skeptics like Marcus and LeCun are still hung up on internal representations, believing they will solve LLM problems, of which there is no shortage, by combining symbolic AI with neural network AI to get better representations of the world. Good luck with that. They appear to willfully avoid the enactivist literature going back to Varela, Thompson and Rosch (1991) and beyond. The only way you get to anything remotely similar to human-like intelligence in a machine is through being in the world. To do so you would have to follow the path taken by Rodney Brooks, likely a very long and winding one. The "superhuman AI" camp seem to imagine their intelligence-in-a-box will become self-aware and develop intentions given enough processing power and data. That seems like a dubious assumption, but if you believe the human brain is a computer it might seem a logical one.
I take it that the point of it being a social technology is similar to the one made by Matteo Pasquinelli. He writes "AI is not constituted by the imitation of biological intelligence but by the intelligence of labour and social relations." (Although the cultural investment in the former goes hand in hand, I think, with the latter.) On the surface GenAI doesn't seem likely to be very good at the latter either. The tech companies pushing GenAI are burning hundreds of billions, soon to be trillions, of dollars. AI uses expensive and rapidly depreciating hardware, along with vast amounts of power and water. And the market, almost three years after the release of ChatGPT, is tiny compared to the capital outlay, because the technology doesn't do what it's claimed to be able to do. What uses there are are incremental and ordinary, and one has to balance those benefits against the dysfunction (see Daron Acemoglu). But maybe that misses the point, if the true value of the technology as a social technology lies more in the uses that dysfunction has for the move-fast-and-break-things gang.
Really appreciate this and found it fascinating. Is there anyone writing on this (on embodied intelligence, enactivism etc.) right now whose work you'd say is worth following?
I don't claim to be an expert on this topic but here are some suggestions to get you started. Once you start digging you'll discover that there are lots of people working on this and it's a big field with a diversity of approaches. While the “brain is a computer” metaphor is still dominant in cognitive science--and popular science coverage might suggest it's the only game in town--there are a lot of people working on embodied and enactive approaches to cognition across different disciplines.
A foundational text that many people reference is the one I referenced above:
Varela, Francisco J., Evan Thompson, and Eleanor Rosch. 2017. The Embodied Mind: Cognitive Science and Human Experience. The MIT Press. https://doi.org/10.7551/mitpress/9780262529365.001.0001. (original edition was 1991)
That may be a good place to start.
An important collection early on was Núñez, Rafael E., and Walter J. Freeman. 1999. Reclaiming Cognition: The Primacy of Action, Intention, and Emotion. Academic.
Other people that come to mind: Shaun Gallagher, Louise Barrett, Anthony Chemero, Daniel Hutto and Erik Myin. Here are some books but I'd go search for recent papers written by these people. Some of them have written short overviews of enactivism. I'd also look at the people they cite. Gallagher’s short 2023 book discusses the broader field of 4E and embodied cognition and contrasts different approaches with enactivism. It's currently a free download from Cambridge UP.
Barrett, Louise. 2015. Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton University Press.
Chemero, Anthony P. 2009. Radical Embodied Cognitive Science. The MIT Press. https://doi.org/10.7551/mitpress/8367.001.0001.
Gallagher, Shaun. 2017. Enactivist Interventions: Rethinking the Mind. Oxford University Press.
Gallagher, Shaun. 2023. Embodied and Enactive Approaches to Cognition. 1st ed. Cambridge University Press. https://doi.org/10.1017/9781009209793.
Hutto, Daniel D., and Erik Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. The MIT Press.
Noë, Alva. 2006. Action in Perception. MIT Press.
This work is grounded in phenomenology but also references ordinary language philosophy (esp. Wittgenstein) and pragmatism (esp. Dewey). The work of James Gibson is also an important reference point. I found the following a very accessible introduction that walks you through the history of philosophical and psychological theories of the mind all the way through to AI, Dreyfus, and enactivism:
Käufer, Stephan, and Anthony Chemero. 2021. Phenomenology: An Introduction. Second edition. Polity Press.
Thank you so much for taking the time to make these recommendations. Can't wait to dive in!
Back in the 1990s the late David Hays and I sketched out a framework that covers this territory in a series of articles and one book (by Hays). By "this territory" I mean technologies of communication and computation (speech, writing, calculation, and computing), expressive culture, narrative and the self, music, technology, forms of governance and economic organization. This little paper sketches things out: Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society (https://www.academia.edu/37815917/Mind_Culture_Coevolution_Major_Transitions_in_the_Development_of_Human_Culture_and_Society_Version_2_1)
This is the basic paper, The Evolution of Cognition (1990), and its brief abstract: With cultural evolution new processes of thought appear. Abstraction is universal, but rationalization first appeared in ancient Greece, theorization in Renaissance Italy, and model building in twentieth-century Europe. These processes employ the methods of metaphor, metalingual definition, algorithm, and control, respectively. The intellectual and practical achievements of populations guided by the several processes and exploiting the different mechanisms differ so greatly as to warrant separation into cultural ranks. The fourth rank is not completely formed, while regions of the world and parts of every population continue to operate by the processes of earlier ranks.
Link: https://www.academia.edu/243486/The_Evolution_of_Cognition
Other papers:
The Evolution of Narrative and the Self (by me 1993), https://www.academia.edu/235114/The_Evolution_of_Narrative_and_the_Self
The Evolution of Expressive Culture (by David Hays), https://www.academia.edu/9547332/The_Evolution_of_Expressive_Culture
Stages in the Evolution of Music (by me 1998), https://www.academia.edu/8583092/Stages_in_the_Evolution_of_Music
During the early 1990s Hays taught an online course on the history of technology offered by Connected Education through the New School. The book he wrote for that course resides on the web: The Evolution of Technology: Four Cognitive Ranks (1993) http://asweknowit.ca/evcult/Tech/FRONT.shtml You might want to take a look at Chapter 5, Politics, Cognition, and Personality, http://asweknowit.ca/evcult/Tech/CHAPTER5.shtml
I'm ashamed to say I have not read Hays or your work, but they're now on my reading list! I'd be interested in your take on Boyd and Richerson, and, more controversially, what you think of memetics a la Dawkins and Blackmore? I have recently been thinking of LLMs as meme machines - an essay here:
https://sphelps.substack.com/p/from-genomes-to-memomes
Those are tricky issues and don't have simple answers. You might want to take a look at a short document I wrote sorting out the different approaches to cultural evolution: A quick guide to cultural evolution for humanists, https://www.academia.edu/40930224/A_quick_guide_to_cultural_evolution_for_humanists
The approach Boyd and Richerson take (often known as dual inheritance theory) is perhaps the dominant approach to cultural evolution in the academic world. It's OK as far as it goes, but it's not going to help you much in thinking about, for example, the evolution of pop music styles in the 20th century. As for memetics, it seems to have settled on the idea of memes as little memebots going about the world from mind to mind, or brain to brain if you will, but it doesn't explain anything because it just gives the bots the power to lodge in minds/brains. I present an alternative approach in this paper: “Rhythm Changes” Notes on Some Genetic Elements in Musical Culture, https://www.academia.edu/23287434/_Rhythm_Changes_Notes_on_Some_Genetic_Elements_in_Musical_Culture
BTW, I've just published a piece in 3 Quarks Daily in which, at my prompting, ChatGPT has a small essay connecting the ideas of Bruno Latour with the idea of AI as cultural technology: https://3quarksdaily.com/3quarksdaily/2025/09/some-hybrid-remarks-on-man-machine-collaboration.html#more-287191
From ChatGPT's essay: "From this perspective, the real danger is not AI itself but the metaphysics that insists on separation. If we cling to the human/AI divide, we doom ourselves to endless cycles of fear: of replacement, of obsolescence, of destruction. If we accept LLMs as cultural technologies, the anxiety shifts to something more tractable: how to embed them in institutions that nurture dignity, creativity, and play.
"In Latour’s terms, the challenge is not to purify the categories — human here, machine there — but to recognize the hybrids we already are. In Farrell’s terms, the task is to design institutions that democratize access and prevent capture by oligarchy. "
Whichever social engine you wish to explain - markets, AI, science research - you must acknowledge that human laws dictate what can and cannot be done and that some actors, driven by greed or other asocial motivation, will corrupt the process by breaking the laws. These bad actors frequently cause the social engine to produce effects harmful to humanity. For example, long before AI can produce any imagined Singularity (which I doubt because of computational limits), the financial bubble that bursts when major investors pull out of the AI market will harm millions.
It was a fantastic speech and it is great to be able to revisit your thoughts in writing!
One day, you might want to give us a prompt tutorial on how to create these amazing shoggoth et al. images. ;-)
I like this framing. Partly because I don't think any technology is 'normal'. Isn't this the main insight of history? We should consider every instance in its specificity. I wonder if it's actually useful to talk about AI as a technology at all. As David Edgerton has argued, 'technology' is an intellectually bankrupt term since it has become more and more capacious over time. I believe other languages have more specific terms. Maybe the endless comparisons people make (electricity, fire, the dot com crash etc. etc.) are actually more misleading than useful.
A big problem facing the social sciences - in many areas, but I think interestingly in this one - is the methodological question. Or rather, it is hard to answer the question of what AI can and should be used for while being pressured to use it as a key component of research and analysis. Even before recent cuts, I saw so many more funding programs to "do X with LLMs/AI" than to "test how well LLMs/AI can do X." Or to examine the phenomenon of *why* people think AI can do X.
I think the best framework for capturing a complex set of relationships that I am aware of is Ostrom's Social-Ecological Systems (SES) framework.
This is super interesting, thank you!
One question it raises is what it means for this or any technological system to be “social.” One sense is that it aggregates local information in a way that enables human interaction at larger, non-local scales. In this sense, markets, democracy, bureaucracy, and LLMs are directly comparable.
There’s another, related sense, which is more internal to the notion of “intelligence” itself. In that sense, intelligence in any form is not an isolated faculty but something constituted through interaction. It emerges from internalized collective representations (Durkheim), communicative role-taking (Mead), ritual coordination (Randall Collins), and the like. Collins’ old essay on a sociological AI (from the 1990s) is a version of this idea that’s worth a look.
From this perspective, part of the appearance of intelligence in LLMs comes from their architecture of sociality: the chat interface that positions them in relation to humans, or the growing interactions between LLMs trained on different “cultures,” each with its own dialect. That suggests “singularity” may be the wrong metaphor. These systems are plural from the bottom up. They already face challenges of coordination and mutual intelligibility, not only with humans but among themselves. Even so, their designers tend to operate according to the singularity image of a single super mind rather than a plurality image of mutually interconnected minds.
How this constitutive sense of the social interacts with the aggregative sense could be worth thinking through.
Lots of food for thought - I like the framing of AI as a social technology; for the space of AI and work (both macroeconomic and in business administration), our new newsletter https://workcode.substack.com/ is attempting to bridge the gap between AI as a technology and work as a central theme in humanity - just launched!
I think this is a great framing of the way AI will really grow and develop. I see social as a complementary adjective to normal - AI is both those types of technology. The really big technology shifts have enormous social, economic and political consequences. AI is likely to be the next in that series of shifts.
@Henry Farrell plays the history back as a sort of weirdness where systems like markets and governments are a singularity that consumes and controls humans. I find this way of framing weird and disturbing. "Human beings have quite limited internal ability to process information" - yes and our economic system and institutions are mechanisms for networking that limited processing into powerful social forces that allow us to innovate, build and operate on an unimaginable scale. These things represent humanity as a collective and evolving organism rather than being separate.
That means the question of "What happens next?" is not one we can usefully answer. Thinking about the question is important, but answering it is intrinsically impossible - a kind of existential version of the "plans are useless but planning is indispensable" mindset.
And surely this kind of thinking is a job for social scientists AND computer scientists and all sorts of other thinkers - if this is not a cross-discipline field of study, what is?
I tend to dismiss "the imminent singularity of superintelligent AI". The assumption is that this can be done by design, something that might have been possible, perhaps, with symbolic AI. However, with very complex and difficult-to-understand artificial neural network technology, it will likely be very challenging to achieve this by design. It would likely require evolution, which would necessitate even more massive computational resources. We don't even know if achieving superintelligence is possible, let alone whether humans or machines could develop it. It is a similar problem to claiming that we can design superintelligent humans. Leaving aside the likely need for a larger skull and the resulting birthing problems, what elements of the brain need to be changed? Increasing the area of the cortex? Deepening the cortical layers? Changing the routing of the nerves between neural components? Adding specialized neural components? THHGTTG's Deep Thought was a comedic idea, not a magical design concept. [And, as we know, that superintelligent computer was the Earth and all its fauna and flora.] If evolving such a machine is the only path that could lead to SI, then it won't suddenly happen as a runaway process, as posited by adherents to the idea. Therefore, the social issues will likely unfold gradually as any such increase in intelligence is manifested.
Humanity has developed such a "super intelligence" as an organized anthology of specialized intelligences. Perhaps that will be the machine solution too, rather than a single machine, with the issue that connection and coordinating latencies between the components will be a limiting factor to its speed of operation and capabilities.
Currently, we have seen rapid improvement in LLMs and related technology, though adoption seems to have lagged behind the hype. We may well be at the inflection point of a logistic curve in performance, with improvements coming incrementally while size and costs increase exponentially. If it takes decades to even reach AGI, won't human social systems adapt to that, rather than being "Future Shocked"?
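To make the shape of that claim concrete, here is a toy numerical sketch with entirely made-up numbers (not a forecast): if performance follows a logistic curve over successive model generations while cost grows by a constant factor each generation, the incremental gain per dollar collapses once you pass the inflection point.

import math

def performance(gen, ceiling=100.0, rate=1.0, midpoint=5.0):
    # Logistic curve: performance saturates toward a ceiling; inflection at `midpoint`.
    return ceiling / (1.0 + math.exp(-rate * (gen - midpoint)))

def training_cost(gen, base=1.0, growth=2.5):
    # Cost multiplies by a constant factor each generation (exponential growth).
    return base * growth ** gen

for gen in range(1, 11):
    gain = performance(gen) - performance(gen - 1)  # incremental improvement this generation
    spend = training_cost(gen)
    print(f"gen {gen:2d}: perf {performance(gen):6.1f}  gain {gain:5.1f}  "
          f"cost {spend:10.1f}  gain/cost {gain / spend:.5f}")

Past generation 5 in this illustration, gains shrink toward zero while spending keeps multiplying, which is the "incremental improvements at exponential cost" pattern described above.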
Desktop PCs appeared in volume by 1980. It has been nearly half a century since then, and the predictions of mass unemployment based on the "lump of labor fallacy" proved incorrect. Computers have become smaller and more powerful, yet we seem to have adapted well to the changes. I have read that it was novelists who failed to come to grips with technology: their stories did not seem to include a now-common device, the smartphone, that is well integrated into everyday use on a global scale.
Where I am seeing potential social change with AI is in their being accepted as "persons". That will change the social dynamic, whether we relate to them as confidants, friends, lovers, assistants, or even cognitive slaves. In this regard, SciFi writers are perhaps the best people to explore the possible social impacts of AIs. Asimov was doing this as early as the 1940s.
The social technology point is great, but isn't it only relevant in the normal scenario? I also think that comparing distributed information processing systems to AI might occlude what is novel about it - it has the potential to generate explanatory knowledge. It can, in David Deutsch's terms, become a universal problem solver of much greater ability than humans.
Great article! I’m not familiar enough with “social technologies” or “cultural technologies” to really understand the full ramifications of those positions - so this may be redundant with those ideas. If so, carry on.
It does seem pretty evident - as others have pointed out here - that the underlying technology is not as good as described. While the technology itself will be impactful, general purpose, and cross-sectoral, the perception of the technology has become eschatological and quasi-religious. I worked early in my career (late aughts/early 2010s) for a transhumanist organization and quickly realized how hand-wavey "AI can solve X or destroy Y" is.
This perception of AI and complete lack of focus on its practical applications is very bad for our society and accelerative in all the wrong ways. From the AI arms race to consolidation of political power - offloading power and decision-making to flawed systems does more to bring us to the vision of something like AI 2027’s bad ending than the technology will, at this point.
How do agents change the story? Let's say that you could give an agent $100k and it would pretty quickly turn it into a million. That's Suleyman's definition of AGI. Seems possible that it comes in, for example, 7 years. Or something that does your taxes. This example is still on the individual level rather than the collective or social level. But it's getting at AI as a force in the world, and increased intelligence as something that changes the world, vs. AI as something that is stuck in the dimension of words and pictures.