This newsletter has repeatedly argued that the central debate about the implications of actually-existing ‘AI’ is not just wrong; it is wrong in ways that reflect the narrative logic of science fiction. Brian Aldiss and David Wingrove’s history of the genre, Trillion Year Spree, finds its true beginnings in Mary Shelley’s Frankenstein: The Modern Prometheus. Two of the standard themes of science fiction descend from that book: We Are Become Gods, and Aren’t We Just Awesome; and We Are Become Gods: OMG, What Have We Wrought.
These standard narratives entwine and reinforce each other in ways that are highly emotionally satisfying to the purportedly Promethean heroes of technology who are remaking the world. They are the double helix of the current debates about the consequences of Large Language Models (LLMs). Are the engineers of AI creating the universe’s operating system? Or are they dooming the human race to extinction? Who knows? Either way, they are the protagonists of a cosmos-shaping future, in which we can safely ignore the messiness and piffling details of what is happening right now.
But what is actually happening right now, thanks to these technologies, is enormously important. To understand it better, I want to start pulling together an alternative account that already exists in loose form, in a series of posts over the next several weeks. You can find different elements of this account in Alison Gopnik’s description of LLMs as “cultural technologies” and in my and Cosma’s comparison of LLMs to markets and bureaucracies. What these arguments share is an emphasis on how LLMs are processors of human-created intelligence, rather than putative intelligences in their own right. That brings the political economy of these developments to the center of attention. It suggests that we should pay less attention to putative future transformations of the conditions of existence, and more to how these technologies resemble (and differ from) other cultural technologies in the past. Like those technologies, they may lead to substantial economic, social and political changes, with winners and losers, and struggles over who will get what.
One of the best starting points for understanding this (certainly the best written one) is Ted Chiang’s New Yorker piece arguing that LLMs are “blurry JPEGs of the web.” Ted thinks that this makes LLMs more or less useless; I disagree. But whichever side you take, the value of this understanding is that it starts from the human-produced knowledge that we find on the Internet and elsewhere, and asks what LLMs do to change it.
How are LLMs like blurry JPEGs? To build an LLM, start with a large corpus of textual information - web scrapes, pirated books, whatever other useful data you can get your hands on. Then break this corpus down into words and bits of words, and vectorize them. Run the vectors through a transformer, which weights them so that they represent the statistical associations between words in the corpus - so that, for example, it knows that “girl” and “daughter” are in some sense close to each other. Congratulations! You now have an LLM - an imperfect but useful model of the relationships between words and wordlets that were in the massive pile of text that you dumped in at the beginning.
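To make the “statistical associations” idea concrete, here is a toy sketch in Python. It is emphatically not a transformer, and nothing like a real training pipeline - just the simplest possible stand-in, counting which words share contexts in a made-up five-sentence corpus and measuring how similar the resulting word vectors are.

```python
import numpy as np

# A tiny corpus standing in for "web scrapes, pirated books, whatever".
corpus = [
    "the girl went to school",
    "the daughter went to school",
    "the girl read a book",
    "the daughter read a book",
    "the engine burned diesel fuel",
]

# Build a vocabulary and count how often each word appears within a small
# window of every other word. Each row of `counts` is that word's vector.
vocab = sorted({w for sentence in corpus for w in sentence.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

window = 2
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

def cosine(a, b):
    """Similarity of two word vectors: 1.0 means identical contexts."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# "girl" and "daughter" appear in the same contexts, so their vectors are
# close; "engine" does not, so it ends up further away.
print(cosine(counts[index["girl"]], counts[index["daughter"]]))  # ~1.0
print(cosine(counts[index["girl"]], counts[index["engine"]]))    # much lower
```

A real LLM replaces the counting with learned vectors and many layers of attention, but the basic move - turning statistical regularities in a pile of human text into a geometry of associations - is the same.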
As it turns out, LLMs are pretty good at prediction, much of the time. They are not perfect (the map is not the territory). Sometimes, for example, ChatGPT extrapolates details incorrectly, based on what seems most likely, given its model of the text. When asked for a biography of me, it used to say that I was educated at Trinity College Dublin. It “knew” that my name was associated with having an academic career, and that I was Irish, making this a pretty likely prediction (albeit a wrong one). Now, boringly, its interface skips away from plausible and entertaining surmises, and simply summarizes my Wikipedia entry.
Like a blurry JPEG, an LLM is an imperfect mapping of vast mountains of textual information, which interpolates its best statistical guesses to fill in the details that are missing. LLMs also allow you to do fun mash-ups: prompt one, for example, to predict what would happen if you combined two wildly different genres of writing. Ask what Hamlet would look like if written in the style of a chemistry textbook and the LLM will even generate a little cod-Latin.
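In practice, a mash-up like that is just a prompt. A minimal sketch, assuming the OpenAI Python client (openai >= 1.0), an API key in the environment, and an illustrative model name:

```python
# A minimal sketch of a genre mash-up prompt. Assumes the OpenAI Python
# client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the
# model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Rewrite Hamlet's 'To be or not to be' soliloquy "
                   "in the style of an introductory chemistry textbook.",
    }],
)
print(response.choices[0].message.content)
```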
But - and this is the point - what you get from LLMs is a catalyzed compound of what human beings put in in the first place. If Ted (and Alison, and Cosma, and, far less convincingly, I) are right, these technologies are not going to give birth to superhuman entities. JPEGs are by definition incapable of bootstrapping themselves into real intelligence. Douglas Adams does describe a super-intelligent shade of the color blue in one of his books, but he’s joking. Some people do believe that LLMs, as they get bigger, are displaying ‘sparks’ of general intelligence. Others - including most of the experts I talk to at Hopkins and elsewhere - are profoundly skeptical about such claims.
Certainly, AI is physically possible, and perhaps LLMs could be one component of an intelligent agentic system, under some reasonable definition of intelligence. But that is not likely to be a problem in the short term.
The blurry-JPEG account entails that LLMs are fundamentally dependent on human-created knowledge, which they summarize, perhaps usefully but certainly imperfectly. This flatters engineers far less than the AGI story. They are not, as it turns out, becoming as gods. But they are still doing interesting and potentially very useful work. Blurry JPEGs that can summarize large bodies of human knowledge from a wide variety of potential vantage points, and can furthermore carry out certain kinds of combinatorial transformations, are potentially extremely handy things to have.
Equally, the important consequences of blurry JPEGs will not be the birth of putative godlike synthetic intelligences, but the reshaping of human society. These technologies may generate important improvements. Some things are becoming a lot easier to do. They may generate terrible problems. Some of the things that are becoming easier are things that we might reasonably prefer to remain difficult. The benefits and problems are not going to be distributed equally. Some people are likely to do very well out of LLMs and other forms of machine learning. Some people may find their lives substantially worse than before. And as people start considering the likely outcomes, they are, being people, going to push for outcomes that benefit themselves, sometimes at the expense of others.
To many readers, all this may seem completely obvious, because it is obvious, if you begin from the kind of starting point that folks like me usually do. But it isn’t at all obvious if you start from the actually existing AI debates that are happening in op-ed pages, panels at swishy international meetings and the like.
So over the next several weeks, I want to lay out some of the implications of a blurry JPEG account of the political economy of AI. This will draw on the work of other people, obviously, as well as my own half-formed ideas. Among the themes that I hope to cover (I make no absolute promises, and the order may chop and change) are:
The Summarization Society. How do new technologies for summarizing textual information change what we can and cannot do?
If Software Eats the World, What Comes Out the Other End? Generative AI, enshittification, and the scarcity value of high quality knowledge. [Published as After software eats the world, what comes out the other end?].
Who Gets What from LLMs. Fights over copyright as a proxy for the division of spoils. [Published as The Map Is Eating the Territory: The Political Economy of AI].
If Markets Could Speak. Big collective systems and the pitfalls of human cognition. [Published as Kevin Roose’s Shoggoth].
The Stories in the Lazaret. What LLMs’ output does, and does not, have in common with fiction. [Published as Even if AI makes art, it may be bad for culture].
Vico’s Singularity. Vernor Vinge is a fun science-fiction writer - but a mediocre guide to the future of AI. [Published as Vico’s Singularity].
These planned posts are less intended to add up to a single cohesive whole than to provide different starting points for understanding a very messy set of ongoing historical processes. They don’t suggest a single, neat narrative arc, but talk about different bundles of complex relations. And of course, they may be wrong! But thinking in this way seems to me a better path forward than the endless recapitulation of genre clichés. Onward!
This is promising, and every attempt to frame AI politics in terms of a struggle between different interests, instead of a campaign to save the world, is extremely useful.
If you are interested, I have tried to sum up the political economy of AI copyright here: https://matteonebbiai.substack.com/p/three-ai-copyright-trade-offs
A promising sketch of what sounds like a great project. I especially like focusing on the emerging political economy of cultural technologies as a way of pushing the bullshit artists and those trapped in old, worn-out narratives to the side to ask questions about power. Two quotes for you:
"Artificial Intelligence assumes the past has the answer to the future’s questions." Jade E. Davis, The Other Side of Empathy, Duke Press, 2023.
“Asking whether AI can be conscious in exactly the same way that a human is, is similar to asking whether a submarine can swim.”
The Morphospace of Consciousness: Three Kinds of Complexity for Minds and Machines, Xerxes D. Arsiwalla, Ricard Solé, et al., NeuroSci 2023, 4(2), 79-102.