The political scientist James Scott died last week. I only knew him through email - an occasional and irregular correspondence, mostly involving unsuccessful attempts to organize discussion at political science conferences around his work. As he suggested in a biographical essay, “Intellectual Diary of an Iconoclast,” which just came out a few months ago, he was semi-detached from his academic discipline.
I’ve wandered away from political science, though I could argue that political science has wandered away from me. I am honored even to be seen as a specialist, and probably as much to be embraced by anthropology and history.
The world was better for his iconoclasm. Scott wrote far more beautifully than political scientists are supposed to write, and his ideas and work were too big to fit into any discipline. Although its arguments were largely rooted in the past, his book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed has shaped how we think about technology.
Seeing Like a State is important because of how it sets up the problem of modernity. Scott was a critic of the vast impersonal systems - bureaucracies and markets - that modern society depends on. He believed that they prioritized the kind of thinking that comes easily to engineers over the kind that comes readily to peasants and craftsmen, and that we had lost something very important as a result.
In Scott’s account, both governments and long distance markets “see” the world through abstractions - technical standards, systems of categories and the like. A government cannot see its people directly, or what they are doing. What it can see are things like statistics measuring population, the number of people who are employed or unemployed, and the percentages of citizens who work in this sector or that. These measures - in numbers, charts and categories - allow it to set policy.
Such knowledge grants its users enormous power to shape society - but often without the detailed, intimate understanding that would allow them to shape it well. There is a lot of social reality that is described poorly, or not at all, by categories or statistics. Even so, as governments and markets established their power, they not only saw the world in highly limited ways but shaped it so that it conformed better to their purblind understanding, ironing out the idiosyncrasies and apparent inefficiencies that got in the way of their vast projects. The state did not just ‘see’ its society through bureaucratic categories, but tried to remake this society so that it fit better with the government’s preconceptions.
So too for the abstractions and general categories that long distance markets depend on, as the historian William Cronon observed in his great book on nineteenth century Chicago, Nature’s Metropolis (Scott was a fan). As another scholar observed of Chicago’s late twentieth century markets, abstract-seeming financial models may be engines, not cameras, making the economy rather than merely reflecting it.
This abstraction of the world’s tangled complexities into simplified categories and standards underpinned vast state projects, and supported enormous gains in market efficiency. We could not live what we now consider to be acceptable lives without it, as Scott somewhat grudgingly acknowledged. It also often precipitated disaster, including Soviet collectivization and China’s Great Famine.
So what does this have to do with modern information technology? Quite straightforwardly: if you read Scott, you will see marked similarities between e.g. the ambitions of 1960s bureaucrats, convinced that they could plan out countries and cities for “abstract citizens,” and the visions of Silicon Valley entrepreneurs, convinced that algorithms and objective functions would create a more efficient and more harmonious world.
Scott focuses on officials in developing countries, who were starry-eyed about “planning.” Many of their notions came second-hand from the most striking example of high modernism, the effort of Soviet bureaucrats to use production statistics and linear programming to make the planned economy work. This provides the most obvious connection between what Scott talks about and the algorithmic ambitions of Silicon Valley today. A distinct whiff of “Comrades, Let’s Optimize!” lingers on, for example, in the airy optimism of Facebook executive Andrew Bosworth’s infamous “We connect people. Period” memo.
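The linear programming reference is concrete enough to sketch. The toy example below is my own illustration, not anything from Scott or from Soviet practice; the goods, coefficients and resource limits are all invented. It shows what a planner's-eye view of an economy looks like once it has been reduced to an optimization problem: anything that cannot be written down as a coefficient simply does not exist for the plan.

```python
# A toy "planner's-eye view": two goods, two scarce inputs, one objective.
# All numbers are invented for illustration. Whatever the planner cannot
# quantify - soil, weather, local know-how, what people actually want -
# has no place in the model at all.
from scipy.optimize import linprog

value = [-3.0, -2.0]            # negated because linprog minimizes; the plan maximizes "output value"
inputs = [[1.0, 2.0],           # steel required per unit of good A, good B
          [3.0, 1.0]]           # labor required per unit of good A, good B
available = [100.0, 120.0]      # total steel and labor the plan assumes exist

plan = linprog(c=value, A_ub=inputs, b_ub=available, bounds=[(0, None), (0, None)])
print(plan.x)                   # the "optimal" production targets, as the plan sees them
```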
Both the old ambitions and the new are bets on the universal power of a particular kind of engineering knowledge - what Scott calls techne, the kind of knowledge that can “be expressed precisely and comprehensively in the form of hard-and-fast rules (not rules of thumb), principles, and propositions.” Scott describes the limits of techne in ways that resonate today. The grand failed projects of the mid-to-late twentieth century - vast rationalized cities like Brasilia laid out according to plans that seemed almost to be the squares of a chessboard; efforts to displace peasants and plan agriculture at scale - are close cousins to Facebook’s failed ambitions to build a world of shared connections on algorithmic foundations, and the resulting social media Brezhnevism of today.
Hence, 20th century state planning and 21st century social media evangelism are different flavors of what Scott called “high modernism … a sweeping, rational engineering of all aspects of social life in order to improve the human condition.” High modernism was both a faith and a practice. It turned rich and diffuse social relations into something much thinner, which could be measured and observed.
Against this kind of knowledge, Scott suggested the value of metis - “the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances.” This is the kind of tacit knowledge that peasants come to build about their land and the weather, or that people in less regimented societies accumulate about how to live with others in tolerable peace. Scott - an anarchist - greatly preferred this latter kind of knowledge, and the societies that valued it more, to the kind of world we live in today.
Scott provides intellectual ammunition for those who want to understand what Silicon Valley has in common with past grand efforts to improve the human condition. His work is a fountain of useful comparisons. Marion Fourcade and I have an article, “The Moral Economy of High-Tech Modernism,” which riffs on the parallels and differences between the classification engines of machine learning and the older categorizing machineries of bureaucracy and the market. Maria Farrell (my sister) and Robin Berjon's essay, Rewilding the Internet, turns an example from Scott into a powerful general metaphor to explain the difference between an actual lived ecology and the “plantations” of social media: “highly concentrated and controlled environments, closer kin to the industrial farming of the cattle feedlot or battery chicken farms that madden the creatures trapped within.” Eugene Wei talks about how Elon Musk’s “reign at Twitter resembles one of James Scott’s authoritarian high modernist failures.” Barath Raghavan and Bruce Schneier write about “Seeing Like a Data Structure,” explaining how:
Computing supercharges the creative and practical use of abstraction. This is what life is like when we see the world the way a data structure sees the world. These are the same tricks Scott documented. What has changed is their speed and their ubiquity.
And in one of my favorite examples, Max Gladstone’s Craft series of fantasy novels deploys Scott’s ideas (among many other ideas) to talk about how markets and technology have devoured the world, and what we have lost and gained.
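Raghavan and Schneier's framing is concrete enough to put in a few lines of code. Here is a minimal sketch of my own (not theirs; the record type and its fields are invented) of how a system “sees” a person: a handful of legible fields, with everything that does not fit a field invisible to every policy built on top of it.

```python
# A minimal sketch of a "data structure's-eye view" of a person.
# The fields are invented for illustration; the point is that downstream
# logic can only ever act on what the record contains.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    national_id: str
    sector: str            # one of a fixed set of official categories
    employed: bool
    annual_income: int     # a single number standing in for a whole livelihood

def eligible_for_support(person: CitizenRecord) -> bool:
    # Policy can only branch on the legible fields - nothing else exists here.
    return (not person.employed) and person.annual_income < 15_000

print(eligible_for_support(CitizenRecord("A123", "agriculture", False, 9_000)))  # True
```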
I don’t have any idea what Scott thought about all these people repurposing his ideas and work to describe technology. I always wanted to ask him - but I didn’t feel I knew him well enough to enquire by email. He was known for his kindness and generosity, but also for forcefully expressed impatience with people who quibbled with him in the wrong ways. I always worried I might fall in the latter category.
And he might well have been right to be impatient with me! I can’t overstate how much I’ve learned from Scott - but I do have quibbles and more than quibbles. His ideas were more than just important. They provide a profound, largely consistent and grounded philosophy of the world and of what is going wrong with it. But like all such philosophies, it has its limits and leaves some things poorly captured.
In my partial and perhaps mistaken understanding (again - I never properly talked with him), the great strength of Scott’s work was the power of its critique, while its biggest weakness was its lack of a compelling general alternative. The parts of Scott’s work that are likely to live, exfoliate and cross-pollinate with other people’s ideas are the descriptions and analysis of what goes wrong when you treat complex societies as so many engineering problems. It is striking how well his arguments describe and explain the problems of technologies that had not even begun to take root when he was writing. But we are never going to go back to societies organized around the peasant knowledge and closely observed but inarticulate intelligence of immediate circumstances that he loved. More or less no-one wants to: not Scott, and not, by and large, the peasants themselves.
So how do we preserve the better parts of modernity while mitigating the worse? Scott’s answers to this question were notably less compelling than his critique. He reluctantly allowed that the collective goods and liberal provisions that make modern life tolerable depend on the vast machineries of state and market. He rightly invoked the importance of democracy as a check on the excesses of bureaucracy, and an alternative source of information. But he never advanced a compelling framework putting these together in ways that came close to the force and originality of his criticisms.
One possible answer to the question of what to do is provided by Suresh Naidu: we should build high modernism better and different.
We can’t take Scott too seriously and still hope to have complex, technologically advanced societies. As Foucault observed, only somewhat critically, what Chicago school economics offered is a particular set of philosophical moves that allow governance, abstracting from complex motives and environments, and reducing everything to incentives and rationality, similar to disciplines like psychology and criminology in earlier moments in history. … Perhaps thankfully, economics is done being a master metaphor for governance for the foreseeable future. Perhaps one post-neoliberal philosophical move will come from computer science, which will operationalize its own blindspots into the rational agents it is constructing. But the emergence of computational social science, algorithmic mechanism design, and artificial intelligence have further institutionalized economics into some of the algorithms and protocols of the digital economy, with scholars actively working on how to also incorporate principles of justice and fairness into these.
Here, one of the most interesting outstanding questions is whether new forms of machine learning can map some of the kinds of tacit knowledge that Scott (and Hayek, and Michael Polanyi) valued, better than the rules and formalizations of classical high modernism. An LLM is a representation and simplification of the text that it battens on, but it is not a rule or anything like one. In other words, LLMs and models like them are not techne, even if they are not metis either. They are powerful but incomprehensible. Will their employment worsen or improve the problems that Scott describes, or generate new possibilities or new problems outside his framework? These are open questions.
Another is to try to figure out hybrid approaches. I’m not at all convinced that Ohlhaver, Weyl and Buterin’s proposal to rebuild social relations on the foundation of “soulbound tokens” would work. But one plausible interpretation of it is this: it seeks to provide the technological building blocks that would allow people to build Scott-type communities at greater scale. The soulbound token approach moves away from the high modernism of most crypto - the assumption that you can simply replace trust with cryptography, zero knowledge proofs and preference revelation mechanisms - to the belief that you want to foster and encourage human trust relations, and build technologies around them rather than supplanting them. These tokens are - if I understand right - supposed to facilitate community building, without dictating the values or form of the community.
So too, Maria and Robin’s proposal that technologists and regulators should think about the Internet as an ecology rather than a problem in abstract economics - an essay that really needs to grow up into a book, and soon:
We need to stop thinking of internet infrastructure as too hard to fix. It’s the underlying system we use for nearly everything we do. …It’s how we organize, connect and build knowledge, even — perhaps — planetary intelligence. Right now, it’s concentrated, fragile and utterly toxic.
Ecologists have reoriented their field as a “crisis discipline,” a field of study that’s not just about learning things but about saving them. We technologists need to do the same. Rewilding the internet connects and grows what people are doing across regulation, standards-setting and new ways of organizing and building infrastructure, to tell a shared story of where we want to go. It’s a shared vision with many strategies. The instruments we need to shift away from extractive technological monocultures are at hand or ready to be built.
All this suggests that you could reframe my criticisms of Scott in more positive ways. His contribution is not to provide a systematic framework for getting out of the hole we have dug ourselves into, but to plant some of the seeds for a different intellectual ecology, in which others will take up his thoughts, use them to argue, argue with them and with each other, and hence discover aspects of the world that they would never have seen otherwise. That would be as fine a legacy as any thinker could want.
As much as my priors made me love the ideas in Seeing Like a State, I felt the same way about the "well what shall we do then?" take. Most implementations of high modernism we have today are (IMO) net *good*, not bad, even though there've been downsides. I've always been a both-sider on many things - every system comes with pros and cons. You make trades. Somewhere between living-like-peasants with rich tribal knowledge and the authoritarian nanny state is some medium we can make work (we already do).
To me the key is avoiding the guardrail extremes, but algorithm culture seems to constantly pull people to the edges: one group wanting to go back to the middle ages, the other to drag the world into a global New Economic Order. One group crows about Malthusian overpopulation and climate disaster, the other about unmitigated technological acceleration. Maybe we could meet in the middle...
Henry, thank you for this, particularly your "OK, then what shall we do?" criticism.
I spent my career modelling enormous data sets on health systems to try to get better care to more people. Now that I am in end-of-life care, I have a close and personal view of the truths Scott pointed to.
Unfortunately, harrowing suffering and bottomless unmet needs for care persist; I do not see how Scott's approach could address them. Therefore, I remain committed to my life's work. Doctors should read Seeing Like a State, but they also need to follow the evidence, such as it is.