Marion Fourcade and Kieran Healy’s new book, The Ordinal Society, has just come out, and it’s great! I’m biased: Marion is a co-author. Kieran, like Dan Davies from last week, was my co-blogger for well over a decade at Crooked Timber (as was my sister Maria, co-author of a third great piece). It’s been a good week for CT-OGs and their co-authors …
I’ve been waiting eagerly for The Ordinal Society to be published, not just because it is important in its own right, but because it provides a different and extremely useful way to understand the consequences of soi-disant AI. Marion and Kieran ask how AI and related algorithmic technologies shape individuals and their dispositions. How are we likely to think of ourselves, and of others, as these technologies take further root?
This is a different perspective on AIs as cultural technologies from those of scholars like Alison Gopnik, Hugo Mercier and Scott Atran, whom I’ve written about previously. Sociologists don’t usually ask questions about the relationship between human culture and the more-or-less invariant cognitive architectures of our brains. Instead, they inquire as to how collective institutions and culture shape the individual selves that inhabit them, and vice versa. In this understanding, the “self” and “culture” co-constitute each other.
The drawback of this approach is that it makes it hard to identify what is causing what in any systematic way. The advantage is that everyday experience suggests that such feedback loops between self-understanding and broader culture are ubiquitous. And emphasizing this two-way feedback allows The Ordinal Society to distinguish itself, politely but firmly, from cruder sociological accounts that have little room for individual action or independence.
There’s an old joke, attributed to James Duesenberry, that economics is all about how people make choices, while sociology is about how they don’t have any choices to make. And for sure, some sociologists studying AI don’t provide much scope for human agency. In a previous post, I quoted Shoshana Zuboff’s take that AI and algorithms are a kind of industrialized mind control:
people have become targets for remote control, as surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd and modify action in the direction of commercial objectives. … “We are learning how to write the music,” one scientist said, “and then we let the music make them dance.” This new power “to make them dance” … works its will through the medium of ubiquitous digital instrumentation to manipulate subliminal cues, psychologically target communications, impose default choice architectures, trigger social comparison dynamics and levy rewards and punishments — all of it aimed at remotely tuning, herding and modifying human behavior in the direction of profitable outcomes and always engineered to preserve users’ ignorance.
But algorithms don’t succeed simply because they brainwash people. Very often, they succeed because they are giving people something that they want. As Kieran noted in 2017:
In his book, The Sneetches (1961), Dr Seuss discusses the disruptive entrepreneur Sylvester McMonkey McBean, a pioneer in the development of smart devices that satisfy the needs of socially connected groups with strong community values:
“Just pay me your money and hop right aboard!”
So they clambered inside. Then the big machine roared.
And it klonked. And it bonked. And it jerked. And it berked.
And it bopped them about. But the thing really worked!

McBean’s device was a pernicious technology of social classification. But I think it’s important to keep in mind that, as Seuss points out, the thing really worked. It really did put stars on the bellies of the Sneetches who had none upon thars, and they loved it. If it hadn’t really worked it would have been pernicious as well, just in a different way.
The Sneetches with stars on their bellies wanted to be distinguished from those that didn’t have them. And vice versa! Neither star-bellied Sneetches nor unadorned ones perceived the big klonking machine as a vast Foucauldian apparatus to tune, herd and modify human action in the direction of commercial objectives, but as a device that allowed them to differentiate themselves from others in a somewhat competitive way.
This, I think, is the key insight behind The Ordinal Society. Technologies like AI are both pernicious engines of desire-shaping and miracle-technologies that produce just what we want, at one and the same time. We are perpetually eager to discover how we compare to others, and how others compare to us. Hence, the “ordinal society.” Ordinal rankings tell us whether something is first, second, fifth, eleventh or whatever, according to some schema of measurement, which might correspond to the real world, or to some artificial or synthetic measure, or even to some arbitrary and entirely notional gradation. Many aspects of the algorithmic economy involve this kind of ranking, and sometimes this rankles. But sometimes it doesn’t.
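For readers who like things concrete, here is a minimal sketch, in Python, of how a cardinal score collapses into an ordinal rank. This is my illustration, not the book’s; the names and scores are invented:

```python
# A toy sketch (mine, not the book's): how cardinal scores collapse
# into ordinal ranks. The names and scores are invented.
scores = {"alice": 0.92, "bo": 0.87, "carol": 0.87, "dev": 0.31}

# Sort by score, highest first. Ties are broken arbitrarily, which is
# part of what makes ordinal rankings feel artificial or notional.
ranked = sorted(scores, key=scores.get, reverse=True)
ranks = {name: position + 1 for position, name in enumerate(ranked)}

print(ranks)  # {'alice': 1, 'bo': 2, 'carol': 3, 'dev': 4}
```

Bo and Carol have identical scores, yet one must come second and the other third: the ranking imposes an order that the underlying measure doesn’t contain.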
New technologies are creating new ways to categorize and rank people and things, automatically, speedily, and at scale. This has many important consequences, because ranking systems organize key aspects of human society. Complex societies, for better or worse, require McBean machines to work. As markets and societies grow beyond small communities of people who know and trust each other, they come to depend on technologies of classification - machineries that identify items, people, or situations as members of this or that broader class. If markets and bureaucracies are to work at scale, they need to be able to categorize this load of wheat as grain of a particular quality, or that individual as an immigrant with permission to work. The traditional machineries for doing this include laws (both common and formal), bureaucratic procedures, measurement devices, and specialized standard-setting institutions, such as the international Codex Alimentarius (which includes, for example, a graded scale for classifying stages of putrescence in fish).
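To make that concrete, here is a toy classification machinery in Python, grading a load of wheat from measured properties. The grade names and thresholds are invented for illustration; they are not drawn from the Codex or any real standard:

```python
# A toy "machinery of classification": assign a load of wheat to a
# broader class from measured properties. The grades and thresholds
# below are invented, not taken from the Codex or any real standard.
def grade_wheat(protein_pct: float, moisture_pct: float) -> str:
    if moisture_pct > 14.5:
        return "rejected: too damp to store safely"
    if protein_pct >= 13.0:
        return "grade 1"
    if protein_pct >= 11.0:
        return "grade 2"
    return "feed grade"

print(grade_wheat(13.4, 12.0))  # grade 1
```

The point of such machinery is that a buyer in another country never needs to inspect the wheat: the class label travels in its place.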
And humans love to turn classification systems into ranking systems. The high culture version of this is the novels of Marcel Proust, with their exquisitely fine distinctions between people from different backgrounds and social categories, all looking to maintain or improve their position through acquiring the right connections and displaying the right dispositions. The not-quite-so-high culture version is Jack Vance’s slyly hilarious science fiction novel, Night Lamp, where “strivers” compete to ascend a complex social hierarchy of affinity groups, climbing through the Zonkers, the Bad Gang, the Sick Chickens and such to the dizzying social peak of membership in the rigidly exclusive Clam Muffins.
Now, as The Ordinal Society tells us, we have a whole new set of technologies for measuring and classifying people, things, movies, songs, and ephemeral concatenations of this or that thing happening at this or that time, quickly and at scale. Marion and Kieran’s book investigates how these technologies are creating new social rankings, some highly visible, others all the more powerful for their near invisibility. As they put it:
An ordinal society creates order through automated ranking and matching. The apparent power of its methods justifies the apparent rightness of its hierarchies and categories. Interaction and exchange are built around a flow of personally tailored, data-driven possibilities. For people who are “well classified,” the results are often quite gratifying and carry a sense that what is personally convenient is also somehow morally correct. For those who are not, the outcomes can be more punitive, but are no less moralized.
This involves power, but of a rather more subtle form than mind control:
the growth of these services and the digital economy in general was not simply imposed on people. The tech landscape is littered with the wreckage of huge investments that were catastrophic failures, rusted hulks of grand schemes that were a gigantic waste of money because people simply did not care to use them.
Half by accident, platform companies stumbled on the value of feedback, as they tried to build reputation systems that could allow markets to run at scale, without much need for expensive policing. That, in turn, opened up the possibility of “producing groups or categories through a rank-and-match process, and of habituating people to think in terms of that process,” creating an engine both for capturing the “constant flux of social life” and recomposing it in ways that were both more measurable, and more likely to be profitable. Again, this wasn’t the product of simple manipulation by platform companies, nor of the free and unchanneled choices made by platform users, but of the concatenation of, and sometimes collision between, the two.
it is not that people were “forced” to tell Facebook their prayers, or that it “colonized” or “appropriated” those activities. Often, users introduce wider aspects of sociality and social practice into this world, and then the company realizes this is a behavior that might be fashioned into something more delimited, organized, data-generating, and ultimately profitable. Something that can generate an easily manageable order, fit for an algorithm to digest, analyze, and sell to advertisers.
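The machinery that set all this in motion can be surprisingly simple. Here is a minimal sketch, mine rather than the book’s, of the kind of reputation system platforms stumbled into - individual ratings compressed into a single comparable score that lets strangers transact without expensive policing:

```python
# A minimal reputation system of the kind platforms stumbled into:
# individual ratings are compressed into one comparable score.
# This sketch is illustrative; real systems are far more elaborate.
from collections import defaultdict
from statistics import mean

ratings: dict[str, list[int]] = defaultdict(list)

def rate(seller: str, stars: int) -> None:
    ratings[seller].append(stars)

def reputation(seller: str) -> float:
    # Unrated sellers get a neutral prior of 3 stars. Even this small
    # choice is a consequential, mostly invisible design decision.
    return mean(ratings[seller]) if ratings[seller] else 3.0

rate("seller_a", 5)
rate("seller_a", 4)
rate("seller_b", 2)
print(reputation("seller_a"), reputation("seller_b"))  # 4.5 2
```

Everything interesting happens downstream, once people start behaving with an eye to the score.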
The oversharing economy turned out to be wildly profitable for companies like Facebook. And it was supercharged by deep learning, which made it far easier for tech companies to classify people, things and situations on the fly. As these new machines replace older, clunkier technologies, they generate new categories at scale, taking data from society, turning it into a distorted representation that heightens some social aspects at the expense of others, and reflecting it back:
By inciting organizations to treat all data as useful, and by developing tools with a hitherto unmatched ability to find patterns, software engineers have created a machine for social representation and reproduction endowed with new and somewhat alien powers. Together, the social origins of training data and the unblinking eye of a deep-learning classifier combine first to translate the social world into the digital one and then, potentially, to recombine and reconfigure categories based on what it sees.
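A toy version of that recombination, under my own gloss rather than the book’s: hand an unsupervised learner some behavioral traces and it will invent categories that no human named in advance. The data here is random noise, purely for illustration:

```python
# A toy sketch of "recombining and reconfiguring categories": an
# unsupervised learner invents groupings no human named in advance.
# The data is random, purely illustrative; real pipelines run on
# clicks, purchases, locations, and the rest of the data exhaust.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
traces = rng.normal(size=(200, 5))  # 200 "users", 5 behavioral features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(traces)
print(np.bincount(labels))  # sizes of three brand-new "kinds of person"
```

The clusters are statistically real but socially unnamed; it is their downstream use, in pricing, targeting or ranking, that makes them matter.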
New classes of people, and new identities, emerge through a kind of digital stratification, which remakes social networks and markets to the advantage of those whom the algorithms rank as valuable, and the disadvantage of those whom they do not. For those possessed of “eigencapital,” the distributed favor of the algorithms, this is delightful, and often invisible. You may not realize that your path is being smoothed by the machines of loving grace, and the path may be all the more pleasant for that. The experience is notably less pleasant for those who are conducted along more jarring trails towards less attractive choices, although even for them, there are benefits. These technologies do, after all, accomplish things at scale - maps, matching, search - that would have seemed wildly implausible a generation or two ago.
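The “eigen” in eigencapital echoes eigenvector methods like PageRank. On my reading - a gloss, not the book’s formalism - the idea is that standing accrues to those favored by the already well-favored, which is exactly what a leading eigenvector computes:

```python
# A toy gloss on "eigencapital" (mine, not the book's formalism):
# standing as the leading eigenvector of a who-favors-whom matrix,
# so that favor from the already-favored counts for more.
import numpy as np

# favor[i, j] = how much actor j's systems favor actor i (invented numbers)
favor = np.array([
    [0.0, 0.8, 0.7],
    [0.2, 0.0, 0.3],
    [0.1, 0.2, 0.0],
])

eigenvalues, eigenvectors = np.linalg.eig(favor)
leading = eigenvectors[:, np.argmax(eigenvalues.real)].real
standing = np.abs(leading) / np.abs(leading).sum()
print(standing.round(2))  # actor 0, favored by the favored, dominates
```

Actor 0’s advantage compounds: the machinery smooths their path precisely because it has already been smoothed.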
As Marion and Kieran say, the ordinal society is all about how individuals are classified and ranked by these vast machines, and how they respond to this process of classification. In earlier systems, people belonged to quite broad and generic classes, because the systems were incapable of fine distinctions. Now, they may be fit into categories that verge on the bespoke. This also creates an implied individual morality. Those who do well are perceived, and perceive themselves, as more virtuous in some ineffable sense than those who do not. People don’t understand the algorithms that shape their lives - sometimes even the algorithms’ creators cannot properly fathom their decisions (transformers are arguably as close to ‘black boxes’ as any system that humans have invented). But they try their best to placate them, through idiosyncratic mixtures of prescribed behaviors and folk wisdom.
The Ordinal Society shares a lot of ground with The Unaccountability Machine. It too is a vision of a complex system that has become a “tangled and unfair mess” - but it is much more skeptical than Dan’s book that there are technical fixes. The technologies that categorize us shape our culture, our dispositions, our understandings of who we are, in ways that may be self-reinforcing. This doesn’t deny human agency, but it tends to channel it in specific directions, prompting us to think of ourselves as individuals rather than beings capable of exercising collective agency. It is a vision of cybernetics not as a mode of collective control, but as a mode of extirpating collectivity in favor of individualized feedback loops in which we each have our own personal relationship with the algorithms that shape our lives.
I do wonder whether there are more possibilities for change than the final sections of the book suggest. Another sociological perspective might be that, for better or worse, Max Weber’s disenchantment is being rolled back a little. We live in a world that we do not, and on some level cannot, understand, and are having our noses rubbed in it by black-box algorithms. It’s an age of enthusiasms, in the original sense of the word, where fervors spread through the population like contagious viruses. Of course - to make the implicit explicit - there is no necessary reason to expect that such changes will be good, simply that there are other forces at play besides the tendencies toward individuation that Marion and Kieran identify. But none of that is to deny the importance of what The Ordinal Society describes and explains.
NB - as with other posts on work by people I know, all mistakes are mine - I haven’t consulted with them before writing, on the basis that they shouldn’t be held responsible for any errors of interpretation.
Does the book mention how our education system here, and others abroad, have been ordinal for a long time? Does it make any suggestions on how to improve that situation in the near future in our education systems?
"economics is all about how people make choices, while sociology is about how they don’t have any choices to make"
Like many things AI, this takes me back to the 1970s, when I learned about discriminant analysis (Fisher) and conditional logit (McFadden). Discriminant analysis is all about classification (which group does X belong to), whereas conditional logit is all about choice (what factors make choice A more likely). Of course, being an economist, I was taught to cheer for McFadden, with the result that I've always been dismissive of machine learning, which is mostly classification.
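For what it’s worth, the commenter’s contrast can be put on toy data. In the sketch below, plain logistic regression stands in for McFadden’s conditional logit (which properly varies attributes across alternatives), so take it as an illustration of the two questions rather than of the two estimators:

```python
# Classification vs. choice on the same toy data. Illustrative only:
# plain logistic regression stands in for a true conditional logit.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                  # two observed attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # group membership / choice

# Fisher's question: which group does each observation belong to?
lda = LinearDiscriminantAnalysis().fit(X, y)
print("classified as:", lda.predict(X[:3]))

# McFadden's question: how do attributes shift the odds of choosing A?
logit = LogisticRegression().fit(X, y)
print("effect on log-odds:", logit.coef_.round(2))
```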