Rabbit-holes, zombies and platform pathologies
Engagement-maximizing algorithms are less consequential than you might think.
Cosma Shalizi and I have a new piece up in Communications of the ACM (for the moment read the web version, rather than the PDF which is temporarily screwed up). I say ‘new,’ but it’s based on an idea we have been playing with for years.
Lots of people argue that we’ve been plunged into online hell by social media algorithms: the various forms of machine learning enabled optimization that Facebook, YouTube, Twitter etc started using at scale about a decade ago. We argue instead that hell is other people and ourselves, under the wrong circumstances.
Platform companies rebuilt much of their business models around the thesis that algorithms would shape people’s attention flows in highly profitable ways. Companies wanted people to keep on clicking, scrolling, and watching videos, so that they would see ads. They created machine learning algorithms to predict what content would keep users scrolling or watching (or otherwise make them more profitable). These algorithms suggested new videos, for example (or, worse, autoplayed them), and updated their models of people’s preferences, serving up more of the same if users kept watching or scrolling.
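If you want the intuition in code, here is a minimal sketch of that loop - emphatically not any platform’s actual system, just an illustrative epsilon-greedy bandit in which the category labels and the “pull” numbers are all made up for the example:

```python
import random

CATEGORIES = ["cooking", "sports", "politics", "conspiracy"]  # illustrative labels

estimated_engagement = {c: 0.5 for c in CATEGORIES}  # the platform's running guess
serve_counts = {c: 0 for c in CATEGORIES}

def recommend(epsilon=0.1):
    """Mostly exploit the best-looking category; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=estimated_engagement.get)

def update(category, watched):
    """Keep a running average of how often this category kept the user watching."""
    serve_counts[category] += 1
    n = serve_counts[category]
    estimated_engagement[category] += (float(watched) - estimated_engagement[category]) / n

# A simulated user who, as in the rabbit-hole story, finds one kind of
# content more gripping than the rest (the probabilities are invented):
true_pull = {"cooking": 0.4, "sports": 0.4, "politics": 0.5, "conspiracy": 0.7}
for _ in range(1000):
    category = recommend()
    update(category, watched=random.random() < true_pull[category])

# After enough rounds the loop usually settles on the grippiest category.
print(max(estimated_engagement, key=estimated_engagement.get))
```

The point of the sketch is the feedback structure, not the details: whatever the simulated user keeps watching gets served more often, which generates yet more evidence that it “works.”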
The notion was that engagement-maximizing algorithms* would create a virtuous feedback loop. Consumers would benefit: they would see more of what they liked. Advertisers would benefit: they would get more eyeballs for their ads. And the platform company would benefit from the real hard money paid by advertisers. Everyone would end up happy!
At least, that was the story. In recent years, people have started telling more alarming tales, in which these algorithms worked like the rabbit hole in Lewis Carroll’s Alice’s Adventures in Wonderland, propelling their victims into a topsy-turvy underworld. Under this account, people are often more ‘engaged’ by shocking, alarming or disturbing content. So if an algorithm serves up an extremist video, they are likely to keep on watching, encouraging the algorithm to serve up more of the same, and more, and more, and more. Gradually, inexorably, a perfectly normal person could be transformed into a QAnon zombie.
And there are even more frightening arguments than this! See, for example, Shoshana Zuboff’s 658-page tome, The Age of Surveillance Capitalism (I am not a great fan of the book, but I bow in awe before her invention of the catchphrase in the title). Zuboff claims that social media algorithms are nothing less than industrialized mind control.
Now people have become targets for remote control, as surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd and modify action in the direction of commercial objectives. … “We are learning how to write the music,” one scientist said, “and then we let the music make them dance.” This new power “to make them dance” … works its will through the medium of ubiquitous digital instrumentation to manipulate subliminal cues, psychologically target communications, impose default choice architectures, trigger social comparison dynamics and levy rewards and punishments — all of it aimed at remotely tuning, herding and modifying human behavior in the direction of profitable outcomes and always engineered to preserve users’ ignorance.
In this version, the rabbit-hole has widened out to become a Tunnel Under the World. At long last, we have invented the Advertising Mind Control Machine from the classic 1950s sci-fi story, Don’t Create the Advertising Mind Control Machine. Dance, zombie, dance!
But is any of this actually right? Over the last few years, we’ve seen more and more research that complicates what initially seemed like a powerfully compelling story. What evidence we have suggests that it mostly isn’t the “default choice architectures” or “subliminal cues” (another worry with strong 1950s SF energy) that manipulate people to behave in particular ways, but their own wants and desires.
This evidence isn’t perfect. Platform companies keep on changing their algorithms, so that studies done now may not tell us much about what the algorithms were doing five years ago. Moreover, for a variety of reasons, it is really hard to disentangle people’s pre-existing desires from the ways in which social media channels those desires, which is what you would need to do to answer questions about the causal consequences of these algorithms. Social scientists can’t look at the alternative timeline in which these algorithms were never invented. Nor can we run experiments in which one large group of people is manipulated by these algorithms and another, completely isolated control group is not.
That is why Cosma and I go in a very different direction. We suggest that in the absence of science fiction wormholes to alternative realities, or grand Mad Science experiments on the human species, the best we can do is create simple models of what the world would look like without social media algorithms.
The article imagines what the world would look like if attention-maximizing algorithms had never been invented, so that we still had the kinds of social media that people had a decade ago. Back in 2012, people could publish online easily, but had to find each other’s work through links, old-style Google search and similarly clunky technology. The article also assumes that people look for information in psychologically realistic ways (building on the work of Hugo Mercier and Dan Sperber). That is, people are inclined to look for “rationalizations” which confirm what they want to believe, but they still have some capacity to change their minds if they are forced to consider compelling counterarguments and evidence. And we (for particular values of “we” that mean “Cosma”) construct a very simple mathematical model of how this all might work out.
The model predicts a fairly straightforward outcome. If people are able to search for evidence and arguments that confirm their biases, and to easily publish such evidence too, they will tend to glom together into large online communities around shared rationalizations, defend those rationalizations, and produce more of them. In other words: hell hath no limits, nor is circumscribed. You don’t need modern social media algorithms to act as Virgils, conducting people into our current online inferno. There are entrances everywhere, and all you need to find them is open search, easy self-publishing and ordinary human psychology.
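Here is a toy version of that dynamic - to be clear, not the actual model in the article, just a hedged illustration of the mechanism. Simulated agents search a pool of published rationalizations, republish whatever confirms what they already want to believe, and only write something new when search comes up empty:

```python
import random
from collections import Counter

def simulate(search_success=0.9, steps=5000, seed=0):
    rng = random.Random(seed)
    items = []            # published rationalizations: (id, camp the item flatters)
    shares = Counter()    # how often each rationalization has been (re)published
    next_id = 0
    for _ in range(steps):
        camp = rng.choice([-1, +1])          # what this agent wants to believe
        confirming = [i for i, c in items if c == camp]
        if confirming and rng.random() < search_success:
            # Found a congenial rationalization: republish it, with more
            # popular items more likely to be found in the first place.
            pick = rng.choices(confirming,
                               weights=[shares[i] + 1 for i in confirming])[0]
            shares[pick] += 1
        else:
            items.append((next_id, camp))    # write a fresh rationalization
            shares[next_id] += 1
            next_id += 1
    top5 = sum(count for _, count in shares.most_common(5))
    return top5 / sum(shares.values())       # fraction of attention on the top 5

print("easy search:", simulate(search_success=0.9))
```

When search succeeds most of the time, the republication step acts like preferential attachment restricted to congenial content, and a handful of shared rationalizations soak up most of the attention.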
If our model is right, we would likely be in much the same situation as we are at the moment, even if platform companies had never discovered machine learning. People would still be driven by their own wants to discover and create the kinds of shared rationalization that dominate online political debate today, and search and Web 2.0 type publishing would make discovery and sharing really easy. People aren’t programmable zombies. They are very good at fooling themselves under the wrong circumstances. And that remains constant, no matter what the algorithms are.
There are some important provisos and subtleties here. First and most obviously, don’t gloss over the ‘if the model is right’ bit. Models are no more and no less than formalized intuitions. If they start from the wrong place, they are likely to end up at the wrong destination. We think that our model is reasonable and plausible, but different assumptions might have led to quite different conclusions.
Second, “much the same situation” is subtly different from “the same situation.” Even if we are right, it is perfectly plausible that social media algorithms may have served as an important accelerant, making it easier, for example, for extremists to find each other. So perhaps, in our imagined alternative reality, the road to hell would have been slower and more circuitous, giving people more time to figure out how to respond.
Finally, our account does not absolve platform companies of blame! Indeed, it implies that their entire underlying philosophy is wrong. There is a notorious leaked memo by Facebook CTO Andrew ‘Boz’ Bosworth, laying out the company’s philosophy:
We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. … All of it … The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.
Maybe … not so much? Our model suggests that “connecting people” without paying specific attention to the kinds of connections you are building, and to the social consequences of doing it at scale, has some very obvious downsides. And that is not even to begin talking about the many other pathologies that afflict the platform economy.
Equally, there are some possible lessons. The Mercier and Sperber account of human psychology suggests that people are very inclined to get high on their own supply - but that they are also able to spot the flaws in other people’s preferred bullshit, and grudgingly capable of accepting when the other side scores a point.
That suggests a simple theory of why we are in trouble. In a world where communications are less efficient, people will still want to construct and share self-serving rationalizations. But they will find it hard to coordinate on the same ones, instead likely going off in myriad different directions. When it becomes easy to publish and find such rationalizations, they are likely to scale up quickly, and become more politically and socially consequential.
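The toy simulation above makes the same point if you read its search_success parameter as a stand-in for how efficient publishing and search are. Reusing simulate() from that sketch (the particular numbers are, again, purely illustrative):

```python
print("easy search:", simulate(search_success=0.9))  # attention concentrates on a few rationalizations
print("hard search:", simulate(search_success=0.2))  # attention fragments across many
```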
Equally, this understanding suggests that democratic politics are not irredeemably doomed to be a benighted clash between ignorant armies. If people are obliged to respond to each other’s better criticisms, rather than self-selecting into blobs of mutually reinforcing rationalizations, they are likely to end up better off. As Cosma and I have argued elsewhere, this is how democracy is supposed to work, and what it is supposed to do - not to arrive at some common truth, but to create a system of competition, where different groups are free to discover their interests and organize in pursuit of them, but where they also have to grudgingly take account of each other’s best arguments (and sometimes, if they can get away with it, steal them outright and pretend they were theirs in the first place).
We want (or, at least, Cosma and I want) online systems that support this kind of competition and discovery, and in particular that allow groups that have been systematically discriminated against to mobilize. We also want systems that do not systematically undermine the minimal beliefs that people need to agree to, if democracy is to reproduce itself. There is a lot of room for disagreement over the specifics of how this ought to work.
But if we are right, some directions are more promising than others. In particular, we need to recognize that communities of rationalization can have their upsides as well as downsides. More generally, when you realize that the names for such communities include ‘political parties,’ ‘social movements,’ and ‘elite expert consensus,’ you realize that you can’t actually get away from them. The trick instead is to get competing communities into some form of social and political relationship, where they have to grudgingly take account of each other’s useful criticisms.
And if we’re wrong, then hopefully others will come back with alternative models, compelling criticisms, crucial countervailing evidence and so on. One of the advantages of simple models over other forms of handwaving is that they are more likely to be usefully wrong when they are wrong - it is easier to see what their starting assumptions are, and what their logic implies, and correct or counter them accordingly.
* Actually, these algorithms don’t just maximize engagement - but it’s probably not worth getting into the complications in a post like this.