A few months ago, OpenAI announced a solicitation for proposals to set up “a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.” There were 10 grants of $100,000 each up for grabs. I didn’t put a bid in (I know people who did, and good luck to them!), partly because I’m crazed and overwhelmed with other stuff, and partly because I was pretty sure that OpenAI’s understanding of a good democratic process wouldn’t match up particularly well with mine.
There is a lot of talk about the relationship between democracy and AI, and I do plan to get stuck into that debate, just as soon as I have something that resembles a life again. But much of that conversation seems to me to be at best incomplete. It has a very limited understanding of what democracy is for, how it works, and what kinds of decisions it should make.
The OpenAI solicitation seemed to me to exemplify all of these shortcomings at once. First, “democracy,” in OpenAI’s understanding, was not about making authoritative decisions, so much as providing a thin patina of legitimacy (perhaps with a light sprinkling of polite second guessing on top). As the proposal put it, “[w]hile these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future.” Second, OpenAI adopted a very particular definition of democracy as involving deliberation and consensus seeking among individual citizens: “by ‘democratic process’ we mean a process in which a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process.” Finally, it seemingly wanted to restrict democratic decision making to the kinds of awkward political problems that OpenAI didn’t want to make controversial judgment calls on itself, and that didn’t interfere directly with its profit model:
For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user’s country, the user’s demographic, or something entirely different?
Shorter OpenAI: we want a very special kind of democracy that is non-binding, consensus seeking, and that confines itself to the kinds of ticklish issues that OpenAI would really prefer not to have to take responsibility for itself, thank you very much. Or even shorter OpenAI: could we please have democracy, only without the politics?
I imagine that a lot of academics applied for the grants, and not just because there is free money looking for a home. There is an obvious overlap between how many scholars think about democracy and how OpenAI does. In particular, there is a lot of lively debate around ‘deliberative democracy’ - the notion that citizens themselves should figure out complex political issues by talking them through. This goes together nicely with “sortition” - the suggestion that we ought to hand over many (perhaps all) political decisions to smaller, randomly selected groups of citizens who can stand in for the broader population, listening to experts and arguments for or against particular proposals, conversing amicably together, and reaching some agreement on what is to be done.
Moreover, as new technologies have been developed, some academics, activists and other people have started to think about whether there might be better ways of doing this at scale. Could you use different voting systems that better reflect the intensity of individual preferences? Could you use machine learning to increase the scale at which deliberation happens? There is a lot of exciting speculation happening. Some of the speculators are my friends; both together with Cosma Shalizi and on my own, I’ve indulged in some of this speculation myself. So there is a real agenda of interesting and possibly practically valuable research that many people could apply $100,000 of seed funding towards, which would satisfy both the desires of OpenAI and the independent aspiration of scholars and experimenters to find better ways of reaching collective decisions.
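To make the voting-systems idea concrete: quadratic voting is one much-discussed mechanism for registering preference intensity, in which each voter spends a fixed budget of “voice credits” and casting n votes for an option costs n² credits. Here is a minimal sketch in Python - the function name, credit budget, and ballots are all invented for illustration, and none of this comes from OpenAI’s solicitation:

```python
# Toy sketch of quadratic voting, one scheme for registering preference
# intensity: casting n votes for an option costs n**2 credits, so strong
# preferences can be expressed but become expensive fast.
# All names and numbers here are invented for illustration.

def tally_quadratic(ballots, budget=100):
    """Sum votes across ballots, discarding ballots that overspend.

    Each ballot maps option -> votes (negative votes count against an
    option); a ballot's cost is the sum of votes**2 over its options.
    """
    totals = {}
    for ballot in ballots:
        cost = sum(v * v for v in ballot.values())
        if cost > budget:  # overspent ballots are treated as invalid
            continue
        for option, votes in ballot.items():
            totals[option] = totals.get(option, 0) + votes
    return totals

ballots = [
    {"proposal_a": 5, "proposal_b": -3},  # cost 25 + 9 = 34 credits
    {"proposal_a": 1, "proposal_b": 9},   # cost 1 + 81 = 82 credits
    {"proposal_b": 2},                    # cost 4 credits
]
print(tally_quadratic(ballots))  # {'proposal_a': 6, 'proposal_b': 8}
```

The quadratic cost is the whole trick: a voter who cares intensely about one option can pile votes onto it, but at a steeply rising price, which is roughly what “better reflecting the intensity of individual preferences” amounts to in these proposals.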
But (and I expect that attentive readers have been expecting this ‘but’ for the last couple of paragraphs), that is not all that democracy is. The people who are most excited about deliberation tend, almost by definition, to be unenthusiastic about the democratic role of groups and parties. At best, such groups seem like highly imperfect proxies for the true desires of citizens, and could be replaced if we had better ways to figure out what those desires actually were. At worst, they are corrupting influences, which purport to represent ordinary people, but actually exist to shovel all the benefits towards themselves. Hence the ambitions for a better system: if we could only figure out better ways to do democratic politics, we could reach decisions that (a) more clearly reflect the true desires of citizens, (b) identify areas of agreement among them, and (c) make better policy on the merits. To temporarily adopt the specialized language of political theory, this approach suggests that we should want institutions that are more representative, more consensus-oriented and epistemically superior.
Obviously, representation is good (so too are reaching agreement and making good policy). But it is hard to fit a lot of actually-existing democratic politics into this frame. To steal some arguments from my colleague at SNF Agora, Hahrie Han (who I’ve been debating these questions with for years), democracy isn’t just about representation and knowledge. It’s about power and groups too. And to steal further from Daron Acemoglu and Simon Johnson’s excellent recent book, Power and Progress (which I’ll have more to say about soon), discussions about democracy and technology need to pay close attention to J.K. Galbraith’s notion of countervailing power. A democratic approach to AI is not, actually, democratic, if OpenAI and its competitors/collaborators are setting the agenda and calling the important shots.
Hahrie’s alternative way of thinking about this, as I imperfectly understand it, is as follows. Contrary to the claims of many people who have epistemic understandings of democracy, we, as citizens, don’t necessarily know what we want. We don’t have clear preferences for one policy outcome over another, because we mostly don’t have any good understanding of what these policies are, what their consequences might be, and how these consequences may affect our lives.
This is why groups (including political parties, interest groups, activist groups and so on) are really important in politics, as is political leadership. Both create two-way bridges between citizens and politics, articulating the interests of their people, arguing on their behalf, and explaining to their people, as necessary, what those interests are. Margaret Levi and John Ahlquist’s book, In the Interest of Others, provides another great discussion of this, explaining, for example, how Harry Bridges and the ILWU represented the interests of ordinary union members, articulating their concerns, bargaining on their behalf, and explaining what was to their benefit and what was not. Hahrie has done research on the circumstances under which people who you wouldn’t expect to be able to organize on their own behalf, do. People who are poor and overwhelmed, and who have little conventional social capital, may still be able to press their needs. But they need to build groups and cultivate leadership to do this.
Democratic politics, under this account, is not harmonious kumbayah. The interests of the ILWU’s longshoremen are not the interests of consumers, or, perhaps, of some other workers (though as Levi and Ahlquist discuss, good leadership and trust can help people develop a much more expansive and inclusive understanding of their interests than you might expect). There are going to be political disagreements and conflicts. Some groups are going to win, and some are going to lose.
Here, the Acemoglu and Johnson argument comes to the fore. Their book suggests (my words here; not theirs) that when AI entrepreneurs appeal to the common interest and the awesomeness of Progress, they are spinning a line. Their own particular interests lurk beneath claims about the common weal. This is not to say that they are necessarily cynically hypocritical. It is possible, even likely, that OpenAI’s stance on these topics is perfectly sincere. Human cognition is arguably purpose-designed to discover plausible, generally appealing justifications for things that you want for less altruistic reasons. But these justifications are still self-interested, as are the justifications their opponents may offer in support of the contrary outcome, whatever that is. Under such circumstances, you don’t want to accept the framing of powerful actors, nor necessarily to reach consensus with them. You want to have countervailing powers with their own independent weight: actors with contrary interests who have sufficient clout to counteract those who would otherwise be too powerful. Such powers can prevent one interest, or a small group of them, from dominating.
This all adds up to a very different understanding of democracy than the one suggested in the OpenAI proposal. It is one where democracy is in charge, rather than providing advice (which can be ignored if inconvenient) to inform the decisions of powerful corporations, or serving as a blame-sump where politically controversial questions can be dumped. Under this alternative understanding of democracy, there isn’t any necessary expectation of consensus, beyond the minimal consensus under which losers in democratic battles accept that they have lost this time around, and retreat to lick their wounds and plan their comebacks. This vision of democracy is expansive and open-ended, capable not simply of deciding the ticklish issues of how to depict political figures, but of determining who gets what when the benefits of new technologies are distributed.
The last couple of days have provided a model of how this understanding of democracy ought to work for AI. The SAG-AFTRA dispute with movie studios was fought over many issues. One of the most important ones was how AI might affect actors’ livelihoods.
Simplifying crudely, the movie studios thought that they could demand the rights to AI representations of actors in perpetuity (e.g. allowing them to use AI representations of actors who had died) and get away with it. After several months of industrial action, they were disabused of this notion. SAG-AFTRA - the actors’ union - articulated the demands of their members (as had the screenwriters’ union before them), figuring out the complex politics of how best to protect the interests of their members. There was a remarkable amount of solidarity among both actors and screenwriters. A couple of months ago, I spent a little time in LA with an actor friend and his screenwriter spouse, who told me about how the studios had expected to be able to hive off the better paid from those who were more economically vulnerable, and were startled to discover that they couldn’t. Now they have reached a deal with the studios, under which their members have an opportunity to share in the profits of new AI technology, rather than being presented with individually exploitative demands, as the studios would have preferred.
This kind of bargaining between different interests may appear sordid and self-interested from the perspective of more starry-eyed democratic theorists. But a lot of democracy is about defending interests, and doing it right involves creating bridging organizations like unions. Such organizations don’t represent pre-given individual interests so much as they explain and create collective ones. Carrying out this task well is one of the most crucial foundations of actually existing democracy.
I strongly suspect that SAG-AFTRA’s victory will do far more to democratize AI in the practical sense than any number of OpenAI funding competitions. Nor, obviously, can we expect any future OpenAI competitions looking, say, to explore the advantages of democratic models in distributing the economic gains of LLMs and other forms of machine learning among a wider community of workers and citizens. That kind of democracy would not be in OpenAI’s interests. To be clear: it is perfectly fine for researchers to solicit money that might develop the narrow range of improvements and experiments that OpenAI and its peers are prepared to support. Good things may come of those proposals! But you shouldn’t mistake OpenAI’s systematically stunted version of democracy for democracy as such. Nor should you want the application of democratic practices and principles to AI to be limited in the ways that OpenAI’s solicitation suggests they ought to be limited.
Really interesting piece. For me, you nail the crux of the argument when you write, "But a lot of democracy is about defending interests, and doing it right involves creating bridging organizations like unions. Such organizations don’t represent pre-given individual interests so much as they explain and create collective ones."
It's often difficult, given most explanations' strong desire for micro-foundations, to accept Goffman's dictum that we should not speak of "men and their moments. Rather, moments and their men". Organizations and the process of organizing create both the contexts for the creation of interests and the power to bring them into being. From this perspective, the OpenAI democracy project really does look pretty silly.
OpenAI’s notion of a democratic process for deciding the rules and laws that govern AI seems rather reminiscent of Meta’s Potemkin independent content moderation body, the Facebook Oversight Board, which I always thought has the most perfect acronym, given that to FOB someone off is to persuade them to accept something that is of low quality or different from what they really wanted.