Cybernetics is the science of the polycrisis
What Stafford Beer and Dan Davies say and why you need to read them
One of the most interesting ‘might have been’ moments in intellectual history happened in the early 1970s, when Brian Eno traipsed to a dingy cottage in Wales to pay homage to Stafford Beer. Eno had written a fan letter to Beer after reading his book on management cybernetics, Brain of the Firm. Beer had then come to visit Eno in London, bearing with him a box of cigars and a bottle of sherry, and telling Eno about:
new forms of governance and what governments would have to be in the future. None of that has come true yet!
According to Eno’s report, “the house took a long time to defumigate” afterwards. That didn’t deter him from making his return visit to Beer in Cwarel Isaf on a miserably wet day, where he was mauled and wrestled to the ground by Beer’s four dogs, which he described as looking and acting “like huge mud paint brushes.” Inside the cottage (Eno is very funny on all of this), he could barely discern Beer, who was furiously sketching out complicated diagrams, through the cigar smoke and steam from a pot of boiling potatoes, the sole constituent element of a meal intended to substitute for lunch and dinner. These were perhaps not the most propitious circumstances for a grand offer. Still, Beer made one. As Eno recounts it:
[Beer] said ‘I carry a torch, a torch that was handed to me along a chain from Ross Ashby.’ … He was telling me the story of the lineage of … this body of ideas [cybernetics] and said ‘I want to hand it to you, I know it’s a responsibility and you don’t have to accept, I just want you to think about it’. It was a very strange moment for me, it was a sort of religious initiation of some kind and I didn’t feel comfortable about it somehow. I said ‘Well, I’m flattered that you think that but I don’t see how I can accept it without deciding to give up the work I do now and I would have to think very hard about that’. We left it saying the offer is there, but it was very strange, we never referred to it again, I wasn’t in touch with him that much after that. I’m sure it was meant with the best of intentions and so on but it was slightly weird.
It’s almost certainly a good thing that Eno decided to make music rather than become a guru of cybernetics (he’s an extraordinary musician but surely has and had far too much self-awareness to have ever made for a really successful pundit). Still, it’s interesting to speculate about what history might have looked like had he said yes, and somehow succeeded at it. As it was, management cybernetics remained “one of the most important bodies of theory in [Eno’s] life.”
Fifty years later, Beer has found a different torchbearer, one who, like Eno, has a visible sense of humor about it all. Dan Davies has written a new book, The Unaccountability Machine, which blows the fug off Stafford Beer Thought, shooing away all the mud-encrusted dogs so that you can see what is really useful.
And there is a lot that is useful. Beer’s books are loaded with important insights, but they are typically transmitted through confusing Gurdjieff-with-a-hipflask managerial parables with a lot of unnecessary jargon. Dan - who is a mate - hasn’t just revived Beer’s version of cybernetics but presented it in clear, easily read chapters, and remade it for a different era. It’s an utterly fantastic book, and I’ve been pressing it urgently on everyone I know. Dan argues that management cybernetics is the great lost tradition of thinking that might actually provide an alternative to neo-classical economics. And he is very plausibly right.
So why is Beer’s version of management cybernetics (to be distinguished from the optimization-focused versions that were popular in the USSR and China) potentially valuable? It builds a bridge to span the yawning divide between how we think about information, and how we think about the economy, politics, and society. There was a moment - right after World War II - when people were trying to think about all these things in relation to each other. That moment didn’t turn into what it ought to have become, because it turned out to be much easier to make progress on the information and technology side - all of the fantastic things that could be done with vacuum tubes and then semiconductors - than on the more difficult challenge of understanding the complex workings of modern societies.
But - and this is Dan’s most important point, I think - the intellectual tools are still there. There is a lot we could learn if we understood social, political and economic relations as really involving information flows. And Beer’s version of cybernetics provides one very intellectually attractive way of doing this.
This point is just slightly more obscure than it ought to be. The chapter that really lays out the connection is the one chapter in the book that didn’t work for me. It relies on a complicated analogy with Rubik’s cubes that I found more confusing than enlightening - I had to identify the underlying theory that Dan was waving towards, and then use that to figure out the metaphor, rather than vice versa. That was the only really hard bit, though, and you end up picking up the broad gist by intellectual osmosis as you read the book. It is a very important gist!
I’m not going to try to match Dan’s jokes (we were co-bloggers for fifteen years or so, and I quickly realized that I’d never be nearly as funny). Take it on trust that they are excellent, and buy the book, if you haven’t already. Also, admire the planning (don’t put a strange-seeming “computing pond” on the mantelpiece in Act One unless it is going to go off in Act Three), and the enjoyable ruthlessness. The splendidly damning sentence “The biggest blind spot of economics is the economy” is the culminating claim of the part of the book that I’m not going to talk about. Instead, I’m going to pull out a couple of the other key ideas, to give you a sense of the argument and how it might be applied.
Dan draws on Beer, who pulls from Ross Ashby, the person from whom he said that he inherited the living flame. And Ashby’s “Principle of Requisite Variety” is a really, really important idea.
Here’s my imperfect and unmathematical gloss on it. We live in a complex world which keeps on producing variety that builds on previous variety. That means that there are many, many surprises - complex systems are by their nature difficult to predict. If you do want to anticipate these surprises, and even more importantly, to manage them, you need to have your own complex systems, built into your organization. And these systems need to be as complex as the system that you’re trying to manage.
Hence, the “Requisite Variety.” In Dan’s summary, “anything which aims to be a ‘regulator’ of a system needs to have at least as much variety as that system.” Or, put a little differently, “if a manager or management team doesn’t have information-handling capacity at least as great as the complexity of the thing they’re in charge of, control is not possible and eventually, the system will become unregulated.”
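To make the principle concrete, here’s a toy sketch in Python - my own illustration, emphatically not Beer’s or Ashby’s formalism. The only point it makes is the brute one: a regulator whose repertoire of responses is smaller than the environment’s repertoire of disturbances must let some states of the world go unmanaged.

```python
import random

# Toy model of Ashby's principle (my own illustrative sketch, not Beer's
# or Ashby's formalism). The environment throws up one of N disturbances;
# the regulator has a repertoire of K distinct responses, and a
# disturbance counts as "regulated" only if the repertoire covers it.

def regulated_fraction(n_disturbances, n_responses, trials=100_000):
    covered = min(n_responses, n_disturbances)
    hits = sum(
        1 for _ in range(trials)
        if random.randrange(n_disturbances) < covered
    )
    return hits / trials

print(regulated_fraction(10, 10))  # requisite variety: ~1.0, full control
print(regulated_fraction(10, 4))   # variety deficit: ~0.4, control leaks
```

The simulation is trivially rigged, of course: its only job is to show that the shortfall in control scales directly with the shortfall in variety.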
This points to the very important differences between the Soviet Union and China’s notions of cybernetics, and Beer’s approach. State socialist cybernetics mostly assumed, as does Silicon Valley today, that economic complexities could be disentangled, simplified and turned into easily solvable optimization problems. Beer’s management cybernetics - as I at least read it - suggests that this is a very nice trick when you can get away with it, but that you really wouldn’t want to be making it your starting assumption about the world. Sometimes, simplification-and-optimization will be very useful. Sometimes, it may leave you worse off than when you started. Read Francis Spufford’s Red Plenty for more.
Another important difference is that Beer and people in his tradition treat the math as a source of valuable metaphors rather than directly applicable methods. There is some serious math behind Ashby’s principle (which is to say: math that the likes of me can only follow if it is explained slowly and patiently by more intelligent people), but it is not the kind of math that is readily applied. Instead, it is the kind of math that rubs your nose in crucial but annoying facts about the complexities of the world, without giving you handy means to turn these complexities into truly tractable simplifications. This is, to repeat its title, management cybernetics. It gives you a sense of the problems you have to manage, and some very useful perspectives and rules of thumb for how you might tackle them, but its fundamental message is that while you can manage a complex environment, you cannot usually manage it away, without changing the environment, or (the more common default choice) pretending that the complexities don’t exist.
So how do you manage an inherently complex system? Beer talks about “variety engineering”, and points to two broad approaches to making it work. One has already been hinted at: attenuation. Here, you take what is complex, and you make it less so. You reduce the variety of the environment you are trying to deal with, so that the system produces fewer possible states of the world to be anticipated or managed. Or you pretend to yourself that the variety is less than it is, and hope that you aren’t devoured by the unknowns that you have chosen to unknow.
The second is amplification. Here, crudely speaking, you amp up the variety inside the organizational structures that you have built, so that it better matches the variety of the environment you find yourself in. Very often, this involves building better feedback loops through which different bits of the organization can negotiate with each other over unexpected problems.
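In the same toy terms as the sketch above (and with the same caveat that this is my illustration, with arbitrary numbers), the two levers look like this:

```python
# The two levers of variety engineering, continuing the toy model above
# (again my own sketch; the numbers are arbitrary).

def variety_gap(env_states, regulator_states):
    # States of the environment that the regulator has no answer to.
    return max(0, env_states - regulator_states)

env, reg = 1000, 200
print(variety_gap(env, reg))       # 800 states go unmanaged

# Attenuation: shrink the environment's variety (rules, defaults, scope).
print(variety_gap(env // 5, reg))  # 0 - the gap closes from the outside in

# Amplification: grow the regulator's variety (hiring, tooling, feedback).
print(variety_gap(env, reg * 5))   # 0 - the gap closes from the inside out
```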
There is a lot more to this - e.g. thinking about how different parts of the regulatory organization ought work as different ‘systems’ - but again, it’s management more than science. What you do in a given instance will greatly depend on your understanding of the scale of the problems that you are addressing, and the regulatory apparatus you are using to address them. The great advantage of this approach is that it can be scaled up or down. The great disadvantage is that it offers you no inherent technique for figuring out which scale you ought be working at, or which particular means you ought be using at that scale. Again, management cybernetics is best thought of as a set of useful perspectives and associated management techniques, rather than a generalizable methodology.
But - like good perspectives and techniques - once you have grasped what it tells you, you see examples of it everywhere. The Unaccountability Machine has re-arranged my brain, so that I now see cybernetic problems wherever I look. Not only that - I think that there is the potential to use cybernetics as a common framework for understanding all sorts of problems that span information and politics. More about this at the end, but first, a few examples.
Social media content moderation. This is an inherently horrible cybernetic task in ways that Mike Masnick’s “Impossibility Theorem” captures nicely.
any moderation is likely to end up pissing off those who are moderated. … Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible. By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly. … If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million is nothing.
and Mike has plenty more!
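It is worth running Mike’s arithmetic at actual platform scale. In the sketch below, the 99.9 per cent accuracy figure is his; the larger daily volumes are my own stand-ins for what big platforms plausibly face:

```python
# Masnick's arithmetic, scaled up. The 99.9% accuracy figure is from his
# example; the larger daily volumes are illustrative stand-ins.
accuracy = 0.999
for decisions_per_day in (1_000_000, 100_000_000, 1_000_000_000):
    misses = decisions_per_day * (1 - accuracy)
    print(f"{decisions_per_day:>13,} decisions/day -> {misses:>11,.0f} missed calls/day")
```

At a billion decisions a day, even an implausibly good error rate produces a million visible mistakes a day, each one a potential grievance or news story.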
The academic literature on the history of moderation (e.g. Tarleton Gillespie) emphasizes how much the companies that have to do it hate it, and how keenly they would love to hand over the messy, difficult decisions to someone else. And cybernetics provides a very clear understanding of why it is so horrible. Social media at scale is inherently unpredictable, which is another way of saying that there is an enormous variety of possible directions that millions of people’s interactions can take, and many of these directions lead to awful places. But stopping this is hard! Some problems involve people saying bad and horrible things that others will be upset by. Others involve scams and fraud. In both cases, the bad actors can display a lot of ingenuity in trying to figure out how to counteract moderation and propel things in bad directions, sometimes manipulating the rules, sometimes hitting on unexpected strategies that dispose towards unwanted states of the world. The result is (a) enormous variety, and (b) malign actors looking to increase this variety, and to push it in all sorts of nasty directions. So how do you variety engineer content moderation so that it doesn’t devolve into an utter shitshow?
The initial approach of most social media companies was to just pretend the problem away: the founders were inspired by the ideal of a thriving “marketplace of ideas” where censorship was unnecessary, the good stuff would rise to the top, and everyone would police themselves in some happy but carefully unspecified decentralized fashion. No company could stick to this for long. Now, social media companies find themselves obliged to amplify (increasing their ability to moderate through hiring or investing in machine learning), to attenuate (limiting variety; e.g. by stifling political discussion as Meta’s Threads has done), or some combination (Bluesky and the Fediverse combine new tools with smaller scale and lesser variety in particular instances, each of which can have its own culture and rules).
Each of these is an unhappy outcome in its own special way. But if we understand moderation in cybernetic terms, we can better appreciate why it keeps going wrong. For example: the spat the week before last over whether Threads had deliberately censored a critical story about Meta or not is really, as best as anyone can tell, the product of amplification techniques (machine learning applied to spam recognition) trying desperately to keep up with the variety of ingenious tricks that spammers use, and misidentifying real content as fake.
This led Anil Dash to quote Stafford Beer’s most famous dictum, “The Purpose of the System is What It Does.” Anil was possibly just being sarcastic about the specifics. But Beer’s dictum is still a quite precise diagnosis of what happened, and points toward the actual underlying problem. Which is not, in this case, that Meta deliberately chose to silence its critics, but that it is Meta that owns the Means of Amplification, and the Means of Attenuation too.
For example: when Meta decides that Threads will deal with the problem of spiraling political disagreement by dampening down all political discussions on its platform, it is dealing with a cybernetic problem using cybernetic means. It is attenuating the variety of the system so that it is easier to deal with. But should it be Meta that is in charge of making such a profound and political decision? Cybernetics doesn’t provide any very specific answer to that question, but it makes it much easier to see the problem. We don’t need to believe that Meta is deliberately tweaking the algorithms to silence its critics to be worried that Meta is able to dampen down vast swathes of the human conversation in pursuit of its business model. Equally, we need to recognize that if we are going to have to regulate vast swathes of the human conversation, we are going to face some messy and unhappy tradeoffs.
The proper constitution of the state. The most cybernetic book, apart from Dan’s, that I have read in the last few years is Jen Pahlka’s Recoding America, even if it doesn’t mention Beer, cybernetics or any of the technical terms that I’ve been sharing with you. If you read Jen’s book carelessly, you might come away with the impression that it is about the U.S. government’s incompetence at contracting out software development. If you read it carefully, you will realize that it is actually an applied informational theory of the state. The U.S. government is bad at making all kinds of policy in a non-hierarchical way. Everything seems to come from the top. Old policies are rarely erased, and new ones are perpetually layered on top, in ways that are at best inefficient, and at worst contradictory in mutually toxic ways. Previous efforts to fix the problem (e.g. through the Paperwork Reduction Act) have tended to make it worse. And civil servants have every incentive to just go along with the orders from the top, making “concrete boats” (to use a pithy phrase from one of the jobsworths that Jen talks to) without paying any attention to whether they will float, or whether they are wanted in the first place.
Jen argues that we need to move away from top-down decision making, to systems that will allow bureaucrats a lot more autonomy. She frames her argument for change in terms of “agile” software design, which would appear to have an awful lot in common with Beer’s approach to thinking about organization. I would guess that this is less because agile software developers are secret fans of Brain of the Firm, than because they are applying loosely related ideas to broadly similar problems (cybernetics did feed into a lot of post-WWII technological thinking, which later forgot the terminology). The solutions that Jen emphasizes - bringing policy design and implementation into much closer contact; identifying bottlenecks and chokepoints; allowing people far greater flexibility to do needed stuff towards the shared end goal, even if no-one anticipated this stuff was needed - are just the kinds of solutions that a cybernetician would press for too.
Most of Jen’s examples involve information systems, because that is what she has worked on, but the logic extends far further. In particular, I think that it extends to an important debate that is happening right now over economic security policy making.
A lot of our current thinking about how to make such policy takes a brute force approach, dismissing efforts to calibrate and fine-tune policy as unhelpful and irrelevant. Adam Tooze, for example, whom I agree with on vast swathes of issues, more or less dismisses so-called “Swiss army-knife strategies” or “polysolutions” that try “to fix several interconnected problems at the same time” as an overly ambitious “optimizing approach,” which makes the “strong assumption” that “we do, in fact, have a pretty good idea of the major challenges and how they hang together.” Instead, he prefers big fixes for the most immediate pain points.
And there is a lot to be said for addressing the big pain points and for brute force solutions! But I read Dan’s and Jen’s books as providing strong reason to think that polysolutions aren’t necessarily optimizing strategies, or, for that matter, reliant on a previously articulated masterplan of micromanagement. Instead, properly conceived polysolutions will try to do lots of things at once, not because they have a clear expectation that all or even many of these things will work, but because they are experimenting. Some polysolutions will fail, some will succeed, and some might work far better than we might ever have anticipated. In other words, there is also a strong case for making policy agile and on the fly, exploring the landscape of possibilities, rather than exploiting what we think we know already. And when something really works, one can try to double down and see what happens!
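That explore/exploit framing is, in the jargon, the multi-armed bandit problem (see the footnote for the terrible joke). Here is a hypothetical epsilon-greedy sketch of agile policy experimentation - the payoff numbers are invented purely for illustration:

```python
import random

# Hypothetical epsilon-greedy sketch of policy experimentation: mostly
# back whatever intervention currently looks best, but always reserve
# some effort for exploration. Payoff numbers are invented.

true_payoffs = [0.2, 0.5, 0.35]   # unknown to the policymaker
estimates = [0.0] * len(true_payoffs)
counts = [0] * len(true_payoffs)
epsilon = 0.1                     # share of effort reserved for exploring

for _ in range(10_000):
    if random.random() < epsilon:
        choice = random.randrange(len(true_payoffs))   # explore
    else:
        choice = max(range(len(estimates)), key=estimates.__getitem__)  # exploit
    reward = 1 if random.random() < true_payoffs[choice] else 0
    counts[choice] += 1
    # Running average of each intervention's observed payoff.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # converges near the true payoffs; the best option dominates
```

The point of the toy isn’t the algorithm; it is that you don’t need a masterplan to learn which of several simultaneous interventions is working - you need feedback loops and a willingness to keep experimenting.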
Put differently - and I realize that this is quite a strong claim - management cybernetics is the best candidate we have for a science of the polycrisis. It is the only practically oriented approach that I am aware of that really takes the management of complex interlocking problems as its explicit central aim. There are certainly insights that it misses, and its classic formulation has half a century of intellectual development to catch up on. But if there is another broad framework out there that is better fitted to tackling our complex problems … I’d love to hear about it! We need one, and management cybernetics seems to me to fit the bill.
Of course - and this is the burden of Jen’s book - right now, that kind of agile policy making is far out of reach. The U.S. government is spectacularly poorly equipped to do the kind of agile policy that is needed to address these interlocking problems. Most other governments aren’t much better, and some are spectacularly worse. The problems of the polycrisis are compounded by the various flavors of political omnishambles that are trying to solve them, and failing badly.
But it is at least worth asking if we could make government more agile than it is. Putting Jen’s and Dan’s ideas together, a really useful start would be to have some kind of cybernetic survey of U.S. government decision making, examining the chokepoints and failure modes (and not just of the U.S. government either). As the state gears up to tackle new responsibilities such as economic security, it is going to need something like this. Large chunks of my and Abe Newman’s recentish Foreign Affairs piece on the pathologies of economic security policy making apply Jen’s and Dan’s ideas to the pathologies of the national security state. Specifically:
If the state is going to do the kinds of things it is supposed to do, it will need far greater capacities to analyze information and to gather it … Furthermore, it will need to implement policies in different ways. It is going to have to experiment - and revise policies quickly, when they don’t work, or when they turn out to have unexpected benefits that can be capitalized on quickly. That, then, is the main argument of our piece. While we only use the word “cybernetics” in passing, we are making a cybernetic argument. We need government institutions that can come up with reasonable representations of complex problems involving both economics and security, take actions that look to solve those problems, and have feedback loops that allow policy makers to revise those actions when they turn out to have unexpected consequences.
Our intellectual debt is pretty obvious, but so too, I hope, is the point! And it is a point that generalizes to other governmental institutions, other governments, and non-governmental organizations trying to address all the different aspects of the polycrisis that we face.
The progress agenda. There is a lot of disagreement among and between liberals and people on the left over how the U.S. should think about progress. Some of that disagreement stems from disputes over whether we should prefer to solve collective problems or to prioritize democratic control. Crudely speaking, some people argue that we need to build, and that this involves clearing out the crud in decision making, and eliminating multiple veto points that they say are making it impossible or wildly expensive to do what is needed at scale. Others argue that this undermines democratic control, and wildly overstates the difficulties of building coalitions for change. Ezra Klein here, and Dave Dayen here make the most reasoned big statements of the case for each side - there is plenty more out there if you look for it.
This is a real disagreement, and one that I don’t think can be simply resolved. But it is one that the language of cybernetics could at least help clarify. The people on both sides, as I understand them, agree on much more than you might think. They both want big scale solutions to big scale problems. But they disagree on whether democratic input will help or hurt the creation of these solutions, and the coalitions that are needed to press them through.
For better or worse, the language of cybernetics is a technocratic language, not a democratic one. That is to say: it focuses our attention not on political values and how to achieve them, but on the relationship between tools and outcomes. That carries a host of problems with it (technocracy is unbeloved for good reason). But it can have some benefits too. One of the major reasons why neo-liberalism, which is its own kind of technocracy, succeeded is that it helped turn insuperable-seeming political conflicts into manageable ones. As I read Elizabeth Popp Berman’s fantastic history, Thinking Like an Economist, neo-liberalism succeeded not, or at least not simply, because of Milton Friedman, the Mont Pelerin Society and the rest of it. It came to dominate because it was the only plausible language that people could minimally agree on, at a moment when enormously consequential new policies needed to be enacted. On Berman’s account, the Great Society and neo-liberalism went hand-in-hand - the Great Society needed neo-liberalism, or something like it, to create a common framing that would make policy coherent and disputes solvable.
Or put differently, the great benefit of technocracy is that it can sometimes provide an imperfect means to resolve otherwise irresolvable political disputes. And if the government is to do things, we are going to need such a language. What the language of cybernetics could possibly offer is a way to talk about which kinds of input help resolve problems, which forms of coupling and consultation work best, and which work badly. This will not resolve disputes - cybernetics is even worse than neo-liberal economics in providing clear and decisive answers to complex and difficult questions.* Still, it may create a frame in which people are more willing to lose some of the time, because they recognize the merits of their opponents’ case, while hoping to win on other occasions in the future.
Here, I’m riffing not just on Berman’s book, but also on a really great essay by Suresh Naidu, which makes the point that neo-liberalism will not be replaced by liberal humanism, because liberal humanism isn’t up to the task of managing a complex society at scale.
Any social science that aims to inform (and perform) the function of a complex social organization, like a state or corporation, that enforces even somewhat impartial rules needs to ruthlessly abstract from particularities. In particular, it must use mathematics, for making incommensurable claims commensurable, for representing the workings of fantastically complex adaptive systems, and for complementarity with technologies of organizational administration, like spreadsheets.
And more generally (it is hard not to just quote the whole damn thing):
a social science that is useful for the legal needs of a large administrative state operating in a complex heterogeneous society must also be parsimonious. This social science ought to be cognitively lightweight and context-independent so that citizens and experts and bureaucrats and judges and lawyers can easily communicate new situations across a large population in a common idiom. Late 20th century neoclassical economics provided a primitive, ideology-laden language for doing this … But its successor will not be found in pendulous, wordy treatises penned by ethnographers and humanists; it will be instantiated in formal organizational protocols and algorithms that are the logic of some mathematical social science. … Perhaps thankfully, economics is done being a master metaphor for governance for the foreseeable future. Perhaps one post-neoliberal philosophical move will come from computer science, which will operationalize its own blindspots into the rational agents it is constructing.
The great advantage of cybernetics is that it provides exactly a language that can span the chasm between computer science and the needs of the large administrative state. It surely isn’t the only candidate for that task. You could, for example, revive some of the ideas of Herbert Simon, which have slightly different valences and applications. But it is a pretty good one, with an excellent pedigree.
There are things that it doesn’t do as well as economics. Beer’s flavor of it is - as already noted - less mathematical in its application. But there are also things that it does better. Dan’s book suggests that economists are rather more allergic to the spreadsheets of organizational administration than Suresh suggests, and that cybernetics provides an excellent understanding of how balance sheets and financial accounts, themselves being models, inevitably attenuate out the things that their creators don’t want to pay attention to, even while they serve to amplify the possibilities of control in other areas.
So extending Dan’s arguments to the stuff that I, rather than Dan, think about, the case for management cybernetics is the following. It makes a whole host of problems more clearly visible than they were before. It maps well onto the fundamental problems of state policy making, offering some reasonably clear prescriptions as to how government ought organize itself if it is to have the requisite internal variety to deal with the external problems of a complex world riddled with polycrisis. It provides the kind of technocratic language that can make it easier for people with different ideologies to accept unwieldy compromises, clear defeats and ambiguous victories, so that ambitious policy measures can be undertaken.
There is a case against too. It can easily collapse into handwaving. Its lack of mathematical precision over the specifics means that it will have an easier time developing a kind of folk wisdom that unsophisticated practitioners can latch onto, but a harder time reconciling different versions of that folk wisdom to preserve coherence in tasks carried out across very complex organizations. And like all technocratic approaches, it is prone to fail the more it succeeds. Political success would see it degenerating from a live set of ideas into an orthodoxy. Even though Dan deploys it as a means to identify blind spots, it most certainly has its own. Not all interesting problems - not even all interesting informational problems - can be collapsed into the managerial framework that it provides.
But even so, it holds immense promise. One of the greatest challenges we face is the mismatch between the vast complexity of the problems we need to solve (climate change; migration; international security), and the inadequacy of the informational and managerial institutions that we have to solve them. The bits of Dan’s book that I have not talked about explain why free market economics is incapable of resolving them. Managerial cybernetics doesn’t just help us focus on these problems, but provides some very helpful ways to start remaking organizations so that they can actually do what they need to do to fix them. In Brian Eno’s description, cybernetics provides a coherent way to think about:
different responsiveness, different ways of making decisions and absorbing information. How do you get the right feedback, basically; how do you filter it — governments are obviously swamped in stuff; how to make longer term decisions? Governments have to be thinking many years ahead but they very rarely are.
Dan has written an exciting and important book, which lays out how organizations, including governments, should think about getting the right feedback, making decisions, and absorbing information. And I haven’t even covered the half of it. If you’ve gotten this far, you are almost certainly the kind of person who ought to read it. Go buy!
* If economists have more than one hand, cyberneticians are multi-armed bandits. This is a terrible joke about technical jargon that I ought be shot for, which is why I have concealed it in the decent obscurity of an endnote. When I said that I wasn’t as funny as Dan, I was speaking the plain and obvious truth.