"Small Yard, High Fence": These four words conceal a mess
We need a new state for the new geopolitics
First some fantastic news this morning: Daron Acemoglu, Simon Johnson and James Robinson have won the Nobel memorial prize in economic science for their work on institutions. I hope to write more about those ideas when I have a chance, but for more that I’ve written already (lots more!) on the theory of power in Acemoglu and Johnson’s most recent book, see here and here.
Now: the main post. Johns Hopkins SAIS is creating a new center on China and the World under the leadership of my wonderful new colleague, Jessica Chen Weiss. It hasn’t officially opened yet, but it had a pre-launch workshop last week, gathering together a bunch of academic and policy people, nearly all of whom were experts on China.
Myself, I’m not a China expert, but I work on technology and economic security: topics that have been thoroughly engulfed by the US-China relationship. Since the broader workshop took place under the Chatham House Rule, I’m not going to talk about what other people said there, but there’s nothing stopping me from providing an expanded and better organized version of what I said myself, so here goes.
My broad argument is as follows. The current U.S. approach to technology and China is working poorly, though it will work much worse if Trump wins and takes office in January. Various self-reinforcing political feedback loops and self-reinforcing expectations are leading to breakdown. But the fundamental problem of managing geopolitics through manipulating technological trajectories is not readily solvable given existing means. We live in a much more complex world than existing state institutions are capable of handling. Therefore, we need to remake the state.
America’s approach to the geopolitics of technology is a mess. The U.S. wants to retain its leading role in technological innovation, military prowess and economic competitiveness.* Its ability to achieve these goals has been complicated over the last fifteen years by its relationship with China, which it views as a geopolitical competitor, rival, or adversary, depending on who you talk to. But China is also a major trading partner, which the U.S. cannot readily disentangle itself from. That poses some serious dilemmas. You certainly don’t want to provide an adversary with a technological advantage that it can use against you, and you might not want to do that for a rival or competitor either, depending. But it is hard to isolate flows of technological knowhow from the trade flows and subcontracting relationships that you still want to maintain.
Hence, the Biden administration has tried to be selective. It has cut off Chinese access to certain advanced semiconductors, and (working with allies) to the manufacturing equipment China would need to build its own highly advanced chips. The rationale is that this will make it much harder for China to develop AI with military applications. The U.S. is going to block China from selling connected electric vehicles on U.S. markets. But it has left many areas of trade more or less untouched.
In a widely reported 2022 speech, US National Security Advisor Jake Sullivan used a crucial metaphor to explain how extensive restrictions on semiconductors would not be applied indiscriminately elsewhere:
Many of you have heard the term “small yard, high fence” when it comes to protecting critical technologies. The concept has been cited at think tanks and universities and conferences for years. We are now implementing it.
Chokepoints for foundational technologies have to be inside that yard, and the fence has to be high—because our strategic competitors should not be able to exploit American and allied technologies to undermine American and allied security.
This is quite a good metaphor as metaphors deployed by high officials go. And it arguably has had some benefits in soothing Chinese fears (a little), and in restraining (also a little) the appetite of China hawks for much, much more. Which is all by way of emphasizing that I don’t imagine I could have done any better than Sullivan and his speechwriter myself (and indeed am reasonably certain that I would have done much worse). It has done real political work.
However, it hasn’t solved the problem, so much as it has swept its messier aspects beneath a carpet. Now that the dirt is really beginning to pile up, it is probably time to start looking at what actually lies behind those four simple words.
First and most important - the problem of technological geopolitical competition, once you have framed it as such, is inherently a mess. More technically, it is inherently complex in ways that the small yard, high fence language makes harder to see. If you are a US National Security Advisor who wants to prevent strategic competitors from gaining access to foundational technologies, the small yard approach seems to offer a simple way to do that. You identify the technologies you need to protect, look for the chokepoints that can be used to prevent access, and surround these chokepoints with a thicket of bristling defenses, so that no-one can approach without your permission.
But those apparent simplicities conceal real complexities, the most basic of which is this: what exactly is a “foundational technology”? This is a good question, to which the U.S. government has no especially good answer that I have seen.
As best as I can make out (if there is an official definition somewhere I would love to be alerted to it), foundational technologies are technologies which create virtuous feedback loops from which many forms of national advantage flow, including, but not necessarily limited to, military advantage. Advanced semiconductors, “AI,” perhaps quantum, depending on your degree of excitability - all are plausibly valuable not just in themselves but because they can serve as a foundation for other uses and technologies that are as yet unknown. If you’ve played Civilization, these are the technologies at the base of the good stuff in your technology tree. Don’t develop pottery because it seems boring, and you’ll find you are out of luck when you want to discover optics a couple of centuries later.
And that suggests the problem, in a perverse and contrary fashion. Civilization players can look down the tree to see what comes next, and what comes after that. National security officials cannot. Who could possibly have guessed a decade ago that Claude Shannon’s 1940s speculations about text prediction would combine with neural nets to produce large language models? And what is the next foundational technology going to be? If innovation were predictable in that sense, it would not be innovation.
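To make the tech-tree analogy a little more concrete, here is a toy sketch of my own (the tree and the technology names are invented for the example, not anything drawn from actual policy): if you model technologies as a dependency graph, a “foundational” technology is one with a large downstream reach - it unlocks a lot of the tree. The catch, of course, is that in the game this graph is known in advance, while in reality the edges are only visible in hindsight.

```python
# Toy illustration: technologies as a dependency graph, where each edge
# points from a prerequisite to the technologies it enables. The tree
# below is invented for the example.
tree = {
    "pottery": ["writing"],
    "writing": ["mathematics"],
    "mathematics": ["optics", "astronomy"],
    "optics": [],
    "astronomy": [],
    "bronze_working": ["iron_working"],
    "iron_working": [],
}

def downstream_reach(tech, tree):
    """Count every technology ultimately unlocked by `tech`."""
    seen = set()
    stack = list(tree.get(tech, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(tree.get(node, []))
    return len(seen)

# Pottery sits at the base of a long chain, so skipping it forecloses
# most of the tree; bronze working forecloses almost nothing.
for tech in tree:
    print(tech, downstream_reach(tech, tree))
```

In this toy world "pottery" reaches four downstream technologies and "bronze_working" only one - which is precisely the kind of ranking that national security officials would love to compute, if only anyone knew the edges in advance.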
And that’s just the beginning. If foundational technologies are hard to identify, so too, much of the time, will be the chokepoints that prevent access to them. I feel slightly guilty about this, because Abe and I did our small bit to focus policymakers’ attention on chokepoints as a policy tool (though this was probably an obvious enough move anyway). As technology moves on, new paths may develop to circumvent old strangleholds, and the more you try to deploy these chokeholds, the more you provide incentives to others to find different paths. The thickets of rules and regulations that you lay down to defend the chokepoints may have their own unpredictable vulnerabilities. And so on.
The point is not that nothing can be done. It is that national security officials are increasingly operating in a complex policy environment, combining economic security and national security in ways that are very different from those they have been trained to think about and work on. Back in the Cold War, it was easier to ignore these complexities, focusing on the brutal logic of nuclear deterrence on the one hand, and the awkwardness of defending the Fulda Gap on the other. The economics were mostly subject to the politics. During the high era of globalization, you could forget these complexities for quite different reasons, delegating them to the Hayekian logic of the market, which, you believed, was going to handle them much better than government by definition. Now, the U.S. is stuck between the two, managing national security in a world where (a) many of the key resources and problems are in the private sector and hence outside direct government control, and (b) its economy is entangled with that of its adversary, so that actions will have unpredictable reverberations.
National security officials are stuck having to deal with inherently irresolvable questions. How do you figure out what is a foundational technology and what is not? What do you do to maintain your grasp on one when you do, or to compensate when your competitor has it and you don’t? Might the consequences of your actions in pursuit of either goal be worse than the outcomes that you are trying to prevent? How do you maximize benefits while minimizing unfortunate side-effects? If there are any practical general guidebooks on this - or even largely abstract but sort-of-helpful academic theories - I am unaware of them.
The closest I know is the work of the late Robert Jervis, and in particular his book, System Effects, which tries to bring complex systems logic to international relations. It never had the impact he wanted. In part that was because he couldn’t say much about what to do once you had identified the complexities of national security and production; in part because he was a brilliant thinker, but not a notably brilliant writer, inclined to bury his valuable points beneath mounds of quasi-relevant detail.
Still, there are pearls of great insight to be discovered if you dig. Jervis - writing in 1997 - discussed how technologies such as semiconductors might possibly create virtuous feedback loops that in turn might cement other kinds of dominance.
A country that pushes ahead in “cutting edge” technologies gains greater wealth and political influence as a strong economy gives it instruments to further economic development and exploit others on unrelated issues. For example, a Japan that dominated the advanced computer chip industry could not only refuse to export the most advanced models until its own manufacturers had put them into place, thereby increasing their leads over rival firms, but might also try to use foreign dependence on their products to reinforce economic dominance (for example, by demanding that foreign firms cease certain lines of advanced research) or even to gain unrelated political ends, as sketched in the infamous Japan That Can Say No.
History relates that Japan did not, in fact, deploy semiconductor feedback loops to dictate terms to the United States, and instead folded to U.S. pressure. But as Jervis went on to discuss, beliefs about feedback loops can create their own self-reinforcing expectations.
people’s stance toward trade disputes with Japan are strongly influenced by beliefs about whether far-reaching feedbacks are at work. Those who stress the need to be concerned with relative as opposed to absolute gains believe that if Japan grows more rapidly than the U.S. and is able to move ahead of it in high technology, future American growth will be slowed and American power will be weakened.
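The relative-gains logic Jervis describes can be caricatured in a few lines of code - a stylized sketch of my own, with arbitrary parameters rather than any empirical model, in which a small growth edge in a key technology feeds back into a widening relative gap:

```python
# Stylized sketch (my own toy model, not Jervis's): a lead in a key
# technology raises the leader's growth and drags the laggard's, so a
# small initial edge compounds into a widening relative gap.
def simulate(lead_growth=0.04, lag_growth=0.03, feedback=0.005, years=30):
    a, b = 1.0, 1.0  # relative economic size of leader and laggard
    for _ in range(years):
        gap = a - b
        a *= 1 + lead_growth + feedback * gap   # advantage compounds
        b *= 1 + lag_growth - feedback * gap    # dependence drags
    return a, b

a, b = simulate()
print(a > b)  # the gap is self-reinforcing, not self-correcting
```

With feedback set to zero, the gap is just the plain growth differential; switch it on and the same one-point edge widens a little faster every year. That is the whole relative-gains worry in miniature - and it is why people who believe such feedbacks exist care so much about who leads in high technology.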
So too for fears about China - but much more so. Even in the most crazed and ludicrous 1990s fantasies about Japan - Michael Crichton’s Rising Sun with its fiendish Japanese business leaders carefully plotting their takeover of the US economy - there wasn’t much worry about military domination, for very obvious reasons. Where you do worry about military domination too, it is even easier for fears about foundational technologies and feedback loops to develop, and harder to dispel them.
Those fears may, or may not, be justified, depending on complex future developments that are impossible to predict. Hence, one of the unsung purposes of the small yard, high fence approach is political rather than strategic. It is not necessarily good at identifying foundational technologies and figuring out what to do with them. But it has served some temporary purpose in limiting panic about them, and preventing it from overspilling into a universal alarm that might provoke large scale decoupling and escalation.
That isn’t working as well as it did. Small yard, high fence is increasingly being overwhelmed by general issue hawkishness, thanks to the kinds of self-reinforcing expectations that Jervis identifies. Electoral politics, bureaucratic politics and business models all reinforce hawkish beliefs about China so that the only proposal that will beat harsh measures is one for even harsher measures. If you look at bipartisan reports of Congress’s “Select Committee on the CCP,” you’ll find lots and lots of proposals for high fencing, but little visible interest in keeping the yard small. No-one wants to be accused by their primary or election opponent of being a China wuss, though in fairness, the Democrats have become a little less inclined towards chest-pounding in the last few months.
Bureaucratic politics too plays its role. If you want to justify your existing funding as part of the foreign policy apparatus, or to ask for more, you are well advised to demonstrate how essential you are to the new political struggle. Congressional appropriators frown on peaceniks. There is a lot of interest in developing enforcement muscle. There is much less interest in ensuring that the enforcement is well targeted. The Bureau of Industry and Security, for example, is in charge of export controls. It is remarkably poorly equipped to gather information on the businesses that it oversees, or the broader strategic consequences of its actions. It was designed for a different era.
More generally, the U.S. heavily relies (as Abe Newman and I have discussed) on national security authorities dating from the Cold War to implement policy. Even when it takes decisions that are meant to further economic security (e.g. the recent steps to prevent Chinese electric vehicles from hitting the U.S. market) it justifies them in terms of traditional national security risks. It doesn’t have any kind of specialized apparatus to think through the relationships between national security, economic security and technological innovation, even though these relationships are central to U.S. national interests. Daleep Singh, the current deputy national security advisor, clearly wants to develop a doctrine of economic statecraft. That might make a difference. Of course, if Donald Trump is elected in a few weeks’ time, it will make a very different kind of a difference altogether …
There is a final set of self-reinforcing expectations that few people are thinking about, but that may be just as important. Up until a few years ago, U.S. tech companies justified their role by talking about globalized networks and the spread of liberal values. This explains, for example, Facebook’s creed of connecting up the world, and Google’s justifications for working in unpleasant dictatorships. By providing connectivity, they weren’t just establishing lucrative monopolies, but paving the way for a globalized, peaceful and prosperous humanity. Who could possibly object?
Now, instead, they are increasingly likely to align themselves with the U.S. confrontation with China. This doesn’t just affect ideology but business plans - there is a lot more interchange between Silicon Valley and the Pentagon than there was a decade ago. As companies seek profits in military AI, drones and so on, a new kind of military-industrial complex may be emerging, which may further cement a future trajectory of confrontation. Tech companies are pressing for further restrictions against Chinese technology, to promote their own interests. They are already deploying arguments about America’s “ability to compete with China and Russia in AI” to push against domestic antitrust enforcement.
All this is gradually undermining the small yard, high fence approach. When politicians, bureaucrats and tech companies have incentives to keep embiggening the yard, and when there are no clear rules or procedures for deciding what should be inside the fence and what should stay outside, a generic hawkishness will substitute for serious thinking.
To be absolutely clear: I am not making an argument against hawkishness as such. There may be excellent reasons in the general or the particular for technology restrictions or other countermeasures against China and other countries. Instead, I am defending the narrower claim that the U.S. doesn’t currently have any systematic means for deciding when it ought adopt restrictions on technology and when it ought refrain, and that in the absence of such means, self-reinforcing suspicions and fears are likely to lead to the kind of spiral that Jervis foresaw. Small yard, high fence is fine in principle, but is liable to collapse when there isn’t any strong and publicly defensible system for figuring out where the yard should be, and what the fence ought protect.
I had an email conversation a few months ago with a colleague about an early version of these ideas, and the relevance of Jervis’s work. The colleague very reasonably objected that Jervis’s ideas about complexity were all very good, but they left the reader at an impasse. Once you had acknowledged that national security officials had to deal with a complex world, what were you supposed to do next? What were the practical implications?
There aren’t any very straightforward solutions, but there are some guidelines that might at least help alleviate the problem.
First and most simply - avoid basing policy on technological misunderstandings. That may sound like obvious advice, but it is commonly ignored in practice. Policy makers’ understanding of these technologies is generally low (with some notable exceptions). Elected politicians are worse again. In neither case is this really their fault - they usually have not been trained to think about complex technologies and economic security. International policy schools, like the one that I teach at, are only gradually shifting to teach courses on these topics, and re-orient their curriculums. Members of the U.S. Congress used to have the Office of Technology Assessment, to provide them with guidance on complex issues - but that was defunded in political fights in the 1990s. It has become almost a ritual in these debates to call for its reinstatement. But it is only a ritual because it is so obviously the right thing to do.
Second - and academics are mostly to blame for the lack of this - build intellectual frameworks that better capture the trade-offs of innovation and national security. If you are trying to figure out whether it makes sense to cut off access to a particular technology, you at least want to have some rough sense of what the consequences will be for innovation. For example: the provisional U.S. decision to ban Chinese connected electric vehicles from the U.S. market might secure America in the short term but incur medium and long term costs by insulating U.S. car manufacturers from competition, and keeping them far away from the technological cutting edge. The emerging European alternative - which is likely to allow Chinese imports under some combination of tariffs and voluntary controls - may give European car manufacturers a better feel for cutting edge technologies, but at the likely cost of increasing their dependence on China.
So how do you even begin to think about these trade-offs? It is not the fault of policy makers that they don’t even know where to start - their job is to make specific decisions rather than to develop the broad intellectual frameworks that these decisions ought be situated in. There are academics who are at least beginning to talk about these problems, and I know of some promising-seeming unpublished research - but we’re only at the very early stages.
The third challenge is the biggest one. The U.S. national security state is built around the twin assumptions that the U.S. is by far the dominant global power, and that the best answer to most problems is the threat or application of overwhelming force. The U.S. is, indeed, still the world’s biggest military power by a substantial distance. And it retains some remarkable advantages in global finance, where the dollar plays a central role, and will continue to for the foreseeable future. But it is discovering that it does not possess the hegemony over technology that it is used to possessing.
That is a big part of the explanation for why the U.S. has such difficulty in thinking straight about China. People who think about the politics of technology like to talk about ‘stacks’ - sets of technologies that interlock with each other to create dominance. The U.S. is used to a world in which the most important stacks - e.g. the combination of Internet, platform companies and connectivity hardware - tended to reinforce U.S. technological hegemony. It is now having to get used to a world in which that isn’t obviously true any more.
What do you do when your competitor is racing far ahead of you in building an alternative technology stack that has strategic implications? It’s pretty clear that China is doing just that with green technology: combining batteries, advanced photovoltaics, transmission and transport in an overwhelming package that has a lot of spinoffs and advantages for countries that adopt it. The U.S. doesn’t have good general answers. Indeed, politicians have a hard time even framing the question. Perhaps the U.S. decision on connected Chinese EVs is the beginning of a response - but it is not a very compelling one. The measure was clearly motivated by economic worries (that US manufacturers could not compete) more than the surveillance issues that served as purported justifications. But bans are not going to spur successful competition without policies that go far beyond the Inflation Reduction Act.
Equally, it has become clear over the last two years that technology restrictions are not nearly as effective as financial sanctions in cutting off adversaries. The U.S. has had a hard time in blocking Russian access to semiconductors, and has downgraded its aims to causing “friction” for the Russian economy. The effort to use the semiconductor chokehold to hamper China’s development of AI is at best highly imperfect. China has been able to import significant quantities, and to find other ways to train AI. It is very hard to find technological chokepoints that are effective and that will last even into the medium term, when you are trying to deploy them against a country with its own advanced manufacturing base. This isn’t the USSR.
These are not the kinds of problems that can be solved through the application of overwhelming force, which the U.S. does not, in fact, really have in the ways that it is used to thinking that it has. These are, yet again, complex problems, and the U.S. is trying to solve them with simple means that don’t really scale. That is not going to work. So what does the U.S. need to do?
Most obviously - it needs to build up bureaucratic capacities that it doesn’t currently have. Without a bare minimum of internal institutions to process and understand the tradeoffs between different technological choices, it is going to be at a loss. The problem isn’t just with Congress. It is crazy that the White House Office of the Science and Technology Advisor has to rely on private sector funding to even approach being capable of doing its job.
But that is only the barest of beginnings. Making the right choices in a complex policy environment requires an approach that is a world away from the application of brute force at scale. Your maps of the environment are going to be all wrong when you go in, and brute force is likely to have unexpected consequences. It isn’t just that you are going to make mistakes (you are), but your map of the actual problem you are trying to solve is likely to be utterly out of whack. As you try to catch up with China on EVs, you discover that you don’t understand the market as well as you thought. As you try to impose controls on military use of semiconductors, you find out that you don’t have the information you need to really understand how the semiconductor market works.
The problem - as Jen Pahlka’s book Recoding America explains at length - is that addressing such complex problems does not fit well with the way that the U.S. government works. When you are trying to impose order on a vast, sprawling bureaucracy that is a mid-sized global economy in its own right, and when your people don’t trust government much, you rely on rigid contracting systems, which define the problem in advance down to its finest details, even if that definition is out of whack with reality. You don’t build connections between the bureaucracy and outside actors, unless they run through cumbersome and rigidly pre-defined channels, because it takes months or years to get approval for such connections. And you certainly don’t try to remake policy in realtime as your understanding of the situation changes. Pahlka’s book is cunningly disguised as an account of US software outsourcing practices. If it mentions either ‘national security’ or ‘economic security’ once, I don’t remember it. But it is arguably (along with Dan Davies’ similarly motivated The Unaccountability Machine) the most important book on these topics of the last twenty years.
So the answer to my colleague’s question - once you acknowledge Jervis’s point that the world is complex, what do you do - is this. You start to think, as Jervis didn’t ever quite do, because he didn’t have the tools, about how to build economic security institutions that are designed from the ground up to manage complexity. If you want to take ‘small yard, high fence’ seriously as a policy approach, you need to build the apparatus to discover what lies inside, what lies outside, and what the barriers ought be. That apparatus - and its prescriptions - need to change over time, both to match a better understanding of the policy environment and to track changes in the environment itself.
And we don’t have the apparatus to actually implement small yard, high fence properly. Nor do we have it for pretty much any other plausible economic security policy you might imagine, short of a brute force decoupling of the U.S. and Chinese economies. And if you did that, you would need enormous capacity to manage the horrifically complex aftermath, if that aftermath could even be managed at all.
Clearly, it is far easier to make these arguments in the general than the particular. Saying that you need reforms is straightforward, but figuring out what they ought to be, let alone how to implement them in current political circumstances, is an altogether more difficult challenge. But it is where the debate needs to be going - and there is a role for technology in it. We are in a situation that rhymes in weird ways with the situation discovered by Vannevar Bush after World War II - recognizing that the needs of government had changed, that vastly better information and feedback systems were required to meet those needs, and that even if we didn’t exactly know what those systems were, we needed to start figuring them out, and quickly. That world had its pathologies. This one does too. But to prevent them becoming worse, we need better ways to manage them, and to ensure that the solutions are better than the problems that they are supposed to mitigate.
This is - obviously - a radical set of claims. But it’s one that is entailed by the diagnosis of the problem that I’ve presented. If we need to manage complex challenges - of which the U.S. China relationship is only one - we need a state that is capable of managing complexities. We don’t have one. And that remains a first order problem, regardless of how hawkish or dovish you are.
* You may of course very reasonably disagree with these goals, especially if you are not American, but that is not an argument I propose to get embroiled in here, except to observe, as I suggest later, that problems similar to those I describe confront any plausible definition of what the U.S. national interest ought be. Managing decline might even be a tougher nut to crack.
Doubling down on your argument: Another depressing side effect of getting foundational technologies wrong might be to *hugely* prolong and cement bubbles. I'm pretty much with Yann LeCun and others on the current level and potential paths of the current bundle of technologies we label "AI", which is "useful but nowhere near or on the path to intelligence, never mind super-intelligence." I had assumed that the hype would erode as familiarity with chatbots reset cultural assumptions of what sort of things imply intelligence (and as OpenAI et al failed to find sustainable business models) but if "OpenAI-style AIs are necessary to long-term national security" becomes embedded in US policy and its attendant cashflows, that's a bubble of bullshit that can be sustained for a lot longer and do a lot more damage.
To be clear, I do think "using software to push forward the frontier of how well we can think" (for various values of "we" and "think") is a linchpin challenge for strategic competition at all levels; misunderstanding or over-committing to specific paths to get there risks not just not keeping your advantage on the field but not developing one in the first place.
Henry,
Echoing the praise of this piece.
Theoretical proposal. The economic statecraft debate may have a third dimension that you and Newman have helped explicate.
1) Hawkishness vs. dovishness. How big the fence is. This depends on context: humanitarians can be extremely hawkish on arms exports, labor extremely hawkish on manufacturing, and the Chamber of Commerce hawkish on intellectual property protection.
2) Circle of trust size. There's a bipartisan group that can be hawkish towards China but favors friend/ally shoring. Even within that group, the inclusion of various Middle Eastern partner states or of the Eurozone as an alternate hub within that circle of trust will be hotly debated.
3) The extent of complexity and capacity with which government institutions are trusted. Incumbent institutions are protective of their prerogatives even when burdened with tasks they lack the workforce and resources to achieve. Many reformers want to clear the thicket of regulation but have little faith in the potential to increase governance capacity.
I think you do valuable work in shining a light on that third dimension. I tend to think you're right on the need for institutions that can manage this complexity, but a critical first step is acknowledging that this is a factor to be debated.