The Biden administration, in its final days, has laid out an extraordinarily ambitious plan to use its chokepoint on high-end semiconductors to control global AI. The best explanation I’ve seen is this Bloomberg scoop by Mackenzie Hawkins and Jenny Leonard, published a few days before the draft plan appeared. The draft plan itself is available on the Federal Register (warning: there is a lot of export-control lawyerly jargon).
The idea is to use export controls to restrict the sale and use of AI, in order to achieve two U.S. policy goals. The first is America’s desire to keep the most advanced AI out of China’s grasp, for fear that China will use strong AI to undermine U.S. security. The second is its desire to allow most countries some degree of continued access to semiconductors and AI, to mitigate the anticipated shrieks of protest from big U.S. firms that don’t want to see their export markets disappear.
Hence, this highly complex plan involves controlling access to the advanced semiconductors that are used to train advanced AI models, as well as the model ‘weights’ themselves. The plan continues to very sharply restrict China’s and some other countries’ access to highly advanced semiconductors (what Hawkins and Leonard call Tier 3 countries - the actual terminology is more technical and abstruse). It allows a much more liberal regime of exports without much in the way of controls to a small group of ‘Tier 1’ countries - important allies and other friendlies such as Norway and Ireland. Finally, there is a large intermediary zone of other countries, including some traditional U.S. allies, that will be allowed access to U.S. semiconductors, but under complex restrictions. This loosely extends the regime that the U.S. is trying to make Saudi Arabia and the United Arab Emirates comply with (in which they would agree to detach their data centers from Chinese technologies in exchange for semiconductor access) to the rest of the world. As Hawkins and Leonard describe it:
the vast majority of the world would face limits on the total computing power that can go to one country. … Companies headquartered in [Tier 2 countries] would be able to bypass their national limits — and get their own, significantly higher caps — by agreeing to a set of US government security requirements and human rights standards, according to the people. That type of designation — called a validated end user, or VEU — aims to create a set of trusted entities that develop and deploy AI in secure environments around the world.
… Companies [in Tier 1 countries] can freely deploy computing power in those places, and firms headquartered there can apply for blanket US government permission to ship chips to data centers in most other parts of the world. That’s provided that no more than a quarter of their total computing power is located outside of Tier 1 countries, and no more than 7% in any one Tier 2 country. Companies would also have to abide by US government security requirements.
Additionally, US-headquartered companies that apply for that type of permission — a so-called universal VEU designation — have to keep at least half of their total computing power on American soil, people familiar with the matter said. The broader goal of these regulations is ensuring that the US and allied countries always have more computing power than the rest of the world.
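Taken together, the caps that Hawkins and Leonard report amount to a simple set of arithmetic constraints. Here is a minimal sketch of how they fit together; the thresholds (25%, 7%, 50%) come from the Bloomberg description, but the function and field names are hypothetical illustrations, not anything in the actual rule:

```python
# Hypothetical sketch of the reported universal-VEU compute caps.
# Thresholds are from the Bloomberg report; everything else is illustrative.

def universal_veu_compliant(compute_by_country, tiers, us_headquartered):
    """compute_by_country: {country: compute units}; tiers: {country: 1 or 2}."""
    total = sum(compute_by_country.values())
    outside_tier1 = sum(c for k, c in compute_by_country.items()
                        if tiers.get(k) != 1)
    # No more than a quarter of total computing power outside Tier 1 countries.
    if outside_tier1 > 0.25 * total:
        return False
    # No more than 7% of total computing power in any single Tier 2 country.
    for k, c in compute_by_country.items():
        if tiers.get(k) == 2 and c > 0.07 * total:
            return False
    # US-headquartered firms must keep at least half their compute on US soil.
    if us_headquartered and compute_by_country.get("US", 0) < 0.5 * total:
        return False
    return True
```

On this reading, a U.S. firm with 60 units of compute at home, 20 in Norway (Tier 1) and 20 in India (Tier 2) would fail: the per-country Tier 2 cap of 7% binds long before the 25% aggregate cap does.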
That last sentence is the most important - this complex system is intended to cement U.S. power over information technology over the longer term. So will it work?
That depends on five distinct bets; two on technology, and three on politics. Here’s what they are:
The bet on scaling
The most straightforward bet behind this policy is that the “scaling hypothesis” is right. That is, (a) the more computing power is applied to training AI, the more powerful it will be, and (b) access to the most advanced parallel-processing semiconductors is essential to building cutting-edge AI models. If this is so, then the U.S. has a possible trump card. U.S.-based and U.S.-dependent companies like Nvidia and AMD, which design the cutting-edge semiconductors used to train AI, have a considerable advantage over their competitors. China and other U.S. rivals and adversaries have no equivalent producers, and are obliged to rely on the inferior chips that they can make themselves, or that the U.S. allows them access to.
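The scaling hypothesis is usually stated as an empirical power law: loss falls smoothly as training compute rises. A toy illustration of why that makes chips a chokepoint (the constants here are invented for illustration, not fitted values from any real scaling-law paper):

```python
# Toy illustration of the scaling hypothesis: loss as a power law in
# training compute, L(C) = a * C**(-b). Constants a and b are made up.

def scaled_loss(compute, a=10.0, b=0.05):
    return a * compute ** (-b)

# Each doubling of compute buys the same *multiplicative* improvement
# (a factor of 2**-b), so every constant-sized gain costs exponentially
# more chips -- which is what makes chip access the binding constraint.
```

Under these assumptions, `scaled_loss(2 * C) / scaled_loss(C)` is the same for any `C`, so whoever can keep doubling compute keeps improving, and whoever cannot stalls.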
If this bet is right, then the U.S. indeed potentially possesses a chokehold that might allow it to shape the world’s AI system, selectively providing access to those countries and companies that it favors, while denying access to those it does not. Controlling the chips used for training, while restricting the export of AI weights, will allow it to shape what other countries do.
There is, however, some possible evidence suggesting that the relationship between chips and scaling is more complicated than the U.S. might like. A couple of weeks ago, a Chinese company, DeepSeek, announced results suggesting that it has trained a frontier AI model without access to the most advanced semiconductors. DeepSeek’s model seems to achieve results equivalent to those of powerful U.S. models with far less training compute. If this holds up, the semiconductor chokehold may be less decisive than the U.S. hopes.
The bet on AGI
One other belief, which is quite widespread among people in the U.S. national security debate as well as many in Silicon Valley, is that we are on the verge of real AGI - ‘artificial general intelligence.’ In other words, we are about to witness a moment where there will be a vast leap forward in the ability of AI to do things in the world, creating self-reinforcing dynamics where those with strong AI are going to be capable of creating yet stronger AI and so on in a feedback loop. This then implies that short-term AI superiority over the next couple of years might readily be converted into a long-term strategic advantage. This article, by Matt Pottinger (former Deputy National Security Advisor in the first Trump administration) and Dario Amodei (founder of Anthropic) gives a sense of this argument, and how the scaling hypothesis reinforces it.
By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. … It will be able to … complete complex tasks that would take people months or years, such as designing new weapons or curing diseases. Imagine a country of geniuses contained in a data center. … The nations that are first to build powerful AI systems will gain a strategic advantage over its development. Incoming Trump administration officials can take steps to ensure the U.S. and its allies lead in developing this technology. If they succeed, it could … extend American military pre-eminence. If they fail, another nation—most likely China—could surpass us economically and militarily. It’s imperative that free societies with democratic oversight and the rule of law set the norms by which AI is employed.
Something like these assumptions likely explains the Financial Times quote from an anonymous government official that:
“time is really of the essence”. “We’re in a critical window right now, particularly vis-à-vis China. If you think about where our models are today relative to People’s Republic of China models, the estimates range from being six to 18 months ahead right now, and so every minute counts.”
A temporary lead might turn into an incredibly powerful long term advantage. Equally, if the ‘AGI by 2027’ thesis turns out to be wrong, then it’s more likely that any advantage will be less important and less enduring, since the AI will generally be less useful, and will be incapable of building on itself in the ways that the “Super-Intelligence As a Service” theory predicts.
Here, for example, Arvind Narayanan and Sayash Kapoor argue that we should be skeptical about the hype that is bubbling out right now from inside the big AI companies.
Industry leaders don’t have a good track record of predicting AI developments. … There are some reasons why we might want to give more weight to insiders’ claims, but also important reasons to give less weight to them. … there’s a huge and obvious reason why we should probably give less weight to their views, which is that they have an incentive to say things that are in their commercial interests, and have a track record of doing so.
There is a lot more in Narayanan and Kapoor’s article, about the specifics of what is happening right now, as we (perhaps) move from one model of AI development to another. I find their arguments compelling - your own mileage may of course vary.
The bet on export control effectiveness
There are other questions than the technological ones. Most notably: are export controls an effective means of blocking China’s access to these semiconductors? There is some evidence that export controls are less efficacious than U.S. government officials would like. Initially, many national security people saw export controls as financial sanctions, but for physical products. The U.S. had a lot of experience in using the global dollar clearing system to terrify the bejasus out of global banks and to turn the world’s financial system into a generalized system of coercion. Export controls would surely do the same for semiconductors and other key technologies.
It turned out to be more complicated than that. Export controls are much messier to implement than sanctions. Producers are not as dependent on U.S. licensing requirements as banks are, and are often more willing to marshal their political allies to help, and to press right up to the edge of what is legal to keep making profits. It is much harder for U.S. authorities to get good data on who is sending what physical goods to whom than on who is sending what money to whom, because there is no central clearing house for information on goods, as there is for financial flows (SWIFT). Nor is the data always particularly good when you can get it. Product codes are broad, sometimes ambiguous, and open to being gamed.
Doing export controls well is hard. If this plan gets implemented, the U.S. is going to have to do it at a much broader scale than before, for much more ambitious objectives, and with (at best) the grudging cooperation of surly and truculent chip companies, who have made it clear how much they hate the rules. In addition, there will be further complications in getting the ‘end users’ who buy semiconductors to comply with the requirements that the U.S. wants to impose.
The bet on organizational capacity
Even if you can get the information, you have to have the right organizational structures and resources to analyze it, and use it to implement policy. One of the open secrets of Washington DC enforcement is that the Department of Commerce’s Bureau of Industry and Security (BIS) - the entity that is in charge of export controls - doesn’t have what it needs to do its job. Until relatively recently, the BIS was mostly unknown, and seemed incredibly technical to everyone except the specialists. Kevin Wolf, who used to run it, joked that export control regulations were like tax law, but without the sex appeal. These days, export controls certainly have sizzle - but they are still mostly administered by legal specialists. There isn’t nearly as much capacity for strategic analysis as there needs to be. The information systems aren’t great. And they don’t have nearly enough people.
So what happens if the BIS is asked to administer this vast scheme for dividing the world into three parts, and regulating how semiconductors are used across them? It is certainly not inconceivable that the BIS could beef up its capacities, as Treasury’s Office of Foreign Assets Control has done, building up the internal structures to analyze, to think strategically, and to revise strategy based on results. But it is a real challenge - and will be much harder to pull off in a new administration that is apparently quite hostile to technical expertise and the “deep state.”
The bet on politics
None of this will happen if the Trump administration doesn’t want it to. And there are clearly Republicans who are listening to industry protests, and promising to do what they can to get the plan reversed. A lot of people are speculating that the plan is dead on arrival.
That may be premature. One plausible interpretation is that the Biden people are trying to create facts on the ground that will bolster China hawks in the incoming administration, who want strong technology restrictions, so that they have a greater chance of prevailing over the people who want to let technology rip. And that might perhaps work!
It isn’t just the foreign policy people who want sharp restrictions on China. It is also some important people in the AI debate. Pottinger is probably not going to be coming back in (he demonstrated Insufficient Loyalty to the Beloved Leader in the days surrounding January 6 2021), but his co-author, Amodei, reflects a general hawkish turn among many people in Silicon Valley. If you buy into the AGI argument and into the notion that there is a battle of systems between democracy and autocracy, you may be quite willing to support controls. China has few friends on the Hill.
I don’t feel particularly confident in making any predictions about what the Trump administration will do. I am not the person you ought to turn to for accurate gossip about who has influence among the people who are about to take power. But I don’t see any unambiguous signals (yet) that one side or the other has the upper hand in the internal arguments.
So on balance, I think that there are quite strong reasons for skepticism about the Biden administration’s plans to control AI. A lot will have to go right for this to work as well as they would like. Equally, it might work out, for example, if I and (more importantly) other AGI-skeptics are mistaken. If we are indeed just on the verge of a massive epochal technological transformation, the U.S. doesn’t necessarily have to get everything right to stay ahead for just long enough. I’m personally very confident that we are not on the verge, and that the Singularity is going to remain Nigh for a very long time, but again, I am not an oracle.
Update - for a deeper dive into the technological/regulatory questions (aimed at people who are deeply engaged with these questions), go read this ChinaTalk debate.
* Yes, if you are a classics pedant, you can object that “Gaia” is Greek, not Latin. But yer modal imperial Roman intellectual, being fluent in the Attic, would have gotten the bad joke, even if they might have groaned at it.
On the AGI thing: there have recently been suggestions to extend the two-way split of AI into 'Narrow' and 'General' (such as Google DeepMind has promoted) into a three-way split: 'Narrow', 'General', and, in between, 'Broad & Shallow' (Marcus). GenAI is in that middle category and has no real route to 'General' (e.g. not via scaling). See https://garymarcus.substack.com/p/agi-versus-broad-shallow-intelligence
Or https://ea.rna.nl/2025/01/08/lets-call-gpt-and-friends-wide-ai-and-not-agi/ (where 'Broad & Shallow' is called 'Wide'). Marcus links to that story as well. 'Broad & Shallow' is the more precise label, as the shallowness is the essential issue that makes evolving into 'General' hard, if not impossible. The 'Wide' article contains some more background, e.g. GPT-o3's ARC-AGI results and the importance of 'imagination' when looking at intelligence.
Regarding the critical role BIS plays in tech export controls, as well as the resource challenges they face, I recommend watching this excellent conversation between Greg Allen of CSIS (a DoD Joint AI Center plank holder) and Alan Estevez, Under Secretary of Commerce for Industry and Security.
It is always helpful to hear directly from the source, so to speak, on the motivations behind and the challenges of implementing technology export controls. It's a thoughtful, even introspective discussion, with Alan acknowledging at the end that he has no idea how the next administration will approach these issues.
https://www.csis.org/events/reflecting-commerce-departments-role-protecting-critical-technology-under-secretary-commerce