Here are some features of DOGE’s approach to changing government:
DOGE is all about scaling. Its fundamental ambition is to get big things done very quickly, and on the cheap.
DOGE looks to scale through data. Humans don’t scale well - hiring and firing take time and come with a lot of politics. Data and algorithms can be scaled up much more easily.
DOGE is highly tolerant of mistakes. You can’t build big and build quickly without making messes along the way.
DOGE looks to overwhelm the opposition before the opposition can even figure out what is happening. Scale up fast enough, and you will be able to set the rules of the game before the other players even realize that there is a game to win.
DOGE relies on a small elite team to completely reshape a much larger organization.
DOGE is hostile to regulation. Rules are made to be broken.
It’s not news that DOGE is a badly implemented effort to apply the Silicon Valley start-up approach to the federal government. Mike Masnick wrote a great piece on this in January. But it is still notable that the first five of these features of DOGE are all explicitly recommended in Reid Hoffman and Chris Yeh’s book on Silicon Valley’s secret ingredient, Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies, while the sixth was the observed practice of Uber, PayPal and other Silicon Valley unicorns that blitzscaled their way to success.
This does not mean that blitzscaling is authoritarianism, or suggest that Hoffman and Yeh are incipient fascists (they are not). Hoffman has explicitly condemned DOGE’s approach, saying that businesses and governments are very different things. But people in Silicon Valley should think, really, really hard about how easily their preferred model of transformation can be applied to efforts to subvert the rule of law, and how readily a generic hostility to regulation can morph into the desire to eliminate democratic accountability. If DOGE demonstrates the failure mode of Silicon Valley applied to government, there is still the possibility of a horrific success in mass deportation and prisons, shepherded through with the help of Palantir.
*******
Reid Hoffman is one of the PayPal mafia (unlike most of the others, he doesn’t seem to have taken a sharp right turn) and a prominent venture capitalist. His and Yeh’s book weaves example and anecdote into a story of how to maximize your chances of success as an entrepreneur. Scale up as quickly as you can, and don’t worry about the mistakes you’ll make along the way.
In Hoffman and Yeh’s description, blitzscaling is “an aggressive, all-out program of growth … prioritizing speed over efficiency, even in an environment of uncertainty.” To blitzscale your way into success, you, as a founder, need a ‘sudden all-out effort’ along the lines of Heinz Guderian’s so-called ‘blitzkrieg’ strategy in World War II. You should throw away the usual MBA handbooks that tell you about good management practice, and instead adopt sloppy but rapidly iterated and evolving strategies aimed at dominating your particular market, before your competitors can build up their own control and squash you. Traditional management theory tells you that you should prioritize “correctness and efficiency” over speed. But when a “market is up for grabs,” the root of failure isn’t inefficiency, but playing it too safe. Blitzscaling means that you will “definitely make many mistakes” but that is fine - so long as you learn from them.
It’s do-or-die stuff - and very possibly your company will die. But if it succeeds, it may succeed at a scale that you would never reasonably have expected. “When a start-up matures … it has the opportunity to become a ‘scale-up,’ which is a world-changing company that touches millions or billions of lives.”
Blitzscaling, then, is a strategy to build scale-ups by beating your competitors and reshaping your entire market sector around your phenomenal success.
On offense, blitzscaling allows you to do several things. First, you can take the market by surprise, bypassing heavily defended niches to exploit breakout opportunities. … Second, you can leverage your lead to build long-term competitive advantages before other players are able to respond … On defense, blitzscaling lets you set a pace that keeps your competitors gasping simply to keep up, affording them little time and space to counterattack. Because they’re focused on responding to your moves, which can often take them by surprise and force them to play catch-up, they don’t have as much time available to develop and execute differentiated strategies that might threaten your position. Blitzscaling helps you determine the playing field to your great advantage.
The key is what Hoffman and Yeh call “first-scaler advantage … once a scale-up occupies the high ground in its ecosystem, the networks around it recognize its leadership, and both talent and capital flood in.” While Hoffman and Yeh are mostly writing for aspiring entrepreneurs, they also describe how big incumbent organizations too can blitzscale or be blitzscaled by small, agile teams, so long as these teams insulate themselves from the larger pathologies of the institutions that they want to change.
It is hard to read the above and not see DOGE prefigured in every sentence. Be aggressive, prizing speed over efficiency. Who cares if you make mistakes? You are not looking to achieve perfection, but victory! Aim to achieve complete domination, before your enemies can crush you. Take them by surprise, bypassing their defenses. Keep them perpetually on the back foot, as you move from one aggressive action to another, so that they are always reeling and off balance. Work with a small team, which is deliberately insulated from the bigger organization, and not accountable to it. That is, very much, the story of the last four months of DOGE. It hasn’t succeeded in reshaping the federal government around its mission, and it almost certainly won’t. But like the original blitzkrieg, it has absolutely created large-scale devastation.
*******
There is a strong case for blitzscaling under some circumstances. As Hoffman and Yeh rightly emphasize, we face large-scale challenges, and very likely need to scale up quickly and effectively to address them. The book was written before Operation Warp Speed, but I would guess that Hoffman and Yeh would be happy to claim it as a victory for their approach, and with some justification. When we face major crises, we need to do things quickly and decisively, and we need to be tolerant of mistakes, as we figure out what the best approaches are to messy, complex and enormous challenges.
Equally, when scaling becomes a universal approach, it can become its own pathology. Catherine Bracy’s new book, World Eaters, provides the other side of the story. Most people who like World Eaters probably won’t like Blitzscaling, and vice versa. Still, they’re both describing the same phenomenon (Bracy has a lot to say about the concept of blitzscaling) from radically different perspectives.
Bracy explains the venture capital model behind the do-or-die approach to blitzscaling. Venture capitalists are not actually interested in accumulating a portfolio of investments, all of which have a solid chance of doing well. Instead, they are perfectly happy with nine companies that die, so long as there is one company that can take whatever it does to massive scale. Companies that break out, and that achieve dominance over a large market, can generate supersized returns that more than compensate for the failed investments.* That is why tech founders focus on achieving scale at all costs, in contrast to smaller technology companies that might accomplish more specific and limited useful things. Their investors demand it.
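The arithmetic behind this portfolio logic is simple enough to sketch. (The numbers below are invented purely for illustration - they are not Bracy’s, and real fund returns vary widely.)

```python
# Illustrative sketch of venture portfolio arithmetic: why one
# breakout return can more than compensate for many total losses.
# All numbers are hypothetical, chosen only to show the logic.

def portfolio_multiple(outcomes):
    """Overall multiple on invested capital for a portfolio of
    equal-sized investments, where each outcome is the multiple
    returned by one investment (0.0 = total loss)."""
    return sum(outcomes) / len(outcomes)

# Ten equal bets: nine die outright, one breaks out at 50x.
outcomes = [0.0] * 9 + [50.0]
print(portfolio_multiple(outcomes))  # 5.0 - the fund still quintuples
```

A fund that quintuples despite a 90% failure rate has no reason to prefer ten modest successes over nine corpses and one monster - which is exactly the incentive structure Bracy describes.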
World Eaters is a really good book - it’s the best analysis I have read of the underlying political economy of Silicon Valley. Bracy argues that this model has further consequences. In its current iteration, it is well fitted to funding software companies - where you start with a maybe-just-about-adequate product and continuously rewrite and improve it. As the book describes it:
“The part of the software approach where you iterate quickly, and if it doesn’t work, you change it. Continuous improvements. All these things are nice ideas,” [her informant] said. And they come at a low cost. Rewriting software to tweak a product is relatively simple and usually doesn’t require a massive shift in operational capacity.
It has gotten much, much easier to throw together a start-up in Silicon Valley since ‘compute’ - computing power in the cloud - became commoditized. To get a tech start-up going, you don’t have to buy your own servers. Instead, you rent server capacity, allowing you to respond rapidly to changes in demand by renting more or less.
But other inputs cannot be scaled up and down so easily. Unlike rack servers, humans object to being hired and fired in response to temporary fluctuations in demand. U.S. employees, furthermore, often expect that their employers will pay for health care, insurance, workers’ comp and the like.
According to Bracy, this explains Silicon Valley’s heavy reliance on independent contractors, and pushback against laws and judicial decisions that would require them to treat contractors like employees. Hoffman and Yeh suggest that human employees are “growth limiters” because relationships with them are difficult to manage, and say that “one approach” for start-ups is a business model “that requires as few human beings as possible.” Another is “to find ways to outsource work to contractors or suppliers.” Even so, they acknowledge that start-ups can “delay the reckoning” but will likely be faced, if they succeed, with the need to hire thousands or even tens of thousands of employees.
It would be fair to say that this business model views most people beyond the founder and his (it is, usually, a he) top people as either resources or constraints. To use Karl Polanyi’s language, it “commoditizes” labour and other human beings (such as social media consumers, whose attention is bought and sold on ever-adapting spot markets). Hence Silicon Valley’s hostility to labour rights, which make it more expensive to scale, and indeed to norms and rules that require them to pay any more attention to dealing with human-level idiosyncrasies than they absolutely have to. If your business model is based on scaling, you want to do everything you can to avoid or automate granular decisions. Having to consider individual problems one by one is death for business models built on scaling.
That leads to start-up business models that (a) rely as much as possible on data, (b) are pulled together quickly (and are revised as needed), and (c) regularly make mistakes as they work towards the greater cause of market domination. Such start-ups have the obvious advantages that Hoffman and Yeh identify, but they also have characteristic pathologies. Because they regularly rely on algorithms and data (which scale better than humans), they often make heroic assumptions about data’s validity, throwing away information in the process. Data is always and everywhere a highly imperfect substitute for the actual physical and social relations that it represents. In Thi Nguyen’s description:
The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.
Standard-issue start-ups, and their funders, have strong incentives to ignore such limitations. They are tolerant of algorithms that make mistakes in the rush to market domination. And once successful companies have achieved domination, they will look to maintain it at scale, as cheaply as they can.
The best description of the consequences that I’ve read is an amazing Bloomberg article by Spencer Soper on Amazon Flex. When Amazon decided to create its own delivery service, it didn’t want to hire long-term employees with rights. Nor did it want to have to spend money on supervising them and making sure that they did a good job. So instead, it used algorithms to hire and fire a host of independent contractors.
The problems were predictable. Flex was able to gather a lot of data on the contracted drivers’ delivery patterns. That data, however, did not always provide an accurate picture of what was happening in the real world.
the moment they sign on, Flex drivers discover algorithms are monitoring their every move. Did they get to the delivery station when they said they would? Did they complete their route in the prescribed window? Did they leave a package in full view of porch pirates instead of hidden behind a planter as requested? Amazon algorithms scan the gusher of incoming data for performance patterns and decide which drivers get more routes and which are deactivated. Human feedback is rare. Drivers occasionally receive automated emails, but mostly they’re left to obsess about their ratings, which include four categories: Fantastic, Great, Fair or At Risk.
… Amazon algorithms rate drivers based on their reliability and delivery quality, mostly measured by whether they arrived to pick up packages on time, if they made the deliveries within the expected window and followed customers’ special requests. Flex metrics focus mostly on punctuality, unlike ride-hailing services such as Uber and Lyft, which also prioritize things like a car’s cleanliness or driver courtesy. Moreover, Uber and Lyft passengers know when they’re stuck in traffic, so drivers are less likely to be penalized for circumstances beyond their control. … When she spotted a nail in her tire, Amazon didn’t offer to come retrieve the packages but asked her to return them to the delivery station. Lira was afraid the tire would go flat but complied to protect her standing. Despite explaining the situation, her rating dropped to “at risk” from “great” for abandoning the route and took several weeks to recover. … Lira was provided an email address and invited to appeal the termination within 10 days. … Without the driving gig, Lira began to struggle financially. … “It just wasn’t fair,” Lira said. “I nearly lost my house.”
And not only were the problems predictable. They were predicted!
Amazon knew delegating work to machines would lead to mistakes and damaging headlines, these former managers said, but decided it was cheaper to trust the algorithms than pay people to investigate mistaken firings so long as the drivers could be replaced easily. … Inside Amazon, the Flex program is considered a great success, whose benefits far outweigh the collateral damage, said a former engineer who helped design the system. “Executives knew this was gonna shit the bed,” this person said. “That’s actually how they put it in meetings. The only question was how much poo we wanted there to be.”
Blitzscaling produces businesses that tend to work in particular ways. These businesses are very good indeed at achieving efficiencies of scale - that is what the current VC model optimizes for. They treat inputs that are not inherently scalable, such as human labor, as though they were, and select for social and legal arrangements that make them as scalable as possible. They are hostile to outside constraints that might limit scalability, such as labor rights, and look to minimize the role of human decisions (which are relatively expensive) as much as possible, in favor of algorithmic decisions (which are relatively cheap). All of the above tends systematically to limit their accountability to workers and to others whose lives may be very substantially affected by their workings. Granular accountability for specific mistakes is computationally expensive for all organizations, and it is especially expensive for blitzscaled organizations that rely on minimizing human oversight as much as possible.
There may be justifications for automated decisions, especially when they do not involve very important problems. Temporarily banning social media users for seemingly acting the maggot is obviously far less problematic than firing someone by algorithm (with box-ticker appeal processes). But (rewording Bracy’s conclusions in my terms) the current Silicon Valley funding model selects systematically for building organizations at one extreme end of these tradeoffs, regardless of the specifics.
These pathologies are not unique to the algorithms-plus-data model of governance, although they are especially marked in it. Pathologies of scale are the besetting pathologies of modernity, and bureaucracies and markets too are notoriously terrible at delivering local justice. Bureaucracies, for example, have similar tradeoffs between inflexible rules and specific situations that don’t fit into them. However, the fact that bureaucratic rules are administered by human beings means that there are more opportunities at street level to vary implementation in response to specific circumstances that the rules don’t anticipate or fully cover (of course, the extent to which humans are willing to vary implementation may be limited, and where it is not limited, other problems are likely to creep in). Silicon Valley blitzscalers tend to skimp on humans whenever possible, meaning that they are bad at dealing with out-of-sample problems. They are furthermore hostile to outside forces that either impose general costs, or oblige them to deal with particularities, in ways that would make scaling less viable.
*******
So what happens when this model is applied opportunistically to government? DOGE suggests the obvious answer: nothing good. Stories abound about the wildly unrealistic assumptions that DOGE’s people have made, and their fundamental lack of understanding of the federal government’s reliance on ancient software and cruddy data sources, patched together by ad-hoc systems of duct tape and technical debt. Things have already started going badly wrong, and they are likely to get worse. Bracy’s informant’s warning that continuous improvement is a “nice idea” where it works applies too to enormous, complex systems that people rely on for many of the basics of their lives. And it gets worse when insouciance about real people’s lives shades into the active desire to get rid of government, because you think it is inherently bad.
Hoffman has spoken out to condemn this problem. He clearly recognizes that making mistakes as a business trying to scale up fast is very different from making mistakes as a government that is taking life-and-death decisions every day.
“I worry that very bad risks are being taken,” Hoffman told Bloomberg TV on Monday. “Speed is not a problem. Risks are a problem.”
“For example, it’s like, ‘Well, we’re just going to fire a whole bunch of people. Oh, oops, we fired a whole bunch of nuclear safety inspectors,’” he added. “That’s the kind of thing that is taking risks that [are] unwarranted.”
… “Governments are not companies,” he said. “You actually have to say, ‘We take less risk here, even at the price of some inefficiency, because it’s more important for us to not have things blow up.’”
But there are actually worse outcomes than DOGE making mistakes. There is a world in which DOGE-type reforms are not a failure mode, and instead of junking government, move towards creating a new mode of government that mingles the worst aspects of algorithmic processes at scale and bureaucratic unaccountability. It’s worth contrasting the DOGE approach with that of Palantir, which has been engaged with government and large organizations for a long time, and - even if it is not altogether successful - is visibly far less incompetent than DOGE has been so far. These reflections by a former long-time Palantir employee are worth reading at length, to get a sense (with the strengths and weaknesses of an insider account) of how the company does its work.**
FDEs were typically expected to ‘go onsite’ to the customer’s offices and work from there 3-4 days per week, which meant a ton of travel. This is, and was, highly unusual for a Silicon Valley company. There’s a lot to unpack about this model, but the key idea is that you gain intricate knowledge of business processes in difficult industries (manufacturing, healthcare, intel, aerospace, etc.) and then use that knowledge to design software that actually solves the problem.
… You took disparate sources of data — work orders, missing parts, quality issues (“non-conformities”) — and put them in a nice interface, with the ability to check off work and see what other teams are doing, where the parts are, what the schedule is, and so on. Allow them the ability to search (including fuzzy/semantic search) previous quality issues and see how they were addressed. These are all sort of basic software things, but you’ve seen how crappy enterprise software can be - just deploying these ‘best practice’ UIs to the real world is insanely powerful. This ended up helping to drive the A350 manufacturing surge and successfully 4x’ing the pace of manufacturing while keeping Airbus’s high standards of quality.
This made the software hard to describe concisely - it wasn’t just a database or a spreadsheet, it was an end-to-end solution to that specific problem, and to hell with generalizability. Your job was to solve the problem, and not worry about overfitting; PD’s job was to take whatever you’d built and generalize it, with the goal of selling it elsewhere.
… data integration … was (and still is) the core of what the company does, and its importance was underrated by most observers for years. In fact, it’s only now with the advent of AI that people are starting to realize the importance of having clean, curated, easy-to-access data for the enterprise.
Why is data integration so hard? The data is often in different formats that aren’t easily analyzed by computers – PDFs, notebooks, Excel files (my god, so many Excel files) and so on. But often what really gets in the way is organizational politics: a team, or group, controls a key data source, the reason for their existence is that they are the gatekeepers to that data source, and they typically justify their existence in a corporation by being the gatekeepers of that data source … Being a successful FDE required an unusual sensitivity to social context – what you really had to do was partner with your corporate (or government) counterparts at the highest level and gain their trust, which often required playing political games.
The aspiration here is to do what DOGE at least says it wants to do, but to do it well. Taking account of context. Building relationships with the people who have control of the information. Recognizing how messy the data is, and figuring out ways to patch it together usefully. And creating new interfaces that pull the information together.
But what happens when the “problem” that you want to solve is making it easy to imprison and deport millions of people? A leaked internal Palantir document uses corporatespeak to explain its current response to this question:
Coupled with the incoming administration's priorities, HSi's vision grew substantially more inclusive of ICE-wide efforts by March 2025. At that time, the HSI leadership team sought our assistance to accelerate mission progress across the agency. Two main factors drove HSI's new sense of urgency: 1) clear failure by custom-developed projects to deliver real results for the field and 2) a renewed focus across ICE on immigration enforcement, which shined a light on the agency's data systems challenges and shortcomings. … Palantir is aligned with enabling immigration-focused agencies to utilize our platform to better track the immigration lifecycle and serve our national security while also promoting efficiency, transparency, and accountability. We provide the tools for our customers to enable fair treatment and legal protections for individuals across the spectrum of immigration status.
The weasel words about “provid[ing] the tools for our customers to enable fair treatment and legal protections” belie the actuality. The Trump administration explicitly wants to eliminate due process for the people whom it wants to deport. In J.D. Vance’s words:
To say the administration must observe 'due process' is to beg the question: what process is due is a function of our resources, the public interest, the status of the accused, the proposed punishment, and so many other factors, …When the media and the far left obsess over an MS-13 gang member and demand that he be returned to the United States for a *third* deportation hearing, what they're really saying is they want the vast majority of illegal aliens to stay here permanently. …
Here's a useful test: ask the people weeping over the lack of due process what precisely they propose for dealing with Biden's millions and millions of illegals. And with reasonable resource and administrative judge constraints, does their solution allow us to deport at least a few million people per year?
To put it just a little differently: it will be impossible to scale up mass deportation to the required level, if we have to grant people the right to contest life-changing decisions, based on the specifics of their particular circumstances and whether they fit or don’t fit the rule. Again: having to consider individual problems one by one is death for business models built on scaling.
The big danger, then, is not blitzscaling government as a failure, but blitzscaling government as a catastrophic possible success. Palantir’s current approach provides a “that’s not my department, says Wernher von Braun” attitude to cleaning and integrating the information needed for mass deportation, while the Trump administration wants to pull every bureaucratic trick that it can to deny rights to the people whom it wants to detain.
It’s genuinely heartening to see that some Silicon Valley people - including those who I’d disagree with on plenty of other political topics - recognize the problem.
Y Combinator founder Paul Graham shared headlines about Palantir’s contract on X, writing, “It’s a very exciting time in tech right now. If you’re a first-rate programmer, there are a huge number of other places you can go work rather than at the company building the infrastructure of the police state.”
Some former Palantir alumni appear extremely unhappy about the company’s mindmeld with the Trump administration.
Equally, this isn’t a problem that harsh words are going to dispel. The Y Combinator Approach To Remaking The World is indeed remaking the world, albeit not in the ways that were expected. If there are any coherent perspectives in the Silicon Valley elite on the limits of blitzscaling - where it ought to be applied and where not - beyond Hoffman’s and Graham’s immediate comments, I’d love to be able to read them. And if not, there ought to be, and soon.
Quinn Slobodian has a new book, Hayek’s Bastards, about how a new generation of thinkers and politicians took Hayek’s and other right-libertarians’ ideas in unexpected directions, welding them together with essentialist arguments about race and destiny. Silicon Valley has its bastards too, companies and individuals that are applying scaling, algorithmic governance and other techniques to the large-scale abuse of human rights. It’s long past time for people in Silicon Valley to start thinking hard about why it is that so much of what they do can readily be converted into tools for making tyranny work better. When is blitzscaling appropriate, and when inappropriate? What are its social and political consequences? Does blitzscaling - by breaking down regulations and attacking alternative forces that might mobilize against it - help pave the way towards unaccountable government? If so, how do you build accountability back in, or passively or actively foster counterforces that will stop this from happening?
It’s fair for people in Silicon Valley to highlight the differences between their political aspirations and the blitzscaling for tyranny that we’re seeing happening right now in Washington DC. But at a minimum, they also need to think about how their own investment and business strategies enable the actions that they deplore, and how to start changing that.
* Bracy calls this the “power law logic.” To avoid upsetting Cosma, I’ll call it the “power-law-or-log-normal-or-stretched-exponential logic,” while admitting you’d never be able to get away with language like that in a trade book.
** NB that I have no reason to believe that this employee defends or endorses what Palantir is now doing, and suspect on the odds that they likely do not. A lot of former Palantir people seem to be extremely unhappy with its current pivot.