AI is a bureaucratic technology. So is war.
What happens when AI slop hits targeting systems and civil liberties?
A short post as an addendum to the previous, on an aspect of the fight between Anthropic and the Department of Defense that ought to be clear but perhaps isn’t, because it doesn’t fit easily into the stated terms of disagreement. The important beef is not over whether Claude is going to become the overbrain for an army of T-1000s, marching in lockstep to advance America’s interests. It is over the bureaucratic uses of AI in war: both war fought abroad and war, perhaps, waged by the state against its own people.
If you have any experience at all of the US Department of Defense, you will know that it is a labyrinthine bureaucracy, with its own complex interests. Paul Krugman asks sarcastically this morning whether US troops are supposed to flex their biceps at attacking drones. There is an important lesson behind this query. The “modern system” of war doesn’t depend on biceps, or even materiel, so much as it does on the complex organizational structures that allow assets to be deployed successfully in ways that reinforce each other. Logistics play an incredibly important role - if you don’t get stuff to roughly the right place at the right time, you are going to lose. A myriad of specific decisions taken by individuals need to cumulate properly. That all helps explain why the Pentagon has so much bureaucracy: even if it is inefficient in specific cases, and sometimes inefficient in general, you can’t do without it.
That, in turn, helps explain why AI - both general summarize-and-pull-information-together-and-generate systems like Claude, and more specialized systems built for particular purposes - is valuable to the Department of Defense. These systems potentially improve coordination. The “Management Singularity” is incredibly useful to large organizations that have a lot of information to manage. I was just on Jordan Schneider’s ChinaTalk podcast (not up on Substack yet as far as I can see), talking with people who, unlike me, have direct experience of the US military. It’s hard to overestimate the advantages of LLMs for carrying out tasks of organizational translation, such as semi-automating the stripping of sensitive sources-and-methods information from classified documents to be shared with allies.
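To make the “organizational translation” task concrete, here is a toy sketch of rule-based redaction. It is purely illustrative: the patterns and document markings are invented, and an actual semi-automated pipeline would use an LLM pass and human review rather than hand-written regular expressions.

```python
import re

# Toy sketch only. All markings and patterns below are invented for
# illustration; they do not reflect any real classification scheme.
SENSITIVE_PATTERNS = [
    r"SOURCE:\s*\S+",          # hypothetical source identifiers
    r"COLLECTED VIA [A-Z]+",   # hypothetical collection-method notes
]

def redact(text: str) -> str:
    """Replace any match of a sensitive pattern with a redaction marker."""
    for pattern in SENSITIVE_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

report = "SOURCE: ASSET-7 reports movement. COLLECTED VIA SIGINT intercepts confirm."
print(redact(report))
# → "[REDACTED] reports movement. [REDACTED] intercepts confirm."
```

The point of even a toy version is that the rules are explicit and auditable; the appeal of an LLM here is handling the cases that no fixed pattern list anticipates - which is also where the slop comes in.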
Equally, there are things you ought to worry about if these technologies are widely adopted. In actual war, you ought to worry about target selection. See, for example, +972 Magazine’s account, based on disaffected sources from within the military, of how Israel used AI to decide whom to hit in Gaza.
During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.
AI classifiers - which is what Lavender clearly is - make a lot of sloppy mistakes. Slop means something very different when a system is designed to kill people based on incomprehensible and unreliable embeddings, than when it is designed to serve up ads.
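A back-of-the-envelope calculation shows why even a “roughly 10 percent” error rate is catastrophic at scale. The numbers below are entirely hypothetical, and the quoted “10 percent” is read here as a false-positive rate applied across the scored population - one plausible interpretation, not a claim about how Lavender actually worked:

```python
# Hypothetical base-rate illustration; none of these figures come from
# any real system or report.
population = 1_000_000       # people the classifier scores
true_targets = 5_000         # actual members of the targeted group (0.5%)
recall = 0.90                # classifier catches 90% of true targets
false_positive_rate = 0.10   # "errors in approximately 10 percent of cases"

true_hits = true_targets * recall
false_hits = (population - true_targets) * false_positive_rate

print(f"correct flags: {true_hits:,.0f}")   # → 4,500
print(f"wrong flags:   {false_hits:,.0f}")  # → 99,500
print(f"share of flagged people who are wrongly flagged: "
      f"{false_hits / (true_hits + false_hits):.0%}")  # → 96%
```

When the targeted group is a small fraction of the scored population, a modest per-case error rate means the wrongly flagged can vastly outnumber the correctly flagged - and a twenty-second “rubber stamp” review will not catch the difference.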
So too for the deployment of AI to the home front, where parts of the US military could parse information on US citizens. This has direct consequences for the fight between Hegseth and Anthropic, which reportedly turned in large part on the circumstances under which Anthropic’s LLM could be used for “lawful surveillance of Americans.” So what kinds of surveillance are, in fact, lawful?
It is widely understood that the NSA, for example, is forbidden by Executive Order 12333 from deliberately conducting surveillance on US citizens. Past scandals (as when the NSA was bugging the Reverend Martin Luther King) led to reforms in the 1970s and after. What is much less well known is that there are no strong legal controls preventing the US military from purchasing ‘open source’ information that has been gathered by commercial providers.
There is an entire for-profit equivalent of the surveillance state, which gathers data that it sells on to other businesses for targeted advertising and the like. And it doesn’t just sell to other businesses. Government too - including some of the military parts of government - is reportedly an enthusiastic customer.
This already presents dilemmas - government can potentially use this data to develop sophisticated profiles of US citizens with existing technologies. But LLMs can potentially greatly increase the abilities of bureaucracies to weave together different sources of data to provide a much more coherent picture of the individual and what they are doing. As Marion Fourcade and Kieran Healy describe it:
In the early days of the internet, being online brought certain freedoms. Not only was online anonymity or pseudonymity common, it was celebrated as a kind of liberation. Users embraced the opportunity to experiment with different versions of themselves. This multiplication of identities was a feature, not a bug. It also reflected the technical architecture of a less integrated internet, which gave participants what we might call ‘interstitial liberty’. This is the liberty granted us by the gaps between systems that will not or cannot efficiently talk to one another. It is a kind of negative freedom. If your gaming profile cannot easily be linked to your professional email or your forum discussions, you enjoy a form of privacy that depends less on explicit legal protections and more on the technical limitations of systems that are connected in principle but not integrated in practice. … Tools that recognise patterns, predict behaviours and detect anomalies can now work across previously separate domains.
These tools are still very imperfect, but that creates its own problems. Slop and error can be an integral part of the system. The dystopias we ought to fear will be less like 1984, with its all-seeing Big Brother, and more like Terry Gilliam’s Brazil, in which a bug caught in a teleprinter results in the wrong person being targeted and tormented.
This, it seems to me, captures the logic of the fight between Anthropic and the Department of Defense better than a lot of the commentary that I am reading. We should worry less about autonomous robots, and more about pseudo-autonomous systems embedded in bureaucracies, enabling those bureaucracies to do things that they previously could not do, but with a lot of slop.
I have taught a fair number of officers over my two decades in the Washington DC higher education nexus. I am very confident in their integrity and willingness to push back against the systematization of war crimes. I am, to put it more politely than I want to, less confident in the ethics and integrity of their current civilian leaders. You don’t need to agree with Dario Amodei on whether we are about to see the rapid deployment of “countries of geniuses in a datacenter” to worry about what an untrammeled Hegsethian wannabe-Department-of-War might try to do with this technology, or to believe that Anthropic did the right thing when it refused to cooperate (while wishing it had done more), or to hope that Amodei is right in speculating that this might spur some debate about where we have gotten to with these technologies, and where we might be heading.


