37 Comments
The Demon, Kia:

I find the current AI/AGI discussions make a lot more sense if I view them as complaints about having to hire servants. Complaining about 'the help', a favored pastime of the servant-hiring classes going back in perpetuity. Only now the servant-hiring class is lured by fantasies of never having to be bogged down in other people, robotics will replace all those annoying servant-people with their needs & their wants & their foibles & limitations. Perfected machine-servants will fix so many problems, starting with all that money spent on administering non-disclosure agreements. No more worrying about servants spying & gossiping & knowing. The Shakespearean bonus that is firing all those lawyers needed to administer all those NDAs. Above & beyond the more obvious advantages fantasy pseudo-sentient machinery would have over actual people.

The way that AI talk gets venture-capital wallets open has been much discussed & is kinda obvious, but the underlying emotion-logic of servant-keeping has not been & is not.

Inside Outrance:

I think it's incredibly telling how involved some people who could reasonably be called 'private equity vultures' are with the whole DOGE project. To me, this demonstrates that even if some of the people supporting it are motivated by a belief that AGI is imminent, there are others who view it purely as an extractive enterprise. I suppose those beliefs aren't mutually exclusive though.

https://www.thenation.com/article/society/doges-private-equity-playbook/

WinstonSmithLondonOceania:

Nope, not mutually exclusive. Quite the contrary, they go hand in hand.

Sherri Nichols:

I’ve been hearing the argument that Artificial Intelligence was going to exceed humans Real Soon Now for 40 years, and it hasn’t happened. We don’t have autonomous vehicles ready to operate at scale yet, either, and that was supposed to be any day now.

I have seen nothing to indicate that AGI really is right around the corner, yet lots to indicate that some people benefit from the hype that it is. I wouldn’t care as much if they were only sucking down investment capital, but now they’re destroying my government so they can avoid paying what few taxes they pay.

Timothy Burke:

A precursor argument that I've had with some people about information and power is as follows:

In the past, many people have argued that the digitization of information and knowledge, along with pre-AGI forms of algorithmic sorting, lets decision-makers in government, corporations and civil society make more informed decisions by providing them with more and better information--just as, back in the mid-20th century, some people championed comprehensive intelligence services as a way to provide decision-makers with more information about what other governments and institutions were planning or thinking, and therefore to make better decisions.

I think this is a basic empirical and conceptual misfire. First, it assumes that decision-making in large organizations is based on accurate information in the first place: that such decision-making is meant to be reasonably grounded in accurate information, and that accurate information leads to better decisions with better outcomes for the organization and its beneficiaries and clients. That is just not how decisions actually happen in real life--this is mistaking a simplified model for lived experience. A lot of decisions are made based on the deliberate exclusion of information that would confound or complicate the will or interests of the people making decisions. Many organizations mobilize power to make the world correspond to what they want to believe it is; power does not generally derive from actually predicting what is happening in the world from a position of detached observation. If understanding information would require an organization to abandon a prior ideology or conceptual frame that its ability to act depends upon, then that information will never figure into decisions even if the consequences of ignoring it are unfortunate. The United States military, for example, was fundamentally not capable of understanding what it technically "knew" about Afghanistan during its military engagement there, because to understand what it knew would have meant that the basic modality of its operations was inevitably a losing proposition.

Sure, yes, there are occasions where the central decision-making apparatus of large organizations can process better-quality information and arrive at decisions that are more optimally fit to the organization's mission, interests and clientele, but they are rare. There's a presumption that they should automatically become more common among those that see the world in terms of adaptive competition--that the organizations that make better decisions from better information should have higher fitness and out-compete others. But modern organizations, including corporate and governmental ones, transform environments to suit them, not the other way round. They can be bloated, irrational, etc. for very very long periods of time without paying a price for it, and when the price does get paid, they don't necessarily get replaced by some leaner, faster, smarter competitor.

The AGI fantasy in play in some quarters rests on the same kinds of assumptions--that the problem with government is its procedural inefficiency and its capture by "special interests", and that an AGI could handle all governmental functions with efficiency and relentlessly eliminate all forms of transfer-seeking and capture. That's what some people have been believing about AI since the 1950s--that there could be relentlessly logical, disinterested, rational "brains" who would be perfect Weberian bureaucrats, that by taking people out of the loop entirely, we would achieve a frictionless utilitarian society where the greatest number of people receive the maximum benefit from government.

Again, there might be narrow systems where something like that might be close to the truth--say, a dispatching system for emergency responders or a triaging system for emergency health care--that would quickly allot finite resources to the most urgent needs faster than human beings could. Maybe, though certainly that's not the AIs we're seeing right now.
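
To make the "narrow system" case concrete, here is a minimal sketch of the kind of allocation logic such a dispatcher might use, assuming a toy priority queue (all incident names and urgency scores are hypothetical):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Incident:
    urgency: int                      # lower value = more urgent
    name: str = field(compare=False)  # excluded from priority ordering

# Hypothetical open incidents; the heap always surfaces the most urgent.
queue: list[Incident] = []
for incident in [Incident(3, "minor injury"),
                 Incident(1, "cardiac arrest"),
                 Incident(2, "house fire")]:
    heapq.heappush(queue, incident)

units_available = 2  # the finite resource being allotted
while queue and units_available > 0:
    dispatched = heapq.heappop(queue)  # most urgent need first
    print(f"dispatching unit to: {dispatched.name}")
    units_available -= 1
```

The routing rule itself is trivial once urgency is known; the hard part, as the next paragraph argues, is the authority to re-allot resources at all.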

But an AGI couldn't route resources to need unless it had the authority to completely re-allot all existing distributions in society. A fully egalitarian social order might actually find an AGI useful, since a large centralized state staffed by humans would always create new forms of inequality and maldistribution. But something like the current political economy of the United States? Tell an AGI to maintain it pretty much as-is and it would be as inefficient, contradictory and unjust as anything we have now, or worse. The AGI that Gates and Schmidt etc. are fantasizing about is either one that has authority that they would never, ever allow an existing human government to have or it is just an ideological prosthetic, a fake AGI that they've custom-built to make their own preferences and interests seem efficient and rational. As you say, the AGI that the boosters are thinking of is really just themselves, stamping on the human face forever with a silicon boot. It's based on the theological belief that markets are already efficient non-human mechanisms for distribution and routing, or would be if it weren't for human beings mucking with them.

Geoff Olynyk:

Organizations that react more accurately to real-world information really do do better, though. The “yes man” is a negative stereotype for a reason.

Honestly the best use of an AGI for an organization that really wants to take advantage of it would be as the ultimate impartial strategic advisor.

Programme the robot to not care about its career, give it all the information, and it would have told the US military brass that Afghanistan is no-win. They can ignore it, but at least they would have known.

Timothy Burke:

I am not actually sure that organizations that react more accurately do better--most of the evidence we have for that is selective and anecdotal.

I am sure that you could get an AGI to deliver analyses that would mimic what many experts with access to big bodies of information would say, but that's the point: if an organization doesn't want to know it or is culturally/ideologically ill-fit to understand it, the analysis will be ignored regardless of the source. Most of the worst administrative, military and economic decisions of the last 75 years were not the product of a fog-of-war lack of information; they were the product of organizations that had other reasons or incentives to do what they did that outweighed considerable information to the contrary. The Kennedy and Johnson Administrations and military advisors had plenty of information that told them that what they were doing in Vietnam was disastrously wrong-headed, but they made themselves incapable of processing that information, more or less on purpose.

Geoff Olynyk:

I think we’re saying the same thing? The Kennedy and Johnson administrations preferred to live in a world of make-believe, and ultimately the US lost that war and retreated. The organization was not successful.

If you’re saying that they wouldn’t have done any better with a perfect AGI strategic advisor because they’d ignore the robot just like they ignored their human strategic advisors, sure. I can’t dispute that.

Success will require telling truth to power, and power being willing to listen.

Timothy Burke:

Yes, I think we're saying the same thing--I would just say that this makes the problem "power being willing to listen", which is not a problem that AGI or any system of expertise or information-provision can solve.

neroden:

If you want perfect Weberian bureaucrats, it turns out you hire autistic people. Not fake-AI.

Jack Shanahan:

This is the biggest, most consequential divide in the world of US AI yet it’s not getting nearly enough attention. All the US chips are being placed on AGI and, by extension, DOGE’s “destroy the government to make it better” argument.

That strategy failed in Vietnam and it will fail here. It’s a radical and stupendously bad bet. Once you rip out the roots of the government, you’re left with no hedge strategy. For the reasons you lay out in clear detail.

AGI, whatever it is, will not be capable of putting Humpty-Dumpty together again.

This will not end well.

Whatever you want to say about China’s approach to AI, taking the DOGE route is not part of the plan.

WinstonSmithLondonOceania:

That's the whole idea. Profiteering is the only goal. You see, either way, they profit. If anything even remotely resembling AGI is developed, or if it isn't, they still profit. The rest of us? Swept aside like so much useless dirt.

Lance Khrome:

Rent-seeking by another, tarted-up name...bank it, it's monopoly capitalism - or will be in time.

neroden:

Yeah, only they don't profit. Their profits depend on a stable, functioning US government with the rule of law. When they tear that apart, as they are doing, whoopsie-daisy, their profits go the way of the profits of Russian aristocrats in 1914.

WinstonSmithLondonOceania:

Ahhhh, but you're not counting the capital that they >steal< from tearing the government apart. Especially the rule of law part. But you're right about the Russian aristocrats of 1914. Something along those lines will be the most likely outcome. Either that or the French Revolution: "Let them eat cake!" Chop!

Craig:

If DOGE really is motivated by impending AGI (as opposed to AGI being a post hoc strategy to explain the impetuous decisions of Trump and Musk, as many Republican "strategies" really are) then at best this is akin to ripping out traffic lights in 2016 because full self-driving is "only a year away." At worst it's razing the government to the ground in preparation for a technogod that will never arrive.

It's also not a self-consistent action. If AGI is a year or two away, bringing a new era of abundance and eliminating the need for knowledge workers, what's the point in firing people now? If the primary threat is an autocratic AI from China dominating the world and that drives DOGE's actions, why isn't China similarly jettisoning a lot of their knowledge bureaucracy to achieve said goal faster? The government is being hollowed out by a group of deranged cultists; the only uncertainty is whether the cult is primarily devoted to the principles of the Heritage Foundation or the newer AI accelerationists.

Nate Boyd:

Great analogy. One needs a lot of ketamine to conclude that DOGE is doing any good, irrespective of the AGI timeline.

mike harper:

Seems in our economic system AGI will only be used to monetize solutions to problems as seen by the rich and corporations. It sure as hell won't "trickle down" to the hoi polloi.

WinstonSmithLondonOceania:

Oh I don't know about that. I've been feeling trickled on for the last forty five years already. Now it's becoming a whole stream.

Augusta Fells:

My personal take is that the AGI preppers are MASSIVELY underestimating the value and scarcity of labeled training data. reCAPTCHA, e.g., has been using *real humans* for YEARS to label images, which is a primary reason why image recognition software works so well. There is no equivalent "problem solving data" that can be used to train neural networks (nor feasibly could be, as many problems don't have yes/no answers the way "does this picture have a dog" problems do).
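
To make that concrete, a toy sketch (entirely hypothetical data): supervised training consumes (input, label) pairs, and the label column is exactly what reCAPTCHA-style human work supplies.

```python
# Toy nearest-neighbour classifier: it only works because humans
# supplied the labels. The 2-D feature vectors are hypothetical
# stand-ins for images.
labeled_data = [
    ((0.9, 0.1), "dog"),   # a human answered "does this picture have a dog?"
    ((0.8, 0.2), "dog"),
    ((0.1, 0.9), "cat"),
]

def classify(features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the label of the nearest human-labeled example.
    return min(labeled_data, key=lambda pair: sq_dist(pair[0], features))[1]

print(classify((0.85, 0.15)))  # -> dog

# For open-ended "problem solving" there is no second element to collect:
# ("reform the tax code", ???)  <- no yes/no label a crowd could supply
```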

Rapier:

AI is a financial bubble. $500bn is said to have been invested in it last year. The ROI? A rant here has some info on that. https://www.wheresyoured.at/longcon/

That is not to say that AI will not eventually have the glorious future the touts promise. There has to be a first bubble in history that just keeps on inflating. The word "inflate" is used consciously.

WinstonSmithLondonOceania:

This is an interesting analysis of what's going on, but I think I can simplify it to one word: greed.

If you look at the people promoting this, and promoting it hard, you can see they are the ones who stand to profit the most from it.

They don't need AGI to actually be superior to us. They only need to create the illusion of it. That's all that's needed for their (true) purposes.

With the advent of AGI replacing all human workers, humans themselves become obsolete and expendable. That leaves the world to the surviving oligarchs who will then boldly move forward into the new "paradise" they've created for themselves.

Delusional? Totally. And that's who's running our government now.

Peter Murphy:

All this AGI-prepping makes me think "Lysenkoism for the 21st century, baby!"

Swag Valance:

Nobody wants a Manhattan Project where Fat Man or Little Boy routinely hallucinate.

Philip Koop:

The fanatical universalism of AGI true believers was present in embryo from the earliest days, back when we used to call it "neural networks". Back then, neural nets were just one of a profusion of ideas that were being proposed to solve "AI" problems. (I still have a large collection of papers in three volumes called "The AI Handbook" on my shelf.)

There was a standard bit in one of Geoff Hinton's talks where he said (in paraphrase, it's been a long time since I heard it): "back in the 90's, we were running around telling everyone else that they were wasting their time, because neural nets would solve all of the problems they were working on. Unfortunately, it wasn't working. We had only two problems: not enough data, and not enough compute."

neroden:

Heh. Now that there's enough data and enough compute, we've found the REAL problems with neural nets. They just kind of suck for most applications.

Philip Koop:

I perceive a certain conflation between three arguments, each progressively more toxic and also progressively more stupid:

1. AGI is definitely on the horizon. Our current program of development is certain to reach it. Therefore we ought to think about the consequences and prepare for them.

This "moderate perspective" is perfectly reasonable if you believe the premise; there are no *anticipatory* adjustment pains because we don't have blow anything up before time. The problem is that the premise is wrong; we have no idea how to create AGI. So we ought to think instead about the consequences of the technology we are actually building, not the one we fantasize about.

The final sentence above is the substance of your argument as I understand it.

2. We don't have to worry about our problems, such as global warming, because soon we will have AGI and it will solve everything so quickly that whatever effort we expend now will be wasted.

This one is wrong even on the premise: what if you are mistaken about exactly when we achieve AGI? If you're off by a year or two or three, maybe we can't get there from here without fixing our current problems.

3. Soon we will have AGI so we can just blow up everything and it won't matter because we'll be able to replace our existing institutions, arrangements, and workers with AGI ha ha ha ha.

This is the sort of delusional, 'round-the-twist lunacy for which the phrase "not even wrong" was coined. How do you know the AGI wouldn't say "hmm, yes, I see the problem, and the solution is to execute all the idiots who keep blowing things up"?

Rapier:

AI is Hyperloop, without all the obviously impossible physical obstacles like cubic miles of near vacuum in very-low-pressure tunnels. Then think about a leak. ROTFL, as long as you don't think about the blood. Not to mention: how much energy does it take to pump each cubic mile of tunnel down to near vacuum?
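
For scale, a rough back-of-envelope lower bound on that energy, assuming only the ideal work of pushing the air out against the surrounding atmosphere (real pumps are far less efficient):

```python
P_ATM = 101_325          # Pa, atmospheric pressure
CUBIC_MILE = 4.168e9     # m^3, volume of one cubic mile

# Ideal minimum work to evacuate a rigid volume: ~ P_atm * V
energy_j = P_ATM * CUBIC_MILE
print(f"{energy_j:.2e} J, about {energy_j / 3.6e12:.0f} GWh per cubic mile")
# -> roughly 4.2e14 J, on the order of 100 GWh, before pump losses
```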

AI doesn't have those problems, though the gigawatts do cost. No, its problem is that it just doesn't work. It just spits out errors, with total confidence. Nobody is going to want to pay for that.

SJ:

Please don’t conflate this very specific power play, which involves totalizing capture of all systems by a small group, with general ideas about AGI, which traditionally imagine systems still designed to distill and operate in the public interest.
