24 Comments
mike harper:

In my weirdness I noticed the numbered square where each row and column adds to 34, Albrecht's signature, and his flying mouse carrying the "Melencolia" banner. I noticed that hammers, compasses, and wood planes haven't changed much since 1514. I made a wood-bodied plane like the one shown.

Sean Matthews:

I'm more sceptical even than you - I have not, so far, seen any convincing evidence that LLMs will actually have any sort of material commercial impact (there is a notable lack of prototypes that have actually impressed disinterested observers so far). I am, though, beginning to worry that they will result in the enshittification of some important things like software. [They will surely contribute to the enshittification of social and other media, but that is already commonly recognised.]

Cheez Whiz:

Great summary of the "and then a miracle happens" plan for AGI. The unspoken assumption that the datacenter full of geniuses will have all the attributes their creator wants and none of the ones he doesn't is what shines through for me, given there's no coherent description of how to get there. Sounds like the basis for a story of some sort.

Henry Farrell:

I think there is a story but it isn't one that I think is true. If you buy into his understanding of intelligence, his outcomes follow reasonably well from his assumptions - it's just that I don't think it is a very compelling understanding.

Cheez Whiz:

This is all far above my pay grade, but the popular literature is not very forthcoming on their understanding of "intelligence". There's a lot about what it can do, but not what it is or how it is achieved. My impression is that it is supposed to spontaneously appear once a sufficiently complex "neural network" is fed sufficient data. Why that should happen is left as an exercise.

donny rumsfeld:

Great essay.

What, may I ask, is Mercier's explanation for errors being caught post-deployment rather than during editing?

Brad DeLong:

Ditto! What is it?

Henry Farrell:

So I may very likely be mangling it (reporting half remembered conversation), but he has or knows of research showing that people are more likely to see errors that they themselves have made when those errors are presented as someone else's. Myside bias for the win. The extension to 'I can see errors when the work is out in the world and no longer feels as though it is precisely mine' is trivial.

JavaidShackman:

There is too much money, and the reputation of too many billionaires, pundits, media figures, and "thought leaders", at stake for AGI not to basically be "defined" into existence. What I see happening in the coming years is that the models will continue getting "better" as the AI companies keep throwing more things into the "stone soup". AI salespeople are going to convince everyone that "cognitive jobs" can already be automated - "Just look at the benchmarks!!!". Universities will be forced to shutter many departments; and the remaining "knowledge jobs" - unless you are already at a top institution or a business owner (not sure what business one would be in...) - are going to be training AIs (there are already lots of ads on job boards for "freelance" writers and scientists to train AI systems). So, most "knowledge work" will be AI-generated, and the remaining displaced high-level knowledge workers are going to be training these AI systems ... that are apparently already at a "PhD level of reasoning"... But the economy will keep growing, since the "line will keep going up" while normal people are living increasingly enshittified lives: the economy is going to do great because we will keep producing "stuff!" ... as climate change worsens ... The end.

Josh Brake:

Henry, this is the best thing I've read so far on the Amodei manifesto. Thanks for the incisive commentary!

I also appreciate the many links to other great pieces and in particular the link to the Ramani and Wang piece on bottlenecks. My first reaction to Amodei's essay was a sense that he underestimates the last mile problem of getting the intelligence deployed into the real world. Having interfaced with some experimental neuroscience during my time at Caltech, I sense that the main bottleneck is firmly on the hardware/experiment side and not so much on the computation/simulation end.

I also very much appreciate the connections you draw between technology and our dreams of magic.

Ary Shalizi:

Great column as usual, but I believe it’s “Amodei,” not “Amadei.”

Henry Farrell:

Yes! I spotted that literally right after hitting send (the spelling is fixed except for the original email).

Mike:

Maybe, but I don't see how one can be confident that transformers and things built on top of them (like AlphaProof) can't reason and won't be able to reason (and/or that reasoning in the strict sense being used is needed for the kinds of capabilities Amodei and others think we should prepare for). I mean, what capabilities would someone making this argument 5 years ago have predicted for today?

Henry Farrell:

You can - as noted in the piece - take the other side of that bet than the one that I am taking ...

John imperio:

I was reading that the London police force didn't form until the early 19th century. So before that time, for example during the 1500s, people in England turned to magicians for help when they were victims of crime.

Henry Farrell:

I have read what is possibly the same article or review of the same book but can't remember where ...

John imperio:

Brian Klaas had a great article about medieval magicians on his Substack. Really interesting 🧐

Joe Jordan:

The best source I know of for such information is Keith Thomas' "Religion and the Decline of Magic." Of all the endlessly fascinating details in this long book, the one that has stuck with me the most is that people generally believed that black magic only worked on people who deserved it, so bringing a suit against someone for casting a spell on you was also an implicit admission that you had wronged them.

John imperio:

Foreign Affairs put "Underground Empire" at the top of the list for best books of 2024. Congratulations 🎈

Henry Farrell:

I’m delighted with it altogether!

Kaleberg:

A slightly older take from the New Yorker: http://kaleberg.com/public/Weather-Forecasting.gif

Henry Farrell:

I usually see AI art as utilitarian rather than interesting, but I quite liked the illustration here - https://www.programmablemutter.com/p/cybernetics-is-the-science-of-the - prompt was something like "a complex machine built in the 1970s to foretell the future"

Kaleberg:

Ooooh! I want one of those.
