In my weirdness I noticed the numbered square where each row and column adds to 34, Albrecht's signature, and his flying mouse carrying the "Melencolia" banner. I noticed that hammers, compasses, and wood planes haven't changed much since 1514. I made a wood-bodied plane like the one shown.
Wolfram on Dürer's Magic Square: https://mathworld.wolfram.com/DuerersMagicSquare.html
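For the curious, here is a minimal Python sketch verifying the property the comment above mentions. The square is the one from Dürer's Melencolia I; the checks on the two diagonals and the "1514" date in the bottom row go beyond what the comment says, but they're part of the same engraving:

```python
# Dürer's magic square from Melencolia I (1514).
# Checks that every row and column sums to 34, as noted above;
# the two diagonals happen to as well.
square = [
    [16,  3,  2, 13],
    [ 5, 10, 11,  8],
    [ 9,  6,  7, 12],
    [ 4, 15, 14,  1],  # the middle cells of this row spell the year: 15, 14
]

assert all(sum(row) == 34 for row in square)              # rows
assert all(sum(col) == 34 for col in zip(*square))        # columns
assert sum(square[i][i] for i in range(4)) == 34          # main diagonal
assert sum(square[i][3 - i] for i in range(4)) == 34      # anti-diagonal
print("All rows, columns, and diagonals sum to 34.")
```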
I'm more sceptical even than you - I have not, so far, seen any convincing evidence that LLMs will actually have any sort of material commercial impact (there is a notable lack of actual prototypes that have actually impressed disinterested observers so far). I am, though, beginning to worry that they will result in the enshittification of some important things like software. [They will surely contribute to the enshittification of social and other media, but that is already commonly recognised.]
Great summary of the "and then a miracle happens" plan for AGI. The unspoken assumption that the datacenter full of geniuses will have all the attributes their creator wants and none of the ones he doesn't is what shines through for me, given there's no coherent description of how to get there. Sounds like the basis for a story of some sort.
I think there is a story but it isn't one that I think is true. If you buy into his understanding of intelligence, his outcomes follow reasonably well from his assumptions - it's just that I don't think it is a very compelling understanding.
This is all far above my pay grade, but the popular literature is not very forthcoming on their understanding of "intelligence". There's a lot of what it can do, but not what it is or how it is achieved. My impression is it is supposed to spontaneously appear once a sufficiently complex "neural network" is fed sufficient data. Why that should happen is left as an exercise.
Great essay.
What, may I ask, is Mercier's explanation for errors being noticed post-deployment rather than during editing?
Ditto! What is it?
So I may well be mangling it (I'm reporting a half-remembered conversation), but he has, or knows of, research showing that people are more likely to see errors that they themselves have made when those errors are presented as someone else's. Myside bias for the win. The extension to 'I can see errors once the work is out in the world and no longer feels as though it is precisely mine' is trivial.
There is too much money, and the reputation of too many billionaires, pundits, media figures, and "thought leaders", at stake for AGI not to basically be "defined" into existence. What I see happening in the coming years is that the models will continue getting "better" as the AI companies keep throwing more things into the "stone soup". AI salespeople are going to convince everyone that "cognitive jobs" can already be automated: "Just look at the benchmarks!!!" Universities will be forced to shutter many departments; and the remaining "knowledge jobs", unless you are already at a top institution or a business owner (not sure what business one would be in...), are going to be training AIs (there are already lots of ads on job boards for "freelance" writers and scientists to train AI systems). So most "knowledge work" will be AI generated; the remaining displaced high-level knowledge work is going to be training these AI systems ... that are apparently already at a "PhD level of reasoning" ... But the economy will keep growing, since the "line will keep going up", while normal people live increasingly enshittified lives: the economy is going to do great because we will keep producing "stuff!" ... as climate change worsens ... The end.
Henry, this is the best thing I've read so far on the Amodei manifesto. Thanks for the incisive commentary!
I also appreciate the many links to other great pieces and in particular the link to the Ramani and Wang piece on bottlenecks. My first reaction to Amodei's essay was a sense that he underestimates the last mile problem of getting the intelligence deployed into the real world. Having interfaced with some experimental neuroscience during my time at Caltech, I sense that the main bottleneck is firmly on the hardware/experiment side and not so much on the computation/simulation end.
I also very much appreciate the connections you draw between technology and our dreams of magic.
Great column as usual, but I believe it’s “Amodei,” not “Amadei.”
Yes! I spotted that literally right after hitting send (the spelling is fixed except for the original email).
Maybe, but I don't see how one can be confident that transformers and things built on top of them (like AlphaProof) can't reason and won't be able to reason (and/or that reasoning in the strict sense being used is needed for the kinds of capabilities Amodei and others think we should prepare for). I mean, what capabilities would someone making this argument 5 years ago have predicted for today?
You can - as noted in the piece - take the other side of that bet than the one that I am taking ...
I was reading that the London police department didn’t form until the early 19th century. So before that time, for example during the 1500s, people in England turned to magicians for help when they were victims of crime.
I have read what is possibly the same article or review of the same book but can't remember where ...
Brian Klaas had a great article about medieval magicians on his Substack. Really interesting 🧐
The best source I know of for such information is Keith Thomas' "Religion and the Decline of Magic." Of all the endlessly fascinating details in this long book, the one that has stuck with me the most is that people generally believed that black magic only worked on people who deserved it, so bringing a suit against someone for casting a spell on you was also an implicit admission that you had wronged them.
Foreign Affairs put “Underground Empire” at the top of its list of the best books of 2024. Congratulations 🎈
I’m delighted with it altogether!
A slightly older take from the New Yorker: http://kaleberg.com/public/Weather-Forecasting.gif
I usually see AI art as utilitarian rather than interesting, but I quite liked the illustration here - https://www.programmablemutter.com/p/cybernetics-is-the-science-of-the - prompt was something like "a complex machine built in the 1970s to foretell the future"
Ooooh! I want one of those.