"...or that excitable claim that humanity is doomed to be superseded because of the performance of AI on this or that test". We're doomed to be superseded because that's what the purveyors and profiteers are pushing for.
Human judgement can't really be measured, much less optimized. Finding those areas outside the "sweet spot" of AI optimization is where the remaining (hopefully gainful) employment will be found.
Recht points to the limits of statistical rationality.
The real issue is that once governance relies on model-driven logic, the system drifts, because models cannot carry human values or political judgment.
Great read, Henry. I really agree with the point that neither the rationalists nor the denialists are helpful here. I've always thought of AI as a massive data warehouse. Garbage in, garbage out. Also, agreed on the measurement points: The tasks that can be clearly measured are the same tasks computers will beat humans at. So the framework evaluating AI's limits is the same one hiding them. That's a loop that's hard to escape once you see it.
Great review! But what happens to what falls outside the sweet spot and doesn't disappear? Does the model at least register the existence of what it can't capture? It’s a strange pattern nowadays: everyone cites cybernetics, but no one talks about displacement. The framework describes the boundary but says nothing about entropy (which is ironic, given its own origins).
Great post. Thought-provoking! One thing I'd add is the often overlooked 'legitimacy' of choices/decisions. The (in some debatable sense) optimal solution isn't always legitimate, and the legitimate solution isn't necessarily optimal in some clearly defined sense.
The IBM 360 is a great choice of image. IBM’s 1979 internal memo, later leaked, stated that a computer “can never be held accountable, and therefore must never make a management decision.” That rule was not abolished. It was engineered around. The org structures we built to diffuse accountability are a very large part of why the limits question keeps coming back.
A dialogue with AI working through the psychological mechanisms underneath institutional collapse — ironic identification, the Powell project as completed, the ruling class control problem. latestwriting.substack.com
What stands out is how often intelligence gets mistaken for coverage. If you can map it, measure it, reduce it into something clean, it suddenly looks solvable. The messier parts don’t disappear, they just get pushed outside the frame where the tools can’t reach.
The real edge has always lived in that unmeasured space. The part where outcomes aren’t neatly scored and tradeoffs don’t resolve into a single answer. That’s where judgment still has to carry the weight, and where the models start to feel more like guides than authorities.
Great read. As a union official, I was always troubled by the increasing reliance on data to drive HR decisions - which are frequently obviously poor decisions. This has gone to its extreme now that ‘AI’ is taking over recruitment processes. Doomed to fail. “Is the candidate a fit for 80% of the job?!!!” (I’m sitting in a train next to an HR person on the phone.) Never mind that - which 80%: the important bit or the dross? What did your judgment tell you? Will they fit in or simply piss everyone else off?
As you say, “the problems begin when technocrats begin to treat human beings and the complex societies they create as though they were simplified ‘standard cells’ that can readily be re-arranged in more optimal patterns.”
One might say that the following claim is in the spirit of Edmund Husserl: “mathematical rationality is limited in what kinds of problems it is best placed to solve but has sweet spots that have yielded remarkable technological advances.”
The danger is not that AI can optimize everything. It’s that institutions may increasingly restructure reality so more domains become optimizable.
Second book suggestion that turned into an order. You're a valuable resource... :-)
This reminds me of Rittel and Webber's "Dilemmas in a General Theory of Planning"(https://urbanpolicy.net/wp-content/uploads/2015/06/Rittel-Webber_1973_DilemmasInAGeneralTheoryOfPlanning.pdf) and also to VO Key's "The Lack of a Budgetary Theory" (https://www.jstor.org/stable/1948194). This is not at all to diminish Recht's contribution, but we have known for a long time that many decisions are not amenable to optimization.
You had me at Dan Davies.
Thank you for this post. I will buy and read the book. And maybe share with my Rationalist friends. Maybe.
AI isn’t simply limited: if it’s targeting arbitrary output (metaphors, not tumors, which are specific), then it’s false, illusory technology.
Turing was dead wrong, LLMs demonstrate how wrong his theory is.
https://substack.com/@eventperception/p-182707220
https://regulatingai.substack.com/p/ai-governance-now-your-weekly-compass-970?r=3pjruc&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true