5 Comments
User's avatar
Lee A. Arnold's avatar

I try to provide a substantive, indeed pictorial articulation of a common analytical framework, which accomplishes the extending of boundaries, and which political scientists may find suitable. Math is optional.

In your table, I think that {markets & large models} and {bureaucracy & democracy} are TWO DIFFERENT patterns of organization. Visual patterns. Maybe the only two main visual patterns. This comes out of an empirical approach, by diagramming many different societies and ecosystems in flow-chart animations, and trying to make them do everything they do. I call them "feed-forward webs" and "groups".

I use these two patterns to present the market vs. the pro-social group, in a sequential video playlist called "New Addition to Economics." This is the first row, at the top of the following YouTube page:

https://www.youtube.com/@ecolanguage

Among other things, this presents a full-throated explanation and defense of the market system, better than Hayek if I say so myself, and then shows that it is NOT ENOUGH. This is meant to persuade people that BOTH market and pro-social organization are co-equals. Each is needed for different kinds of goods and services, including governance. I hope that this helps to inform a new, inclusive style of politics.

The playlist sequentially compares and contrasts market and group under these chapter headings: Intentionality, Decisions, Motivations, Possible Efficiencies, Innovations, Knowledge, Failures and Freedoms.

The two different patterns also sometimes join together, and the next chapter in the playlist, almost finished, will be an example that joins both patterns together to form a universal healthcare system with a single payer, that preserves variety, efficiency and innovation on the supply-side.

But outside of this playlist, these two patterns are to be applied far more broadly. Feed-forward webs include markets, large language models, local wildlife food webs, and the global climate system.

And "group" is any center or rule system around which "members" are arrayed. The rules partly govern the transactions or transformations between them. This can be nested in larger such groups, forming hierarchies. (An early systems-theoretical analogy is the "directed attractor," but the nearest conception I've ever found, which includes nesting, is Arthur Koestler's "holon".) Examples: a social institution, a geographic center, a piece of technology, a business firm, an individual person with a field of attention. It may seem odd to equate a piece of labor-saving technology with a social institution, but they both reduce costs ("institutions reduce transaction costs," say the economists), and they both can become obsolete, and so on.

AI is another entity that combines both patterns, using things like large language models together with changing rules, directions, or prompts. You can do almost anything. Indeed, the question we may all be forced to consider shortly is NOT how to increase individual liberty continually so that we all somehow compile atomistically and unpredictably into the best possible future (i.e., the unspoken assumption of liberal modernism). Because now, you can do almost anything. Instead the question will become, what is our image of the future? What world do we want to see?

Expand full comment
Alex Tolley's avatar

I have no training in the humanities, so my views are worth little. I found this to be an excellent article for the annual reviews and worth keeping the PDF in my AI library.

It encapsulates a number of my hopes and concerns for AI in governance.

On the one hand, there is the line from "Back to the Future II": " The justice system works swiftly in the future now that they've abolished all lawyers."

I have been dismayed by the US legal system, where the law can be interpreted very differently by different judges. More so these days, when SCOTUS seems to reverse constitutional law judgments by the lower courts. The political bias seems obvious.

Yet if we use an AI to "bake in" an interpretation of the constitution and the law more generally, we get an extreme version of the constitutional "originalists" that remains unchanging despite changing social views. OTOH, we know that current AIs built on LLM architecture can be gamed by hidden prompts in documents to change outcomes. This seems to allow for manipulation of the AI by people with their hands on the platform.

Experience as a volunteer at an agency indicates that management can be very resistant to change. A rule book that could easily have been coded as symbolic decision-making was outright rejected when suggested as a way to increase client handling. Was this management protecting itself from change?

There seems to be an acceptance that AIs can be biased, yet so are judges and juries. There should be a balance. Would AIs allow for Jury Nullification in the case of socially unjust legal procedures?

Perhaps there is a role for the law to be interpreted by a balance between AIs (however implemented) and humans?

As someone who tends towards technocratic approaches, especially when science should be an important input, I do think that computers and AI (symbolic rules or LLM neural architectures) should have a place. What I do not want is a world run by AIs, as in Iain Banks' "Culture novels" [https://en.wikipedia.org/wiki/The_Culture]. Humans should stay in control, but again, how do we stop the pathologies that we see in politics, authoritarian rule overriding the popular demand, whether monarchic, fascist, or communis? But this reflects my bias for democracy and my ideals for economic distribution.

Expand full comment
Mickie Morganfield's avatar

Slop. Excrement. Boatloads of raw data. Speed. Bias, because AI is never really neutral. There will be winners and losers. I'd like to have a contest to give AI a new name. Artificial Something Else.

Expand full comment
Alex Tolley's avatar

But humans are biased too. Exhibit 1 - the consistent differences between the conservative and liberal judges on SCOTUS when interpreting the constitutionality of legal cases.

Expand full comment
Mickie Morganfield's avatar

Total agreement. AI is biased because of the human element involved in data selection and application.

Expand full comment