25 Comments
La Gata Geopolítica

🤯🤯🤯🤯

Lost all cognitive function by Week 3. By Week 9 I was reorganizing my bookshelf and my worldview. Not sure if I should thank you or file a complaint.

Jason Blakeburn

In addition to the Moynihan piece, I recommend a new report from Kevin de Liban and his non-profit, Techtonic Justice, “Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive,” which focuses on the negative impacts of AI on low-income people.

Robert Hartinger

A large majority of this syllabus is already out of date or consists of propaganda pieces deliberately designed to mislead the public. Use common sense to understand what is happening. Over 100,000 jobs were eliminated in Silicon Valley in the first six months of 2025. These were not replaced with different, better jobs. AI is viewed as a job-replacement device by every CEO implementing it. It will increase unemployment and lower wages as it increases competition for jobs. Lower wages reduce economic growth as consumption falls. See GDP growth in the 1980s vs. the 2000s. Historical patterns have no real relevance to the future of AI political economy, since a technology like this has never existed before. Stop comparing this to electricity and the steam engine. That comparison is utterly meaningless.

Tom Austin

Thanks for sharing this thoughtful syllabus. This would be a great course!

I've been building AI systems since GPT-2 and working in tech for years.

The core challenge isn't just understanding AI's societal impacts—it's preparing students to think about system stability and resilience during rapid technological transitions.

We should be teaching students to recognize when technological change threatens the stability of democratic systems themselves.

Three key thinking frameworks students need:

Regional Tipping Points: Small geographic concentrations of AI-driven unemployment could create political coalitions powerful enough to destabilize national democratic governance. Students need to map technological disruption against electoral geography and understand how concentrated economic pain translates into political power.

Training vs. Inference Economics: The economics of developing AI (centralized, expensive) versus deploying it (distributed, cheap) create entirely different strategic dynamics. This distinction changes everything about accessibility, competition, and control, but most discussions conflate the two (a toy cost sketch follows these three frameworks).

The Consumption Paradox: Anthropic's CEO talks about economies with high wealth but high unemployment, but this is economically impossible in consumption-based systems. If AI displaces workers faster than it creates new opportunities, who buys the goods this super-productive economy produces? Students need to understand that prosperity must be broad-based to be sustainable (a second sketch below puts rough numbers on this).
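
To make the training-vs-inference distinction concrete, here is a back-of-envelope sketch; the dollar figures are invented for illustration, not real numbers from any lab:

```python
# Toy cost model for the training-vs-inference distinction above.
# All figures are invented placeholders, not real company numbers.

TRAINING_COST = 500_000_000   # one-time, centralized fixed cost (assumed)
COST_PER_QUERY = 0.002        # marginal inference cost per request (assumed)

def amortized_cost_per_query(total_queries: int) -> float:
    """Average cost per query once training is amortized over usage."""
    return TRAINING_COST / total_queries + COST_PER_QUERY

for n in (10**6, 10**9, 10**12):
    print(f"{n:>16,} queries -> ${amortized_cost_per_query(n):,.4f} per query")
```

The fixed cost dominates until query volume is enormous, which is why training stays concentrated among a few deep-pocketed actors even while deployment is cheap for everyone.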
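
And a minimal sketch of the consumption paradox under one stark assumption, that consumption is funded by wages alone:

```python
# Toy consumption-paradox check. Assumption: demand is funded by wages
# alone, and displaced workers' purchasing power is not replaced.

OUTPUT = 1.0   # the super-productive economy holds output constant (assumed)

for displaced in (0.0, 0.2, 0.4):
    wage_demand = 1 - displaced   # wage-funded consumption shrinks with jobs
    print(f"displaced={displaced:.0%}: wage-funded demand covers "
          f"{wage_demand / OUTPUT:.0%} of output")
```

Unless some transfer mechanism replaces the lost purchasing power, the demand shortfall grows one-for-one with displacement.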

We also need frameworks for novel policy responses—automation credits that can be traded like carbon credits, augmented intelligence transparency requirements, and new roles for labor unions in negotiating human-AI collaboration terms rather than just wages. The foundation model companies have ethical responsibilities here that go beyond traditional corporate governance.

The goal should be educating students who can help democratic societies navigate the AI transition without losing their democratic character. That requires understanding both the technical realities and the political vulnerabilities they create.

My background is building with these systems, but my concern is ensuring our educational approaches prepare students for the actual challenges ahead, not just the ones we imagine.

Feedback loops also matter greatly. Economic disruption creates political responses that reshape AI development trajectories, which then create new economic effects, and so on. Understanding these dynamic interactions is crucial for predicting how AI transitions will actually unfold rather than assuming linear progression.
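
A minimal dynamical sketch of that loop, with invented coefficients, shows why the trajectory damps and bends rather than running in a straight line:

```python
# Toy feedback loop: disruption -> political response -> slower deployment
# -> less disruption. All coefficients are invented for illustration.

disruption, regulation = 0.1, 0.0
for year in range(2025, 2031):
    regulation += 0.5 * disruption            # political response follows pain
    deployment = max(0.0, 1.0 - regulation)   # regulation slows the rollout
    disruption = 0.3 * deployment             # disruption tracks deployment
    print(f"{year}: disruption={disruption:.2f}, regulation={regulation:.2f}")
```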

A foundational understanding of what these systems can and can't do is also essential. They represent a form of non-human intelligence that lacks the developmental grounding and embodied experience that shapes human cognition, leading to both unexpected capabilities and unpredictable failure modes. These systems can demonstrate sophisticated reasoning in some domains while making bizarre errors that no human would make, creating emergent behaviors that even their creators cannot anticipate or fully control. Without this technical grounding, political economy analysis built on intuitive but incorrect models of AI behavior will lead to ineffective policies.

There is a great opportunity for simulations and the use of similar tools and models (as in climate modeling) in many of these spaces to deepen systems thinking and system-level understanding.
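
As a minimal example of what such a simulation could look like, tying back to the regional tipping points framework above (the regions, shock sizes, and 15% threshold are all invented assumptions):

```python
# Toy regional tipping-point simulation. All parameters are assumptions.

import random

random.seed(42)
regions = {f"region_{i}": 0.04 for i in range(10)}   # baseline unemployment
TIPPING_POINT = 0.15   # assumed level where local politics destabilizes

for name in random.sample(sorted(regions), 3):       # concentrated AI shock
    regions[name] += random.uniform(0.12, 0.18)

national = sum(regions.values()) / len(regions)
tipped = [r for r, u in regions.items() if u > TIPPING_POINT]
print(f"national average: {national:.1%}; past tipping point: {tipped}")
```

The national average stays below 10% while all three shocked regions cross the 15% threshold, exactly the concentrated-pain pattern that national statistics can hide.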

I write about these ideas regularly on Substack (see AI 8-ball section) and am always interested in collaborating with others exploring how education can better prepare students for navigating these transitions effectively.

David Vossbrink

Under “business model,” you should include any of Ed Zitron’s posts about the unsustainability of the massive VC investments in AI in light of the unlikelihood of sufficient returns on those investments. https://www.wheresyoured.at/

CyrLft

Glad to have this. Especially after last week, I was glad to read your ARPS review essay ( https://bit.ly/FarreH-2025 ).

I'm preparing an undergraduate syllabus, "AI and Society," with a sociological tilt. It will lean heavily on economic sociology ("sociology of work") and political sociology, while putting 20% or more of the course share on human ecology (by which I mean hard biophysical realities that social organization engages and converts; call this environmental sociology, broadly conceived, as I think we should).

There is, right now, zero overlap between my syllabus draft and yours, nor much overlap with my list of stuff that I've moved off the draft to make room. Without writing my *entire* syllabus draft into this blog reply (!), I wonder what anyone thinks about the following.

I'm thinking to allocate perhaps 20% or greater to the political economy of welfare state *income replacements*, from unemployment insurance to UBI and/or UBS (universal basic services). For that I'd start with Adam Bonica's tech-taxing UBI dreamworks he blogged about June 29th ( https://bit.ly/BonicA-2025-6_29 ). I'll put that next to a wide-swatting rejection of UBI on welfare policy-historical evidence by Bo Rothstein in Social Europe ( 2017, https://bit.ly/RothB-2017 ). Pivot off that into a deep crawl (like, a week or even a week and a half of class meeting time) on Walter Korpi's excellent, but neglected, 2002 Politics and Society paper "The Great Trough in Unemployment..." ( https://bit.ly/Korpi2002_P-S ). From there get into more contemporary transfer-payment assessments. One useful review that foregrounds the political sociology of UBI, I find in Jeff Manza, Theory and Society 2023 ( https://bit.ly/ManzaJ-2023_TS ). A narrower but still widely searching article I rate highly, and want to fit onto my syllabus, is ecological and political sociologist Max Koch putting Sweden in the spotlight ( 2021 in Social Policy and Society, https://bit.ly/KochMa-2021 ).

I assume we may conclude from mutual exploration in the course that these "universal" transfer-payment ambitions find little evidence of viability in political and economic history. But, what if masses of people *couldn't* work??? Koch (above) thinks that in those welfare states that are already more encompassing and robust (the Nordics), perhaps UBS (services, not cash transfers) could work, even as 71% of Swedes opposed UBI (cash transfers) in a 2020 survey.

The 2024 Bloomsbury book Feeding the Machine ( https://bit.ly/CaMuGr-2024 ) by Cant (sociologist of work), Muldoon (political theorist), and Graham (internet geographer) was going to loom large on my AI and Society syllabus this fall. I may soon drop that book to free up room for the above journey into the sociology of welfare-state income transfers for the unemployed and under-employed. This summer, as I hunt and read, I've swung from not having found as much empirical social science as I would like for this course to piling up publications that could easily fill a year-long course. That's because of new developments in AI and society, newer studies coming out, and some publications my searches missed but that I find in the references of newer papers I read. And then I've widened the scope to study proposed remedies to work loss and earnings loss that originally I had not conceived of folding into this course.

Even if the Cant, Muldoon, and Graham book gets cut from this course, I'd want to follow its main upshot pointing to collective bargaining as, they think, the only pathway for egalitarian countervailing. Though it's a weakness, in my view, that Feeding the Machine comes off, on my reading, naive about *politics* at national-state levels, including labor regulations. This moves me to grab and assign articles from the forum published by Litwin et al. in the October 2024 ILR Review ( https://bit.ly/LitwiEA-2024_10 ), from which I've read three articles and learned a lot; I plan to read the rest, then decide which ones fit in my course.

I also have lined up some articles and book chapters by sociologists and others that lay out labor process theory (from Marxist sociology, via Braverman and more recent empirical refinements) to juxtapose with economists' routine-biased technological change theory. This I'd like to put next to the contending studies of high-skilled job losses in the USA as of late 2023, surveyed by sociologist Dahlin (2024, https://bit.ly/DahliE-2024-6 ). That now needs to be contrasted, I think, with contrary evidence from Denmark in the 2025 working paper by economists Humlum and Vestergaard ( 2025, https://bit.ly/HumVes-2025 ), which I find summarized pretty well in Fortune by Ivanova (May 18, 2025: https://bit.ly/IvanoI-2025-5_18 ). I read the Humlum and Vestergaard study as impressive and rigorous, matching administrative data to surveys and sampling a reasonable set of specific occupations. But Humlum and Vestergaard wave away the Nordic welfare-regime context of their observations, merely describing Denmark's labor regulations as "flexible". I take Humlum and Vestergaard's study as indicating a need to put considerations of AI and work into the political contexts of capitalist varieties.

Between my current draft and my cutting-room floor, I'm weighing your (Henry Farrell) and Marion Fourcade's 2024 Economist essay on AI and rituals ( https://bit.ly/3TELSxs ). That I value as more up to date and, overall, more persuasive than the 2023 Dædalus piece you both wrote (drafted onto your syllabus above). Also, in earlier drafts for my course, I would have assigned Paul Krugman's 2024 review in Foreign Affairs ( https://fam.ag/3OcnmBr ) of your and Newman's book Underground Empire, and then I'd lecture from the book itself. (Spoiler: I liked and learned a lot from Underground Empire! Already I've assigned Krugman's review of it in a couple of sections of Intro to American Political Development.) If I were to teach a politics version of my AI course at some point (I teach sociology and political science), then most likely I would assign *at least* Krugman's review of Underground Empire, if not Underground Empire itself.

Welcome any notes!

Tom Austin

Your syllabus approach is genuinely excellent in several key ways—the focus on welfare state institutional analysis provides crucial grounding that most AI discussions lack, and your comparative perspective on varieties of capitalism (especially the Nordic examples) addresses real policy questions about how different societies might navigate AI transitions. The labor process theory foundation is particularly valuable because it connects AI to established sociological frameworks about technological change and work organization.

However, I think there's an opportunity to move beyond applying existing theories to AI toward developing new frameworks for unprecedented challenges. The purpose of education right now should be building adaptive, resilient institutions and mindsets—not just understanding past patterns but anticipating how AI might break those patterns entirely.

Three concepts that might complement your welfare state focus:

Corporate Responsibility and New Regulatory Models: We need frameworks like an "AIM Index" (similar to LEED certification for buildings or B-Corp standards) that create transparent, auditable metrics for AI company behavior, with tax incentives for compliance and tradeable automation credits that steer AI deployment toward the highest-value uses rather than mere cost-cutting (a toy version of the credit-trading mechanics appears after these three concepts). Regional limits on unemployment rates could prevent concentrated disruption.

Beyond Traditional Welfare Responses: UBI discussions often miss that work provides identity and meaning, not just income. Geographic cost differences make "universal" programs impossible, and we're more likely to see augmented super-workers than mass automation. This creates different inequality patterns requiring new institutional responses, potentially including white-collar unionization and new professional standards.

AI Company Ethical Obligations: Just as app stores regulate what software can do (no vaping apps for kids), AI companies should regulate API usage and monitor applications. It's absurd for them to claim they'll eliminate entry-level jobs while expecting others to handle the consequences.
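
To make the automation-credit idea concrete, here is a toy version of the trading mechanics, borrowed from carbon-market logic; the firms, caps, prices, and valuations are all hypothetical:

```python
# Sketch of tradeable automation credits, by analogy with carbon credits.
# Firms, caps, prices, and valuations are hypothetical.

from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    credits: int            # permitted automated roles under a regional cap
    value_per_role: float   # value the firm gets from each automated role

def trade(buyer: Firm, seller: Firm, qty: int, price: float) -> None:
    """Move credits toward the higher-value use when both sides gain."""
    assert seller.credits >= qty
    assert buyer.value_per_role > price > seller.value_per_role
    seller.credits -= qty
    buyer.credits += qty
    print(f"{seller.name} sells {qty} credits to {buyer.name} at ${price:,.0f}")

hospital = Firm("hospital_diagnostics", credits=10, value_per_role=200_000.0)
call_center = Firm("call_center", credits=50, value_per_role=40_000.0)

# A price between the two valuations leaves both firms better off, so
# credits (and hence automation) flow to the highest-value deployment.
trade(buyer=hospital, seller=call_center, qty=20, price=100_000.0)
```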

These systems have fundamental reliability issues—they're non-deterministic, lack developmental grounding, and can fail unpredictably. This requires utility-like oversight and mandatory human-in-the-loop requirements for critical applications.
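
A minimal sketch of what a mandatory human-in-the-loop gate could look like; the confidence threshold and interfaces here are illustrative assumptions, not any vendor's real pattern:

```python
# Toy human-in-the-loop gate for critical applications. The threshold
# and interfaces are illustrative assumptions.

from typing import Callable

def decide(ai_output: str, ai_confidence: float, critical: bool,
           human_review: Callable[[str], str]) -> str:
    """Route critical or low-confidence decisions to a human reviewer."""
    if critical or ai_confidence < 0.9:   # assumed review threshold
        return human_review(ai_output)    # a human makes the final call
    return ai_output                      # routine and high-confidence only

# Example: a benefits-eligibility denial always gets human sign-off.
print(decide("deny claim", ai_confidence=0.97, critical=True,
             human_review=lambda out: f"human-reviewed: {out}"))
```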

The goal should be proactive institutional design that preserves human agency while harnessing AI capabilities responsibly.

CyrLft

Many thanks for your big-thinking and substantive response. I agree with most of what you wrote – until your pitch for consumeristic standard-setting in the public interest. What little sociology or political science I've read that would relate (just barely) to that, I find unpersuasive. Management schools might take that up as part of "business ethics" research and teaching – and I think that would fit better there (if it does).

Your comments on my reply post and Farrell's original (his syllabus) flag some stuff you think needs to be *taught*, regardless of empirical basis in the social sciences. I have two qualms with some of what you're saying there:

1.) You suggest claims to fuller knowledge of future states than I think we should make. Though many people working on big technical changes may *feel* far-seeing. Remember, for example, 10-15 years ago when all the Web 2.0 stuff was supposed to bring great democratization and knowledge leveling? Political and media sociologists, using information about past developments, generally looked askance at, or rejected, such exuberant predictions from the California Ideology and the new whiz-bang machines.

2.) If empirical social sciences, including sociology, are to uphold their utility in research and teaching, we should stay mindful that, for example, sociology ≠ management studies ≠ entrepreneurship. Etc. If anything, now is a time when I think we should hike up appreciation for humans learning past patterns of societal development. It serves us to learn the insights and explanations yielded by the social sciences. That may support our best shots at predicting constraints, challenges, and opportunities. Vocational education has its place; but right now, what's to be taught for vocational marketability is less predictable than before GenAI's volcanic deployments. What makes for vocational education in 2025 that would serve graduates in 2030? Does anyone know? Meanwhile, broad education in the social sciences is apt to be useful, I think.

Welcome any future discussion. Thanks again for the above!

Tom Austin

I love the points you’re raising.

You're absolutely right about the Web 2.0 parallel.

Web 2.0 promised democratization through Wikipedia, blogs, and social media, but delivered filter bubbles, disinformation, and platform monopolies as well, or instead. Sociologists correctly predicted this by recognizing that "democratizing" technologies (radio, TV) historically concentrate power while appearing to distribute it. Today's AI democratization claims echo the same pattern, validating sociological skepticism over Silicon Valley futurism.

Sociologists possess the unique ability to reveal how seemingly inevitable technological trajectories are actually contested political choices—making visible the power relations and institutional alternatives that technologists often render invisible.

Agree that purely vocational education is dangerous: developing deep thinking, strong conceptual frameworks, intellectual humility, and a firm ethical grounding really matters.

I love deep history and rigorous disciplinary investigation. It's one of the most valuable sources of accurate information and the building blocks for cross-disciplinary thinking. Your point about grounding analysis in historical patterns rather than tech industry predictions is crucial - that's exactly the kind of rigorous thinking we need more of.

I'm going to write more about this in coming weeks/months.

Agree that it's really hard to predict the future and things will often change in ways we don't anticipate.

I like tech, but am far from a pure techno-optimist and don't think we should trust Silicon Valley and a small number of CEOs to self-regulate.

We need smart research, but many tech CEOs sound really ignorant when talking about things like the economic policies that could help if widespread job displacement happens fast. So I hope smarter voices, visions, and educational models emerge to raise the level of thinking here at a societal level.

I don't think AGI will arrive quickly, but significant societal harm can happen even without AGI.

I also think understanding how technical capabilities interact with institutional structures requires both disciplinary depth and cross-disciplinary literacy - not replacing one with the other, but strengthening traditional analysis with technical grounding.

P.S. I did not mean any offense if I caused any — there are many things I don’t know. And I’m an imperfect communicator.

CyrLft

All very interesting. Thank you for these thoughts, Tom, and thanks to Henry Farrell for the original post and allowing comments.

Alex Tolley

I fear that by the time students get to take the course in spring 2026, some of the references will have aged poorly, primarily those dealing with current LLM technology. As the technology rapidly changes, this will require an update of the course, especially the elements that deal with it as a technology. The politics should hopefully be more resilient, but who knows? Five years from now, this course may be dictated by our new A[G]I overlords. ;-)

Steven M Friedman

Are you kidding? This is invaluable.

Ary Shalizi

Suggest “Cobalt Red” by Siddharth Kara under the “material resources” week.

John

Arvind Narayanan recommended the United Nations report A Matter of Choice: People and Possibilities in the Age of AI.

K G Spence

Unless the cohort have already studied appropriately, I’d suggest adding Shannon & Weaver to the early weeks, along with Frankfurt’s ‘Freedom of the Will and the Concept of a Person’ essay (maybe followed up later with ‘On Bullshit’ as appropriate). Charles Taylor’s subsequent essays also. Without a context along these lines (but avoiding potentially infinite regress…), the process & effects of thinking itself for humans rather than machines risks eliding focus. Frankfurt’s formulation of second-order evaluation condenses a lot of material very effectively in my view & helps us to frame what ML can do (fast analysis of complex data such as MRI scan outputs) and what it can’t (summarising and regurgitation can never reproduce the experience and potential effect of engaging with an actual text, another mind). You might also consider challenging with Benjamin’s ‘Work of Art in the Age of Mechanical Reproduction’ - given the garbage ‘AI’ images engulfing us now, whither the ‘Aura’? Perhaps to be found in South Park…

JLCR

This is superb, million thanks for sharing. If I may, I would add "AI and the American Smile" by Jenka Gurfinkel https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf

Joanna

I have a course I've been teaching since 2020 called "governance and politics of AI," but it's for a governance school so more focussed on ensuring the students can actually go out and write legislation and audit. And it's not so focussed on generative AI. Maybe we should put a course in on this hype too...whether or not this is total overspend for the actual output, no question it alters geopolitics.

Mary Clare McMahon

For the capital week, it might be interesting to add something about the Microsoft / OpenAI investment that explains (a) OpenAI's corporate structure; (b) its relationship to Microsoft; and (c) how cloud compute tokens are a form of investment in the context of the AI labs. Parmy Olson's Supremacy and Karen Hao's Empire of AI both cover the deal in detail. For shorter treatment, Open Markets Institute also has a report called "Stopping Big Tech from Becoming Big AI;" section II covers the overlap between the tech giants pretty well.

Celine Nguyen

This is a great list—a lot of new papers/links for me to read, and a lot of my favorites here. I love C. Thi Nguyen’s “The Limits of Data,” the Nature paper on model collapse (which more humanists should read imo!!!), and Kate Crawford’s Atlas of AI, of course…also recently read and loved the one on AI as a normal technology

Kevin McGahan

Thank you very much for this rich syllabus. At the National University of Singapore (NUS), my colleague, Simon Chesterman, has been writing about AI and governance (see https://simonchesterman.com/2021/08/06/we-the-robots/) - which might be useful. And I do not have a particular citation, but in working with several NGOs in Asia, AI is quickly widening the digital divide and generating greater inequalities in some cases (though I like the suggestion below to read Kara's Cobalt Red).
