One interesting point of contrast is how things have actually played out versus how the early internet libertarian-utopians imagined they would ("information wants to be free," "the internet interprets censorship as damage and routes around it," etc.). I haven't done your required readings (sorry!) and don't know how much of this is recapitulated in the historical perspective of the first week. Dave Karpf has short, accessible Substack posts on the early politics of information. Here is one example, though perhaps not the best for your agenda: https://davekarpf.substack.com/p/that-old-wired-ideology.
ETA: I suppose the science fiction of Vernor Vinge offers a perspective from a true believer.
Week 1 is indeed supposed to give them this in highly abbreviated form - the dialectic between Diamond's article on "liberation technology" and Tucker, Roberts et al. on what happened next ...
I read “Underground Empire,” which I thought was great. The claim that the lessons Thomas Schelling learned from being a parent helped him become an effective nuclear strategist was so fascinating to me that I tracked down the original source, but it makes no mention of what particular aspect of parenting prepared him to be an effective nuclear strategist. I apologize for being pedantic, but what were the lessons Schelling learned from being a parent?
One interesting absence from this is discussion of the role of intangible assets (à la Haskel and Westlake). Especially when talking about US power projection, the self-reinforcing nature of superstar firms is a really important point.
I'd be interested to hear how their skepticism holds up in the face of the IDF's AI programs, though I suppose they have a point about general applicability. I'll also admit that my fear of autonomous drones probably belongs in the future-worries pile, though it is still more applicable and less religious than many of the topics considered by various corporate AI risk teams. Goldfarb and Lindsay's argument about a lack of data doesn't seem to apply to the categorization and targeting operations conducted by Israel in its current conflict with Hamas. From what I understand, militaries have been using machine learning techniques in targeting operations as far back as Operation Condor, so a lot of the mechanisms and organizational capacity would already exist, which also seems to undermine their argument a little. Granted, military operations and capacity are very much not my specialty. On an unrelated note, I wonder whether Goodhart's Law would apply to these military targeting operations.
Thanks so much for posting this! I definitely wish I could audit this class. It's right in line with some of the research I've done on digital sovereignty for my MLIS. I personally think this conflict will change the shape of the liberal information order, maybe not immediately, and hopefully not for the worse. The members of Autocracy Inc. have weaponized epistemology in their attempt to topple it, and the LIO has been slow to react; but when it does react, that will definitely affect what it becomes. As interesting as it is to imagine those changes, it is absolutely terrifying to imagine the changes machine learning will instigate in military conflicts. Lavender, Where's Daddy?, and The Gospel are just the beginning, and if/when loitering munitions and drones get involved it becomes even scarier. The military does not need a "moral crumple zone" in its operational logic.
It's worth reading the Goldfarb and Lindsay piece for Week 2 on the ML and military conflicts question - they are quite skeptical about its general applicability.
Is there any way of viewing the lectures? Or even just the slides?