14 Comments

A rich and timely essay. I especially liked the Borges bit... I cannot think of a fiction writer more worth re-reading in light of generative AI.

You don't specifically call attention to schooling as an example of the Lovecraftian monstrosities that LLMs may replace, but it strikes me that LLMs were introduced to the general public through a moral panic about ChatGPT eating homework that has now subsided into a low boil of worries about the "disruption" of education. Now Khanmigo is about as likely to replace teachers as we are to see AGI, but the question of how exactly generative AI will change (or not) the bureaucratic structures and internal practices of education seems relevant.

Collective action in this space feels urgent, especially in light of your insight about the importance of the continued production [and distribution] of high-quality knowledge.

> I cannot think of a fiction writer more worth re-reading in light of generative AI.

As much as it embarrasses me to bring an Asimov to a Borges fight, I've recently found myself thinking about this story a lot:

https://en.wikipedia.org/wiki/The_Machine_That_Won_the_War_(short_story)

Thanks for this thoughtful piece. Although LLMs have uses other than summarisation, and other AIs such as image generators and multimodal systems have uses of their own, the basic logic seems sound: the societal usefulness of generative AI depends on knowledge and culture (or, more correctly, on embodiments of knowledge and culture in digital form).

You are right, therefore, that societal failure to acknowledge the costs of knowledge and cultural production will ultimately threaten the technology (or, more correctly, our society).

How society funds this production, and the role AI corporates should play in it (if any), is the real question.

Confronting this question doesn’t strictly contradict the suggestion that imposing costs on AI creators, via copyright liability for using digitally embodied knowledge and culture, will threaten or kill AI development. There are as many problems with the idea that (traditional) copyright is a good fit for this job as there are with its use to fund public-interest journalism. Big media corporations, as you suggest, often see knowledge and culture as a private commodity to be bought and sold, and they have the muscle to cut favourable licensing deals with the biggest search, social or (now) AI platforms.

Yet independent creators lack this power, placing them at a competitive disadvantage to big media.

Copyright licensing requirements also place independent or publicly-funded AI research at a disadvantage.

Taken together, copyright favours the dominant corporations and incumbents of both the media and AI industries, entrenching their positions further.

And yet consider also: public-interest news - and, I’d suggest, knowledge more broadly - needs to be freely accessible to benefit the public, and funded (one way or another) by the people it’s meant to serve.

Western publics generally contribute huge amounts to charities, and their governments significant amounts to academic research and the arts - and sometimes news provision - through public grants, because these activities and cultural creations are deemed valuable public goods.

Far better, then, that increased levies on the profits of AI corporations (and tech companies more generally) be used to stimulate public-interest knowledge and cultural production directly, according to publicly directed needs, and that this output be freely usable for AI training.

Just a spontaneous thought, but aren't all "new" insights and knowledge just rearrangements of, and new connections between, what is already given?

The ensh*ttification of Google is one of the reasons LLMs seem so impressive now. I've switched to Bing+Copilot because it understands and answers questions written in natural language, whereas it has become harder and harder to make Google give me the answers I want: it ignores both natural-language word order and the priority order in which I write search terms. But if someone had said in 1998 that it would take 25 years to deliver this functionality, I wouldn't have believed them (indeed, SQL was promising something like it even further back).

Perhaps this article only triggered my unconscious bias toward self-contradicting paradoxes.

But assume all you say is true: human creators need to stand up to manage the growth of the map, and we need to put institutions or guardrails in place. What if human creators and their allies flooded currently available LLMs with requests for text and images describing those guardrails, systems, and institutions?

I’m not saying that should be done. But it might be interesting if the LLMs significantly helped us narrow in on guardrails worth implementing.

Certainly, the companies that build and host LLMs would sit up and take notice if a massive segment of humanity were using their tool to find ways of managing that tool. It would certainly cost them something to serve up all those possible answers (or non-answers, depending on the training and their algorithms). And rendering images of such answers would be costly as well.

A system for managing the LLMs might even be found that way.

However, those companies could certainly slow such a flood of requests by charging a penny for every prompt.

I’m trying to imagine an effective and inexpensive alternative to the courts, to shouting into the ether on social networks, or to politics.

Any thoughts?

I genuinely have yet to see a remotely convincing high-value use case for LLMs (other, obviously, than as a provoker of think pieces like this one), in spite of paying careful and continuous attention. I don't think I fall into any of your classes of opinion above because, speaking as a still-very-active computer scientist who wrote his PhD thesis a long time ago in a leading AI department, I cannot in the end convince myself that there is much there, other than a cloud of deluded bullshit and wild claims from blowhard techies (which is not new in the AI biz), and thus I find it difficult to care.

I am, however, curious to see how the next five years work out, to reveal whether this time it really is different.

> There isn’t any absolute logical contradiction between the two claims, and occasionally, quite stupid technologies have spread widely. Still, it’s unlikely that LLMs will become truly ubiquitous if they are truly useless. And there are lots of people who find them useful!

I do think the words "useful" and "useless" are doing a lot of work here. The real question is "useful for what, specifically?"

Is race science useful? Obviously not as a way of making sense of the world, but it is extremely useful as a way of justifying (and hiding) structural injustice. Is the American healthcare system useful? Well, maybe not for patients, but clearly very useful for insurance companies. The same is probably true of all these "useless" technologies that are widespread: they are indeed useful, just not for the consumers.

The concern of the Writers Guild, for instance, isn't that LLMs will produce screenplays that replace writers, but that they will APPEAR to do so well enough that studios will get away with paying writers less, since studios will frame the act of turning inchoate drivel into a coherent screenplay as a "re-write", even though it will probably be as much work as writing from scratch, maybe more. In that framing, the concern isn't useful vs. useless, but the specifics of the use.

And I do like that, in your overall argument, you bring it down to an actual specific use (summarization), but I wonder: were we all to agree that THAT is the real groundbreaking use, wouldn't all the interest in LLMs suddenly appear a little silly? I don't even disagree with your assessment of the importance of summaries, but I do think that even before LLMs, the supply of summaries was already well poised to outpace demand, so adding summarization "at scale" to an already robust process does not seem like a disruptive event.

Finally, my own hippie-dippie take is that an increase in leisure time for the general population is our best bet for keeping the engine of human ingenuity going, whatever takes us there (UBI, centralized planning, AI takeover, fully automated luxury space communism, etc.).

I feel like there is a blind spot in this -- otherwise excellent -- article: the ecological/energetic one.

AFAIK it is one of the main push-back arguments of the so-called lefties.

Well, I'm glad you put in Borges' "On Exactitude in Science", because it was the first thing that occurred to me when I read "the summaries can increasingly substitute for the things they purportedly represent". But I am skeptical of your "completely different reasons". Can an LLM really summarize a 50,000-word book in 500 words? I have never seen an example of such a marvel. I reckon the attraction of these "summaries" is more ordinary: rather than a free 500-word summary substituting for a costly 50,000-word book, a free 50,000-word copy will substitute for a costly 50,000-word book. You may quarrel with "costly", but is the practice right? Wouldn't you consider a student who wrote a paraphrase of some other work to be plagiarizing? Why is automated plagiarizing better than artisanal plagiarizing?
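
For what it's worth, the experiment is cheap to run. Here is a minimal sketch, assuming the OpenAI Python client and a long-context model; the model name, file name, and word budget are illustrative assumptions, not details from the essay:

```python
# Minimal sketch: ask a long-context model to compress a ~50,000-word
# book into ~500 words. Model name, file name, and word budget are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("book.txt") as f:  # hypothetical plain-text copy of the book
    book_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # 128k-token context window fits ~50,000 words
    messages=[
        {"role": "system",
         "content": "Summarize the following book in roughly 500 words."},
        {"role": "user", "content": book_text},
    ],
)
print(response.choices[0].message.content)
```

Whether what comes back is a genuine summary or just confident paraphrase is, of course, exactly the question.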

Please note, I am not saying that the entire technology of "machine learning" is useless. I am certainly impressed with the performance of classifiers: the ability of a program to classify an X-ray better than a radiologist can, for example. And machine translation has become extremely useful. But I've never been shown a useful LLM output.

Another thing that generative AI does well is a kind of "concept stitching", where it doesn't just summarize knowledge but takes those summaries and puts them together to communicate analogies from one person to another. This is much easier to see with the so-called AI art generators than with the textual tools. A human takes two very different concepts and asks the AI to create a picture where those concepts are combined in some way, based on an analogy conceived by the human. The program then takes its "summarized" version of the artistic genre(s) and seamlessly attends to the minutiae involved in stitching them together into a unified image (rather as computers take care of the vast numbers of sums needed to solve differential equations, without requiring a human with expertise in calculus). The resulting image can then suggest the high-level analogy in the original person's mind to another person, in a way that historically could only be done by someone with a high degree of skill in artistic representation. In this case, the "summarization" is valuable not as a pedagogical tool to speed up book learning, but as an input to a communication tool.
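
To make that workflow concrete, here is a minimal sketch using the open-source diffusers library; the model checkpoint and the two stitched concepts are illustrative assumptions, not examples from the piece:

```python
# Minimal sketch of "concept stitching" with an off-the-shelf image
# generator. The human supplies the analogy (two unrelated concepts and
# how they should combine); the model handles the visual minutiae.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any text-to-image checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

# Two very different concepts, combined via a human-conceived analogy.
prompt = ("a public library as a coral reef, shelves of books growing "
          "like coral, soft watercolor style")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("concept_stitch.png")
```

The particular model doesn't matter; the point is that the human contributes only the analogy, and the generator handles all of the representational minutiae.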

A great and much-needed overview of the oncoming AI train. In the short term, what LLMs generate is content, and the fight between the people hoping to monetize that content and the creators being replaced by it is getting the attention. Since the people promoting AGI have nothing more than "and then a miracle happens" as a theory of how we get there, we can ignore the threat of machines with souls for the time being. That leaves the short-term problem of the cultural/artistic class being hollowed out the same way the working middle class has been, and the longer-term problem of what happens to a culture whose artistic view of itself becomes a lossy hall of mirrors heading toward random noise.

This is thought-provoking. Thank you!! "The Map is Eating the Territory" seems to be a useful way of thinking about it, and I immediately thought of Doctorow's "enshittification cycle," but I wonder whether thinking about sustainable equilibria between parasites and hosts might also be a useful frame -- even if "parasite" might understate the useful potential of LLMs.

author

Cory gets a shoutout for enshittification in the piece - this is one possible mechanism through which it might happen ...
