13 Comments

A rich and timely essay. I especially liked the Borges bit... I cannot think of a fiction writer more worth re-reading in light of generative AI.

You don't specifically call attention to schooling as an example of the Lovecraftian monstrosities that LLMs may replace, but it strikes me that LLMs were introduced to the general public through a moral panic about ChatGPT eating homework that has now subsided into a low boil of worries about the "disruption" of education. Now Khanmigo is about as likely to replace teachers as we are to see AGI, but the question of how exactly generative AI will change (or not) the bureaucratic structures and internal practices of education seems relevant.

Collective action in this space feels urgent, especially in light of your insight about the importance of the continued production [and distribution] of high quality knowledge.

Just a spontaneous thought, but aren't all "new" insights and knowledge just a rearrangement and connection-making between what is already given?

The ensh*ttification of Google is one of the reasons LLMs seem so impressive now. I've switched to Bing+Copilot because it will understand and answer questions written in natural language, whereas it has become harder and harder to make Google give me the answers I want: it ignores both natural language word order and the priority order in which I write search terms. But if someone had said in 1998 that it would take 25 years to deliver this functionality, I wouldn't have believed them (actually, SQL was promising that even further in the past).

Perhaps this article only triggered my unconscious bias for self-contradicting paradoxes.

But assuming all you say is true, that human creators need to stand up to manage the growth of the map, and that we need to put institutions or guardrails in place, consider what would happen if those creators and their allies flooded currently available LLMs with requests for text and images describing those guardrails, systems, and institutions.

I’m not saying that should be done. But it might be interesting if the LLMs significantly help us narrow in on such guardrails that are worth implementing.

Certainly, the companies that build and host LLMs would sit up and take notice if a massive segment of humanity were using their tool to find out ways of managing that tool. It would certainly cost them to serve up all those possible answers (or non-answers, depending on the training and their algorithms). And rendering images of such answers would be costly as well.

Perhaps a system for managing the LLMs might even be found that way.

However, those companies could certainly slow down a flood of requests if they charged a penny for every prompt.

I’m trying to imagine an effective and inexpensive alternative to the courts, to shouting into the ether on social networks, or to politics.

Any thoughts?

I genuinely have yet to see a remotely convincing high-value use case for LLMs (other, obviously, than as a provoker of think pieces like this), in spite of paying careful and continuous attention. I don't think I fall into any of your classes of opinion above, because (speaking as a still-very-active computer scientist who wrote his PhD a long time ago in a leading AI department) in the end I cannot convince myself that there is much there, other than a cloud of deluded bullshit from blowhard techies making wild claims (which is not new in the AI biz), and thus I find it difficult to care.

I am, however, curious to see how the next five years work out, to reveal whether this time it really is different.

> There isn’t any absolute logical contradiction between the two claims, and occasionally, quite stupid technologies have spread widely. Still, it’s unlikely that LLMs will become truly ubiquitous if they are truly useless. And there are lots of people who find them useful!

I do think the words "useful" and "useless" are doing a lot of work here. The real question is "useful for what, specifically?"

Is race science useful? Obviously not as a way of making sense of the world, but it is extremely useful as a way of justifying (and hiding) structural injustice. Is the American healthcare system useful? Well, maybe not for patients, but clearly very useful for insurance companies. The same is probably true of all these useless technologies that are widespread: they are indeed useful, just not useful for the consumers.

The concern of the writer's guild, for instance, isn't that LLMs will produce screenplays that will replace writers, but that they will APPEAR to do so enough that studios will get away with paying writers less, as they'll frame the act of turning inchoate drivel into a coherent screenplay as a "re-write", even though it will probably be as much work as writing from scratch, maybe more. In that framing, the concern isn't about useful vs useless, but the specifics of the use.

And I do like that in your overall argument you bring it down to an actual specific use (summarization), but I wonder: were we all to agree that THAT is the real groundbreaking use, wouldn't all the interest in LLMs suddenly appear a little silly? I don't even disagree with your assessment of the importance of summaries, but I do think that even pre-LLM, the supply of summaries was already well poised to outpace the demand, so adding summarization "at scale" to an already robust process does not seem like a disruptive event.

Finally, my own hippie dippie take is that an increase of leisure time for the general population is our best bet at keeping the engine of human ingenuity going, whatever takes us there (UBI, centralized planning, AI take over, fully automated luxury space communism, etc).

I feel like there is a blind spot in this (otherwise excellent) article: the ecological/energetic one.

AFAIK it is one of the main push-back arguments of the so-called lefties.

Well, I'm glad you put in Borges's "On Exactitude in Science," because it was the first thing that occurred to me when I read "the summaries can increasingly substitute for the things they purportedly represent". But I am skeptical of your "completely different reasons". Can an LLM really summarize a 50,000-word book in 500 words? I have never seen an example of such a marvel. I reckon the attraction of these "summaries" is more ordinary: rather than a free 500-word summary substituting for a costly 50,000-word book, a free 50,000-word copy will substitute for a costly 50,000-word book. You may quarrel with "costly", but is that right? Wouldn't you consider a student who wrote a paraphrase of some other work to be plagiarizing? Why is automated plagiarism better than artisanal plagiarism?

Please note, I am not saying that the entire technology of "machine learning" is useless. I am certainly impressed with the performance of classifiers: the ability of a program to classify an X-ray better than a radiologist can, for example. And machine translation has become extremely useful. But I've never been shown a useful LLM output.

A great and much-needed overview of the oncoming AI train. In the short term, what LLMs generate is content, and the fight between the people hoping to monetize that and those content generators being replaced is getting the attention. Since the people promoting AGI have nothing more than "and then a miracle happens" as a theory for how we get there, we can ignore the threat of machines with souls for the time being. Which leaves the short term problem of the culture/artistic class being hollowed out the same way the working middle class has been, and the longer term problem of what happens to a culture whose artistic view of itself becomes a lossy hall of mirrors heading toward random noise.

This is thought provoking. Thank you!! "The Map is Eating the Territory" seems to be a useful way of thinking about it, and I immediately thought about Doctorow's "enshittification cycle," but I wonder whether thinking about sustainable equilibria between parasites and hosts might also be a useful frame -- even if "parasite" might understate the useful potential of LLMs.
