10 Comments

I would like to make a motion to refer to the risible answers from New Google as "anal leakage", e.g., "Did you see the anal leakage Google gave me when I asked about eating rocks?!"


Going along with your post, I thought this was a good, short explanation for why those results would be part of a successful web search, but not of the summary: https://mikecaulfield.substack.com/p/the-elmers-glue-pizza-error-is-more

author

That is indeed a really helpful link.


Mork had that problem; he needed Mindy to flag the HU-MORE for him. And it seems the root of the social problem is that the same is true for people in general.


IIRC, the reason for New Coke was that Pepsi had been publicising blind taste tests which overwhelmingly favored the sweeter Pepsi taste. So, New Coke was Pepsi.

The analogy to Google's rush to jump on the AI bandwagon is obvious.


LLMs don't *sometimes* hallucinate - they *always* hallucinate; it's literally how they *work*. Sometimes the output resembles reality to the viewer - but the LLM doesn't know that - just like a cloud can look like a puppy dog to a viewer.

A cloud ain't a puppy dog, and an AI ain't truth.


This is great stuff, and I think it's a weirdly hilarious irony that among the best use cases for LLM tools has been writing code. It does take some care, but as often as not you can cut and paste the code an LLM generates and it'll work. I cannot help but wonder if this feature of their behavior is partly responsible for why a large subset of folks are willing to believe LLMs are far more capable in other scenarios than they clearly are.


If LLM summaries ever become really good, users won't scroll through the many search results, so they won't see as many advertisements. The AI-summary business model seems potentially less profitable for Google.

Perhaps a better use of Google's LLMs is to group together and collapse similar content under headings. That would help counter the proliferation of copycat content increasingly generated by LLMs.


Original Google let me quickly click on the source to make an assessment of its quality. That wasn't perfect, but it led to us not holding Google accountable for poor-quality links, especially if most of the links for any given search were useful and nominally trustworthy. The 'cost' of reviewing (and rejecting) a given link was pretty low. By not showing their sources, LLMs essentially ask users to trust them. When they don't prove trustworthy almost all of the time, their utility massively diminishes.


How does this jibe with Section 230 protection? It's one thing for Google to link to a libelous website. It's similar if they provide a libelous excerpt and cite its origin. It's another thing entirely to simply produce a libelous statement. As you note, Google doesn't have a reasonable-person-would-not-believe-this-statement defense.

If anything, there have been a number of successful high-profile lawsuits for slander and libel recently. Lawyers try to sue defendants with deep pockets, and Google has very deep pockets. Am I missing something, or is Google missing something?
