Discussion about this post

Winston Smith London Oceania

Wow. I'm not sure what to make of this. From a computer science perspective, LLMs are not all they're cracked up to be. Impressive emulations, for sure, but emulations and nothing more. The greatest danger of AI is people forgetting that the "A" in "AI" stands for "artificial" and means exactly that. The danger is people treating it as if it were real, when it's not.

Another issue is that the purveyors and profiteers pushing AI care not one whit about the impact it has on society or individuals, beyond their own enrichment.

Alan Stockdale

Vinge, AI 2027, and others in the AGI / "superhuman AI" camp talk of human intelligence but have a remarkably impoverished notion of human cognition and of what it is to be human. The field of AI has always been steeped in pre-20th-century philosophical tropes. Hubert Dreyfus had been telling them to read some Heidegger and Wittgenstein since the early 1960s, and very few of them appear to have bothered. Since the start of the cognitive revolution in the late 1950s there have always been people critical of the dominant brain/mind-as-computer metaphor.

LLMs don't use language, and language is not representation. Language is meaningful because it is embedded in human practices. Meaning is out there in our interactions with other people and the world. Human cognition evolved; it's continuous with animal cognition. But even LLM skeptics like Marcus and LeCun are still hung up on internal representations, believing they will solve LLM problems, of which there is no shortage, by combining symbolic AI with neural-network AI to build better representations of the world. Good luck with that. They appear to willfully avoid the enactivist literature going back to Varela, Thompson and Rosch (1991) and beyond.

The only way you get anything remotely similar to human-like intelligence in a machine is through being in the world. To do so you would have to follow the pathway taken by Rodney Brooks, likely a very long and winding one. The "superhuman AI" camp seem to imagine their intelligence-in-a-box will become self-aware and develop intentions given enough processing power and data. That seems like a dubious assumption, but if you believe the human brain is a computer, it might be a logical one.

I take it that the point about its being a social technology is similar to the one made by Matteo Pasquinelli, who writes: "AI is not constituted by the imitation of biological intelligence but by the intelligence of labour and social relations." (Although the cultural investment in the former goes hand in hand, I think, with the latter.) On the surface, GenAI doesn't seem likely to be very good at the latter either. The tech companies pushing GenAI are burning hundreds of billions, soon to be trillions, of dollars. AI runs on expensive and rapidly depreciating hardware and consumes vast amounts of power and water. And the market, almost three years after the release of ChatGPT, is tiny compared to the capital outlay, because the technology doesn't do what it's claimed to do. The uses that do exist are incremental and ordinary, and one has to balance those benefits against the dysfunction (see Daron Acemoglu). But maybe that misses the point, if the true value of the technology as a social technology lies more in the uses dysfunction has for the move-fast-and-break-things gang.

