Thank you for making the Science article available. It would be interesting to know how many read it.
I was surprised that Science printed this, as it has no bearing on any scientific discovery or finding. Its main purpose is to argue that LMs are social and cultural technologies (SCTs) and should therefore be investigated by sociologists. Science requires evidence, but the article provides no evidence for the claim; it merely picks some past technologies as SCTs and asserts that LMs are similar. But similar how, beyond some rather vague claims about features? The concluding section, "Looking Forward," uses a straw man argument of a binary point of view to claim that we would get more subtle and useful ways to discuss and work with such LMs, as if this were not already happening.
I am not clear why some technologies are labeled SCTs while others, like steam engines, are neutral and not SCTs. The article doesn't provide any support for this assertion, where the null hypothesis might be that all technologies lie on a continuum along several axes, including "social" and "cultural".
The references seem to cover a number of subjects, but it struck me that the Blodgett reference concerns "bias" in natural language processing, a somewhat different subject, and was published before deep neural networks, the forerunners of LLMs, were invented (2006). It is not particularly applicable to what Brad DeLong calls MAMLMs, and what I think you call LMs, which go beyond the LLM interface that Chiang compares well to lossy JPEGs. Yes, bias exists because of the content slurped up, the RAG documents selected, and of course the human-feedback (RLHF) stage of training. Use different content and the bias will change direction. Force an LLM to answer only from selected documents in a database and the bias will depend on the content of those documents. I welcome that bias in STEM subjects, and I am biased in favor of democracy in politics, even if Churchill's famous phrase is wrong.
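To make the last point concrete, here is a minimal sketch (a toy keyword retriever and hypothetical corpora; the actual model call is elided) of how retrieval-grounded prompting pins an answer to whichever documents were selected:

```python
# Minimal sketch: constraining an LLM's answer to a selected corpus.
# The retriever is a toy keyword scorer and the corpora are hypothetical;
# a real system would send the prompt to a model, which is elided here.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def constrained_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt instructing the model to answer ONLY from retrieved text."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Two different document selections produce two differently slanted contexts:
corpus_a = ["The steam engine transformed factory labor and displaced weavers."]
corpus_b = ["The steam engine was a neutral tool that benefited everyone."]

query = "What did the steam engine change?"
print(constrained_prompt(query, corpus_a))  # answer inherits corpus A's framing
print(constrained_prompt(query, corpus_b))  # answer inherits corpus B's framing
```

Whatever slant the selected documents carry, the grounded answer inherits it.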
I am sorry to be so critical of this piece; your posts are well written and erudite, and I enjoy reading them.
"Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about cultural and social consequences of large models, orbiting around two foci: immediate effects of these systems as they are currently used, and hypothetical futures when these systems turn into AGI agents perhaps even superintelligent AGI agents.
"But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from social and behavioral sciences with computer science can help us understand AI systems more accurately. Large Models should not be viewed primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated."
Thank you so much for an explanation that is roughly understandable, that illuminates AI, and is very comforting.
Particularly the part about AI as "allowing humans to take advantage of information that other humans have accumulated."
Your explanation makes sense of the AI phenomenon to me.

Thank you so much!
Good call, and thanks for making the full text available. Yes, the question is what impact LLMs will have on society. As you say, there are well-known ways to explore that, as used before for, e.g., democracy, bureaucracy, libraries, etc.
AI manufacturers would prefer that we didn't discuss their products in those terms, but as "just another technology".
But the public debate about LLM AI entirely avoids your approach; it focuses on an entirely different set of questions.
Where people fail to do the obvious, one has to ask why. Sometimes it's just fluke or fashion, or what does and doesn't interest people, or just plain human fallibility: not stopping to think.
But here, I think there is more going on. Someone or something worked to bring this about. It/they sought to shape how the public talks about these products. They hijacked our attention, and took it to places of their choosing.
As usual, the PR industry keeps its roles and methods hidden, and its firms' names well away from public knowledge.

Thanks for helping us see this.
"Someone or something worked to bring this about. It/they sought to shape how the public talks about these products. They hijacked our attention, and took it to places of their choosing."
Isn't this true of all significant technologies?
We are fixated on new, shiny things and don't think further about them. The Luddites, at least, didn't so much worry about the machines as about the consequences of owners not sharing the value with the workers while destroying the cottage industry the machines were replacing.
The rush to try to create AGI might result in friendly but fallible robots abiding by Asimov's Three Laws, like Star Wars's C-3PO, and/or malevolent AIs like Colossus or Skynet. Both have societal consequences that need to be thought about and regulated as changes occur. Their current creators want no regulation at all, and that ideology is the problem.
"an autocratic AI future with Chinese characteristics."
We see what you did there. ;^)