Great piece. One addition to: “LLMs are parametric probability models of symbol sequences, fit to large corpora of text by maximum likelihood. By design, their fitting process strives to reproduce the distribution of text in the training corpus”. This description omits a key mechanism: the (pseudo-)randomness of next-token selection. A *set* of high-likelihood candidate tokens is constructed, from which a selection is made *at random* rather than by always taking the single most likely continuation. This produces a mechanism that isn’t ‘striving to reproduce’. In fact, reproduction is sometimes desirable (correct quoting) and sometimes not (plagiarism). The randomness is a source of creativity, as well as a means to fill in ‘holes’ in the distribution and to extend it slightly beyond the training data. Initially there was only a single such random-selection step in the architectures; by now one can imagine multiple (e.g. a selection from multiple sampled continuations, though only insiders know).
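For concreteness, here is a minimal sketch of what that random selection step typically looks like, assuming the model has already produced logits over its vocabulary. The temperature and top-k knobs are the standard ones; the function and parameter names are illustrative, not any particular system's API:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """Pseudo-random next-token selection from a model's output logits.

    Rather than always taking the single most likely token (argmax),
    the usual procedure rescales the distribution (temperature),
    keeps only the top_k most probable candidates, and then draws one
    at (pseudo-)random in proportion to its probability.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)

    # Keep only the k most likely candidates ("top-k" truncation).
    top = np.argsort(logits)[-top_k:]
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()

    # The pseudo-random draw: this is where two runs on the same prompt
    # can diverge, and where 'holes' in the training distribution get
    # papered over.
    return int(rng.choice(top, p=probs))
```

Driving the temperature toward zero recovers near-deterministic, most-likely-continuation behaviour, which is why the same model can be pushed either toward faithful reproduction or toward variation.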
What we do know is that none of this scales to AGI levels of output.
At least not yet they don't. But combine it with machine learning and neural networks and you get another beast altogether. And the oligarch purveyors are working furiously to jam it down our collective throats as quickly as they can - and reap maximum profits, no matter how many deaths it results in.
If nothing else, I now understand what is meant by "cultural technologies". I didn't get that from the Science piece.
However, I think this framing is in some ways a version of Searle's and Dreyfus' critique of AI back in the GOFAI days. You are getting close to making human intelligence (and intellect) something almost ineffable, in order to distinguish it from machines. I see no intrinsic difficulty in adding goal-seeking algorithms to LLMs to make them more "biologic" in operation, constantly switching goals to satisfy some sub-goal (a minimal sketch of such a loop follows below).
Let's invert this and think of human brains as cultural technologies of genes. As is clear, humans transmit culture. We store it in compressed form and relate it in many forms, from stories before writing existed, to the many media and cultural artifacts and technologies we have invented. What distinguishes human minds from machines? At this point, consciousness seems to be the one that is most used to separate us. It is perhaps better than the idea of an implanted "soul or spirit".
Take apart human minds and compare the pieces and tasks that we do with LLMs. What are LLMs or their plausible descendants missing? Is this gap permanently out of reach, or a matter of adding further technology, like wheels and a drive mechanism to a steam engine to make it a mobile train?
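To make the goal-seeking wrapper mentioned above concrete, here is a minimal sketch of a goal-stack loop around a text model, assuming only a black-box `llm(prompt) -> str` completion function. The prompt format, names, and stopping rule are illustrative assumptions, not any real product's API:

```python
from collections import deque

def goal_seeking_loop(llm, top_goal, max_steps=20):
    """Sketch of a goal-stack wrapper around a black-box text model.

    The loop repeatedly asks the model either to act on the current
    goal or to decompose it into a sub-goal, switching goals as
    sub-goals are pushed and popped, i.e. the 'constantly switching
    goals to satisfy some sub-goal' behaviour described above.
    """
    goals = deque([top_goal])           # stack of open goals
    transcript = []

    for _ in range(max_steps):
        if not goals:
            break                       # every goal satisfied
        current = goals[-1]
        reply = llm(
            f"Goal: {current}\n"
            "If you can achieve it directly, answer 'DONE: <result>'. "
            "Otherwise answer 'SUBGOAL: <next smaller step>'."
        )
        transcript.append((current, reply))
        if reply.startswith("DONE:"):
            goals.pop()                 # goal satisfied, return to its parent
        elif reply.startswith("SUBGOAL:"):
            goals.append(reply[len("SUBGOAL:"):].strip())
        else:
            goals.pop()                 # unusable reply; abandon this goal

    return transcript

# Toy stand-in for a real model, just to exercise the loop:
fake_llm = lambda prompt: "DONE: ok"
print(goal_seeking_loop(fake_llm, "summarise the thread"))
```

The stack discipline is what produces the goal-switching behaviour; everything interesting still lives inside the black-box `llm` call, which is where the question of whether this closes the gap really sits.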
A most fascinating read, and unusual take on the problem. Now try telling it to the billionaire oligarchic purveyors of these overpowered spellcheck machines.
the event of scrolling through the dewey cards exposes other information sources implanting clues to informations unsought but encountered... whispers become shouts subconsciously as our native minds sort and catalogue automatically until we are at the gate of renewed dignity and service while we find each other in the swirling winds of a certainty found in our common past... identity found through our being bound by some worthwhile values bringing understanding between us... remember the Black Panthers, brought together to feed and defend their children from racists and misused police power... they fought and died against odds that we all should have stood up to with them as a union of solidarity and worth... the evil and ignorant failure that is GQP/drumpf attempts to treat all of us as just a source of grift, we are all the Black Panthers now... how we overcome this dilemma will define our future... the Founders gave us a Republic let's see if we may keep it...
Worth the price of admission for the intellect/intelligence distinction alone, which I had not run across in such a pithy formulation. Thank you.
I must confess I hadn't run across it at all. It's most intriguing food for thought.
Great insights from Cosma, as usual!
it is, indeed, a banger.
The Barzun passage was really helpful, along with a clearer view of collective intelligence.