Nice essay! I've always been frustrated that Vinge's conceptualization of the Singularity was so rarely linked to what I saw as one of its major implied reference points, which is the emergence of modern subjectivity, particularly in the way that Foucauldian analyses approached "epistemic" rupture. Foucault himself never managed (imho) to do a great job of actually exploring what kinds of selfhood or subjectivity people might have had before modernity, perhaps because modern epistemic frames overwhelm or infuse any such project. But I do think Foucault-inspired analyses were somewhat convincing that there *was* a rupture of some kind--that a person living before madness, before sexuality, before the prison, etc. would struggle to understand in a fundamental way what a modern self thought, did, felt and saw. (And thus also in the other direction.) Which is what I understood Vinge to be reaching for--that we were on the cusp of the emergence of a subjectivity that would be so radically different in some of its basic nature that there would be no communicating across that epistemic rupture, and that we couldn't really imagine on this side of it what it was going to be like on that side of it. The readings of the Singularity as "nerd rapture" or just as an enhancement/degradation of modern subjectivity missed the point, or perhaps just dragged it somewhere far afield from what he had in mind.
There is a bit of this flavor in Vinge, as you say (early ideas for this post, which I've been playing around with for a long while, talked more about it). But it is mostly buried - I think it is really post Nick Land - who builds on Deleuze etc. - that you see it becoming a significant part of the overt debates. But this is all amateur intellectual history on my part - if anyone has done this properly, I haven't seen it (bits here and there in the Basilisk essays etc., but nothing really systematic and coherent).
There was a viral interview some months back with a small IT business owner in Maine on his support of Trump. When the interviewer asked if he was concerned about the chaos a Trump victory might bring, he replied "that's why I'm voting for Trump". An awful lot of people hate and fear the modern world in general and are comfortable with the idea of tearing it down and starting over (in theory, because no one in favor of this wants to think through the aftermath, other than Steve Bannon). The fear of AI is driven more by this than any threat of daemons. Not even the people building the current iterations of "AI" fully understand it, let alone what it will do to society (beyond granting them wealth and power, somehow). But I fully agree that we need some sort of pataphysical language to grapple with a coming wave no one understands.
The association of computer programming with magic has a long and questionable history quite apart from AI, such as calling system administrators "wizards". In fiction, this can be found in Rick Cook's Wizardry series, for example.
But you are right that there is a special link to AI. Have a look at this cover of the classic text "The Structure and Interpretation of Computer Programs", by Abelson and Sussman (with Julie Sussman): https://www.amazon.ca/Structure-Interpretation-Computer-Programs-Abelson/dp/0262510367. It could easily have served as your header image, were it not for copyright issues. Functional and logic programming was for a long time specifically linked to AI; in fact, my first exposure to it was in an AI course, and SICP was one of our texts.
Coming at it from the other end, it is also true that traditionally, "daemons are very literal-minded creatures", as in the genie that grants you the wish you ask for, rather than the wish you want. In fiction, my favourite example of this is Jack Vance; the refractory sandestin daemons that underlie the magic in his novels must be cajoled and reasoned with - according to their own notions of reason.
The sandestins are willfully perverse in how they interpret instructions - a different version is the indifference in John Crowley's Aegypt/The Solitudes: "And Midas, first and most terrible exemplar of all. It was not, Pierce decided, that those powers which grant wishes intend our destruction, or even our moral instruction: they are only compelled, by whatever circumstances, to do what we ask of them, no more, no less. Midas was not being taught a lesson about false and true values; the dæmon who granted his wish knew nothing of such values, did not know why Midas would wish his own destruction, and didn’t care."
The sandestins aren't so pure as the genie in the lamp. They display different aspects of their characters according to circumstances: sometimes perverse, sometimes indifferent, sometimes rebellious, sometimes comically ingenuous.
I think that Crowley has written Pierce to be a little slow on the uptake here. *Of course* the genie is indifferent! But the tale-teller is not, and does indeed intend the tale to be a moral lesson. It wouldn't be so effective if the genie were also the didact.
It's been a long time since I've read any of the Aegypt books; when I turn to Crowley, I usually re-read Little, Big, or Engine Summer. I have to try Flint and Mirror though.
NRx I associate more with Yarvin than Land…though it's funny that I found the word "cybernetics" through Land via Yarvin via SSC.
There is a cultural history to be written about the weirdness of cybernetic discourse, and the multitudes of different uses to which the ideas have been put by very different people.
You've likely read it, but Arendt takes up Vico's claim about what we can know in her essay "The Concept of History: Ancient and Modern" (ch.2 in her Between Past and Future). Interesting to consider her discussion in connection with your observations (and those of Timothy Burke in his comment).
https://archive.org/details/hannah-arendt-between-past-and-future
I've actually not!
I'm with you that those two novels are absolutely his best work, not bested by any other space opera (or SF in general) ever written.
I did not understand why you call "we can only understand what we have made" Vico's *Singularity*.
The idea is that the fundamental terms of the argument about the Singularity are the old debates about humanism presented in different garb.
Sorry, I am apparently missing the context to be able to parse this. Probably because I am unaware of what "the old debates about humanism" refers to.
My first guess: in the terms presented, Actor-Network Theory should help. And what you posted recently about cybernetics.
And there might be some kind of local fix, following some kind of Polanyian double movement, as has been customary, between the two forces.
Thank you, Henry, for the energizing thoughts!
It's not like I've read that much on Vico's verum-factum principle, but I've never seen much made of the "we" at the heart of it. He's basically saying that we make history, therefore we can understand it. But which sense of "we" is he using? I don't think he means "we" in the sense of "all of us" or "anyone in particular." I think he means something closer to history being _in principle_ understandable by humans (in contradistinction to nature, which is not). Or more strongly, he means "we" metonymically, as when Neil Armstrong gave all of humanity credit for his single step. This group of historians here understand what the European Enlightenment was all about, therefore in some sense all of humanity does, too.
I suspect at least part of the tension within this whole discussion about AI hinges on this ambiguity. To be clear, I'm not trying to critique Vico here. I just mean that the ambiguity in his thinking reflects the basic ambiguity in _all_ situations where human artifice is involved. Some people know how to do this stuff better than others. So in one sense, "we" do understand limited pieces of anything human-made because some (or some group) of us understand pretty well, but it's not like there's any single person out there who understands how it all fits together. And the larger and more complicated the human-made world, the smaller those pieces become relative to the whole.
And I agree strongly that "both [metaphors] are wildly misleading." And I suspect a lot of that has to do with some basic misunderstandings about craft knowledge (_techne_) and how it differs from other kinds of knowledge. A significant number of these misapprehensions have been picked up in design culture and design thinking, which descends from the artisanal/workshop traditions of Renaissance Italy. The UX notion of a "conceptual model," for example, helps us articulate the gap between production and use--how, say, drivers who don't understand how internal combustion or power transmission or steering or braking actually work can still drive without any issues. And my point here is that _someone_ still has to know how to build and repair those systems or the car won't keep running.
The historian of technology P. G. Walker wrote, “Because we see the machine reshaping society and changing men’s habits and ways of life, we are apt to conclude that the machine is, so to speak, an autonomous force that determines the social superstructure. In fact, things happened the other way around.... The reason why the machine originated in Europe is to be found in human terms. Before men could evolve and apply the machine as a social phenomenon they had to become mechanics.” (P. G. Walker, “The Origins of the Machine Age,” History Today 16 (1966), 591-92.)
So when people talk about the Singularity, one question I always have in the back of my mind is who's going to do maintenance on the material substrate for this "intelligence." And for anyone thinking that such an intelligence would just "take over" robotics factories, I would point out that the system that keeps robotics factories online is WAY more complex than is easily appreciable--and that the number of easily overlooked machines and human mechanics of all kinds and levels that are involved is likely to be astoundingly large. AI only works when all these machines and mechanics--each of whom understands pieces of the overall system--coordinate _their_ work. Impressive as it is, the whole system is far more fragile than it looks.
I'd love to read more about this - I am basically riffing on a mixture of secondary sources and sketchy and highly partial understanding of the original, rather than any deep engagement.
<High five> for lively speculation based on secondary sources and sketchy and highly partial understanding of the original! I’m just trying to effervesce alongside you.
typo immantenizing -> immanentizing
Thanks!