I think this is basically right, but I'd add one important detail-- the rise of online communities as a central forum for identity formation and political discourse has necessarily led to the collapse of offline communities organized around place, vocation, etc. that previously facilitated these processes. (Maybe "necessarily led to" is a bit too strong and it's more accurate to say "been accompanied by," but my hunch is that the stronger version is closer to the truth-- people have a finite capacity to participate in various communities and form political conclusions, and the rise of one forum's importance past a certain point necessarily implies the fall of another.) This is really, really bad for the sort of collective democratic thinking that you highlight in the piece as the bedrock for healthy democratic government, since that process relies on each community in the polity first having preferences about issues that are germane to them and then secondly having more-or-less proportional representation so that these preferences are backed at a level appropriate for their prevalence. Here I'm reminded of Sam Rosenfeld and Daniel Schlozman's comment on Know Your Enemy that candidates from both parties have adopted a more national, homogeneous rhetorical affect and set of positions as the parties have been hollowed out. It used to be that a Republican stump speech in Nebraska would sound completely different from a stump speech in New York or Florida, but now you turn on any of the three and they're all talking about trans kids in bathrooms.
I like this explanation as a framework for thinking about why online publics are so poisonous for our political life because it draws attention to several revealing parallels. First off, these forums are accelerating the homogenizing process that Rosenfeld and Schlozman were mostly talking about in the context of campaign finance. (I think they may have also discussed social media briefly, but it's been a while since I heard this interview and I'm not sure.) Both of these phenomena untether political elites from a set of positions that they'd otherwise have to represent by raising the salience of other issues, but social media is arguably more dangerous because users understand the information they encounter as something natural & intrinsic to their own community rather than something coming from the outside political sphere. Social media thus resembles the Brooks Brothers riot, the Tea Party, and other "astroturfed" political movements and affiliations in American history.
Second, this warping of various communities that were previously sorted into rough political districts by affinity shares some striking similarities to previous dislocations in regional politics caused by breakthroughs in information technology-- talk radio, TV, even the printing press if you want to go all Imagined Communities about it. Again, a clear difference here is the apparent "democratized" nature of social media, which is actually financialization slipping in the back as a wolf in sheep's clothing. This new technology is capable of creating new publics and shaping old ones in ways that previous technologies were not in large part BECAUSE each user on one of these platforms believes that they and their peers are actually the actors responsible for creating and shaping the discourse and the platform itself. This feature of social media should draw our attention to past forms of participatory community formation that were brought about by changes in the information landscape (for instance, call-ins on conservative AM radio) as early forerunners of social media that deserve reappraisal. It should also remind us of the letters-to-the-editor or personals sections of the now-obsolete local newspaper, again demonstrating how local communities are impoverished and dissolved by the internet and how this dissolution largely operates through economic dislocation.
It's definitely a scary moment! I think that the superficial freedom to move about the internet and associate with one's chosen community did a lot to obscure the ways that this social transformation put so much of our cultural and political life directly in the grasp of the tech companies and oligarchs that organize our virtual spaces. After all, a naive understanding of the internet that ignores the role of algorithms, moderation, and advertising would lead a person to conclude that online we're all just individual actors, making our own individual choices just as we might offline as we move through this new virtual space that Silicon Valley has created to connect the world. This focus on individual agency and the interconnectedness of previously separate actors also suggests that the neoliberal turn has something to do with the widespread adoption of online life in favor of offline life (or maybe it's more accurate to say that the two phenomena feed one another). Like you, I don't know how we might get out of this mess, but the framework you've laid out here is a great start to understanding the problem that has definitely changed my thinking in several important ways.
Love this: “This new technology is capable of creating new publics and shaping old ones in ways that previous technologies were not in large part BECAUSE each user on one of these platforms believes that they and their peers are actually the actors responsible for creating and shaping the discourse and the platform itself.”
I agree and am seeing this in our local politics online, which I mention in a comment further down.
I’m bemused by how both Farrell and this commenter choose their examples, as it’s clear that Musk and Zuck are responding to political winds that are a reaction to the ten years prior. Just because an elite is more numerous than two doesn’t make it democratic. Eliding the U.S. Government’s role in jawboning Twitter and Facebook from 2020 on also misses a significant part of what Zuck’s announcement was about; I don’t think stopping censorship on certain topics is the same as privileging them, as Farrell and many others imply.
But there’s a lot to agree with here. The distortion of reality via social media is well-worn territory, and it was the mechanism that convinced many corporations to “get woke”, or at the very least to fire troublesome employees who voiced their opinions well outside the normal boundaries of the office. But when you replace “algorithm” with “editor” it becomes clearer what we’ve lost. Editors brought their own biases, which were the biases of their social groups, and we can easily point out ways that went wrong (e.g., “Crack Baby” and “Superpredator”, or going further back, “Kitty Genovese”), but *good* editors went out of their way to search for other opinions, for facts that would upset the current paradigm. What’s a “good” algorithm in this context? There doesn’t seem to be one, at least yet.
I happen to like Substack as an antidote (so one answer to Farrell’s call for alternatives is Madge’s classic “you’re soaking in it”). It has a lot of the old Blogosphere feel to it, and I would certainly like to return to a “reality based” version of the Left, but if I’m being honest, I gravitate towards what interests me, irrespective of algorithm, because there are certain people I read, and others I stay away from.
There are two great insights in this post:
- First, that the problem with social media is not misinformation.
- Second, the porn analogy about how porn presents people with a distorted view of sex, which in turn can warp people's own views of what they want or should want and how social media does the same with political discussion.
But the last section goes awry. The problem isn't Musk or Zuckerberg. The problem is the structure of the discussion software itself (particularly Twitter and clones like Bluesky).
Message platforms that are built on short messages and short replies; that can thread in any direction; that allow unlimited, immediate posting; and that support anonymity are inevitably going to descend into snark, incivility, and tribalism no matter who owns or runs them. Platforms like Twitter and Bluesky are always going to devolve into showing us the worst version of ourselves in discussions of controversial issues, which in turn warps our view of these issues (for exactly the reason captured in the porn analogy).
What is needed for better social media is a discussion platform that helps participants show the best of themselves rather than the worst. Such a platform would:
- Encourage longer posts.
- Encourage people to share what they believe and why rather than responding to the posts of others.
- Have tools in place that prevent discussions from being dominated by the few (who tend to be the most extreme); see the sketch after this list.
- Reward folks who engage constructively.
- Not allow (or at least discourage) anonymity.
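To make the "tools" bullet concrete, here is a minimal sketch of one such mechanism, assuming a hypothetical platform where each reply carries an author and a base quality score (say, from peer ratings). Everything here, from the function name to the decay factor, is my own illustration rather than any real platform's API: each successive post by the same participant gets down-weighted, so a prolific few can't crowd everyone else out.

```python
from collections import defaultdict

def rank_replies(replies, decay=0.5):
    """Order replies so no single participant dominates a thread.

    `replies` is a list of (author, base_score) pairs; base_score
    might come from peer ratings or post length. Each successive
    post by the same author is down-weighted by `decay`, a toy
    version of the "prevent domination by the few" tool above.
    """
    posts_so_far = defaultdict(int)
    weighted = []
    for author, base_score in replies:
        weight = decay ** posts_so_far[author]  # 1.0, 0.5, 0.25, ...
        weighted.append((base_score * weight, author))
        posts_so_far[author] += 1
    return sorted(weighted, reverse=True)  # highest adjusted score first

# "a" posts three times, "b" once: b's single post now outranks
# a's second and third posts despite a lower base score.
print(rank_replies([("a", 10), ("a", 9), ("b", 6), ("a", 8)]))
```

The same diminishing-weight idea could just as easily feed a "reward constructive engagement" feature instead of a ranking; the point is only that such tools are straightforward to build once a platform decides it wants them.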
You are overstating the importance of short posts. Mastodon does all the other things you mention (except re anonymity, which is problematic) and works well. But lots of people accustomed to Twitter find it too dull. Bluesky is somewhere in between: it can be snarky but is nowhere near as toxic as Twitter, let alone X.
John: It's not just the character limits on posts that make a platform like Twitter so toxic. It's the combination of character limits, endless threading, the lack of limits on who can contribute, and anonymity.
But there is no doubt that the character limits are an important contributor. The problem is such limits force people to cut right to the chase about the point they're making and cut off their ability to express appreciation, to acknowledge common ground, and to provide context. These qualities are critical to promoting constructive conversations between people with different perspectives.
And there is no way to do this in replies on Twitter/Bluesky/Threads (probably Mastodon as well, though I've never used it) without posting multiple messages, which won't necessarily be read in the way they're intended because of the way discussions fragment and thread. So there is tremendous pressure to reduce your response to a single core point. And since so much of these debates is about signaling your tribal affiliation, gaining attention, and scoring virtual points, snark inevitably becomes the dominant key of discussions.
Now, to be clear, this can be lots of fun. The ruthless editing that posting on these platforms requires means the content-to-word ratio is incredibly high. You get all meat and none of the filler. There is a reason these platforms are compelling.
But here is where Henry's porn analogy is so good. Porn can also be lots of fun. And interestingly, porn has gone through its own Twitterization process. No longer (at least for the vast majority of porn users), is there any set up or backstory. You're immediately brought to the, er, meat of whatever type of sex you're there for. And to be clear, I'm not complaining.
But there is no doubt that this has nothing to do with the way good sex works in the real world. And there is also no doubt in my mind that a world where people's perception of sex comes mainly through porn is a stultifying one.
So I think it's important that people treat political discussion on Twitter/Bluesky/Threads (and Facebook, TikTok) in a way analogous to porn: as a guilty pleasure that can be fun in small doses but not where healthy people should be spending a significant amount of their time.
Gordon Strause wrote:
"First, that the problem with social media is not misinformation."
The is/isn't framing is too simplistic. Disinformation is a *massive, existential problem* on social media right now -- like Snake Oil, counterfeit money, or counterfeit merchandise in the real world. Worse, it is hawked by a virtual Con Man (machine-powered algorithms) with superpowers of ubiquity, real-time tracking, and simultaneous knowledge of the individual psychological vulnerabilities of millions of potential Marks. Just as we try to maintain zero tolerance for counterfeit drugs, money, credentials, merchandise, etc. in the real world, we must attempt the same for counterfeiting (i.e., disinformation) in the virtual world.
The recent UK riots and COVID snake oil cures --fueled by disinformation on social media-- are indisputable and deadly examples of what any proposed solutions must address. P.S. misinformation and disinformation aren't the same thing; I'm thinking mostly about DIS-information, which Henry used in his title.
With the exception of stuff like Russian-authored posts deliberately designed to create confusion (which do exist but are essentially a rounding error in terms of their number and impact), there is almost never a clear line between what you seem to be calling disinformation and misinformation. To use the COVID cures example, the vast majority of people posting about stuff like ivermectin legitimately believed it (and in fairness, there were not-crazy reasons to do so for a long time).
But regardless, the real point I’m making is that the platforms’ efforts to deal with this problem through fact checking and removal of posts were utterly ineffective. The ONLY way to improve the information ecosystem is to develop more constructive software/venues for political discussions that help bring out the best in people rather than the worst and to encourage them to see people who disagree with them as fellow citizens with different perspectives rather than enemies. In Haidt’s framework from The Righteous Mind, the elephant is ultimately in charge and people are going to believe what they want to believe. If you want them to believe more constructive things, you have to develop systems that make them want to.
Disinformation on social media is akin to counterfeiting in the real world. It's when someone *deliberately* fabricates a falsehood --comparable to forging a Bank Check-- to deceive others for personal gain. This is what far-Right actors did in the UK to foment the anti-immigrant riots last summer.
https://www.bbc.com/news/articles/cl4y0453nv5o
Misinformation is simply false or inaccurate information that someone shares out of ignorance, without intending to mislead. An example is the common claim that "the violent crime rate in America has been increasing since the end of the pandemic."
Dis-information is a huge problem on social media. The Institute for Strategic Dialogue think tank determined that the UK riots were fueled to a great degree by the very algorithms employed by the social media companies themselves. We would not accept companies using social media algorithms that encourage the grooming of children or the making of pipe bombs; why then should we accept their algorithms that promote racist riots?
COVID disinformation was vastly more than Russians pushing ivermectin. It was QAnon, Agenda 21, false claims of people dropping dead from vaccines, "masks don't work", hydroxychloroquine, violent assaults on health care providers, burning 5G towers, etc, etc. --all supercharged by the business model of social media companies and the absence of consequences for their behavior.
I am not advocating we continue the exact practices that you say have failed to curb disinformation to date. I'm simply noting that we can't deny disinformation on social media *is* a massive problem. I honestly don't know what would be the most effective way to stop it.
The problem is in the app, but it absolutely is relevant who owns and controls the platform. Own it, yes, then stfu. They are not mere owners; they are influencers and propagandists, same as the Kardashians, and I don’t think I’d like the algorithms they established either. Zuckerberg collected and sold personal data for purposes of influencing an election. Who they are and how they run their companies is important. The tech community, via the gaming event, is just now catching on that Musk is a charlatan; the info has been out there all along.
It’s even worse when we call news “news” when it’s not. And confusing for consumers of news generated by formerly esteemed journalistic orgs like NYT and WAPO.
Wow! I will be reflecting on this for quite a while. The chain of argument is outstanding and paints a very solid picture of group political culture and social dynamics in the private digital platform era. Great work!
In Ernest Gellner's conception of civil society (from Conditions of Liberty), he describes the mutually-coupled bidirectional feedbacks between civil society and the state that provide the stability necessary for democratic self-governance. The state is checked by institutions that have an economic base while still being dependent on economic growth. Individual interests are checked by the state while being entrusted with electing the legislative and executive directors of the state.
Proceeding by analogy, democratic publics and the institutions of information aggregation and dispersal must have mutually-coupled feedbacks that similarly constrain each other while allowing growth through co-evolution. Democratic publics need consent to be manufactured somehow, but they cannot do it without another institution to direct the process and they must have some control over that other institution. In the age of vertical information channels, the free press fulfilled that function reasonably well, notwithstanding the criticism of Herman and Chomsky, and others. The limited number of news outlets built prestige through impressing the public and selling subscriptions that would in turn attract advertisers to pay for journalism. There were feedbacks going in both directions that allowed the public and the press to constrain and influence each other and so self-organize their coupled growth.
Our current era of horizontal information channels, largely social media, has broken that coupling by finding a way to monetize the public's attention without any input from that public, so they can manufacture consent by diktat rather than through the give-and-take of a coupled process.
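To caricature the difference (a toy model of my own with made-up constants, not anything from Gellner or from Henry's post): in the coupled regime each side feeds the other's growth and is checked by it, while in the decoupled regime the institution grows on extracted attention and the public has no reciprocal lever.

```python
def simulate(steps=50, coupled=True):
    """Toy model of the press/public coupling sketched above.

    press  = capacity of the information institution
    public = how well-informed the democratic public is
    All constants are illustrative; only the qualitative shapes
    (mutual growth vs. one-sided extraction) matter.
    """
    press, public = 1.0, 1.0
    for _ in range(steps):
        if coupled:
            press += 0.05 * public - 0.03 * press     # grown and checked by the public
            public += 0.05 * press - 0.03 * public    # grown and checked by the press
        else:
            press += 0.10 * press                     # grows on monetized attention alone
            public = max(0.0, public - 0.02 * press)  # eroded, with no lever to push back
    return round(press, 2), round(public, 2)

print("coupled:  ", simulate(coupled=True))    # both rise together
print("decoupled:", simulate(coupled=False))   # press balloons, public collapses
```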
As Henry's post so artfully makes clear, this is disastrous for democracy. When thinking about what we can do to address this catastrophic failure, I think we should be looking for ways to re-couple the democratic public and the information environment in much the same way that Gellner describes the coupling between civil society and the state.
I think this is very insightful, but I'm struggling to think of ways to establish the re-coupling. The only things I can think of are to insist on public accountability over information platforms, which can only be achieved via regulation and sanctions, or public participation in platform governance. Neither seems very likely. It feels like it's a runaway train...
It's definitely a runaway train, with very powerful interests dedicated to keeping it going. Still, we have to try.
It's probably going to be difficult to figure out a business plan that re-couples journalism and the publics that need to be informed, but we can regulate the social media companies to prevent monetization schemes that hinder or prevent that re-coupling. E.g., the sale of surveillance data on users to 3rd parties, without the users' participation in that transaction, clearly violates the spirit of two interacting entities that constrain each other while being dependent on each other's growth.
What about an institution that more closely tied public funding of research to voters? Currently most science is funded through grants that are managed by bureaucrats, but if we made that process more transparent and gave voters more direct access to it, we might be able to create a new set of constraints and incentives that achieves the balance you are describing while informing the people.
As an aside, you seem like someone who might be more involved in taking action regarding this problem. Is there any community or movement that you could direct me toward?
🎯
>This isn’t brainwashing - people don’t have to internalize this or that aspect of what social media presents to them, radically changing their beliefs and their sense of who they are. That sometimes happens, but likely far more rarely than we think. The more important change is to our beliefs about what other people think, which we perpetually update based on social observation
Thanks for this. I have been struggling to articulate a tension around how "brainwashing" or "propaganda" models of understanding social media feel overly cynical (they give people no possible way to "think" on their own terms), despite the obvious brokenness of much of existing social media.
This feels right as a resolution, and it has nice parallels to discourses around physical spaces in an architectural sense. "There aren't enough third spaces"; "we lack multigenerational spaces"; "anti-homeless architecture"; etc. These are, much in the same way, discussions around malformed publics. I wonder to what degree we can look there for models of how digital publics should be designed.
One way "malformed publics l" are created is the sheet fact that comment sections self-select for fanbois vs haters vs sealions etc. The vast majority of people who look at a YT video and say "meh" or "I like it" but don't care enough to type have zero representation. We could see this in the current election in which a largely successful President was undone (pre-Debate) by a discourse dominated by haters, phony "grade school litter box" stories and "Biden is a demented old mummy" memes. The platforms, wittingly or unwittingly, are designed to amplify the most extreme voices and mute everything in-between.
🎯
"In short, the technologies through which we see the public shape what we think the public is. And that, in turn, shapes how we behave politically and how we orient ourselves. We may end up believing - in a highly specific way - in things that we know we are ‘supposed’ to believe, given that we are Republicans or Democrats, Conservative or Labour Party members. We may end up not believing these things, but also declining to express our actual beliefs publicly, because we know we’re not supposed to believe whatever it is that we privately think."
This is dead-on correct. Perhaps the natural progression of Guy Debord's "Spectacle" - from mediation of social relationships by images to mediation by representations "produced" by technology. (The Society of the Spectacle https://g.co/kgs/nExwAvC).
Fantastic piece! And this is the first time I've re-heard the Polish joke that my grandfather used to tell.
This piece is such a sharp and necessary reframing of the challenges social media poses to democracy. The idea that platforms are less about "misinforming individuals" and more about "malforming publics" feels especially urgent as we see how algorithms reshape collective understanding in ways that amplify division.
I’ve been thinking a lot about how this aligns with broader systemic issues, particularly around anti-racism and equity. For instance, the same dynamics that create malformed publics also mirror characteristics of white supremacy culture—like perfectionism, either/or thinking, and defensiveness—which can stifle collective problem-solving in other spheres.
Your point about the technologies that shape "what we think others think" is critical. What do you see as the most promising strategies for countering these distortions? How do we build healthier publics that encourage inclusivity and long-term thinking?
One simple measure that might assist members of the public to fight back against the arbitrary shaping powers of social media is to repeal section 230 of the Communications Decency Act of 1996. Section 230(c)(1) provides immunity for interactive computer services with respect to third-party content generated by their users. I suspect that the political will to take this step may well emerge over the next few years as a result of Musk and Zuckerberg’s current actions. I wonder if anyone has tried to project the impact such a step may have on the whole social media hyperobject.
This doesn't feel like a problem that can be solved via one jurisdiction's laws. These platforms are global. They need to be regulated accordingly.
This is a very enlightening read. There are several points here that I have been thinking about for ages, especially since the election. Most other discussions about “disinformation” are way too neat, tidy and reductive. What you discuss is actually far more insidious and difficult to remedy. As you say, how can we deal with these issues when a handful of people control these algorithms? Doesn’t bode well for the next few years.
Great piece—thanks for sharing. I think an especially pernicious effect of the malformed public sphere comes from how *publicly* it’s malformed—as in, we all agree that our democracy isn’t working as it should be. But, we deeply disagree on whose fault it is; each side thinks (partially based on misinformation) that the other guys are the ones causing the problems. So, rather than a public sphere in which we may collectively pursue a common goal of justice, we have bitterly partisan publics where pursuing justice means beating the other guys before they can beat us.
Yes. Have been arguing for a while that mis/disinformation misses the point. https://larger.us/ideas/revolution-will-not-be-on-social-media/
Great post in the link Hugh! Spot on.
thanks! @gordonst - glad you liked it.
Henry, you are a genius. But in the spirit of your post, I do not ask you to solve our problems. 😅 Building democratic institutions is a much better plan.
Excellent essay, a lot to chew on and revisit. I anticipated the malformed publics arising from people’s behaviors online, although I’m thinking specifically of budget disagreements in my city, where an organized and vocal group is getting increasingly nasty online (and in person) and gives the impression that this tiny but very active minority represents “the people,” if you will. To your point, my (and others’) observation of what the public wants comes into question based on the actions of this group.
It feels like Trumpian and Musk-era behaviors are rewarded, in part by the algorithm boosting the angry name-calling and pile-ons.
What great insights. One additional aspect it makes me consider is that social media removes the mirror of an actual physical person in favor of one that reflects ourselves, which is part of why we have no defense against the imagined public we are creating via social media. While it feels like addressing a mountain of a problem with a spoon, reengaging in actual in-person publics, particularly those that lie outside one’s inner circle, seems an important step to take.