In praise of negativity
On average, we're better at criticizing others than thinking originally ourselves
[I’m on holiday this week, enjoying the characteristically delightful weather of the West of Ireland in December/January, so this is an old post. It’s also, I think, the best short thing I’ve ever written, in large part because it builds on the excellent ideas of other people]
Andrew Gelman has a post on the benefits of negative criticism, where he talks about the careful methodological demolitions he has done of others’ research that he has found to be slipshod.
As Gelman puts it: "if you want to go against the grain you have to work harder to convince people. My point is that this is the exact opposite of Cowen's claim that following his advice 'Avoid criticizing other public intellectuals. In fact, avoid the negative as much as possible' will force you to keep on thinking harder."
I’m in favor of a strong culture of criticism, but for a quite different reason: because serious criticism is probably the most valuable contribution we can make to the cognitive division of labour. There’s a possibly mistaken understanding of a truly excellent social science book behind this argument.
The book is Hugo Mercier and Dan Sperber’s The Enigma of Reason. I should note that Hugo is a co-author, but is absolved of any mistakes I make in interpreting his ideas. I should also note that his book with Sperber is at the least a groundbreaking work. In my opinion, it’s a straightforward classic.
Mercier and Sperber’s basic argument is, as I understand it, as follows. First – that reasoning has not evolved in the ways that we think it has – as a process of ratiocination that is intended independently to figure out the world. Instead, it has evolved as a social capacity – as a means to justify ourselves to others. We want something to be so, and we use our reasoning capacity to figure out plausible-seeming reasons to convince others that it should be so. However (and this is the main topic of a more recent book by Hugo), together with our capacity to generate plausible-sounding rationales, we have a decent capacity to detect when others are bullshitting us. In combination, these mean that we are more likely to be closer to the truth when we are trying to figure out why others may be wrong, than when we are trying to figure out why we ourselves are right.
This has important consequences. The problem is that our individual reasoning processes are biased in ways that are really hard for us (individually) to correct. We have a strong tendency to believe our own bullshit. The upside is that if we are far better at detecting bullshit in others than in ourselves, and if we have some minimal good faith commitment to making good criticisms, and entertaining good criticisms when we get them, we can harness our individual cognitive biases through appropriate group processes to produce socially beneficial ends. Our ability to see the motes in others’ eyes while ignoring the beams in our own can be put to good work, when we criticize others and force them to improve their arguments. There are strong benefits to collective institutions that underpin a cognitive division of labor.
This superficially looks to resemble the ‘overcoming bias’/’less wrong’ approaches to self-improvement that are popular on the Internet. But it ends up going in a very different direction: collective processes of improvement rather than individual efforts to remedy the irremediable. The ideal of the individual seeking to eliminate all sources of bias so that he (it is, usually, a he) can calmly consider everything from a neutral and dispassionate perspective is replaced by a Humean recognition that reason cannot readily be separated from the desires of the reasoner. We need negative criticisms from others, since they lead us to understand weaknesses in our arguments that we are incapable of coming at ourselves, unless they are pointed out to us.
Even if most of the action is going to be at the collective or group level, there are some possible lessons for how we ought to behave individually (some individual dispositions will make us more likely to contribute to, or benefit from, collective debate).
Most obviously, serious criticism/disagreement is one of the most valuable things that we can do or we can get as public intellectuals (for values of public intellectual that mean no more and no less than someone who wants to think and argue in public). On average, our criticism of others is going to be closer to the truth than our own original thoughts. Furthermore, our original thoughts are likely to be valuable just to the extent that they’re responsive to serious criticism from others, and have been modified in response to previous rounds of criticism. More broadly, reasoning well will often be less about reasoning purely, than being reasonable (i.e. being open to others’ reasoning).
When we criticize others, we should try to do so non-pejoratively, but crisply and clearly. Randall Jarrell says that “a good motto for critics might be what the Persians taught their children: to shoot the bow and speak the truth.” He’s completely right – but the critic doesn’t have to be an asshole about it. It’s likely that some people will still find plainly stated criticisms obnoxious (this may not be a good way to build alliances), but they will be more likely to benefit from the criticisms if they are clear rather than circumspect.
Furthermore, while defending our own (inevitably biased) perspectives, we should be open not only to the likelihood that people with other perspectives have important things to say, but that on average they will have a better understanding of the weaknesses of our ideas than we do ourselves. We should look to cultivate good criticisms from others, and in particular people from different perspectives, whose criticisms are more likely to hit on weaknesses in our own reasoning that aren’t visible either to us or those who agree with us.
As a corollary, what may initially seem to us as trolling (and sometimes, what actually is trolling), may contain valuable criticisms that we may benefit from. The tradeoff is that diversity of perspective is typically correlated with diversity of goals – someone who disagrees with how you see the world is also likely to want different things from it. But you should still push towards the margins of diversity as best you can, since it is at those margins that you will get the most unexpected criticisms, even if some of those criticisms are irrelevant, since they presuppose that you should want different things than those that you do want. There are judgment calls as to where you stop – but you should do your best to be open to criticisms that are intelligent, clearly expressed, and plausibly constructive with respect to the goals that you want to achieve, rather than overtly destructive of them.
So what this all points to is something very different than the pursuit of bias-free reason that’s still popular across much of the Internet. It’s not about a radical individual virtuosity, but a radical individual humility. Your most truthful contributions to collective reasoning are unlikely to be your own individual arguments, but your useful criticisms of others’ rationales. Even more pungently, you are on average best able to contribute to collective understanding through your criticisms of those whose perspectives are most different to your own, and hence very likely those you most strongly disagree with. The very best thing that you may do in your life is create a speck of intense irritation for someone whose views you vigorously dispute, around which a pearl of new intelligence may then accrete.
Of course, collective reasoning is not the only desideratum of public debate. Much argument is about politics, persuasion and collective action, where a very different logic applies. The advice in this post is advice for public intellectuals but not for politicians. Weber’s essays on science and politics as vocations are useful here, and in particular his defense of the nobility of the political hack. As a hack, your professional duties are different, and the logic outlined here is at best of questionable benefit.
Furthermore, there is an obvious clash between the collective benefits of reasoning, where one provides most value added through improving the ideas of others, and the individual rewards of being a public intellectual (this time in the sense of actual career) where one does best through polishing one’s own reputation. The counterpoint is that we likely radically underestimate the importance of the invisible and non-individually lucrative contributions that people make to the collective benefit by improving others’ ideas.
One of my favourite passages from anywhere is the closing of Middlemarch, where Eliot says of Dorothea:
Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.
Striving to be a Dorothea is a noble vocation, and likely the best we can hope for in any event; sooner or later, we will all be forgotten. In the long course of time, all of our arguments and ideas will be broken down and decomposed. At best we may hope, if we are very lucky, that they will contribute in some minute way to a rich humus, from which plants that we will never see or understand might spring.
Very good indeed.
This 'we are better at criticism of the statements of others than creating our own' is, I think, an important piece of the puzzle regarding 'best decision making'. Another piece is how our own convictions work, and why they are normally stronger than our own observations and reasoning. For evolutionary reasons, I estimate (both for the speed of the individual and the effectiveness of the tribe), it might thus be evolutionarily necessary for us to automatically believe our own bullshit and to believe what close 'relatives' tell us – see https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/
This 'collaborative criticism' has been part of my setup for Enterprise/IT decision making/governance since I first established it. Criticism is important, but it needs to happen in a collaborative setting (we manage criticism on a consent basis in a group; adversarial criticism doesn't work). So, we have all forms of peer review at all sorts of levels. This is embedded in the 'political organisation' that an enterprise is, but if it works well enough (and doesn't become adversarial) it can coexist with, and maybe even stabilise, the political 'hacking'. But I have also seen 'political hacking' destroy the 'collaborative criticism' when it was seen as 'adversarial'.
Food for thought. Thank you for repeating it here.
No one - so far - is making negative comments about this piece - which would be in the spirit of the piece, right?
One thing - Gelman doesn’t exactly quote Cowen out of context, but the full quote makes clear that Cowen thinks the risk of going negative is that it’s the easy way out - it can be an obstacle to thinking harder.
“2. Avoid criticizing other public intellectuals. In fact, avoid the negative as much as possible. However pressing a social or economic issue may be, there is almost always a positive and constructive way to reframe your potential contribution. This also will force you to keep on thinking harder, because it is easier to take apparently justified negative slaps at the wrongdoers.”
https://marginalrevolution.com/marginalrevolution/2020/02/how-public-intellectuals-can-extend-their-shelf-lives.html