READING SUBTLY

This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
 

Dennis Junk

Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Joshua Greene’s book “Moral Tribes” posits a dual-system theory of morality, in which a quick, intuitive system 1 makes judgments based on deontological considerations—“it’s just wrong”—whereas the slower, more deliberative system 2 takes time to calculate the consequences of any given choice. Audiences can see these two systems on display in the series “Breaking Bad,” as well as in critics’ and audiences’ responses to it.

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer for Walt to be dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse would suspect Walt of trying to kill Brock as punishment for Jesse betraying him and going to work with Gus. But Walt will convince Jesse that this is really just Gus’s ploy to trick Jesse into doing what Jesse has up till now forbidden Gus to do—kill Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him perform the deed. Walt will then be able to get Jesse to give him the crucial information he needs about Gus to figure out a way to kill him.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, which motivate us to behave in ways that over evolutionary history have helped our ancestors transcend their selfish impulses to live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a train from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument takes for granted the very thing at issue: how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

The obverse is that many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multi-participant exchange scenario developed by economic game theorists called the Public Goods Game, in which a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game theory exchange known as the Prisoner’s Dilemma, the outcomes of the Public Goods Game reward cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride on everyone else’s contributions and make an even greater profit. What tends to happen is that, over multiple rounds, the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons. Everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
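For readers who want to see the unraveling mechanic laid bare, here is a minimal simulation sketch of the game as described above: contributions are pooled, doubled, and split evenly, so free riding always pays at the individual level. The player count, the stakes, and the simple conditional-cooperation rule are my own illustrative assumptions, not parameters from Greene’s experiments.

```python
import random

# Minimal sketch of the Public Goods Game described above.
# Assumptions (mine, not Greene's): 4 players, a 10-unit endowment per
# round, the pot doubled and split evenly, and "conditional cooperators"
# who grow likelier to defect once they see others free-riding.

N_PLAYERS = 4
ENDOWMENT = 10.0
MULTIPLIER = 2.0
ROUNDS = 10

def play_round(cooperating):
    """Return each player's payoff given a list of cooperation decisions."""
    pot = sum(ENDOWMENT for c in cooperating if c) * MULTIPLIER
    share = pot / N_PLAYERS
    # Cooperators gave up their endowment; free riders kept theirs.
    return [share if c else share + ENDOWMENT for c in cooperating]

def simulate():
    willing = [True] * N_PLAYERS  # everyone starts out cooperative
    for r in range(1, ROUNDS + 1):
        payoffs = play_round(willing)
        print(f"round {r}: {sum(willing)} cooperators, payoffs {payoffs}")
        if all(willing):
            # Seed the first defection at random.
            willing[random.randrange(N_PLAYERS)] = False
        else:
            # Seeing free riders profit, each cooperator may defect next round.
            willing = [c and random.random() > 0.5 for c in willing]

simulate()
```

Run over ten rounds, cooperation typically collapses within a few iterations—the same unraveling Greene describes—whereas adding a punishment stage, as in the experiments discussed next, changes the incentives enough to sustain it.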

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the philosophers have been right all along in deferring to human intuitions about right and wrong.

As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when given free rein in a society composed of large groups of people who are strangers to one another, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center, led by Karen Wynn, Kiley Hamlin, and Paul Bloom (and which Bloom describes in a charming and highly accessible book called Just Babies), has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might previously have ascribed to lessons learned from adults is actually innate. Experiments based on game theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to game theory scenarios like the Prisoner’s Dilemma and the Public Goods Game, they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth; on the other hand, cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.

But how can a moral sense be both innate and culturally variable?  “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamelara of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, people acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere more apparent than in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because, in his view, reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions, and reason merely serves as a sort of PR agent to rationalize judgments after the fact, Haidt enjoins us to be more accepting of rival political groups—after all, you can’t reason with them.

Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable as opposed to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out the values of his respondents has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees our manual-mode thinking as playing a potentially much greater role than Haidt allows.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion to season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. Writing for The New Yorker, Emily Nussbaum, for instance, disparages viewers of Breaking Bad for failing to condemn Walt, writing,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about arriving at a resolution to moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as duties to perform, in keeping with deontological ethics, whenever we switch to manual mode the focus shifts to weighing the relative desirability of each option’s outcomes. In other words, manual-mode thinking is consequentialist. And, since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)
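Greene’s two abstract answers—experience matters, impartiality is essential—compress into a simple aggregation rule. The sketch below is mine, purely for illustration (the book offers no such calculus, and the option names and welfare scores are invented): a manual-mode chooser just sums equally weighted welfare across everyone affected.

```python
# Illustrative sketch only: utilitarian choice as equal-weight aggregation.
# Option names and welfare scores are invented for the example.

def utilitarian_choice(options):
    """Return the option with the greatest total welfare, each person's
    experience weighted equally (the 1/n weight cancels in the comparison)."""
    return max(options, key=lambda name: sum(options[name]))

# One welfare score per person affected by each option.
options = {
    "push":    [-100, 90, 90, 90, 90, 90],          # the one dies; the five live
    "abstain": [90, -100, -100, -100, -100, -100],  # the five die
}

print(utilitarian_choice(options))  # -> push
```

The point of the toy example is only that the equal weighting leaves no room for tribal favoritism: whatever enters the calculation enters on the same terms for everyone.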

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral is to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Negotiators representing either the prosecution or the defense were told to focus either on serving justice or on getting the best outcome for their clients. The negotiations in the first condition almost always ended at loggerheads. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who was right and who was wrong is like drawing a line in the sand—it activates tribal attitudes pitting us against them, while treating negotiations more like an economic exchange circumvents these tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show that efforts to improve the quality of experiences would lead to atrocities. For instance, Greene recounts how, in a high school debate, he was confronted with the hypothetical case of a surgeon who could save five sick patients by killing one healthy person. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

Another failing of the thought-experiments supposed to undermine utilitarianism is the shortsightedness of the seemingly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and thus losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call "rule utilitarianism." We could approach every choice by calculating the likely outcomes afresh, but as a society we would be better served by deciding on some rules for everyone to adhere to. It just may be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of Our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree on which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as of yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world—but they do undermine any claim utilitarianism may have on absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in the realm of moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock-solid moral ideology but a workable moral epistemology. And, just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even further, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against one another. In season one of Breaking Bad, for instance, Walter White famously writes out on a notepad a list of the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

            The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or challenge postmodern ideas, scholars still take them as cause for leveling accusations against both scientists and storytellers of using their work to further reactionary agendas.

For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and novels in particular, were likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the nineteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward greater clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When Walt poisons Brock, we’re glad he succeeded in saving his family—some of us are even okay with his methods—but we’re worried, suspicious even, about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break, we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.

Also read:

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality

Is violence really declining? How can that be true? What could be causing it? Why are so many of us convinced the world is going to hell in a handbasket? Steven Pinker attempts to answer these questions in his magnificent and mind-blowing book.


Steven Pinker is one of the few scientists who can write a really long book and still expect a significant number of people to read it. But I have a feeling many who might be vaguely intrigued by the buzz surrounding his 2011 book The Better Angels of Our Nature: Why Violence Has Declined wonder why he had to make it nearly seven hundred outsized pages long. Many curious folk likely also wonder why a linguist who proselytizes for psychological theories derived from evolutionary or Darwinian accounts of human nature would write a doorstop drawing on historical and cultural data to describe the downward trajectories of rates of the worst societal woes. The message that violence of pretty much every variety is at unprecedentedly low rates comes as quite a shock, as it runs counter to our intuitive, news-fueled sense of being on a crash course for Armageddon. So part of the reason behind the book’s heft is that Pinker has to bolster his case with lots of evidence to get us to rethink our views. But flipping through the book you find that somewhere between half and a third of its mass is devoted, not to evidence of the decline, but to answering the questions of why the trend has occurred and why it gives every indication of continuing into the foreseeable future. So is this a book about how evolution has made us violent or about how culture is making us peaceful?

The first thing that needs to be said about Better Angels is that you should read it. Despite its girth, it’s at no point the least bit cumbersome to read, and at many points it’s so fascinating that, weighty as it is, you’ll have a hard time putting it down. Pinker has mastered a prose style that’s simple and direct to the point of feeling casual without ever wanting for sophistication. You can also rest assured that what you’re reading is timely and important because it explores aspects of history and social evolution that impact pretty much everyone in the world but that have gone ignored—if not censoriously denied—by most of the eminences contributing to the zeitgeist since the decades following the last world war.

            Still, I suspect many people who take the plunge into the first hundred or so pages are going to feel a bit disoriented as they try to figure out what the real purpose of the book is, and this may cause them to falter in their resolve to finish reading. The problem is that the resistance Better Angels devotes so much of its prodigious page count to anticipating and responding to doesn’t come from news media or the blinkered celebrities in the carnivals of sanctimonious imbecility that are political talk shows. It comes from Pinker’s fellow academics. The overall point of Better Angels remains obscure owing to some deliberate caginess on the author’s part when it comes to identifying the true targets of his arguments.

            This evasiveness doesn’t make the book difficult to read, but a quality of diffuseness to the theoretical sections, a multitude of strands left dangling, does at points make you doubt whether Pinker had a clear purpose in writing, which makes you doubt your own purpose in reading. With just a little tying together of those strands, however, you start to see that while on the surface he’s merely righting the misperception that over the course of history our species has been either consistently or increasingly violent, what he’s really after is something different, something bigger. He’s trying to instigate, or at least play a part in instigating, a revolution—or more precisely a renaissance—in the way scholars and intellectuals think not just about human nature but about the most promising ways to improve the lot of human societies.

The longstanding complaint about evolutionary explanations of human behavior is that by focusing on our biology as opposed to our supposedly limitless capacity for learning they imply a certain level of fixity to our nature, and this fixedness is thought to further imply a limit to what political reforms can accomplish. The reasoning goes, if the explanation for the way things are is to be found in our biology, then, unless our biology changes, the way things are is the way they’re going to remain. Since biological change occurs at the glacial pace of natural selection, we’re pretty much stuck with the nature we have. 

            Historically, many scholars have made matters worse for evolutionary scientists today by applying ostensibly Darwinian reasoning to what seemed at the time obvious biological differences between human races in intelligence and capacity for acquiring the more civilized graces, making no secret of their conviction that the differences justified colonial expansion and other forms of oppressive rule. As a result, evolutionary psychologists of the past couple of decades have routinely had to defend themselves against charges that they’re secretly trying to advance some reactionary (or even genocidal) agenda. Considering Pinker’s choice of topic in Better Angels in light of this type of criticism, we can start to get a sense of what he’s up to—and why his efforts are discombobulating.

If you’ve spent any time on a university campus in the past forty years, particularly if it was in a department of the humanities, then you have been inculcated with an ideology that was once labeled postmodernism but that eventually became so entrenched in academia, and in intellectual culture more broadly, that it no longer requires a label. (If you took a class with the word "studies" in the title, then you got a direct shot to the brain.) Many younger scholars actually deny any espousal of it—“I’m not a pomo!”—with reference to a passé version marked by nonsensical tangles of meaningless jargon and the conviction that knowledge of the real world is impossible because “the real world” is merely a collective delusion or social construction put in place to perpetuate societal power structures. The disavowals notwithstanding, the essence of the ideology persists in an inescapable but unremarked obsession with those same power structures—the binaries of men and women, whites and blacks, rich and poor, the West and the rest—and the abiding assumption that texts and other forms of media must be assessed not just according to their truth content, aesthetic virtue, or entertainment value, but also with regard to what we imagine to be their political implications. Indeed, those imagined political implications are often taken as clear indicators of the author’s true purpose in writing, which we must sniff out—through a process called “deconstruction,” or its anemic offspring “rhetorical analysis”—lest we complacently succumb to the subtle persuasion.

In the late nineteenth and early twentieth centuries, faith in what we now call modernism inspired intellectuals to assume that the civilizations of Western Europe and the United States were on a steady march of progress toward improved lives for all their own inhabitants as well as the world beyond their borders. Democracy had brought about a new age of government in which rulers respected the rights and freedom of citizens. Medicine was helping ever more people live ever longer lives. And machines were transforming everything from how people labored to how they communicated with friends and loved ones. Everyone recognized that the driving force behind this progress was the juggernaut of scientific discovery. But jump ahead a hundred years to the early twenty-first century and you see a quite different attitude toward modernity. As Pinker explains in the closing chapter of Better Angels,

A loathing of modernity is one of the great constants of contemporary social criticism. Whether the nostalgia is for small-town intimacy, ecological sustainability, communitarian solidarity, family values, religious faith, primitive communism, or harmony with the rhythms of nature, everyone longs to turn back the clock. What has technology given us, they say, but alienation, despoliation, social pathology, the loss of meaning, and a consumer culture that is destroying the planet to give us McMansions, SUVs, and reality television? (692)

The social pathology here consists of all the inequities and injustices suffered by the people on the losing side of those binaries all us closet pomos go about obsessing over. Then of course there’s industrial-scale war and all the other types of modern violence. With terrorism, the War on Terror, the civil war in Syria, the Israel-Palestine conflict, genocides in the Sudan, Kosovo, and Rwanda, and the marauding bands of drugged-out gang rapists in the Congo, it seems safe to assume that science and democracy and capitalism have contributed to the construction of an unsafe global system with some fatal, even catastrophic design flaws. And that’s before we consider the two world wars and the Holocaust. So where the hell is this decline Pinker refers to in his title?

            One way to think about the strain of postmodernism or anti-modernism with the most currency today (and if you’re reading this essay you can just assume your views have been influenced by it) is that it places morality and politics—identity politics in particular—atop a hierarchy of guiding standards above science and individual rights. So, for instance, concerns over the possibility that a negative image of Amazonian tribespeople might encourage their further exploitation trump objective reporting on their culture by anthropologists, even though there’s no evidence to support those concerns. And evidence that the disproportionate number of men in STEM fields reflects average differences between men and women in lifestyle preferences and career interests is ignored out of deference to a political ideal of perfect parity. The urge to grant moral and political ideals veto power over science is justified in part by all the oppression and injustice that abounds in modern civilizations—sexism, racism, economic exploitation—but most of all it’s rationalized with reference to the violence thought to follow in the wake of any movement toward modernity. Pinker writes,

“The twentieth century was the bloodiest in history” is a cliché that has been used to indict a vast range of demons, including atheism, Darwin, government, science, capitalism, communism, the ideal of progress, and the male gender. But is it true? The claim is rarely backed up by numbers from any century other than the 20th, or by any mention of the hemoclysms of centuries past. (193)

He gives the question even more gravity when he reports that all those other areas in which modernity is alleged to be such a colossal failure tend to improve in the absence of violence. “Across time and space,” he writes in the preface, “the more peaceable societies also tend to be richer, healthier, better educated, better governed, more respectful of their women, and more likely to engage in trade” (xxiii). So the question isn’t just about what the story with violence is; it’s about whether science, liberal democracy, and capitalism are the disastrous blunders we’ve learned to think of them as or whether they still just might hold some promise for a better world.

*******

            It’s in about the third chapter of Better Angels that you start to get the sense that Pinker’s style of thinking is, well, way out of style. He seems to be marching to the beat not of his own drummer but of some drummer from the nineteenth century. In the previous chapter, he drew a line connecting the violence of chimpanzees to that in what he calls non-state societies, and the images he’s left you with are savage indeed. Now he’s bringing in the philosopher Thomas Hobbes’s idea of a government Leviathan that, once established, immediately works to curb the violence that characterizes us humans in states of nature and anarchy. According to sociologist Norbert Elias’s The Civilizing Process, first published in 1939 and translated into English in 1969, a work whose thesis plays a starring role throughout Better Angels, the consolidation of a Leviathan in England set in motion a trend toward pacification, beginning with the aristocracy no less, before spreading down to the lower ranks and radiating out to the countries of continental Europe and onward thence to other parts of the world. You can measure your feelings of unease in response to Pinker’s civilizing scenario as a proxy for how thoroughly steeped you are in postmodernism.

            The two factors missing from his account of the civilizing pacification of Europe that distinguish it from the self-congratulatory and self-exculpatory sagas of centuries past are the innate superiority of the paler stock and the special mission of conquest and conversion commissioned by a Christian god. In a later chapter, Pinker violates the contemporary taboo against discussing—or even thinking about—the potential role of average group (racial) differences in a propensity toward violence, but he concludes the case for any such differences is unconvincing: “while recent biological evolution may, in theory, have tweaked our inclinations toward violence and nonviolence, we have no good evidence that it actually has” (621). The conclusion that the Civilizing Process can’t be contingent on congenital characteristics follows from the observation of how readily individuals from far-flung regions acquire local habits of self-restraint and fellow-feeling when they’re raised in modernized societies. As for religion, Pinker includes it in a category of factors that are “Important but Inconsistent” with regard to the trend toward peace, dismissing the idea that atheism leads to genocide by pointing out that “Fascism happily coexisted with Catholicism in Spain, Italy, Portugal, and Croatia, and though Hitler had little use for Christianity, he was by no means an atheist, and professed that he was carrying out a divine plan.” Though he cites several examples of atrocities incited by religious fervor, he does credit “particular religious movements at particular times in history” with successfully working against violence (677).

            Despite his penchant for blithely trampling on the taboos of the liberal intelligentsia, Pinker refuses to cooperate with our reflex to pigeonhole him with imperialists or far-right traditionalists past or present. He continually holds up to ridicule the idea that violence has any redeeming effects. In a section on the connection between increasing peacefulness and rising intelligence, he suggests that our violence-tolerant “recent ancestors” can rightly be considered “morally retarded” (658).

  He singles out George W. Bush as an unfortunate and contemptible counterexample in a trend toward more complex political rhetoric among our leaders. And if either gender comes out of Better Angels looking less than virtuous, it ain’t the distaff one. Pinker is difficult to categorize politically because he’s a scientist through and through. What he’s after are reasoned arguments supported by properly weighed evidence.

But there is something going on in Better Angels beyond a mere accounting for the ongoing decline in violence that most of us are completely oblivious of being the beneficiaries of. For one, there’s a challenge to the taboo status of topics like genetic differences between groups, or differences between individuals in IQ, or differences between genders. And there’s an implicit challenge as well to the complementary premises he took on more directly in his earlier book The Blank Slate that biological theories of human nature always lead to oppressive politics and that theories of the infinite malleability of human behavior always lead to progress (communism relies on a blank slate theory, and it inspired guys like Stalin, Mao, and Pol Pot to murder untold millions). But the most interesting and important task Pinker has set for himself with Better Angels is a restoration of the Enlightenment, with its twin pillars of science and individual rights, to its rightful place atop the hierarchy of our most cherished guiding principles, the position we as a society misguidedly allowed to be usurped by postmodernism, with its own dual pillars of relativism and identity politics.

  But, while the book succeeds handily in undermining the moral case against modernism, it does so largely by stealth, with only a few explicit references to the ideologies whose advocates have dogged Pinker and his fellow evolutionary psychologists for decades. Instead, he explores how our moral intuitions and political ideals often inspire us to make profoundly irrational arguments for positions that rational scrutiny reveals to be quite immoral, even murderous. As one illustration of how good causes can be taken to silly, but as yet harmless, extremes, he gives the example of how “violence against children has been defined down to dodgeball” (415) in gym classes all over the US, writing that

The prohibition against dodgeball represents the overshooting of yet another successful campaign against violence, the century-long movement to prevent the abuse and neglect of children. It reminds us of how a civilizing offensive can leave a culture with a legacy of puzzling customs, peccadilloes, and taboos. The code of etiquette bequeathed to us by this and other Rights Revolutions is pervasive enough to have acquired a name. We call it political correctness. (381)

Such “civilizing offensives” are deliberately undertaken counterparts to the fortuitously occurring Civilizing Process Elias proposed to explain the jagged downward slope in graphs of relative rates of violence beginning in the Middle Ages in Europe. The original change Elias describes came about as a result of rulers consolidating their territories and acquiring greater authority. As Pinker explains,

Once Leviathan was in charge, the rules of the game changed. A man’s ticket to fortune was no longer being the baddest knight in the area but making a pilgrimage to the king’s court and currying favor with him and his entourage. The court, basically a government bureaucracy, had no use for hotheads and loose cannons, but sought responsible custodians to run its provinces. The nobles had to change their marketing. They had to cultivate their manners, so as not to offend the king’s minions, and their empathy, to understand what they wanted. The manners appropriate for the court came to be called “courtly” manners or “courtesy.” (75)

And this higher premium on manners and self-presentation among the nobles would lead to a cascade of societal changes.

Elias first lighted on his theory of the Civilizing Process as he was reading some of the etiquette guides that survived from that era. It’s striking to us moderns to see that knights of yore had to be told not to dispose of their snot by shooting it into their host’s tablecloth, but that simply shows how thoroughly people today internalize these rules. As Elias explains, they’ve become second nature to us. Of course, we still have to learn them as children. Pinker prefaces his discussion of Elias’s theory with a recollection of his bafflement at why it was so important for him as a child to abstain from using his knife as a backstop to help him scoop food off his plate with a fork. Table manners, he concludes, reside on the far end of a continuum of self-restraint, at the opposite end of which are once-common practices like cutting off the nose of a dining partner who insults you. Likewise, protecting children from the perils of flying rubber balls is the product of a campaign against the once-common custom of brutalizing them. The centrality of self-control is the common underlying theme: we control our urge to misuse utensils, including their use in attacking our fellow diners, and we control our urge to throw things at our classmates, even if it’s just in sport. The effect of the Civilizing Process in the Middle Ages, Pinker explains, was that “A culture of honor—the readiness to take revenge—gave way to a culture of dignity—the readiness to control one’s emotions” (72). In other words, diplomacy became more important than deterrence.

            What we’re learning here is that even an evolved mind can adjust to changing incentive schemes. Chimpanzees have to control their impulses toward aggression, sexual indulgence, and food consumption in order to survive in hierarchical bands with other chimps, many of whom are bigger, stronger, and better-connected. Much of the violence in chimp populations takes the form of adult males vying for positions in the hierarchy so they can enjoy the perquisites males of lower status must forgo to avoid being brutalized. Lower-ranking males meanwhile bide their time, deferring their gratification until such time as they grow stronger or the alpha grows weaker. In humans, the capacity for impulse-control and the habit of delaying gratification are even more important because we live in even more complex societies. Those capacities can either lie dormant or be developed to their full potential depending on exactly how complex the society is in which we come of age. Elias noticed a connection between the move toward more structured bureaucracies, less violence, and an increasing focus on etiquette, and he concluded that self-restraint in the form of adhering to strict codes of comportment was both an advertisement of, and a type of training for, the impulse-control that would make someone a successful bureaucrat.

            Aside from children who can’t fathom why we’d futz with our forks trying to capture recalcitrant peas, we normally take our society’s rules of etiquette for granted, no matter how inconvenient or illogical they are, seldom thinking twice before drawing unflattering conclusions about people who don’t bother adhering to them, the ones for whom they aren’t second nature. And the importance we place on etiquette goes beyond table manners. We judge people according to the discretion with which they dispose of any and all varieties of bodily effluent, as well as the delicacy with which they discuss topics sexual or otherwise basely instinctual. 

            Elias and Pinker’s theory is that, while the particular rules are largely arbitrary, the underlying principle of transcending our animal nature through the application of will, motivated by an appreciation of social convention and the sensibilities of fellow community members, is what marked the transition of certain constituencies of our species from a violent non-state existence to a relatively peaceful, civilized lifestyle. To Pinker, the uptick in violence that ensued once the counterculture of the 1960s came into full blossom was no coincidence. The squares may not have been as exciting as the rock stars who sang their anthems to hedonism and the liberating thrill of sticking it to the man. But a society of squares has certain advantages—a lower probability for each of its citizens of getting beaten or killed foremost among them.

            The Civilizing Process as Elias and Pinker, along with Immanuel Kant, understand it picks up momentum as levels of peace conducive to increasingly complex forms of trade are achieved. To understand why the move toward markets or “gentle commerce” would lead to decreasing violence, us pomos have to swallow—at least momentarily—our animus for Wall Street and all the corporate fat cats in the top one percent of the wealth distribution. The basic dynamic underlying trade is that one person has access to more of something than they need, but less of something else, while another person has the opposite balance, so a trade benefits them both. It’s a win-win, or a positive-sum game. The hard part for educated liberals is to appreciate that economies work to increase the total wealth; there isn’t a set quantity everyone has to divvy up in a zero-sum game, an exchange in which every gain for one is a loss for another. And Pinker points to another benefit:

Positive-sum games also change the incentives for violence. If you’re trading favors or surpluses with someone, your trading partner suddenly becomes more valuable to you alive than dead. You have an incentive, moreover, to anticipate what he wants, the better to supply it to him in exchange for what you want. Though many intellectuals, following in the footsteps of Saints Augustine and Jerome, hold businesspeople in contempt for their selfishness and greed, in fact a free market puts a premium on empathy. (77)

The Occupy Wall Street crowd will want to jump in here with a lengthy list of examples of businesspeople being unempathetic in the extreme. But Pinker isn’t saying commerce always forces people to be altruistic; it merely encourages them to exercise their capacity for perspective-taking. Discussing the emergence of markets, he writes,

The advances encouraged the division of labor, increased surpluses, and lubricated the machinery of exchange. Life presented people with more positive-sum games and reduced the attractiveness of zero-sum plunder. To take advantage of the opportunities, people had to plan for the future, control their impulses, take other people’s perspectives, and exercise the other social and cognitive skills needed to prosper in social networks. (77)

And these changes, the theory suggests, will tend to make merchants less likely on average to harm anyone. As bad as bankers can be, they’re not out sacking villages.
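The logic of the positive-sum game is, at bottom, simple arithmetic, and a toy calculation makes it concrete. What follows is a minimal sketch of my own (the farmer, the herder, and their valuations are all hypothetical, not figures from Pinker's book): because each trader prizes the good he lacks more than the good he holds in surplus, the same goods add up to more total subjective value after the swap than before.

# A minimal sketch (my illustration, not Pinker's) of gentle commerce as a
# positive-sum game. The traders and their valuations are hypothetical:
# a farmer with surplus grain and a herder with surplus wool each value
# the good they lack three times more than the good they have plenty of.

def subjective_value(holdings, valuations):
    """Total value of everyone's holdings, priced by their own valuations."""
    return sum(valuations[person][good] * amount
               for person, goods in holdings.items()
               for good, amount in goods.items())

holdings = {"farmer": {"grain": 10, "wool": 0},
            "herder": {"grain": 0, "wool": 10}}
valuations = {"farmer": {"grain": 1, "wool": 3},
              "herder": {"grain": 3, "wool": 1}}

print(subjective_value(holdings, valuations))  # 20 before the trade

# They swap five units of grain for five units of wool.
holdings["farmer"]["grain"] -= 5; holdings["herder"]["grain"] += 5
holdings["herder"]["wool"] -= 5;  holdings["farmer"]["wool"] += 5

print(subjective_value(holdings, valuations))  # 40 after the trade: both
                                               # are richer, nothing plundered

The totals can rise because value here is subjective. Zero-sum plunder merely moves the same goods around (or destroys them); exchange moves them toward whoever wants them most, which is why both parties walk away better off.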

            Once you have commerce, you also have a need to start keeping records. And once you start dealing with distant partners it helps to have a mode of communication that travels. As writing moved out of the monasteries, and as technological advances in transportation brought more of the world within reach, ideas and innovations collided to inspire sequential breakthroughs and discoveries. Every advance could be preserved, dispersed, and ratcheted up. Pinker focuses on two relatively brief historical periods that witnessed revolutions in the way we think about violence, and both came in the wake of major advances in the technologies involved in transportation and communication. The first is the Humanitarian Revolution that occurred in the second half of the eighteenth century, and the second covers the Rights Revolutions in the second half of the twentieth. The Civilizing Process and gentle commerce weren’t sufficient to end age-old institutions like slavery and the torture of heretics. But then came the rise of the novel as a form of mass entertainment, and with all the training in perspective-taking readers were undergoing the hitherto unimagined suffering of slaves, criminals, and swarthy foreigners became intolerably imaginable. People began to agitate and change ensued.

            The Humanitarian Revolution occurred at the tail end of the Age of Reason and is recognized today as part of the period known as the Enlightenment. According to some scholarly scenarios, the Enlightenment, for all its successes like the American Constitution and the abolition of slavery, paved the way for all those allegedly unprecedented horrors in the first half of the twentieth century. Notwithstanding all this ivory tower traducing, the Enlightenment emerged from dormancy after the Second World War and gradually gained momentum, delivering us into a period Pinker calls the New Peace. Just as the original Enlightenment was preceded by increasing cosmopolitanism, improving transportation, and an explosion of literacy, the transformations that brought about the New Peace followed a burst of technological innovation. For Pinker, this is no coincidence. He writes,

If I were to put my money on the single most important exogenous cause of the Rights Revolutions, it would be the technologies that made ideas and people increasingly mobile. The decades of the Rights Revolutions were the decades of the electronics revolutions: television, transistor radios, cable, satellite, long-distance telephones, photocopiers, fax machines, the Internet, cell phones, text messaging, Web video. They were the decades of the interstate highway, high-speed rail, and the jet airplane. They were the decades of the unprecedented growth in higher education and in the endless frontier of scientific research. Less well known is that they were also the decades of an explosion in book publishing. From 1960 to 2000, the annual number of books published in the United States increased almost fivefold. (477)

Violence got slightly worse in the 60s. But the Civil Rights Movement was underway, Women’s Rights were being extended into new territories, and people even began to acknowledge that animals could suffer and to argue that we shouldn’t cause them to do so needlessly. Today the push for Gay Rights continues. By 1990, the uptick in violence was over, and so far the move toward peace is looking like an ever greater success. Ironically, though, all the new types of media bringing images from all over the globe into our living rooms and pockets contribute to the sense that violence is worse than ever.

*******

            Three factors, then, brought about a reduction in violence over the course of history: strong government, trade, and communications technology. These factors had the impact they did because they interacted with two of our innate propensities, impulse-control and perspective-taking, giving individuals both the motivation and the wherewithal to develop each to ever greater degrees. It’s difficult to draw a clear delineation between developments that were driven by chance or coincidence and those driven by deliberate efforts to transform societies. But Pinker does credit political movements based on moral principles with having played key roles:

Insofar as violence is immoral, the Rights Revolutions show that a moral way of life often requires a decisive rejection of instinct, culture, religion, and standard practice. In their place is an ethics that is inspired by empathy and reason and stated in the language of rights. We force ourselves into the shoes (or paws) of other sentient beings and consider their interests, starting with their interest in not being hurt or killed, and we ignore superficialities that may catch our eye such as race, ethnicity, gender, age, sexual orientation, and to some extent, species. (475)

Some of the instincts we must reject in order to bring about peace, however, are actually moral instincts.

Pinker is setting up a distinction here between different kinds of morality. The one based on perspective-taking—which, evidence he presents later suggests, inspires sympathy—and “stated in the language of rights” is the one he credits with transforming the world for the better. Of the idea that superficial differences shouldn’t distract us from our common humanity, he writes,

This conclusion, of course, is the moral vision of the Enlightenment and the strands of humanism and liberalism that have grown out of it. The Rights Revolutions are liberal revolutions. Each has been associated with liberal movements, and each is currently distributed along a gradient that runs, more or less, from Western Europe to the blue American states to the red American states to the democracies of Latin America and Asia and then to the more authoritarian countries, with Africa and most of the Islamic world pulling up the rear. In every case, the movements have left Western cultures with excesses of propriety and taboo that are deservedly ridiculed as political correctness. But the numbers show that the movements have reduced many causes of death and suffering and have made the culture increasingly intolerant of violence in any form. (475-6)

So you’re not allowed to play dodgeball at school or tell off-color jokes at work, but that’s a small price to pay. The most remarkable part of this passage, though, is the gradient he describes; it suggests that the most violent regions of the globe are also the ones where people are most obsessed with morality, with things like Sharia and so-called family values. It also suggests that academic complaints about the evils of Western culture are unfounded and startlingly misguided. As Pinker casually points out in his section on Women’s Rights, “Though the United States and other Western nations are often accused of being misogynistic patriarchies, the rest of the world is immensely worse” (413).

The Better Angels of Our Nature came out about a year before Jonathan Haidt’s The Righteous Mind, but Pinker’s book beats Haidt’s to the punch by identifying a serious flaw in his reasoning. The Righteous Mind explores how liberals and conservatives conceive of morality differently, and Haidt argues that each conception is equally valid so we should simply work to understand and appreciate opposing political views. It’s not like you’re going to change anyone’s mind anyway, right? But the liberal ideal of resisting certain moral intuitions tends to bring about a rather important change wherever it’s allowed to be realized. Pinker writes that

right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence. And that retraction is precisely the agenda of classical liberalism: a freedom of individuals from tribal and authoritarian force, and a tolerance of personal choices as long as they do not infringe on the autonomy and well-being of others. (637)

Classical liberalism—which Pinker distinguishes from contemporary political liberalism—can even be viewed as an effort to move morality away from the realm of instincts and intuitions into the more abstract domains of law and reason. The perspective-taking at the heart of Enlightenment morality can be said to consist of abstracting yourself from your identifying characteristics and immediate circumstances to imagine being someone else in unfamiliar straits. A man with a job imagines being a woman who can’t get one. A white man on good terms with law enforcement imagines being a black man who gets harassed. This practice of abstracting experiences and distilling individual concerns down to universal principles is the common thread connecting Enlightenment morality to science.

            So it’s probably no coincidence, Pinker argues, that as we’ve gotten more peaceful, people in Europe and the US have been getting better at abstract reasoning as well, a trend that has been going on for as long as researchers have had tests to measure it. Psychologists over the course of the twentieth century have had to renorm IQ tests (the average is always set at 100) by a few points every generation because scores on certain subsets of questions have kept going up. The regular rising of scores is known as the Flynn Effect, after psychologist James Flynn, who was one of the first researchers to realize the trend was more than methodological noise. Having posited a possible connection between scientific and moral reasoning, Pinker asks, “Could there be a moral Flynn Effect?” He explains,

We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence. And we have just seen that over the course of the 20th century, people’s reasoning abilities—particularly their ability to set aside immediate experience, detach themselves from a parochial vantage point, and think in abstract terms—were steadily enhanced. (656)

Pinker cites evidence from an array of studies showing that high-IQ people tend to have high moral IQs as well. One of them, an infamous study by psychologist Satoshi Kanazawa based on data from over twenty thousand young adults in the US, demonstrates that exceptionally intelligent people tend to hold a particular set of political views. And just as Pinker finds it necessary to distinguish between two different types of morality, he suggests we also need to distinguish between two different types of liberalism:

Intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself. Intelligence need not correlate with other ideologies that get lumped into contemporary left-of-center political coalitions, such as populism, socialism, political correctness, identity politics, and the Green movement. Indeed, classical liberalism is sometimes congenial to the libertarian and anti-political-correctness factions in today’s right-of-center coalitions. (662)

And Kanazawa’s findings bear this out. It’s not liberalism in general that increases steadily with intelligence, but a particular kind of liberalism, the type focusing more on fairness than on ideology.

*******

Following the chapters devoted to historical change, from the early Middle Ages to the ongoing Rights Revolutions, Pinker includes two chapters on psychology, the first on our “Inner Demons” and the second on our “Better Angels.” Ideology gets some prime real estate in the Demons chapter, because, he writes, “the really big body counts in history pile up” when people believe they’re serving some greater good. “Yet for all that idealism,” he explains, “it’s ideology that drove many of the worst things that people have ever done to each other.” Christianity, Nazism, communism—they all “render opponents of the ideology infinitely evil and hence deserving of infinite punishment” (556). Pinker’s discussion of morality, on the other hand, is more complicated. It begins, oddly enough, in the Demons chapter, but stretches into the Angels one as well. This is how the section on morality in the Angels chapter begins:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest. The human moral sense can excuse any atrocity in the minds of those who commit it, and it furnishes them with motives for acts of violence that bring them no tangible benefit. The torture of heretics and conversos, the burning of witches, the imprisonment of homosexuals, and the honor killing of unchaste sisters and daughters are just a few examples. (622)

The postmodern push to give precedence to moral and political considerations over science, reason, and fairness may seem like a good idea at first. But political ideologies can’t be defended on the grounds of their good intentions—they all have those. And morality has historically caused more harm than good. It’s only the minimalist, liberal morality that has any redemptive promise:

Though the net contribution of the human moral sense to human well-being may well be negative, on those occasions when it is suitably deployed it can claim some monumental advances, including the humanitarian reforms of the Enlightenment and the Rights Revolutions of recent decades. (622)

            One of the problems with ideologies Pinker explores is that they lend themselves too readily to for-us-or-against-us divisions which piggyback on all our tribal instincts, leading to dehumanization of opponents as a step along the path to unrestrained violence. But, we may ask, isn’t the Enlightenment just another ideology? If not, is there some reliable way to distinguish an ideological movement from a “civilizing offensive” or a “Rights Revolution”? Pinker doesn’t answer these questions directly, but it’s in his discussion of the demonic side of morality where Better Angels offers its most profound insights—and it’s also where we start to be able to piece together the larger purpose of the book. He writes,

In The Blank Slate I argued that the modern denial of the dark side of human nature—the doctrine of the Noble Savage—was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo. (488)

Since Pinker figured that what he and his fellow evolutionary psychologists kept running up against was akin to the repulsion people feel against poor table manners or kids winging balls at each other in gym class, he reasoned that he ought to be able to simply explain to the critics that evolutionary psychologists have no intention of justifying, or even encouraging complacency toward, the dark side of human nature. “But I am now convinced,” he writes after more than a decade of trying to explain himself, “that a denial of the human capacity for evil runs even deeper, and may itself be a feature of human nature” (488). That feature, he goes on to explain, makes us feel compelled to label as evil anyone who tries to explain evil scientifically—because evil as a cosmic force beyond the reach of human understanding plays an indispensable role in group identity.

            Pinker began to fully appreciate the nature of the resistance to letting biology into discussions of human harm-doing when he read about the work of psychologist Roy Baumeister exploring the wide discrepancies in accounts of anger-inducing incidents between perpetrators and victims. The first studies looked at responses to minor offenses, but Baumeister went on to present evidence that the pattern, which Pinker labels the “Moralization Gap,” can be scaled up to describe societal attitudes toward historical atrocities. Pinker explains,

The Moralization Gap consists of complementary bargaining tactics in the negotiation for recompense between a victim and a perpetrator. Like opposing counsel in a lawsuit over a tort, the social plaintiff will emphasize the deliberateness, or at least the depraved indifference, of the defendant’s action, together with the pain and suffering the plaintiff endures. The social defendant will emphasize the reasonableness or unavoidability of the action, and will minimize the plaintiff’s pain and suffering. The competing framings shape the negotiations over amends, and also play to the gallery in a competition for their sympathy and for a reputation as a responsible reciprocator. (491)

Another of the Inner Demons Pinker suggests plays a key role in human violence is the drive for dominance, which he explains operates not just at the level of the individual but at that of the group to which he or she belongs. We want our group, however we understand it in the immediate context, to rest comfortably atop a hierarchy of other groups. What happens is that the Moralization Gap gets mingled with this drive to establish individual and group superiority. You see this dynamic playing out even in national conflicts. Pinker points out,

The victims of a conflict are assiduous historians and cultivators of memory. The perpetrators are pragmatists, firmly planted in the present. Ordinarily we tend to think of historical memory as a good thing, but when the events being remembered are lingering wounds that call for redress, it can be a call to violence. (493)

Name a conflict and with little effort you’ll likely also be able to recall contentions over historical records associated with it.

            The outcome of the Moralization Gap being taken to the group historical level is what Pinker and Baumeister call the “Myth of Pure Evil.” Harm-doing narratives start to take on religious overtones as what began as a conflict between regular humans pursuing or defending their interests, in ways they probably reasoned were just, transforms into an eternal struggle against inhuman and sadistic agents of chaos. And Pinker has come to realize that it is this Myth of Pure Evil that behavioral scientists ineluctably end up blaspheming:

Baumeister notes that in the attempt to understand harm-doing, the viewpoint of the scientist or scholar overlaps with the viewpoint of the perpetrator. Both take a detached, amoral stance toward the harmful act. Both are contextualizers, always attentive to the complexities of the situation and how they contributed to the causation of the harm. And both believe that the harm is ultimately explicable. (495)

This is why evolutionary psychologists who study violence inspire what Pinker in The Blank Slate called “political paranoia and moral exhibitionism” (106) on the part of us naïve pomos, ravenously eager to showcase our valor by charging once more into the breach against the mythical malevolence. All the while, our impregnable assurance of our own righteousness is born of the conviction that we’re standing up for the oppressed. Pinker writes,

The viewpoint of the moralist, in contrast, is the viewpoint of the victim. The harm is treated with reverence and awe. It continues to evoke sadness and anger long after it was perpetrated. And for all the feeble ratiocination we mortals throw at it, it remains a cosmic mystery, a manifestation of the irreducible and inexplicable existence of evil in the universe. Many chroniclers of the Holocaust consider it immoral even to try to explain it. (495-6)

We simply can’t help inflating the magnitude of the crime in our attempt to convince our ideological opponents of their folly—though what we’re really inflating is our own, and our group’s, glorification—and so we can’t abide anyone puncturing our overblown conception because doing so lends credence to the opposition, making us look a bit foolish in the process for all our exaggerations.

            Reading Better Angels, you get the sense that Pinker experienced some genuine surprise and some real delight in discovering more and more corroboration for the idea that rates of violence have been trending downward in nearly every domain he explored. But things get tricky as you proceed through the pages because many of his arguments take on opposing positions he avoids naming. He seems to have seen the trove of evidence for declining violence as an opportunity to outflank the critics of evolutionary psychology in leftist, postmodern academia (to use a martial metaphor). Instead of calling them out directly, he circles around to chip away at the moral case for their political mission. We see this, for example, in his discussion of rape, which psychologists get into all kinds of trouble for trying to explain. After examining how scientists seem to be taking the perspective of perpetrators, Pinker goes on to write,

The accusation of relativizing evil is particularly likely when the motive the analyst imputes to the perpetrator appears to be venial, like jealousy, status, or retaliation, rather than grandiose, like the persistence of suffering in the world or the perpetuation of race, class, or gender oppression. It is also likely when the analyst ascribes the motive to every human being rather than to a few psychopaths or to the agents of a malignant political system (hence the popularity of the doctrine of the Noble Savage). (496)

In his earlier section on Women’s Rights and the decline of rape, he attributed the difficulty in finding good data on the incidence of the crime, as well as some of the “preposterous” ideas about what motivates it, to the same kind of overextensions of anti-violence campaigns that lead to arbitrary rules about the use of silverware and proscriptions against dodgeball:

Common sense never gets in the way of a sacred custom that has accompanied a decline in violence, and today rape centers unanimously insist that “rape or sexual assault is not an act of sex or lust—it’s about aggression, power, and humiliation, using sex as the weapon. The rapist’s goal is domination.” (To which the journalist Heather MacDonald replies: “The guys who push themselves on women at keggers are after one thing only, and it’s not a reinstatement of the patriarchy.”) (406)

Jumping ahead to Pinker’s discussion of the Moralization Gap, we see that the theory that rape is about power, as opposed to the much more obvious theory that it’s about sex, is an outgrowth of the Myth of Pure Evil, an inflation of the mundane drives that lead some pathetic individuals to commit horrible crimes into eternal cosmic forces, inscrutable and infinitely punishable.

            When feminists impute political motives to rapists, they’re crossing the boundary from Enlightenment morality to the type of moral ideology that inspires dehumanization and violence. The good news is that it’s not difficult to distinguish between the two. From the Enlightenment perspective, rape is indefensibly wrong because it violates the autonomy of the victim—it’s an act of violence perpetrated by one individual against another. From the ideological perspective, every rape must be understood in the context of the historical oppression of women by men; it transcends the individuals involved as a representation of a greater evil. The rape-as-a-political-act theory also comes dangerously close to implying a type of collective guilt, which is a clear violation of individual rights.

Scholars already make the distinction between three different waves of feminism. The first two fall within Pinker’s definition of Rights Revolutions; they encompassed pushes for suffrage, marriage rights, and property rights, and then the rights to equal pay and equal opportunity in the workplace. The third wave is avowedly postmodern, its advocates committed to the ideas that gender is a pure social construct and that suggesting otherwise is an act of oppression. What you come away from Better Angels realizing, even though Pinker doesn’t say it explicitly, is that somewhere between the second and third waves feminists effectively turned against the very ideas and institutions that had been most instrumental in bringing about the historical improvements in women’s lives from the Middle Ages to the turn of the twenty-first century. And so it is with all the other ideologies on the postmodern roster.

Another misguided propaganda tactic that dogged Pinker’s efforts to identify historical trends in violence can likewise be understood as an instance of inflating the severity of crimes on behalf of a moral ideology—and of the taboo against puncturing the bubble or vitiating the purity of evil with evidence and theories of venial motives. As he explains in the preface, “No one has ever recruited activists to a cause by announcing that things are getting better, and bearers of good news are often advised to keep their mouths shut lest they lull people into complacency” (xxii). Here again the objective researcher can’t escape the appearance of trying to minimize the evil, and therefore risks being accused of looking the other way, or even of complicity. But in an earlier section on genocide Pinker provides the quintessential Enlightenment rationale for the clear-eyed scientific approach to studying even the worst atrocities. He writes,

The effort to whittle down the numbers that quantify the misery can seem heartless, especially when the numbers serve as propaganda for raising money and attention. But there is a moral imperative in getting the facts right, and not just to maintain credibility. The discovery that fewer people are dying in wars all over the world can thwart cynicism among compassion-fatigued news readers who might otherwise think that poor countries are irredeemable hellholes. And a better understanding of what drove the numbers down can steer us toward doing things that make people better off rather than congratulating ourselves on how altruistic we are. (320)

This passage can be taken as the underlying argument of the whole book. And it gestures toward some far-reaching ramifications of the idea that exaggerated numbers are a product of the same impulse that causes us to inflate crimes to the status of pure evil.

Could it be that the nearly universal misperception that violence is getting worse all over the world, that we’re doomed to global annihilation, and that everywhere you look is evidence of the breakdown in human decency—could it be that the false impression Pinker set out to correct with Better Angels is itself a manifestation of a natural urge in all of us to seek out evil and aggrandize ourselves by unconsciously overestimating it? Pinker himself never goes as far as suggesting the mass ignorance of waning violence is a byproduct of an instinct toward self-righteousness. Instead, he writes of the “gloom” about the fate of humanity,

I think it comes from the innumeracy of our journalistic and intellectual culture. The journalist Michael Kinsley recently wrote, “It is a crushing disappointment that Boomers entered adulthood with Americans killing and dying halfway around the world, and now, as Boomers reach retirement and beyond, our country is doing the same damned thing.” This assumes that 5,000 Americans dying is the same damned thing as 58,000 Americans dying, and that a hundred thousand Iraqis being killed is the same damned thing as several million Vietnamese being killed. If we don’t keep an eye on the numbers, the programming policy “If it bleeds it leads” will feed the cognitive shortcut “The more memorable, the more frequent,” and we will end up with what has been called a false sense of insecurity. (296)

Pinker probably has a point, but the self-righteous undertone of Kinsley’s “same damned thing” is unmistakable. He’s effectively saying, I’m such an outstanding moral being the outrageous evilness of the invasion of Iraq is blatantly obvious to me—why isn’t it to everyone else? And that same message seems to underlie most of the statements people make expressing similar sentiments about how the world is going to hell.

            Though Pinker neglects to tie all the strands together, he still manages to suggest that the drive to dominance, ideology, tribal morality, and the Myth of Pure Evil are all facets of the same disastrous flaw in human nature—an instinct for self-righteousness. Progress on the moral front—real progress like fewer deaths, less suffering, and more freedom—comes from something much closer to utilitarian pragmatism than activist idealism. Yet the activist tradition is so thoroughly enmeshed in our university culture that we’re taught to exercise our powers of political righteousness even while engaging in tasks as mundane as reading books and articles. 

            If the decline in violence and the improvement of the general weal in various other areas are attributable to the Enlightenment, then many of the assumptions underlying postmodernism are turned on their heads. If social ills like warfare, racism, sexism, and child abuse exist in cultures untouched by modernism—and they in fact not only exist but tend to be much worse—then science can’t be responsible for creating them; indeed, if they’ve all trended downward with the historical development of all the factors associated with male-dominated Western culture, including strong government, market economies, runaway technology, and scientific progress, then postmodernism not only has everything wrong but threatens the progress achieved by the very institutions it depends on, emerged from, and squanders innumerable scholarly careers maligning.

Of course some Enlightenment figures and some scientists do evil things. Of course living even in the most Enlightened of civilizations is no guarantee of safety. But postmodernism is an ideology based on the premise that we ought to discard a solution to our societal woes for not working perfectly and immediately, substituting instead remedies that have historically caused more problems than they solved by orders of magnitude. The argument that there’s a core to the Enlightenment that some of its representatives have been faithless to when they committed atrocities may seem reminiscent of apologies for Christianity based on the fact that Crusaders and Inquisitors weren’t loving their neighbors as Christ enjoined. The difference is that the Enlightenment works—in just a few centuries it’s transformed the world and brought about a reduction in violence no religion has been able to match in millennia. If anything, the big monotheistic religions brought about more violence.

Embracing Enlightenment morality or classical liberalism doesn’t mean we should give up our efforts to make the world a better place. As Pinker describes the transformation he hopes to encourage with Better Angels,

As one becomes aware of the decline of violence, the world begins to look different. The past seems less innocent; the present less sinister. One starts to appreciate the small gifts of coexistence that would have seemed utopian to our ancestors: the interracial family playing in the park, the comedian who lands a zinger on the commander in chief, the countries that quietly back away from a crisis instead of escalating to war. The shift is not toward complacency: we enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to reduce it, and so we should work to reduce the violence that remains in our time. Indeed, it is a recognition of the decline of violence that best affirms that such efforts are worthwhile. (xxvi)

Since our task for the remainder of this century is to extend the reach of science, literacy, and the recognition of universal human rights farther and farther along the Enlightenment gradient until they're able to grant the same increasing likelihood of a long peaceful life to every citizen of every nation of the globe, and since the key to accomplishing this task lies in fomenting future Rights Revolutions while at the same time recognizing, so as to be better equipped to rein in, our drive for dominance as manifested in our more deadly moral instincts, I for one am glad Steven Pinker has the courage to violate so many of the outrageously counterproductive postmodern taboos while having the grace to resist succumbing himself, for the most part, to the temptation of self-righteousness.

Also read:

THE FAKE NEWS CAMPAIGN AGAINST STEVEN PINKER AND ENLIGHTENMENT NOW

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA


Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration

With “Sabbath’s Theater,” Philip Roth has called down the thunder. The story does away with the concept of a likable character while delivering a wildly absorbing experience. And it satirizes all the woeful facets of how literature is taught today.

Sabbath’s Theater is the type of book you lose friends over. Mickey Sabbath, the adulterous title character who follows in the long literary line of defiantly self-destructive, excruciatingly vulnerable, and off-puttingly but eloquently lustful leading males like Holden Caulfield and Humbert Humbert, strains the moral bounds of fiction and compels us to contemplate the nature of our own voyeuristic impulse to see him through to the end of the story—and not only contemplate it but defend it, as if in admitting we enjoy the book, find its irreverences amusing, and think that in spite of how repulsive he often is there still might be something to be said for poor old Sabbath, we’re confessing to no minor offense of our own. Fans and admiring critics alike can’t resist rushing to qualify their acclaim by insisting they don’t condone his cheating on both of his wives, the seduction of a handful of his students, his habit of casually violating others’ privacy, his theft, his betrayal of his lone friend, his manipulations, his racism, his caustic, often cruelly precise provocations—but by the time they get to the end of Sabbath’s debt column it’s a near certainty any list of mitigating considerations will fall short of getting him out of the red. Sabbath, a onetime puppeteer now suffering from crippling arthritis, doesn’t seem like a very sympathetic character, and yet we sympathize with him nonetheless. In his wanton disregard for his own reputation and his embrace, principled in a way, of his own appetites, intuitions, and human nastiness, he inspires a fascination none of the literary nice guys can compete with. So much for the argument that the novel is a morally edifying art form.

            Thus, in Sabbath, Philip Roth has created a character both convincing and compelling who challenges a fundamental—we may even say natural—assumption about readers’ (or viewers’) role in relation to fictional protagonists, one made by everyone from the snarky authors of even the least sophisticated Amazon.com reviews to the theoreticians behind the most highfalutin academic criticism—the assumption that characters in fiction serve as vehicles for some message the author created them to convey, or which some chimerical mechanism within the “dominant culture” created to serve as an agent of its own proliferation. The corollary is that the task of audience members is to try to decipher what the author is trying to say with the work, or what element of the culture is striving to perpetuate itself through it. If you happen to like the message the story conveys, or agree with it at some level, then you recommend the book and thus endorse the statement. Only rarely does a reviewer realize or acknowledge that the purpose of fiction is not simply to encourage readers to behave as the protagonists behave or, if the tale is a cautionary one, to expect the same undesirable consequences should they choose to behave similarly. Sabbath does in fact suffer quite a bit over the course of the novel, and much of that suffering comes as a result of his multifarious offenses, so a case can be made on behalf of Roth’s morality. Still, we must wonder if he really needed to write a story in which the cheating husband is abandoned by both of his wives to make the message sink in that adultery is wrong—especially since Sabbath doesn’t come anywhere near to learning that lesson himself. “All the great thoughts he had not reached,” Sabbath muses in the final pages, “were beyond enumeration; there was no bottom to what he did not have to say about the meaning of his life” (779).

           Part of the reason we can’t help falling back on the notions that fiction serves a straightforward didactic purpose and that characters should be taken as models, positive or negative, for moral behavior is that our moral emotions are invariably and automatically engaged by stories; indeed, what we usually mean when we say we got into a story is that we were in suspense as we anticipated whether the characters ultimately met with the fates we felt they deserved. We reflexively size up any character the author introduces the same way we assess the character of a person we’re meeting for the first time in real life. For many readers, the question of whether a novel is any good is interchangeable with the question of whether they liked the main characters, assuming they fare reasonably well in the culmination of the plot. If an author like Roth evinces an attitude drastically different from ours toward a character of his own creation like Sabbath, then we feel that in failing to condemn him, in holding him up as a model, the author is just as culpable as his character. In a recent edition of PBS’s American Masters devoted to Roth, for example, Jonathan Franzen, a novelist himself, describes how even he couldn’t resist responding to his great forebear’s work in just this way. “As a young writer,” Franzen recalls, “I had this kind of moralistic response of ‘Oh, you bad person, Philip Roth’” (54:56).

            That fiction’s charge is to strengthen our preset convictions through a process of narrative tempering, thus catering to our desire for an orderly calculus of just deserts, serves as the basis for a contract between storytellers and audiences, a kind of promise on which most commercial fiction delivers with a bang. And how many of us have wanted to throw a book out of the window when we felt that promise had been broken? The goal of professional and academic critics, we may imagine, might be to ease their charges into an appreciation of more complex narrative scenarios enacted by characters who escape easy categorization. But since scholarship in the humanities, and in literary criticism especially, has been in a century-long sulk over the greater success of science and the greater renown of scientists, professors of literature have scarcely even begun to ponder what anything resembling a valid answer to the questions of how fiction works and what the best strategies for experiencing it might look like. Those who aren’t pouting in a corner about the ascendancy of science—but the Holocaust!—are stuck in the muck of the century-old pseudoscience of psychoanalysis. But the real travesty is that the most popular, politically inspired schools of literary criticism—feminism, Marxism, postcolonialism—actively preach the need to ignore, neglect, and deny the very existence of moral complexity in literature, violently displacing any appreciation of difficult dilemmas with crudely tribal formulations of good and evil.

            For those inculcated with a need to take a political stance with regard to fiction, the only important dynamics in stories involve the interplay of society’s privileged oppressors and their marginalized victims. In 1976, nearly twenty years before the publication of Sabbath’s Theater, the feminist critic Vivian Gornick lumped Roth together with Saul Bellow and Norman Mailer in an essay asking “Why Do These Men Hate Women?” because she took issue with the way women are portrayed in their novels. Gornick, following the methods standard to academic criticism, doesn’t bother devoting any space in her essay to inconvenient questions about how much we can glean about these authors from their fictional works or what it means that the case for her prosecution rests by necessity on a highly selective approach to quoting from those works. And this slapdash approach to scholarship is supposedly justified because she and her fellow feminist critics believe women are in desperate need of protection from the incalculable harm they assume must follow from such allegedly negative portrayals. In this concern for how women, or minorities, or some other victims are portrayed and how they’re treated by their notional oppressors—rich white guys—Gornick and other critics who make of literature a battleground for their political activism are making the same assumption about fiction’s straightforward didacticism as the most unschooled consumers of commercial pulp. The only difference is that the academics believe the message received by audiences is all that’s important, not the message intended by the author. The basis of this belief probably boils down to its obvious convenience.

            In Sabbath’s Theater, the idea that literature, or art of any kind, is reducible to so many simple messages, and that these messages must be measured against political agendas, is dashed in the most spectacularly gratifying fashion. Unfortunately, the idea is so seldom scrutinized, and the political agendas are insisted on so implacably, clung to and broadcast with such indignant and prosecutorial zeal, that it seems not one of the critics, nor any of the authors, who were seduced by Sabbath was able to fully reckon with the implications of that seduction. Franzen, for instance, in a New Yorker article about fictional anti-heroes, dodges the issue as he puzzles over the phenomenon that “Mickey Sabbath may be a disgustingly self-involved old goat,” but he’s somehow still sympathetic. The explanation Franzen lights on is that

the alchemical agent by which fiction transmutes my secret envy or my ordinary dislike of “bad” people into sympathy is desire. Apparently, all a novelist has to do is give a character a powerful desire (to rise socially, to get away with murder) and I, as a reader, become helpless not to make that desire my own. (63)

If Franzen is right—and this chestnut is a staple of fiction workshops—then the political activists are justified in their urgency. For if we’re powerless to resist adopting the protagonist’s desires as our own, however fleetingly, then any impulse to victimize women or minorities must invade readers’ psyches at some level, conscious or otherwise. The simple fact, however, is that Sabbath has not one powerful desire but many competing desires, ones that shift as the novel progresses, and it’s seldom clear even to Sabbath himself what those desires are. (And is he really as self-involved as Franzen suggests? It seems to me rather that he compulsively tries to get into other people’s heads, reflexively imagining elaborate stories for them.)

            While we undeniably respond to virtuous characters in fiction by feeling anxiety on their behalf as we read about or watch them undergo the ordeals of the plot, and we just as undeniably enjoy seeing virtue rewarded and cruelty punished—the goodies prevailing over the baddies—these natural responses do not necessarily imply that stories compel our interest and engage our emotions by providing us with models and messages of virtue. Stories aren’t sermons. In his interview for American Masters, Roth explained what a writer’s role is vis-à-vis social issues.

My job isn’t to be enraged. My job is what Chekhov said the job of an artist was, which is the proper presentation of the problem. The obligation of the writer is not to provide the solution to a problem. That’s the obligation of a legislator, a leader, a crusader, a revolutionary, a warrior, and so on. That’s not the goal or aim of a writer. You’re not selling it, and you’re not inviting condemnation. You’re inviting understanding. (59:41)

The crucial but overlooked distinction that characters like Sabbath—but none so well as Sabbath—bring into stark relief is the one between declarative knowledge on the one hand and moment-by-moment experience on the other. Consider for a moment how many books and movies we’ve all been thoroughly engrossed in for however long it took to read or watch them, only to discover a month or so later that we can’t remember even the broadest strokes of how their plots resolved themselves—much less what their morals might have been.

            The answer to the question of what the author is trying to say is that he or she is trying to give readers a sense of what it would be like to go through what the characters are going through—or what it would be like to go through it with them. In other words, authors are not trying to say anything; they’re offering us an experience, once-removed and simulated though it may be. This isn’t to say that these simulated experiences don’t engage our moral emotions; indeed, we’re usually only as engaged in a story as our moral emotions are engaged by it. The problem is that in real time, in real life, political ideologies, psychoanalytic theories, and rigid ethical principles are too often the farthest thing from helpful. “Fuck the laudable ideologies,” Sabbath helpfully insists: “Shallow, shallow, shallow!” Living in a complicated society with other living, breathing, sick, cruel, saintly, conniving, venal, altruistic, deceitful, noble, horny humans demands not so much a knowledge of the rules as a finely honed body of skills—and our need to develop and hone these skills is precisely why we evolved to find the simulated experiences of fictional narratives both irresistibly fascinating and endlessly pleasurable. Franzen was right that desires are important: the desire to be a good person, the desire to do things others may condemn, the desire to get along with our families and friends and coworkers, the desire to tell them all to fuck off so we can be free, even if just for an hour, to breathe… or to fuck an intern, as the case may be. Grand principles offer little guidance when it comes to balancing these competing desires. This is because, as Sabbath explains, “The law of living: fluctuation. For every thought a counterthought, for every urge a counterurge” (518).

            Fiction then is not a conveyance for coded messages—how tedious that would be (how tedious it really is when writers make this mistake); it is rather a simulated experience of moral dilemmas arising from scenarios which pit desire against desire, conviction against reality, desire against conviction, reality against desire, in any and all permutations. Because these experiences are once-removed and, after all, merely fictional, and because they require our sustained attention, the dilemmas tend to play out in the vicinity of life’s extremes. Here’s how Sabbath’s Theater opens:

            Either forswear fucking others or the affair is over.

            This was the ultimatum, the maddeningly improbable, wholly unforeseen ultimatum, that the mistress of fifty-two delivered in tears to her lover of sixty-four on the anniversary of an attachment that had persisted with an amazing licentiousness—and that, no less amazingly, had stayed their secret—for thirteen years. But now with hormonal infusions ebbing, with the prostate enlarging, with probably no more than another few years of semi-dependable potency still his—with perhaps not that much more life remaining—here at the approach of the end of everything, he was being charged, on pain of losing her, to turn himself inside out. (373)

The ethical proposition that normally applies in situations like this is that adultery is wrong, so don’t commit adultery. But these two have been committing adultery with each other for thirteen years already—do we just stop reading? And if we keep reading, maybe nodding once in a while as we proceed, cracking a few wicked grins along the way, does that mean we too must be guilty?

*****

            Much of the fiction written by male literary figures of the past generation, guys like Roth, Mailer, Bellow, and Updike, focuses on the morally charged dilemmas instanced by infidelity, while their Gen-X and millennial successors, led by guys like Franzen and David Foster Wallace, have responded to shifting mores—and a greater exposure to academic literary theorizing—by completely overhauling how these dilemmas are framed. Whereas the older generation framed the question as how we can balance the intense physical and spiritual—even existential—gratification of sexual adventure on the one hand with our family obligations on the other, for their successors the question has become how we males can curb our disgusting, immoral, intrinsically oppressive lusting after young women inequitably blessed with time-stamped and overwhelmingly alluring physical attributes. “The younger writers are so self-conscious,” Katie Roiphe writes in a 2009 New York Times essay, “so steeped in a certain kind of liberal education, that their characters can’t condone even their own sexual impulses; they are, in short, too cool for sex.” Roiphe’s essay, “The Naked and the Confused,” stands alongside a 2012 essay in The New York Review of Books by Elaine Blair, “Great American Losers,” as the best descriptions of the new literary trend toward sexually repressed and pathetically timid male leads. The typical character in this vein, Blair writes, “is the opposite of entitled: he approaches women cringingly, bracing for a slap.”

            The writers in the new hipster cohort create characters who bury their longings layers-deep in irony because they’ve been assured the failure on the part of men of previous generations to properly check these same impulses played some unspecified role in the abysmal standing of women in society. College students can’t make it past their first semester without hearing about the evils of so-called objectification, but it’s nearly impossible to get a straight answer from anyone, anywhere, to the question of how objectification can be distinguished from normal, non-oppressive male attraction and arousal. Even Roiphe, in her essay lamenting the demise of male sexual virility in literature, relies on a definition of male oppression so broad that it encompasses even the most innocuous space-filling lines in the books of even the most pathetically diffident authors, writing that “the sexism in the work of the heirs apparent” of writers like Roth and Updike,

is simply wilier and shrewder and harder to smoke out. What comes to mind is Franzen’s description of one of his female characters in “The Corrections”: “Denise at 32 was still beautiful.” To the esteemed ladies of the movement I would suggest this is not how our great male novelists would write in the feminist utopia.

How, we may ask, did it get to the point where acknowledging that age influences how attractive a woman is qualifies a man for designation as a sexist? Blair, in her otherwise remarkably trenchant essay, lays the blame for our oversensitivity—though paranoia is probably a better word—at the feet of none other than those great male novelists themselves, or, as David Foster Wallace calls them, the Great Male Narcissists. She writes,

Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.

That Roth et al. were sexist, condescending, disgusting, narcissistic—these are articles of faith for feminist critics. Yet when we consider how expansive the definitions of terms like sexism and misogyny have become—in practical terms, they both translate to: not as radically feminist as me—and the laughably low standard of evidence required to convince scholars of the accusations, female empowerment starts to look like little more than a reserved right to stand in self-righteous judgment of men for giving voice to and acting on desires anyone but the most hardened ideologue will agree are only natural.

             The effect on writers of this ever-looming threat of condemnation is that they either allow themselves to be silenced or they opt to participate in the most undignified of spectacles, peevishly sniping at their colleagues, falling all over themselves to be granted recognition as champions for the cause. Franzen, at least early in his career, was more the silenced type. Discussing Roth, he wistfully endeavors to give the appearance of having moved beyond his initial moralistic responses. “Eventually,” he says, “I came to feel as if that was coming out of an envy: like, wow, I wish I could be as liberated of worry about other people’s opinion of me as Roth is” (55:18). We have to wonder if his espousal of the reductive theory that sympathy for fictional characters is based solely on the strength of their desires derives from this same longing for freedom to express his own. David Foster Wallace, on the other hand, wasn’t quite as enlightened or forgiving when it came to his predecessors. Here’s how he explains his distaste for a character in one of Updike’s novels, openly intimating the author’s complicity:

It’s that he persists in the bizarre adolescent idea that getting to have sex with whomever one wants whenever one wants is a cure for ontological despair. And so, it appears, does Mr. Updike—he makes it plain that he views the narrator’s impotence as catastrophic, as the ultimate symbol of death itself, and he clearly wants us to mourn it as much as Turnbull does. I’m not especially offended by this attitude; I mostly just don’t get it. Erect or flaccid, Ben Turnbull’s unhappiness is obvious right from the book’s first page. But it never once occurs to him that the reason he’s so unhappy is that he’s an asshole.

So the character is an asshole because he wants to have sex outside of marriage, and he’s unhappy because he’s an asshole, and it all traces back to the idea that having sex with whomever one wants is a source of happiness? Sounds like quite the dilemma—and one that pronouncing the main player an asshole does nothing to solve. This passage is the conclusion to a review in which Wallace tries to square his admiration for Updike’s writing with his desire to please a cohort of women readers infuriated by the way Updike writes about—portrays—women (which raises the question of why they’d read so many of his books). The troubling implication of his compromise is that if Wallace were himself to freely express his sexual feelings, he’d be open to the charge of sexism too—he’d be an asshole. Better to insist he simply doesn’t “get” why indulging his sexual desires might alleviate his “ontological despair.” What would Mickey Sabbath make of the fact that Wallace hanged himself when he was only forty-six, eleven years after publishing that review? (This isn’t just a nasty rhetorical point; Sabbath has a fascination with artists who commit suicide.)

The inadequacy of moral codes and dehumanizing ideologies when it comes to guiding real humans through life’s dilemmas, along with their corrosive effects on art, is the abiding theme of Sabbath’s Theater. One of the pivotal moments in Sabbath’s life is when a twenty-year-old student he’s in the process of seducing leaves a tape recorder out to be discovered in a ladies’ room at the university. The student, Kathy Goolsbee, has recorded a phone sex session between her and Sabbath, and when the tape finds its way into the hands of the dean, it becomes grounds for the formation of a committee of activists against the abuse of women. At first, Kathy doesn’t realize how bad things are about to get for Sabbath. She even offers to give him a blow job as he berates her for her carelessness. Trying to impress on her the situation’s seriousness, he says,

Your people have on tape my voice giving reality to all the worst things they want the world to know about men. They have a hundred times more proof of my criminality than could be required by even the most lenient of deans to drive me out of every decent antiphallic educational institution in America. (586)

The committee against Sabbath proceeds to make the full recorded conversation available through a call-in line (the nineties equivalent of posting a podcast online). But the conversation itself isn’t enough; one of the activists gives a long introduction, which concludes,

The listener will quickly recognize how by this point in his psychological assault on an inexperienced young woman, Professor Sabbath has been able to manipulate her into thinking that she is a willing participant. (567-8)

Sabbath knows full well that even consensual phone sex can be construed as a crime if doing so furthers the agenda of those “esteemed ladies of the movement” Roiphe addresses. 

Reading through the lens of a tribal ideology ineluctably leads to the refraction of reality beyond recognition, and any aspiring male writer quickly learns in all his courses in literary theory that the criteria for designation as an enemy to the cause of women are pretty much whatever the feminist critics fucking say they are. Wallace wasn’t alone in acquiescing to feminist rage by denying his own boorish instincts. Roiphe describes the havoc this opportunistic antipathy toward male sexuality wreaks in the minds of male writers and their literary creations:

Rather than an interest in conquest or consummation, there is an obsessive fascination with trepidation, and with a convoluted, postfeminist second-guessing. Compare [Benjamin] Kunkel’s tentative and guilt-ridden masturbation scene in “Indecision” with Roth’s famous onanistic exuberance with apple cores, liver and candy wrappers in “Portnoy’s Complaint.” Kunkel: “Feeling extremely uncouth, I put my penis away. I might have thrown it away if I could.” Roth also writes about guilt, of course, but a guilt overridden and swept away, joyously subsumed in the sheer energy of taboo smashing: “How insane whipping out my joint like that! Imagine what would have been had I been caught red-handed! Imagine if I had gone ahead.” In other words, one rarely gets the sense in Roth that he would throw away his penis if he could.

And what good comes of an ideology that encourages the psychological torture of bookish young men? It’s hard to distinguish the effects of these so-called literary theories from the hellfire scoldings delivered from the pulpits of the most draconian and anti-humanist religious patriarchs. Do we really need to ideologically castrate all our male scholars to protect women from abuse and further the cause of equality?

*****

The experience of sexual relations between older teacher and younger student in Sabbath’s Theater is described much differently when the gender activists have yet to get involved—and not just by Sabbath but by Kathy as well. “I’m of age!” she protests as he chastises her for endangering his job and opening him up to public scorn; “I do what I want” (586). Absent the committee against him, Sabbath’s impression of how his affairs with his students impact them reflects the nuance of feeling inspired by these experimental entanglements, the kind of nuance that the “laudable ideologies” can’t even begin to capture.

There was a kind of art in his providing an illicit adventure not with a boy of their own age but with someone three times their age—the very repugnance that his aging body inspired in them had to make their adventure with him feel a little like a crime and thereby give free play to their budding perversity and to the confused exhilaration that comes of flirting with disgrace. Yes, despite everything, he had the artistry still to open up to them the lurid interstices of life, often for the first time since they’d given their debut “b.j.” in junior high. As Kathy told him in that language which they all used and which made him want to cut their heads off, through coming to know him she felt “empowered.” (566)

Opening up “the lurid interstices of life” is precisely what Roth and the other great male writers—all great writers—are about. If there are easy answers to the questions of what characters should do, or if the plot entails no more than a simple conflict between a blandly good character and a blandly bad one, then the story, however virtuous its message, will go unattended.

            But might there be too much at stake for us impressionable readers to be allowed free rein to play around in imaginary spheres peopled by morally dubious specters? After all, if denouncing the dreamworlds of privileged white men, however unfairly, redounds to the benefit of women and children and minorities, then perhaps it’s to the greater good. In fact, though, the increasing availability of ever more graphic media portrayals of sex and violence has coincided with marked decreases in actual violence and in the abuse of women. And does anyone really believe it’s the least literate, least media-saturated societies that are the kindest to women? The simple fact is that the theory of literature subtly encouraging oppression can’t be valid. But the problem is that once ideologies are institutionalized, once a threshold number of people depend on their perpetuation for their livelihoods, people whose scholarly work and reputations are staked on them, then victims of oppression will be found, their existence insisted on, regardless of whether they truly exist.

In another scandal Sabbath was embroiled in long before his flirtation with Kathy Goolsbee, he was brought up on charges of indecency because in the course of a street performance he’d exposed a woman’s nipple. The woman herself, Helen Trumbull, maintains from the outset of the imbroglio that whatever Sabbath had done, he’d done it with her consent—just as will be the case with his “psychological assault” on Kathy. But even as Sabbath sits assured that the case against him will collapse once the jury hears the supposed victim testify on his behalf, the prosecution takes a bizarre turn:

In fact, the victim, if there even is one, is coming this way, but the prosecutor says no, the victim is the public. The poor public, getting the shaft from this fucking drifter, this artist. If this guy can walk along a street, he says, and do this, then little kids think it’s permissible to do this, and if little kids think it’s permissible to do this, then they think it’s permissible to blah blah banks, rape women, use knives. If seven-year-old kids—the seven nonexistent kids are now seven seven-year-old kids—are going to see that this is fun and permissible with strange women… (663-4)

Here we have Roth’s dramatization of the fundamental conflict between artists and moralists. Even if no one is directly hurt by playful scenarios, that they carry a message, one that threatens to corrupt susceptible minds, is so seemingly obvious it’s all but impossible to refute. Since the audience for art is “the public,” the acts of depravity and degradation it depicts are, if anything, even more fraught with moral and political peril than any offense against an individual victim, real or imagined.  

            This theme of the oppressive nature of ideologies devised to combat oppression, the victimizing proclivity of movements originally fomented to protect and empower victims, is most directly articulated by a young man named Donald, dressed in all black and sitting atop a file cabinet in a nurse’s station when Sabbath happens across him at a rehab clinic. Donald “vaguely resembled the Sabbath of some thirty years ago,” and Sabbath will go on to apologize for interrupting him, referring to him as “a man whose aversions I wholeheartedly endorse.” What he was saying before the interruption:

“Ideological idiots!” proclaimed the young man in black. “The third great ideological failure of the twentieth century. The same stuff. Fascism. Communism. Feminism. All designed to turn one group of people against another group of people. The good Aryans against the bad others who oppress them. The good poor against the bad rich who oppress them. The good women against the bad men who oppress them. The holder of ideology is pure and good and clean and the other wicked. But do you know who is wicked? Whoever imagines himself to be pure is wicked! I am pure, you are wicked… There is no human purity! It does not exist! It cannot exist!” he said, kicking the file cabinet for emphasis. “It must not and should not exist! Because it’s a lie. … Ideological tyranny. It’s the disease of the century. The ideology institutionalizes the pathology. In twenty years there will be a new ideology. People against dogs. The dogs are to blame for our lives as people. Then after dogs there will be what? Who will be to blame for corrupting our purity?” (620-1)

It’s noteworthy that this rant is made by a character other than Sabbath. By this point in the novel, we know Sabbath wouldn’t speak so artlessly—unless he was really frightened or angry. As effective and entertaining an indictment of “Ideological tyranny” as Sabbath’s Theater is, we shouldn’t expect to encounter anywhere in a novel by a storyteller as masterful as Roth a character operating as a mere mouthpiece for some argument. Even Donald himself, Sabbath quickly gleans, isn’t simply spouting off; he’s trying to impress one of the nurses.

            And it’s not just the political ideologies that conscript complicated human beings into simple roles as oppressors and victims. The pseudoscientific psychological theories that both inform literary scholarship and guide many non-scholars through life crises and relationship difficulties function according to the same fundamental dynamic of tribalism; they simply substitute abusive family members for more generalized societal oppression and distorted or fabricated crimes committed in the victim’s childhood for broader social injustices. Sabbath is forced to contend with this particular brand of depersonalizing ideology because his second wife, Roseanna, picks it up through her AA meetings, and then becomes further enmeshed in it through individual treatment with a therapist named Barbara. Sabbath, who considers himself a failure, and who is carrying on an affair with the woman we meet in the opening lines of the novel, is baffled as to why Roseanna would stay with him. Her therapist provides an answer of sorts.

But then her problem with Sabbath, the “enslavement,” stemmed, according to Barbara, from her disastrous history with an emotionally irresponsible mother and a violent alcoholic father for both of whom Sabbath was the sadistic doppelganger. (454)

Roseanna’s father was a geology professor who hanged himself when she was a young teenager. Sabbath is a former puppeteer with crippling arthritis. Naturally, he’s confused by the purported identity of roles.

These connections—between the mother, the father, and him—were far clearer to Barbara than they were to Sabbath; if there was, as she liked to put it, a “pattern” in it all, the pattern eluded him. In the midst of a shouting match, Sabbath tells his wife, “As for the ‘pattern’ governing a life, tell Barbara it’s commonly called chaos” (455).

When she protests, “You are shouting at me like my father,” Sabbath asserts his individuality: “The fuck that’s who I’m shouting at you like! I’m shouting at you like myself!” (459). Whether you see his resistance as heroic or not probably depends on how much credence you give to those psychological theories.

            From the opening lines of Sabbath’s Theater when we’re presented with the dilemma of the teary-eyed mistress demanding monogamy in their adulterous relationship, the simple response would be to stand in easy judgment of Sabbath and, as Wallace did with Updike’s character, declare him an asshole. It’s clear that he loves this woman, a Croatian immigrant named Drenka, a character who at points steals the show even from the larger-than-life protagonist. And it’s clear his fidelity would mean a lot to her. Is his freedom to fuck other women really so important? Isn’t he just being selfish? But only a few pages later our easy judgment suddenly gets more complicated:

As it happened, since picking up Christa several years back Sabbath had not really been the adventurous libertine Drenka claimed she could no longer endure, and consequently she already had the monogamous man she wanted, even if she didn’t know it. To women other than her, Sabbath was by now quite unalluring, not just because he was absurdly bearded and obstinately peculiar and overweight and aging in every obvious way but because, in the aftermath of the scandal four years earlier with Kathy Goolsbee, he’d become more dedicated than ever to marshaling the antipathy of just about everyone as though he were, in fact, battling for his rights. (394)

Christa was a young woman who participated in a threesome with Sabbath and Drenka, an encounter to which Sabbath’s only tangible contribution was to hand the younger woman a dildo.

            One of the central dilemmas for a character who loves the thrill of sex, who seeks in it a rekindling of youthful vigor—“the word’s rejuvenation,” Sabbath muses at one point (517)—is that the adrenaline boost born of being in the wrong and the threat of getting caught, what Roiphe calls “the sheer energy of taboo smashing,” becomes ever more indispensable as libido wanes with age. Even before Sabbath ever had to contend with the ravages of aging, he reveled in this added exhilaration that attends any expedition into forbidden realms. What makes Drenka so perfect for him is that she has not just a similarly voracious appetite but a similar fondness for outrageous sex and the smashing of taboo. And it’s this mutual celebration of the verboten that Sabbath is so reluctant to relinquish. Of Drenka, he thinks,

The secret realm of thrills and concealment, this was the poetry of her existence. Her crudeness was the most distinguishing force in her life, lent her life its distinction. What was she otherwise? What was he otherwise? She was his last link with another world, she and her great taste for the impermissible. As a teacher of estrangement from the ordinary, he had never trained a more gifted pupil; instead of being joined by the contractual they were interconnected by the instinctual and together could eroticize anything (except their spouses). Each of their marriages cried out for a countermarriage in which the adulterers attack their feelings of captivity. (395)

Those feelings of captivity, the yearnings to experience the flow of the old juices, are anything but adolescent, as Wallace suggests of them; adolescents have a few decades before they have to worry about dwindling arousal. Most of them have the opposite problem.

            The question of how readers are supposed to feel about a character like Sabbath doesn’t have any simple answers. He’s an asshole at several points in the novel, but at several points he’s not. One of the reasons he’s so compelling is that working out what our response to him should be poses a moral dilemma of its own. Whether or not we ultimately decide that adultery is always and everywhere wrong, the experience of being privy to Sabbath’s perspective can help us prepare ourselves for our own feelings of captivity, lusting nostalgia, and sexual temptation. Most of us will never find ourselves in a dilemma like Sabbath gets himself tangled in with his friend Norman’s wife, for instance, but it would be to our detriment to automatically discount the old hornball’s insights.

He could discern in her, whenever her husband spoke, the desire to be just a little cruel to Norman, saw her sneering at the best of him, at the very best things in him. If you don’t go crazy because of your husband’s vices, you go crazy because of his virtues. He’s on Prozac because he can’t win. Everything is leaving her except for her behind, which her wardrobe informs her is broadening by the season—and except for this steadfast prince of a man marked by reasonableness and ethical obligation the way others are marked by insanity or illness. Sabbath understood her state of mind, her state of life, her state of suffering: dusk is descending, and sex, our greatest luxury, is racing away at a tremendous speed, everything is racing off at a tremendous speed and you wonder at your folly in having ever turned down a single squalid fuck. You’d give your right arm for one if you are a babe like this. It’s not unlike the Great Depression, not unlike going broke overnight after years of raking it in. “Nothing unforeseen that happens,” the hot flashes inform her, “is likely ever again going to be good.” Hot flashes mockingly mimicking the sexual ecstasies. Dipped, she is, in the very fire of fleeting time. (651)

Welcome to messy, chaotic, complicated life.

Sabbath’s Theater is, in part, Philip Roth’s raised middle finger to the academic moralists whose idiotic and dehumanizing ideologies have spread like a cancer into all the venues where literature is discussed and all the avenues through which it’s produced. Unfortunately, the unrecognized need for culture-wide chemotherapy hasn’t gotten any less dire in the nearly two decades since the novel was published. With literature now drowning in the devouring tide of new media, the tragic course set by the academic custodians of art toward bloodless prudery and impotent sterility in the name of misguided political activism promises to do nothing but ensure the ever greater obsolescence of epistemologically doomed and resoundingly pointless theorizing, making of college courses the places where you go to become, at best, profoundly confused about where you should stand in relation to fiction and fictional characters, and, at worst, a self-righteous demagogue denouncing the chimerical evils allegedly encoded into every text or cultural artifact. All the conspiracy theorizing about the latent evil urgings of literature has amounted to little more than another reason not to read, another reason to tune in to Breaking Bad or Mad Men instead. But the only reason Roth’s novel makes such a successful case is that it at no point allows itself to be reducible to a mere case, just as Sabbath at no point allows himself to be conscripted as a mere argument. We don’t love or hate him; we love and hate him. But we sort of just love him because he leaves us free to do both as we experience his antics, once removed and simulated, but still just as complicatedly eloquent in their message of “Fuck the laudable ideologies”—or not, as the case may be.

Also read:

JUST ANOTHER PIECE OF SLEAZE: THE REAL LESSON OF ROBERT BOROFSKY'S "FIERCE CONTROVERSY"

And:

PUTTING DOWN THE PEN: HOW SCHOOL TEACHES US THE WORST POSSIBLE WAY TO READ LITERATURE

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY


Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

Frans de Waal’s work is always a joy to read: insightful, surprising, and superbly humane. Unfortunately, in his mostly wonderful book, “The Bonobo and the Atheist,” he trots out a familiar series of straw men to level an attack on modern critics of religion—with whom, had he been more diligent in reading their work, he’d find much common ground.

            Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt into the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice actually mean by it is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity. It must be noted that many characters, even outside of fiction, are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, ones who also happen to be practiced authors.

The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace. 

The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.

De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does, and where it should, come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,

Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)

The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.

            What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps took the kind of delight in violence for its own sake that many people imagine they’re disposed to. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.

            Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,

Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)

This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog of course—it would have to be de Waal’s leading character, the bonobo.

            De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble, to a remarkable degree, those of humans experiencing orgasm. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker suggested that the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,

Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)

But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)

            Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the facility de Waal currently directs at Emory University in Atlanta, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4-million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with fangs that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,

What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)

De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggests to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).

            In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,

This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)

What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.

            There’s still a long way to travel, however, from helpful rats and protesting capuchins to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates that are even more closely related to us. Chimps, for instance, have been seen taking inequity aversion an important step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal does acknowledge, though, that there still remains an important step between these behaviors and human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.

This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)

Moral intuitions can often inspire behaviors that seem appallingly immoral to people in modern liberal societies. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).

*****

            In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more so than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.

I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)

For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.

De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,

and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)

At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,

Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)

We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as a primatologist helped him to arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.

             The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it are going to be offensive to atheists and people of actual faith alike. Here’s one example of the bizarre way he writes about religion:

Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)

He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.

            As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science to determine our morals and values instead of religion, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he thinks he holds the psychological key to understanding what he refers to as Hitchens’s “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.

De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,

Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)

The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.

*****

            De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since they represent an attempt to answer, at least in part, the very question of what might take the place of religion in providing guidance in our lives that he poses again and again throughout The Bonobo and the Atheist. De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This, however, is a mischaracterization of Harris’s thesis, though de Waal seems not to realize it. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for any analogy that can be drawn, however tenuously or tendentiously, between it and historical evils. De Waal writes, for instance,

While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)

(Great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is for applying that “science of morality,” the one de Waal proudly contributes to, to the big moral issues our society faces.

            The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,

It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)

Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.

Harris intended for his book to begin a debate he was prepared to actively participate in. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark, and that it is suspiciously unoriginal:

How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.

And we have to ask further: what alternative source of ethical principles do the self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer? In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.

De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them is anywhere near as devastating to Harris’s overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:

female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)

The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.

De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting a generic “utilitarian” for Harris, justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar; he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whim of the majority? It is in precisely this effort to discover—or determine—under which circumstances humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles, the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.

            Harris in fact points out that honoring family bonds probably leads to greater well-being on pages seventy-three and seventy-four of The Moral Landscape, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on “the especially low-hanging fruit of conservative Islam” (74). The incoherence of de Waal’s argument (and the carelessness of his research) is on full display here as he first responds to a point about the genital mutilation of young girls by asking, “Isn’t genital mutilation common in the United States, too, where newborn males are circumcised without their consent?” (90). So cutting off the foreskin of a male’s penis is morally equivalent to cutting off a girl’s clitoris? Supposedly, the equivalence implies that there can’t be any reliable way to determine the relative moral status of religious practices. “Could it be that religion and culture interact to the point that there is no universal morality?” Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.

*****

The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly, but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:

Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)

This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?

            De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?

            Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had he taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the theocratic dictator of Iran put a hit out on Hitchens’s friend, the author Salman Rushdie, for writing a novel he deemed blasphemous. Hitchens writes in Hitch-22,

When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)

Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.

            De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work they do for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,

The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)

            Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,

The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)

And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).

Also read:

TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND


Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia

Napoleon Chagnon was targeted by postmodern activists and anthropologists, who trumped up charges against him and hoped to sacrifice his reputation on the altar of social justice. In retrospect, his case looks like an early warning sign of what would come to be called “cancel culture.” Fortunately, Chagnon was no pushover, and there were a lot of people who saw through the lies being spread about him. “Noble Savages” is in part a great adventure story and in part his response to the tragic degradation of the field of anthropology as it succumbs to the lures of ideology.

Noble Savages by Napoleon Chagnon

    When Arthur Miller adapted the script of The Crucible, his play about the Salem Witch Trials originally written in 1953, for the 1996 film version, he enjoyed additional freedom to work with the up-close visual dimensions of the tragedy. In one added scene, the elderly and frail George Jacobs, whom we first see lifting one of his two walking sticks to wave an unsteady greeting to a neighbor, sits before a row of assembled judges as the young Ruth Putnam stands accusing him of assaulting her. The girl, ostensibly shaken from the encounter and frightened lest some further terror ensue, dramatically recounts her ordeal, saying,

He come through my window and then he lay down upon me. I could not take breath. His body crush heavy upon me, and he say in my ear, “Ruth Putnam, I will have your life if you testify against me in court.”

This quote she delivers in a creaky imitation of the old man’s voice. When one of the judges asks Jacobs what he has to say about the charges, he responds with the glaringly obvious objection: “But, your Honor, I must have these sticks to walk with—how may I come through a window?” The problem with this defense, Jacobs comes to discover, is that the judges believe a person can be in one place physically and in another in spirit. This poor tottering old man has no defense against so-called “spectral evidence.” Indeed, as judges in Massachusetts realized the year after Jacobs was hanged, no one really has any defense against spectral evidence. That’s part of the reason why it was deemed inadmissible in their courts, and immediately thereafter convictions for the crime of witchcraft ceased entirely. 

            Many anthropologists point to the low cost of making accusations as a factor in the evolution of moral behavior. People in small societies like the ones our ancestors lived in for millennia, composed of thirty or forty profoundly interdependent individuals, would have had to balance any payoff that might come from immoral deeds against the detrimental effects to their reputations of having those deeds discovered and word of them spread. As the generations turned over and over again, human nature adapted in response to the social enforcement of cooperative norms, and individuals came to experience what we now recognize as our moral emotions—guilt which is often preëmptive and prohibitive, shame, indignation, outrage, along with the more positive feelings associated with empathy, compassion, and loyalty.

The legacy of this process of reputational selection persists in our prurient fascination with the misdeeds of others and our frenzied, often sadistic, delectation in the spreading of salacious rumors. What Miller so brilliantly dramatizes in his play is the irony that our compulsion to point fingers, which once created and enforced cohesion in groups of selfless individuals, can in some environments serve as a vehicle for our most viciously selfish and inhuman impulses. This is why it is crucial that any accusation, if we as a society are to take it at all seriously, must provide the accused with some reliable means of acquittal. Charges that can neither be proven nor disproven must be seen as meaningless—and should even be counted as strikes against the reputation of the one who levels them. 

            While this principle runs into serious complications in situations with crimes that are as inherently difficult to prove as they are horrific, a simple rule proscribing any glib application of morally charged labels is a crucial yet all-too-popularly overlooked safeguard against unjust calumny. In this age of viral dissemination, the rapidity with which rumors spread, coupled with the absence of any reliable assurance of the validity of messages bearing on the reputations of our fellow citizens, demands that we deliberately work to establish as cultural norms the holding to account of those who make accusations based on insufficient, misleading, or spectral evidence—and the holding to account as well, to only a somewhat lesser degree, of those who help propagate rumors without doing due diligence in assessing their credibility.

            The commentary attending the publication of anthropologist Napoleon Chagnon’s memoir of his research with the Yanomamö tribespeople in Venezuela calls to mind the insidious “Teach the Controversy” PR campaign spearheaded by intelligent design creationists. Coming out against the argument that students should be made aware of competing views on the value of intelligent design inevitably gives the impression of close-mindedness or dogmatism. But only a handful of actual scientists have any truck with intelligent design, a dressed-up rehashing of the old God-of-the-Gaps argument based on the logical fallacy of appealing to ignorance—and that ignorance, it so happens, is grossly exaggerated.

Teaching the controversy would therefore falsely imply epistemological equivalence between scientific views on evolution and those that are not-so-subtly religious. Likewise, in the wake of allegations against Chagnon about mistreatment of the people whose culture he made a career of studying, many science journalists and many of his fellow anthropologists still seem reluctant to stand up for him because they fear doing so would make them appear insensitive to the rights and concerns of indigenous peoples. Instead, they take refuge in what they hope will appear a balanced position, even though the evidence on which the accusations rested has proven to be entirely spectral.

Chagnon’s Noble Savages: My Life Among Two Dangerous Tribes—the Yanomamö and the Anthropologists is destined to be one of those books that garners commentary by legions of outspoken scholars and impassioned activists who never find the time to actually read it. Science writer John Horgan, for instance, has published two blog posts on Chagnon in recent weeks, and neither of them features a single quote from the book. In the first, he boasts of his resistance to bullying, via email, by five prominent sociobiologists who had caught wind of his assignment to review Patrick Tierney’s book Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon and insisted that he condemn the work and discourage anyone from reading it. Against this pressure, Horgan wrote a positive review in which he repeats several horrific accusations that Tierney makes in the book before going on to acknowledge that the author should have worked harder to provide evidence of the wrongdoings he reports on.

But Tierney went on to become an advocate for Indian rights. And his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies.

Horgan probably couldn’t have known at the time (though those five scientists tried to warn him) that giving Tierney credit for prompting debates about Indian rights and ethnographic research methods was a bit like praising Abigail Williams, the original source of accusations of witchcraft in Salem, for sparking discussions about child abuse. But he stands by his endorsement even today, saying,

I have one major regret concerning my review: I should have noted that Chagnon is a much more subtle theorist of human nature than Tierney and other critics have suggested.

As balanced as that sounds, his standing by the review casts serious doubt on his scholarship, not to mention his judgment.

            What did Tierney falsely accuse Chagnon of? There are over a hundred specific accusations in the book (Chagnon says his friend William Irons flagged 106 [446]), but the most heinous whopper comes in the fifth chapter, titled “Outbreak.” In 1968, Chagnon was helping the geneticist James V. Neel collect blood samples from the Yanomamö—in exchange for machetes—so their DNA could be compared with that of people in industrialized societies. While they were in the middle of this project, a measles epidemic broke out, and Neel had discovered through earlier research that the Indians lacked immunity to this disease, so the team immediately began trying to reach all of the Yanomamö villages to vaccinate everyone before the contagion reached them. Most people who knew about the episode considered what the scientists did heroic (and several investigations now support this view). But Tierney, by creating the appearance of pulling together multiple threads of evidence, weaves together a much different story in which Neel and Chagnon are cast as villains instead of heroes. (The version of the book I’ll quote here is somewhat incoherent because it went through some revisions in attempts to deal with holes in the evidence that were already emerging pre-publication.)

First, Tierney misinterprets some passages from Neel’s books as implying an espousal of eugenic beliefs about the Indians, namely that by remaining closer to nature and thus subject to ongoing natural selection they retain all-around superior health, including better immunity. Next, Tierney suggests that the vaccine Neel chose, Edmonston B, which is usually administered with a drug called gamma globulin to minimize reactions like fevers, is so similar to the measles virus that in the immune-suppressed Indians it actually ended up causing a suite of symptoms that was indistinguishable from full-blown measles. The implication is clear. Tierney writes,

Chagnon and Neel described an effort to “get ahead” of the measles epidemic by vaccinating a ring around it. As I have reconstructed it, the 1968 outbreak had a single trunk, starting at the Ocamo mission and moving up the Orinoco with the vaccinators. Hundreds of Yanomami died in 1968 on the Ocamo River alone. At the time, over three thousand Yanomami lived on the Ocamo headwaters; today there are fewer than two hundred. (69)

At points throughout the chapter, Tierney seems to be backing off the worst of his accusations; he writes, “Neel had no reason to think Edmonston B could become transmissible. The outbreak took him by surprise.” But even in this scenario Tierney suggests serious wrongdoing: “Still, he wanted to collect data even in the midst of a disaster” (82).

Earlier in the chapter, though, Tierney makes a much more serious charge. Pointing to a time when Chagnon showed up at a Catholic mission after having depleted his stores of gamma globulin and nearly run out of Edmonston B, Tierney suggests the shortage of drugs was part of a deliberate plan. “There were only two possibilities,” he writes,

Either Chagnon entered the field with only forty doses of virus; or he had more than forty doses. If he had more than forty, he deliberately withheld them while measles spread for fifteen days. If he came to the field with only forty doses, it was to collect data on a small sample of Indians who were meant to receive the vaccine without gamma globulin. Ocamo was a good choice because the nuns could look after the sick while Chagnon went on with his demanding work. Dividing villages into two groups, one serving as a control, was common in experiments and also a normal safety precaution in the absence of an outbreak. (60)

Thus Tierney implies that Chagnon was helping Neel test his eugenics theory and in the process became complicit in causing an epidemic, maybe deliberately, that killed hundreds of people. Tierney claims he isn’t sure how much Chagnon knew about the experiment; he concedes at one point that “Chagnon showed genuine concern for the Yanomami,” before adding, “At the same time, he moved quickly toward a cover-up” (75).

            Near the end of his “Outbreak” chapter, Tierney reports on a conversation with Mark Papania, a measles expert at the Centers for Disease Control and Prevention in Atlanta. After running his hypothesis about how Neel and Chagnon caused the epidemic with the Edmonston B vaccine by Papania, Tierney claims he responded, “Sure, it’s possible.” He goes on to say that while Papania informed him there were no documented cases of the vaccine becoming contagious he also admitted that no studies of adequate sensitivity had been done. “I guess we didn’t look very hard,” Tierney has him saying (80). But evolutionary psychologist John Tooby got a much different answer when he called Papania himself. In an article published on Slate—nearly three weeks before Horgan published his review, incidentally—Tooby writes that the epidemiologist had a very different attitude to the adequacy of past safety tests from the one Tierney reported:

it turns out that researchers who test vaccines for safety have never been able to document, in hundreds of millions of uses, a single case of a live-virus measles vaccine leading to contagious transmission from one human to another—this despite their strenuous efforts to detect such a thing. If attenuated live virus does not jump from person to person, it cannot cause an epidemic. Nor can it be planned to cause an epidemic, as alleged in this case, if it never has caused one before.

Tierney also cites Samuel Katz, the pediatrician who developed Edmonston B, at a few points in the chapter to support his case. But Katz responded to requests from the press to comment on Tierney’s scenario by saying,

the use of Edmonston B vaccine in an attempt to halt an epidemic was a justifiable, proven and valid approach. In no way could it initiate or exacerbate an epidemic. Continued circulation of these charges is not only unwarranted, but truly egregious.

Tooby included a link to Katz’s response, along with a report from science historian Susan Lindee of her investigation of Neel’s documents disproving many of Tierney’s points. It seems Horgan should’ve paid a bit more attention to those emails he was receiving.

Further investigations have shown that pretty much every aspect of Tierney’s characterization of Neel’s beliefs and research agenda was completely wrong. The report from a task force investigation by the American Society of Human Genetics gives a sense of how Tierney, while giving the impression of having conducted meticulous research, was in fact perpetrating fraud. The report states,

Tierney further suggests that Neel, having recognized that the vaccine was the cause of the epidemic, engineered a cover-up. This is based on Tierney’s analysis of audiotapes made at the time. We have reexamined these tapes and provide evidence to show that Tierney created a false impression by juxtaposing three distinct conversations recorded on two separate tapes and in different locations. Finally, Tierney alleges, on the basis of specific taped discussions, that Neel callously and unethically placed the scientific goals of the expedition above the humanitarian need to attend to the sick. This again is shown to be a complete misrepresentation, by examination of the relevant audiotapes as well as evidence from a variety of sources, including members of the 1968 expedition.

This report was published a couple years after Tierney’s book hit the shelves. But enough evidence was already available to anyone willing to do due diligence in checking out the credibility of the author and his claims; that the book nevertheless made it onto the shortlist for the National Book Award is indicative of a larger problem.

*****

With the benefit of hindsight and a perspective from outside the debate (though I’ve been following the sociobiology controversy for a decade and a half, I wasn’t aware of Chagnon’s longstanding and personal battles with other anthropologists until after Tierney’s book was published), it seems to me that once Tierney had been caught misrepresenting the evidence in support of such an atrocious accusation, his book should have been removed from the shelves, and all his reporting should have been dismissed entirely. Tierney himself should have been made to answer for his offense. But for some reason none of this happened.

The anthropologist Marshall Sahlins, for instance, to whom Chagnon has been a bête noire for decades, brushed off any concern for Tierney’s credibility in his review of Darkness in El Dorado, published a full month after Horgan’s, apparently because he couldn’t resist the opportunity to write about how much he hates his celebrated colleague. Sahlins’s review is titled “Guilty not as Charged,” which is already enough to cast doubt on his capacity for fairness or rationality. Here’s how he sums up the issue of Tierney’s discredited accusation in relation to the rest of the book:

The Kurtzian narrative of how Chagnon achieved the political status of a monster in Amazonia and a hero in academia is truly the heart of Darkness in El Dorado. While some of Tierney’s reporting has come under fire, this is nonetheless a revealing book, with a cautionary message that extends well beyond the field of anthropology. It reads like an allegory of American power and culture since Vietnam.

Sahlins apparently hasn’t read Conrad’s novel Heart of Darkness or he’d know Chagnon is no Kurtz. And Vietnam? The next paragraph goes into more detail about this “allegory,” as if Sahlins’s conscripting of him into service as a symbol of evil somehow establishes his culpability. To get an idea of how much Chagnon actually had to do with Vietnam, we can look at a passage early in Noble Savages about how disconnected from the outside world he was while doing his field work:

I was vaguely aware when I went into the Yanomamö area in late 1964 that the United States had sent several hundred military advisors to South Vietnam to help train the South Vietnamese army. When I returned to Ann Arbor in 1966 the United States had some two hundred thousand combat troops there. (36)

But Sahlins’s review, as bizarre as it is, is important because it’s representative of the types of arguments Chagnon’s fiercest anthropological critics make against his methods and his theories, but mainly against him personally. In another recent comment on how “The Napoleon Chagnon Wars Flare Up Again,” Barbara J. King betrays a disconcerting and unscholarly complacence in quoting other, rival anthropologists’ words as evidence of Chagnon’s own thinking. Alas, King too is weighing in on the flare-up without having read the book, or anything else by the author, it seems. And she’s also at pains to appear fair and balanced, even though the sources she cites against Chagnon are neither, nor are they the least bit scientific. Of Sahlins’s review of Darkness in El Dorado, she writes,

The Sahlins essay from 2000 shows how key parts of Chagnon’s argument have been “dismembered” scientifically. In a major paper published in 1988, Sahlins says, Chagnon left out too many relevant factors that bear on Ya̧nomamö males’ reproductive success to allow any convincing case for a genetic underpinning of violence.

It’s a bit sad that King feels it’s okay to post on a site as popular as NPR and quote a criticism of a study she clearly hasn’t read—she could have downloaded the pdf of Chagnon’s landmark paper “Life Histories, Blood Revenge, and Warfare in a Tribal Population” for free. Did Chagnon claim in the study that it proved violence had a genetic underpinning? It’s difficult to tell what the phrase “genetic underpinning” even means in this context.

To lend further support to Sahlins’s case, King selectively quotes another anthropologist, Jonathan Marks. The lines come from a rant on his blog (I urge you to check it out for yourself if you’re at all suspicious about the aptness of the term rant to describe the post) about a supposed takeover of anthropology by genetic determinism. But King leaves off the really interesting sentence at the end of the remark. Here’s the whole passage explaining why Marks thinks Chagnon is an incompetent scientist:

Let me be clear about my use of the word “incompetent”. His methods for collecting, analyzing and interpreting his data are outside the range of acceptable anthropological practices. Yes, he saw the Yanomamo doing nasty things. But when he concluded from his observations that the Yanomamo are innately and primordially “fierce” he lost his anthropological credibility, because he had not demonstrated any such thing. He has a right to his views, as creationists and racists have a right to theirs, but the evidence does not support the conclusion, which makes it scientifically incompetent.

What Marks is saying here is not that he has evidence of Chagnon doing poor field work; rather, Marks dismisses Chagnon merely because of his sociobiological leanings. Note too that the italicized words in the passage are not quotes. This is important because along with the false equation of sociobiology with genetic determinism this type of straw man underlies nearly all of the attacks on Chagnon. Finally, notice how Marks slips into the realm of morality as he tries to traduce Chagnon’s scientific credibility. In case you think the link with creationism and racism is a simple analogy—like the one I used myself at the beginning of this essay—look at how Marks ends his rant:

So on one side you’ve got the creationists, racists, genetic determinists, the Republican governor of Florida, Jared Diamond, and Napoleon Chagnon–and on the other side, you’ve got normative anthropology, and the mother of the President. Which side are you on?

How can we take this at all seriously? And why did King misleadingly quote, on a prominent news site, such a seemingly level-headed criticism which in context reveals itself as anything but level-headed? I’ll risk another analogy here and point out that Marks’s comments about genetic determinism taking over anthropology are similar in both tone and intellectual sophistication to Glenn Beck’s comments about how socialism is taking over American politics.

             King also links to a review of Noble Savages that was published in the New York Times in February, and this piece is even harsher to Chagnon. After repeating Tierney’s charge about Neel deliberately causing the 1968 measles epidemic and pointing out it was disproved, anthropologist Elizabeth Povinelli writes of the American Anthropological Association investigation that,

The committee was split over whether Neel’s fervor for observing the “differential fitness of headmen and other members of the Yanomami population” through vaccine reactions constituted the use of the Yanomamö as a Tuskegee-like experimental population.

Since this allegation has been completely discredited by the American Society of Human Genetics, among others, Povinelli’s repetition of it is irresponsible, as was the Times’ failure to properly vet the facts in the article.

Try as I might to remain detached from either side as I continue to research this controversy (and I’ve never met any of these people), I have to say I found Povinelli’s review deeply offensive. The straw men she shamelessly erects and the quotes she takes out of context, all in the service of an absurdly self-righteous and substanceless smear, allow no room whatsoever for anything answering to the name of compassion for a man who was falsely accused of complicity in an atrocity. And in her zeal to impugn Chagnon she propagates a colorful and repugnant insult of her own creation, which she misattributes to him. She writes,

Perhaps it’s politically correct to wonder whether the book would have benefited from opening with a serious reflection on the extensive suffering and substantial death toll among the Yanomamö in the wake of the measles outbreak, whether or not Chagnon bore any responsibility for it. Does their pain and grief matter less even if we believe, as he seems to, that they were brutal Neolithic remnants in a land that time forgot? For him, the “burly, naked, sweaty, hideous” Yanomamö stink and produce enormous amounts of “dark green snot.” They keep “vicious, underfed growling dogs,” engage in brutal “club fights” and—God forbid!—defecate in the bush. By the time the reader makes it to the sections on the Yanomamö’s political organization, migration patterns and sexual practices, the slant of the argument is evident: given their hideous society, understanding the real disaster that struck these people matters less than rehabilitating Chagnon’s soiled image.

In other words, Povinelli’s response to Chagnon’s “harrowing” ordeal is to effectively say, Maybe you’re not guilty of genocide, but you’re still guilty for not quitting your anthropology job and becoming a forensic epidemiologist. Anyone who actually reads Noble Savages will see quite clearly that the “slant” Povinelli describes, along with those caricatured “brutal Neolithic remnants,” must have flown in through her window right next to George Jacobs.

            Povinelli does characterize one aspect of Noble Savages correctly when she complains about its “Manichean rhetorical structure,” with the bad Rousseauian, Marxist, postmodernist cultural anthropologists—along with the corrupt and PR-obsessed Catholic missionaries—on one side, and the good Hobbesian, Darwinian, scientific anthropologists on the other, though it’s really just the scientific part he’s concerned with. I actually expected to find a more complicated, less black-and-white debate taking place when I began looking into the attacks on Chagnon’s work—and on Chagnon himself. But what I ended up finding was that Chagnon’s description of the division, at least with regard to the anthropologists (I haven’t researched his claims about the missionaries), is spot-on, and Povinelli’s repulsive review is a case in point.

This isn’t to say that there aren’t legitimate scientific disagreements about sociobiology. In fact, Chagnon writes about how one of his heroes is “calling into question some of the most widely accepted views” as early as his dedication page, referring to E.O. Wilson’s latest book The Social Conquest of Earth. But what Sahlins, Marks, and Povinelli offer is neither legitimate nor scientific. These commentators really are, as Chagnon suggests, representative of a subset of cultural anthropologists completely given over to a moralizing hysteria. Their scholarship is as dishonest as it is defamatory, their reasoning rests on guilt by free-association and the tossing up and knocking down of the most egregious of straw men, and their tone creates the illusion of moral certainty coupled with a long-suffering exasperation with entrenched institutionalized evils. For these hysterical moralizers, it seems any theory of human behavior that involves evolution or biology represents the same kind of threat as witchcraft did to the people of Salem in the 1690s, or as communism did to McCarthyites in the 1950s. To combat this chimerical evil, the presumed righteous ends justify the deceitful means.

The unavoidable conclusion regarding why Darkness in El Dorado wasn’t dismissed outright, as it should have been, is that even though it has been established that Chagnon didn’t commit any of the crimes Tierney accused him of, as far as his critics are concerned, he may as well have. Somehow cultural anthropologists have come to occupy a bizarre culture of their own in which charging a colleague with genocide doesn’t seem like a big deal. Before Tierney’s book hit the shelves, two anthropologists, Terence Turner and Leslie Sponsel, co-wrote an email to the American Anthropological Association which was later sent to several journalists. Turner and Sponsel subsequently claimed the message was simply a warning about the “impending scandal” that would result from the publication of Darkness in El Dorado. But the hyperbole and suggestive language make it read more like a publicity notice than a warning. “This nightmarish story—a real anthropological heart of darkness beyond the imagining of even a Josef Conrad (though not, perhaps, a Josef Mengele)”—is it too much to ask of those who are so fond of referencing Joseph Conrad that they actually read his book?—“will be seen (rightly in our view) by the public, as well as most anthropologists, as putting the whole discipline on trial.” As it turned out, though, the only one who was put on trial, by the American Anthropological Association—though officially it was only an “inquiry”—was Napoleon Chagnon.

Chagnon’s old academic rivals, many of whom claim their problem with him stems from the alleged devastating impact of his research on Indians, fail to appreciate the gravity of Tierney’s accusations. Their blasé response to the author being exposed as a fraud gives the impression that their eagerness to participate in the pile-on has little to do with any concern for the Yanomamö people. Instead, they embraced Darkness in El Dorado because it provided good talking points in the campaign against their dreaded nemesis Napoleon Chagnon. Sahlins, for instance, is strikingly cavalier about the personal effects of Tierney’s accusations in the review cited by King and Horgan:

The brouhaha in cyberspace seemed to help Chagnon’s reputation as much as Neel’s, for in the fallout from the latter’s defense many academics also took the opportunity to make tendentious arguments on Chagnon’s behalf. Against Tierney’s brief that Chagnon acted as an anthro-provocateur of certain conflicts among the Yanomami, one anthropologist solemnly demonstrated that warfare was endemic and prehistoric in the Amazon. Such feckless debate is the more remarkable because most of the criticisms of Chagnon rehearsed by Tierney have been circulating among anthropologists for years, and the best evidence for them can be found in Chagnon’s writings going back to the 1960s.

Sahlins goes on to offer his own sinister interpretation of Chagnon’s writings, using the same straw man and guilt-by-free-association techniques common to anthropologists in the grip of moralizing hysteria. But I can’t help wondering why anyone would take a word he says seriously after he suggests that being accused of causing a deadly epidemic helped Neel’s and Chagnon’s reputations.

*******

            Marshall Sahlins recently made news by resigning from the National Academy of Sciences in protest against the organization’s election of Chagnon to its membership and its partnerships with the military. In explaining his resignation, Sahlins insists that Chagnon, based on the evidence of his own writings, did serious harm to the people whose culture he studied. Sahlins also complains that Chagnon’s sociobiological ideas about violence are so wrongheaded that they serve to “discredit the anthropological discipline.” To back up his objections, he refers interested parties to that same review of Darkness in El Dorado King links to on her post.

Though Sahlins explains his moral and intellectual objections separately, he seems to believe that theories of human behavior based on biology are inherently immoral, as if theorizing that violence has “genetic underpinnings” is no different from claiming that violence is inevitable and justifiable. This is why Sahlins can’t discuss Chagnon without reference to Vietnam. He writes in his review,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Sahlins doesn’t provide any citations to back up this charge—he’s quite clearly not the least bit concerned with fairness or solid scholarship—and, based on what Chagnon writes in Noble Savages, this fantasy of “gaining control” originates in the mind of Sahlins, not in the writings of Chagnon.

For instance, Chagnon writes of being made the butt of an elaborate joke several Yanomamö conspired to play on him by giving him fake names for people in their village (like Hairy Cunt, Long Dong, and Asshole). When he mentions these names to people in a neighboring village, they think it’s hilarious. “My face flushed with embarrassment and anger as the word spread around the village and everybody was laughing hysterically.” And this was no minor setback: “I made this discovery some six months into my fieldwork!” (66) Contrary to the despicable caricature Povinelli provides as well, Chagnon writes admiringly of the Yanomamö’s “wicked humor,” and how “They enjoyed duping others, especially the unsuspecting and gullible anthropologist who lived among them” (67). Another gem comes from an episode in which he tries to treat a rather embarrassing fungal infection: “You can’t imagine the hilarious reaction of the Yanomamö watching the resident fieldworker in a most indescribable position trying to sprinkle foot powder onto his crotch, using gravity as a propellant” (143).

            The bitterness, outrage, and outright hatred directed at Chagnon, alongside the sheer absence of evidence that he’s done anything wrong, seem completely insane until you consider that this preeminent anthropologist falls afoul of all the –isms that haunt the fantastical armchair obsessions of postmodern pseudo-scholars. Chagnon stands as a living symbol of the white colonizer exploiting indigenous people and resources (colonialism); he propagates theories that can be read as supportive of fantasies about individual and racial superiority (Social Darwinism, racism); he reports on tribal warfare and cruelty toward women, with the implication that these evils are encoded in our genes (neoconservatism, sexism, biological determinism). It should be clear that all of this is nonsense: any exploitation is merely alleged and likely outweighed by efforts at vaccination against diseases introduced by missionaries and gold miners; sociobiology doesn’t focus on racial differences, and superiority is a scientifically meaningless term; and the fact that genes play a role in some behavior implies neither that the behavior is moral nor that it is inevitable. The truly evil –ism at play in the campaign against Chagnon is postmodernism—an ideology which functions as little more than a factory for the production of false accusations.

            There are two main straw men that are bound to be rolled out by postmodern critics of evolutionary theories of behavior in any discussion of morally charged topics. The first is the gene-for misconception.

Every anthropologist, sociobiologist, and evolutionary psychologist knows that there is no gene for violence and warfare in the sense that would mean everyone born with a particular allele will inevitably grow up to be physically aggressive. Yet, in any discussion of the causes of violence, or any other issue in which biology is implicated, critics fall all over themselves trying to catch their opponents out for making this mistake, and they pretend that by doing so they’re defeating an attempt to undermine efforts to make the world more peaceful. It so happens that scientists actually have discovered a gene variation, known popularly as “the warrior gene,” that increases the likelihood that an individual carrying it will engage in aggressive behavior—but only if that individual experiences a traumatic childhood. Having a gene variation associated with a trait only ever means someone is more likely to express that trait, and there will almost always be other genes and several environmental factors contributing to the overall likelihood.
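
To see what “more likely” means here in statistical terms, consider a minimal sketch, in Python, of a gene-environment interaction. The coefficients are invented purely for illustration and are not estimates from the actual “warrior gene” (MAOA) studies; the structure is the point: the variant’s main effect barely moves the probability on its own, while the gene-by-trauma interaction term carries most of the effect.

```python
import math

def p_aggression(has_variant: bool, childhood_trauma: bool) -> float:
    """Toy logistic model of a gene-environment interaction.

    All coefficients are made up for illustration; they are NOT
    estimates from the "warrior gene" (MAOA) literature.
    """
    intercept = -2.0   # baseline log-odds of aggressive behavior
    b_gene = 0.1       # main effect of the variant: small on its own
    b_trauma = 0.8     # main effect of a traumatic childhood
    b_gxe = 1.2        # gene-by-environment interaction: the big term
    g, t = int(has_variant), int(childhood_trauma)
    log_odds = intercept + b_gene * g + b_trauma * t + b_gxe * g * t
    return 1 / (1 + math.exp(-log_odds))

for g in (False, True):
    for t in (False, True):
        print(f"variant={g!s:<5} trauma={t!s:<5} "
              f"P(aggression)={p_aggression(g, t):.2f}")
```

On these made-up numbers, carrying the variant without trauma nudges the probability from about 0.12 to 0.13, while the variant plus trauma pushes it past 0.5: a shifted likelihood, never a fixed destiny, which is the only kind of claim the “warrior gene” finding supports.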

You can be reasonably sure that if a critic is taking a sociobiologist or an evolutionary psychologist to task for suggesting a direct one-to-one correspondence between a gene and a behavior, that critic is being either careless or purposely misleading. In trying to bring about a more peaceful world, it’s far more effective to study the actual factors that contribute to violence than it is to write moralizing criticisms of scientific colleagues. The charge that evolutionary approaches can only be used to support conservative or reactionary views of society isn’t just a misrepresentation of sociobiological theories; it’s also empirically false—surveys demonstrate that grad students in evolutionary anthropology are overwhelmingly liberal in their politics, just as liberal in fact as anthropology students in non-evolutionary concentrations.

Another thing anyone who has taken a freshman anthropology course knows, but that anti-evolutionary critics scramble to take sociobiologists to task for not understanding, is that people who live in foraging or tribal cultures cannot be treated as perfect replicas of our Pleistocene ancestors, or, as Povinelli calls them, “prehistoric time capsules.” Hunters and gatherers are not “living fossils,” because they’ve been evolving just as long as people in industrialized societies, their histories and environments are unique, and it’s almost impossible for them to avoid being impacted by outside civilizations. If you flew two groups of foragers from different regions each into the territory of the other, you would learn quite quickly that each group’s culture is intricately adapted to the environment it originally inhabited. This does not mean, however, that evidence about how foraging and tribal peoples live is irrelevant to questions about human evolution.

As different as those two groups are, both are probably living lives much more similar to those of our ancestors than anyone in an industrialized society is. What evolutionary anthropologists and psychologists tend to be most interested in are the trends that emerge when several of these cultures are compared to one another. The Yanomamö actually subsist largely on slash-and-burn agriculture, and they live in groups much larger than those of most foraging peoples. Their culture and demographic patterns may therefore provide clues to how larger and more stratified societies developed after millennia of evolution in small, mobile bands. But, again, no one is suggesting the Yanomamö are somehow interchangeable with the people who first made this transition to more complex social organization.

The prehistoric time-capsule straw man often goes hand-in-hand with an implication that the anthropologists supposedly making the blunder see the people whose culture they study as somehow inferior, somehow less human than people who live in industrialized civilizations. It seems like a short step from this subtle dehumanization to the kind of wholesale exploitation indigenous peoples are often made to suffer. But the sad truth is that there are plenty of economic, religious, and geopolitical forces working against the preservation of indigenous cultures and the protection of indigenous people’s rights, more than enough to make scapegoating scientists who gather cultural and demographic information completely unnecessary. And you can bet Napoleon Chagnon is, if anything, more outraged by the mistreatment of the Yanomamö than most of the activists who falsely accuse him of complicity, because he knows so many of them personally. Chagnon is particularly critical of Brazilian gold miners and Salesian missionaries, both of whom, it seems, have far more incentive to disrespect the Yanomamö culture (by supplanting their religion and moving them closer to civilization) and to ravage the territory they inhabit. The Salesians’ reprisals for his criticisms, which entailed pulling strings to keep him out of the territory and efforts to create a public image of him as a menace, eventually provided fodder for his critics back home as well.

*******

In an article published in the journal American Anthropologist in 2004 titled Guilt by Association, about the American Anthropological Association’s compromised investigation of Tierney’s accusations against Chagnon, Thomas Gregor and Daniel Gross describe “chains of logic by which anthropological research becomes, at the end of an associative thread, an act of misconduct” (689). Quoting Defenders of the Truth, sociologist Ullica Segerstrale’s indispensable 2000 book on the sociobiology debate, Gregor and Gross explain that Chagnon’s postmodern accusers relied on a rhetorical strategy common among critics of evolutionary theories of human behavior—a strategy that produces something startlingly indistinguishable from spectral evidence. Segerstrale writes,

In their analysis of their target’s texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum moral guilt might be attributed to the perpetrator of this claim. (206)

She goes on to cite a “glaring” example of how a scholar drew an imaginary line from sociobiology to Nazism, and then connected it to fascist behavioral control, even though none of these links were supported by any evidence (207). Gregor and Gross describe how this postmodern version of spectral evidence was used to condemn Chagnon.

In the case at hand, for example, the Report takes Chagnon to task for an article in Science on revenge warfare, in which he reports that “Approximately 30% of Yanomami adult male deaths are due to violence”(Chagnon 1988:985). Chagnon also states that Yanomami men who had taken part in violent acts fathered more children than those who had not. Such facts could, if construed in their worst possible light, be read as suggesting that the Yanomami are violent by nature and, therefore, undeserving of protection. This reading could give aid and comfort to the opponents of creating a Yanomami reservation. The Report, therefore, criticizes Chagnon for having jeopardized Yanomami land rights by publishing the Science article, although his research played no demonstrable role in the demarcation of Yanomami reservations in Venezuela and Brazil. (689)

The task force had found that Chagnon was guilty—even though it was nominally just an “inquiry” and had no official grounds for pronouncing on any misconduct—of harming the Indians by portraying them negatively. Gregor and Gross, however, sponsored a ballot at the AAA to rescind the organization’s acceptance of the report; in 2005, it was voted on by the membership and passed by a margin of 846 to 338. “Those five years,” Chagnon writes of the time between that email warning about Tierney’s book and the vote finally exonerating him, “seem like a blurry bad dream” (450).

            Anthropological fieldwork has changed dramatically since Chagnon’s early research in Venezuela. There was legitimate concern about the impact of trading manufactured goods like machetes for information, and you can read about some of the fracases it fomented among the Yanomamö in Noble Savages. The practice is now prohibited by the ethical guidelines of ethnographic field research. The dangers to isolated or remote populations from communicable diseases must also be considered while planning any expeditions to study indigenous cultures. But Chagnon was entering the Ocamo region after many missionaries and just before many gold miners. And we can’t hold him accountable for disregarding rules that didn’t exist at the time. Sahlins, however, echoing the way Tierney perverted Neel and Chagnon’s race to immunize the Indians into making the two men appear to be the source of the contagion, accuses Chagnon of causing much of the violence he witnessed and reported by spreading around his goods.

Hostilities thus tracked the always-changing geopolitics of Chagnon-wealth, including even pre-emptive attacks to deny others access to him. As one Yanomami man recently related to Tierney: “Shaki [Chagnon] promised us many things, and that’s why other communities were jealous and began to fight against us.”

Aside from the fact that some Yanomamö men had just returned from a raid the very first time Chagnon entered one of their villages, and the fact that the source of this quote has been discredited, Sahlins is basing his elaborate accusation on some pretty paltry evidence.

            Sahlins also insists that the “monster in Amazonia” couldn’t possibly have figured out a way to learn the names and relationships of the people he studied without aggravating intervillage tensions (thus implicitly conceding those tensions already existed). The Yanomamö have a taboo against saying the names of other adults, similar to our own custom of addressing people we’ve just met by their titles and last names, but with much graver consequences for violations. This is why Chagnon had to confirm the names of people in one tribe by asking about them in another, the practice that led to his discovery of the prank that was played on him. Sahlins uses Tierney’s reporting as the only grounds for his speculations on how disruptive this was to the Yanomamö. And, in the same way he suggested there was some moral equivalence between Chagnon going into the jungle to study the culture of a group of Indians and the US military going into the jungles to engage in a war against the Vietcong, he fails to distinguish between the Nazi practice of marking Jews and Chagnon’s practice of writing numbers on people’s arms to keep track of their problematic names. Quoting Chagnon, Sahlins writes,

“I began the delicate task of identifying everyone by name and numbering them with indelible ink to make sure that everyone had only one name and identity.” Chagnon inscribed these indelible identification numbers on people’s arms—barely 20 years after World War II.

This juvenile innuendo calls to mind Jon Stewart’s observation that it’s not until someone in Washington makes the first Hitler reference that we know a real political showdown has begun (and Stewart has had to make the point a few times again since then).

One of the things that makes this type of trashy pseudo-scholarship so insidious is that it often creates an indelible impression of its own. Anyone who reads Sahlins’ essay could be forgiven for suspecting that Chagnon’s writing numbers on people really was a sign that he was dehumanizing them. Fortunately, Chagnon’s own accounts go a long way toward dispelling this suspicion. In one passage, he describes how he made the naming and numbering into a game for this group of people who knew nothing about writing:

I had also noted after each name the item that person wanted me to bring on my next visit, and they were surprised at the total recall I had when they decided to check me. I simply looked at the number I had written on their arm, looked the number up in my field book, and then told the person precisely what he had requested me to bring for him on my next trip. They enjoyed this, and then they pressed me to mention the names of particular people in the village they would point to. I would look at the number on the arm, look it up in my field book, and whisper his name into someone’s ear. The others would anxiously and eagerly ask if I got it right, and the informant would give an affirmative quick raise of the eyebrows, causing everyone to laugh hysterically. (157)

Needless to say, this is a far cry from using the labels to efficiently herd people into cargo trains to transport them to concentration camps and gas chambers. Sahlins disgraces himself by suggesting otherwise and by not distancing himself from Tierney when it became clear that his atrocious accusations were meritless.

            Which brings us back to John Horgan. One week after the post in which he bragged about standing up to five email bullies who were urging him not to endorse Tierney’s book, and took the opportunity to say he still stands by the mostly positive review, he published another post on Chagnon, this time about the irony of how close Chagnon’s views on war are to those of Margaret Mead, a towering figure in anthropology whose blank-slate theories sociobiologists often challenge. (Both of Horgan’s posts marking the occasion of Chagnon’s new book—neither of which quotes from it—were probably written for publicity; his own book on war was published last year.) As I read the post, I came across the following bewildering passage:

Chagnon advocates have cited a 2011 paper by bioethicist Alice Dreger as further “vindication” of Chagnon. But to my mind Dreger’s paper—which wastes lots of verbiage bragging about all the research that she’s done and about how close she has gotten to Chagnon—generates far more heat than light. She provides some interesting insights into Tierney’s possible motives in writing Darkness in El Dorado, but she leaves untouched most of the major issues raised by Chagnon’s career.

Horgan’s earlier post was one of the first things I’d read in years about Chagnon and Tierney’s accusations against him. I read Alice Dreger’s report on her investigation of those accusations, and the “inquiry” by the American Anthropological Association that ensued from them, shortly afterward. I kept thinking back to Horgan’s continuing endorsement of Tierney’s book as I read the report because she cites several other reports that establish, at the very least, that there was no evidence to support the worst of the accusations. My conclusion was that Horgan simply hadn’t done his homework. How could he endorse a work featuring such horrific accusations if he knew most of them, the most horrific in particular, had been disproved? But with this second post he was revealing that he knew the accusations were false—and yet he still hasn’t recanted his endorsement.

            If you only read two supplements to Noble Savages, I recommend Dreger’s report and Emily Eakin’s profile of Chagnon in the New York Times. The one qualm I have about Eakin’s piece is that she too sacrifices the principle of presuming innocence in her effort to achieve journalistic balance, quoting Leslie Sponsel, one of the authors of the appalling email that sparked the AAA’s investigation of Chagnon, as saying, “The charges have not all been disproven by any means.” It should go without saying that the burden of proof is on the accuser. It should also go without saying that once the most atrocious of Tierney’s accusations were disproven the discussion of culpability should have shifted its focus away from Chagnon onto Tierney and his supporters. That it didn’t calls to mind the scene in The Crucible when an enraged John Proctor, whose wife is being arrested, shouts in response to an assurance that she’ll be released if she’s innocent—“If she is innocent! Why do you never wonder if Parris be innocent, or Abigail? Is the accuser always holy now? Were they born this morning as clean as God’s fingers?” (73). Aside from Chagnon himself, Dreger is about the only one who realized that Tierney warranted some investigating.

            Eakin echoes Horgan a bit when she faults the “zealous tone” of Dreger’s report. Indeed, at one point, Dreger compares Chagnon’s trial to Galileo’s being called before the Inquisition. The fact is, though, there’s an important similarity. One of the most revealing discoveries of Dreger’s investigation was that the members of the AAA task force knew Tierney’s book was full of false accusations but continued with their inquiry anyway because they were concerned about the organization’s public image. In an email to the sociobiologist Sarah Blaffer Hrdy, Jane Hill, the head of the task force, wrote,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice.

How John Horgan could have read this and still claimed that Dreger’s report “generates more heat than light” is beyond me. I can only guess that his judgment has been distorted by cognitive dissonance.

        To Horgan’s other complaints, that she writes too much about her methods and admits to having become friends with Chagnon, she might respond that, with so much real hysteria surrounding this controversy, and so much commentary reminiscent of the ridiculous rhetoric one hears on cable news, it was important to distinguish her report from all the groundless and recriminatory he-said-she-said. As for the friendship, it came about over the course of Dreger’s investigation. This is important because, for one, it doesn’t suggest any pre-existing bias, and, for another, one of the claims by critics of Chagnon’s work is that the violence he reported was either provoked by the man himself or represented some kind of mental projection of his own bellicose character onto the people he was studying.

Dreger’s friendship with Chagnon shows that he’s not the monster portrayed by those in the grip of moralizing hysteria. And if parts of her report strike many as sententious, it’s probably owing to their unfamiliarity with how ingrained that hysteria has become. It may seem odd that anyone would need to pronounce on the importance of evidence or fairness, but basic principles we usually take for granted were trampled in the frenzy to condemn Chagnon.

If his enemies are going to compare him to Mengele, then a comparison with Galileo seems less extreme.

  Dreger, it seems to me, deserves credit for bringing a sorely needed modicum of sanity to the discussion. And she deserves credit as well for being one of the only people commenting on the controversy who understands the devastating personal impact of such vile accusations. She writes,

Meanwhile, unlike Neel, Chagnon was alive to experience what it is like to be drawn-and-quartered in the international press as a Nazi-like experimenter responsible for the deaths of hundreds, if not thousands, of Yanomamö. He tried to describe to me what it is like to suddenly find yourself accused of genocide, to watch your life’s work be twisted into lies and used to burn you.

So let’s make it clear: the scientific controversy over sociobiology and the scandal over Tierney’s discredited book are two completely separate issues. In light of the findings from all the investigations of Tierney’s claims, we should all, no matter our theoretical leanings, agree that Darkness in El Dorado is, in the words of Jane Hill, who headed a task force investigating it, “just a piece of sleaze.” We should still discuss whether it was appropriate or advisable for Chagnon to exchange machetes for information—I’d be interested to hear what he has to say himself, since he describes all kinds of frustrations the practice caused him in his book. We should also still discuss the relative threat of contagion posed by ethnographers versus missionaries, weighed of course against the benefits of inoculation campaigns.

But we shouldn’t discuss any ethical or scientific matter with reference to Darkness in El Dorado or its disgraced author aside from questions like: Why was the hysteria surrounding the book allowed to go so far? Why were so many people willing to scapegoat Chagnon? Why doesn’t anyone—except Alice Dreger—seem at all interested in bringing Tierney to justice in some way for making such outrageous accusations based on misleading or fabricated evidence? What he did is far worse than what Jonah Lehrer or James Frey did, and yet both of those men have publicly acknowledged their dishonesty while no one has put even the slightest pressure on Tierney to publicly admit wrongdoing.

            There’s some justice to be found in how easy Tierney and all the self-righteous pseudo-scholars like Sahlins have made it for future (and present) historians of science to cast them as deluded and unscrupulous villains in the story of a great—but flawed, naturally—anthropologist named Napoleon Chagnon. There’s also justice to be found in how snugly the hysterical moralizers’ tribal animosity toward Chagnon, their dehumanization of him, fits within a sociobiological framework of violence and warfare. One additional bit of justice might come from a demonstration of how easily Tierney’s accusatory pseudo-reporting can be turned inside-out. Tierney at one point in his book accuses Chagnon of withholding names that would disprove the central finding of his famous Science paper, and reading into the fact that the ascendant theories Chagnon criticized were openly inspired by Karl Marx’s ideas, he writes,

Yet there was something familiar about Chagnon’s strategy of secret lists combined with accusations against ubiquitous Marxists, something that traced back to his childhood in rural Michigan, when Joe McCarthy was king. Like the old Yanomami unokais, the former senator from Wisconsin was in no danger of death. Under the mantle of Science, Tailgunner Joe was still firing away—undefeated, undaunted, and blessed with a wealth of offspring, one of whom, a poor boy from Port Austin, had received a full portion of his spirit. (180)

Tierney had no evidence that Chagnon kept any data out of his analysis. Nor did he have any evidence regarding Chagnon’s ideas about McCarthy aside from what he thought he could divine from knowing where he grew up (he cited no surveys of opinions from the town either). His writing is so silly it would be laughable if we didn’t know about all the anguish it caused. Tierney might just as easily have tried to divine Chagnon’s feelings about McCarthyism based on his alma mater. It turns out Chagnon began attending classes at the University of Michigan, the school where he’d write the famous PhD dissertation that would become the classic anthropology text The Fierce People, just two decades after another famous alumnus had passed through, one who actually stood up to McCarthy at a time when he was enjoying the success of a historical play he’d written, an allegory on the dangers of moralizing hysteria, in particular the one we now call the Red Scare. His name was Arthur Miller.

Also read

Can't Win for Losing: Why There are So Many Losers in Literature and Why It Has to Change

And

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins

And

The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy

Dennis Junk Dennis Junk

Sympathizing with Psychos: Why We Want to See Alex Escape His Fate as A Clockwork Orange

Especially in this age when everything, from novels to social media profiles, is scrutinized for political wrongthink, it’s important to ask how so many people can enjoy stories with truly reprehensible protagonists. Anthony Burgess’s “A Clockwork Orange” provides a perfect test case. How can readers possibly sympathize with Alex?

            Phil Connors, the narcissistic weatherman played by Bill Murray in Groundhog Day, is, in the words of Larry, the cameraman played by Chris Elliott, a “prima donna,” at least at the beginning of the movie. He’s selfish, uncharitable, and condescending. As the plot progresses, however, Phil undergoes what is probably the most plausible transformation in all of cinema—having witnessed what he’s gone through over the course of the movie, we’re more than willing to grant the possibility that even the most narcissistic of people might be redeemed through such an ordeal. The odd thing, though, is that when you watch Groundhog Day you don’t exactly hate Phil at the beginning of the movie. Somehow, even as we take note of his most off-putting characteristics, we’re never completely put off. As horrible as he is, he’s not really even unpleasant. The pleasure of watching the movie must to some degree stem from our desire to see Phil redeemed. We want him to learn his lesson so we don’t have to condemn him or write him off. 

            In a recent article for the New Yorker, Jonathan Franzen explores what he calls “the problem of sympathy” by considering his own responses to the novels of Edith Wharton, who herself strikes him as difficult to sympathize with. Lily Bart, the protagonist of The House of Mirth, is similar to Wharton in many respects, the main difference being that Lily is beautiful (and of course Franzen was immediately accused of misogyny for pointing this out). Of Lily, Franzen writes,

She is, basically, the worst sort of party girl, and Wharton, in much the same way that she didn’t even try to be soft or charming in her personal life, eschews the standard novelistic tricks for warming or softening Lily’s image—the book is devoid of pet-the-dog moments. So why is it so hard to stop reading Lily’s story? (63)

Franzen weighs several hypotheses: her beauty, her freedom to act on impulses we would never act on, her financial woes, her aging. But ultimately he settles on the conclusion that all of these factors are incidental.

What determines whether we sympathize with a fictional character, according to Franzen, is the strength and immediacy of his or her desire. What propels us through the story then is our curiosity about whether or not the character will succeed in satisfying that desire. He explains,

One of the great perplexities of fiction—and the quality that makes the novel the quintessentially liberal art form—is that we experience sympathy so readily for characters we wouldn’t like in real life. Becky Sharp may be a soulless social climber, Tom Ripley may be a sociopath, the Jackal may want to assassinate the French President, Mickey Sabbath may be a disgustingly self-involved old goat, and Raskolnikov may want to get away with murder, but I find myself rooting for each of them. This is sometimes, no doubt, a function of the lure of the forbidden, the guilty pleasure of imagining what it would be like to be unburdened by scruples. In every case, though, the alchemical agent by which fiction transmutes my secret envy or my ordinary dislike of “bad” people into sympathy is desire. Apparently, all a novelist has to do is give a character a powerful desire (to rise socially, to get away with murder) and I, as a reader, become helpless not to make that desire my own. (63)

While I think Franzen here highlights a crucial point about the intersection between character and plot, namely that it is easier to assess how well characters fare at the story’s end if we know precisely what they want—and also what they dread—it’s clear nonetheless that he’s being flip in his dismissal of possible redeeming qualities. Emily Gould, writing for The Awl, reports that her response to Lily was quite different from Franzen’s, expostulating in a parenthetical, “she was so trapped! There were no right choices! How could anyone find watching that ‘delicious!’ I cry every time!”

            Focusing on any single character in a story the way Franzen does leaves out important contextual cues about personality. In a story peopled with horrible characters, protagonists need only send out the most modest of cues signaling their altruism or redeemability for readers to begin to want to see them prevail. For Milton’s Satan to be sympathetic, readers have to see God as significantly less so. In Groundhog Day, you have creepy and annoying characters like Larry and Ned Ryerson to make Phil look slightly better. And here is Franzen on the denouement of House of Mirth, describing his response to Lily reflecting on the expiration date placed on her youthful beauty:

But only at the book’s very end, when Lily finds herself holding another woman’s baby and experiencing a host of unfamiliar emotions, does a more powerful sort of urgency crash into view. The financial potential of her looks is revealed to have been an artificial value, in contrast to their authentic value in the natural scheme of human reproduction. What has been simply a series of private misfortunes for Lily suddenly becomes something larger: the tragedy of a New York City social world whose priorities are so divorced from nature that they kill the emblematically attractive female who ought, by natural right, to thrive. The reader is driven to search for an explanation of the tragedy in Lily’s appallingly deforming social upbringing—the kind of upbringing that Wharton herself felt deformed by—and to pity her for it, as, per Aristotle, a tragic protagonist must be pitied. (63)

As Gould points out, though, Franzen is really late in coming to an appreciation of the tragedy, even though his absorption with Lily’s predicament suggests he feels sympathy for her all along. Launching into a list of all the qualities that supposedly make the character unsympathetic, he writes, “The social height that she’s bent on securing is one that she herself acknowledges is dull and sterile” (62), a signal of ambivalence that readers like Gould take as a hopeful sign that she might eventually be redeemed. In any case, few of the other characters seem willing to acknowledge anything of the sort.

            Perhaps the most extreme instance in which a bad character wins the sympathy of readers and viewers by being cast with a character or two who are even worse is that of Alex in Anthony Burgess’s novella A Clockwork Orange and the Stanley Kubrick film based on it. (Patricia Highsmith’s Mr. Ripley is another clear contender.) How could we possibly like Alex? He’s a true sadist who matter-of-factly describes the joyous thrill he gets from committing acts of “ultraviolence” against his victims, and he’s a definite candidate for a clinical diagnosis of antisocial personality disorder.

He’s also probably the best evidence for Franzen’s theory that sympathy is reducible to desire. It should be noted, however, that, in keeping with William Flesch’s theory of narrative interest, A Clockwork Orange is nothing if not a story of punishment. In his book Comeuppance, Flesch suggests that when we become emotionally enmeshed with stories we’re monitoring the characters for evidence of either altruism or selfishness and thereafter attending to the plot, anxious to see the altruists rewarded and the selfish get their comeuppance. Alex seems to strain the theory, though, because all he seems to want to do is hurt people, and yet audiences tend to be more disturbed than gratified by his drawn-out, torturous punishment. For many, there’s even some relief at the end of the movie and the original American version of the book when Alex makes it through all of his ordeals with his taste for ultraviolence restored.

            Many obvious factors mitigate the case against Alex, perhaps foremost among them the whimsical tone of his narration, along with the fictional dialect that lends the story a dream-like quality, one brilliantly conveyed in the film as well. There’s something cartoonish about all the characters who suffer at the hands of Alex and his droogs, and almost all of them return to the story later to exact their revenge. You might even say there’s a Groundhogesque element of repetition in the plot. The audience quickly learns too that all the characters who should be looking out for Alex—he’s only fifteen, we find out after almost eighty pages—are either feckless zombies like his parents, who have been sapped of all vitality by their clockwork occupations, or only see him as a means to furthering their own ambitions. “If you have no consideration for your own horrible self you at least might have some for me, who have sweated over you,” his Post-Corrective Advisor P.R. Deltoid says to him. “A big black mark, I tell you in confidence, for every one we don’t reclaim, a confession of failure for every one of you that ends up in the stripy hole” (42). Even the prison charlie (he’s a chaplain, get it?) who serves as a mouthpiece to deliver Burgess’s message treats him as a means to an end. Alex explains,

The idea was, I knew, that this charlie was after becoming a very great holy chelloveck in the world of Prison Religion, and he wanted a real horrorshow testimonial from the Governor, so he would go and govoreet quietly to the Governor now and then about what dark plots were brewing among the plennies, and he would get a lot of this cal from me. (91)

Alex ends up receiving his worst punishment at the hands of the man against whom he’s committed his worst crime. F. Alexander is the author of the metafictionally titled A Clockwork Orange, a treatise against the repressive government, and in the first part of the story Alex and his droogs, wearing masks, beat him mercilessly before forcing him to watch them gang rape his wife, who ends up dying from wounds she sustains in the attack. Later, when Alex gets beaten up himself and inadvertently stumbles back to the house that was the scene of the crime, F. Alexander recognizes him only as the guinea pig for a government experiment in brainwashing criminals he’s read about in newspapers. He takes Alex in and helps him, saying, “I think you can be used, poor boy. I think you can help dislodge this overbearing Government” (175). After he recognizes Alex from his nadsat dialect as the ringleader of the gang who killed his wife, he decides the boy will serve as a better propaganda tool if he commits suicide. Locking him in a room and blasting the Beethoven music he once loved but was conditioned in prison to find nauseating to the point of wishing for death, F. Alexander leaves Alex no escape but to jump out of a high window.

The desire for revenge is understandable, but before realizing who it is he’s dealing with, F. Alexander reveals himself to be conniving and manipulative, like almost every other adult Alex knows. When he wakes up in the hospital after his suicide attempt, Alex discovers that the Minister of the Inferior, as he calls him, has had the conditioning procedure he originally ordered carried out on Alex reversed, and is now eager for Alex to tell everyone how F. Alexander and his fellow conspirators tried to kill him. Alex is nothing but a pawn to any of them. That’s why it’s possible to be relieved when his capacity for violent behavior has been restored.

Of course, the real villain of A Clockwork Orange is the Ludovico Technique, the treatment used to cure Alex of his violent impulses. Strapped into a chair with his head locked in place and his glazzies braced open, Alex is forced to watch recorded scenes of torture, murder, violence, and rape, the types of things he used to enjoy. Only now he’s been given a shot that makes him feel so horrible he wants to snuff it (kill himself), and over the course of the treatment sessions he becomes conditioned to associate his precious ultraviolence with this dreadful feeling. Next to the desecration of a man’s soul—the mechanistic control obviating his free will—the antisocial depredations of a young delinquent are somewhat less horrifying. As the charlie says to Alex, addressing him by his prison ID number,

It may not be nice to be good, little 6655321. It may be horrible to be good. And when I say that to you I realize how self-contradictory that sounds. I know I shall have many sleepless nights about this. What does God want? Does God want goodness or the choice of goodness? Is a man who chooses the bad perhaps in some way better than a man who has the good imposed upon him? Deep and hard questions, little 6655321. (107)

At the same time, though, one of the consequences of the treatment is that Alex becomes not just incapable of preying on others but also of defending himself. Immediately upon his release from prison, he finds himself at the mercy of everyone he’s wronged and everyone who feels justified in abusing or exploiting him owing to his past crimes. Before realizing who Alex is, F. Alexander says to him,

You’ve sinned, I suppose, but your punishment has been out of all proportion. They have turned you into something other than a human being. You have no power of choice any longer. You’re committed to socially acceptable acts, a little machine capable only of good. (175)

            To tally the mitigating factors: Alex is young (though the actor in the movie was twenty-eight), he’s surrounded by other bizarre and unlikable characters, and he undergoes dehumanizing torture. But does this really make up for his participating in gang rape and murder? Personally, as strange and unsavory as F. Alexander seems, I have to say I can’t fault him in the least for taking revenge on Alex. As someone who believes all behaviors are ultimately determined by countless factors outside the individual’s control, from genes to education to social norms, I don’t have that much of a problem with the Ludovico Technique either. Psychopathy is a primarily genetic condition that makes people incapable of experiencing moral emotions such as would prevent them from harming others. If aversion therapy worked to endow psychopaths with negative emotions similar to those the rest of us feel in response to Alex’s brand of ultraviolence, then it doesn’t seem like any more of a desecration than, say, a brain operation to remove a tumor with deleterious effects on moral reasoning. True, the prospect of a corrupt government administering the treatment is unsettling, but this kid was going around beating, raping, and killing people.

            And yet, I also have to admit (confess?), my own response to Alex, even at the height of his delinquency, before his capture and punishment, was to like him and root for him—this despite the fact that, contra Franzen, I couldn’t really point to any one thing he desires more than anything else.

            For those of us who sympathize with Alex, every instance in which he does something unconscionable induces real discomfort, like when he takes two young ptitsas back to his room after revealing they “couldn’t have been more than ten” (47) (but he earlier says the girl Billyboy’s gang is “getting ready to perform on” is “not more than ten” [18]; is he serious?). We don’t like him, in other words, because he does bad things but in spite of them. At some point near the beginning of the story, Alex must give some convincing indications that by the end he will have learned the error of his ways. He must provide readers with some evidence that he is at least capable of learning to empathize with other people’s suffering and willing to behave in such a way as to avoid it, so when we see him doing something horrible we view it as an anxiety-inducing setback rather than a deal-breaking harbinger of his true life trajectory. But what is it exactly that makes us believe this psychopath is redeemable?

            Phil Connors in Groundhog Day has one obvious saving grace. When viewers are first introduced to him, he’s doing his weather report—and he has a uniquely funny way of doing it. “Uh-oh, look out. It’s one of these big blue things!” he jokes when the graphic of a storm front appears on the screen. “Out in California they're going to have some warm weather tomorrow, gang wars, and some very overpriced real estate,” he says drolly. You could argue he’s only being funny in an attempt to further his career, but he continues trying to make people laugh, usually at the expense of weird or annoying characters, even when the cameras are off (not those cameras). Successful humor requires some degree of social acuity, and the effort that goes into it suggests at least a modicum of generosity. You could say, in effect, Phil goes out of his way to give the other characters, and us, a few laughs. Alex, likewise, offers us a laugh before the end of the first page, as he describes how the Korova Milkbar, where he and his droogs hang out, doesn’t have a liquor license but can sell moloko with drugs added to it “which would give you a nice quiet horrorshow fifteen minutes admiring Bog And All His Holy Angels And Saints in your left shoe with lights bursting all over your mozg” (3-4). Even as he’s assaulting people, Alex keeps up his witty banter and dazzling repartee. He’s being cruel, but he’s having fun. Moreover, he seems to be inviting us to have fun with him.

            Probably the single most important factor behind our desire (and I understand “our” here doesn’t include everyone in the audience) to see Alex redeemed is the fact that he’s being kind enough to tell us his story, to invite us into his life, as it were. This is the magic of first person narration. And like most magic it’s based on a psychological principle describing a mental process most of us go about our lives completely oblivious to. The Jewish psychologist Henri Tajfel was living in France at the beginning of World War II, and he was in a German prisoner-of-war camp for most of its duration. Afterward, he went to college in England, where in the 1960s and 70s he would conduct a series of experiments that are today considered classics in social psychology. Many other scientists at the time were trying to understand how an atrocity like the Holocaust could have happened. One theory was that the worst barbarism was committed by a certain type of individual who had what was called an authoritarian personality. Others, like Muzafer Sherif, pointed to a universal human tendency to form groups and discriminate on their behalf.

            Tajfel knew about Sherif’s Robbers Cave Experiment, in which groups of young boys were made to compete with each other in sports and over territory. Under those conditions, the groups of boys quickly became antagonistic toward one another, so much so that the experiment had to be moved into its reconciliation phase earlier than planned to prevent violence. But Tajfel suspected that group rivalries could be sparked even without such an elaborate setup. To test his theory, he developed what is known as the minimal group paradigm, in which test subjects engage in some task or test of their preferences and are subsequently arranged into groups based on the outcome. In the original experiments, none of the participants knew anything about their groupmates aside from the fact that they’d been assigned to the same group. And yet, even when the group assignments were based on nothing but a coin toss, subjects asked how much money other people in the experiment deserved as a reward for their participation suggested much lower dollar amounts for people in rival groups. “Apparently,” Tajfel writes in a 1970 Scientific American article about his experiments, “the mere fact of division into groups is enough to trigger discriminatory behavior” (96).

            Once divisions into us and them have been established, considerations of fairness are reserved for members of the ingroup. While the subjects in Tajfel’s tests aren’t displaying fully developed tribal animosity, they do demonstrate that the seeds of tribalism are disturbingly easy to sow. As he explains,

Unfortunately it is only too easy to think of examples in real life where fairness would go out the window, since groupness is often based on criteria much more weighty than either preferring a painter one has never heard of before or resembling someone else in one's way of counting dots. (102)
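
For readers unfamiliar with the design, here is a schematic sketch, in Python, of the minimal-group logic. These are not Tajfel’s actual allocation matrices, and the bias parameter is a toy stand-in rather than an empirical estimate; the sketch only shows how the paradigm is structured and how the discrimination measure is computed, not why the effect occurs.

```python
import random
from statistics import mean

# Group labels (e.g., "Klee" vs. "Kandinsky," after the painting-
# preference version of the task) are assigned essentially at random,
# and each subject knows nothing about any recipient except the label.

def allocate_points(bias: float) -> int:
    """Points (out of 10) a subject gives an anonymous in-group member;
    the remainder goes to an anonymous out-group member. `bias` is a
    toy stand-in for the favoritism Tajfel actually observed."""
    raw = 5 + bias + random.gauss(0, 1)  # 5 would be an even split
    return max(0, min(10, round(raw)))

def ingroup_advantage(n_subjects: int = 100, bias: float = 1.5) -> float:
    """Mean points given to the in-group minus mean given to the out-group."""
    to_ingroup = [allocate_points(bias) for _ in range(n_subjects)]
    to_outgroup = [10 - p for p in to_ingroup]
    return mean(to_ingroup) - mean(to_outgroup)

random.seed(0)
print(f"in-group advantage: {ingroup_advantage():+.1f} points out of 10")
```

The empirical finding, again, is that real subjects behaved as if that bias term were positive even though nothing distinguished the groups but an arbitrary label; the stripped-down design is what rules out material self-interest or any history of conflict as explanations.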

            I’m unaware of any studies on the effects of various styles of narration on perceptions of group membership, but I hypothesize that we can extrapolate the minimal group paradigm into the realm of first-person narrative accounts of violence. The reason some of us like Alex despite his horrendous behavior is that he somehow manages to make us think of him as a member of our tribe—or rather of ourselves as members of his—while everyone he wrongs belongs to a rival group. Even as he’s telling us about all the horrible things he’s done to other people, he takes time out to introduce us to his friends, describe places like the Korova Milkbar and the Duke of York, even the flat at Municipal Flatblock 18A where he and his parents live. He tells us jokes. He shares with us his enthusiasm for classical music. Oh yeah, he also addresses us, “Oh my brothers,” beginning seven lines down on the first page and again at intervals throughout the book, making us what anthropologists call his fictive kin.

            There’s something altruistic, or at least generous, about telling jokes or stories. Alex really is our humble narrator, as he frequently refers to himself. Beyond that, though, most stories turn on some moral point, so when we encounter a narrator who immediately begins recounting his crimes we can’t help but anticipate the juncture in the story at which he experiences some moral enlightenment. In the twenty-first and last chapter of A Clockwork Orange, Alex does indeed undergo just this sort of transformation. But American publishers, along with Stanley Kubrick, cut this part of the book because it struck them as a somewhat cowardly turning away from the reality of human evil. Burgess defends the original version in an introduction to the 1986 edition,

The twenty-first chapter gives the novel the quality of genuine fiction, an art founded on the principle that human beings change. There is, in fact, not much point in writing a novel unless you can show the possibility of moral transformation, or an increase in wisdom, operating in your chief character or characters. Even trashy bestsellers show people changing. When a fictional work fails to show change, when it merely indicates that human character is set, stony, unregenerable, then you are out of the field of the novel and into that of the fable or the allegory. (xii)

Indeed, it’s probably this sense of the story being somehow unfinished or cut off in the middle that makes the film so disturbing and so nightmarishly memorable. With regard to the novel, readers could be forgiven for wondering what the hell Alex’s motivation was in telling his story in the first place if there was no lesson or no intuitive understanding he thought he could convey with it.

            But is the twenty-first chapter believable? Would it have been possible for Alex to transform into a good man? The Nobel Prize-winning psychologist Daniel Kahneman, in his book Thinking, Fast and Slow, shares with his own readers an important lesson from his student days that bears on Alex’s case:

As a graduate student I attended some courses on the art and science of psychotherapy. During one of these lectures our teacher imparted a morsel of clinical wisdom. This is what he told us: “You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several clinicians, and all failed him. The patient can lucidly describe how his therapists misunderstood him, but he has quickly perceived that you are different. You share the same feeling, are convinced that you understand him, and will be able to help.” At this point my teacher raised his voice as he said, “Do not even think of taking on this patient! Throw him out of the office! He is most likely a psychopath and you will not be able to help him.” (27-28)

Also read

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

And

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

Why Shakespeare Nauseated Darwin: A Review of Keith Oatley's "Such Stuff as Dreams"

Does practicing science rob one of humanity? Why is it that, if reading fiction trains us to take the perspective of others, English departments are rife with pettiness and selfishness? Keith Oatley is trying to make the study of literature more scientific, and he provides hints to these riddles and many others in his book “Such Stuff as Dreams.”

Late in his life, Charles Darwin lost his taste for music and poetry. “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he laments in his autobiography, and for many of us the temptation to place all men and women of science into a category of individuals whose minds resemble machines more than living and emotionally attuned organs of feeling and perceiving is overwhelming. In the 21st century, we even have a convenient psychiatric diagnosis for people of this sort. Don’t we just assume Sheldon in The Big Bang Theory has autism, or at least the milder version of it known as Asperger’s? It’s probably even safe to assume the show’s writers had the diagnostic criteria for the disorder in mind when they first developed his character. Likewise, Dr. Watson in the BBC’s new and obscenely entertaining Sherlock series can’t resist a reference to the quintessential evidence-crunching genius’s own supposed Asperger’s.

In Darwin’s case, however, the move away from the arts couldn’t have been due to any congenital deficiency in his finer human sentiments because it occurred only in adulthood. He writes,

I have said that in one respect my mind has changed during the last twenty or thirty years. Up to the age of thirty, or beyond it, poetry of many kinds, such as the works of Milton, Gray, Byron, Wordsworth, Coleridge, and Shelley, gave me great pleasure, and even as a schoolboy I took intense delight in Shakespeare, especially in the historical plays. I have also said that formerly pictures gave me considerable, and music very great delight. But now for many years I cannot endure to read a line of poetry: I have tried lately to read Shakespeare, and found it so intolerably dull that it nauseated me. I have also almost lost my taste for pictures or music. Music generally sets me thinking too energetically on what I have been at work on, instead of giving me pleasure.

We could interpret Darwin here as suggesting that casting his mind too doggedly into his scientific work somehow ruined his capacity to appreciate Shakespeare. But, like all thinkers and writers of great nuance and sophistication, his ideas are easy to mischaracterize through selective quotation (or, if you’re Ben Stein or any of the other unscrupulous writers behind creationist propaganda like the pseudo-documentary Expelled, you can just lie about what he actually wrote).

One of the most charming things about Darwin is that his writing is often more exploratory than merely informative. He writes in search of answers he has yet to discover. In a wider context, the quote about his mind becoming a machine, for instance, reads,

This curious and lamentable loss of the higher aesthetic tastes is all the odder, as books on history, biographies, and travels (independently of any scientific facts which they may contain), and essays on all sorts of subjects interest me as much as ever they did. My mind seems to have become a kind of machine for grinding general laws out of large collections of facts, but why this should have caused the atrophy of that part of the brain alone, on which the higher tastes depend, I cannot conceive. A man with a mind more highly organised or better constituted than mine, would not, I suppose, have thus suffered; and if I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have been kept active through use. The loss of these tastes is a loss of happiness, and may possibly be injurious to the intellect, and more probably to the moral character, by enfeebling the emotional part of our nature.

His concern for his lost aestheticism notwithstanding, Darwin’s humanism, his humanity, radiates in his writing with a warmth that belies any claim about thinking like a machine, just as the intelligence that shows through it gainsays his humble deprecations about the organization of his mind.

           In this excerpt, Darwin, perhaps inadvertently, even manages to put forth a theory of the function of art. Somehow, poetry and music not only give us pleasure and make us happy—enjoying them actually constitutes a type of mental exercise that strengthens our intellect, our emotional awareness, and even our moral character. Novelist and cognitive psychologist Keith Oatley explores this idea of human betterment through aesthetic experience in his book Such Stuff as Dreams: The Psychology of Fiction. The subtitle is notably underwhelming given the long history of psychoanalytic theorizing about the meaning and role of literature. However, whereas psychoanalysis has fallen into disrepute among scientists because of its multiple empirical failures and a general methodological hubris common among its practitioners, the work of Oatley and his team at the University of Toronto relies on much more modest, and at the same time much more sophisticated, scientific protocols. One of the tools these researchers use, the Reading the Mind in the Eyes Test, was in fact first developed to research our new category of people with machine-like minds. What the researchers find bolsters Darwin’s impression that art, at least literary art, functions as a kind of exercise for our faculty of understanding and relating to others.

           Reasoning that “fiction is a kind of simulation of selves and their vicissitudes in the social world” (159), Oatley and his colleague Raymond Mar hypothesized that people who spent more time trying to understand fictional characters would be better at recognizing and reasoning about other, real-world people’s states of mind. So they devised a test to assess how much fiction participants in their study read, based on how well they could categorize a long list of names according to which ones belonged to authors of fiction, which to authors of nonfiction, and which to non-authors. They then had participants take the Mind-in-the-Eyes Test, which consists of matching close-up pictures of people’s eyes with terms describing their emotional state at the time the pictures were taken. The researchers also had participants take the Interpersonal Perception Test, which has them answer questions about the relationships of people in short video clips featuring social interactions. An example question might be “Which of the two children, or both, or neither, are offspring of the two adults in the clip?” (Imagine Sherlock Holmes taking this test.) As hypothesized, Oatley writes, “We found that the more fiction people read, the better they were at the Mind-in-the-Eyes Test. A similar relationship held, though less strongly, for reading fiction and the Interpersonal Perception Test” (159).

            One major shortcoming of this study is that it fails to establish causality; people who are naturally better at reading emotions and making sound inferences about social interactions may simply gravitate toward fiction. So Mar set up an experiment in which he had participants read either a nonfiction article from an issue of the New Yorker or a work of short fiction chosen to be the same length and to require the same level of reading skill. When the two groups then took a test of social reasoning, the ones who had read the short story outperformed the control group. Both groups also took a test of analytic reasoning as a further control; on this variable there was no difference in performance between the groups. The outcome of this experiment, Oatley stresses, shouldn’t be interpreted as evidence that reading one story will increase your social skills in any meaningful and lasting way. But reading habits established over long periods likely explain the more significant differences between individuals found in the earlier study. As Oatley explains,

Readers of fiction tend to become more expert at making models of others and themselves, and at navigating the social world, and readers of non-fiction are likely to become more expert at genetics, or cookery, or environmental studies, or whatever they spend their time reading. Raymond Mar’s experimental study on reading pieces from the New Yorker is probably best explained by priming. Reading a fictional piece puts people into a frame of mind of thinking about the social world, and this is probably why they did better at the test of social reasoning. (160)

Connecting these findings to real-world outcomes, Oatley and his team also found that “reading fiction was not associated with loneliness,” as the stereotype suggests, “but was associated with what psychologists call high social support, being in a circle of people whom participants saw a lot, and who were available to them practically and emotionally” (160).

            These studies by the University of Toronto team have received wide publicity, but the people who should be the most interested in them have little or no idea how to go about making sense of them. Most people simply either read fiction or they don’t. If you happen to be of the tribe who studies fiction, then you were probably educated in a way that engendered mixed feelings—profound confusion really—about science and how it works. In his review of The Storytelling Animal, a book in which Jonathan Gottschall incorporates the Toronto team’s findings into the theory that narrative serves the adaptive function of making human social groups more cooperative and cohesive, Adam Gopnik sneers,

Surely if there were any truth in the notion that reading fiction greatly increased our capacity for empathy then college English departments, which have by far the densest concentration of fiction readers in human history, would be legendary for their absence of back-stabbing, competitive ill-will, factional rage, and egocentric self-promoters; they’d be the one place where disputes are most often quickly and amiably resolved by mutual empathetic engagement. It is rare to see a thesis actually falsified as it is being articulated.

Oatley himself is well aware of the strange case of university English departments. He cites a report by Willie van Peer on a small study van Peer conducted comparing students in the natural sciences to students in the humanities. Oatley explains,

There was considerable scatter, but on average the science students had higher emotional intelligence than the humanities students, the opposite of what was expected; van Peer indicts teaching in the humanities for often turning people away from human understanding towards technical analyses of details. (160)

Oatley suggests in a footnote that an earlier study corroborates van Peer’s indictment. It found that high school students who showed more emotional involvement with short stories—the type of connection that would engender greater empathy—did proportionally worse on standard academic assessments of English proficiency. The clear implication of these findings is that the way literature is taught in universities and high schools is long overdue for an in-depth critical analysis.

            The idea that literature has the power to make us better people is not new; indeed, it was the very idea on which the humanities were originally founded. We have to wonder what people like Gopnik believe the point of celebrating literature is if not to foster greater understanding and empathy. If you either enjoy it or you don’t, and it has no beneficial effects on individuals or on society in general, why bother encouraging anyone to read? Why bother writing essays about it in the New Yorker? Tellingly, many scholars in the humanities began doubting the power of art to inspire greater humanity around the same time they began questioning the value and promise of scientific progress. Oatley writes,

Part of the devastation of World War II was the failure of German citizens, one of the world’s most highly educated populations, to prevent their nation’s slide into Nazism. George Steiner has famously asserted: “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” (164)

Postwar literary theory and criticism have, perversely, tended toward the view that literature and language in general serve as a vessel for passing on all the evils inherent in our western, patriarchal, racist, imperialist culture. The purpose of literary analysis then becomes to sift out these elements and resist them. Unfortunately, such accusatory theories leave unanswered the question of why, if literature inculcates oppressive ideologies, we should bother reading it at all. As van Peer muses in the report Oatley cites, “The Inhumanity of the Humanities,”

Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)

           Oatley and van Peer point out, moreover, that the evidence for concentration camp workers having any degree of literary or aesthetic sophistication is nonexistent. According to the best available evidence, most of the greatest atrocities were committed by soldiers who never graduated high school. The suggestion that some type of cozy relationship existed between Nazism and an enthusiasm for Goethe runs afoul of recorded history. As Oatley points out,

Apart from propensity to violence, nationalism, and anti-Semitism, Nazism was marked by hostility to humanitarian values in education. From 1933 onwards, the Nazis replaced the idea of self-betterment through education and reading by practices designed to induce as many as possible into willing conformity, and to coerce the unwilling remainder by justified fear. (165)

Oatley also cites the work of historian Lynn Hunt, whose book Inventing Human Rights traces the original social movement for the recognition of universal human rights to the mid-1700s, when what we recognize today as novels were first being written. Other scholars like Steven Pinker have pointed out too that, while it’s hard not to dwell on tragedies like the Holocaust, even atrocities of that magnitude are resoundingly overmatched by the much larger post-Enlightenment trend toward peace, freedom, and the wider recognition of human rights. It’s sad that one of the lasting legacies of all the great catastrophes of the 20th Century is a tradition in humanities scholarship that has the people who are supposed to be the custodians of our literary heritage hell-bent on teaching us all the ways that literature makes us evil.

            Because Oatley is a central figure in what we can only hope is a movement to end the current reign of self-righteous insanity in literary studies, it pains me not to be able to recommend Such Stuff as Dreams to anyone but dedicated specialists. Oatley writes in the preface that he has “imagined the book as having some of the qualities of fiction. That is to say I have designed it to have a narrative flow” (x), and it may simply be that this suggestion set my expectations too high. But the book is poorly edited, the prose is bland and often rolls over itself into graceless tangles, and a couple of the chapters seem like little more than haphazardly collated reports of studies and theories, none exactly off-topic, none completely without interest, but all lacking any central progression or theme. The book often reads more like an annotated bibliography than a story. Oatley’s scholarly range is impressive, however, bearing not just on cognitive science and literature through the centuries but extending as well to the work of important literary theorists. The book is never unreadable, never opaque, but it’s not exactly a work of art in its own right.

            Insofar as Such Stuff as Dreams is organized around a central idea, it is that fiction ought to be thought of not as “a direct impression of life,” as Henry James suggests in his famous essay “The Art of Fiction,” and as many contemporary critics—notably James Wood—seem to think of it. Rather, Oatley agrees with Robert Louis Stevenson’s response to James’s essay, “A Humble Remonstrance,” in which he writes that

Life is monstrous, infinite, illogical, abrupt and poignant; a work of art in comparison is neat, finite, self-contained, rational, flowing, and emasculate. Life imposes by brute energy, like inarticulate thunder; art catches the ear, among the far louder noises of experience, like an air artificially made by a discreet musician. (qtd. on pg. 8)

Oatley theorizes that stories are simulations, much like dreams, that go beyond mere reflections of life to highlight through defamiliarization particular aspects of life, to cast them in a new light so as to deepen our understanding and experience of them. He writes,

Every true artistic expression, I think, is not just about the surface of things. It always has some aspect of the abstract. The issue is whether, by a change of perspective or by making the familiar strange, by means of an artistically depicted world, we can see our everyday world in a deeper way. (15)

Critics of high-brow literature like Wood appreciate defamiliarization at the level of description; Oatley is suggesting here though that the story as a whole functions as a “metaphor-in-the-large” (17), a way of not just making us experience as strange some object or isolated feeling, but of reconceptualizing entire relationships, careers, encounters, biographies—what we recognize in fiction as plots. This is an important insight, and it topples verisimilitude from its ascendant position atop the hierarchy of literary values while rendering complaints about clichéd plots potentially moot. Didn’t Shakespeare recycle plots after all?

            The theory of fiction as a type of simulation to improve social skills and possibly to facilitate group cooperation is emerging as the frontrunner in attempts to explain narrative interest in the context of human evolution. It is to date, however, impossible to rule out the possibility that our interest in stories is not directly adaptive but instead emerges as a byproduct of other traits that confer more immediate biological advantages. The finding that readers track actions in stories with the same brain regions that activate when they witness similar actions in reality, or when they engage in them themselves, is important support for the simulation theory. But the function of mirror neurons isn’t well enough understood yet for us to determine from this study how much engagement with fictional stories depends on the reader's identifying with the protagonist. Oatley’s theory is more consonant with direct and straightforward identification. He writes,

A very basic emotional process engages the reader with plans and fortunes of a protagonist. This is what often drives the plot and, perhaps, keeps us turning the pages, or keeps us in our seat at the movies or at the theater. It can be enjoyable. In art we experience the emotion, but with it the possibility of something else, too. The way we see the world can change, and we ourselves can change. Art is not simply taking a ride on preoccupations and prejudices, using a schema that runs as usual. Art enables us to experience some emotions in contexts that we would not ordinarily encounter, and to think of ourselves in ways that usually we do not. (118)

Much of this change, Oatley suggests, comes from realizing that we too are capable of behaving in ways that we might not like. “I am capable of this too: selfishness, lack of sympathy” (193), is what he believes we think in response to witnessing good characters behave badly.

            Oatley’s theory has a lot to recommend it, but William Flesch’s theory of narrative interest, which suggests we don’t identify with fictional characters directly but rather track them and anxiously hope for them to get whatever we feel they deserve, seems much more plausible in the context of our response to protagonists behaving in surprisingly selfish or antisocial ways. When I see Ed Norton as Tyler Durden beating Angel Face half to death in Fight Club, for instance, I don’t think, hey, that’s me smashing that poor guy’s face with my fists. Instead, I think, what the hell are you doing? I had you pegged as a good guy. I know you’re trying not to be as much of a pushover as you used to be, but this is getting scary. I’m anxious for Angel Face not to be too badly damaged—partly because I imagine that would be devastating to Tyler. And I’m anxious lest this incident be a harbinger of worse behavior to come.

            The issue of identification is just one of several interesting questions that lend themselves to further research. Oatley and Mar’s studies are not enormous in terms of sample size, and their subjects were mostly young college students. What types of fiction work best to foster empathy? What types of reading strategies might we encourage students to apply to literature—apart from trying to remove obstacles to emotional connections with characters? But, aside from the Big-Bad-Western-Empire myth that currently has humanities scholars grooming successive generations of deluded ideologues to be little more than culture vultures presiding over the creation and celebration of Loser Lit, the other main challenge to transporting literary theory onto firmer empirical grounds is the assumption that the arts in general and literature in particular demand a wholly different type of thinking to create and appreciate than the type that goes into the intricate mechanics and intensely disciplined practices of science.

As Oatley and the Toronto team have shown, people who enjoy fiction tend to have the opposite of autism. And people who do science are, well, Sheldon. Interestingly, though, the writers of The Big Bang Theory, for whatever reason, included some contraindications for a diagnosis of autism or Asperger’s in Sheldon’s character. Like the other scientists in the show, he’s obsessed with comic books, which require at least some understanding of facial expression and body language to follow. As Simon Baron-Cohen, the autism researcher who designed the Mind-in-the-Eyes test, explains, “Autism is an empathy disorder: those with autism have major difficulties in 'mindreading' or putting themselves into someone else’s shoes, imagining the world through someone else’s feelings” (137). Baron-Cohen has coined the term “mindblindness” to describe the central feature of the disorder, and many have posited that the underlying cause is abnormal development of the brain regions devoted to perspective taking and understanding others, what cognitive psychologists refer to as our Theory of Mind.

            To follow comic book plotlines, Sheldon would have to make ample use of his own Theory of Mind. He’s also given to absorption in various science fiction shows on TV. If he were only interested in futuristic gadgets, as an autistic would be, he could just as easily get more scientifically plausible versions of them in any number of nonfiction venues. By Baron-Cohen’s definition, Sherlock Holmes can’t possibly have Asperger’s either because his ability to get into other people’s heads is vastly superior to pretty much everyone else’s. As he explains in “The Musgrave Ritual,”

You know my methods in such cases, Watson: I put myself in the man’s place, and having first gauged his intelligence, I try to imagine how I should myself have proceeded under the same circumstances.

            What about Darwin, though, that demigod of science who openly professed to being nauseated by Shakespeare? Isn’t he a prime candidate for entry into the surprisingly unpopulated ranks of heartless, data-crunching scientists whose thinking lends itself so conveniently to cooptation by oppressors and committers of wartime atrocities? It turns out that though Darwin held many of the same racist views as nearly all educated men of his time, his ability to empathize across racial and class divides was extraordinary. Darwin was not himself a Social Darwinist; that theory was devised by Herbert Spencer to justify inequality (and it still has currency today among political conservatives). And Darwin was also a passionate abolitionist, as is clear in the following excerpts from The Voyage of the Beagle:

On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.

Darwin is responding to cruelty in a way no one around him at the time would have. And note how deeply it pains him, how profound and keenly felt his sympathy is.

I was present when a kind-hearted man was on the point of separating forever the men, women, and little children of a large number of families who had long lived together. I will not even allude to the many heart-sickening atrocities which I authentically heard of;—nor would I have mentioned the above revolting details, had I not met with several people, so blinded by the constitutional gaiety of the negro as to speak of slavery as a tolerable evil.

            The question arises, not whether Darwin had sacrificed his humanity to science, but why he had so much more humanity than many other intellectuals of his day.

It is often attempted to palliate slavery by comparing the state of slaves with our poorer countrymen: if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin; but how this bears on slavery, I cannot see; as well might the use of the thumb-screw be defended in one land, by showing that men in another land suffered from some dreadful disease.

And finally we come to the matter of Darwin’s Theory of Mind, which was quite clearly in no way deficient.

Those who look tenderly at the slave owner, and with a cold heart at the slave, never seem to put themselves into the position of the latter;—what a cheerless prospect, with not even a hope of change! picture to yourself the chance, ever hanging over you, of your wife and your little children—those objects which nature urges even the slave to call his own—being torn from you and sold like beasts to the first bidder! And these deeds are done and palliated by men who profess to love their neighbours as themselves, who believe in God, and pray that His Will be done on earth! It makes one's blood boil, yet heart tremble, to think that we Englishmen and our American descendants, with their boastful cry of liberty, have been and are so guilty; but it is a consolation to reflect, that we at least have made a greater sacrifice than ever made by any nation, to expiate our sin. (530-31)

            I suspect that Darwin’s distaste for Shakespeare was born of oversensitivity. He doesn’t say music failed to move him; he stopped enjoying it because it set him thinking “too energetically.” And as aesthetically pleasing as Shakespeare is, existentially speaking, his plays tend to be pretty harsh, even the comedies. When Prospero says, "We are such stuff / as dreams are made on" in Act 4 of The Tempest, he's actually talking not about characters in stories, but about how ephemeral and insignificant real human lives are. But why, beyond some likely nudge from his inherited temperament, was Darwin so sensitive? Why was he so empathetic even to those so vastly different from him? After admitting he’d lost his taste for Shakespeare, paintings, and music, he goes on to say,

On the other hand, novels which are works of the imagination, though not of a very high order, have been for years a wonderful relief and pleasure to me, and I often bless all novelists. A surprising number have been read aloud to me, and I like all if moderately good, and if they do not end unhappily—against which a law ought to be passed. A novel, according to my taste, does not come into the first class unless it contains some person whom one can thoroughly love, and if a pretty woman all the better.

Also read

STORIES, SOCIAL PROOF, & OUR TWO SELVES

And:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

[Check out the Toronto group's blog at onfiction.ca]

Dennis Junk

The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

Jonathan Haidt extends an olive branch to conservatives by acknowledging their morality has more dimensions than the morality of liberals. But is he mistaking what’s intuitive for what’s right? A critical, yet admiring review of The Righteous Mind.

A Review of Jonathan Haidt's new book,

The Righteous Mind: Why Good People are Divided by Politics and Religion

Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robbers Cave in southern Oklahoma, where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and each came up with a name for itself: the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-of-war. The goal was to find out whether animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.

            So do conservatives.

           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that more dimensions are better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room

Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question: when we don’t want to believe something, we ask ourselves, “Must I believe it?”

Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’ll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image in an earlier book on happiness, so the use of the GOP mascot was accidental. But given the more intuitive nature of conservative beliefs, it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share the attitude himself.

Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression is involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness

The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say,

You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions, he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.

The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our palate. The two that everyone shares, but that liberals give priority to whenever two or more dimensions suggest different responses, are Care/Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description, he would have no beef with utilitarian ethicists like Mill, deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe, as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

[From the Moral Foundations website]

I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

“We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)

The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” from marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

Surprisingly, the altruism inspired by this sense of the sacred triggered by coordinated activity, though primarily directed at fellow group members—parochial altruism—can also flow out in ways that aren’t entirely tribal.

Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sanctity foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” which accounted for the difference between success and failure for religious communes: “the number of costly sacrifices that each commune demanded from its members” (257). Sacrifices demanded by secular communes, by contrast, made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt’s work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to a line of theorizing that runs through D.S. Wilson and Emile Durkheim all the way back to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to observing the movements of a football in an effort to understand why people love watching games. What matters, he suggests, is the coming together as a group, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important, then there’s no reason they have to be arbitrary—and there’s no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by wrongly assuming that anyone who sees nothing wrong with flushing an American flag down the toilet has no sense of the sacred. He shakes his finger at them, effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers hold sacred just isn’t worthy—even though it’s exactly what I hold sacred too, what I’ve devoted my career, and this very book, to.

As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred, ideal.

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals the boys had to cooperate to achieve, as when their bus got stuck on the side of the road and both groups had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

WHY TAMSIN SHAW IMAGINES THE PSYCHOLOGISTS ARE TAKING POWER


Tax Demagoguery

As in, put a tax on demagoguery, defined here as purposely misleading your audience. I haven’t considered this idea in a long time. The main issue I have now is that no one would agree on who the arbiters of factualness would be. Even I have some problems with the fact-checking organizations I mentioned back when I wrote this.

Robert Frank, in The Darwin Economy, begins with the premise that having a government is both desirable and unavoidable, and that to have a government we must raise revenue somehow. He then goes on to argue that since taxes act as disincentives to whatever behavior is being taxed, we should tax behaviors that harm citizens. The U.S. government currently taxes behaviors we as citizens ought to encourage, like hiring workers and making lots of money through productive employment. Frank’s central proposal is to impose a progressive consumption tax. He believes this is the best way to discourage “positional arms races,” those situations in which trying to keep up with the Joneses leads to harmful waste with no net benefit because everyone’s efforts cancel each other out. One of his examples is house size:

The explosive growth of CEO pay in recent decades, for example, has led many executives to build larger and larger mansions. But those mansions have long since passed the point at which greater absolute size yields additional utility. Most executives need or want larger mansions simply because the standards that define large have changed (61).

The crucial point here is that this type of wasteful spending doesn’t just harm the CEOs. Runaway spending at the top of the income ladder affects those on the lower rungs through a phenomenon Frank calls “expenditure cascades”:

Top earners build bigger mansions simply because they have more money. The middle class shows little evidence of being offended by that. On the contrary, many seem drawn to photo essays and TV programs about the lifestyles of the rich and famous. But the larger mansions of the rich shift the frame of reference that defines acceptable housing for the near-rich, who travel in many of the same social circles… So the near-rich build bigger, too, and that shifts the relevant framework for others just below them, and so on, all the way down the income scale. By 2007, the median new single-family house built in the United States had an area of more than 2,300 square feet, some 50 percent more than its counterpart from 1970 (61-2).

This growth in house size has occurred despite the stagnation of incomes for median earners. In the wake of the collapse of the housing market, it’s easy to see how serious this type of damage can be to society.
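The mechanics of Frank’s proposal are easy to sketch: taxable consumption is income minus documented savings, taxed at rates that climb steeply at the top. Here is a minimal toy version in Python; the brackets and rates are invented placeholders, not figures from The Darwin Economy:

```python
# Toy sketch of a progressive consumption tax. The brackets and
# rates below are invented for illustration only.

BRACKETS = [               # (upper bound of bracket, marginal rate)
    (30_000, 0.00),        # consumption up to $30k untaxed
    (100_000, 0.20),
    (500_000, 0.50),
    (float("inf"), 1.00),  # steep top rate to damp positional spending
]

def consumption_tax(income: float, savings: float) -> float:
    """Tax owed on consumption, defined as income minus savings."""
    consumption = max(income - savings, 0.0)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if consumption <= lower:
            break
        tax += (min(consumption, upper) - lower) * rate
        lower = upper
    return tax

# Same income, different spending: the heavy spender owes far more.
print(consumption_tax(2_000_000, savings=500_000))    # 1214000.0
print(consumption_tax(2_000_000, savings=1_500_000))  # 214000.0
```

The steep top rate is the point. A CEO who diverts another million from mansion-building into savings pays nothing on that million, so the positional arms race loses its fuel.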

Frank closes a chapter titled “Taxing Harmful Activities” with a section whose heading poses the question, “A Slippery Slope?” You can imagine a tax system that socially engineers your choices down to the sugar content of your beverages. “It’s a legitimate concern,” he acknowledges (193). But taxing harmful activities is still a better idea than taxing saving and job creation. Like any new approach, it risks going off track or going too far, but a cost-benefit analysis can be done for each proposed tax. As I’ve tried over the past few days to arrive at a list of harmful activities in immediate need of being taxed, one occurred to me that I haven’t seen mentioned anywhere else: demagoguery.

Even bringing up the topic makes me uncomfortable. Free speech is one of the central pillars of our democracy. So the task becomes defining demagoguery in a way that doesn’t stifle the ready exchange of ideas. But first let me answer the question of why this particular behavior made my shortlist. A quick internet search will make it glaringly apparent that large numbers of Tea Party supporters believe things that are simply not true. And, having attended my local Occupy Wall Street protest, I can attest there were some wacky ideas being broadcast there as well. The current state of political discourse in America is chaotic at best and tribal at worst. Policies are being enacted every day based on ideas with no validity whatsoever. The promulgation of such ideas is doing serious harm to our society—and, worse, it’s making rational, substantive debate and collectively beneficial problem-solving impossible.

So, assuming we can kill a couple of birds with a tax stone, how would we go about actually implementing the program? I propose forming a group of researchers and journalists whose task is to investigate complaints by citizens. Organizations like Factcheck.org and Politifact.com have already gone a long way toward establishing the feasibility of such a group. Membership will be determined by nominations from recognized research institutions like the American Association for the Advancement of Science and the Pew Research Center, to whom appeals can be made in the event of intensely contested rulings by the group itself. Anyone who’s accepted payment for any type of political activism will be ineligible for membership. The money to pay for the group and provide it with the necessary resources can come from the tax itself (though that might create a perverse incentive if members’ pay isn’t independent of their findings) or from revenues raised by taxes on other harmful activities.

The first step will be the complaint, which can be made by any citizen. If the number of complaints reaches some critical mass, or if the complaints are brought by recognized experts in the relevant field, then the research group will investigate. Once the group has established with a sufficient degree of certainty that a claim is false, anyone who broadcasts the claim will be taxed an amount determined by the size of the audience. The complaints, reports of the investigations, and the findings can all be handled through a website. We may even want to give the individual who made the claim a chance to correct her- or himself before levying the tax. Legitimate news organizations already do this, so they’d have nothing to worry about.

Talk show hosts who repeatedly make false claims will be classified as demagogues and pay a fixed rate, obviating any need for the research group to watch every show and investigate every claim. But anyone who is designated a demagogue must advertise the designation on the screen or at regular intervals on the air—along with a link or address for the research group’s site, where the audience can view a list of the false claims that earned him or her the designation.
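To make the moving parts concrete, here is a bare-bones sketch of the pipeline in Python. Every name, threshold, and rate is hypothetical, meant only to show how the complaint, investigation, and tax stages would fit together:

```python
# Hypothetical sketch of the proposed demagoguery-tax pipeline.
# All names, thresholds, and rates are invented for illustration.
from dataclasses import dataclass

COMPLAINT_THRESHOLD = 100   # "critical mass" of citizen complaints
RATE_PER_LISTENER = 0.001   # dollars per audience member per false claim
DEMAGOGUE_STRIKES = 3       # repeat rulings before flat-rate status

@dataclass
class Claim:
    speaker: str
    text: str
    complaints: int = 0
    expert_flagged: bool = False
    ruled_false: bool = False
    corrected: bool = False

def needs_investigation(claim: Claim) -> bool:
    """A claim is investigated at critical mass or on expert complaint."""
    return claim.expert_flagged or claim.complaints >= COMPLAINT_THRESHOLD

def assess_tax(claim: Claim, audience_size: int) -> float:
    """No tax unless the ruling went against an uncorrected claim."""
    if not claim.ruled_false or claim.corrected:
        return 0.0
    return audience_size * RATE_PER_LISTENER

def is_demagogue(false_claims_on_record: int) -> bool:
    """Repeat offenders graduate to the flat-rate demagogue designation."""
    return false_claims_on_record >= DEMAGOGUE_STRIKES
```

Note that a claim only costs its speaker money after a ruling has gone against it and no correction was issued, preserving the escape hatch for legitimate news organizations described above.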

Individuals speaking to each other won’t be affected. And bloggers with small audiences, if they are taxed at all, won’t be taxed much—or they can simply correct their mistakes. Demagogues like Rush Limbaugh and Michael Moore will still be free to spew nonsense, but they’ll have to consider the costs—because the harm they cause by being sloppy or mendacious doesn’t seem to bother them.

         Now, a demagogue isn't defined as someone who makes false claims; it's someone who uses personal charisma and tactics for whipping people into emotional frenzies to win support for a cause. I believe the chief strategy of demagogues is to incite tribalism, a sense of us-vs-them. But making demagogues pay for their false claims would, I believe, go a long way toward undermining their corrosive influence on public discourse.

Also read:

WHAT'S WRONG WITH THE DARWIN ECONOMY?

THE FAKE NEWS CAMPAIGN AGAINST STEVEN PINKER AND ENLIGHTENMENT NOW


Occupy Fort Wayne

A quick response to my attendance at an Occupy rally in my hometown.

What is one to do when the purpose of events like this is to rouse passion for a cause, but one believes there's already too much passion, too little knowledge, too little thinking, too little reason?

Heather Bureau from WOWO put a camera in my face at one point. "Can you tell me why you're here today?"

I had no idea who she was. "Are you going to put it on TV?"

"No, on the radio--or on the website." She pulled her hair aside to show me the WOWO logo on her shirt.

"I wanted to check it out. We're here because of income inequality. And the sway of the rich in politics. Plus, I guess we're all liberals." I really wanted her to go away.

The first speaker filled us in on "Occupation Etiquette." You hold up your arm and wave your hand when you agree with what's been said. And the speakers use short sentences the crowd repeats to make sure everyone can hear. The call and response reminded me of church. Rallies like this exist to gather crowds, so that when talking heads claim their views are the views of the people, they can point to the throngs who came to their gatherings.

But what is one to do about the guy carrying the sign complaining about illegal immigrants? What about all the people saying we need to shut down the Fed? What about the guy who says, "There's two kinds of people: the kind who work for money, and the kind who work for people"?

Why was I there? Well, I really do think inequality is a problem. But I support the Fed. And I'm first and foremost against tribalism. As soon as someone says, "There's two types of people..." then I know I'm somewhere I don't belong.

We shouldn't need political rallies to whip us up into a frenzy. We're obligated as citizens to pay attention to what's going on--and to vote. Maybe the Occupy Protests will get some issues into the news cycle that weren't there before. If so, I'm glad I went.

But politics is a battle of marketing strategies. Holding up signs and shouting to be heard--well, let's not pretend our voices are independent.

Some fool laughingly shouts about revolution. But I'm not willing to kill anyone over how much bankers make. Why was I there today? It seemed like the first time someone was talking about the right issue. Sort of.


The Inverted Pyramid: Our Millennia-Long Project of Keeping Alpha Males in their Place

Imagine this familiar hypothetical scenario: you’re a prehistoric hunter, relying on cleverness, athleticism, and well-honed skills to track and kill a gazelle on the savannah. When you cart the meat home, your wife is grateful—or rather your wives. As the top hunter in the small tribe with which your family lives and travels, you are accorded great power over all the other men, just as you enjoy great power over your family. You are the leader, the decision-maker, the final arbiter of disputes, and the one everyone looks to for direction in times of distress. The payoff for all this responsibility is that you and your family enjoy larger shares of whatever meat is brought in by your subordinates. And you have sexual access to almost any woman you choose. Someday, though, you know your prowess will wane, and you’ll be subjected to more and more challenges from younger men, until eventually you are divested of all your authority. This is the harsh reality of man the hunter.

It’s easy to read about “chimpanzee politics” (I’ll never forget reading Frans de Waal’s book by that title) or to watch nature shows in which a stentorian, accented narrator assigns names and ranks to chimps or gorillas, then to look around at the rigid hierarchies of the institutions where we learn or work, as well as those in the realm of actual human politics, and conclude that there must have been a pretty linear development from the ancestors we share with the apes, through the eras of pharaohs and kings, through the Don Draper-ish ’60s, till today, at least in terms of our natural tendency to form ranks and follow leaders.

What to make, then, of these words spoken to anthropologist Richard Lee by a hunter-gatherer teaching him the ways of the !Kung San?

“Say that a man has been hunting. He must not come home and announce like a braggart, ‘I have killed a big one in the bush!’ He must first sit down in silence until I or someone else comes up to his fire and asks, ‘What did you see today?’ He replies quietly, ‘Ah, I’m no good for hunting. I saw nothing at all…maybe just a tiny one.’ Then I smile to myself because I now know he has killed something big.” (Quoted in Boehm 45)

Even more puzzling from a selfish gene perspective is that the successful hunter gets no more meat for himself or his family than any of the other hunters. They divide it equally. Lee asked his informants why they criticized the hunters who made big kills.

“When a young man kills much meat, he comes to think of himself as a chief or big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this.”

So what determines who gets to be the Alpha, if not hunting prowess? According to Christopher Boehm, the answer is simple. No one gets to be the Alpha. “A distinctly egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (36). These are exactly the types of groups humans have lived in for the vast majority of their existence on Earth. This means that, uniquely among the great apes, humans evolved mechanisms to ensure egalitarianism alongside those for seeking and submitting to power.

Boehm’s Hierarchy in the Forest: The Evolution of Egalitarian Behavior is a miniature course in anthropology, as dominance and submission—as well as coalition building and defiance—are examined not merely in the ethnographic record, but in the ethological descriptions of our closest ape relatives. Building on Bruce Knauft’s observations of the difference between apes and hunter-gatherers, Boehm argues that “with respect to political hierarchy human evolution followed a U-shaped trajectory” (65). But human egalitarianism is not based on a simple absence of hierarchy; rather, Boehm theorizes that the primary political actors (who with a few notable exceptions tend to be men) decide on an individual basis that, while power may be desirable, the chances of any individual achieving it are small, and the time span during which he could sustain it would be limited. Therefore, they all submit to the collective will that no man should have authority over any other, and thus all of them maintain their own personal autonomy. Boehm explains:

“In despotic social dominance hierarchies the pyramid of power is pointed upward, with one or a few individuals (usually male) at the top exerting authority over a submissive rank and file. In egalitarian hierarchies the pyramid of power is turned upside down, with a politically united rank and file decisively dominating the alpha-male types” (66).

This isn’t to say that there aren’t individuals who by dint of their prowess and intelligence enjoy more influence over the band than others, but such individuals are thought of as “primus inter pares” (33), a first among equals. “Foragers,” Boehm writes, “are not intent on true and absolute equality, but on a kind of mutual respect that leaves individual autonomy intact” (68). It’s as though the life of the nomadic hunter and forager is especially conducive to thinking in terms of John Rawls’s “veil of ignorance.”

The mechanisms whereby egalitarianism is enforced will be familiar to anyone who’s gone to grade school or who works with a group of adult peers. Arrogant and bullying individuals are the butt of jokes, gossip, and ostracism. For a hunter-gatherer these can be deadly. Reputations are of paramount importance. If all else fails and a despot manages to secure some level of authority, instigating a “dominance episode,” his reign will be short-lived. Even the biggest and strongest men are vulnerable to sizable coalitions of upstarts—especially in a species that excels at making weapons for felling big game.

Boehm addresses several further questions, such as what conditions bring about the reinstitution of pyramidal hierarchies, and how consensus decision-making and social pressure against domineering behavior have affected human evolution. But what I find most interesting are his thoughts about the role of narrative in the promulgation and maintenance of the egalitarian ethos:

“As practical political philosophers, foragers perceive quite correctly that self-aggrandizement and individual authority are threats to personal autonomy. When upstarts try to make inroads against an egalitarian social order, they will be quickly recognized and, in many cases, quickly curbed on a preemptive basis. One reason for this sensitivity is that the oral tradition of a band (which includes knowledge from adjacent bands) will preserve stories about serious domination episodes. There is little doubt that many of the ethnographic reports of executions in my survey were based on such traditions, as opposed to direct ethnographic observation” (87).


More of a Near Miss--Response to "Collision"

The documentary Collision is an attempt at irony. On the box, the title is spelled with a bloody slash standing in for the i in the middle of the word. The film opens with hard rock music and Christopher Hitchens throwing down the gauntlet: "One of us will have to admit he's wrong. And I think it should be him." There are jerky close-ups and dramatic pull-aways. The whole thing is made to resemble one of those pre-event commercials on pay-per-view for boxing matches or UFC fights.

The big surprise, which I don't think I'm ruining, is that evangelical Christian Douglas Wilson and anti-theist Christopher Hitchens--even in the midst of their heated disagreement--seem to like and respect each other. At several points they stop debating and simply chat with one another. They even trade Wodehouse quotes (and here I thought you had to be English to appreciate that humor). Some of the best scenes have the two men disagreeing without any detectable bitterness, over drinks in a bar, as they ride side by side in a car, each even giving signs of being genuinely curious about what the other is saying. All this bonhomie takes place despite the fact that neither changes his position at all over the course of their book tour.

I guess for some this may come as a surprise, but I've been arguing religion and science and politics with people I like, or even love, since I was in my early teens. One of the things that got me excited about the movie was that my oldest brother, a cancer biologist whose professed Christianity I suspect is a matter of marital expediency (just kidding), once floated the idea of collaborating on a book similar to Wilson and Hitchens's. So I was more disappointed than pleasantly surprised that the film focused more on the two men's mutual respect than on the substance of the debate.

There were some parts of the argument that came through, though. The debate wasn't over whether God exists but whether belief in him is beneficial to the world. Either the director or the editors seemed intent on making the outcome an even wash. Wilson took on Hitchens's position that morality is innate, based on an evolutionary need for "human solidarity," by pointing out, validly, that so are immorality and violence. He suggested that Hitchens's own morality was in fact derived from Christianity, even though Hitchens refuses to acknowledge as much. If both morality and its opposite come from human nature, Wilson argues, then you need a third force to compel you in one direction over the other. Hitchens, if he ever answered this point, wasn't shown doing so in the documentary. He does point out, though, that Christianity hasn't been any better historically at restricting human nature to acting on behalf of its better angels.

Wilson's argument is fundamentally postmodern. He explains at one point that he thinks rationalists giving reasons for believing what they do is no different from him quoting a Bible verse to explain his belief in the Bible. All epistemologies are circular. None are to be privileged. This is nonsense. And it would have been nice to see Hitchens take him to task for it. For one thing, the argument is purely negative--it attempts to undermine rationalism but offers no positive arguments on behalf of Christianity. To the degree that it effectively casts doubt on nonreligious thinking, it casts the same amount of doubt on religion. For another, the analogy strains itself to the point of absurdity. Reason supporting reason is a whole different animal from the Bible supporting the Bible, for the same reason that a statement arrived at by deduction is different from a statement made at random. Two plus two equals four isn't the same as there's an invisible being in the sky and he's pissed.

Of course, two plus two equals four is tautological. It's circular. But science isn't based on rationalism alone; it's rationalism cross-referenced with empiricism. If Wilson's postmodern arguments had any validity (and they don't), they still wouldn't provide him with any basis for being a Christian as opposed to an atheist as opposed to a Muslim as opposed to a drag queen. But science offers a standard of truth.

Wilson's other argument, that you need some third factor beyond good instincts and bad instincts to be moral, is equally lame. Necessity doesn't establish validity. As one witness to the debate in a bar points out, an argument from practicality doesn't serve to prove a position is true. What I wish Hitchens had pointed out, though, is that the third factor need not be divine authority. It can just as easily be empathy. And what about culture? What about human intentionality? Can't we look around, assess the state of the world, realize our dependence on other humans in an increasingly global society, and decide to be moral? I'm a moral being because I was born capable of empathy, and because I subscribe to Enlightenment principles of expanding that empathy and affording everyone on Earth a set of fundamental human rights. And, yes, I think the weight of the evidence suggests that religion, while it serves to foster in-group cooperation, also inspires tribal animosity and war. It needs to be done away with.

One last note: Hitchens tries to illustrate our natural impulse toward moral behavior by describing an assault on a pregnant woman. "Who wouldn't be appalled?" Wilson replies, "Planned Parenthood." I thought Hitchens of all people could be counted on to denounce such an outrage. Instead, he limply says, "Don't be flippant," then stands idly, mutely by as Wilson explains how serious he is. That a man as intelligent as Wilson doesn't see that comparing a pregnant woman being thrown down and kicked in the stomach to abortion is akin to comparing violent rape to consensual sex is a perfect demonstration of Hitchens's point that Christianity perverts morality. He ought to be ashamed--but won't ever be. I think Hitchens ought to be ashamed for letting him say it unchallenged (unless the challenge was edited out).

Art as Altruism: Lily Briscoe and the Ghost of Mrs. Ramsay in To the Lighthouse Part 2 of 2

Because evolution took advantage of our concern for our reputations and our ability to reason about the thoughts and feelings of others to ensure cooperation, Lily’s predicament, her argument with the ghost of Mrs. Ramsay over the proper way for a woman to live, could only be resolved through proof that she was not really free-riding or cheating, but was in fact altruistic in her own way.

The question remains, though, of why Virginia Woolf felt it necessary to recall scenes from her childhood in order to lay to rest her inner conflict over her chosen way of life—if that is indeed what To the Lighthouse did for her. She did not, in fact, spend her entire life single but married her husband Leonard in 1912 at the age of thirty and stayed with him until her death in 1941. The Woolfs had been married fifteen years by the time Lighthouse was published (Lee 314). But Virginia’s marriage was quite different from her mother Julia’s. For one, as is made abundantly clear in her diaries, Leonard Woolf was much more supportive and much less demanding than her father Leslie Stephen. More important, though, Julia had seven children of her own and cared for one of Leslie’s from a previous marriage (Lee xx), whereas Virginia remained childless all her life. But, even if she felt her lifestyle represented a cataclysmic break from her mother’s cultural tradition, it is remarkable that the pain of this partition persisted from the time of Julia’s death, when Virginia was thirteen, until the writing of Lighthouse, when she was forty-four—the same age as Lily in the last section of the novel. Lily returns to the Ramsays’ summer house ten years after the visit described in the first section, Mrs. Ramsay having died rather mysteriously in the interim, and sets to painting the same image she struggled to capture before. “She had never finished that picture. She would paint that picture now. It had been knocking about in her mind all these years” (147). But why should Lily experience such difficulty handling a conflict of views with a woman who has been dead for years?

Wilson sees the universal propensity among humans to carry on relationships with supernatural beings—like the minds and personalities of the dead, but also including disembodied characters like deities—as one of a host of mechanisms, partly cultural, partly biological, devoted to ensuring group cohesion. In his book Darwin’s Cathedral, in which he attempts to explain religion in terms of his group selection theory, he writes,

A group of people who abandon self-will and work tirelessly for a greater good will fare very well as a group, much better than if they all pursue their private utilities, as long as the greater good corresponds to the welfare of the group. And religions almost invariably do link the greater good to the welfare of the community of believers, whether an organized modern church or an ethnic group for whom religion is thoroughly intermixed with the rest of their culture. Since religion is such an ancient feature of our species, I have no problem whatsoever imagining the capacity for selflessness and longing to be part of something larger than ourselves as part of our genetic and cultural heritage. (175)

One of the main tasks religious beliefs must handle is the same “free-rider problem” William Flesch discovers at the heart of narrative. What religion offers beyond the social monitoring of group members is the presence of invisible beings whose concerns are tied in to the collective concerns of the group. Jesse Bering contributes to this perspective by positing a specific cognitive mechanism which paved the way for the evolution of beliefs about invisible agents, and his theory provides a crucial backdrop for any discussion of the role the dead play for the living, in life or in literature. Of course, Mrs. Ramsay is not a deity, and though Lily feels as she paints “a sense of some one there, of Mrs. Ramsay, relieved for a moment of the weight that the world had put on her” (181), which she earlier describes as, “Ghost, air, nothingness, a thing you could play with easily and safely at any time of day or night, she had been that, and then suddenly she put her hand out and wrung the heart thus” (179), she does not believe Mrs. Ramsay is still around in any literal sense. Bering suggests this “nothingness” with the power to wring the heart derives from the same capacity humans depend on to know, roughly, what other humans are thinking. Though there is much disagreement about whether apes understand differences in each other’s knowledge and intentions, it is undeniably the case that humans far outshine any other creature in their capacity to reason about the inner, invisible workings of the minds of their conspecifics. We are so predisposed to this type of reasoning that, according to Bering, we apply it to natural phenomena in which no minds are involved. He writes,

just like other people’s surface behaviors, natural events can be perceived by us human beings as being about something other than their surface characteristics only because our brains are equipped with the specialized cognitive software, theory of mind, that enables us to think about underlying psychological causes. (79)

As Lily reflects, “this making up scenes about them, is what we call ‘knowing’ people” (173). And we must make up these scenes because, like the bees hovering about the hive she compares herself to in the first section, we have no direct access to the minds of others. Yet if we are to coordinate our actions adaptively—even competitively when other groups are involved—we have no choice but to rely on working assumptions, our theories of others’ knowledge and intentions, updating them when necessary.

The reading of natural events as signs of some mysterious mind, as well as the continued importance of minds no longer attached to bodies capable of emitting signs, might have arisen as a mere byproduct of humans’ need to understand one another, but at some point in the course of our evolution our theories of disembodied minds were co-opted in the service of helping to solve the free-rider problem. In his book The God Instinct, Bering describes a series of experiments known as “The Princess Alice studies,” which have young children perform various tasks after being primed to believe an invisible agent (named Alice in honor of Bering’s mother) is in the room with them. What he and his colleagues found was that Princess Alice’s influence only emerged as the children’s theory of mind developed, suggesting “the ability to be superstitious actually demands some mental sophistication” (96). But once a theory of mind is operating, the suggestion of an invisible presence has a curious effect. First in a study of college students casually told about the ghost of a graduate student before taking a math test, and then in a study of children told Princess Alice was watching them as they performed a difficult task involving Velcro darts, participants primed to consider the mind of a supernatural agent were much less likely to take the opportunities to cheat that were built into the experimental designs (193-4).

Because evolution took advantage of our concern for our reputations and our ability to reason about the thoughts and feelings of others to ensure cooperation, Lily’s predicament, her argument with the ghost of Mrs. Ramsay over the proper way for a woman to live, could only be resolved through proof that she was not really free-riding or cheating, but was in fact altruistic in her own way. Considering the fate of a couple Mrs. Ramsay had encouraged to marry, Lily imagines, “She would feel a little triumphant, telling Mrs. Ramsay that the marriage had not been a success.” But, she would go on, “They’re happy like that; I’m happy like this. Life has changed completely.” Thus Lily manages to “over-ride her wishes, improve away her limited, old-fashioned ideas” (174-5). Lily’s ultimate redemption, though, can only come through acknowledgement that the life she has chosen is not actually selfish. The difficulty in this task stems from the fact that “one could not imagine Mrs. Ramsay standing painting, lying reading, a whole morning on the lawn” (196). Mrs. Ramsay has no appreciation for art or literature, but for Lily it is art—and for Woolf it is literature—that is both the product of all that time alone and her contribution to society as a whole. Lily is redeemed when she finishes her painting, and that is where the novel ends. At the same time, Virginia Woolf, having completed this great work of literature, bequeathed it to society, to us, and in so doing proved her own altruism, thus laying to rest the ghost of Julia Stephen.

Also read:
THEY COMES A DAY: CELEBRATING COOPERATION IN A GATHERING OF OLD MEN AND HORTON HEARS A WHO!

T.J. ECKLEBURG SEES EVERYTHING: THE GREAT GOD-GAP IN GATSBY

MADNESS AND BLISS: CRITICAL VERSUS PRIMITIVE READINGS IN A.S. BYATT’S POSSESSION: A ROMANCE


Art as Altruism: Lily Briscoe and the Ghost of Mrs. Ramsay in To the Lighthouse Part 1 of 2

Woolf’s struggle with her mother, and its manifestation as Lily’s struggle with Mrs. Ramsay, represents a sort of trial in which the younger living woman defends herself against a charge of selfishness leveled by her deceased elder. And since Woolf’s obsession with her mother ceased upon completion of the novel, she must have been satisfied that she had successfully exonerated herself.

Virginia Woolf underwent a transformation in the process of writing To the Lighthouse, the nature of which has been the subject of much scholarly inquiry. At the center of the novel is the relationship between the beautiful, self-sacrificing, and yet officious Mrs. Ramsay, and the retiring, introverted artist Lily Briscoe. “I wrote the book very quickly,” Woolf recalls in “Sketch of the Past,” “and when it was written, I ceased to be obsessed by my mother. I no longer hear her voice; I do not see her.” Quoting these lines, biographer Hermione Lee suggests the novel is all about Woolf’s parents, “a way of pacifying their ghosts” (476). But how exactly did writing the novel function to end Woolf’s obsession with her mother? And, for that matter, why would she, at forty-four, still be obsessed with a woman who had died when she was only thirteen? Evolutionary psychologist Jesse Bering suggests that while humans are uniquely capable of imagining the inner workings of each other’s minds, the cognitive mechanisms underlying this capacity, which psychologists call “theory of mind,” simply fail to comprehend the utter extinction of those other minds. However, the lingering presence of the dead is not merely a byproduct of humans’ need to understand and communicate with other living humans. Bering argues that the watchful gaze of disembodied minds—real or imagined—serves a type of police function, ensuring that otherwise selfish and sneaky individuals cooperate and play by the rules of society. From this perspective, Woolf’s struggle with her mother, and its manifestation as Lily’s struggle with Mrs. Ramsay, represents a sort of trial in which the younger living woman defends herself against a charge of selfishness leveled by her deceased elder. And since Woolf’s obsession with her mother ceased upon completion of the novel, she must have been satisfied that she had successfully exonerated herself.

Woolf made no secret of the fact that Mr. and Mrs. Ramsay were fictionalized versions of her own parents, and most critics see Lily as a stand-in for the author—even though she is merely a friend of the Ramsay family. These complex relationships between author and character, and between daughter and parents, lie at the heart of a dynamic which readily lends itself to psychoanalytic explorations. Jane Lilienfeld, for instance, suggests Woolf created Lily as a proxy to help her accept her parents, both long dead by the time she began writing, “as monumental but flawed human beings,” whom she both adored and detested. Having reduced the grand, archetypal Mrs. Ramsay to her proper human dimensions, Lily is free to acknowledge her own “validity as a single woman, as an artist whose power comes not from manipulating others’ lives in order to fulfill herself, but one whose mature vision encapsulates and transcends reality” (372). But for all the elaborate dealings with mythical and mysterious psychic forces, the theories of Freud and Jung explain very little about why writers write and why readers read. And they explain very little about how people relate to the dead, or about what role the dead play in narrative. Freud may have been right about humans’ intense ambivalence toward their parents, but why should this tension persist long after those parents have ceased to exist? And Jung may have been correct in his detection of mythic resonances in his patients’ dreams, but what accounts for such universal narrative patterns? What do they explain?

Looking at narrative from the perspective of modern evolutionary biology offers several important insights into why people devote so much time and energy to, and get so much gratification from, immersing themselves in the plights and dealings of fictional characters. Anthropologists believe the primary concern for our species at the time of its origin was the threat of rival tribes vying for control of limited resources. The legacy of this threat is the persistent proclivity for tribal—us versus them—thinking among modern humans. But alongside our penchant for dehumanizing members of out-groups arose a set of mechanisms designed to encourage—and when necessary to enforce—in-group cooperation for the sake of out-competing less cohesive tribes. Evolutionary literary theorist William Flesch sees in narrative a play of these cooperation-enhancing mechanisms. He writes, “our capacity for narrative developed as a way for us to keep track of cooperators” (67), and he goes on to suggest we tend to align ourselves with those we perceive as especially cooperative or altruistic while feeling an intense desire to see those who demonstrate selfishness get their comeuppance. This is because “altruism could not sustain an evolutionarily stable system without the contribution of altruistic punishers to punish the free-riders who would flourish in a population of purely benevolent altruists” (66). Flesch cites the findings of numerous experiments which demonstrate people’s willingness to punish those they see as exploiting unspoken social compacts and implicit rules of fair dealing, even when meting out that punishment involves costs or risks to the punisher (31-34). Child psychologist Karen Wynn has found that even infants too young to speak prefer to play with puppets or blocks with crude plastic eyes that have in some way demonstrated their altruism over the ones they have seen behaving selfishly or aggressively (557-560). Such experiments lead Flesch to posit a social monitoring and volunteered affect theory of narrative interest, whereby humans track the behavior of others, even fictional others, in order to assess their propensity for altruism or selfishness and are anxious to see that the altruistic are vindicated while the selfish are punished. In responding thus to other people’s behavior, whether they are fictional or real, the individual signals his or her own propensity for second- or third-order altruism.
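Flesch’s point about altruistic punishers can be made concrete with a toy public goods game, a standard setup in this literature (the sketch and its parameters are mine, not Flesch’s). Without punishers, free-riding is the winning strategy; add a single punisher willing to pay a cost to levy fines, and it stops paying:

```python
# Toy public goods game with costly punishment (illustrative numbers).
POT_MULTIPLIER = 1.6   # contributions are multiplied, then shared by all
CONTRIBUTION = 10      # what each cooperator puts into the pot
FINE = 12              # levied on each free-rider by each punisher
PUNISH_COST = 4        # what a punisher pays per fine levied

def play_round(strategies):
    """strategies: list of 'altruist', 'free_rider', or 'punisher'."""
    n = len(strategies)
    contributes = [s != "free_rider" for s in strategies]
    pot = sum(contributes) * CONTRIBUTION * POT_MULTIPLIER
    payoffs = [pot / n - (CONTRIBUTION if c else 0) for c in contributes]
    for i, s in enumerate(strategies):        # punishers fine free-riders
        if s == "punisher":
            for j, t in enumerate(strategies):
                if t == "free_rider":
                    payoffs[i] -= PUNISH_COST
                    payoffs[j] -= FINE
    return payoffs

print(play_round(["altruist", "altruist", "free_rider"]))  # rider wins
print(play_round(["altruist", "punisher", "free_rider"]))  # rider loses
```

Notice that the punisher ends up worse off than the plain altruist, the so-called second-order free-rider problem, which is presumably why Flesch’s theory needs the “volunteered affect” half: spectators who admire and side with punishers help subsidize the cost of punishing.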

The plot of To the Lighthouse is unlike anything else in literature, and yet a great deal of information is provided regarding the relative cooperativeness of each of the characters. Foremost among them in her compassion for others is Mrs. Ramsay. While it is true from the perspective of her own genetic interests that her heroic devotion to her husband and their eight children can be considered selfish, she nonetheless extends her care beyond the sphere of her family. She even concerns herself with the tribulations of complete strangers, something readers discover early in the novel, as

she ruminated the other problem, of rich and poor, and the things she saw with her own eyes… when she visited this widow, or that struggling wife in person with a bag on her arm, and a note-book and pencil with which she wrote down in columns carefully ruled for the purpose wages and spendings, employment and unemployment, in the hope that thus she would cease to be a private woman whose charity was half a sop to her own indignation, half relief to her own curiosity, and become what with her untrained mind she greatly admired, an investigator, elucidating the social problem. (9)

No sooner does she finish reflecting on this social problem than she catches sight of her husband’s friend Charles Tansley, who is feeling bored and “out of things,” because no one staying at the Ramsays’ summer house likes him. Regardless of the topic Tansley discusses with them, “until he had turned the whole thing around and made it somehow reflect himself and disparage them—he was not satisfied” (8). And yet Mrs. Ramsay feels compelled to invite him along on an errand so that he does not have to be alone. Before leaving the premises, though, she has to ask yet another houseguest, Augustus Carmichael, “if he wanted anything” (10). She shows this type of exquisite sensitivity to others’ feelings and states of mind throughout the first section of the novel.

Mrs. Ramsay’s feelings about Lily, another houseguest, are at once dismissive and solicitous. Readers are introduced to Lily only through Mrs. Ramsay’s sudden realization, after prolonged absentmindedness, that she is supposed to be holding still so Lily can paint her. Mrs. Ramsay’s son James, who is sitting with her as he cuts pictures out of a catalogue, makes a strange noise she worries might embarrass him. She turns to see if anyone has heard: “Only Lily Briscoe, she was glad to find; and that did not matter.” Mrs. Ramsay is doing Lily the favor of posing, but the gesture goes no further than mere politeness. Still, there is a quality the younger woman possesses that she admires. “With her little Chinese eyes,” Mrs. Ramsay thinks, “and her puckered-up face, she would never marry; one could not take her painting very seriously; she was an independent little creature, and Mrs. Ramsay liked her for it” (17). Lily’s feelings toward her hostess, on the other hand, though based on a similar recognition that the other enjoys aspects of life utterly foreign to her, are much more intense. At one point early in the novel, Lily wonders, “what could one say to her?” The answer she hazards is “I’m in love with you?” But she decides that is not true and settles on, “‘I’m in love with this all,’ waving her hand at the hedge, at the house, at the children” (19). What Lily loves, and what she tries to capture in her painting, is the essence of the family life Mrs. Ramsay represents, the life Lily herself has rejected in pursuit of her art. It must be noted too that, though Mrs. Ramsay is not related to Lily, Lily has only an elderly father, and so some of the appeal of the large, intact Ramsay family to Lily is the fact that she has for some time been without a mother.

Apart from admiring in the other what each lacks herself, the two women share little in common. The tension between them derives from Lily’s having resigned herself to life without a husband, life in the service of her art and caring for her father, while Mrs. Ramsay simply cannot imagine how any woman could be content without a family. Underlying this conviction is Mrs. Ramsay’s unique view of men and her relationship to them:

Indeed, she had the whole of the other sex under her protection; for reasons she could not explain, for their chivalry and valour, for the fact that they negotiated treaties, ruled India, controlled finance; finally for an attitude towards herself which no woman could fail to feel or to find agreeable, something trustful, childlike, reverential; which an old woman could take from a young man without loss of dignity, and woe betide the girl—pray Heaven it was none of her daughters!—who did not feel the worth of it, and all that it implied, to the marrow of her bones! (6)

In other words, woe betide Lily Briscoe. Anthropologists Peter Richerson and Robert Boyd, whose work on the evolution of cooperation in humans provides the foundation for Flesch’s theory of narrative, put forth the idea that culture functions to simultaneously maintain group cohesion and to help the group adapt to whatever environment it inhabits. “Human cultures,” they point out, “can change even more quickly than the most rapid examples of genetic evolution by natural selection” (43). What underlies the divergence of views about women’s roles between the two women in Woolf’s novel is that their culture is undergoing major transformations owing to political and economic upheaval in the lead-up to the First World War.

Lily has no long-established tradition of women artists in which to find solace and guidance; rather, the most salient model of womanhood is the family-minded, self-sacrificing Mrs. Ramsay. It is therefore to Mrs. Ramsay that Lily must justify her attempt at establishing a new tradition. She reads the older woman as making the implicit claim that “an unmarried woman has missed the best of life.” In response, Lily imagines how

gathering a desperate courage she would urge her own exemption from the universal law; plead for it; she liked to be alone; she liked to be herself; she was not made for that; and so have to meet a serious stare from eyes of unparalleled depth, and confront Mrs. Ramsay’s simple certainty… that her dear Lily, her little Brisk, was a fool. (50)

Living alone, being herself, and refusing to give up her time or her being to any husband or children strikes even Lily herself as both selfish and illegitimate, lacking cultural sanction and therefore doubly selfish. Trying to figure out the basis of her attraction to Mrs. Ramsay, beyond her obvious beauty, Lily asks herself, “did she lock up within her some secret which certainly Lily Briscoe believed people must have for the world to go on at all? Every one could not be as helter skelter, hand to mouth as she was” (50). Lily’s dilemma is that she can either be herself, or she can be a member of a family, because being a member of a family means she cannot be wholly herself; like Mrs. Ramsay, she would have to make compromises, and her art would cease to have any more significance than the older woman’s note-book with all its writing devoted to social problems. But she must justify devoting her life only to herself. Meanwhile, she’s desperate for some form of human connection beyond the casual greetings and formal exchanges that take place under the Ramsays’ roof.

Lily expresses a desire not just for knowledge from Mrs. Ramsay but for actual unity with her because what she needs is “nothing that could be written in any language known to men.” She wants to be intimate with the “knowledge and wisdom… stored up in Mrs. Ramsay’s heart,” not any factual information that could be channeled through print. The metaphor Lily uses for her struggle is particularly striking for anyone who studies human evolution.

How then, she had asked herself, did one know one thing or another thing about people, sealed as they were? Only like a bee, drawn by some sweetness or sharpness in the air intangible to touch or taste, one haunted the dome-shaped hive, ranged the wastes of the air over the countries of the world alone, and then haunted the hives with their murmurs and their stirrings; the hives, which were people. (51)

According to evolutionary biologist David Sloan Wilson, bees are one of only about fifteen species of social insect that have crossed the “Cooperation Divide,” beyond which natural selection at the level of the group supersedes selection at the level of the individual. “Social insect colonies qualify as organisms,” Wilson writes, “not because they are physically bounded but because their members coordinate their activities in organ-like fashion to perpetuate the whole” (144). The main element that separates humans from their ancestors and other primates, he argues, “is that we are evolution’s newest transition from groups of organisms to groups as organisms. Our social groups are the primate equivalent of bodies and beehives” (154). The secret locked away from Lily in Mrs. Ramsay’s heart, the essence of the Ramsay family that she loves so intensely and feels compelled to capture in her painting, is that human individuals are adapted to life in groups of other humans who together represent a type of unitary body. In trying to live by herself and for herself, Lily is going not only against the cultural traditions of the previous generation but even against her own nature.

Part 2.


From Rags to Republican

Written at a time in my life when I wanted to argue politics and religion with anyone who was willing—and some who weren’t—this essay starts with the observation that when you press conservatives for evidence or logic to support their economic theories, they’ll instead tell you how their own personal stories somehow prove it’s possible to start with little and beat the odds.

One of the dishwashers at the restaurant where I work likes to light-heartedly discuss politics with me. “How are things this week on the left?” he might ask. Not even in his twenties yet, he can impressively explain why it’s wrong to conflate communism with Stalinism. He believes the best government would be a communist one, but until we figure out how to establish it, our best option is to go Republican. He loves Rush Limbaugh. One day I was talking about disparities in school funding when he began telling me why he doesn’t think that sort of thing is important. “I did horribly in school, but I decided I wanted to learn on my own.”

He went on to tell me about a terrible period he went through growing up, after his parents got divorced and his mother was left nearly destitute. The young dishwasher had pulled himself up by his own bootstraps. The story struck me because about two weeks earlier I’d been discussing politics with a customer in the dining room who told a remarkably similar one. He was eating with his wife and their new baby. When I disagreed with him that Obama’s election was a national catastrophe, he began an impromptu lecture on conservative ideology. I interrupted him, saying, “I understand top-down economics; I just don’t agree with it.” But when I started to explain the bottom-up theory, he interrupted me with a story about how his mom was on food stamps and they had nothing when he was a kid, and yet here he is, a well-to-do father (he even put a number on his prosperity). “I’m walking proof that it is possible.”

I can go on and on with more examples. It seems like the moment anyone takes up the mantle of economic conservatism for the first time he (usually males) has to put together one of these rags-to-riches stories. I guess I could do it too, with just a little exaggeration. “My first memories are of living in government subsidized apartments, and my parents argued about money almost every day of my life when I was a kid, and then they got divorced and I was devastated—I put on weight until I was morbidly obese and I went to a psychologist for depression because I missed a month of school in fourth grade.” (Actually, that’s not exaggerated at all.)

The point we’re supposed to take away is that hardship is good and that no matter how bad being poor may appear, it’s nothing a good work ethic can’t fix. Invariably, the Horatio Alger hero proceeds to the non sequitur that his making it out of poverty means it’s a bad idea for us as a society to invest in programs to help the poor. Push him by asking what if the poverty he experienced wasn’t as bad as the worst poverty in the country, or where that work ethic that saved him came from, and he’ll most likely shift gears and start explaining that becoming a productive citizen is a matter of incentives.

The logic runs: if you give money to people who aren’t working, you’re taking away the main incentive they had to get off their asses and go to work. Likewise, if you take money away from the people who have earned it by taxing them, you’re giving them a disincentive to continue being productive. This is a folksy version of a Skinner Box: you get the pigeons to do whatever tricks you want by rewarding them with food pellets when they get close to performing the tricks correctly—“successive approximations” of the behavior—and punishing them by withholding food pellets when they go astray. What’s shocking is that this is as sophisticated as the great Reagan Revolution ever got. It’s a psychological theory that was recognized as too simplistic back in the 1950s, writ large to explain the economy. What if people can make money in ways other than going to work, say, by selling drugs? The conservatives’ answer—more police, harsher punishments. But what if money isn’t the only reward people respond to? And what if prison doesn’t work like it’s supposed to?
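For what it’s worth, the Skinner Box logic can be written down in a few lines. This toy loop (my illustration, not anything from the conservative literature) “shapes” a behavior by rewarding successive approximations of a target:

```python
# Toy model of operant shaping by successive approximations
# (illustrative only): keep any attempt that lands closer to the
# target behavior than the current one; abandon unrewarded attempts.
import random

random.seed(0)
target = 1.0      # the complete trick we want performed
behavior = 0.0    # the pigeon's current best approximation

for trial in range(200):
    attempt = behavior + random.uniform(-0.1, 0.2)
    if abs(target - attempt) < abs(target - behavior):
        behavior = attempt  # reward: the new approximation sticks

print(f"after 200 trials the behavior is at {behavior:.2f} of the trick")
```

The critique above is precisely that an economy is not a box with one lever and one kind of pellet: people respond to rewards other than money, and punishments don’t always punish.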

The main appeal, I think, of Skinner Box Economics is that it says, in effect, don’t worry about having more than other people because you’ve earned what you have. You deserve it. What a relief to hear that we have more because we’re just better people. We needn’t work ourselves up over the wretched plight of the have-nots; if they really wanted to, they could have everything we have. To keep this line of reasoning afloat you need to buoy it up with a bit of elitism: so maybe offering everyone the same incentives won’t make everyone rich, but the smartest and most industrious people will be all right. If you’re doing all right, then you must be smart and industrious. And if you’re filthy rich, say, Wall Street banker rich, then, well, you must be one amazing S.O.B. How much money you have becomes an index of how virtuous you are as a person. And some people are so amazing, in fact, that the worst thing society can do is hold them back in any way, because their prosperity is so awesome it benefits everyone—it trickles down. There you have it: a rationale for letting rich people do whatever they want and leaving poor people to their own devices to pull up their own damn bootstraps. This is the thinking that has led even our Democratic president to believe he needs to pander to Wall Street to save the economy. This is conservatism. And it’s so silly no adult should entertain it for more than a moment.

A philosophy that further empowers the powerful, that justifies the holding of power over the masses of the less powerful, ought to be appealing to anyone who actually has power. But it’s remarkable how well these ideas trickle down to the rest of us. One way to account for the assimilation of Skinner Box Economics by the middle class is that it is, after all, the middle class; people in it still have to justify being more privileged than those in the classes below. But the real draw probably has little to do with any recognition of one’s actual circumstances; it relies rather on a large-scale obliviousness to them. Psychologists have been documenting for years the power of two biases we all fall prey to that have a bearing on our economic thinking. The first is the self-serving bias: we take credit any time we succeed at something but point to forces beyond our control whenever we fail. One of the best illustrations of the self-serving bias is research showing that the percentage of people who believe themselves to be better-than-average drivers is in the nineties—even among those who’ve recently been at fault in a traffic accident. (Sounds like Wall Street.) The second bias, the flipside of the first, is the fundamental attribution error: we explain other people’s behavior by appealing to persistent character traits at the expense of external, situational factors. When someone cuts us off while we’re driving, we immediately conclude that person is a jerk, even though we attribute the same behavior in ourselves to being late for a meeting.

Any line of thinking that leads people away from the comforting belief in their own infinite capacity for self-determination will inevitably fail to take hold in the minds of those who rely on intuition as a standard of truth. That’s why conservative ideology is such an incoherent mess: on the one hand, you’re trying to create a scientific model for how the economy works (or doesn’t), but on the other you’re trying not only to leave intact people’s faith in free will but to bolster it, to elevate it to the status of linchpin of the entire worldview. But free will and determinism don’t mix, and unless you resort to religious concepts of non-material souls there’s no place to locate free will in the natural world. The very notion of free will is self-serving to anyone at all successful in his or her life—and that’s why self-determination in the face of extreme adversity is fetishized by the right. That’s why every conservative has a rags-to-riches story to offer as proof of the true nature of economic forces.

The real wonder of the widespread appeal of conservatism is the enormous capacity it suggests we all have for taking our advantages for granted. Most people bristle when you even use the words advantage or privilege—as if you were undermining their worth or authenticity as people. But the advantages middle class people enjoy are glaring and undeniable. Sure, many of us were raised by single mothers who went through periods of hardship. I’d say most of us, though, had grandparents around who were willing to lend a helping hand here and there. And even if those grandparents didn’t provide loans or handouts, they did provide the cultural capital that makes us recognizable to other middle class people as part of the tribe. What makes conservative rags-to-riches stories implausible prima facie is that the people telling them know the plot elements so well, meaning someone taught them the virtue of self-reliance, and they tell them in standard American English, with mouths full of shiny, straight teeth, in accents that belie the story’s gist. It may not seem, in hindsight, that they were comfortably ensconced in the middle class, but at the very least they were surrounded by middle class people, and benefiting from their attention.

You might be tempted to conclude that the role of contingency is simply left out of conservative ideology, but that’s not quite the case. Contingency in the form of bad luck is incorporated into conservative thinking through the very narratives of triumph over adversity that are offered as proof of the fatherly wisdom of the free market. In this way, the ideology is inextricably bound to the storyteller’s authenticity as a person. I suffered and toiled, the storyteller reasons, and therefore my accomplishments are genuine, my character strong. The corollary to this personal investment in what is no longer merely an economic theory is that any dawning awareness of people in worse circumstances than those endured and overcome by the authentic man or woman will be resisted as a threat to that authenticity. If they were to accept that they had it better or easier than some, their victories would be invalidated. They are thus highly motivated to discount, or simply not to notice, contingencies like generational or cultural advantages.

I’ve yet to hear a rags-to-riches story that begins with a malnourished and overstressed mother giving birth prematurely to a cognitively impaired and immuno-compromised baby, continues with a malnourished and neglected childhood in underperforming schools where neither a teacher nor a classmate can be found who places any real value on education, and ends with the hard-working, intelligent person you see in front of you, who makes a pretty decent income and is raising a proud, healthy family. Severely impoverished people live in a different world, and however bad we middle-class toilers think we’ve had it, we should never be so callous and oblivious as to claim we’ve seen and mastered that world. But Skinner Box Economics doesn’t just fail because some of us are born less able to perform successive approximations of the various tricks of productivity; it fails because it’s based on an inadequate theory of human motivation. Rewards and punishments shape our behavior, to be sure, but the only people who sit around calculating outcomes and navigating incentives and disincentives with a constant eye toward the bottom line are the rich executives who benefit most from a general acceptance of supply-side economics.

The main cultural disadvantage for people growing up in poor families in poor neighborhoods is that the individuals likely to serve as role models there will seldom be the beacons of middle-class virtue we stupidly expect our incentive structure to produce. When I was growing up, I looked up to my older brothers and wanted to do whatever they were doing. And I looked up to an older neighbor kid, whose influence led me to race bikes at local parks. Later my role models were Jean-Claude Van Damme and Arnold Schwarzenegger, so I got into martial arts and physical fitness. Soon thereafter, I began to idolize novelists and scientists. Skinnerian behaviorism has been supplanted in the social sciences by theories emphasizing the importance of observational learning, as well as the undeniable role of basic drives like status-seeking. Primatologist Frans de Waal, for instance, has proposed a theory of cultural transmission—in both apes and humans—called BIOL, for bonding- and identification-based observational learning. What this theory suggests is that our personalities are largely determined by a proclivity for seeking out high-status individuals whom we admire, assimilating their values and beliefs, and emulating their behavior. Absent a paragon of the Calvinist work ethic, no amount of incentives is going to turn a child into the type of person who tells conservative rags-to-riches stories.

What we ought to take away from these stories is that there’s usually a figure or two who performs admirably in them—the single mom, the determined dad, the charismatic teacher. The message isn’t about economics at all but about culture and family. Conservatives tout the sanctity of family and the importance of good parenting, but when they come face-to-face with the products of poor parenting they see only the products of bad decisions. Middle class parents go to agonizing lengths to ensure their children grow up in good neighborhoods and attend good schools, but suggest to them that how well someone behaves is a function of how much that person has—how much love and attention, how much healthy food and access to doctors, how much he or she can count on struggles being worthwhile—and those same middle class parents will warn you about the dangers of making excuses.

The real proof of how well conservative policies work is not to be found in anecdotes, no matter how numerous; it’s in measures of social mobility. The story these measures tell about the effects of moving farther to the right as a country contrasts rather starkly with all the rags-to-Republican tales of personal heroism. But then numbers aren’t really stories; there’s no authenticity or self-congratulation to be gleaned from statistics; and if it’s really true that we owe our prosperity to chance, well, that’s just depressing—and discouraging. We can take some encouragement from our stories of hardship, though. We just have to note how often the evidence they offer of poverty—food stamps, rent-controlled housing—comes in the form of government programs to aid the impoverished. Those programs must be working.

Also read:

WHAT'S WRONG WITH THE DARWIN ECONOMY?
