READING SUBTLY
This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans de Waal
Frans de Waal’s work is always a joy to read: insightful, surprising, and superbly humane. Unfortunately, in his mostly wonderful book, “The Bonobo and the Atheist,” he trots out a familiar series of straw men to level an attack on modern critics of religion—with whom, had he been more diligent in reading their work, he’d find much common ground.
Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt in the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice are referring to is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity; many characters, even outside of fiction, are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, people who also happen to be practiced authors.
The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace.
The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.
De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does, and where it should, come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,
Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)
The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.
What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps took the kind of delight in violence for its own sake that many people imagine them being disposed to. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.
Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,
Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)
This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog, of course—it would have to be de Waal’s leading character, the bonobo.
De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble, to a remarkable degree, those of humans experiencing orgasm. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker that suggested the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,
Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)
But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)
Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the primate research center at Emory University in Atlanta where de Waal directs the Living Links Center, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4 million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with canines that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,
What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)
De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggests to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).
In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,
This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)
What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.
There’s still a long way to travel, however, from helpful rats and protesting capuchins to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates that are even more closely related to us. Chimps, for instance, have been seen taking inequity aversion an important step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal acknowledges, though, that there still remains an important gap between these behaviors and human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.
This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)
Moral intuitions can also inspire behaviors that, to people in modern liberal societies, seem appallingly immoral. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).
*****
In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more so than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.
I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)
For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.
De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,
and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)
At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,
Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)
We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as primatologist helped him to arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.
The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it are going to be offensive to atheists and people of actual faith alike. Here’s one example of the bizarre way he writes about religion:
Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)
He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.
As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science, rather than religion, to determine our morals and values, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he believes he’s found the psychological key to understanding what he calls Hitchens’s “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.
De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,
Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)
The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.
*****
De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since they represent an attempt to answer, at least in part, the very question he poses again and again throughout The Bonobo and the Atheist: what might take the place of religion in providing guidance in our lives. De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This is, however, unbeknownst to de Waal, a mischaracterization of Harris’s thesis. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for all the analogies, however tenuous or tendentious, that can be drawn between it and historical evils. De Waal writes, for instance,
While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)
(Great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is that the very “science of morality” de Waal proudly contributes to should be brought to bear on the big moral issues our society faces.
The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions: the Nazis. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,
It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)
Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.
Harris intended for his book to begin a debate he was prepared to participate in actively. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark and that it is suspiciously unoriginal:
How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.
And we have to ask, further, what alternative source of ethical principles self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer. In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.
De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them is anywhere near as devastating to Harris’s overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:
female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)
The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.
De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting a generic “utilitarian” for Harris, justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar; he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whim of the majority? It is in precisely this effort to discover—or determine—under which circumstances humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles, the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.
On pages seventy-three and seventy-four of The Moral Landscape, Harris in fact points out that honoring family bonds probably leads to greater well-being, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on “the especially low-hanging fruit of conservative Islam” (74). The incoherence of de Waal’s argument (and the carelessness of his research) is on full display here as he first responds to a point about the genital mutilation of young girls by asking, “Isn’t genital mutilation common in the United States, too, where newborn males are circumcised without their consent?” (90). So cutting off the foreskin of a male’s penis is morally equivalent to cutting off a girl’s clitoris? Supposedly, the equivalence implies that there can’t be any reliable way to determine the relative moral status of religious practices. “Could it be that religion and culture interact to the point that there is no universal morality?” Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.
*****
The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly, but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:
Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)
This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?
De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?
Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had de Waal taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the theocratic dictator of Iran put a hit out on his friend, the author Salman Rushdie, because he deemed one of Rushdie’s books blasphemous. Hitchens writes in Hitch-22,
When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)
Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates, it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.
De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work they do for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,
The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)
Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,
The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)
And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).
Also read:
TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY
And:
NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA
And:
THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND
Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia
Napoleon Chagnon was targeted by postmodern activists and anthropologists, who trumped up charges against him and hoped to sacrifice his reputation on the altar of social justice. In retrospect, his case looks like an early warning sign of what would come to be called “cancel culture.” Fortunately, Chagnon was no pushover, and there were a lot of people who saw through the lies being spread about him. “Noble Savages” is in part a great adventure story and in part his response to the tragic degradation of the field of anthropology as it succumbs to the lures of ideology.
When Arthur Miller adapted the script of The Crucible, his play about the Salem Witch Trials originally written in 1953, for the 1996 film version, he enjoyed additional freedom to work with the up-close visual dimensions of the tragedy. In one added scene, the elderly and frail George Jacobs, whom we first saw lifting one of his two walking sticks to wave an unsteady greeting to a neighbor, sits before a row of assembled judges as the young Ruth Putnam stands accusing him of assaulting her. The girl, ostensibly shaken from the encounter and frightened lest some further terror ensue, dramatically recounts her ordeal, saying,
He come through my window and then he lay down upon me. I could not take breath. His body crush heavy upon me, and he say in my ear, “Ruth Putnam, I will have your life if you testify against me in court.”
This quote she delivers in a creaky imitation of the old man’s voice. When one of the judges asks Jacobs what he has to say about the charges, he responds with the glaringly obvious objection: “But, your Honor, I must have these sticks to walk with—how may I come through a window?” The problem with this defense, Jacobs comes to discover, is that the judges believe a person can be in one place physically and in another in spirit. This poor tottering old man has no defense against so-called “spectral evidence.” Indeed, as judges in Massachusetts realized the year after Jacobs was hanged, no one really has any defense against spectral evidence. That’s part of the reason why it was deemed inadmissible in their courts, and immediately thereafter convictions for the crime of witchcraft ceased entirely.
Many anthropologists point to the low cost of making accusations as a factor in the evolution of moral behavior. People in small societies like the ones our ancestors lived in for millennia, composed of thirty or forty profoundly interdependent individuals, would have had to balance any payoff that might come from immoral deeds against the detrimental effects to their reputations of having those deeds discovered and word of them spread. As the generations turned over and over again, human nature adapted in response to the social enforcement of cooperative norms, and individuals came to experience what we now recognize as our moral emotions—guilt which is often preëmptive and prohibitive, shame, indignation, outrage, along with the more positive feelings associated with empathy, compassion, and loyalty.
The legacy of this process of reputational selection persists in our prurient fascination with the misdeeds of others and our frenzied, often sadistic, delectation in the spreading of salacious rumors. What Miller so brilliantly dramatizes in his play is the irony that our compulsion to point fingers, which once created and enforced cohesion in groups of selfless individuals, can in some environments serve as a vehicle for our most viciously selfish and inhuman impulses. This is why it is crucial that any accusation, if we as a society are to take it at all seriously, must provide the accused with some reliable means of acquittal. Charges that can neither be proven nor disproven must be seen as meaningless—and should even be counted as strikes against the reputation of the one who levels them.
While this principle runs into serious complications with crimes that are as inherently difficult to prove as they are horrific, a simple rule proscribing any glib application of morally charged labels is a crucial yet all-too-commonly overlooked safeguard against unjust calumny. In this age of viral dissemination, the rapidity with which rumors spread, coupled with the absence of any reliable assurances of the validity of messages bearing on the reputations of our fellow citizens, demands that we deliberately work to establish as cultural norms the holding to account of those who make accusations based on insufficient, misleading, or spectral evidence—and the holding to account as well, to only a somewhat lesser degree, of those who help propagate rumors without doing due diligence in assessing their credibility.
The commentary attending the publication of anthropologist Napoleon Chagnon’s memoir of his research with the Yanomamö tribespeople in Venezuela calls to mind the insidious “Teach the Controversy” PR campaign spearheaded by intelligent design creationists. Coming out against the argument that students should be made aware of competing views on the value of intelligent design inevitably gives the impression of close-mindedness or dogmatism. But only a handful of actual scientists have any truck with intelligent design, a dressed-up rehashing of the old God-of-the-Gaps argument based on the logical fallacy of appealing to ignorance—and that ignorance, it so happens, is grossly exaggerated.
Teaching the controversy would therefore falsely imply epistemological equivalence between scientific views on evolution and those that are not-so-subtly religious. Likewise, in the wake of allegations against Chagnon about mistreatment of the people whose culture he made a career of studying, many science journalists and many of his fellow anthropologists still seem reluctant to stand up for him because they fear doing so would make them appear insensitive to the rights and concerns of indigenous peoples. Instead, they take refuge in what they hope will appear a balanced position, even though the evidence on which the accusations rested has proven to be entirely spectral.
Chagnon’s Noble Savages: My Life among Two Dangerous Tribes—the Yanomamö and the Anthropologists is destined to be one of those books that garners commentary by legions of outspoken scholars and impassioned activists who never find the time to actually read it. Science writer John Horgan, for instance, has published two blog posts on Chagnon in recent weeks, and neither of them features a single quote from the book. In the first, he boasts of his resistance to bullying, via email, by five prominent sociobiologists who had caught wind of his assignment to review Patrick Tierney’s book Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon and insisted that he condemn the work and discourage anyone from reading it. Against this pressure, Horgan wrote a positive review in which he repeats several horrific accusations that Tierney makes in the book before going on to acknowledge that the author should have worked harder to provide evidence of the wrongdoings he reports on.
But Tierney went on to become an advocate for Indian rights. And his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies.
Horgan probably couldn’t have known at the time (though those five scientists tried to warn him) that giving Tierney credit for prompting debates about Indian rights and ethnographic research methods was a bit like praising Abigail Williams, the original source of accusations of witchcraft in Salem, for sparking discussions about child abuse. But that he stands by his endorsement today, saying,
“I have one major regret concerning my review: I should have noted that Chagnon is a much more subtle theorist of human nature than Tierney and other critics have suggested,” as balanced as that sounds, casts serious doubt on his scholarship, not to mention his judgment.
What did Tierney falsely accuse Chagnon of? There are over a hundred specific accusations in the book (Chagnon says his friend William Irons flagged 106 [446]), but the most heinous whopper comes in the fifth chapter, titled “Outbreak.” In 1968, Chagnon was helping the geneticist James V. Neel collect blood samples from the Yanomamö—in exchange for machetes—so their DNA could be compared with that of people in industrialized societies. While they were in the middle of this project, a measles epidemic broke out, and Neel had discovered through earlier research that the Indians lacked immunity to this disease, so the team immediately began trying to reach all of the Yanomamö villages to vaccinate everyone before the contagion reached them. Most people who knew about the episode considered what the scientists did heroic (and several investigations now support this view). But Tierney, by creating the appearance of pulling together multiple threads of evidence, weaves together a much different story in which Neel and Chagnon are cast as villains instead of heroes. (The version of the book I’ll quote here is somewhat incoherent because it went through some revisions in attempts to deal with holes in the evidence that were already emerging pre-publication.)
First, Tierney misinterprets some passages from Neel’s books as implying an espousal of eugenic beliefs about the Indians, namely that by remaining closer to nature and thus subject to ongoing natural selection they retain all-around superior health, including better immunity. Next, Tierney suggests that the vaccine Neel chose, Edmonston B, which is usually administered with a drug called gamma globulin to minimize reactions like fevers, is so similar to the measles virus that in the immune-suppressed Indians it actually ended up causing a suite of symptoms that was indistinguishable from full-blown measles. The implication is clear. Tierney writes,
Chagnon and Neel described an effort to “get ahead” of the measles epidemic by vaccinating a ring around it. As I have reconstructed it, the 1968 outbreak had a single trunk, starting at the Ocamo mission and moving up the Orinoco with the vaccinators. Hundreds of Yanomami died in 1968 on the Ocamo River alone. At the time, over three thousand Yanomami lived on the Ocamo headwaters; today there are fewer than two hundred. (69)
At points throughout the chapter, Tierney seems to be backing off the worst of his accusations; he writes, “Neel had no reason to think Edmonston B could become transmissible. The outbreak took him by surprise.” But even in this scenario Tierney suggests serious wrongdoing: “Still, he wanted to collect data even in the midst of a disaster” (82).
Earlier in the chapter, though, Tierney makes a much more serious charge. Pointing to a time when Chagnon showed up at a Catholic mission after having depleted his stores of gamma globulin and nearly run out of Edmonston B, Tierney suggests the shortage of drugs was part of a deliberate plan. “There were only two possibilities,” he writes,
Either Chagnon entered the field with only forty doses of virus; or he had more than forty doses. If he had more than forty, he deliberately withheld them while measles spread for fifteen days. If he came to the field with only forty doses, it was to collect data on a small sample of Indians who were meant to receive the vaccine without gamma globulin. Ocamo was a good choice because the nuns could look after the sick while Chagnon went on with his demanding work. Dividing villages into two groups, one serving as a control, was common in experiments and also a normal safety precaution in the absence of an outbreak. (60)
Thus Tierney implies that Chagnon was helping Neel test his eugenics theory and in the process became complicit in causing an epidemic, maybe deliberately, that killed hundreds of people. Tierney claims he isn’t sure how much Chagnon knew about the experiment; he concedes at one point that “Chagnon showed genuine concern for the Yanomami,” before adding, “At the same time, he moved quickly toward a cover-up” (75).
Near the end of his “Outbreak” chapter, Tierney reports on a conversation with Mark Papania, a measles expert at the Centers for Disease Control in Atlanta. After running his hypothesis that Neel and Chagnon caused the epidemic with the Edmonston B vaccine by Papania, Tierney claims he responded, “Sure, it’s possible.” He goes on to say that while Papania informed him there were no documented cases of the vaccine becoming contagious, he also admitted that no studies of adequate sensitivity had been done. “I guess we didn’t look very hard,” Tierney has him saying (80). But evolutionary psychologist John Tooby got a much different answer when he called Papania himself. In an article published on Slate—nearly three weeks before Horgan published his review, incidentally—Tooby writes that the epidemiologist had a very different attitude toward the adequacy of past safety tests from the one Tierney reported:
it turns out that researchers who test vaccines for safety have never been able to document, in hundreds of millions of uses, a single case of a live-virus measles vaccine leading to contagious transmission from one human to another—this despite their strenuous efforts to detect such a thing. If attenuated live virus does not jump from person to person, it cannot cause an epidemic. Nor can it be planned to cause an epidemic, as alleged in this case, if it never has caused one before.
Tierney also cites Samuel Katz, the pediatrician who developed Edmonston B, at a few points in the chapter to support his case. But Katz responded to requests from the press to comment on Tierney’s scenario by saying,
the use of Edmonston B vaccine in an attempt to halt an epidemic was a justifiable, proven and valid approach. In no way could it initiate or exacerbate an epidemic. Continued circulation of these charges is not only unwarranted, but truly egregious.
Tooby included a link to Katz’s response, along with a report by science historian Susan Lindee on her investigation of Neel’s documents, which disproved many of Tierney’s points. It seems Horgan should’ve paid a bit more attention to those emails he was receiving.
Further investigations have shown that pretty much every aspect of Tierney’s characterization of Neel’s beliefs and research agenda was completely wrong. The report from a task force investigation by the American Society of Human Genetics gives a sense of how Tierney, while giving the impression of having conducted meticulous research, was in fact perpetrating fraud. The report states,
Tierney further suggests that Neel, having recognized that the vaccine was the cause of the epidemic, engineered a cover-up. This is based on Tierney’s analysis of audiotapes made at the time. We have reexamined these tapes and provide evidence to show that Tierney created a false impression by juxtaposing three distinct conversations recorded on two separate tapes and in different locations. Finally, Tierney alleges, on the basis of specific taped discussions, that Neel callously and unethically placed the scientific goals of the expedition above the humanitarian need to attend to the sick. This again is shown to be a complete misrepresentation, by examination of the relevant audiotapes as well as evidence from a variety of sources, including members of the 1968 expedition.
This report was published a couple of years after Tierney’s book hit the shelves. But enough evidence was already available, to anyone willing to do the due diligence of checking the credibility of the author and his claims, to warrant suspicion; that the book nonetheless made it onto the shortlist for the National Book Award is indicative of a larger problem.
*******
With the benefit of hindsight and a perspective from outside the debate (though I’ve been following the sociobiology controversy for a decade and a half, I wasn’t aware of Chagnon’s longstanding and personal battles with other anthropologists until after Tierney’s book was published), it seems to me that once Tierney had been caught misrepresenting the evidence in support of such an atrocious accusation, his book should have been removed from the shelves and all his reporting dismissed entirely. Tierney himself should have been made to answer for his offense. But for some reason none of this happened.
The anthropologist Marshall Sahlins, for instance, to whom Chagnon has been a bête noire for decades, brushed off any concern for Tierney’s credibility in his review of Darkness in El Dorado, published a full month after Horgan’s, apparently because he couldn’t resist the opportunity to write about how much he hates his celebrated colleague. Sahlins’s review is titled “Guilty not as Charged,” which is already enough to cast doubt on his capacity for fairness or rationality. Here’s how he sums up the issue of Tierney’s discredited accusation in relation to the rest of the book:
The Kurtzian narrative of how Chagnon achieved the political status of a monster in Amazonia and a hero in academia is truly the heart of Darkness in El Dorado. While some of Tierney’s reporting has come under fire, this is nonetheless a revealing book, with a cautionary message that extends well beyond the field of anthropology. It reads like an allegory of American power and culture since Vietnam.
Sahlins apparently hasn’t read Conrad’s novel Heart of Darkness or he’d know Chagnon is no Kurtz. And Vietnam? The next paragraph goes into more detail about this “allegory,” as if Sahlins’s conscripting of him into service as a symbol of evil somehow establishes his culpability. To get an idea of how much Chagnon actually had to do with Vietnam, we can look at a passage early in Noble Savages about how disconnected from the outside world he was while doing his field work:
I was vaguely aware when I went into the Yanomamö area in late 1964 that the United States had sent several hundred military advisors to South Vietnam to help train the South Vietnamese army. When I returned to Ann Arbor in 1966 the United States had some two hundred thousand combat troops there. (36)
But Sahlins’s review, as bizarre as it is, is important because it’s representative of the types of arguments Chagnon’s fiercest anthropological critics make against his methods and his theories, but mainly against him personally. In another recent comment on how “The Napoleon Chagnon Wars Flare Up Again,” Barbara J. King betrays a disconcerting and unscholarly complacence about quoting other, rival anthropologists’ words as evidence of Chagnon’s own thinking. Alas, King too is weighing in on the flare-up without having read the book, or, it seems, anything else by the author. And she’s also at pains to appear fair and balanced, even though the sources she cites against Chagnon are neither, nor are they the least bit scientific. Of Sahlins’s review of Darkness in El Dorado, she writes,
The Sahlins essay from 2000 shows how key parts of Chagnon’s argument have been “dismembered” scientifically. In a major paper published in 1988, Sahlins says, Chagnon left out too many relevant factors that bear on Ya̧nomamö males’ reproductive success to allow any convincing case for a genetic underpinning of violence.
It’s a bit sad that King feels it’s okay to post on a site as popular as NPR and quote a criticism of a study she clearly hasn’t read—she could have downloaded the PDF of Chagnon’s landmark paper, “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” for free. Did Chagnon claim in the study that it proved violence had a genetic underpinning? It’s difficult to tell what the phrase “genetic underpinning” even means in this context.
To lend further support to Sahlins’s case, King selectively quotes another anthropologist, Jonathan Marks. The lines come from a rant on his blog (I urge you to check it out for yourself if you’re at all suspicious about the aptness of the term rant to describe the post) about a supposed takeover of anthropology by genetic determinism. But King leaves off the really interesting sentence at the end of the remark. Here’s the whole passage explaining why Marks thinks Chagnon is an incompetent scientist:
Let me be clear about my use of the word “incompetent”. His methods for collecting, analyzing and interpreting his data are outside the range of acceptable anthropological practices. Yes, he saw the Yanomamo doing nasty things. But when he concluded from his observations that the Yanomamo are innately and primordially “fierce” he lost his anthropological credibility, because he had not demonstrated any such thing. He has a right to his views, as creationists and racists have a right to theirs, but the evidence does not support the conclusion, which makes it scientifically incompetent.
What Marks is saying here is not that he has evidence of Chagnon doing poor field work; rather, Marks dismisses Chagnon merely because of his sociobiological leanings. Note too that the italicized words in the passage are not quotes. This is important because along with the false equation of sociobiology with genetic determinism this type of straw man underlies nearly all of the attacks on Chagnon. Finally, notice how Marks slips into the realm of morality as he tries to traduce Chagnon’s scientific credibility. In case you think the link with creationism and racism is a simple analogy—like the one I used myself at the beginning of this essay—look at how Marks ends his rant:
So on one side you’ve got the creationists, racists, genetic determinists, the Republican governor of Florida, Jared Diamond, and Napoleon Chagnon–and on the other side, you’ve got normative anthropology, and the mother of the President. Which side are you on?
How can we take this at all seriously? And why did King misleadingly quote, on a prominent news site, such a seemingly level-headed criticism which in context reveals itself as anything but level-headed? I’ll risk another analogy here and point out that Marks’s comments about genetic determinism taking over anthropology are similar in both tone and intellectual sophistication to Glenn Beck’s comments about how socialism is taking over American politics.
King also links to a review of Noble Savages that was published in the New York Times in February, and this piece is even harsher to Chagnon. After repeating Tierney’s charge about Neel deliberately causing the 1968 measles epidemic and pointing out it was disproved, anthropologist Elizabeth Povinelli writes of the American Anthropological Association investigation that,
The committee was split over whether Neel’s fervor for observing the “differential fitness of headmen and other members of the Yanomami population” through vaccine reactions constituted the use of the Yanomamö as a Tuskegee-like experimental population.
Since this allegation has been completely discredited by the American Society of Human Genetics, among others, Povinelli’s repetition of it is irresponsible, as was the Times’s failure to properly vet the facts in the article.
Try as I might to remain detached from either side as I continue to research this controversy (and I’ve never met any of these people), I have to say I found Povinelli’s review deeply offensive. The straw men she shamelessly erects and the quotes she shamelessly takes out of context, all in the service of an absurdly self-righteous and substanceless smear, allow no room whatsoever for anything answering to the name of compassion for a man who was falsely accused of complicity in an atrocity. And in her zeal to impugn Chagnon she propagates a colorful and repugnant insult of her own creation, which she misattributes to him. She writes,
Perhaps it’s politically correct to wonder whether the book would have benefited from opening with a serious reflection on the extensive suffering and substantial death toll among the Yanomamö in the wake of the measles outbreak, whether or not Chagnon bore any responsibility for it. Does their pain and grief matter less even if we believe, as he seems to, that they were brutal Neolithic remnants in a land that time forgot? For him, the “burly, naked, sweaty, hideous” Yanomamö stink and produce enormous amounts of “dark green snot.” They keep “vicious, underfed growling dogs,” engage in brutal “club fights” and—God forbid!—defecate in the bush. By the time the reader makes it to the sections on the Yanomamö’s political organization, migration patterns and sexual practices, the slant of the argument is evident: given their hideous society, understanding the real disaster that struck these people matters less than rehabilitating Chagnon’s soiled image.
In other words, Povinelli’s response to Chagnon’s “harrowing” ordeal is effectively to say, Maybe you’re not guilty of genocide, but you’re still guilty for not quitting your anthropology job and becoming a forensic epidemiologist. Anyone who actually reads Noble Savages will see quite clearly that the “slant” Povinelli describes, along with those caricatured “brutal Neolithic remnants,” must have flown in through her window right next to George Jacobs.
Povinelli does characterize one aspect of Noble Savages correctly when she complains about its “Manichean rhetorical structure,” with the bad Rousseauian, Marxist, postmodernist cultural anthropologists—along with the corrupt and PR-obsessed Catholic missionaries—on one side, and the good Hobbesian, Darwinian, scientific anthropologists on the other, though it’s really just the scientific part he’s concerned with. I actually expected to find a more complicated, less black-and-white debate taking place when I began looking into the attacks on Chagnon’s work—and on Chagnon himself. But what I ended up finding was that Chagnon’s description of the division, at least with regard to the anthropologists (I haven’t researched his claims about the missionaries) is spot-on, and Povinelli’s repulsive review is a case in point.
This isn’t to say that there aren’t legitimate scientific disagreements about sociobiology. In fact, Chagnon writes about how one of his heroes is “calling into question some of the most widely accepted views” as early as his dedication page, referring to E.O. Wilson’s latest book The Social Conquest of Earth. But what Sahlins, Marks, and Povinelli offer is neither legitimate nor scientific. These commenters really are, as Chagnon suggests, representative of a subset of cultural anthropologists completely given over to a moralizing hysteria. Their scholarship is as dishonest as it is defamatory, their reasoning rests on guilt by free-association and the tossing up and knocking down of the most egregious of straw men, and their tone creates the illusion of moral certainty coupled with a longsuffering exasperation with entrenched institutionalized evils. For these hysterical moralizers, it seems any theory of human behavior that involves evolution or biology represents the same kind of threat as witchcraft did to the people of Salem in the 1690s, or as communism did to McCarthyites in the 1950s. To combat this chimerical evil, the presumed righteous ends justify the deceitful means.
The unavoidable conclusion with regard to the question of why Darkness in El Dorado wasn’t dismissed outright when it should have been is that even though it has been established that Chagnon didn’t commit any of the crimes Tierney accused him of, as far as his critics are concerned, he may as well have. Somehow cultural anthropologists have come to occupy a bizarre culture of their own in which charging a colleague with genocide doesn’t seem like a big deal. Before Tierney’s book hit the shelves, two anthropologists, Terence Turner and Leslie Sponsel, co-wrote an email to the American Anthropological Association which was later sent to several journalists. Turner and Sponsel later claimed the message was simply a warning about the “impending scandal” that would result from the publication of Darkness in El Dorado. But the hyperbole and suggestive language make it read more like a publicity notice than a warning. “This nightmarish story—a real anthropological heart of darkness beyond the imagining of even a Josef Conrad (though not, perhaps, a Josef Mengele)”—is it too much to ask of those who are so fond of referencing Joseph Conrad that they actually read his book?—“will be seen (rightly in our view) by the public, as well as most anthropologists, as putting the whole discipline on trial.” As it turned out, though, the only one who was put on trial, by the American Anthropological Association—though officially it was only an “inquiry”—was Napoleon Chagnon.
Chagnon’s old academic rivals, many of whom claim their problem with him stems from the alleged devastating impact of his research on Indians, fail to appreciate the gravity of Tierney’s accusations. Their blasé response to the author being exposed as a fraud gives the impression that their eagerness to participate in the pile-on has little to do with any concern for the Yanomamö people. Instead, they embraced Darkness in El Dorado because it provided good talking points in the campaign against their dreaded nemesis Napoleon Chagnon. Sahlins, for instance, is strikingly cavalier about the personal effects of Tierney’s accusations in the review cited by King and Horgan:
The brouhaha in cyberspace seemed to help Chagnon’s reputation as much as Neel’s, for in the fallout from the latter’s defense many academics also took the opportunity to make tendentious arguments on Chagnon’s behalf. Against Tierney’s brief that Chagnon acted as an anthro-provocateur of certain conflicts among the Yanomami, one anthropologist solemnly demonstrated that warfare was endemic and prehistoric in the Amazon. Such feckless debate is the more remarkable because most of the criticisms of Chagnon rehearsed by Tierney have been circulating among anthropologists for years, and the best evidence for them can be found in Chagnon’s writings going back to the 1960s.
Sahlins goes on to offer his own sinister interpretation of Chagnon’s writings, using the same straw man and guilt-by-free-association techniques common to anthropologists in the grip of moralizing hysteria. But I can’t help wondering why anyone would take a word he says seriously after he suggests that being accused of causing a deadly epidemic helped Neel’s and Chagnon’s reputations.
*******
Marshall Sahlins recently made news by resigning from the National Academy of Sciences in protest against the organization’s election of Chagnon to its membership and its partnerships with the military. In explaining his resignation, Sahlins insists that Chagnon, based on the evidence of his own writings, did serious harm to the people whose culture he studied. Sahlins also complains that Chagnon’s sociobiological ideas about violence are so wrongheaded that they serve to “discredit the anthropological discipline.” To back up his objections, he refers interested parties to that same review of Darkness in El Dorado King links to on her post.
Though Sahlins explains his moral and intellectual objections separately, he seems to believe that theories of human behavior based on biology are inherently immoral, as if theorizing that violence has “genetic underpinnings” is no different from claiming that violence is inevitable and justifiable. This is why Sahlins can’t discuss Chagnon without reference to Vietnam. He writes in his review,
The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.
Sahlins doesn’t provide any citations to back up this charge—he’s quite clearly not the least bit concerned with fairness or solid scholarship—and based on what Chagnon writes in Noble Savages this fantasy of “gaining control” originates in the mind of Sahlins, not in the writings of Chagnon.
For instance, Chagnon writes of being made the butt of an elaborate joke several Yanomamö conspired to play on him by giving him fake names for people in their village (like Hairy Cunt, Long Dong, and Asshole). When he mentions these names to people in a neighboring village, they think it’s hilarious. “My face flushed with embarrassment and anger as the word spread around the village and everybody was laughing hysterically.” And this was no minor setback: “I made this discovery some six months into my fieldwork!” (66) Contrary to the despicable caricature Povinelli provides as well, Chagnon writes admiringly of the Yanomamö’s “wicked humor,” and how “They enjoyed duping others, especially the unsuspecting and gullible anthropologist who lived among them” (67). Another gem comes from an episode in which he tries to treat a rather embarrassing fungal infection: “You can’t imagine the hilarious reaction of the Yanomamö watching the resident fieldworker in a most indescribable position trying to sprinkle foot powder onto his crotch, using gravity as a propellant” (143).
The bitterness, outrage, and outright hatred directed at Chagnon, alongside the near-total absence of evidence that he’s done anything wrong, seem completely insane until you consider that this preeminent anthropologist falls afoul of all the –isms that haunt the fantastical armchair obsessions of postmodern pseudo-scholars. Chagnon stands as a living symbol of the white colonizer exploiting indigenous people and resources (colonialism); he propagates theories that can be read as supportive of fantasies about individual and racial superiority (Social Darwinism, racism); he reports on tribal warfare and cruelty toward women, with the implication that these evils are encoded in our genes (neoconservatism, sexism, biological determinism). It should be clear that all of this is nonsense: any exploitation is merely alleged and likely outweighed by efforts at vaccination against diseases introduced by missionaries and gold miners; sociobiology doesn’t focus on racial differences, and superiority is a scientifically meaningless term; and the fact that genes play a role in some behavior implies neither that the behavior is moral nor that it is inevitable. The truly evil –ism at play in the campaign against Chagnon is postmodernism—an ideology which functions as little more than a factory for the production of false accusations.
There are two main straw men that are bound to be rolled out by postmodern critics of evolutionary theories of behavior in any discussion of morally charged topics. The first is the gene-for misconception.
Every anthropologist, sociobiologist, and evolutionary psychologist knows that there is no gene for violence and warfare in the sense that would mean everyone born with a particular allele will inevitably grow up to be physically aggressive. Yet, in any discussion of the causes of violence, or any other issue in which biology is implicated, critics fall all over themselves trying to catch their opponents out for making this mistake, and they pretend by doing so they’re defeating an attempt to undermine efforts to make the world more peaceful. It so happens that scientists actually have discovered a gene variation, known popularly as “the warrior gene,” that increases the likelihood that an individual carrying it will engage in aggressive behavior—but only if that individual experiences a traumatic childhood. Having a gene variation associated with a trait only ever means someone is more likely to express that trait, and there will almost always be other genes and several environmental factors contributing to the overall likelihood.
You can be reasonably sure that if a critic is taking a sociobiologist or an evolutionary psychologist to task for suggesting a direct one-to-one correspondence between a gene and a behavior, that critic is being either careless or purposely misleading. In trying to bring about a more peaceful world, it’s far more effective to study the actual factors that contribute to violence than it is to write moralizing criticisms of scientific colleagues. The charge that evolutionary approaches can only be used to support conservative or reactionary views of society isn’t just a misrepresentation of sociobiological theories; it’s also empirically false—surveys demonstrate that grad students in evolutionary anthropology are overwhelmingly liberal in their politics, just as liberal, in fact, as anthropology students in non-evolutionary concentrations.
Another thing anyone who has taken a freshman anthropology course knows, but that anti-evolutionary critics fall all over themselves taking sociobiologists to task for not understanding, is that people who live in foraging or tribal cultures cannot be treated as perfect replicas of our Pleistocene ancestors, or as Povinelli calls them “prehistoric time capsules.” Hunters and gatherers are not “living fossils,” because they’ve been evolving just as long as people in industrialized societies, their histories and environments are unique, and it’s almost impossible for them to avoid being impacted by outside civilizations. If you flew two groups of foragers from different regions each into the territory of the other, you would learn quite quickly that each group’s culture is intricately adapted to the environment it originally inhabited. This does not mean, however, that evidence about how foraging and tribal peoples live is irrelevant to questions about human evolution.
As different as those two groups are, they are both probably living lives much more similar to those of our ancestors than anyone in industrialized societies is. What evolutionary anthropologists and psychologists tend to be most interested in are the trends that emerge when several of these cultures are compared to one another. The Yanomamö actually subsist largely on slash-and-burn agriculture, and they live in groups much larger than those of most foraging peoples. Their culture and demographic patterns may therefore provide clues to how larger and more stratified societies developed after millennia of evolution in small, mobile bands. But, again, no one is suggesting the Yanomamö are somehow interchangeable with the people who first made this transition to more complex social organization historically.
The prehistoric time-capsule straw man often goes hand-in-hand with an implication that the anthropologists supposedly making the blunder see the people whose culture they study as somehow inferior, somehow less human than people who live in industrialized civilizations. It seems like a short step from this subtle dehumanization to the kind of wholesale exploitation indigenous peoples are often made to suffer. But the sad truth is that there are more than enough economic, religious, and geopolitical forces working against the preservation of indigenous cultures and the protection of indigenous people’s rights to make scapegoating scientists who gather cultural and demographic information completely unnecessary. And you can bet Napoleon Chagnon is, if anything, more outraged by the mistreatment of the Yanomamö than most of the activists who falsely accuse him of complicity, because he knows so many of them personally. Chagnon is particularly critical of Brazilian gold miners and Salesian missionaries, both of whom, it seems, have far more incentive to disrespect the Yanomamö culture (by supplanting their religion and moving them closer to civilization) and to ravage the territory they inhabit. The Salesians’ reprisals for his criticisms, which entailed pulling strings to keep him out of the territory and efforts to create a public image of him as a menace, eventually provided fodder for his critics back home as well.
*******
In an article titled “Guilt by Association,” published in the journal American Anthropologist in 2004, about the American Anthropological Association’s compromised investigation of Tierney’s accusations against Chagnon, Thomas Gregor and Daniel Gross describe “chains of logic by which anthropological research becomes, at the end of an associative thread, an act of misconduct” (689). Quoting Defenders of the Truth, sociologist Ullica Segerstrale’s indispensable 2000 book on the sociobiology debate, Gregor and Gross explain that Chagnon’s postmodern accusers relied on a rhetorical strategy common among critics of evolutionary theories of human behavior—a strategy that produces something startlingly indistinguishable from spectral evidence. Segerstrale writes,
In their analysis of their target’s texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum moral guilt might be attributed to the perpetrator of this claim. (206)
She goes on to cite a “glaring” example of how a scholar drew an imaginary line from sociobiology to Nazism, and then connected it to fascist behavioral control, even though none of these links were supported by any evidence (207). Gregor and Gross describe how this postmodern version of spectral evidence was used to condemn Chagnon.
In the case at hand, for example, the Report takes Chagnon to task for an article in Science on revenge warfare, in which he reports that “Approximately 30% of Yanomami adult male deaths are due to violence”(Chagnon 1988:985). Chagnon also states that Yanomami men who had taken part in violent acts fathered more children than those who had not. Such facts could, if construed in their worst possible light, be read as suggesting that the Yanomami are violent by nature and, therefore, undeserving of protection. This reading could give aid and comfort to the opponents of creating a Yanomami reservation. The Report, therefore, criticizes Chagnon for having jeopardized Yanomami land rights by publishing the Science article, although his research played no demonstrable role in the demarcation of Yanomami reservations in Venezuela and Brazil. (689)
The task force had found that Chagnon was guilty—even though it was nominally just an “inquiry” and had no official grounds for pronouncing on any misconduct—of harming the Indians by portraying them negatively. Gregor and Gross, however, sponsored a ballot at the AAA to rescind the organization’s acceptance of the report; in 2005, it was voted on by the membership and passed by a margin of 846 to 338. “Those five years,” Chagnon writes of the time between that email warning about Tierney’s book and the vote finally exonerating him, “seem like a blurry bad dream” (450).
Anthropological fieldwork has changed dramatically since Chagnon’s early research in Venezuela. There was legitimate concern about the impact of trading manufactured goods like machetes for information, and you can read about some of the fracases it fomented among the Yanomamö in Noble Savages. The practice is now prohibited by the ethical guidelines of ethnographic field research. The dangers to isolated or remote populations from communicable diseases must also be considered while planning any expeditions to study indigenous cultures. But Chagnon was entering the Ocamo region after many missionaries and just before many gold miners. And we can’t hold him accountable for disregarding rules that didn’t exist at the time. Sahlins, however, echoing Tierney’s perversion of Neel and Chagnon’s race to immunize the Indians so that the two men appeared to be the source of contagion, accuses Chagnon of causing much of the violence he witnessed and reported by spreading around his goods.
Hostilities thus tracked the always-changing geopolitics of Chagnon-wealth, including even pre-emptive attacks to deny others access to him. As one Yanomami man recently related to Tierney: “Shaki [Chagnon] promised us many things, and that’s why other communities were jealous and began to fight against us.”
Aside from the fact that some Yanomamö men had just returned from a raid the very first time he entered one of their villages, and the fact that the source of this quote has been discredited, Sahlins is also basing his elaborate accusation on some pretty paltry evidence.
Sahlins also insists that the “monster in Amazonia” couldn’t possibly have figured out a way to learn the names and relationships of the people he studied without aggravating intervillage tensions (thus implicitly conceding those tensions already existed). The Yanomamö have a taboo against saying the names of other adults, similar to our own custom of addressing people we’ve just met by their titles and last names, but with much graver consequences for violations. This is why Chagnon had to confirm the names of people in one tribe by asking about them in another, the practice that led to his discovery of the prank that was played on him. Sahlins uses Tierney’s reporting as the only grounds for his speculations on how disruptive this was to the Yanomamö. And, in the same way he suggested there was some moral equivalence between Chagnon going into the jungle to study the culture of a group of Indians and the US military going into the jungles to engage in a war against the Vietcong, he fails to distinguish between the Nazi practice of marking Jews and Chagnon’s practice of writing numbers on people’s arms to keep track of their problematic names. Quoting Chagnon, Sahlins writes,
“I began the delicate task of identifying everyone by name and numbering them with indelible ink to make sure that everyone had only one name and identity.” Chagnon inscribed these indelible identification numbers on people’s arms—barely 20 years after World War II.
This juvenile innuendo calls to mind Jon Stewart’s observation that it’s not until someone in Washington makes the first Hitler reference that we know a real political showdown has begun (and Stewart has had to make the point a few times again since then).
One of the things that makes this type of trashy pseudo-scholarship so insidious is that it often creates an indelible impression of its own. Anyone who reads Sahlins’s essay could be forgiven for thinking that writing numbers on people might really be a sign that Chagnon was dehumanizing them. Fortunately, Chagnon’s own accounts go a long way toward dispelling this suspicion. In one passage, he describes how he made the naming and numbering into a game for this group of people who knew nothing about writing:
I had also noted after each name the item that person wanted me to bring on my next visit, and they were surprised at the total recall I had when they decided to check me. I simply looked at the number I had written on their arm, looked the number up in my field book, and then told the person precisely what he had requested me to bring for him on my next trip. They enjoyed this, and then they pressed me to mention the names of particular people in the village they would point to. I would look at the number on the arm, look it up in my field book, and whisper his name into someone’s ear. The others would anxiously and eagerly ask if I got it right, and the informant would give an affirmative quick raise of the eyebrows, causing everyone to laugh hysterically. (157)
Needless to say, this is a far cry from using the labels to efficiently herd people into cargo trains to transport them to concentration camps and gas chambers. Sahlins disgraces himself by suggesting otherwise and by not distancing himself from Tierney when it became clear that his atrocious accusations were meritless.
Which brings us back to John Horgan. One week after the post in which he bragged about standing up to five email bullies who were urging him not to endorse Tierney’s book and took the opportunity to say he still stands by the mostly positive review, he published another post on Chagnon, this time about the irony of how close Chagnon’s views on war are to those of Margaret Mead, a towering figure in anthropology whose blank-slate theories sociobiologists often challenge. (Both of Horgan’s posts marking the occasion of Chagnon’s new book—neither of which quote from it—were probably written for publicity; his own book on war was published last year.) As I read the post, I came across the following bewildering passage:
Chagnon advocates have cited a 2011 paper by bioethicist Alice Dreger as further “vindication” of Chagnon. But to my mind Dreger’s paper—which wastes lots of verbiage bragging about all the research that she’s done and about how close she has gotten to Chagnon–generates far more heat than light. She provides some interesting insights into Tierney’s possible motives in writing Darkness in El Dorado, but she leaves untouched most of the major issues raised by Chagnon’s career.
Horgan’s earlier post was one of the first things I’d read in years about Chagnon and Tierney’s accusations against him. I read Alice Dreger’s report on her investigation of those accusations, and the “inquiry” by the American Anthropological Association that ensued from them, shortly afterward. I kept thinking back to Horgan’s continuing endorsement of Tierney’s book as I read the report because she cites several other reports that establish, at the very least, that there was no evidence to support the worst of the accusations. My conclusion was that Horgan simply hadn’t done his homework. How could he endorse a work featuring such horrific accusations if he knew most of them, the most horrific in particular, had been disproved? But with this second post he was revealing that he knew the accusations were false—and yet he still hasn’t recanted his endorsement.
If you only read two supplements to Noble Savages, I recommend Dreger’s report and Emily Eakin’s profile of Chagnon in the New York Times. The one qualm I have about Eakin’s piece is that she too sacrifices the principle of presuming innocence in her effort to achieve journalistic balance, quoting Leslie Sponsel, one of the authors of the appalling email that sparked the AAA’s investigation of Chagnon, as saying, “The charges have not all been disproven by any means.” It should go without saying that the burden of proof is on the accuser. It should also go without saying that once the most atrocious of Tierney’s accusations were disproven, the discussion of culpability should have shifted its focus away from Chagnon onto Tierney and his supporters. That it didn’t calls to mind the scene in The Crucible when an enraged John Proctor, whose wife is being arrested, shouts in response to an assurance that she’ll be released if she’s innocent—“If she is innocent! Why do you never wonder if Parris be innocent, or Abigail? Is the accuser always holy now? Were they born this morning as clean as God’s fingers?” (73). Aside from Chagnon himself, Dreger is about the only one who realized that Tierney warranted some investigating.
Eakin echoes Horgan a bit when she faults the “zealous tone” of Dreger’s report. Indeed, at one point, Dreger compares Chagnon’s trial to Galileo’s being called before the Inquisition. The fact is, though, there’s an important similarity. One of the most revealing discoveries of Dreger’s investigation was that the members of the AAA task force knew Tierney’s book was full of false accusations but continued with their inquiry anyway because they were concerned about the organization’s public image. In an email to the sociobiologist Sarah Blaffer Hrdy, Jane Hill, the head of the task force, wrote,
Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice.
How John Horgan could have read this and still claimed that Dreger’s report “generates more heat than light” is beyond me. I can only guess that his judgment has been distorted by cognitive dissonance.
To Horgan's other complaints, that she writes too much about her methods and admits to having become friends with Chagnon, she might respond that there is so much real hysteria surrounding this controversy, along with a lot of commentary reminiscent of the type of ridiculous rhetoric one hears on cable news, that it was important to distinguish her report from all the groundless and recriminatory he-said-she-said. As for the friendship, it came about over the course of Dreger’s investigation. This is important because, for one thing, it doesn’t suggest any pre-existing bias, and, for another, one of the claims by critics of Chagnon’s work is that the violence he reported was either provoked by the man himself or represented some kind of mental projection of his own bellicose character onto the people he was studying.
Dreger’s friendship with Chagnon shows that he’s not the monster portrayed by those in the grip of moralizing hysteria. And if parts of her report strike many as sententious, it’s probably owing to their unfamiliarity with how ingrained that hysteria has become. It seems odd that anyone would pronounce on the importance of evidence or fairness—but basic principles we usually take for granted were trampled in the frenzy to condemn Chagnon.
If his enemies are going to compare him to Mengele, then a comparison with Galileo seems less extreme.
Dreger, it seems to me, deserves credit for bringing a sorely needed modicum of sanity to the discussion. And she deserves credit as well for being one of the only people commenting on the controversy who understands the devastating personal impact of such vile accusations. She writes,
Meanwhile, unlike Neel, Chagnon was alive to experience what it is like to be drawn-and-quartered in the international press as a Nazi-like experimenter responsible for the deaths of hundreds, if not thousands, of Yanomamö. He tried to describe to me what it is like to suddenly find yourself accused of genocide, to watch your life’s work be twisted into lies and used to burn you.
So let’s make it clear: the scientific controversy over sociobiology and the scandal over Tierney’s discredited book are two completely separate issues. In light of the findings from all the investigations of Tierney’s claims, we should all, no matter our theoretical leanings, agree that Darkness in El Dorado is, in the words of Jane Hill, who headed a task force investigating it, “just a piece of sleaze.” We should still discuss whether it was appropriate or advisable for Chagnon to exchange machetes for information—I’d be interested to hear what he has to say himself, since he describes all kinds of frustrations the practice caused him in his book. We should also still discuss the relative threat of contagion posed by ethnographers versus missionaries, weighed of course against the benefits of inoculation campaigns.
But we shouldn’t discuss any ethical or scientific matter with reference to Darkness in El Dorado or its disgraced author aside from questions like: Why was the hysteria surrounding the book allowed to go so far? Why were so many people willing to scapegoat Chagnon? Why doesn’t anyone—except Alice Dreger—seem at all interested in bringing Tierney to justice in some way for making such outrageous accusations based on misleading or fabricated evidence? What he did is far worse than what Jonah Lehrer or James Frey did, and yet both of those men have publicly acknowledged their dishonesty while no one has put even the slightest pressure on Tierney to publicly admit wrongdoing.
There’s some justice to be found in how easy Tierney and all the self-righteous pseudo-scholars like Sahlins have made it for future (and present) historians of science to cast them as deluded and unscrupulous villains in the story of a great—but flawed, naturally—anthropologist named Napoleon Chagnon. There’s also justice to be found in how snugly the hysterical moralizers’ tribal animosity toward Chagnon, their dehumanization of him, fits within a sociobiological framework of violence and warfare. One additional bit of justice might come from a demonstration of how easily Tierney’s accusatory pseudo-reporting can be turned inside-out. Tierney at one point in his book accuses Chagnon of withholding names that would disprove the central finding of his famous Science paper, and reading into the fact that the ascendant theories Chagnon criticized were openly inspired by Karl Marx’s ideas, he writes,
Yet there was something familiar about Chagnon’s strategy of secret lists combined with accusations against ubiquitous Marxists, something that traced back to his childhood in rural Michigan, when Joe McCarthy was king. Like the old Yanomami unokais, the former senator from Wisconsin was in no danger of death. Under the mantle of Science, Tailgunner Joe was still firing away—undefeated, undaunted, and blessed with a wealth of off-spring, one of whom, a poor boy from Port Austin, had received a full portion of his spirit. (180)
Tierney had no evidence that Chagnon kept any data out of his analysis. Nor did he have any evidence regarding Chagnon’s ideas about McCarthy aside from what he thought he could divine from knowing where he grew up (he cited no surveys of opinions from the town either). His writing is so silly it would be laughable if we didn’t know about all the anguish it caused. Tierney might just as easily have tried to divine Chagnon’s feelings about McCarthyism based on his alma mater. It turns out Chagnon began attending classes at the University of Michigan, the school where he’d write the famous dissertation for his PhD that would become the classic anthropology text The Fierce People, just two decades after another famous alumnus had studied there, one who actually stood up to McCarthy at a time when he was enjoying the success of a historical play he'd written, an allegory on the dangers of moralizing hysteria, in particular the one we now call the Red Scare. His name was Arthur Miller.
The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy Disguised as a Review of “Mothers and Others: The Evolutionary Origins of Mutual Understanding”
Sarah Blaffer Hrdy’s book “Mother Nature” was one of the first things I ever read about evolutionary psychology. With her new book, “Mothers and Others,” Hrdy lays out a theory for why humans are so cooperative compared to their ape cousins. Once again, she’s managed to pen a work that will stand the test of time, rewarding multiple readings well into the future.
One way to think of the job of anthropologists studying human evolution is to divide it into two basic components: the first is to arrive at a comprehensive and precise catalogue of the features and behaviors that make humans different from the species most closely related to us, and the second is to arrange all these differences in order of their emergence in our ancestral line. Knowing what came first is essential—though not sufficient—to the task of distinguishing between causes and effects. For instance, humans have brains that are significantly larger than those of any other primate, and we use these brains to fashion tools that are far more elaborate than the stones, sticks, leaves, and sponges used by other apes. Humans are also the only living ape that routinely walks upright on two legs. Since most of us probably give pride of place in the hierarchy of our species’ idiosyncrasies to our intelligence, we can sympathize with early Darwinian thinkers who felt sure brain expansion must have been what started our ancestors down their unique trajectory, making possible the development of increasingly complex tools, which in turn made having our hands liberated from locomotion duty ever more advantageous.
This hypothetical sequence, however, was dashed rather dramatically with the discovery in 1974 of Lucy, the 3.2 million-year-old skeleton of an Australopithecus afarensis, in Ethiopia. Lucy resembles a chimpanzee in most respects, including cranial capacity, except that her bones have all the hallmarks of a creature with a bipedal gait. Anthropologists like to joke that Lucy proved butts were more important to our evolution than brains. But, though intelligence wasn’t the first of our distinctive traits to evolve, most scientists still believe it was the deciding factor behind our current dominance. At least for now, humans go into the jungle and build zoos and research facilities to study apes, not the other way around. Other apes certainly can’t compete with humans in terms of sheer numbers. Still, intelligence is a catch-all term. We must ask what exactly it is that our bigger brains can do better than those of our phylogenetic cousins.
A couple decades ago, that key capacity was thought to be language, which makes symbolic thought possible. Or is it symbolic thought that makes language possible? Either way, though a handful of ape prodigies have amassed some high vocabulary scores in labs where they’ve been taught to use pictographs or sign language, human three-year-olds accomplish similar feats as a routine part of their development. As primatologist and sociobiologist (one of the few who unabashedly uses that term for her field) Sarah Blaffer Hrdy explains in her 2009 book Mothers and Others: The Evolutionary Origins of Mutual Understanding, human language relies on abilities and interests aside from a mere reporting on the state of the outside world, beyond simply matching objects or actions with symbolic labels. Honeybees signal the location of food with their dances, vervet monkeys have distinct signals for attacks by flying versus ground-approaching predators, and the list goes on. Where humans excel when it comes to language is not just in the realm of versatility, but also in our desire to bond through these communicative efforts. Hrdy writes,
The open-ended qualities of language go beyond signaling. The impetus for language has to do with wanting to “tell” someone else what is on our minds and learn what is on theirs. The desire to psychologically connect with others had to evolve before language. (38)
The question Hrdy attempts to answer in Mothers and Others—the difference between humans and other apes she wants to place within a theoretical sequence of evolutionary developments—is how we evolved to be so docile, tolerant, and nice as to be able to cram ourselves by the dozens into tight spaces like airplanes without conflict. “I cannot help wondering,” she recalls having thought in a plane preparing for flight,
what would happen if my fellow human passengers suddenly morphed into another species of ape. What if I were traveling with a planeload of chimpanzees? Any of us would be lucky to disembark with all ten fingers and toes still attached, with the baby still breathing and unmaimed. Bloody earlobes and other appendages would litter the aisles. Compressing so many highly impulsive strangers into a tight space would be a recipe for mayhem. (3)
Over the past decade, the human capacity for cooperation, and even for altruism, has been at the center of evolutionary theorizing. Some clever experiments in the field of economic game theory have revealed several scenarios in which humans can be counted on to act against their own interest. What survival and reproductive advantages could possibly accrue to creatures given to acting for the benefit of others?
When it comes to economic exchanges, of course, human thinking isn’t tied to the here-and-now the way the thinking of other animals tends to be. To explain why humans might, say, forgo a small payment in exchange for the opportunity to punish a trading partner for withholding a larger, fairer payment, many behavioral scientists point out that humans seldom think in terms of one-off deals. Any human living in a society of other humans needs to protect his or her reputation for not being someone who abides cheating. Experimental settings are well and good, but throughout human evolutionary history individuals could never have been sure they wouldn’t encounter exchange partners a second or third time in the future. It so happens that one of the dominant theories to explain ape intelligence relies on the need for individuals within somewhat stable societies to track who owes whom favors, who is subordinate to whom, and who can successfully deceive whom. This “Machiavellian intelligence” hypothesis explains the cleverness of humans and other apes as the outcome of countless generations vying for status and reproductive opportunities in intensely competitive social groups.
One of the difficulties in trying to account for the evolution of intelligence is that its advantages seem like such a no-brainer. Isn’t it always better to be smarter? But, as Hrdy points out, the Machiavellian intelligence hypothesis runs into a serious problem. Social competition may have been an important factor in making primates brainier than other mammals, but it can’t explain why humans are brainier than other apes. She writes,
We still have to explain why humans are so much better than chimpanzees at conceptualizing what others are thinking, why we are born innately eager to interpret their motives, feelings, and intentions as well as care about their affective states and moods—in short, why humans are so well equipped for mutual understanding. Chimpanzees, after all, are at least as socially competitive as humans are. (46)
To bolster this point, Hrdy cites research showing that infant chimps have some dazzling social abilities once thought to belong solely to humans. In 1977, developmental psychologist Andrew Meltzoff published his finding that newborn humans mirror the facial expressions of adults they engage with. It was thought that this tendency in humans relied on some neurological structures unique to our lineage which provided the raw material for the evolution of our incomparable social intelligence. But then in 1996 primatologist Masako Myowa replicated Meltzoff’s findings with infant chimps. This and other research suggests that other apes have probably had much the same raw material for natural selection to act on. Yet, whereas the imitative and empathic skills flourish in maturing humans, they seem to atrophy in apes. Hrdy explains,
Even though other primates are turning out to be far better at reading intentions than primatologists initially realized, early flickerings of empathic interest—what might even be termed tentative quests for intersubjective engagement—fade away instead of developing and intensifying as they do in human children. (58)
So the question of what happened in human evolution to make us so different remains.
*******
Sarah Blaffer Hrdy exemplifies a rare, possibly unique, blend of scientific rigor and humanistic sensitivity—the vision of a great scientist and the fine observation of a novelist (or the vision of a great novelist and fine observation of a scientist). Reading her 1999 book, Mother Nature: A History of Mothers, Infants, and Natural Selection, was a watershed experience for me. In going beyond the realm of the literate into that of the literary while hewing closely to strict epistemic principle, she may surpass the accomplishments of even such great figures as Richard Dawkins and Stephen Jay Gould. In fact, since Mother Nature was one of the books through which I was introduced to sociobiology—more commonly known today as evolutionary psychology—I was a bit baffled at first by much of the criticism leveled against the field by Gould and others who claimed it was founded on overly simplistic premises and often produced theories that were politically reactionary.
The theme to which Hrdy continually returns is the too-frequently overlooked role of women and their struggles in those hypothetical evolutionary sequences anthropologists string together. For inspiration in her battle against facile biological theories whose sole purpose is to provide a cheap rationale for the political status quo, she turned not to a scientist but to a novelist. The man most responsible for the misapplication of Darwin’s theory of natural selection to the justification of human societal hierarchies was the philosopher Herbert Spencer, in whose eyes women were no more than what Hrdy characterizes as “Breeding Machines.” Spencer and his fellow evolutionists in the Victorian age, she explains in Mother Nature,
took for granted that being female forestalled women from evolving “the power of abstract reasoning and that most abstract of emotions, the sentiment of justice.” Predestined to be mothers, women were born to be passive and noncompetitive, intuitive rather than logical. Misinterpretations of the evidence regarding women’s intelligence were cleared up early in the twentieth century. More basic difficulties having to do with this overly narrow definition of female nature were incorporated into Darwinism proper and linger to the present day. (17)
Many women over the generations have been unable to envision a remedy for this bias in biology. Hrdy describes the reaction of a literary giant whose lead many have followed.
For Virginia Woolf, the biases were unforgivable. She rejected science outright. “Science, it would seem, is not sexless; she is a man, a father, and infected too,” Woolf warned back in 1938. Her diagnosis was accepted and passed on from woman to woman. It is still taught today in university courses. Such charges reinforce the alienation many women, especially feminists, feel toward evolutionary theory and fields like sociobiology. (xvii)
But another literary luminary much closer to the advent of evolutionary thinking had a more constructive, and combative, response to short-sighted male biologists. And it is to her that Hrdy looks for inspiration. “I fall in Eliot’s camp,” she writes, “aware of the many sources of bias, but nevertheless impressed by the strength of science as a way of knowing” (xviii). She explains that George Eliot,
whose real name was Mary Ann Evans, recognized that her own experiences, frustrations, and desires did not fit within the narrow stereotypes scientists then prescribed for her sex. “I need not crush myself… within a mould of theory called Nature!” she wrote. Eliot’s primary interest was always human nature as it could be revealed through rational study. Thus she was already reading an advance copy of On the Origin of Species on November 24, 1859, the day Darwin’s book was published. For her, “Science has no sex… the mere knowing and reasoning faculties, if they act correctly, must go through the same process and arrive at the same result.” (xvii)
Eliot’s distaste for Spencer’s idea that women’s bodies were designed to divert resources away from the brain to the womb was as personal as it was intellectual. She had in fact met and quickly fallen in love with Spencer in 1851. She went on to send him a proposal which he rejected on eugenic grounds: “…as far as posterity is concerned,” Hrdy quotes, “a cultivated intelligence based upon a bad physique is of little worth, seeing that its descendants will die out in a generation or two.” Eliot’s retort came in the form of a literary caricature—though Spencer already seems a bit like his own caricature. Hrdy writes,
In her first major novel, Adam Bede (read by Darwin as he relaxed after the exertions of preparing Origin for publication), Eliot put Spencer’s views concerning the diversion of somatic energy into reproduction in the mouth of a pedantic and blatantly misogynist old schoolmaster, Mr. Bartle: “That’s the way with these women—they’ve got no head-pieces to nourish, and so their food all runs either to fat or brats.” (17)
A mother of three and a Professor Emerita of Anthropology at the University of California, Davis, Hrdy is eloquent on the need for intelligence—and lots of familial and societal support—if one is to balance duties and ambitions like her own. Her first contribution to ethology came when she realized that the infanticide among hanuman langurs, which she’d gone to Mount Abu in Rajasthan, India, to study at age 26 for her doctoral thesis, had nothing to do with overpopulation, as many suspected. Instead, the pattern she observed was that whenever an outside male deposed a group’s main breeder he immediately began exterminating all of the prior male’s offspring to induce the females to ovulate and give birth again—this time to the new male’s offspring. This was the selfish gene theory in action. But the females Hrdy was studying had an interesting response to this strategy.
In the early 1970s, it was still widely assumed by Darwinians that females were sexually passive and “coy.” Female langurs were anything but. When bands of roving males approached the troop, females would solicit them or actually leave their troop to go in search of them. On occasion, a female mated with invaders even though she was already pregnant and not ovulating (something else nonhuman primates were not supposed to do). Hence, I speculated that mothers were mating with outside males who might take over her troop one day. By casting wide the web of possible paternity, mothers could increase the prospects of future survival of offspring, since males almost never attack infants carried by females that, in the biblical sense of the word, they have “known.” Males use past relations with the mother as a cue to attack or tolerate her infant. (35)
Hrdy would go on to discover this was just one of myriad strategies primate females use to get their genes into future generations. The days of seeing females as passive vehicles while the males duke it out for evolutionary supremacy were now numbered.
I’ll never forget the Young-Goodman-Brown experience of reading the twelfth chapter of Mother Nature, titled “Unnatural Mothers,” which covers an impressive variety of evidence sources that simply devastates any notion of women as nurturing automatons, evolved for the sole purpose of serving as loving mothers. The verdict researchers arrive at whenever they take an honest look into the practices of women with newborns is that care is contingent. To give just one example, Hrdy cites the history of one of the earliest foundling homes in the world, the “Hospital of the Innocents” in Florence.
Founded in 1419, with assistance from the silk guilds, the Ospedale degli Innocenti was completed in 1445. Ninety foundlings were left there the first year. By 1539 (a famine year), 961 babies were left. Eventually five thousand infants a year poured in from all corners of Tuscany. (299)
What this means is that a troubling number of new mothers were realizing they couldn't care for their infants. Unfortunately, newborns without direct parental care seldom fare well. “Of 15,000 babies left at the Innocenti between 1755 and 1773,” Hrdy reports, “two thirds died before reaching their first birthday” (299). And there were fifteen other foundling homes in the Grand Duchy of Tuscany at the time.
The chapter amounts to a worldwide tour of infant abandonment, exposure, or killing. (I remember having a nightmare after reading it about being off-balance and unable to set a foot down without stepping on a dead baby.) Researchers studying sudden infant death syndrome in London set up hidden cameras to monitor mothers interacting with their babies but ended up videotaping some of those mothers trying to smother them. Cases like this have made it necessary for psychiatrists to warn doctors studying the phenomenon “that some undeterminable portion of SIDS cases might be infanticides” (292). Why do so many mothers abandon or kill their babies? Turning to the ethnographic data, Hrdy explains,
Unusually detailed information was available for some dozen societies. At a gross level, the answer was obvious. Mothers kill their own infants where other forms of birth control are unavailable. Mothers were unwilling to commit themselves and had no way to delegate care of the unwanted infant to others—kin, strangers, or institutions. History and ecological constraints interact in complex ways to produce different solutions to unwanted births. (296)
Many scholars see the contingent nature of maternal care as evidence that motherhood is nothing but a social construct. Consistent with the blank-slate view of human nature, this theory holds that every aspect of child-rearing, whether pertaining to the roles of mothers or fathers, is determined solely by culture and therefore must be learned. Others, who simply can’t let go of the idea of women as virtuous vessels, suggest that these women, as numerous as they are, must all be deranged.
Hrdy demolishes both the purely social constructivist view and the suggestion of pathology. And her account of the factors that lead women to infanticide goes to the heart of her arguments about the centrality of female intelligence in the history of human evolution. Citing the pioneering work of evolutionary psychologists Martin Daly and Margo Wilson, Hrdy writes,
How a mother, particularly a very young mother, treats one infant turns out to be a poor predictor of how she might treat another one born when she is older, or faced with improved circumstances. Even with culture held constant, observing modern Western women all inculcated with more or less the same post-Enlightenment values, maternal age turned out to be a better predictor of how effective a mother would be than specific personality traits or attitudes. Older women describe motherhood as more meaningful, are more likely to sacrifice themselves on behalf of a needy child, and mourn lost pregnancies more than do younger women. (314)
The takeaway is that a woman, to reproduce successfully, must assess her circumstances, including the level of support she can count on from kin, dads, and society. If she lacks the resources or the support necessary to raise the child, she may have to make a hard decision. But making that decision in the present unfavorable circumstances in no way precludes her from making the most of future opportunities to give birth to other children and raise them to reproductive age.
Hrdy goes on to describe an experimental intervention that took place in a hospital located across the street from a foundling home in 17th century France. The Hospice des Enfants Assistes cared for indigent women and assisted them during childbirth. It was the only place where poor women could legally abandon their babies. What the French reformers did was tell a subset of the new mothers that they had to stay with their newborns for eight days after birth.
Under this “experimental” regimen, the proportion of destitute mothers who subsequently abandoned their babies dropped from 24 to 10 percent. Neither cultural concepts about babies nor their economic circumstances had changed. What changed was the degree to which they had become attached to their breast-feeding infants. It was as though their decision to abandon their babies and their attachment to their babies operated as two different systems. (315)
Following the originator of attachment theory, John Bowlby, who set out to integrate psychiatry and developmental psychology into an evolutionary framework, Hrdy points out that the emotions underlying the bond between mothers and infants (and fathers and infants too) are as universal as they are consequential. Indeed, mothers who are forced to abandon their infants have to be savvy enough to do so before these emotions are engaged, or they will be unable to go through with the deed.
Female strategy plays a crucial role in reproductive outcomes in several domains beyond the choice of whether or not to care for infants. Women must form bonds with other women for support, procure the protection of men (usually from other men), and lay the groundwork for their children’s own future reproductive success. And that’s just what women have to do before choosing a mate—a task that involves striking a balance between good genes and a high level of devotion—getting pregnant, and bringing the baby to term. The demographic transition that occurs when an agrarian society becomes increasingly industrialized is characterized at first by huge population increases as infant mortality drops; growth then levels off as women gain more control over their life trajectories. Here again, the choices women tend to make are at odds with Victorian (and modern evangelical) conceptions of their natural proclivities. Hrdy writes,
Since, formerly, status and well-being tended to be correlated with reproductive success, it is not surprising that mothers, especially those in higher social ranks, put the basics first. When confronted with a choice between striving for status and striving for children, mothers gave priority to status and “cultural success” ahead of a desire for many children. (366)
And then of course come all the important tasks and decisions associated with actually raising any children the women eventually do give birth to. One of the basic skill sets women have to master to be successful mothers is making and maintaining friendships; they must be socially savvy because, more than with any other ape, the support of helpers, what Hrdy calls allomothers, will determine the fate of their offspring.
*****
Mother Nature is a massive work—541 pages before the endnotes—exploring motherhood through the lens of sociobiology and attachment theory. Mothers and Others is leaner, coming in at just under 300 pages, because its focus is narrower. Hrdy feels that in attempts over the past decade to account for humans’ prosocial impulses, the role of women and motherhood has once again been scanted. She points to the prevalence of theories focusing on competition between groups, with the edge going to those made up of the most cooperative and cohesive members. Such theories once again give the leading role to males and their conflicts, leaving half the species out of the story—unless that other half’s only role is to tend to the children and forage for food while the “band of brothers” is out heroically securing borders.
Hrdy doesn’t weigh in directly on the growing controversy over whether group selection has operated as a significant force in human evolution. The problem she sees with intertribal warfare as an explanation for human generosity and empathy is that the timing isn’t right. What Hrdy is after are the selection pressures that led to the evolution of what she calls “emotionally modern humans,” the “people preadapted to get along with one another even when crowded together on an airplane” (66). And she argues that humans must have been emotionally modern before they could have further evolved to be cognitively modern. “Brains require care more than caring requires brains” (176). Her point is that bonds of mutual interest and concern came before language and the capacity for runaway inventiveness. Humans, Hrdy maintains, would have had to begin forming these bonds long before the effects of warfare were felt.
Apart from periodic increases in unusually rich locales, most Pleistocene humans lived at low population densities. The emergence of human mind reading and gift-giving almost certainly preceded the geographic spread of a species whose numbers did not begin to really expand until the past 70,000 years. With increasing population density (made possible, I would argue, because they were already good at cooperating), growing pressure on resources, and social stratification, there is little doubt that groups with greater internal cohesion would prevail over less cooperative groups. But what was the initial payoff? How could hypersocial apes evolve in the first place? (29)
In other words, what was it that took inborn capacities like mirroring an adult’s facial expressions, present in both human and chimp infants, and through generations of natural selection developed them into the intersubjective tendencies displayed by humans today?
Like so many other anthropologists before her, Hrdy begins her attempt to answer this question by pointing to a trait present in humans but absent in our fellow apes. “Under natural conditions,” she writes, “an orangutan, chimpanzee, or gorilla baby nurses for four to seven years and at the outset is inseparable from his mother, remaining in intimate front-to-front contact 100 percent of the day and night” (68). But humans allow others to participate in the care of their babies almost immediately after giving birth to them. Who besides Sarah Blaffer Hrdy would have noticed this difference, or given it more than a passing thought? (Actually, there are quite a few candidates among anthropologists—Kristen Hawkes for instance.) Ape mothers remain in constant contact with their infants, whereas human mothers often hand over their babies to other women to hold as soon as they emerge from the womb. The difference goes far beyond physical contact. Humans are what Hrdy calls “cooperative breeders,” meaning a child will in effect have several parents aside from the primary one. Help from alloparents opens the way for an increasingly lengthy development, which is important because the more complex the trait—and human social intelligence is about as complex as they come—the longer it takes to develop in maturing individuals. Hrdy writes,
One widely accepted tenet of life history theory is that, across species, those with bigger babies relative to the mother’s body size will also tend to exhibit longer intervals between births because the more babies cost the mother to produce, the longer she will need to recoup before reproducing again. Yet humans—like marmosets—provide a paradoxical exception to this rule. Humans, who of all the apes produce the largest, slowest-maturing, and most costly babies, also breed the fastest. (101)
Those marmosets turn out to be central to Hrdy’s argument because, along with their cousins in the family Callitrichidae, the tamarins, they make up almost the totality of the primate species that she classifies as “full-fledged cooperative breeders” (92). This and other similarities between humans and marmosets and tamarins have long been overlooked because anthropologists have understandably been focused on the great apes, as well as other common research subjects like baboons and macaques.
Golden Lion Tamarins, by Sarah Landry
Callitrichidae, it so happens, engage in some uncannily human-like behaviors. Plenty of primate babies wail and shriek when they’re in distress, but infants who are frequently not in direct contact with their mothers would have to find a way to engage with them, as well as other potential caregivers, even when they aren’t in any trouble. “The repetitive, rhythmical vocalizations known as babbling,” Hrdy points out, “provided a particularly elaborate way to accomplish this” (122). But humans aren’t the only primates that babble “if by babble we mean repetitive strings of adultlike vocalizations uttered without vocal referents”; marmosets and tamarins do it too. Some of the other human-like patterns aren’t as cute though. Hrdy writes,
Shared care and provisioning clearly enhances maternal reproductive success, but there is also a dark side to such dependence. Not only are dominant females (especially pregnant ones) highly infanticidal, eliminating babies produced by competing breeders, but tamarin mothers short on help may abandon their own young, bailing out at birth by failing to pick up neonates when they fall to the ground or forcing clinging newborns off their bodies, sometimes even chewing on their hands or feet. (99)
It seems that the more cooperative infant care tends to be for a given species the more conditional it is—the more likely it will be refused when the necessary support of others can’t be counted on.
Hrdy’s cooperative breeding hypothesis is an outgrowth of George Williams and Kristen Hawkes’s so-called “Grandmother Hypothesis.” For Hawkes, the important difference between humans and apes is that human females go on living for decades after menopause, whereas very few female apes—or any other mammals for that matter—live past their reproductive years. Hawkes hypothesized that the help of grandmothers made ever longer periods of dependent development possible for children, which in turn made it possible for the incomparable social intelligence of humans to evolve. Until recently, though, this theory had been unconvincing to anthropologists because a renowned compendium of data compiled by George Peter Murdock in his Ethnographic Atlas revealed a strong trend toward patrilocal residence patterns across the societies that had been studied. Since grandmothers are thought to be much more likely to help care for their daughters’ children than their sons’—owing to paternity uncertainty—the fact that most humans raise their children far from maternal grandmothers made any evolutionary role for them seem unlikely.
But then in 2004 anthropologist Helen Alvarez reexamined Murdock’s analysis of residence patterns and concluded that pronouncements about widespread patrilocality were based on a great deal of guesswork. After eliminating societies for which too little evidence existed to determine the nature of their residence practices, Alvarez calculated that the majority of the remaining societies were bilocal, which means couples move back and forth between the mother’s and the father’s groups. Citing “The Alvarez Corrective” and other evidence, Hrdy concludes,
Instead of some highly conserved tendency, the cross-cultural prevalence of patrilocal residence patterns looks less like an evolved human universal than a more recent adaptation to post-Pleistocene conditions, as hunters moved into northern climes where women could no longer gather wild plants year-round or as groups settled into circumscribed areas. (246)
But Hrdy extends the cast of alloparents to include a mother’s preadult daughters, as well as fathers and their extended families, although the male contribution is highly variable across cultures (and variable too of course among individual men).
With the observation that human infants rely on multiple caregivers throughout development, Hrdy suggests the mystery of why selection favored the retention and elaboration of mind reading skills in humans but not in other apes can be solved by considering the life-and-death stakes for human babies trying to understand the intentions of mothers and others. She writes,
Babies passed around in this way would need to exercise a different skill set in order to monitor their mothers’ whereabouts. As part of the normal activity of maintaining contact both with their mothers and with sympathetic alloparents, they would find themselves looking for faces, staring at them, and trying to read what they reveal. (121)
Mothers, of course, would also have to be able to read the intentions of others whom they might consider handing their babies over to. So the selection pressure occurs on both sides of the generational divide. And now that she’s proposed her candidate for the single most pivotal transition in human evolution, Hrdy’s next task is to place it in a sequence of other important evolutionary developments.
Without a doubt, highly complex coevolutionary processes were involved in the evolution of extended lifespans, prolonged childhoods, and bigger brains. What I want to stress here, however, is that cooperative breeding was the pre-existing condition that permitted the evolution of these traits in the hominin line. Creatures may not need big brains to evolve cooperative breeding, but hominins needed shared care and provisioning to evolve big brains. Cooperative breeding had to come first. (277)
*****
Flipping through Mother Nature, a book I first read over ten years ago, I can feel some of the excitement I must have experienced as a young student of behavioral science, having graduated from the pseudoscience of Freud and Jung to the more disciplined—and in its way far more compelling—efforts of John Bowlby, on a path, I was sure, to becoming a novelist, and now setting off into this newly emerging field with the help of a great scientist who saw the value of incorporating literature and art into her arguments, not merely as incidental illustrations retrofitted to recently proposed principles, but as sources of data in their own right, and even as inspiration potentially lighting the way to future discovery. To perceive, to comprehend, we must first imagine. And stretching the mind to dimensions never before imagined is what art is all about.
Yet there is an inescapable drawback to massive books like Mother Nature—for writers and readers alike—which is that any effort to grasp and convey such a vast array of findings and theories comes with the risk of casual distortion, since the minutiae mastered by the experts in any subdiscipline will almost inevitably be heeded insufficiently in the attempt to conscript what appear to be basic points in the service of a broader perspective. Even more discouraging is the assurance that any intricate tapestry woven of myriad empirical threads will inevitably be unraveled by ongoing research. Your tapestry is really a snapshot taken from a distance of a field in flux, and no sooner does the shutter close than the beast continues along the path of its stubbornly unpredictable evolution.
When Mothers and Others was published just four years ago in 2009, for instance, reasoning based on the theory of kin selection led most anthropologists to assume, as Hrdy states, that “forager communities are composed of flexible assemblages of close and more distant blood relations and kin by marriage” (132).
This assumption seems to have been central to the thinking that led to the principal theory she lays out in the book, as she explains that “in foraging contexts the majority of children alloparents provision are likely to be cousins, nephews, and nieces rather than unrelated children” (158). But as theories evolve, old assumptions come under new scrutiny, and in an article published in the journal Science in March of 2011, anthropologist Kim Hill and his colleagues report that, after analyzing the residence and relationship patterns of 32 modern foraging societies, “most individuals in residential groups are genetically unrelated” (1286). In science, two years can make a big difference. This same study does, however, bolster a different pillar of Hrdy’s argument by demonstrating that men relocate to their wives’ groups as often as women relocate to their husbands’, lending further support to Alvarez’s corrective of Murdock’s data.
Even if every last piece of evidence she marshals in her case for how pivotal the transition to cooperative breeding was in the evolution of mutual understanding in humans is overturned, Hrdy’s painstaking efforts to develop her theory and lay it out so comprehensively, so compellingly, and so artfully will not have been wasted. Darwin once wrote that “all observation must be for or against some view to be of any service,” but many scientists, trained as they are to keep their eyes on the data and to avoid the temptation of building grand edifices on foundations of inference and speculation, look askance at colleagues who dare to comment publicly on fields outside their specialties, especially in cases like Jared Diamond’s where their efforts end up winning them Pulitzers and guaranteed audiences for their future works.
But what use are legions of researchers with specialized knowledge hermetically partitioned by narrowly focused journals and conferences of experts with homogeneous interests? Science is contentious by nature, so whenever a book gains popularity with a nonscientific audience we can count on groaning from the author’s colleagues as they rush to assure us that what we’ve read is a misrepresentation of their field. But stand-alone findings, no matter how numerous, no matter how central they are to researchers’ daily concerns, can’t compete with the grand holistic visions of the Diamonds, Hrdys, or Wilsons, imperfect and provisional as they must be, when it comes to inspiring the next generation of scientists. Nor can any number of correlation coefficients or regression analyses spark anything like the same sense of wonder that comes from even a glimmer of understanding about how a new discovery fits within, and possibly transforms, our conception of life and the universe in which it evolved. The trick, I think, is to read and ponder books like the ones Sarah Blaffer Hrdy writes as soon as they’re published—but to be prepared all the while, as soon as you’re finished reading them, to read and ponder the next one, and the one after that.
Also read:
NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA
Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education
Maria Konnikova’s book “Mastermind: How to Think Like Sherlock Holmes” got me really excited, because when psychology comes up in discussions of literature at all, it’s usually in the form of the pseudoscience of Sigmund Freud rather than the actual science. Konnikova, whose blog went a long way toward remedying that tragedy, wanted to offer up an alternative approach. However, though the book shows great promise, it’s ultimately disappointing.
Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.
In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knew that the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.
What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.
There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,
the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.
The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”
The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.
I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible.
Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.
The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.
Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.
Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.
In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,
learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)
This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.
It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. A related and widely known finding is that chess experts have much better recall for the positions of pieces on a board than novices do. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?
The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement for curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.
If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes,
How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)
So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill was an accountant than that he was an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the one point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
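(For readers who want the arithmetic behind “inherently less likely,” here is a quick worked identity of my own, not something drawn from Kahneman’s or Konnikova’s text: for any two attributes A and B, P(A and B) = P(A) × P(B given A), and since P(B given A) can never exceed 1, the conjunction can never be more probable than either attribute on its own. With Linda, A is “bank teller” and B is “active feminist”; however well B seems to fit her description, tacking it on can only hold the probability even or lower it.)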
The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,
The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)
So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:
Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)
But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.
As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged based on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the ability to sustain the conviction to stay motivated and remain alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.
Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly and organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,
How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?
The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates of creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.
Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,
Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)
The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).
That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:
Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.
But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:
You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)
Hard to disagree, isn’t it?
To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says he got tricked because he didn’t do what he knew how to do, it can happen to any of us, so be careful you don’t let it happen to you. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.
Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. The tragic blunder she made in eschewing nuanced examinations of how stories work, how people relate to characters, and how authors create them, all for a shallow and one-dimensional attempt at suggesting that a 100-year-old fictional character somehow divined groundbreaking research findings from the end of the twentieth and beginning of the twenty-first centuries, calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.
You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.
Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.
A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”
I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.
Also read
FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM
And
THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS
And
LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE
Also a propos is
Sweet Tooth is a Strange Loop: An Aid to Some of the Dimmer Reviewers of Ian McEwan's New Novel
Ian McEwan is one of my literary heroes. “Atonement” and “Saturday” are among my favorite books. But a lot of readers trip over the more experimental aspects of his work. With “Sweet Tooth,” he once again offers up a gem of a story, one that a disconcerting number of reviewers missed the point of.
(I've done my best to avoid spoilers.)
Anytime a character in Ian McEwan’s new novel Sweet Tooth enthuses about a work of literature, another character can be counted on to come along and pronounce that same work dreadful. So there’s a delightful irony in the declaration at the end of a silly review in The Irish Independent, which begins by begrudging McEwan his “reputation as the pulse-taker of the social and political Zeitgeist,” that the book’s ending “might be enough to send McEwan acolytes scurrying back through the novel to see how he did it, but it made me want to throw the book out the window.” Citing McEwan’s renown among the reading public before gleefully launching into critiques that are as difficult to credit as they are withering seems to be the pattern. The notice in The Economist, for instance, begins,
At 64, with a Hollywood film, a Man Booker prize and a gong from the queen, Ian McEwan has become a grand old man of British letters. Publication of his latest novel, “Sweet Tooth”, was announced on the evening news. A reading at the Edinburgh book festival was introduced by none other than the first minister, Alex Salmond.
But, warns the unnamed reviewer, “For all the attendant publicity, ‘Sweet Tooth’ is not Mr. McEwan’s finest book.” My own take on the novel—after seeking out all the most negative reviews I could find (most of them are positive)—is that the only readers who won’t appreciate it, aside from the reviewers who can’t stand how much the reading public celebrates McEwan’s offerings, are the ones whose prior convictions about what literature is and what it should do blind them to even the possibility that a novel can successfully operate on as many levels as McEwan folds into his narrative. For these readers, the mere fact of an author’s moving from one level to the next somehow invalidates whatever gratification they got from the most straightforward delivery of the story.
At the most basic level, Sweet Tooth is the first-person account of how Serena Frome is hired by MI5 and assigned to pose as a representative for an arts foundation offering the writer Thomas Haley a pension that will allow him to quit his teaching job so he can work on a novel. The book’s title refers to the name of a Cold War propaganda initiative to support authors whose themes Serena’s agency superiors expect will bolster the cause of the Non-Communist Left. Though Sweet Tooth is fictional, there actually were programs like it that supported authors like George Orwell. Serena is an oldest-sibling type, with an appreciation for the warm security of established traditions and longstanding institutions, along with an attraction to, and an eagerness to please, authority figures. These are exactly the traits that lead to her getting involved in the project of approaching Tom under false pretenses, an arrangement which becomes a serious dilemma for her as the two begin a relationship and she falls deeper and deeper in love with him. Looking back on the affair at the peak of the tension, she admits,
For all the mess I was in, I didn’t know how I could have done things differently. If I hadn’t joined MI5, I wouldn’t have met Tom. If I’d told him who I worked for at our very first meeting—and why would I tell a stranger that?—he would’ve shown me the door. At every point along the way, as I grew fonder of him, then loved him, it became harder, riskier to tell him the truth even as it became more important to do so. (266)
This plot has many of the markings of genre fiction: the secret-burdened romance, the spy thriller. But even on this basic level there’s a crucial element separating the novel from its pulpier cousins: the stakes are actually quite low. The nation isn’t under threat. No one’s life is in danger. The risks are only to jobs and relationships.
James Lasdun, in an otherwise favorable review for The Guardian, laments these low stakes, suggesting that the novel’s early references to big political issues of the 1970s lead readers to the thwarted expectation of more momentous themes. He writes,
I couldn't help feeling like Echo in the myth when Narcissus catches sight of himself in the pool. “What about the IRA?” I heard myself bleating inwardly as the book began fixating on its own reflection. What about the PLO? The cold war? Civilisation and barbarity? You promised!
But McEwan really doesn’t make any such promise in the book’s opening. Lasdun simply makes the mistake of anticipating Sweet Tooth will be more like McEwan’s earlier novel Saturday. In fact, the very first lines of the book reveal what the main focus of the story will be:
My name is Serena Frome (rhymes with plume) and almost forty years ago I was sent on a secret mission for the British Security Service. I didn’t return safely. Within eighteen months of joining I was sacked, having disgraced myself and ruined my lover, though he certainly had a hand in his own undoing. (1)
That “I didn’t return safely” sets the tone—overly dramatic, mock-heroic, but with a smidgen of self-awareness that suggests she’s having some fun at her own expense. Indeed, all the book’s promotional material referring to her as a spy notwithstanding, Serena is more of a secretary or a clerk than a secret agent. Her only field mission is to offer funds to a struggling writer, not exactly worthy of Ian Fleming.
When Lasdun finally begins to pick up on the lighthearted irony and the overall impish tone of the novel, his disappointment has him admitting that all the playfulness is enjoyable but longing nonetheless for it to serve some greater end. Such longing betrays a remarkable degree of obliviousness to the fact that the final revelation of the plot actually does serve an end, a quite obvious one. Lasdun misses it, apparently because the point is moral as opposed to political. A large portion of the novel’s charm stems from the realization, which I’m confident most readers will come to early on, that Sweet Tooth, for all the big talk about global crises and intrigue, is an intimately personal story about a moral dilemma and its outcomes—at least at the most basic level. The novel’s scope expands beyond this little drama by taking on themes that present various riddles and paradoxes. But whereas countless novels in the postmodern tradition have us taking for granted that literary riddles won’t have answers and plot paradoxes won’t have points, McEwan is going for an effect that’s much more profound.
The most serious criticism I came across was at the end of the Economist review. The unnamed critic doesn’t appreciate the surprise revelation that comes near the end of the book, insisting that afterward, “it is hard to feel much of anything for these heroes, who are all notions and no depth.” What’s interesting is that the author presents this not as an observation but as a logical conclusion. I’m aware of how idiosyncratic responses to fictional characters are, and I accept that my own writing here won’t settle the issue, but I suspect most readers will find the assertion that Sweet Tooth’s characters are “all notion” absurd. I even have a feeling that the critic him- or herself sympathized with Serena right up until the final chapter—as the critic from The Irish Independent must have. Why else would they be so frustrated as to want to throw the book out of the window? Several instances of Serena jumping into life from the page suggest themselves for citation, but here’s one I found particularly endearing. It comes as she’s returning to her parents’ house for Christmas after a long absence and is greeted by her father, an Anglican bishop, at the door:
“Serena!” He said my name with a kindly, falling tone, with just a hint of mock surprise, and put his arms about me. I dropped my bag at my feet and let myself be enfolded, and as I pressed my face into his shirt and caught the familiar scent of Imperial Leather soap, and of church—of lavender wax—I started to cry. I don’t know why, it just came from nowhere and I turned to water. I don’t cry easily and I was as surprised as he was. But there was nothing I could do about it. This was the copious hopeless sort of crying you might hear from a tired child. I think it was his voice, the way he said my name, that set me off. (217)
This scene reminds me of when I heard my dad had suffered a heart attack several years ago: even though at the time I was so pissed off at the man I’d been telling myself I’d be better off never seeing him again, I barely managed two steps after hanging up the phone before my knees buckled and I broke down sobbing—so deep are these bonds we carry on into adulthood even when we barely see our parents, so shocking when their strength is made suddenly apparent. (Fortunately, my dad recovered after a quintuple bypass.)
But, if the critic for the Economist concluded that McEwan’s characters must logically be mere notions despite having encountered them as real people until the end of the novel, what led to that clearly mistaken deduction? I would be willing to wager that McEwan shares with me a fondness for the writing of the cognitive scientist Douglas Hofstadter, in particular Gödel, Escher, Bach and I Am a Strange Loop, both of which set about arriving at an intuitive understanding of the mystery of how consciousness arises from the electrochemical mechanisms of our brains, offering as analogies several varieties of paradoxical, self-referencing feedback loops, like cameras pointing at the TV screens they feed into. What McEwan has engineered—there’s no better word for it—with his novel is a multilevel, self-referential structure that transforms and transcends its own processes and premises as it folds back on itself.
One of the strange loops Hofstadter explores, M.C. Escher’s 1960 lithograph Ascending and Descending, can give us some helpful guidance in understanding what McEwan has done. If you look at the square staircase in Escher’s lithograph a section at a time, you see that each step continues either ascending or descending, depending on the point of view you take. And, according to Hofstadter in Strange Loop,
A person is a point of view—not only a physical point of view (looking out of certain eyes in a certain physical space in the universe), but more importantly a psyche’s point of view: a set of hair-trigger associations rooted in a huge bank of memories. (234)
Importantly, many of those associations are made salient with emotions, so that certain thoughts affect us in powerful ways we might not be able to anticipate, as when Serena cries at the sound of her father’s voice, or when I collapsed at the news of my father’s heart attack. These emotionally tagged thoughts form a strange loop when they turn back on the object, now a subject, doing the thinking. The neuron forms the brain that produces the mind that imagines the neuron, in much the same way as each stair in the picture takes a figure both up and down the staircase. What happened for the negative reviewers of Sweet Tooth is that they completed a circuit of the stairs and realized they couldn’t possibly have been going up (or down), even though at each step along the way they were probably convinced.
McEwan, interviewed by Daniel Zalewski for the New Yorker in 2009, said, “When I’m writing I don’t really think about themes,” admitting that instead he follows Nabokov’s dictum to “Fondle details.”
Writing is a bottom-up process, to borrow a term from the cognitive world. One thing that’s missing from the discussion of literature in the academy is the pleasure principle. Not only the pleasure of the reader but also of the writer. Writing is a self-pleasuring act.
The immediate source of pleasure for McEwan, then, and he probably assumes for his readers as well, comes at the level of the observations and experiences he renders through prose.
Sweet Tooth is full of great lines like, “Late October brought the annual rite of putting back the clocks, tightening the lid of darkness over our afternoons, lowering the nation’s mood further” (179). But McEwan would know quite well that writing is also a top-down process; at some point themes and larger conceptual patterns come into play. In his novel Saturday, the protagonist, a neurosurgeon named Henry Perowne, is listening to Angela Hewitt’s performance of Bach’s strangely loopy “Goldberg” Variations. He writes,
Well over an hour has passed, and Hewitt is already at the final Variation, the Quodlibet—uproarious and jokey, raunchy even, with its echoes of peasant songs of food and sex. The last exultant chords fade away, a few seconds’ silence, then the Aria returns, identical on the page, but changed by all the variations that have come before. (261-2)
Just as an identical Aria or the direction of ascent or descent in an image of stairs can be transformed by a shift in perspective, details about a character, though they may be identical on the page, can have radically different meanings, serve radically different purposes depending on your point of view.
Though in the novel Serena is genuinely enthusiastic about Tom’s fiction, the two express their disagreements about what constitutes good literature at several points. “I thought his lot were too dry,” Serena writes, “he thought mine were wet” (183). She likes sentimental endings and sympathetic characters; he admires technical élan. Even when they agree that a particular work is good, it’s for different reasons: “He thought it was beautifully formed,” she says of a book they both love, “I thought it was wise and sad” (183). Responding to one of Tom’s stories that features a talking ape who turns out never to have been real, Serena says,
I instinctively distrusted this kind of fictional trick. I wanted to feel the ground beneath my feet. There was, in my view, an unwritten contract with the reader that the writer must honor. No single element of an imagined world or any of its characters should be allowed to dissolve on authorial whim. The invented had to be as solid and as self-consistent as the actual. This was a contract founded on mutual trust. (183)
A couple of the reviewers suggested that the last chapter of Sweet Tooth revealed that Serena had been made to inhabit precisely the kind of story that she’d been saying all along she hated. But a moment’s careful reflection would have made them realize this isn’t true at all. What’s brilliant about McEwan’s narrative engineering is that it would satisfy the tastes of both Tom and Serena. Despite the surprise revelation at the end—the trick—not one of the terms of Serena’s contract is broken. The plot works as a trick, but it also works as an intimate story about real people in a real relationship. To get a taste of how this can work, consider the following passage:
Tom promised to read me a Kingsley Amis poem, “A Bookshop Idyll,” about men’s and women’s divergent tastes. It went a bit soppy at the end, he said, but it was funny and true. I said I’d probably hate it, except for the end. (175)
The self-referentiality of the idea makes of it a strange loop, so it can be thought of at several levels, each of which is consistent and solid, but none of which captures the whole meaning.
Sweet Tooth is a fun novel to read, engrossing and thought-provoking, combining the pleasures of genre fiction with the mind-expanding thought experiments of the best science writing. The plot centers on a troubling but compelling moral dilemma, and, astonishingly, the surprise revelation at the end actually represents a solution to this dilemma. I do have to admit, however, that I agree with the Economist that it’s not McEwan’s best novel. The conceptual plot devices bear several similarities to those in his earlier novel Atonement, and that novel is much more serious, its stakes much higher.
Sweet Tooth is nowhere near as haunting as Atonement. But it doesn’t need to be.
Also read:
LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME
And:
MUDDLING THROUGH "LIFE AFTER LIFE": A REFLECTION ON PLOT AND CHARACTER IN KATE ATKINSON’S NEW NOVEL
The Smoking Buddha: Another Ghost Story for Adults (and Young Adults too)
It’s Halloween and the kids are being a little mean with their jokes about someone who’s overweight. So it’s time for a story about how quickly anyone’s life can spiral out of control.
My nephew and three of the kids from the old neighborhood were telling raunchy jokes around the steel mesh fire pit the night of my brother and soon-to-be sister-in-law’s Halloween party. All night, I kept shaking my head in disbelief at the fact that they were all almost old enough to get their driver’s licenses. I used to babysit my nephew during the summers back when I was in college. Let’s just say at the time all these kids had a ways to go before they were teenagers.
What they were laughing at now were some jokes at the expense of an overweight girl they all knew from school. I had a feeling they were playing up the viciousness because they thought it would make them seem more grownup in my eyes. The jokes were so mean I was sitting there wondering how best to get through to them that I didn’t really care for this brand of humor (though I admit I struggled not to laugh at a couple points, restricting myself to a wry smirk).
“You guys are being pretty ruthless,” I said finally. “Do you think she deserves all that? I mean, you assume it’s okay because she’ll never hear what you’re saying. But I’d bet real money she would be able to recite back to you quite a few of the really good jokes you think she’s never heard.”
This suggestion received a surprising and not altogether heartening response, the gist of which was that this poor girl did in fact deserve to be made the butt of their cruel jokes. After all, she did choose to eat too much, they pointed out. She also chose to sit around being lazy instead of exercising. They even suggested that their heckling could possibly give her some added incentive to change her ways.
“Uh-ho!” I erupted in incredulous laughter. “I get it—you guys aren’t picking on her because her flaws make you feel better about your own. No, you’re picking on her because you want to help her.” They fell momentarily silent. “You guys are such humanitarians.”
Before long, the mood lightened once again, and I began to wonder if I’d been too harsh, my efforts to temper my moralizing with sarcasm notwithstanding. But not two minutes later the snide, and now defiant, references to the overweight girl began to sneak back into their banter. I decided my best bet was to leave them to it, and so I got up from the log I’d been sitting on uncomfortably and went into the house to get a beer and see what the old people (some of whom are younger than me) were doing.
As I stood in the kitchen alongside the table where several people in costumes were playing a trivia game (which I’m no longer allowed to play with them), I considered bringing up the issue of the mean jokes about the obese girl to my brother. The thought had barely entered my mind before I dismissed it though; the only thing worse than a moralizer is a rat. Plus, I wasn’t exactly one to be pointing the finger, I realized, as I had just that morning been cursing my neighbor, who lives in the carriage house behind my apartment, because she’s always sitting on her porch smoking, coughing loudly at predictable intervals, often blaring music through an open window, shouting into her phone to her mother and her lone friend, completely oblivious to how many people in the vicinity can hear her every word, and, well, just being an all-around irksome presence. I also must confess my own impulse is to look at her with some revulsion. Because she’s terribly obese.
*******
I returned to the backyard and found the boys still at the fire, laughing at each other’s failed attempts at telling a passable ghost story. It wasn’t long before they started reminding me of all the times back in the days when I babysat them that I either read ghost stories to them, or else spun some ridiculously elaborate ones of my own. They pleaded with me to tell them a good one. “We know you know some,” they pressed. “Tell us the best one you can think of.”
“It just so happens I know a story about some stuff that actually happened pretty recently,” I said. They all turned toward me with eager grins. “This guy I know named Zach lives in an apartment in an old house downtown, a lot like the place I live in, and one night he brings a woman home with him from a bar. Now this is really good news for Zach because he’s been down on his luck lately. He used to live in a ginormous yuppie mansion up closer to this part of town. But then like a year ago the recession caught up to his company and he got laid off. He’d bought the house and a lot of other stuff on credit, so right away he was in trouble. And, on top of losing his house, his fiancée had just up and left him for another guy. So Zach moves into this cheap one-bedroom apartment in West Central. As you can imagine, he’s not feeling too good about himself.
“After a while he manages to get a part-time job. Before getting laid off he used to work as a big shot sales guy for a tech company, so one of the hardest parts about finding another job was having to accept working in a less prestigious position. The part-time gig he got was only temporary—he was helping put together contracts for a bank or something—but he was hoping to get a foothold and turn it into something that would get his career back on track.
“So one night he brings this woman home—and it’s only like the second person he’s dated since his fiancée left him. Things are going well, you know. They start off talking on the couch, lots of eye contact, the reach-over-and-brush-aside-the-hair deal, hand on her shoulder, cups the back of her neck, pulls her in for the kiss. I’m sure you guys know all about how that stuff works. So they’re kissing for a while, and then she says, ‘Maybe you should show me your bedroom.’ Well, he’s actually embarrassed about his whole tiny and rickety apartment, so he’d rather leave the lights off and not show her anything. But of course he’s about to get laid so it doesn’t take him very long to get over it.
“They get up from the couch and he leads her by the hand through his embarrassingly dirty kitchen and into his bedroom. Once inside the door, he decides not to bother with the light switch. He just wraps his arms around her and they start making out again. They make their way over to the bed, and, you know, now things are getting hot-and-heavy. Her shirt comes off, then her bra. Zach’s having himself a really good time because, if you can believe what he says, the girl’s got really nice… well, you guys can use your imaginations. Then he’s sitting back on his knees starting to take his own shirt off when he hears a sound. It’s this kind of squeaky ‘ehuh-ehuhm.’
“Zach knows exactly what it is. He stops in the middle of taking off his shirt to close his eyes and shake his head as he’s heaving this big sigh. Of course, the woman is like, ‘Are you okay? What’s the matter?’ He tries to brush it off and keep things moving along. So he gets his shirt off and starts kissing her again, and, you know, other stuff. Then he sits back and starts unbuckling her belt, and that’s when he hears it again: ‘ehuh-ehuhm.’ This time she hears it too, which is kind of a disaster. ‘What is that?’ she says. So Zach’s like, ‘Oh, it’s nothing. Don’t worry.’ He goes back to unbuckling her belt, unbuttons her pants, zipper comes down, and she starts doing that little wiggle with her hips to help him get her jeans off. But before he gets them down, they hear the sound again: ‘Ehuh-ehuhm.’
“This time he just flips. He jumps out of bed, like, ‘Goddamnit!’ He goes over to the window that looks down on the little yard behind the house and the carriage house apartment behind it. And there she is. Cheryl, his neighbor, sitting at the little table she’s set up on her porch with its neat little patterned tablecloth lit up by the single orange bulb in the lamp next to her front door, smoking her cigarette, and looking so completely vacant he’s sure he could run up to her, smack her in the face, and be halfway back to the front door of the house before she even got around to saying, ‘Hey, what was that for?’ Cheryl, who emerged from the carriage house to smoke every twenty-five minutes like clockwork. Cheryl, whose neck and face were so fat she may as well have been holding her head in place with a big pillow. And, as Zach is glaring at her through his upstairs window, she makes the sound again, ‘ehuh-ehuhm,’ the cough that comes at such regular intervals, each repetition sounding so perfectly identical to all the others, that he imagines it coming from a synthesizer set on a timer.
“He’s getting ready to lift open his window so he can yell down to her to show some fucking courtesy—it’s after midnight!—when the woman in his bed starts saying, ‘You know, it’s getting really late,’ and all those things women say when the natural progression has been interrupted and they’ve had too much time to think about what they’re doing. So Zach tries to be cool and hands her her bra and walks her out to her car, saying he’ll call her and all that. But he knows the moment is past and his chances are shot. He climbs the stairs, feeling totally defeated, and goes back to his room, where he stands at the window again, just glowering down at Cheryl as she sits smoking in the orange light of her porch, mindlessly lifting her cigarette to her lips.”
“Let me guess—he kills her.”
“Shut up and let him tell the story.”
“Well, he definitely wanted to kill her. You should have heard the way he talked about this woman. I mean he loathed her. He called her the Smoking Buddha because of the totally blank look she always had on her big doughy face. I guess one of the other things she did to annoy him was talk on her phone while she was on the porch smoking. Apparently, she was so loud he could hear just about every word even when his windows were shut. Anyway, one time he overheard her talking to like three different people about how she’d had some kind of panic attack and gone to the emergency room because she thought she was dying. And he was like, ‘You know who pays for it when people like that go to the emergency room? We do. She probably just got winded from lifting her fat ass out of her recliner and freaked out.’
“Now Zach used to be one of those political guys who think everyone who can’t pay their own bills is just lazy and looking for a handout. Since losing his job, he’s calmed down a bit, but somehow his neighbor managed to get him talking about parasites and worthless slugs and drains on society and all that again. He said over half the phone conversations she broadcasted over the neighborhood were about her health problems. So he’s like, ‘Get off your fat ass and stop eating so much pizza and I bet you feel a lot better—and stop costing us so much in your fucking healthcare bills.’
“The other thing that pissed him off was that he’d actually tried to get the carriage house a while back. The rent was like thirty bucks less than he was paying for his upstairs apartment, but the place was really cool. He’d gone in to check it out when the girls who lived there before Cheryl were moving out. When he called the landlord, though, he found out that Cheryl had set up a special deal. She and her mom were going to redo all the landscaping in the backyard and get a bit taken off the rent. Of course, the only person Zach ever saw doing any actual work in the yard over the next couple months was the mom. Cheryl just sat there at the little table on her porch, smoking and complaining about all her medical problems.
“Now, I’ve checked out the backyard Cheryl’s mom worked on for the first half of the summer. The weeds and brush have been cleared away from the hedges. She lined the edges of the grass with stones and put mulch all around the trees. It looks really nice. There’s a sidewalk that goes from the front of the house back to the carriage house and around to an alley behind it. To the left of the sidewalk as you’re walking to the alley, there’s about ten feet of mulch before the hedge. The weird thing is, Cheryl’s mom, who is completely normal by the way, judging from the few times I saw her back there working, she made what I think is a flowerbed right in the middle of the mulch. It’s rectangular and its sides are made up of what look like these tiny headstones. They each poke out of the ground, grayish-white, their tops angled at the corners but curved up in half circles in the middle. There are seven of them on the sides parallel to the sidewalk, and four on the perpendicular sides. So it’s like there’s a six by two and a half foot rectangle of fresh black dirt in the middle of the mulch. The one and only time I ever talked to Cheryl’s mom I jokingly asked her if there wasn’t a body buried in that flowerbed. She jokingly refused to reassure me.
“Even more, um, interesting, is the statue she has stationed at the back corner of the flowerbed. You can only see its back and some of its profile from Zach’s upstairs window, but coming from the alley you see it’s a cement satyr—they teach you ignorant wretches any mythology in school?—standing with one hand limp at his side, and the other raised to stroke his beard. I think it’s supposed to look relaxed and playful, but maybe because it’s like two and a half feet tall—you know, the dimensions are all wrong—its breeziness comes across as mischievous, even a bit sinister. It’s still there. I’ll have to have you guys over to my apartment sometime so we can walk over and I’ll show it to you. You’ll see that it looks like it’s been out in the weather for decades, with mossy blotches and patches of gray. She must have moved it from some other yard.
“Anyway, there’s an even smaller statue of an angel cupping her hands in front of her beside Cheryl’s front door. There’s nothing scary about that one—just a kind of yard ornament you don’t see very often anymore. Oh, and there’s also this tiny maple tree, maybe four or five feet tall, a little off to one side from Cheryl’s neat little porch arrangement. I just remember that tree because come late September and all through October, its leaves have been this shocking, bright red—almost glowing. It’s actually pretty cool looking.
“Back to Zach’s story, though. So it’s about the middle of the summer now and he goes to work one day and tries to talk to some of the management figures about the possibility of going full-time and getting a raise. Unfortunately, they tell him instead that once the projects he’s working on now are done, sometime around Christmas, they won’t have any more work for him. Zach tries to take this in stride and starts planning in his mind how he’s going to devote all his free time to looking for another job, a better one. But of course he’s really worried that he’s going to end up working at a gas station or something—and even those types of jobs aren’t guaranteed anymore. To make matters worse, when his work’s done for the day, he goes out to the parking lot, gets in his car, and it won’t start.
“Now the stress is almost too much, but he just closes his eyes and tries to take some deep breaths. The building he works in is downtown, so it’s like a twenty-five minute walk to his apartment. The whole way he’s trying to psych himself up, telling himself all that self-help, bootstrappy crap about how every setback is actually an opportunity, every challenge a chance to develop character and perseverance. They probably give you guys a lot of the same crap at school. I’ll just say Zach was realizing for the first time that perseverance and determination—they only go so far. At some point, no matter how hard you work, the luck factor takes over.
“This is what he’s thinking about when he’s walking past the carriage house behind his apartment, coming from the alley, and hears dishes crashing inside. He walks around to the front to peek in the window, and there’s Cheryl on the floor in the kitchen, both hands on her throat like she’s choking. Zach steps away. His first thought is that he has to hurry up and call an ambulance. Then he figures that will take too long—he needs to run inside and give her the Heimlich. But he finds himself just standing there doing nothing. He can’t imagine anything worse than having to wrap his arms around that sweaty woman. He says to himself his arms probably aren’t long enough anyway. And he actually laughs. So this woman is inside choking to death and he’s standing there chuckling at a lame fat joke.
“Finally, as soon as his mind returns to the internet job-searching tasks he’s got lined up, the ones he’s been telling himself he’d jump right on the second he got home, he manages to convince himself that he probably didn’t really see anything too out of the ordinary. She probably just tripped or something. He figures he ought to mind his own business and forget whatever he happened to see through her window anyway. And that’s just what he does. He turns around, walks to the door to his apartment, goes upstairs, gets on his computer, and spends the next several hours online looking through job listings.
“The crazy thing is he actually forgot all about having seen Cheryl on the floor—at least until that night. He’d been asleep for a long time, so he had no idea what time it was. But there was the sound, the ‘ehuh-ehuhm,’ the cough. He remembered it because even though it woke him up in the middle of the night he was still sort of relieved to hear it. Zach’s not a horrible person, you know. He was mostly just having a horrible day. Anyway, he didn’t want to have to think that the poor woman had died because he’d just walked away.
“He gets out of bed and goes to the window. Sure enough, Cheryl is sitting at her little table and smoke is hanging in the air all around her. The orange light from the lamp behind her is making the big blob of her outline glow, but everything else is in shadow. For several moments, he can’t resist filling in the shadows with the imagined features of a giant orange toad. Then, as he’s standing there, he shivers and feels chills spreading over his back. He can’t tell, but it looks like Cheryl is looking right back up at him—something he’s never seen her do before. She’s always seemed so oblivious to all her neighbors. The more convinced he becomes that she is in fact staring at him, glaring at him even, not even moving enough to take another drag off the cigarette in her hand, the farther he finds himself backing away from the window in tiny shuffling steps.
“It freaks him out so much it takes him forever to fall back to sleep. But eventually he does, and the morning comes. Of course, he has to walk to work in the morning because his car is still dead in the parking lot. He’s a little uneasy as he’s walking along the sidewalk, around the carriage house toward the alley, trying to keep his eyes forward and not notice anything that might be going on through the windows. But then he turns the corner into the alley and there’s a fucking ambulance parked right outside the carriage house. Zach thinks Cheryl must have choked to death after all, but then he remembers he saw her outside smoking in the middle of the night. He ends up just putting his head down and walking past, rushing to work.”
“Ooh, creepy. Did this really happen?”
“Just let him tell it.”
“When he gets off work later that day, he calls his landlord Tom to see if he’s heard anything. Sure enough, the ambulance was there for Cheryl—who’d choked to death the day before. Now Zach is so freaked out he doesn’t want to walk back home because he doesn’t want to go anywhere near that carriage house again. And this is when all sorts of weird stuff started happening to him. I didn’t hear about it until just a few days ago because I stopped hearing from him at all for a long time. But that day he walked home trying to tell himself that either she hadn’t been choking when he saw her but had choked later, or that he’d dreamt the whole thing about seeing her outside looking up at him. When he gets to the alley, he decides to walk the little extra distance to the road so he can get to the house from the front.
“As you can imagine, he goes on to have a few sleepless nights. But then, maybe three or four days later, he was distracted enough by his work and his fruitless job searching to wander into the alley again on his way home. Naturally, he tenses up when he realizes he’s passing the carriage house, and he can’t help staring at the place as he’s going around it. He’s staring at it so intently that by the time he’s in the backyard, where the table is still situated between two chairs on the porch with its neat little tablecloth topped with an overfull ashtray, he doesn’t notice that he’s not alone. When he finally turns his head back to the sidewalk, he’s almost nose-to-nose with an older woman. Jumping backward, he ends up tripping over one of the tiny headstones edging the still-empty flowerbed and falls right on his ass in the middle of the rectangle. The woman walks over to look down at him, and he sees it’s Cheryl’s mom. But she doesn’t say anything to him. She just stands there beside the statue of the satyr, muttering something he can’t make out. And Zach’s so startled he just lies there braced up on his elbows in the dirt. Now, this is where it gets really freaky—as she’s standing over him, sort of talking under her breath, he swears the sun, which has been out all day, suddenly got blocked by a cloud. So everything gets darker and then these huge gusts of wind start blowing in the trees and scattering leaves all around.
“Now, when I saw this woman, she looked completely normal. A bit overweight, like most middle-aged people you see around here. Nothing like her daughter. And I usually saw her in jeans and sweatshirts. She had long hair, somewhat gray. She’s actually hard to describe just because you see so many women just like her every day. But right then she was scaring the hell out of Zach. After a few minutes of being in a sort of trance, he says he started to stand up while she just turned and walked away toward the front door of the carriage house, still not saying a word to him.”
“Oh man, is this guy still alive?”
“He just said he talked to him a few days ago, moron.”
“Maybe he talked to his ghost—ooOOoo.”
“Seriously, I want to hear what happened after that. How long ago was this?”
“It was in the middle of September. But you guys are going to have to wait a couple minutes to hear the rest because I have to piss and get another beer. All this yammering is making me parched.”
“Ha ha, Yammering!”
*******
After the intermission, we were all back on our logs and lawn chairs, and a few more people were milling around. When one of the boys explained I’d been telling a ghost story, there was a brief discussion about whether or not someone should go over the highlights of the story so far. But then my brother chimed in, assuring everyone, “If it’s any good, he’ll write the whole thing up for his blog tomorrow.” So most of the newcomers wandered away or only listened with one ear from a distance.
“So let me guess,” one of the boys said, “the Smoking Buddha comes up out of the flowerbed grave and belly flops on him.”
“No, Zach never saw the Smoking Buddha again—though I think I might’ve. But you’re going to have to wait for that part. What happened first was that Zach was diagnosed with hypothyroidism, or he had a growth on his thyroid, or something like that. So he needs surgery but the insurance program he signed up for after he lost the insurance he got through his last job won’t pay enough of the bill for him to be able to afford it. Now Zach never said anything about this in connection with Cheryl choking to death. But hypothyroidism causes your metabolism to slow down. It can lead to depression and—wait for it—severe weight gain. So of course when I hear about it all I can think is: dude was mean to woman because she’s fat, woman dies, mom cast some kind of fucking revenge curse on dude, and now Zach has some medical condition he can’t afford to get treated—which will probably make him gain weight. Sure enough, in like a month he’s put on about twenty pounds.
“I know that may just sound like a coincidence, and it probably is. One of the other things that happened was that Zach’s landlord Tom called him and asked if he still wanted to move into the carriage house. Of course, Zach’s not quite as eager anymore, but it’s broad daylight when he gets the call, so he kind of stubbornly insists to himself that there’s no reason he can’t live there. He tells Tom he wants to move, but no sooner does he get off the phone than he starts panicking and hyperventilating. When he described the dread he felt then to me later, he said he’d never felt anything like it before. He was sweating all over and couldn’t catch his breath. He started to dial Tom back like four times but kept telling himself he wanted to wait until he could calm down before trying to talk to anyone.
“But apparently his stubbornness ultimately won out. I helped move him into the carriage house near the beginning of this month. Now something else happened that, looking back afterward, is really strange. While he was online looking for work, he found a couple of guys he used to hang out with back in college through some networking site. It turns out they’re both big partiers, and Zach used to go barhopping with them all the time. They both happen to live pretty close to Zach, so for the past month Zach has been meeting up with these two guys like four or more times a week at Henry’s, the bar that’s maybe four or five blocks from his apartment, which is good because his car is still sitting dead in the parking lot where he works.
“The first thing that’s weird about him hanging out with these guys is that they get him smoking again—and I hadn’t known Zach was ever a smoker to begin with. It turns out he started back in high school and quit right after graduating from college. Now, hanging out at a bar with his old friends, both of whom go outside all the time to smoke, and doing a bunch of drinking—you know, it’s only a matter of time before he starts up again. When I asked him about it, he said it’s no big deal; it’s just to help him with the stress; he’ll quit again once he gets his job situation sorted out. The second thing that’s weird is that he starts thinking someone’s following him all the time when he walks back and forth from the bar. And of course that’s the part that really freaks him out.
“One night there’s a guy walking behind him as he’s on his way home. Now Zach is pretty drunk so he tries to play it cool. There’s no law against someone walking around downtown at night, and it’s no big deal they both just happen to be heading in the same direction. But, after the guy makes a few of the same turns as Zach, he starts getting a bit scared. He keeps doing these quick glances over his shoulder to see if the guy’s still back there, because for some reason he doesn’t want to look right at him. It’s like he’s afraid once the guy realizes Zach knows he’s following him he’ll give up the pretense and just run him down to do to him whatever it is he’s planning to do.
“Now here’s where it gets really freaky. When Zach rounds the corner into the alley that goes to the carriage house, he’s thinking the guy will stop following him for sure. But then after a while he hears footsteps behind him—and there’s something strange about the way the footsteps sound. So Zach does another of those quick glances over his shoulder, and he’s glad for a second because it looks like the guy is quite some distance away from him still. But with his eyes forward he thinks the footsteps sound like they’re coming from much closer. Even before he has a chance to really think out what this means, he’s bolting down the alley as fast as he can, fumbling with his keys in the door, rushing inside and slamming the door behind him. What he realized was that whatever it was following him—it could have only been about two and a half feet tall.”
“What the hell? Is this all true?”
“What was it? Like some kind of little demon?”
“He was probably just drunk and freaking himself out.”
“Will you guys just listen? So he locks the door and just stands there panicking for a while. But eventually he starts trying to peek out the windows to see if anyone—or anything—is still out there. He doesn’t see a damn thing. Now this goes on and on. Not every time he goes out, but often enough that after a while he doesn’t want to go outside after dark anymore. And he never manages to get a good look. It’s always just on the edge of his vision, or in the shadows. Plus, he’s always drunk and too terrified to look directly at it. So like six times in the past month the poor guy has gotten scared shitless in the middle of his walk home from Henry’s and had to sprint home.
“But the worst was the night he came home from the bar drunk, passed out, and then woke up because he thought someone was in the apartment with him. He opened his eyes thinking he’d heard little running footsteps in the room. When he sat up in his bed though, whatever it was was gone. So he just sits there in his bed for a minute, listening and getting scared, trying to tell himself that it had only been a dream. Then he hears the sound again. Now Zach is completely terrified at this point, but he works up the courage to go out into the living room and kitchen area to check it out. He doesn’t notice anything at first, but as he’s passing the front door he sees that it’s not even pulled all the way shut. So he rushes over and pulls on the knob to close it, but as he’s doing it he looks out through the window and ends up standing there completely frozen.
“Zach’s standing at the door, looking out into the yard that's lit up by the orange lamp—and he realizes that the satyr statue that stands at the corner of the flowerbed edged with all the little headstones—well, it’s not there. And as he’s standing there petrified he hears the sound of the tiny footsteps behind him again. After an eternity not being able to move, he decides to run to the bathroom as fast as he can, turn on the lights, and lock himself in there. And that’s what he does. He ended up sleeping on the floor in the bathroom all night. When he woke up the next morning, he crept up to the window again, and sure enough the satyr statue was right back where it was supposed to be."
"Hell no."
“Yeah, this was just a couple weeks ago. Since all this stuff started, you guys wouldn’t believe how much Zach has changed. I mean, I barely even recognize the dude. He says he’s freaked out all the time, he can’t sleep; I know he’s drinking like a fish even though he can’t afford it. He’s putting on weight—he’s stuffing his face with something every time I see him lately. And he’s smoking again. In fact, the last time I saw him, just a few days ago, he sat there chain-smoking the whole time. He has two chairs sitting on his porch, and I saw him sitting out there when I walked by, so I stopped to sit and chat. He told me it’s all still going on—the guy following him home, the sounds in the house—and he’s basically at wit's end.
“It was dusk when I stopped by, and the whole time we’re talking I’m looking at that little maple tree with the blazing red leaves blowing in the breeze in front of me. And that’s when I started getting really creeped out myself—because there wasn’t any fucking breeze. I kind of wanted to get up and leave right then, but before I could say anything Zach’s phone starts ringing. He holds up a finger to me as he answers it. But after about ten seconds it’s like he’s completely forgotten I’m even there. It turns out it’s his mom on the phone, and he just starts unloading all these complaints on her, loud enough that anyone on the block could listen in. He tells her about all the weird shit that’s happening and how he’s always waking up in the middle of the night in a panic. Then he starts in on how he can’t find any decent work. Then it’s his insurance. He tells her how he’s trying to get on Medicaid, but there’s no way he can get benefits in time to pay for his procedure. He goes on and on, so finally I stand up and just kind of gesture a goodbye to him.
“As I’m walking up the sidewalk that runs through the yard and alongside the house up to the street, I look at the satyr statue and feel chills going down my back. And that’s when I hear it, this squeaky ‘ehuh-ehuhm’ coughing sound behind me. I turn back to see Zach, just as the dark triggers the sensor on the lamp beside the door and the orange light comes on. He’s sitting there in his chair, hunched, in a cloud of cigarette smoke, still talking on the phone, obliviously loud, the orange light showing his rounded outline and casting his face in shadow. As I stood there looking at him in disbelief, I couldn’t help but fill in the shadowed features with those of a toad. I turned around and got the hell out of there. Haven’t heard from him since.
“Now, speaking of being overweight, which one of you little punks is going to find me some Twizzlers?”
Finis
Also read the first
BEDTIME GHOST STORY FOR ADULTS
And
The Ghost Haunting 710 Crowder Court
And
Projecting Power, Competing for Life, & Supply Side Math
If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.
Some issues I feel are being skirted in the debates:
1. How the Toughest Guy Projects his Power
The Republican position on national security is that the best way to achieve peace is by “projecting power,” and they are fond of saying that Democrats invite aggression by “projecting weakness.” The idea is that no one will start a fight he knows he won’t win, nor will he threaten to start a fight with someone he knows will call his bluff. This is why Republican presidents often suffer from Cowboy Syndrome.
In certain individual relationships, this type of dynamic actually does establish itself—or rather the dominant individual establishes this type of dynamic. But in the realm of national security we aren’t dealing with individuals. With national security, we’re basically broadcasting to the world the level of respect we have for them. If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.
The idea that shouting out to the world that the US is the toughest country around and we’re ready to prove it is somehow going to deter Al Qaeda militants and others like them is dangerously naïve. We can’t hope for all the nations of the world to fall into some analog of Battered Wife Syndrome. Think about it this way: everyone knows that the heavyweight champion MMA guy is the toughest fighter in the world. If you want to project power, there’s no better way to do it than by winning that belt. Now we have to ask ourselves: Do fewer people want to fight the champion? We might also ask: Does a country like Canada get attacked more because of its lame military?
The very reason organizations like Al Qaeda ever came into existence was that America was projecting its power too much. The strategy of projecting power may as well have been devised by teenage boys—and it continues to appeal to people with that mindset.
2. Supplying more Health and Demanding not to Die
Paul Ryan knows that his voucher system for Medicare is going to run into the problem that increasing healthcare costs will quickly surpass whatever amount is allotted to individuals in the vouchers—that’s the source of the savings the program achieves. But it’s not that he wants to shortchange seniors. Rather, he’s applying a principle from his economic ideology, the one that says the best way to control costs is to make providers compete. If people can shop around, the reasoning goes, they’ll flock toward the provider with the lowest prices—the same way we all do with consumer products. Over time, all the providers have to find ways to become more efficient so they can cut costs and stay in business.
Sounds good, right? But the problem is that healthcare services aren’t anything like consumer goods. Supply and demand doesn’t work in the realm of life and death. Maybe, before deciding which insurance company should get our voucher, we’ll do some research. But how do you know what types of services you’re going to need before you sign up? You’re not going to find out that your plan doesn’t cover the service you need until you need the service. And at that point the last thing you’re going to want to do is start shopping around again. Think about it: people shop around for private insurance now. Are insurance companies paragons of efficiency?
Another problem is that you can’t shop around to find better services once industry standards have set in. For example—if you don’t like how impersonal your cell phone service is, can you just drop your current provider and go to another? If you do, you’re just going to run into the same problem again. What’s the lowest price you can pay for cable or internet services? The reason Comcast and Dish Network keep going back and forth with their commercials about whose service is better is that there is fundamentally very little difference.
Finally, insurance is so complicated that only people who can afford accountants or financial advisors, only people who are educated and have the time to research their options, basically only people with resources are going to be able to make prudent decisions. This is why the voucher system, over time, is just going to lead to further disadvantages for the poor and uneducated, bring about increased inequality, and exacerbate all the side effects of inequality, like increased violent crime.
3. Demand Side Never Shows up for the Debate
The reason Romney and Ryan aren’t specifying how they’re going to pay for their tax cuts, while at the same time increasing the budget for the military, while at the same time decreasing the deficit, is that they believe, again based on their economic ideology, that the tax cuts will automatically lead to economic growth. The reasoning is that if people have more money after taxes, they’ll be more likely to spend it. This includes business owners who will put the money toward expanding their businesses, which of course entails hiring new workers. All this cycles around to more money for everyone, more people paying that smaller percentage but on larger incomes, so more revenue comes in, and now we can sit back and watch the deficit go down. This is classic supply side economics.
Sounds good, right? The problem is that businesses only invest in expansion when there’s increasing demand for their products or services, and the tax cuts for lower earners won’t be enough to significantly increase that demand. If there's no demand, rich people don't invest and hire; they buy bigger houses and such. The supply side theory has been around for a long time—and it simply doesn’t work. The only reliable outcome of supply side policies is increasing wealth inequality.
What works is increasing demand—that’s demand side economics. You do this by investing in education, public resources, and infrastructure. Those construction workers building roads and bridges and maintaining parks and monuments get jobs when their companies are hired by the government—meaning they get paid with our tax money. Of course, they get taxed on it, thus helping to support more projects. Meanwhile, unemployment goes down by however many people are hired. These people have more income, and thus create more demand. The business owners expand their businesses—hire more people. As the economy grows, the government can scale back its investment.
Demand side economics can also focus on human capital, including healthcare, because it's hard to work when you're sick or dying, and you're not going to be creating any demand when you're bankrupt from hospital and insurance payments. Government can also help the economy by investing in people's education, because educated people tend to get better jobs, make more money, and, wait for it, create more demand. (Not to mention innovation.) Job training can work the same way.
Supply side versus demand side is at the heart of most policy debates. The supply side ideology has all kinds of popular advocates, from Ayn Rand to Rush Limbaugh. The demand siders seem more mum, but that might just be because I live in Indiana. In any case, the demand siders have much better evidence supporting their ideas, even though they lose in terms of rhetoric, as the knee-jerk response to their ideas is to (stupidly, inaccurately) label them socialist. As Bill Clinton pointed out and the fact checkers corroborated, Democrats do a better job creating jobs.
4. Climate Change?
Also read:
TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY
THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY
The STEM Fempire Strikes Back: Feminists’ Desperate Search for Smoking Guns
Moss-Racusin et al.’s study should not be summarily dismissed, but let’s not claim it’s more important than it really is just because it produced the results the STEM fems were hoping for. Far too many people assume that feminism can only be good for women and good for science. But if discrimination really doesn’t play that big a role for women in science—which everyone should acknowledge the current weight of evidence suggests is the case—the infusion of gender politics has the potential to cause real harm.
Bad news: lots of research points to the inescapable conclusion that you, Dear Reader, whether you’re a man or a woman, are a sexist. You may be inclined to reject this label. You may even try to insist that you don’t in fact believe that one sex is inferior to the other. But it doesn’t matter, because the research suggests that what you claim to believe about the relative statuses of the genders doesn’t align with how quickly you attach positive or negative labels to pictures of women and men in a task called the Implicit Association Test. Your sexism is “subtle,” “implicit,” “unconscious.” If this charge irks you or if you feel it’s completely unfair, that probably means you’re even more of a sexist than we might have originally assumed. You can try to find fault with the research that demonstrates you’re a sexist, or offer alternative interpretations of the findings, but why would you do that unless you’re a sexist and trying to cover it up—unless, that is, you’re secretly harboring and seeking to rationalize hostile feelings toward women? Sexism is like original sin. It’s in us whether we like it or not, so we must work hard to avoid succumbing to it. We must abase ourselves before the altar of gender equality.
At least, this is what the feminists involved in the controversy over women’s underrepresentation in STEM fields—the STEM fems—would have us believe. Responding to the initial fifty-eight comments to his blog post “Scientists, Your Gender Bias is Showing,” in which he discusses a new study that found significant bias in ratings of competence and hireability depending on the sex of unseen applicants to a lab manager’s position, physicist Sean Carroll ungraciously—you might even say unbecomingly—writes, “At least the trolls have moved on from ‘there is no discrimination’ to ‘discrimination is rationally justified.’ Progress!”
By Carroll’s accounting, I am a troll (by mine, he’s a toady) because I happen to believe gender-based discrimination accounts for a very modest portion of career segregation and pay differentials in industrialized societies—and it may not account for any. And, this latest study notwithstanding, nearly all the available evidence suggests the underrepresentation of women in STEM fields is based on the fact that men and women, on average, prefer to pursue different types of careers. Indeed, the study Carroll so self-righteously trumpets, which didn’t track any actual hirings but only asked participants to treat application materials hypothetically, may have produced the findings it did because its one hundred and twenty-seven participants were well aware of these different preferences.
The underrepresentation of women in science, technology, engineering, and mathematics fields is taken by feminists as self-evident proof of discrimination. Since most people who work or teach in these areas understand that sexism is wrong—or at least recognize that it’s thought to be wrong by an influential if possibly misguided majority—not many of them openly admit to deliberately discriminating against women. Yet the underrepresentation continues, ergo the discrimination still exists. That’s why in the past decade there’s been so much discussion of unacknowledged or unconscious bias. Anyone who points out that there is another possible explanation—women and men are essentially (in the statistical sense) different—is accused of being a biological determinist, being a misogynist, having a reactionary political agenda, or some combination of the three.
Now, “essentially different” isn’t all that far from “naturally different,” which is of course part of the formula for sexism, since the belief that one sex is inferior assumes they are somehow inherently different. (I’m excluding genders besides male and female not as a statement but for simplicity’s sake.) But the idea that the sexes tend to be different need not imply either is inferior. Historically, women were considered less intelligent by most men (fewer records exist of what women thought of men), but most educated people today realize this isn’t the case. The important differences are in what men and women tend to find interesting and in what types of careers they tend to prefer (note the word “tend”).
So we have two rival theories. The STEM fems explain career segregation and pay gaps with the theory of latent sexism and rampant discrimination. My fellow trolls and I explain them with the theory that women disproportionately prefer careers focusing on people as opposed to objects and abstractions, while also prioritizing time with family over the achievement of higher rank and higher pay. The fems believe that gender roles, including those associated with career trajectories, are a bad thing, that they limit freedom, and that they are imposed on people, sometimes violently, by a patriarchal society. We preference theory folk, on the other hand, believe that gender begins with individuals, that it is expressed and enacted freely, and that the structure of advanced civilizations, including career segregation and a somewhat regular division of labor with regard to family roles, emerges from the choices and preferences of these individuals.
The best case that can be made for the feminist theory is historical. In the past, women were forbidden to work in certain careers. They were kept out of higher education. They were tethered with iron bonds to their children and their husbands’ homes. Men, meanwhile, had to live with the same type of rigid gender definitions, but at least they had some freedom to choose their careers, could count on their wives tending to the children, and enjoyed the highest position of authority in their families. So we can assume, the reasoning goes, that when we look at society today and find income inequality and segregation what we’re seeing is a holdover from this patriarchal system of the past. From this perspective, the idea that the different outcomes for each gender could possibly emerge from choices freely made is anathema because it seems similar to the rationalizations for the rigid roles of yore. Women naturally want to be mothers and homemakers? Anyone who dares make such a claim belongs in the 1950s, right?
Though this take on history is a bit of a caricature (class differences were much more significant than gender ones), it has been easy, until recently, to take as self-evident the notion that gender roles erode in lockstep with the advance of civilization toward ever greater individual freedom for ever greater numbers.
Still, tying modern preference theory to policies of the past is nothing but evil rhetoric (almost as evil as accusations of unconscious thought crimes). No one wants to bring back educational and professional barriers to women. The question is whether, in the absence of those barriers, career segregation and differences in income between the genders will disappear altogether, or if women will continue to disproportionately occupy certain professions and continue to spend more time on average with their families than men do.
Catherine Hakim, a former Senior Research Fellow at the London School of Economics and the mind behind preference theory, posits that economic sex differences emerge from what she calls work-life preferences. She has devised three categories that can be used to describe individuals: work-centered people prioritize their careers, adaptive people try to strike some kind of balance between employment and family work, and home- or family-centered people prefer to give priority to private or family life after they get married. In all the western democracies that have been surveyed, most men but only a small minority of women fit into the work-centered category, while the number of women who are home-centered drastically exceeds the number of men. This same pattern emerges in the US even in samples restricted to elite math and science students. In 2001, David Lubinski and his colleagues reported that in their surveys of high-achieving students, 31 percent of females said that working part-time for some limited period in their careers was either important or extremely important, compared to only 9 percent of males. Nineteen percent of the females said the same about a permanent part-time career, compared to 9 percent of males.
Careers in science and math are notoriously demanding. You have to be a high achiever and a fierce competitor to even be considered for a position, so the fact that men disproportionately demonstrate work-centered priorities goes some way toward explaining the underrepresentation of women. Another major factor that researchers have identified is that women and men tend to be interested in different types of careers, with women preferring jobs that focus on people and men preferring those that focus on things. A 2009 meta-analysis carried out by Rong Su, James Rounds, and Patrick Ian Armstrong compiled data from over 500,000 surveys of vocational interests and found that gender differences on the Things-People dimension produce an effect size that is probably larger than any other in research on gender and personality. Once differences in work-life preferences and vocational interests are taken into consideration, there is probably very little left to explain.
Feminism is a social movement that has many admirable goals, most of which I share. But it is also an ideology that has a fitful relationship with science. Unfortunately, the growing body of evidence that gender segregation and pay gaps emerge from choices freely made by individuals based on preferences that fit reliable patterns in societies all over the world hasn’t done much to end the furor over discrimination. The study on that front that Sean Carroll insists is so damning, “Science Faculty’s Subtle Gender Biases Favor Male Students,” by Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman, is the most substantial bit of actual evidence the STEM fems have been able to marshal in support of their cause in some time. Covering the study in her Scientific American blog, Ilana Yurkiewicz writes,
Whenever the subject of women in science comes up, there are people fiercely committed to the idea that sexism does not exist. They will point to everything and anything else to explain differences while becoming angry and condescending if you even suggest that discrimination could be a factor. But these people are wrong. This data shows they are wrong. And if you encounter them, you can now use this study to inform them they’re wrong. You can say that a study found that absolutely all other factors held equal, females are discriminated against in science. Sexism exists. It’s real. Certainly, you cannot and should not argue it’s everything. But no longer can you argue it’s nothing.
What this rigorous endorsement reveals is that prior to Moss-Racusin et al.’s study there was only weak evidence backing up the STEM fems’ conviction that sexism was rampant in science departments all over the country and the world. You can also see that Yurkiewicz takes this debate very personally. It’s really important to her that women who complain about discrimination be vindicated. I suppose that makes sense, but I wonder if she realizes that the point she’s so desperately trying to prove is intrinsically insulting to her male colleagues—to all male scientists. I also wonder if in any other scientific debate she would be so quick to declare the matter settled based on a single study that sampled only 127 individuals.
The preference theorists have some really good reasons to be skeptical of the far-reaching implications many are claiming for the study. Most importantly, the authors’ conclusions contradict the findings of a much larger study that measured the key variables more directly. In 2010, the National Academies Press published the findings of a task force that was asked by Congress to “conduct a study to assess gender differences in the careers of science, engineering, and mathematics (SEM) faculty, focusing on four-year institutions of higher education that award bachelor’s and graduate degrees. The study will build on the National Academies’ previous work and examine issues such as faculty hiring, promotion, tenure, and allocation of institutional resources including (but not limited to) laboratory space” (VII).
The report, Gender Differences at Critical Transitions in the Careers of Science, Engineering, and Mathematics Faculty, surprised nearly everyone because it revealed no evidence of gender-based discrimination. After reviewing records for 500 academic departments and conducting surveys with 1,800 faculty members (a larger sample than Moss-Racusin et al.’s study by more than an order of magnitude), the National Academies committee concluded,
For the most part, male and female faculty in science, engineering, and mathematics have enjoyed comparable opportunities within the university, and gender does not appear to have been a factor in a number of important career transitions and outcomes. (Bolded in original, 153)
But the two studies were by no means identical, so it’s important to compare the specific findings of one to the other.
Moss-Racusin and her colleagues sent application materials to experienced members of science faculties at research-intensive institutions. Sixty-three of the packets showed the name John and listed the sex as male; sixty-four showed the name Jennifer and listed the sex as female. The study authors gave the participants a cover story: their answers to several questionnaire items about the applications would be used to develop a mentoring program for undergraduate science students. The questions focused on the applicant’s competence, hireability, and likeability, on how likely the rater would be to mentor the applicant, and on how much the rater would offer to pay the applicant. The participants rating applications from females tended to give them better scores for likeability, but lower ones for competence and hireability. The participants, whether male or female themselves, also showed less willingness to mentor females, and indicated they would offer females lower salaries. So there you have it: the participants didn’t dislike the female applicants—they weren’t hostile or “old-fashioned” sexists. But you can see how women forced to deal with this type of bias might be discouraged.
To me, the lower salary offers are the most striking finding. But a difference in medians between $30,200 and $26,500 (a gap of about $3,700) doesn’t seem that big when you consider that the overall spread ran from $15,000 to $45,000, that there was no attempt to control for differences in average salary between universities, and that the sample size is really small.
Moss-Racusin et al. also had the participants complete the Modern Sexism Scale, which was designed as an indirect measure of gender attitudes. On the supporting information page for the study, the authors describe the scale,
Items included: On average, people in our society treat husbands and wives equally; Discrimination against women is no longer a problem in the United States; and Over the past few years, the government and news media have been showing more concern about the treatment of women than is warranted by women’s actual experiences (α = 0.92). Items were averaged to form the gender attitudes scale, with higher numbers indicating more negative attitudes toward women.
Aside from the fact that it defines a lack of support for feminism as sexism (and the middle item, which bears directly on the third, is precisely the matter the study is attempting to treat empirically), this so-called sexism scale introduces the second of two possible confounds. The first confound is that the cover story may have encouraged many of the participants to answer even direct questions about their own responses as if they were answering questions about how they believed most other people in their position would answer them. The second is that, for obvious reasons, it’s important that participants not know the true purpose of a study, which the authors insist was “double-blind.” But we must wonder what conclusions the participants drew about the researchers’ goals when they came across the “Modern Sexism Scale,” a really odd set of questions about the responders’ own views in a survey of their thoughts about an applicant.
We also need to distinguish sexism—the belief that one sex is inferior—from biased behavior. Bias can be based on several factors besides sexism, but the feminists fail to acknowledge this. The authors of the study explain the (modest) difference in ratings for wholly imaginary applicants as the result of arbitrary, sexist stereotypes that have crept into people’s minds. (They of course ignore the equally sexist belief that men are less likeable—rightly so, because their methods don’t allow them to identify any such belief.) The alternative explanation is that the bias is based on actual experiences with real people: the evaluators may have known more men who wanted lab management positions, more men who had successfully worked in that role, and/or more females who didn’t work out in it. Conflating sexism (or racism) with bias is akin to saying that anyone who doesn’t forget everything they’ve experienced with different types of people when making hiring decisions is guilty of perpetrating some injustice.
In a live chat hosted on Science’s webpage, one of the study authors, Jo Handelsman, writes, “We know from a lot of research that people apply more bias in decision making when they have less information, so I think this type of quick review is the most prone to ‘gut level’ decisions, which are colored by bias.” Implicit or gut-level reactions are notoriously sensitive to things like the way questions are framed, the order in which information is presented, and seemingly irrelevant or inconsequential cues. This sensitivity makes complex results from studies of implicit associations extremely difficult to interpret. Handelsman and her colleagues tried to control for extraneous factors by holding the conditions of their study constant for all participants, with the sole difference being the name and sex on the forms. But if I’m a scientist who’s agreed to assess an application in a hypothetical hiring situation for the purpose of helping to design a mentoring program, I would very likely be primed to provide information that I believe might give the students who are the beneficiaries of the research some useful guidance. I might, for instance, want to give female scientists a heads-up about some of the obstacles they might encounter—especially if in the course of the survey I’m reminded of the oppression of wives by husbands, discrimination in society at large, and the fact that some people are so callous as to not even want to hear about how bad women have it.
Another possibility is that the omnipresent and inescapable insistence of STEM fems that sexism is rampant is actually creating some of the bias the studies by STEM fems then turn around and measure. Since Moss-Racusin et al. report that high scores on the so-called Modern Sexism Scale correlated with lower ratings for females’ competence and hireability, we have to ask if the study participants might have been worried about women primed to make excuses for themselves, or if they might have been reluctant to hire someone with an ideologically inspired chip on her shoulder who would be ready to cry gender discrimination at the first whiff of rough treatment. Such alternative interpretations may seem like special pleading. But the discrepancy between the findings of this study and those of the National Academies committee, which, again, were based on a sample that was more than ten times larger and measured the variables directly, calls out for an explanation.
Perhaps the most troubling implication of the study is that women applicants to scientific positions will be less likely to make it to the interview stage of the hiring process, so all the implicit stereotypes about women being less competent will never be overridden with more information. However, the National Academies committee found that in actuality, “The percentage of women who were interviewed for tenure-track or tenured positions was higher than the percentage of women who applied” (157). Unless we assume males tend to be worse candidates for some reason—sexism against men?—this finding rules out the possibility that women are discriminated against for interviews. Are the women who make it to the interview stage thought to be less competent and hireable than their male counterparts? According to the committee report, “For all disciplines the percentage of tenure-track women who received the first job offer was greater than the percentage in the interview pool.”
This finding suggests that for some reason women are thought to be better, not worse, candidates for academic positions. If there’s any discrimination, it’s against men.
It could still be argued that the Moss-Racusin et al. study suggests the reason fewer women apply for positions in science and math fields is that they get less encouragement to do so, since participants said they were less likely to mentor female applicants for the hypothetical position. But how do we square this with the National Academies’ finding that “Female tenure-track and tenured faculty reported that they were more likely to have mentors than male faculty. In the case of tenure-track faculty, 57 percent of women had mentors compared to 49 percent of men” (159)? Well, even if women are more successful at finding mentors, it could still be argued that they would be discouraged by offers of lower starting salaries. But how would they know, unless they read the study, that they can expect lower offers? And is it even true that women in science positions are paid less than men? In its review of the records of 500 academic departments, the National Academies study determined that “Men and women seem to have been treated equally when they were hired. The overall size of start-up packages and the specific resources of reduced initial teaching load, travel funds, and summer salary did not differ between male and female faculty” (158).
Real-world outcomes seem to be completely at odds with the implications of the new study, and at odds too with the STEM fems’ insistence that discrimination accounts for a major portion of women’s underrepresentation in math and science careers. The National Academies study did, however, offer some strong support for preference theory. It turns out that women are more likely to turn down job offers, and the reason they cite is telling.
In 95 percent of the tenure-track and 100 percent of the tenured positions where a man was the first choice for a position, a man was ultimately hired. In contrast, in cases where a woman was the first choice, a woman was ultimately hired in only 70 percent of the tenure-track and 77 percent of the tenured positions.
When faculty were asked what factors they considered when selecting their current position, the effect of gender was statistically significant for only one factor—“family-related reasons.”
The Moss-Racusin et al. study was probably conceived of as a response to another article published in the same journal, the Proceedings of the National Academy of Sciences, in February of 2011. In “Understanding Current Causes of Women’s Underrepresentation in Science,” authors Stephen Ceci and Wendy Williams examine evidence from a vast array of research and write, “We find the evidence for recent sex discrimination–when it exists–is aberrant, of small magnitude, and is superseded by larger, more sophisticated analyses showing no bias, or occasionally, bias in favor of women” (1-2). That Moss-Racusin et al.’s study will likewise be superseded seems quite likely—in fact, it already has been superseded by the National Academies study. Ceci and Williams’ main conclusion from their review is a good summary of preference theory:
Despite frequent assertions that women’s current underrepresentation in math-intensive fields is caused by sex discrimination by grant agencies, journal reviewers, and search committees, the evidence shows women fare as well as men in hiring, funding, and publishing (given comparable resources). That women tend to occupy positions offering fewer resources is not due to women being bypassed in interviewing and hiring or being denied grants and journal publications because of their sex. It is due primarily to factors surrounding family formation and childrearing, gendered expectations, lifestyle choices, and career preferences—some originating before or during adolescence. (5)
Moss-Racusin et al.’s study should not be summarily dismissed—that’s not what I’m arguing. It is suggestive, and the proverbial further studies should be conducted. But let’s not claim it’s more important than it really is just because it produced the results the STEM fems were hoping for. And let’s quit acting like every study that produces evidence of gender discrimination is a victory for the good guys. Far too many people assume that feminism can only be good for women and good for science. But if discrimination really doesn’t play that big a role for women in science—which everyone should acknowledge the current weight of evidence suggests is the case—the infusion of gender politics has the potential to cause real harm. The standing accusation of sexism may not in the end lead to better treatment of women—it may lead to resentment. And the suggestion that every male scientist is the beneficiary of unfair hiring practices will as likely as not lead to angry defiance and increasing tension.
To succeed in the most elite fields, you have to be cut-throat. It would be surprising if science and math careers turned out to be peopled with the nicest, most accommodating individuals. Will the young woman scientist who has a run-in with a jerk frame the encounter as just that—a run-in with an individual who happens to be a jerk—or will she see it as a manifestation of patriarchal oppression? It seems to me the latter response embodies the same type of prejudice the STEM fems claim to be trying to end.
Read Catherine Hakim's Feminist Myths and Magic Medicine
And
Freud: The Falsified Cipher
Upon entering a graduate program in literature, I was appalled to find that Freud’s influence was alive and well in the department. Didn’t they know that nearly all of Freud’s theories have been disproven? Didn’t they know psychoanalysis is pseudoscience?
[As I'm hard at work on a story, I thought I'd post an essay from my first course as a graduate student on literary criticism. It was in the fall of 2009, and I was shocked and appalled that not only were Freud's ideas still being taught but there was no awareness whatsoever that psychology had moved beyond them. This is my attempt at righting the record while keeping my tone in check.]
The matter of epistemology in literary criticism is closely tied to the question of what end the discipline is supposed to serve. How critics decide what standard of truth to adhere to is determined by the role they see their work playing, both in academia and beyond. Freud stands apart as a literary theorist, professing in his works a commitment to scientific rigor in a field that generally holds belief in even the possibility of objectivity as at best naïve and at worst bourgeois or fascist. For the postmodernists, both science and literature are suspiciously shot through with the ideological underpinnings of capitalist European male hegemony, which they take as their duty to undermine. Their standard of truth, therefore, seems to be whether a theory or application effectively exposes one or another element of that ideology to “interrogation.” Admirable as the values underlying this patently political reading of texts are, the science-minded critic might worry lest such an approach merely lead straight back to the a priori assumptions from which it set forth. Now, a century after Freud revealed the theory and practice of psychoanalysis, his attempt to interpret literature scientifically seems like one possible route of escape from the circularity (and obscurantism) of postmodernism. Unfortunately, Freud’s theories have suffered multiple devastating empirical failures, and Freud himself has been shown to be less a committed scientist than an ingenious fabulist, but it may be possible to salvage from the failures of psychoanalysis some key to a viable epistemology of criticism.
A text dating from early in the development of psychoanalysis shows both the nature of Freud’s methods and some of the most important substance of his supposed discoveries. Describing his theory of the Oedipus complex in The Interpretation of Dreams, Freud refers vaguely to “observations on normal children,” to which he compares his experiences with “psychoneurotics” to arrive at his idea that both display, to varying degrees, “feelings of love and hatred to their parents” (920). There is little to object to in this rather mundane observation, but Freud feels compelled to write that his discovery is confirmed by a legend,
…a legend whose profound and universal power to move can only be understood if the hypothesis I have put forward in regard to the psychology of children has an equally universal validity. (920)
He proceeds to relate the Sophocles drama from which his theory gets its name. In the story, Oedipus is tricked by fate into killing his father and marrying his mother. Freud takes this as evidence that the love and hatred he has observed in children are of a particular kind. According to his theory, any male child is fated to “direct his first sexual impulse towards his mother” and his “first murderous wish against his father” (921). But Freud originally poses this idea as purely hypothetical. What settles the issue is evidence he gleans from dream interpretations. “Our dreams,” he writes, “convince us that this is so” (921). Many men, it seems, confided to him that they dreamt of having sex with their mothers and killing their fathers.
Freud’s method, then, was to seek a thematic confluence between men’s dreams, the stories they find moving, and the behaviors they display as children, which he knew mostly through self-reporting years after the fact. Indeed, the entire edifice of psychoanalysis is purported to have been erected on this epistemic foundation. In a later essay on “The Uncanny,” Freud makes the sources of his ideas even more explicit. “We know from psychoanalytic experience,” he writes, “that the fear of damaging or losing one’s eyes is a terrible one in children” (35). A few lines down, he claims that, “A study of dreams, phantasies and myths has taught us that anxiety about one’s eyes…is a substitute for the dread of being castrated” (36). Here he’s referring to another facet of the Oedipus complex which theorizes that the child keeps his sexual desire for his mother in check because of the threat of castration posed by his jealous father. It is through this fear of his father, which transforms into grudging respect, and then into emulation, that the boy learns his role as a male in society. And it is through the act of repressing his sexual desire for his mother that he first develops his unconscious, which will grow into a general repository of unwanted desires and memories (Eagleton 134).
But what led Freud to this theory of repression, which suggests that we have the ability to willfully forget troubling incidents and drive urges to some portion of our minds to which we have no conscious access? He must have arrived at an understanding of this process in the same stroke that led to his conclusions about the Oedipus complex, because, in order to put forth the idea that as children we all hated one parent and wanted to have sex with the other, he had to contend with the fact that most people find the idea repulsive. What accounts for the dramatic shift between childhood desires and those of adults? What accounts for our failure to remember the earlier stage? The concept of repression had to be firmly established before Freud could make such claims. Of course, he could have simply imported the idea from another scientific field, but there is no evidence he did so. So it seems that he relied on the same methods—psychoanalysis, dream interpretation, and the study of myths and legends—to arrive at his theories as he did to test them. Inspiration and confirmation were one and the same.
Notwithstanding Freud’s claim that the emotional power of the Oedipus legend “can only be understood” if his hypothesis about young boys wanting to have sex with their mothers and kill their fathers has “universal validity,” there is at least one alternative hypothesis which has the advantage of not being bizarre. It could be that the point of Sophocles’s drama was that fate is so powerful it can bring about exactly the eventualities we most desire to avoid. What moves audiences and readers is not any sense of recognition of repressed desires, but rather compassion for the man who despite, even because of, his heroic efforts fell into this most horrible of traps. (Should we assume that the enduring popularity of W.W. Jacobs’s story, “The Monkey’s Paw,” which tells a similar story of fate about a couple who inadvertently wish their son dead, proves that all parents want to kill their children?) The story could be moving because it deals with events we would never want to happen. It is true, however, that this hypothesis fails to account for why people enjoy watching such a tragedy being enacted—but then so does Freud’s. If we have spent our conscious lives burying the memory of our childhood desires because they are so unpleasant to contemplate, it makes little sense that we should find pleasure in seeing those desires acted out on stage. And assuming this alternative hypothesis is at least as plausible as Freud’s, we are left with no evidence whatsoever to support his theory of repressed childhood desires.
To be fair, Freud did look beyond the dreams and myths of men of European descent to test the applicability of his theories. In his book Totem and Taboo he inventories “savage” cultures and adduces the universality among them of a taboo against incest as further proof of the Oedipus complex. He even goes so far as to cite a rival theory put forth by a contemporary:
Westermarck has explained the horror of incest on the ground that “there is an innate aversion to sexual intercourse between persons living very closely together from early youth, and that, as such persons are in most cases related by blood, this feeling would naturally display itself in custom and law as a horror of intercourse between near kin.” (152)
To dismiss Westermarck’s theory, Freud cites J. G. Frazer, who argues that laws exist only to prevent us from doing things we would otherwise do or prod us into doing what we otherwise would not. That there is a taboo against incest must therefore signal that there is no innate aversion to incest, but rather a proclivity for it. Here it must be noted that the incest Freud had in mind includes not just lust for the mother but for sisters as well. “Psychoanalysis has taught us,” he writes, again vaguely referencing his clinical method, “that a boy’s earliest choice of objects for his love is incestuous and that those objects are forbidden ones—his mother and sister” (22). Frazer’s argument is compelling, but Freud’s test of the applicability of his theories is not the same as a test of their validity (though it seems customary in literary criticism to conflate the two).
As linguist and cognitive neuroscientist Steven Pinker explains in How the Mind Works, in tests of validity Westermarck beats Freud hands down. Citing the research of Arthur Wolf, he explains that without setting out to do so, several cultures have conducted experiments on the nature of incest aversion. Israeli kibbutzim, in which children grew up in close proximity to several unrelated agemates, and the Chinese and Taiwanese practice of adopting future brides for sons and raising them together as siblings are just two of the natural experiments Wolf examined. When children from the kibbutzim reached sexual maturity, even though there was no discouragement from adults for them to date or marry, they showed a marked distaste for each other as romantic partners. And compared to more traditional marriages, those in which the bride and groom grew up in conditions mimicking siblinghood were overwhelmingly “unhappy, unfaithful, unfecund, and short” (459). The effect of proximity in early childhood seems to apply to parents as well, at least when it comes to fathers’ sexual feelings for their daughters. Pinker cites research showing that the fathers who sexually abuse their daughters tend to be the ones who have spent the least time with them as infants, while the stepdads who actually do spend a lot of time with their stepdaughters are no more likely to abuse them than biological fathers are. These studies not only favor Westermarck’s theory; they also provide a counter to Frazer’s objection to it. Human societies are so complex that we often grow up in close proximity with people who are unrelated, or don’t grow up with people who are, and therefore it is necessary for there to be a cultural proscription—a taboo—against incest in addition to the natural mechanism of aversion.
Among biologists and anthropologists, what is now called the Westermarck effect has displaced Freud’s Oedipus complex as the best explanation for incest avoidance. Since Freud’s theory of childhood sexual desires has been shown to be false, the question arises of where this leaves his concept of repression. According to literary critic—and critic of literary criticism—Frederick Crews, repression came to serve, in the 1980s and ’90s, a role equivalent to the “spectral evidence” used in the Salem witch trials. Several psychotherapists latched on to the idea that children reliably store memories of traumatic events, locking them away precisely because the information is too terrible for them to consciously handle. And the testimony of these therapists has led to many convictions and prison sentences. But the evidence for this notion of repression is solely clinical—modern therapists base their conclusions on interactions with patients, just as Freud did. Unfortunately, researchers outside the clinical setting are unable to find any phenomenon answering to the description of repressed but retrievable memories. Crews points out that there are plenty of people who are known to have survived traumatic experiences: “Holocaust survivors make up the most famous class of such subjects, but whatever group or trauma is chosen, the upshot of well-conducted research is always the same” (158). That upshot:
Unless a victim received a physical shock to the brain or was so starved or sleep deprived as to be thoroughly disoriented at the time, those experiences are typically better remembered than ordinary ones. (159, emphasis in original)
It seems here, as with incest aversion, Freud got the matter exactly wrong—and with devastating fallout for countless families and communities. But Freud was sketchy when it came to whether or not it was memories of actual events that were repressed or just fantasies. The crux of his argument was that we repress unacceptable and inappropriate drives and desires.
And the concept of repressed desires is integral to the use of psychoanalysis in literary criticism. In The Interpretation of Dreams, Freud distinguishes between the manifest content of dreams and their latent content. Having been exiled from consciousness, troublesome desires press against the bounds of the ego, Freud’s notional agent in charge of tamping down uncivilized urges. In sleep, the ego relaxes, allowing the desires of the id, from whence all animal drives emerge, an opportunity for free play. Even in dreams, though, full transparency of the id would be too disconcerting for the conscious mind to accept, so the ego disguises all the elements which surface with a kind of code. Breaking this code is the work of psychoanalytic dream interpretation. It is also the basis for Freud’s analysis of myths and the underlying principle of Freudian literary criticism. (In fact, the distinction between manifest and latent content is fundamental to many schools of literary criticism, though they each have their own version of the true nature of the latent content.) Science writer Steven Johnson compares Freud’s conception of repressed impulses to compressed gas seeping through the cracks of the ego’s defenses, emerging as slips of the tongue or baroque dream imagery. “Build up enough pressure in the chamber, though, and the whole thing explodes—into uncontrolled hysteria, anxiety, madness” (191). The release of pressure, as it were, through dreams and through various artistic media, is sanity-saving.
Johnson’s book, Mind Wide Open: Your Brain and the Neuroscience of Everyday Life, takes the popular currency of Freud’s ideas as a starting point for his exploration of modern science. The subtitle is a homage to Freud’s influential work The Psychopathology of Everyday Life. Perhaps because he is not a working scientist, Johnson is able to look past the shaky methodological foundations of psychoanalysis and examine how accurately its tenets map onto the modern findings of neuroscience. Though he sees areas of convergence, like the idea of psychic conflict and that of the unconscious in general, he has to admit in his conclusion that “the actual unconscious doesn’t quite look like the one Freud imagined” (194). Rather than a repository of repressed fantasies, the unconscious is more of a store of implicit, or procedural, knowledge. Johnson explains, “Another word for unconscious is ‘automated’—the things you do so well you don’t even notice doing them” (195). And what happens to all the pressurized psychic energy resulting from our repression of urges? “This is one of those places,” Johnson writes, “where Freud’s metaphoric scaffolding ended up misleading him” (198). Instead of a steam engine, neuroscientists view the brain as a type of ecosystem, with each module competing for resources; if a module goes unused—its neurons failing to fire—the strength of its connections diminishes.
What are the implications of this new conception of how the mind works for the interpretation of dreams and works of art? Without the concept of repressed desires, is it still possible to maintain a distinction between the manifest and latent content of mental productions? Johnson suggests that there are indeed meaningful connections that can be discovered in dreams and slips of the tongue. To explain them, he points again to the neuronal ecosystem, and to the theory that “Neurons that fire together wire together.” He writes:
These connections are not your unconscious speaking in code. They’re much closer to free-associating. These revelations aren’t the work of some brilliant cryptographer trying to get a message to the frontlines without enemy detection. They’re more like echoes, reverberations. One neuronal group fires, and a host of others join in the chorus. (200-201)
Mind Wide Open represents Johnson’s attempt to be charitable to the century-old, and now popularly recognized, ideas of psychoanalysis. But in this description of the shortcomings of Freud’s understanding of the unconscious and how it reveals itself, he effectively discredits the epistemological underpinnings of any application of psychoanalysis to art. It’s not only the content of the unconscious that Freud got outrageously wrong, but the very nature of its operations. And if Freud could so confidently look into dreams and myths and legends and find in them material that simply wasn’t there, it is cause for us to marvel at the power of his preconceptions to distort his perceptions.
Ultimately, psychoanalysis failed to move from the realm of proto-science to that of methodologically well-founded science, and was relegated instead, by the hubris of its founder, to the back channel of pseudoscience. And yet, if Freud had relied on good science, his program of interpreting literature in terms of the basic themes of human nature, and even his willingness to let literature inform his understanding of those themes, might have matured into a critical repertoire free of the obscurantist excesses and reality-denying absurdities of postmodernism. (Anthropologist Clifford Geertz once answered a postmodernist critic of his work by acknowledging that perfect objectivity is indeed impossible, but then so is a perfectly germ-free operating room; that shouldn’t stop us from trying to be as objective and as sanitary as our best methods allow.)
Critics could feasibly study the production of novels by not just one or a few authors, but a large enough sample—possibly extending across cultural divides—to analyze statistically. They could pose questions systematically to even larger samples of readers. And they could identify the themes in any poem or novel which demonstrate the essential (in the statistical sense) concerns of humanity that have been studied by behavioral scientists, themes like status-seeking, pair-bonding, jealousy, and even the overwhelming strength of the mother-infant bond. “The human race has produced only one successfully validated epistemology,” writes Frederick Crews (362). That epistemology encompasses a great variety of specific research practices, but they all hold as inviolable the common injunction “to make a sharp separation between hypothesis and evidence” (363). Despite his claims to scientific legitimacy, Freud failed to distinguish himself from other critical theorists by relying too much on his own intuitive powers, a reliance that all but guarantees succumbing to the natural human tendency to discover in complex fields precisely what you’ve come to them seeking.
Also read:
WHY SHAKESPEARE NAUSEATED DARWIN: A REVIEW OF KEITH OATLEY'S "SUCH STUFF AS DREAMS"
The Truth about Grownups
A poem about how true some of the things young people think about old people are… but also about how true what old people think of young people is.
What you suspect of us grownups is true,
at least of some of us—we
just want you to do exactly
what we say, because we sort of
hate you for being young
and feel the balance should be
struck by your obedience.
We want you to think what
we think—because you allowing
us to convince you makes us feel
wise and smart and like we have something
to show for all that youth we wasted.
We’re jailors and slave-drivers,
self-righteous power-trippers,
bent on punishing you for the
disappointment and mediocrity
of our lame-ass grownup lives, seeking
in our control over you some semblance of
vindication or salvation.
And, oh yes, your first thought
should be resist, escape,
recriminate—doubt and question.
Why should you follow our
instruction, respect our
decisions, follow our example—
unless you want to end up
like us?
Old and boring and bossy.
No, you’re not condemned to
be like us, not quite,
but the generations shift
with no one’s consent,
dumping you in a place
bearing no mark of your own design,
and looking around in
the vast indifference, the struggle
lost without your ever really
sensing you’d adequately
taken it up—there is
something like concern,
something like worry,
something like a genuine
wish to pass on whatever
you can of
preparedness.
All your discoveries
will seem worthy of
handing down -
even the ones that get
thrown back in your face.
What we think of you kids
is right too, at least some of you:
you’re oblivious to
your own inconsequence—
have no sense of what
anything’s worth, can’t
imagine losing
sight of a promise
that vanishes in the distance
or recedes like a mirage
on the horizon.
Also read:
GRACIE - INVISIBLE FENCES
Secret Dancers
IN HONOR OF CHARLES DICKENS ON THE 200TH ANNIVERSARY OF HIS BIRTH
Let's Play Kill Your Brother: Fiction as a Moral Dilemma Game
Anthropologist Jean Briggs discovered one of the keys to Inuit peacekeeping in the style of play adults use to engage children. She describes the games in her famous essay, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” and in so doing, probably unknowingly, lays the groundwork for an understanding of how our love of fiction evolved, along with our moral sensibilities.
Season 3 of Breaking Bad opens with two expressionless Mexican men in expensive suits stepping out of a Mercedes, taking a look around the peasant village they’ve just arrived in, and then dropping to the ground to crawl on their knees and elbows to a candlelit shrine where they leave an offering to Santa Muerte, along with a crude drawing of the meth cook known as Heisenberg, marking him for execution. We later learn that the two men, Leonel and Marco, who look almost identical, are in fact twins (played by Daniel and Luis Moncada), and that they are the cousins of Tuco Salamanca, a meth dealer and cartel affiliate they believe Heisenberg betrayed and killed. We also learn that they kill people themselves as a matter of course, without registering the slightest emotion and without uttering a word to each other to mark the occasion. An episode later in the season, after we’ve been made amply aware of how coldblooded these men are, begins with a flashback to a time when they were just boys fighting over an action figure as their uncle talks cartel business on the phone nearby. After Marco gets tired of playing keep-away, he tries to provoke Leonel further by pulling off the doll’s head, at which point Leonel runs to his Uncle Hector, crying, “He broke my toy!”
“He’s just having fun,” Hector says, trying to calm him. “You’ll get over it.”
“No! I hate him!” Leonel replies. “I wish he was dead!”
Hector’s expression turns grave. After a moment, he calls Marco over and tells him to reach into the tub of melting ice beside his chair to get him a beer. When the boy leans over the tub, Hector shoves his head into the water and holds it there. “This is what you wanted,” he says to Leonel. “Your brother dead, right?” As the boy frantically pulls on his uncle’s arm trying to free his brother, Hector taunts him: “How much longer do you think he has down there? One minute? Maybe more? Maybe less? You’re going to have to try harder than that if you want to save him.” Leonel starts punching his uncle’s arm but to no avail. Finally, he rears back and punches Hector in the face, prompting him to release Marco and rise from his chair to stand over the two boys, who are now kneeling beside each other. Looking down at them, he says, “Family is all.”
The scene serves several dramatic functions. By showing the ruthless and violent nature of the boys’ upbringing, it intensifies our fear on behalf of Heisenberg, who we know is actually Walter White, a former chemistry teacher and family man from a New Mexico suburb who only turned to crime to make some money for his family before his lung cancer kills him. It also goes some distance toward humanizing the brothers by giving us insight into how they became the mute, mechanical murderers they are when we’re first introduced to them. The bond between the two men and their uncle will be important in upcoming episodes as well. But the most interesting thing about the scene is that it represents in microcosm the single most important moral dilemma of the whole series.
Marco and Leonel are taught to do violence if need be to protect their family. Walter, the show’s central character, gets involved in the meth business for the sake of his own family, and as he continues getting more deeply enmeshed in the world of crime he justifies his decisions at each juncture by saying he’s providing for his wife and kids. But how much violence can really be justified, we’re forced to wonder, with the claim that you’re simply protecting or providing for your family? The entire show we know as Breaking Bad can actually be conceived of as a type of moral exercise like the one Hector puts his nephews through, designed to impart or reinforce a lesson, though the lesson of the show is much more complicated. It may even be the case that our fondness for fictional narratives more generally, like the ones we encounter in novels and movies and TV shows, originated in our need as a species to develop and hone complex social skills involving powerful emotions and difficult cognitive calculations.
Most of us watching Breaking Bad probably feel Hector went way too far with his little lesson, and indeed I’d like to think not too many parents or aunts and uncles would be willing to risk drowning a kid to reinforce the bond between him and his brother. But presenting children with frightening and stressful moral dilemmas to guide them through major lifecycle transitions—weaning, the birth of siblings, adoptions—which tend to arouse severe ambivalence can be an effective way to encourage moral development and instill traditional values. The ethnographer Jean Briggs has found that among the Inuit peoples whose cultures she studies adults frequently engage children in what she calls “playful dramas” (173), which entail hypothetical moral dilemmas that put the children on the hot seat as they struggle to come up with a solution. She writes about these lessons, which strike many outsiders as a cruel form of teasing by the adults, in “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” a chapter she contributed to a 1994 anthology of anthropological essays on peace and conflict. In one example Briggs recounts,
A mother put a strange baby to her breast and said to her own nursling: “Shall I nurse him instead of you?” The mother of the other baby offered her breast to the rejected child and said: “Do you want to nurse from me? Shall I be your mother?” The child shrieked a protest shriek. Both mothers laughed. (176)
This may seem like sadism on the part of the mothers, but it probably functioned to soothe the bitterness arising from the child’s jealousy of a younger nursling. It would also help to settle some of the ambivalence toward the child’s mother, which comes about inevitably as a response to disciplining and other unavoidable frustrations.
Another example Briggs describes seems even more pointlessly sadistic at first glance. A little girl’s aunt takes her hand and puts it on a little boy’s head, saying, “Pull his hair.” The girl doesn’t respond, so her aunt yanks on the boy’s hair herself, making him think the girl had done it. They quickly become embroiled in a “battle royal,” urged on by several adults who find it uproarious. These adults do, however, end up stopping the fight before any serious harm can be done. As horrible as this trick may seem, Briggs believes it serves to instill in the children a strong distaste for fighting because the experience is so unpleasant for them. They also learn “that it is better not to be noticed than to be playfully made the center of attention and laughed at” (177). What became clear to Briggs over time was that the teasing she kept witnessing wasn’t just designed to teach specific lessons but that it was also tailored to the child’s specific stage of development. She writes,
Indeed, since the games were consciously conceived of partly as tests of a child’s ability to cope with his or her situation, the tendency was to focus on a child’s known or expected difficulties. If a child had just acquired a sibling, the game might revolve around the question: “Do you love your new baby sibling? Why don’t you kill him or her?” If it was a new piece of clothing that the child had acquired, the question might be: “Why don’t you die so I can have it?” And if the child had been recently adopted, the question might be: “Who’s your daddy?” (172)
As unpleasant as these tests can be for the children, they never entail any actual danger—Inuit adults would probably agree Hector Salamanca went a bit too far—and they always take place in circumstances and settings where the only threats and anxieties come from the hypothetical, playful dilemmas and conflicts. Briggs explains,
A central idea of Inuit socialization is to “cause thought”: isumaqsayuq. According to [Arlene] Stairs, isumaqsayuq, in North Baffin, characterizes Inuit-style education as opposed to the Western variety. Warm and tender interactions with children help create an atmosphere in which thought can be safely caused, and the questions and dramas are well designed to elicit it. More than that, and as an integral part of thought, the dramas stimulate emotion. (173)
Part of the exercise then seems to be to introduce the children to their own feelings. Prior to having their sibling’s life threatened, the children may not have any idea how they’d feel in the event of that sibling’s death. After the test, however, it becomes much more difficult for them to entertain thoughts of harming their brother or sister—the thought alone will probably be unpleasant.
Briggs also points out that the games send the implicit message to the children that they can be trusted to arrive at the moral solution. Hector knows Leonel won’t let his brother drown—and Leonel learns that his uncle knows this about him. The Inuit adults who tease and tempt children are letting them know they have faith in the children’s ability to resist their selfish or aggressive impulses. Discussing Briggs’s work in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame, anthropologist Christopher Boehm suggests that evolution has endowed children with the social and moral emotions we refer to collectively as consciences, but these inborn moral sentiments need to be activated and shaped through socialization. He writes,
On the one side there will always be our usefully egoistic selfish tendencies, and on the other there will be our altruistic or generous impulses, which also can advance our fitness because altruism and sympathy are valued by our peers. The conscience helps us to resolve such dilemmas in ways that are socially acceptable, and these Inuit parents seem to be deliberately “exercising” the consciences of their children to make morally socialized adults out of them. (226)
The Inuit-style moral dilemma games seem strange, even shocking, to people from industrialized societies, and so it’s clear they’re not a normal part of children’s upbringing in every culture. They don’t even seem to be all that common among hunter-gatherers outside the region of the Arctic. Boehm writes, however,
Deliberately and stressfully subjecting children to nasty hypothetical dilemmas is not universal among foraging nomads, but as we’ll see with Nisa, everyday life also creates real moral dilemmas that can involve Kalahari children similarly. (226)
Boehm goes on to recount an episode from anthropologist Marjorie Shostak’s famous biography Nisa: The Life and Words of a !Kung Woman to show that parents all the way on the opposite side of the world from where Briggs did her fieldwork sometimes light on similar methods for stimulating their children’s moral development.
Nisa seems to have been a greedy and impulsive child. When her pregnant mother tried to wean her, she would have none of it. At one point, she even went so far as to sneak into the hut while her mother was asleep and try to suckle without waking her up. Throughout the pregnancy, Nisa continually expressed ambivalence toward the upcoming birth of her sibling, so much so that her parents anticipated there might be some problems. The !Kung resort to infanticide in certain dire circumstances, and Nisa’s parents probably reasoned she was at least somewhat familiar with the coping mechanism many other parents used when killing a newborn was necessary. What they’d do is treat the baby as an object, not naming it or in any other way recognizing its identity as a family member. Nisa explained to Shostak how her parents used this knowledge to impart a lesson about her baby brother.
After he was born, he lay there, crying. I greeted him, “Ho, ho, my baby brother! Ho, ho, I have a little brother! Some day we’ll play together.” But my mother said, “What do you think this thing is? Why are you talking to it like that? Now, get up and go back to the village and bring me my digging stick.” I said, “What are you going to dig?” She said, “A hole. I’m going to dig a hole so I can bury the baby. Then you, Nisa, will be able to nurse again.” I refused. “My baby brother? My little brother? Mommy, he’s my brother! Pick him up and carry him back to the village. I don’t want to nurse!” Then I said, “I’ll tell Daddy when he comes home!” She said, “You won’t tell him. Now, run back and bring me my digging stick. I’ll bury him so you can nurse again. You’re much too thin.” I didn’t want to go and started to cry. I sat there, my tears falling, crying and crying. But she told me to go, saying she wanted my bones to be strong. So, I left and went back to the village, crying as I walked. (The weaning episode occurs on pgs. 46-57)
Again, this may strike us as cruel, but by threatening her brother’s life, Nisa’s mother succeeded in triggering her natural affection for him, thus tipping the scales of her ambivalence to ensure the protective and loving feelings won out over the bitter and jealous ones. This example was extreme enough that Nisa remembered it well into adulthood, but Boehm sees it as evidence that real life reliably offers up dilemmas parents all over the world can use to instill morals in their children. He writes,
I believe that all hunter-gatherer societies offer such learning experiences, not only in the real-life situations children are involved with, but also in those they merely observe. What the Inuit whom Briggs studied in Cumberland Sound have done is to not leave this up to chance. And the practice would appear to be widespread in the Arctic. Children are systematically exposed to life’s typical stressful moral dilemmas, and often hypothetically, as a training ground that helps to turn them into adults who have internalized the values of their groups. (234)
One of the reasons such dilemmas, whether real or hypothetical or merely observed, are effective as teaching tools is that they bypass the threat to personal autonomy that tends to accompany direct instruction. Imagine Tío Salamanca simply scolding Leonel for wishing his brother dead—it would have only aggravated his resentment and sparked defiance. Leonel would probably also harbor some bitterness toward his uncle for unjustly defending Marco. In any case, he would have been stubbornly resistant to the lesson.
Winston Churchill nailed the sentiment when he said, “Personally, I am always ready to learn, although I don’t always like being taught.” The Inuit-style moral dilemmas force the children to come up with the right answer on their own, a task that requires the integration and balancing of short and long term desires, individual and group interests, and powerful albeit contradictory emotions. The skills that go into solving such dilemmas are indistinguishable from the qualities we recognize as maturity, self-knowledge, generosity, poise, and wisdom.
For the children Briggs witnessed being subjected to these moral tests, the understanding that the dilemmas were in fact only hypothetical developed gradually as they matured. For the youngest ones, the stakes were real and the solutions were never clear at the outset. Briggs explains that
while the interaction between small children and adults was consistently good-humored, benign, and playful on the part of the adults, it taxed the children to—or beyond—the limits of their ability to understand, pushing them to expand their horizons, and testing them to see how much they had grown since the last encounter. (173)
What this suggests is that there isn’t always a simple declarative lesson—a moral to the story, as it were—imparted in these games. Instead, the solutions to the dilemmas can often be open-ended, and the skills the children practice can thus be more general and abstract than some basic law or principle. Briggs goes on,
Adult players did not make it easy for children to thread their way through the labyrinth of tricky proposals, questions, and actions, and they did not give answers to the children or directly confirm the conclusions the children came to. On the contrary, questioning a child’s first facile answers, they turned situations round and round, presenting first one aspect then another, to view. They made children realize their emotional investment in all possible outcomes, and then allowed them to find their own way out of the dilemmas that had been created—or perhaps, to find ways of living with unresolved dilemmas. Since children were unaware that the adults were “only playing,” they could believe that their own decisions would determine their fate. And since the emotions aroused in them might be highly conflicted and contradictory—love as well as jealousy, attraction as well as fear—they did not always know what they wanted to decide. (174-5)
As the children mature, they become more adept at distinguishing between real and hypothetical problems. Indeed, Briggs suggests one of the ways adults recognize children’s budding maturity is that they begin to treat the dilemmas as a game, ceasing to take them seriously, and ceasing to take themselves as seriously as they did when they were younger.
In his book On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd theorizes that the fictional narratives that humans engage one another with in every culture all over the world, be they in the form of religious myths, folklore, or plays and novels, can be thought of as a type of cognitive play—similar to the hypothetical moral dilemmas of the Inuit. He sees storytelling as an adaptation that encourages us to train the mental faculties we need to function in complex societies. The idea is that evolution ensures that adaptive behaviors tend to be pleasurable, and thus many animals playfully and joyously engage in activities in low-stakes, relatively safe circumstances that will prepare them to engage in similar activities that have much higher stakes and are much more dangerous. Boyd explains,
The more pleasure that creatures have in play in safe contexts, the more they will happily expend energy in mastering skills needed in urgent or volatile situations, in attack, defense, and social competition and cooperation. This explains why in the human case we particularly enjoy play that develops skills needed in flight (chase, tag, running) and fight (rough-and-tumble, throwing as a form of attack at a distance), in recovery of balance (skiing, surfing, skateboarding), and individual and team games. (92)
The skills most necessary to survive and thrive in human societies are the same ones Inuit adults help children develop with the hypothetical dilemmas Briggs describes. We should expect fiction, then, to feature similar types of moral dilemmas. Some stories may be designed to convey simple messages—“Don’t hurt your brother,” “Don’t stray from the path”—but others might be much more complicated; they may not even have any viable solutions at all. “Art prepares minds for open-ended learning and creativity,” Boyd writes; “fiction specifically improves our social cognition and our thinking beyond the here and now” (209).
One of the ways the cognitive play we call novels or TV shows differs from Inuit dilemma games is that the fictional characters take over center stage from the individual audience members. Instead of being forced to decide on a course of action ourselves, we watch characters we’ve become emotionally invested in try to come up with solutions to the dilemmas. When these characters are first introduced to us, our feelings toward them will be based on the same criteria we’d apply to real people who could potentially become a part of our social circles. Boyd explains,
Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch. (130)
We favor characters who are good team players—who communicate honestly, who show concern for others, and who direct aggression toward enemies and cheats—for obvious reasons, but we also assess them in terms of what they might contribute to the group. Characters with exceptional strength, beauty, intelligence, or artistic ability are always especially attention-worthy. Of course, characters with qualities that make them sometimes an asset and sometimes a liability represent a moral dilemma all on their own—it’s no wonder such characters tend to be so compelling.
The most common fictional dilemma pits a character we like against one or more characters we hate—the good team player versus the power- or money-hungry egoist. We can think of the most straightforward plot as an encroachment of chaos on the providential moral order we might otherwise take for granted. When the bad guy is finally defeated, it’s like a toy that was snatched away from us has just been returned. We embrace the moral order all the more vigorously. But of course our stories aren’t limited to this one basic formula. Around the turn of the last century, the French writer Georges Polti, following up on the work of Italian playwright Carlo Gozzi, tried to write a comprehensive list of all the basic plots in plays and novels, and flipping through his book The Thirty-Six Dramatic Situations, you find that with few exceptions (“Daring Enterprise,” “The Enigma,” “Recovery of a Lost One”) the situations aren’t simply encounters between characters with conflicting goals, or characters who run into obstacles in chasing after their desires. The conflicts are nearly all moral, either between a virtuous character and a less virtuous one or between selfish or greedy impulses and more altruistic ones. Polti’s book could be called The Thirty-Odd Moral Dilemmas in Fiction. Hector Salamanca would be happy (not really) to see the thirteenth situation: “Enmity of Kinsmen,” the first example of which is “Hatred of Brothers” (49).
One type of fictional dilemma that seems to be particularly salient in American society today pits our impulse to punish wrongdoers against our admiration for people with exceptional abilities. Characters like Walter White in Breaking Bad win us over with qualities like altruism, resourcefulness, and ingenuity—but then they go on to behave in strikingly, though somehow not obviously, immoral ways. Variations on Conan Doyle’s Sherlock Holmes abound; he’s the supergenius who’s also a dick (get the double-entendre?): the BBC’s Sherlock (by far the best), the movies starring Robert Downey Jr., the upcoming series featuring an Asian female Watson (Lucy Liu)—plus all the minor variations like The Mentalist and House.
Though the idea that fiction is a type of low-stakes training simulation to prepare people cognitively and emotionally to take on difficult social problems in real life may not seem all that earthshattering, conceiving of stories as analogous to Inuit moral dilemmas designed to exercise children’s moral reasoning faculties can nonetheless help us understand why worries about the examples set by fictional characters are so often misguided. Many parents and teachers noisily complain about sex or violence or drug use in media. Academic literary critics condemn the way this or that author portrays women or minorities. Underlying these concerns is the crude assumption that stories simply encourage audiences to imitate the characters, that those audiences are passive receptacles for the messages—implicit or explicit—conveyed through the narrative. To be fair, these worries may be well placed when it comes to children so young they lack the cognitive sophistication necessary for separating their thoughts and feelings about protagonists from those they have about themselves, and are thus prone to take the hero for a simple model of emulation-worthy behavior. But, while Inuit adults communicate to children that they can be trusted to arrive at a right or moral solution, the moralizers in our culture betray their utter lack of faith in the intelligence and conscience of the people they try to protect from the corrupting influence of stories with imperfect or unsavory characters.
This type of self-righteous and overbearing attitude toward readers and viewers strikes me as more likely by orders of magnitude to provoke defiant resistance to moral lessons than the North Baffin’s isumaqsayuq approach. In other words, a good story is worth a thousand sermons. But if the moral dilemma at the core of the plot has an easy solution—if you can say precisely what the moral of the story is—it’s probably not a very good story.
The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad
Walter White from “Breaking Bad” stands alongside other anti-heroes, both in pop culture and in literary classics like “Lolita” and “Heart of Darkness,” raising for us the question of why we find a character’s descent into evil so riveting.
Even non-literary folk think they know what Nabokov’s Lolita is about, but if you’ve never read it you really have no idea. If ever a work of literature transcended its topic and confounded any attempt at neat summary, this is it. Over the past half century, many have wondered why a novelist with linguistic talents as prodigious as Nabokov’s would choose to detail the exploits of such an unsavory character—that is, unless he shared his narrator’s unpardonable predilection (he didn’t). But Nabokov knew exactly what he was doing when he created Humbert Humbert, a character uniquely capable of taking readers beyond the edge of the map of normal human existence to where the monsters be. The violation of taboo—of decency—is integral to the peculiar and profound impact of the story. Humbert, in the final scene of the novel, attempts to convey the feeling as he commits one last criminal act:
The road now stretched across open country, and it occurred to me—not by way of protest, not as a symbol, or anything like that, but merely as a novel experience—that since I had disregarded all laws of humanity, I might as well disregard the rules of traffic. So I crossed to the left side of the highway and checked the feeling, and the feeling was good. It was a pleasant diaphragmal melting, with elements of diffused tactility, all this enhanced by the thought that nothing could be nearer to the elimination of basic physical laws than driving on the wrong side of the road. In a way, it was a very spiritual itch. Gently, dreamily, not exceeding twenty miles an hour, I drove on that queer mirror side. Traffic was light. Cars that now and then passed me on the side I had abandoned to them, honked at me brutally. Cars coming towards me wobbled, swerved, and cried out in fear. Presently I found myself approaching populated places. Passing through a red light was like a forbidden sip of Burgundy when I was a child. (306)
Thus begins a passage no brief quotation can begin to do justice to. Nor can you get the effect by flipping directly to the pages. You have to earn it by following Humbert for the duration of his appalling, tragic journey. Along the way, you’ll come to discover that the bizarre fascination the novel inspires relies almost exclusively on the contemptibly unpleasant, sickeningly vulnerable and sympathetic narrator, whom Martin Amis (himself a novelist specializing in unsavory protagonists) aptly describes as “irresistible and unforgiveable.” The effect builds over the three-hundred-odd pages as you are taken deeper and deeper into this warped world of compulsively embraced dissolution, until that final scene whose sublimity is rare even in the annals of great literature. Reading it, experiencing it, you don’t so much hold your breath as you simply forget to breathe.
When Walter White, the chemistry teacher turned meth cook in AMC’s excruciatingly addictive series Breaking Bad, closes his eyes and lets his now iconic Pontiac Aztek, the dull green of dollar bill backdrops, veer into the lane of oncoming traffic in the show’s third season, we are treated to a similar sense of peeking through the veil that normally occludes our view of the abyss, glimpsing the face of a man who has slipped through a secret partition, a man who may never find his way back. Walt, played by Bryan Cranston, originally broke bad after receiving a diagnosis of inoperable lung cancer and being told his remaining time on earth would be measurable in months rather than years. After seeing a news report of a drug bust and hearing from his DEA agent brother-in-law Hank how much money is routinely confiscated in such operations, Walt goes on a ride-along to get a closer look. Waiting in the backseat as Hank and his fellow agents raid a house an informant tipped them off to, Walt sees a former underachieving student of his named Jesse (Aaron Paul) sneaking out of the upstairs window of the house next door, where he’d slipped off to have sex with his neighbor. Walt keeps his mouth shut, letting Jesse escape, and then searches him out later that night to propose they work together to cook and sell meth. From the outset, Walt’s overriding purpose is to make a wad of cash before his cancer kills him, so his family won’t be left with nothing.
That’s the first plotline and the most immediate source of suspense. The fun of watching the show comes from seeing the unlikely and fractious but unbreakably profound friendship between Walt and Jesse develop—and from seeing how again and again the normally nebbishy Walt manages to MacGyver them both out of ever more impossible situations. The brilliant plotting by show creator Vince Gilligan and his writers is consistently worthy of the seamless and moving performances of the show’s actors. I can’t think of a single episode, or even a single scene, that falls flat. But what makes Breaking Bad more than another in the growing list of shows in the bucket-list or middle-class crime genres, what makes it so eminently important, is the aspect of the show dealing with the consequences to Walt’s moral character of his increasing entanglement in the underworld. The imminent danger to his soul reveals itself most tellingly in the first season when he returns to his car after negotiating a deal with Tuco, a murderously amped meth dealer with connections to a Mexican cartel. Jesse had already tried to set up an arrangement, with him and Walt serving as producers and Tuco as distributor, but Tuco, after snorting a sample off the blade of a bowie knife, beat Jesse to a pulp, stealing the pound of meth he’d brought to open the deal.
Walt returns to the same office after seeing Jesse in the hospital and hearing about what happened. After going through the same meticulous screening process and finding himself face to face with Tuco, Walt starts dictating terms, insisting that the crazy cartel guy with all the armed henchmen standing around pay extra for his partner’s pain and suffering. Walt has even brought along what looks like another pound of meth. When Tuco breaks into incredulous laughter, saying, “Let me get this straight: I steal your dope, I beat the piss out of your mule boy, and then you walk in here and bring me more meth?” Walt tells him he’s got one thing wrong. Holding up a large crystal from the bag Tuco has opened on his desk, he says, “This is not meth,” and then throws it against the outside wall of the office. The resultant explosion knocks everyone senseless and sends debris raining down on the street below. As the dust clears, Walter takes up the rest of the bag of fulminated mercury, threatening to kill them all unless Tuco agrees to the deal. He does.
After impassively marching back downstairs, crossing the street to the Aztek, and sidling in, Walt sits alone, digging handfuls of cash out of the bag Tuco handed him. Dropping the money back in the bag, he lifts his clenched fists and growls out a long, triumphant “Yeah!” sounding his barbaric yawp over the steering wheel. And we in the audience can’t help sharing his triumph. He’s not only secured a distributor—albeit a dangerously unstable one—opening the way for him to make that pile of money he wants to leave for his family; he’s also put a big scary bully in his place, making him pay, literally, for what he did to Jesse. At a deeper level, though, we also understand that Walt’s yawp is coming after a long period of him being a “hounded slave”—even though the connection with that other Walt, Walt Whitman, won’t be highlighted until season 3.
The final showdown with Tuco happens in season 2 when he tries to spirit Walter and Jesse off to Mexico after the cops raid his office and arrest all his guys. Once the two hapless amateurs have escaped from the border house where Tuco held them prisoner, Walt has to come up with an explanation for why he’s been missing for so long. He decides to pretend to have gone into a fugue state, which had him mindlessly wandering around for days, leaving him with no memory of where he’d gone. To sell the lie, he walks into a convenience store, strips naked, and stands with a dazed expression in front of the frozen foods. The problem with faking a fugue state, though, is that the doctors they bring you to will be worried that you might go into another one, and so Walt escapes his captor in the border town only to find himself imprisoned again, this time in the hospital. In order to be released, he has to convince a psychiatrist that there’s no danger of recurrence. After confirming with the therapist that he can count on complete confidentiality, Walt confesses that there was no fugue state, explaining that he didn’t really go anywhere but “just ran.” When asked why, he responds,
Doctor, my wife is seven months pregnant with a baby we didn't intend. My fifteen-year old son has cerebral palsy. I am an extremely overqualified high school chemistry teacher. When I can work, I make $43,700 per year. I have watched all of my colleagues and friends surpass me in every way imaginable. And within eighteen months, I will be dead. And you ask why I ran?
Thus he covers his omission of one truth with the revelation of another truth. In the show’s first episode, we see Walt working a side job at a car wash, where his boss makes him leave his regular post at the register to go outside and scrub the tires of a car—which turns out to belong to one of his most disrespectful students. At home, his wife Skyler (played by Anna Gunn) insists he tell his boss at the carwash he can’t keep working late. She also nags him for using the wrong credit card to buy printer ink. When Walt’s old friends and former colleagues, Elliott and Gretchen Schwartz, who it seems have gone on to make a mighty fortune thanks in large part to Walt’s contribution to their company, offer to pay for his chemotherapy, Skyler can’t fathom why he would refuse the “charity.” We find out later that Walt and Gretchen were once in a relationship and that he’d left the company indignant with her and Elliott for some reason.
What gradually becomes clear is that, in addition to the indignities he suffers at school and at the car wash, Skyler is subjecting him to the slow boil of her domestic despotism. At one point, after overhearing a mysterious phone call, she star-sixty-nines Jesse, and then confronts Walt with his name. Walt covers the big lie with a small one, telling her that Jesse has been selling him pot. When Skyler proceeds to nag him, Walt, with his growing sense of empowerment, asks her to “climb down out of my ass.” Not willing to cede her authority, she later goes directly to Jesse’s house and demands that he stop selling her husband weed. “Good job wearing the pants in the family,” Jesse jeers at him later. But it’s not just Walt she pushes around; over the first four seasons, Skyler never meets a man she doesn’t end up bullying, and she has the maddening habit of insisting on her own reasonableness and moral superiority as she does it.
In a scene from season 1 that’s downright painful to watch, Skyler stages an intervention, complete with a “talking pillow” (kind of like the conch in Lord of the Flies), which she claims is to let all the family members express their feelings about Walt’s decision not to get treatment. Of course, when her sister Marie goes off-message, suggesting chemo might be a horrible idea if Walt’s going to die anyway, Skyler is outraged. The point of the intervention, it becomes obvious, is to convince Walt to swallow his pride and take Elliott and Gretchen’s charity. As Skyler and Marie are busy shouting at each other, Walt finally stands up and snatches the pillow. Just as Marie suggested, he reveals that he doesn’t want to spend his final days in balding, nauseated misery, digging his family into a financial hole only to increase his dismal chances of surviving by a couple percentage points. He goes on,
What I want—what I need—is a choice. Sometimes I feel like I never actually make any of my own. Choices, I mean. My entire life, it just seems I never, you know, had a real say about any of it. With this last one—cancer—all I have left is how I choose to approach this.
This sense of powerlessness is what ends up making Walt dangerous, what makes him feel that all-consuming exultation and triumph every time he outsmarts some intimidating criminal, every time he beats a drug kingpin at his own game. He eventually relents and gets the expensive treatment, but he pays for it his own way, on his own terms. Of course, he can't tell Skyler where the money is really coming from. One of the odd things about watching the show is realizing, during the scenes when Walt is fumblingly trying to get his wife to back off, that you can’t help hoping he escapes her inquisition so he can go take on one of those scary gun-toting drug dealers again.
One of the ironies that emerge in the two latest seasons is that when Skyler finds out Walt’s been cooking meth, her decision not to turn him in is anything but reasonable and moral. She even goes so far as to insist that she be allowed to take part in his illegal activities in the role of a bookkeeper, so she can make sure the laundering of Walt’s money doesn’t leave a trail leading back to the family. Walt already has a lawyer, Saul Goodman (one of the best characters), who takes care of the money, but she can’t stand being left out—so she ends up bullying Saul too. When Walt points out that she doesn’t really need to be involved, that he can continue keeping her in the dark so she can maintain plausible deniability, she scoffs, “I’d rather have them think I’m Bonnie What’s-her-name than some complete idiot.” In spite of her initial reluctance, Skyler reveals she’s just like Walt in her susceptibility to the allure of crime, a weakness born of disappointment, indignity, and powerlessness. And, her guilt-tripping jabs at Walt for his mendacity notwithstanding, she turns out to be a far better liar than he is—because, as a fiction writer manqué, she actually delights in weaving and enacting convincing tales.
In fact, and not surprisingly considering the title of the series is Breaking Bad, every one of the main characters—except Walter Jr.—eventually turns to crime. Hank, frustrated at his ongoing failure to track down the source of the mysterious blue meth that keeps turning up on the streets, discovers that Jesse is somehow involved, and then, after Walt and Jesse successfully manage to destroy the telltale evidence, he loses his temper and beats Jesse so badly he ends up in the hospital again. Marie, Skyler’s sister and Hank’s wife, is a shoplifter, and, after Hank gets shot and sinks into a bitter funk as he recovers, she starts posing as someone looking for a new home, telling elaborate lies about her life story to real estate agents as she pokes around with her sticky fingers in all the houses she visits. Skyler, even before signing on to help with Walt’s money laundering, helps Ted Beneke, the boss she has an affair with, cook his own books so he can avoid paying taxes. Some of the crimes are understandable in terms of harried or slighted people lashing out or acting up. Some of them are disturbing for how reasonably the decision to commit them is arrived at. The most interesting crimes on the show, however, are the ones that are part response to wounded pride and part perfectly reasonable—both motives clear but impossible to disentangle.
The scene that has Walt veering onto the wrong side of the road à la Humbert Humbert occurs in season 3. Walt was delighted when Saul helped him set up an arrangement with Gustavo Fring similar to the one he and Jesse had with Tuco. Gus is a far cooler customer, and far more professional, disguising his meth distribution with his fast-food chicken chain Los Pollos Hermanos. He even hooks Walt up with a fancy underground lab, cleverly hidden beneath an industrial laundry. Walt had learned some time ago that his cancer treatment was working, and he’d already made over a million dollars. But what he comes to realize is that he’s already too deep in the drug-producing business, that the only thing keeping him alive is his usefulness to Gus. After a meeting in which he expresses his gratitude, Walt asks Gus if he can continue cooking meth for him. No longer in it just long enough to make money for his family before he dies, Walt drives away with no idea how long he will live and an indefinite commitment to keeping up his criminal activities. All his circumstances have changed. The question becomes how Walt will change to adapt to them. He closes his eyes and doesn’t open them until he hears the honk of a semi.
The most moving and compelling scenes in the series are the ones featuring Walt and Jesse’s struggles with their consciences as they’re forced to do increasingly horrible things. Jesse wonders how life can have any meaning at all if someone can do something as wrong as killing another human being and then just go on living like nothing happened. Both Walt and Jesse at times give up caring whether they live or die. Walt is actually so furious after hearing from his doctor that his cancer is in remission that he goes into the men’s room and punches the paper towel dispenser until the dents he’s making become smudged with blood from his knuckles. In season 3, he becomes obsessed with the “contamination” of the meth lab, which turns out to be nothing but a fly. Walt refuses to cook until they catch it, so Jesse sneaks him some sleeping pills to make sure they can finish cooking the day’s batch. In a narcotic haze, Walt reveals how lost and helpless he’s feeling. “I missed it,” he says.
There was some perfect moment that passed me right by. I had to have enough to leave them—that was the whole point. None of this makes any sense if I didn’t have enough. But it had to be before she found out—Skyler. It had to be before that.
“Perfect moment for what?” Jesse asks. “Are you saying you want to die?” Walt responds, “I’m saying I’ve lived too long. –You want them to actually miss you. You want their memories of you to be… But she just won’t understand.”
The theme of every type of human being having the potential to become almost any other type of human being runs throughout the series. Heisenberg, the nom de guerre Walt chooses for himself in the first season, refers to Werner Heisenberg, the physicist whose uncertainty principle in quantum mechanics holds that certain pairs of a particle’s properties can never be pinned down at the same time: the more precisely you fix one, the more indeterminate the other becomes.
“Song of Myself,” a poem in Walt Whitman’s Leaves of Grass, the one in which he says he is a “hounded slave” and in which he sounds his “barbaric yawp over the roofs of the world,” is about how the imagination makes it possible for us to empathize with, and even become, almost anyone. Whitman writes,
“In all people I see myself, none more and not one a barleycorn less/ and the good or bad I say of myself I say of them” (section 20). Another line, perhaps the most famous, reads, “I am large, I contain multitudes” (section 51).
Walter White ends up reading Leaves of Grass in season 3 of Breaking Bad after a lab assistant hired by Gus introduces him to the poem “When I Heard the Learn’d Astronomer” to explain his love for the “magic” of chemistry. And in the next season Walt, in one of the only instances where he actually stands up to and silences Skyler, sings a rather surprising song of his own self. When she begins to suspect he’s in imminent danger, she tries to convince him he’s in over his head and that he should go to the police. Having turned away from her, he turns back, almost a different man entirely, and delivers a speech that has already become a standout moment in television history.
Who are you talking to right now? Who is it you think you see? Do you know how much I make a year? I mean, even if I told you, you wouldn’t believe it. Do you know what would happen if I suddenly decided to stop going into work? A business big enough that it could be listed on the Nasdaq goes belly up, disappears—it ceases to exist without me. No, you clearly don’t know who you’re talking to, so let me clue you in. I am not in danger, Skyler—I am the danger. A guy opens his door and gets shot and you think that of me? No, I am the one who knocks.
Just like the mysterious pink teddy bear floating in the Whites’ pool at the beginning of season 2, half its face scorched, one of its eyes dislodged, Walt turns out to have a very dark side. Though he manages to escape Gus at the end of season 4, the latest season (none of which I’ve seen yet) opens with him deciding to start up his own meth operation. Still, what makes this scene as indelibly poignant as it is shocking (and rousing) is that Walt really is in danger when he makes his pronouncement. He’s expecting to be killed at any moment.
Much of the commentary about the show to date has focused on the questions of just how bad Walt will end up breaking and at what point we will, or at least should, lose sympathy for him. Many viewers, like Emily Nussbaum, jumped ship at the end of season 4 when it was revealed that Walt had poisoned the five-year-old son of Jesse’s girlfriend as part of his intricate ruse to beat Gus. Jesse had been cozying up to Gus, so Walt made it look like he poisoned the kid because he knew his partner went berserk whenever anyone endangered a child. Nussbaum writes,
When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)
Nussbaum’s judgy feminist blinders preclude any recognition of Skyler’s complicity and overweening hypocrisy. And she leaves out some pretty glaring details to bolster her case against Walt—most notably, that before we learn that it was Walt who poisoned Brock, we find out that the poison used was not the invariably lethal ricin Jesse thought it was, but the far less deadly lily of the valley. Nussbaum strains to account for the show’s continuing appeal—its fascination—now that, by her accounting, Walt is beyond redemption. She suggests the diminishing screen time he gets as the show focuses on other characters is what saves it, even though the show has followed multiple plotlines from the beginning. (In an earlier blog post, she posits a “craving in every ‘good’ man for a secret life” as part of the basis for the show’s appeal—reading that, I experienced a pang of sympathy for her significant other.) What she doesn’t understand or can’t admit, placing herself contemptuously above the big bad male lead and his trashy, cable-addled fans, trying to watch the show in the same snobbish fashion that many superciliously insecure people watch Jerry Springer or Cops, is that the cognitive dissonance she refers to is precisely what makes the show the best among all the other great shows in the current golden age lineup.
No one watching the show would argue that poisoning a child is the right thing to do—but in the circumstances Walt finds himself in, where making a child dangerously sick is the only way he can think of to save himself, his brother-in-law, Jesse, and the rest of his family, well, Nussbaum’s condemnation starts to look pretty glib. Gus even gave Walt the option of saving his family by simply not interfering with the hit he put on Hank—but Walt never even considered it. This isn’t to say that Walt isn’t now or won’t ever become a true bad guy, as the justifications and cognitive dissonance—along with his aggravated pride—keep ratcheting him toward greater horrors, but then so might we all.
The show’s irresistible appeal comes from how seamlessly it weaves a dream spell over us that has us following right alongside Walt as he makes all these impossible decisions, stepping with him right off the map of the known world. The feeling can only be described as sublime, as if individual human concerns, even the most immensely important of them like human morality, are far too meager as ordering principles to offer any guidance or provide anything like an adequate understanding of the enormity of existence. When we return from this state of sublimity, if we have the luxury of returning, we experience the paradoxical realization that all our human concerns—morality, the sacrosanct innocence of children, the love of family—are all the more precious for being so pathetically meager.
I suspect no matter how bad Walter’s arrogance gets, how high his hubris soars, or how horribly he behaves, most viewers—even those like Nussbaum who can’t admit it—will still long for his redemption. Some of the most intensely gratifying scenes in the series (I’d be embarrassed if anyone saw how overjoyed I got at certain points watching this TV show) are the ones that have Walt suddenly realizing what the right thing to do is and overcoming his own cautious egotism to do it. But a much more interesting question than when, if ever, it will be time to turn against Walt is whether he can rightly be said to be the show’s moral center—or whether and when he ceded that role to Jesse.
If Walt really is ultimately to be lost to the heart of darkness, Jesse will be the one who plays Marlow to his Kurtz. Though the only explicit allusion is Hank’s brief mention of Apocalypse Now, there is at least one interesting and specific parallel between the two stories. Kurtz, it is suggested, was corrupted not just by the wealth he accumulated raiding native villages and stealing ivory, but also by the ease with which he got his native acolytes to worship him. He succumbed to the temptation of “getting himself adored,” as Marlow sneers. Kurtz had written a report for the International Society for the Suppression of Savage Customs, and Marlow explains,
The opening paragraph, however, in the light of later information, strikes me now as ominous. He began with the argument that whites, from the point of development we had arrived at, “must necessarily appear to them [savages] in the nature of supernatural beings—we approach them with the might as of a deity.”
What Marlow discovers is that Kurtz did in fact take on the mantle of a deity after being abandoned to the jungle for too long,
until his nerves went wrong, and caused him to preside at certain midnight dances ending with unspeakable rites, which—as far as I reluctantly gathered from what I heard at various times—were offered up to him—do you understand?—to Mr. Kurtz himself.
The source of Walter White’s power is not his origins in a more advanced civilization but his superior knowledge of chemistry. No one else can make meth as pure as Walt’s. He even uses chemistry to build the weapons he uses to prevail over all the drug dealers he encounters as he moves up the ranks over the seasons.
Jesse has his moments of triumph too, but so far he’s always been much more grounded than Walt. It’s easy to imagine a plotline that has Jesse being the one who finally realizes that Walt “has to go,” the verdict he renders for anyone who imperils children. Allowing for some (major) dialectal adjustments, it’s even possible to imagine Jesse confronting his former partner after he’s made it to the top of a criminal empire, and thinking along lines similar to Marlow’s when he finally catches up to Kurtz:
I had to deal with a being to whom I could not appeal in the name of anything high or low. I had, even like the niggers, to invoke him—himself—his own exalted and incredible degradation. There was nothing either above or below him, and I knew it. He had kicked himself loose of the earth. Confound the man! he had kicked the very earth to pieces. He was alone, and I before him did not know whether I stood on the ground or floated in the air… But his soul was mad. Being alone in the wilderness, it had looked within itself, and, by heavens! I tell you, it had gone mad.
Kurtz dies whispering, “The horror, the horror,” as if reflecting in his final moments on all the violence and bloodshed he’d caused—or as if making his final pronouncement about the nature of the world he’s departing. Living amid such horrors, though, these glimpses into sublime enormities help us realize that our only recourse is to embrace our humanity more fully. We can never comfortably dismiss these stories of men who become monsters as mere cautionary tales. That would be too simple. They gesture toward something much more frightening and much more important than that. Humbert Humbert hints as much in the closing of Lolita.
This then is my story. I have reread it. It has bits of marrow sticking to it, and blood, and beautiful bright-green flies. At this or that twist of it I feel my slippery self eluding me, gliding into deeper and darker waters than I care to probe. (308)
But he did probe those forbidden waters. And we’re all better for it. Both Kurtz and Humbert are doomed from the first page of their respective stories. But I seriously doubt I’m alone in holding out hope that when Walter White finds himself edging up to the abyss, Jesse, or Walter Jr., or someone else is there to pull him back—and Skyler isn’t there to push him over. Though I may forget to breathe I won’t hold my breath.
The Imp of the Underground and the Literature of Low Status
A famous scene in “Notes from the Underground” echoes a famous study comparing people’s responses to an offense. What are the implications for behavior and personality of having low social status, and how does that play out in fiction? Is Poe’s “Imp of the Perverse” really just an example of our inborn defiance, our raging against the machine?
The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason begs to be pondered. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about the woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.
The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.
The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to discern between right and wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.
Unlike our hierarchical ape cousins, all present-day peoples who still live in small groups as nomadic foragers—in societies like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human—collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarianism,
Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)
Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,
hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)
In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.
None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:
I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)
The billiard player's failure to acknowledge his autonomy outrages the narrator, who then considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,
I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)
The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.
The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormenter's reputation through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene which makes it still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,
And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)
So the indignity, it seems, was not born of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.
The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricy purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.
Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)
But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.
Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egoism” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interests. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society might satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,
That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)
The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,
One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)
Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,
Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)
This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.
The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison of how southern and northern males respond, cognitively and physiologically, to being bumped in a hallway and casually called an asshole. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measurements both before and after the encounter—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the southern “culture of honor” originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,
Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)
More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.
An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)
And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.
Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided the participants of his study into high and low status categories and then asked them how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:
When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)
The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.
Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. In a group of nomadic foragers, a man endeavoring to knock a would-be alpha down a few pegs could have counted on the endorsement of most of the other group members. And the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice.
Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.
Also read:
CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE
The Tree Climber: A Story Inspired by W.S. Merwin
Clare loves climbing trees, so much that people think she’s crazy. Then one day while climbing, she makes a discovery, and it changes everything in the small town where she grew up.
Everyone in Maplewood knew Clare as the little girl who was always climbing trees. But she was so dainty and graceful and reserved that at first it surprised them all to see how deftly, even artfully, she could make her way up even the most formidable ones. It wasn’t just the trees in her family’s yard either. She climbed everywhere. “Oh, that’s just Clare Glendale,” people would say. “She’s got tree-climbing craziness.”
A lot of stories circulated about how Clare first came to love going up and down the trees of the neighborhood and the woods surrounding it. Some said her mother told her stories about how fairies lived up in the canopies, so she was constantly going up to visit them. Some said she once escaped bullies at school by climbing a tree on the playground, so now she feels secure hidden high in the foliage. And some said it began one day when she espied a lost treasure—a toy or an heirloom—from her high perch and now re-experiences that feeling of relief and reconnection whenever she’s up in the highest branches.
Even when she was still just a little girl, though, what kept her climbing was much more complicated than any of these neighborly conjecturings could comprehend. Every child eventually climbs a tree. Clare did it the first time because she’d seen some girls on her way home from school dangling from a low branch and thought it looked appealing in its manageable absurdity. The girls were squealing and kicking their earth-freed feet.
Approaching the big sycamore in her own yard just minutes later, she struggled to figure out how to make it up through the lowermost layer of branches. That the sequence of grips and reaches and toeholds she would have to traverse wasn’t clear from the outset yet gradually revealed itself through her concentrated, strenuous, grasping efforts, like a tactile puzzle she needed her whole body to solve—that was what she remembered as the earliest source of pleasure in climbing. There was also a feeling of overabundant energetic joy in the physical exertion, difficult but surmountable, of hoisting herself with her hands and arms, swinging and pushing herself with her toes braced against the bark. When she made it up near the highest tapering branches, too tiny to hold her scanty weight, she felt she’d succeeded in overcoming earthly constraints she’d never even been aware of up till then. As she rested at last, the breeze blew against her light wash of sweat, setting her skin aglow with the dissipating pulsing heat, like soundless music emanating from her blood into the air, even as she felt her strained limbs shimmer with life, discovered anew through the dancing ache. She breathed in the cascading sighs of the dry undulating sun-sparkled leaves, millions of tiny mirages, and thought about how they reminded her of ocean tides she’d only ever seen on TV. She would dream that night of dancing alone by a high-built bonfire on a moonlit beach.
But whenever she thought about that day over the ensuing years, she was never really sure the memory was of the first time she climbed a tree. She went back to it for an explanation because not having an explanation, a clear motive, some way to justify herself in depth of detail should anyone show a willingness, or even an eagerness, to hear her out—it bothered her, making her feel her youthful isolation as a lifelong sentence, hopeless.
When high school first started for Clare, she stopped climbing trees because it was too odd a habit, too conspicuous. But then one day she climbed again, after several months, and things seemed right with the world in a way that forced her to realize things weren’t at all right with the world all those months she hadn’t been climbing. She thought, “Oh well, I guess I’ll just have to let them think I’m weird.”
Maplewood was a good school in a nice neighborhood. Clare’s classmates talked among themselves about how she climbed trees all the time—as something like a hobby. They knew she sometimes took her digital camera with her. They also knew she sometimes did drawings of parts of the neighborhood or the city beyond from the tops of the trees. They knew she brought little notebooks up there sometimes too and wrote what they assumed was poetry. Sometimes, her classmates even tried to talk to Clare about her tree climbing, asking her questions, saying they’d like to see her photographs or drawings, read her poems. No one was ever cruel, not even the boys. She heard herself described as “artistic” in a way that made her inwardly cringe because it sounded dismissive, impatient—artistic even though she’d never shared any art with any of them because she sensed they were just being polite when they said they’d like to see or read it.
But Clare was glad to have all those stories explaining away her climbing in circulation, even though they were all wrong and her not being given to correcting them felt a little like dishonesty, because if someone had simply asked her, “Why do you climb trees all the time?” she wouldn’t have been able to answer. Not that she minded thinking about it. She wouldn’t have minded talking about it either, except that it was impossible to discuss honestly without being extravagant, self-important, braggy, snobby—all the most horrible things a high school kid could be. She knew for sure she didn’t climb because she wanted to be seen as interesting. More and more she didn’t want to be seen at all.
Sometimes, though, she imagined whole conversations. “Well, I’d say I climb trees because I like the feeling. But I don’t always. Sometimes my hands hurt, or I get up really high and I’m afraid I’ll fall, or it’s just really hot, or really cold. I get scraped up a lot. The view is often nice, but not always. Sometimes I go into these really peaceful trances, like dreaming while I’m awake. But that’s only rarely. Really, most of the times I go up it’s not pleasant at all. But when I don’t do it for a while I feel strange—like, my soul is stuck in a tight, awful sweater—you know, all itchy.” She imagines herself doing a dance to convey the feeling, all slithering arms and squirming fingers.
Every time she thinks about her itchy soul she laughs, thinking, “Probably aren’t too many people out there who’d understand that.”
It was when Clare was fifteen, months away from getting her license, that she overheard an older boy at school, one who wasn’t part of the main group, but sort of cute, sort of interesting, compare her tree climbing to something awful, something that made her face go hot in a way that had her ducking away so no one would see how brightly it burned. It was an offhand joke for the benefit of his friend. She couldn’t really be mad about it. He didn’t mean it to be hurtful. He didn’t even know she would hear him.
She went to the woods after school, hesitated before a monster of a tree she knew well but had always been too intimidated by to attempt, gave it a look of unyielding intensity, and then reached out and pulled herself away from the ground. The grips were just out of reach at every step, making her have to grope beyond the span of her arms, strain, and even lunge. But the tree was massive, promising to take her deeper into the bottomless sky than she’d ever yet plunged.
She paused only long enough at each interval to decide on the best trajectory. In place of fear was a fury of embarrassment and self-loathing. She couldn’t fall, it seemed to her, because there would be a sense of relief if she did, and she just didn’t have any relief coming to her. Somehow she wasn’t deserving, or worthy, of relief. She reached, she lunged, she grasped, she pulled. The ache in her hands and arms enflamed her pathetic fury. And up she flew, gritting her teeth as she pulled down the sky.
It was a loose flank of bark coming away from the trunk with a curt, agonized cough that left her dangling from one arm, supported only by a couple of cramped and exhausted fingers. Seeing her feet reach out for the trunk, rubbing against it like the paws of a rain-drenched dog scratching the door of an empty house, and feeling each attempt at getting purchase only weaken her lame grip, she began to envisage the impending meeting of her body with the ground. Resignation just barely managed to nudge panic aside.
But Clare never fully gave up. As reconciled as she was with the fall that felt like justice for the sake of some invisible sacred order she’d blunderingly violated, she nevertheless made one last desperate maneuver, which was to push herself away from the tree with as much force as she could muster with her one leg. What came next was brutal, senseless, lashing, violent, vindictive chaos. The air itself seemed bent on ripping her to shreds. Her tiny voice was again and again blasted out of her in jarring horrific whimpers, each one a portion of her trapped life escaping its vessel. She thought she’d see the razor green streaking crimson so sudden and excruciating were the lacerating clawings of the branches.
And then it stopped. She stopped. She felt herself breathing. Her legs were hovering powerless, but they were some distance still from the ground, which was obscured by the swaying mass of leaves still separating her from the just end she had been so sure of. Weak and battered, she twisted in the hard, abrasive net that had partly caught her and partly allowed her to catch herself, without her even knowing she was capable of putting forth the effort to catch herself, and saw that she was bouncing some ways out from the trunk of the other tree she’d had the last ditch hope of leaping into. Looking back to the one she’d fallen from, she saw she hadn’t even fallen that far—maybe twenty feet she guessed. She hurried toward the sturdier parts of the branches holding her aloft and then made her way down.
Both feet on the ground, Clare turned back to retrace the course of her fall with her eyes, almost too afraid to breathe because breathing might reveal the mortal injury she still couldn’t believe she hadn’t sustained. After standing there until the momentousness of the occurrence dissolved into the hush of the forest, silent but for the insects’ evening calls and the millions upon millions of leaves sent atremble by the wayward wind, both of which seemed only to dimensionalize the silence, she began to walk, hesitantly at first but then with a determination born of an unnamed passion.
She lay awake all night, upstart thoughts and feelings surging, rolling over each other, crashing into shores of newly imagined possibility. The pain from her several less than severe injuries provoked inner crises and conflicts, setting her mind on a razor’s edge of rushing urgency. She had created a tear in the fabric of commonplace living—the insistent niceness she’d been struggling all her life to fold herself into. When the gray of dawn peeked into her room, she heard herself let loose the first half of a demented laugh before cutting off the second, lest her parents hear and start asking her questions of the sort now more than ever she would be hard-pressed to answer.
Clare now knew that all her life she had been accustoming herself to a feeling of inevitability, of fatedness, of gradual absorption with growing maturity into the normal flow of life she saw emblems of in every last one of the really nice houses she passed on the way to school. But this morning they all looked different, like so many strained denials of the only true inevitability, a veil protecting everyone from what they assumed could only be some hellish nightmare.
When after school that day Clare approached Dean Morris, the older boy who’d made the crude joke about her climbing, she managed to startle him with the wild intensity in her unaccustomedly direct glare. “You have to come with me,” she said. “I have something to show you.”
And that’s how Dean became a crazy tree climber too. At least, that’s what everyone assumed happened to him. He disappeared with Clare that day and kept on disappearing with her almost every day thereafter for the next two years. It was only two weeks after their first climb together that he showed up at home injured for the first time. He told his parents he’d broken his arm in a motorcycle crash, but he refused to name the friend who had allowed him to ride, insisting that he didn’t want to get anyone into trouble and that it was his own fault.
One or both of them were always showing up at school or at the weekend jobs they each had with mysterious welts on their arms or faces. Their parents must have been in an agony of constant exasperated worry those last two years as, despite their best efforts to put an end to the excursions into the woods and the injurious goings-on therein, they continued having to deal not only with their children’s defiance but with the looming danger it exposed them to. But everyone at school saw something to admire in the way Clare and Dean so uncompromisingly settled on the existence they would make theirs. The girls remarked on Dean’s consuming and unselfconscious devotion to Clare and were envious. He always seemed to gravitate naturally to a post standing guard close by her, intensely, passionately protective. What would it be like to see your first love nearly fall to her death again and again?
For those two years, they formed their own region apart from the life of the school and the neighborhood. No trace of teenage self-consciousness or awkwardness remained. They seemed as though they were in the midst of actuating some grand design, some world-saving project only they could be relied on to handle and no one else was even allowed to know about. They knew themselves. They loved each other. And they together exuded an air of contented self-sufficiency that made all the other students somehow more hopeful.
A lot of people said when it was over that Clare must have run away and started a new life in some far-away place. The truth is no one knows what happened to her. I used to think about those two all the time. When I came back to Maplewood years later, it was just on an odd whim. My parents had moved away soon after I’d left for college. I wasn’t in touch with any friends who still lived there. For some reason, I just up and decided to spend a day driving, get a hotel for the night, and maybe make a weekend of visiting my old home town. The first place I went was the woods where Clare and Dean spent their last moments together.
Everyone knew the story no one professed to believe. Beyond that, there were quite a few versions of an official account. Some held that an animal, or maybe a few coyotes, had dragged Clare’s body away. Others insisted that she was still alive and that seeing what had just happened to Dean she panicked and fled and never came back. Maybe she was afraid they’d blame her. Maybe she blamed herself and couldn’t bear to face his parents. There was something desperate about this version of the story. The fact seemed plain that Clare had died that day too. I always imagined Dean standing high up in the tree, seeing what happened to Clare, and yet never for a moment hesitating to follow her no matter where it took him.
I found myself thinking surprisingly little about what had happened on their last trip to the woods together as I drove around town, stopping by the school to see if anyone was still around to let me inside to indulge my nostalgia. What occupied my mind was rather the way those two were together, locked in to each other, suddenly mature. They seemed—what’s the word? Knowing. They both knew something the rest of us, even our teachers and parents, simply didn’t know—or couldn’t know. What that something might be is a question I put to myself with some frequency to this day.
It was late on a Friday night when I tried the door at the school. It was locked. I ended up wandering around the town in dreamy reverie without running into anyone I recognized and could invite to sit with me somewhere to reminisce. That was fine by me though.
By Sunday afternoon, I was beginning to question my decision to come. I didn’t exactly have all the time in the world to amble around my old stomping grounds in a feckless daze. Impatient to get back on the road, I responded with mild annoyance to being recognized and addressed at the gas station. When I saw it was Bret Krause, though, the boy Dean had told that crude joke to about Clare’s climbing, his best friend right up until Clare pulled him away into that separate world of theirs, my curiosity got the best of me. I was amused when he pulled out his phone to call his wife and let her know he would be stopping by a bar to have a drink with an old friend from school. Bret hadn’t exactly been a ladies’ man back in our glory days, but the picture he showed me was of a striking but kind-eyed beauty. It’s funny how things like that seem to work themselves out.
I enjoyed hearing all about Bret’s life since high school. We’d never been close friends, but we’d had classes together and knew each other well enough for casual exchanges of greetings and pleasant small talk. I do have to confess, though, I was glad when he spontaneously began to talk about Dean. Anything I knew about the story had come to me in rumors thrice removed, and as curious as I’d been after they found Dean’s body in the woods that day it seemed to me untactful to barrage anyone with questions—though plenty of others apparently hadn’t felt the same scruple.
Bret had gone away to college and returned to Maplewood to teach Algebra, of all things. He brought back with him the girlfriend he would marry soon afterward. They planned on having a couple kids but weren’t ready just yet. After letting us into the school, he took us directly to the spot where Clare had walked up to Dean and told him to come with her because she had something to show him.
“I was a little worried coming back here,” he said. “I thought it might be too painful to be reminded of him constantly. At the same time, though—I know it’s a weird thing to say—it was sort of like his memory was one of the things that drew me back.”
“I know what you mean.”
“For a long time, I really thought any day we were going to hear that Clare had just shown up somewhere. It’s just such a strange thing in this day and age.”
Bret turned the key to let us into the classroom where he taught his students quadratic equations, the same classroom where I’d learned pretty much the same lessons all those years ago, with Clare and Dean sitting four rows behind me in the back corner by the windows.
“What do you think happened to her?” I couldn’t help asking.
Bret laughed good-humoredly. “Everyone knows what happened to her,” he said. “Haven’t you heard the story?”
******
Inspired (partly) by:
Recognitions
Stories come to us like new senses
a wave and an ash tree were sisters
they had been separated since they were children
but they went on believing in each other
though each was sure that the other must be lost
they cherished traits of themselves that they thought of
as family resemblances features they held in common
the sheen of the wave fluttered in remembrance
of the undersides of the leaves of the ash tree
in summer air and the limbs of the ash tree
recalled the wave as the breeze lifted it
and they wrote to each other every day
without knowing where to send the letters
some of which have come to light only now
revealing in their old but familiar language
a view of the world we could not have guessed at
but that we always wanted to believe
-from W.S. Merwin's The Shadow of Sirius
Also read:
Those Most Apt to Crash: A Halloween Story
The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins – Part 3 of A Crash Course in Multilevel Selection Theory
In “Moral Origins,” anthropologist Christopher Boehm lays out the mind-blowing theory that humans evolved to be cooperative in large part by developing mechanisms to keep powerful men’s selfish impulses in check. These mechanisms included, in rare instances, capital punishment. Once the free-rider problem was addressed, groups could function more as a unit than as a collection of individuals.
In a 1969 account of her time in Labrador studying the culture of the Montagnais-Naskapi people, anthropologist Eleanor Leacock describes how a man named Thomas, who was serving as her guide and informant, responded to two men they encountered while far from home on a hunting trip. The men, whom Thomas recognized but didn’t know very well, were on the brink of starvation. Even though it meant ending the hunting trip early and hence bringing back fewer furs to trade, Thomas gave the hungry men all the flour and lard he was carrying. Leacock figured that Thomas must have felt at least somewhat resentful for having to cut short his trip and that he was perhaps anticipating some return favor from the men in the future. But Thomas didn’t seem the least bit reluctant to help or frustrated by the setback. Leacock kept pressing him for an explanation until he got annoyed with her probing. She writes,
This was one of the very rare times Thomas lost patience with me, and he said with deep, if suppressed anger, “suppose now, not to give them flour, lard—just dead inside.” More revealing than the incident itself were the finality of his tone and the inference of my utter inhumanity in raising questions about his action. (Quoted in Boehm 219)
The phrase “just dead inside” expresses how deeply internalized the ethic of sympathetic giving is for people like Thomas, who live in cultures much more similar than our own to those our earliest human ancestors created around 45,000 years ago, when they began leaving evidence of engaging in all the unique behaviors that are the hallmarks of our species. The Montagnais-Naskapi don’t qualify as an example of what anthropologist Christopher Boehm labels Late Pleistocene Appropriate, or LPA, cultures because they had been involved in fur trading with people from industrialized communities long before their culture was first studied by ethnographers. But Boehm includes Leacock’s description in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame because he believes Thomas’s behavior is in fact typical of nomadic foragers and because, infelicitously for his research, standard ethnographies seldom cover encounters like the one Thomas had with those hungry acquaintances of his.
In our modern industrialized civilization, people donate blood, volunteer to fight in wars, sign over percentages of their income to churches, and pay to keep organizations like Doctors without Borders and Human Rights Watch in operation even though the people they help live in far-off countries most of us will never visit. One approach to explaining how this type of extra-familial generosity could have evolved is to suggest that people who live in advanced societies like ours are, in an important sense, not in their natural habitat. Among evolutionary psychologists, it has long been assumed that in humans’ ancestral environments, most of the people individuals encountered would either be close kin who carried many genes in common, or at the very least members of a moderately stable group they could count on running into again, at which time they would be disposed to repay any favors. Once you take kin selection and reciprocal altruism into account, the consensus held, there was not much left to explain. Whatever small acts of kindness weren’t directed toward kin or done with an expectation of repayment were, in such small groups, probably performed for the sake of impressing all the witnesses and thus improving the social status of the performer. As the biologist Michael Ghiselin once famously put it, “Scratch an altruist and watch a hypocrite bleed.” But this conception of what evolutionary psychologists call the Environment of Evolutionary Adaptedness, or EEA, never sat right with Boehm.
One problem with the standard selfish gene scenario that has just recently come to light is that modern hunter-gatherers, no matter where in the world they live, tend to form bands made up of high percentages of non-related or distantly related individuals. In an article published in Science in March of 2011, anthropologist Kim Hill and his colleagues report the findings of their analysis of thirty-two hunter-gatherer societies. The main conclusion of the study is that the members of most bands are not closely enough related for kin selection to sufficiently account for the high levels of cooperation ethnographers routinely observe. Assuming present-day forager societies are representative of the types of groups our Late Pleistocene ancestors lived in, we can rule out kin selection as a likely explanation for altruism of the sort displayed by Thomas or by modern philanthropists in complex civilizations. Boehm offers us a different scenario, one that relies on hypotheses derived from ethological studies of apes and archeological records of our human prehistory as much as on any abstract mathematical accounting of the supposed genetic payoffs of behaviors.
In three cave paintings discovered in Spain that probably date to the dawn of the Holocene epoch around 12,000 years ago, groups of men are depicted with what appear to be bows lifted above their heads in celebration while another man lies dead nearby with one arrow from each of them sticking out of his body. We can only speculate about what these images might have meant to the people who created them, but Boehm points out that all extant nomadic foraging peoples, no matter what part of the world they live in, are periodically forced to reenact dramas that resonate uncannily well with these scenes portrayed in ancient cave art. “Given enough time,” he writes, “any band society is likely to experience a problem with a homicide-prone unbalanced individual. And predictably band members will have to solve the problem by means of execution” (253). One of the more gruesome accounts of such an incident he cites comes from Richard Lee’s ethnography of the !Kung Bushmen. After a man named /Twi had killed two men, Lee writes, “A number of people decided that he must be killed.” According to Lee’s informant, a man named =Toma (the symbols before the names represent clicks), the first attempt to kill /Twi was botched, allowing him to return to his hut, where a few people tried to help him. But he ended up becoming so enraged that he grabbed a spear and stabbed a woman in the face with it. When the woman’s husband came to her aid, /Twi shot him with a poisoned arrow, killing him and bringing his total body count to four. =Toma continues the story,
Now everyone took cover, and others shot at /Twi, and no one came to his aid because all those people had decided he had to die. But he still chased after some, firing arrows, but he didn’t hit any more…Then they all fired on him with poisoned arrows till he looked like a porcupine. Then he lay flat. All approached him, men and women, and stabbed his body with spears even after he was dead. (261-2)
The two most important elements of this episode for Boehm are the fact that the death sentence was arrived at through a partial group consensus which ended up being unanimous, and that it was carried out with weapons that had originally been developed for hunting. But this particular case of collectively enacted capital punishment was odd not just in how clumsy it was. Boehm writes,
In this one uniquely detailed description of what seems to begin as a delegated execution and eventually becomes a fully communal killing, things are so chaotic that it’s easy to understand why with hunter-gatherers the usual mode of execution is to efficiently delegate a kinsman to quickly kill the deviant by ambush. (261)
The prevailing wisdom among evolutionary psychologists has long been that any appearance of group-level adaptation, like the collective killing of a dangerous group member, must be an illusory outcome caused by selection at the level of individuals or families. As Steven Pinker explains, “If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate.” To demonstrate that some trait or behavior humans reliably engage in really is for the sake of the group as opposed to the individual engaging in it, there would have to be some conflict between the two motives—serving the group would have to entail incurring some kind of cost for the individual. Pinker explains,
It’s only when humans display traits that are disadvantageous to themselves while benefiting their group that group selection might have something to add. And this brings us to the familiar problem which led most evolutionary biologists to reject the idea of group selection in the 1960s. Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.
The ever-present potential for cooperative or altruistic group norms to be subverted by selfish individuals keen on exploitation is known in game theory as the free rider problem. To see how strong selfish individuals can lord over groups of their conspecifics we can look to the hierarchically organized bands great apes naturally form.
In groups of chimpanzees, for instance, an alpha male gets to eat his fill of the most nutritious food, even going so far at times as seizing meat from the subordinates who hunted it down. The alpha chimp also works to secure, as best he can, sole access to reproductively receptive females. For a hierarchical species like this, status is a winner-take-all competition, and so genes for dominance and cutthroat aggression proliferate. Subordinates tolerate being bullied because they know the more powerful alpha will probably kill them if they try to stand up for themselves. If instead of mounting some ill-fated resistance, however, they simply bide their time, they may eventually grow strong enough to more effectively challenge for the top position. Meanwhile, they can also try to sneak off with females to couple behind the alpha’s back. Boehm suggests that two competing motives keep hierarchies like this in place: one is a strong desire for dominance and the other is a penchant for fear-based submission. What this means is that subordinates only ever submit ambivalently. They even have a recognizable vocalization, which Boehm transcribes as the “waa,” that they use to signal their discontent. In his 1999 book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Boehm explains,
When an alpha male begins to display and a subordinate goes screaming up a tree, we may interpret this as a submissive act of fear; but when that same subordinate begins to waa as the display continues, it is an open, hostile expression of insubordination. (167)
Since the distant ancestor humans shared in common with chimpanzees likely felt this same ambivalence toward alphas, Boehm theorizes that it served as a preadaptation for the type of treatment modern human bullies can count on in every society of nomadic foragers anthropologists have studied. “I believe,” he writes, “that a similar emotional and behavioral orientation underlies the human moral community’s labeling of domination behaviors as deviant” (167).
Boehm has found accounts of subordinate chimpanzees, bonobos, and even gorillas banding together with one or more partners to take on an excessively domineering alpha—though there was only one case in which this happened with gorillas, and the animals in question lived in captivity. But humans are much better at this type of coalition building. Two of the most crucial developments in our own lineage that led to the differences in social organization between ourselves and the other apes were likely an increased capacity for coordinated hunting and the invention of weapons designed to kill big game. As Boehm explains,
Weapons made possible not only killing at a distance, but far more effective threat behavior; brandishing a projectile could turn into an instant lethal attack with relatively little immediate risk to the attacker. (175)
Deadly weapons fundamentally altered the dynamic between lone would-be bullies and those they might try to dominate. As Boehm points out, “after weapons arrived, the camp bully became far more vulnerable” (177). With the advent of greater coalition-building skills and the invention of tools for efficient killing, the opportunities for an individual to achieve alpha status quickly vanished.
It’s dangerous to assume that any one group of modern people provides the key to understanding our Pleistocene ancestors, but when every group living with technology and subsistence methods similar to those of our ancestors follows a similar pattern, the evidence is much more suggestive. “A distinctively egalitarian political style,” Boehm writes, “is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-6). This egalitarianism must be vigilantly guarded because “A potential bully always seems to be waiting in the wings” (68). Boehm explains what he believes is the underlying motivation,
Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)
The methods used to prevent powerful or influential individuals from acquiring too much control include such collective behaviors as gossiping, ostracism, banishment, and even, in extreme cases, execution. “In egalitarian hierarchies the pyramid of power is turned upside down,” Boehm explains, “with a politically united rank and file dominating the alpha-male types” (66).
The implications for theories about our ancestors are profound. The groups humans were living in as they evolved the traits that made them what we recognize today as human were highly motivated and well-equipped to both prevent and when necessary punish the type of free-riding that evolutionary psychologists and other selfish gene theorists insist would undermine group cohesion. Boehm makes this point explicit, writing,
The overall hypothesis is straightforward: basically, the advent of egalitarianism shifted the balance of forces within natural selection so that within-group selection was substantially debilitated and between-group selection was amplified. At the same time, egalitarian moral communities found themselves uniquely positioned to suppress free-riding… at the level of phenotype. With respect to the natural selection of behavior genes, this mechanical formula clearly favors the retention of altruistic traits. (199)
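Boehm’s “mechanical formula” can be made concrete with a bit of toy arithmetic. The sketch below is not a model from his book; it is a minimal, made-up two-group simulation in which altruists pay a personal cost to boost their whole group’s output, so free riders always out-reproduce altruists within a group, while altruist-heavy groups out-produce free-rider-heavy ones. The suppression parameter stands in, very loosely, for the gossip, sanctions, and enforced sharing that cancel most of a free rider’s within-group edge.

```python
# Toy two-group illustration of the balance Boehm describes (hypothetical
# parameters, not a model taken from his book). Altruists pay a personal cost
# to boost their whole group's output, so free riders always do at least as
# well as altruists within a group, but groups with more altruists grow
# faster. "suppression" stands in for the gossip, sanctions, and enforced
# sharing that cancel most of a free rider's within-group edge.

def next_generation(groups, benefit=0.5, cost=0.1, suppression=0.0):
    """groups is a list of (group_size, fraction_altruists) pairs."""
    new_groups = []
    for size, p in groups:
        w_altruist = 1 + benefit * p - cost                  # pays the full cost
        w_freerider = 1 + benefit * p - suppression * cost   # sanctions eat the edge
        altruists = size * p * w_altruist
        freeriders = size * (1 - p) * w_freerider
        total = altruists + freeriders
        new_groups.append((total, altruists / total))
    return new_groups

def global_altruist_fraction(groups):
    return sum(n * p for n, p in groups) / sum(n for n, _ in groups)

for suppression in (0.0, 0.9):            # 0 = free riders unchecked, 0.9 = mostly suppressed
    groups = [(100, 0.8), (100, 0.2)]     # one altruist-heavy group, one free-rider-heavy group
    for _ in range(50):                   # fifty generations
        groups = next_generation(groups, suppression=suppression)
    print(f"suppression={suppression}: altruists are now "
          f"{global_altruist_fraction(groups):.0%} of the population")
```

In this toy run, the population that leaves free riders unchecked ends up with far fewer altruists than it started with, while the population that suppresses the free rider’s advantage ends up with more, which is the shape of Boehm’s claim: damp within-group selection, and the between-group advantage of cooperative groups carries the day.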
This is the point where he picks up the argument again in Moral Origins. The story of the homicidal man named /Twi is an extreme example of the predictable results of overly aggressive behaviors. Any nomadic forager who intransigently tries to throw his weight around the way alpha male chimpanzees do will probably end up getting “porcupined” (158) like /Twi and the three men depicted in the Magdalenian cave art in Spain.
Murder is an extreme example of the types of free-riding behavior that nomadic foragers reliably sanction. Any politically overbearing treatment of group mates, particularly the issuing of direct commands, is considered a serious moral transgression. But alongside this disapproval of bossy or bullying behavior there exists an ethic of sharing and generosity, so people who are thought to be stingy are equally disliked. As Boehm writes in Hierarchy in the Forest, “Politically egalitarian foragers are also, to a significant degree, materially egalitarian” (70). The image many of us grew up with of the lone prehistoric male hunter going out to stalk his prey, bringing it back as a symbol of his prowess in hopes of impressing beautiful and fertile females, turns out to be completely off-base. In most hunter-gatherer groups, the males hunt in teams, and whatever they kill gets turned over to someone else who distributes the meat evenly among all the men so each of their families gets an equal portion. In some cultures, “the hunter who made the kill gets a somewhat larger share,” Boehm writes in Moral Origins, “perhaps as an incentive to keep him at his arduous task” (185). But every hunter knows that most of the meat he procures will go to other group members—and the sharing is done without any tracking of who owes whom a favor. Boehm writes,
The models tell us that the altruists who are helping nonkin more than they are receiving help must be “compensated” in some way, or else they—meaning their genes—will go out of business. What we can be sure of is that somehow natural selection has managed to work its way around these problems, for surely humans have been sharing meat and otherwise helping others in an unbalanced fashion for at least 45,000 years. (184)
Following biologist Richard Alexander, Boehm sees this type of group beneficial generosity as an example of “indirect reciprocity.” And he believes it functions as a type of insurance policy, or, as anthropologists call it, “variance reduction.” It’s often beneficial for an individual’s family to pay in, as it were, but much of the time people contribute knowing full well the returns will go to others.
Less extreme cases than the psychopaths who end up porcupined involve what Boehm calls “meat-cheaters.” A prominent character in Moral Origins is an Mbuti Pygmy man named Cephu, whose story was recounted in rich detail by the anthropologist Colin Turnbull. One of the cooperative hunting strategies the Pygmies use has them stretching a long net through the forest while other group members create a ruckus to scare animals into it. Each net holder is entitled to whatever runs into his section of the net, which he promptly spears to death. What Cephu did was sneak farther ahead of the other men to improve his chances of having an animal run into his section of the net before the others. Unfortunately for him, everyone quickly realized what was happening. Returning to the camp after depositing his ill-gotten gains in his hut, Cephu heard someone call out that he was an animal. Beyond that, everyone was silent. Turnbull writes,
Cephu walked into the group, and still nobody spoke. He went to where a youth was sitting in a chair. Usually he would have been offered a seat without his having to ask, and now he did not dare to ask, and the youth continued to sit there in as nonchalant a manner as he could muster. Cephu went to another chair where Amabosu was sitting. He shook it violently when Amabosu ignored him, at which point he was told, “Animals lie on the ground.” (Quoted 39)
Thus began the accusations. Cephu burst into tears and tried to claim that his repositioning himself in the line was an accident. No one bought it. Next, he made the even bigger mistake of trying to suggest he was entitled to his preferential position. “After all,” Turnbull writes, “was he not an important man, a chief, in fact, of his own band?” At this point, Manyalibo, who was taking the lead in bringing Cephu to task, decided that the matter was settled. He said that
there was obviously no use prolonging the discussion. Cephu was a big chief, and Mbuti never have chiefs. And Cephu had his own band, of which he was chief, so let him go with it and hunt elsewhere and be a chief elsewhere. Manyalibo ended a very eloquent speech with “Pisa me taba” (“Pass me the tobacco”). Cephu knew he was defeated and humiliated. (40)
The guilty verdict Cephu had to accept to avoid being banished from the band came with the sentence that he had to relinquish all the meat he brought home that day. His attempt at free-riding therefore resulted not only in a loss of food but also in a much longer-lasting blow to his reputation.
Boehm has built a large database from ethnographic studies like Lee’s and Turnbull’s, and it shows that in their handling of meat-cheaters and self-aggrandizers nomadic foragers all over the world use strategies similar to those of the Pygmies. First comes the gossip about your big ego, your dishonesty, or your cheating. Soon you’ll recognize a growing reluctance of others to hunt with you, or you’ll have a tough time wooing a mate. Next, you may be directly confronted by someone delegated by a quorum of group members. If you persist in your free-riding behavior, especially if it entails murder or serious attempts at domination, you’ll probably be ambushed and turned into a porcupine. Alexander put forth the idea of “reputational selection,” whereby individuals benefit in terms of survival and reproduction from being held in high esteem by their group mates. Boehm prefers the term “social selection,” however, because it encompasses the idea that people are capable of figuring out what’s best for their groups and codifying it in their culture. How well an individual internalizes a group’s norms has profound effects on his or her chances for survival and reproduction. Boehm’s theory is that our consciences are the mechanisms we’ve evolved for such internalization.
Though there remain quite a few chicken-or-egg conundrums to work out, Boehm has cobbled together archeological evidence from butchering sites, primatological evidence from observations of apes in the wild and in captivity, and quantitative analyses of ethnographic records to put forth a plausible history of how our consciences evolved and how we became so concerned for the well-being of people we may barely know. As humans began hunting larger game, demanding greater coordination and more effective long-distance killing tools, an already extant resentment of alphas expressed itself in collective suppression of bullying behavior. And as our developing capacity for language made it possible to keep track of each other’s behavior long-term, it started to become important for everyone to maintain a reputation for generosity, cooperativeness, and even-temperedness. Boehm writes,
Ultimately, the social preferences of groups were able to affect gene pools profoundly, and once we began to blush with shame, this surely meant that the evolution of conscientious self-control was well under way. The final result was a full-blown, sophisticated modern conscience, which helps us to make subtle decisions that involve balancing selfish interests in food, power, sex, or whatever against the need to maintain a decent personal moral reputation in society and to feel socially valuable as a person. The cognitive beauty of having such a conscience is that it directly facilitates making useful social decisions and avoiding negative social consequences. Its emotional beauty comes from the fact that we in effect bond with the values and rules of our groups, which means we can internalize our group’s mores, judge ourselves as well as others, and, hopefully, end up with self-respect. (173)
Social selection is actually a force that acts on individuals, selecting for those who can most strategically suppress their own selfish impulses. But in establishing a mechanism that guards the group norm of cooperation against free riders, it increased the potential of competition between groups and quite likely paved the way for altruism of the sort Leacock’s informant Thomas displayed. Boehm writes,
Thomas surely knew that if he turned down the pair of hungry men, they might “bad-mouth” him to people he knew and thereby damage his reputation as a properly generous man. At the same time, his costly generosity might very well be mentioned when they arrived back in their camp, and through the exchange of favorable gossip he might gain in his public esteem in his own camp. But neither of these socially expedient personal considerations would account for the “dead” feeling he mentioned with such gravity. He obviously had absorbed his culture’s values about sharing and in fact had internalized them so deeply that being selfish was unthinkable. (221)
In response to Ghiselin’s cynical credo, “Scratch an altruist and watch a hypocrite bleed,” Boehm points out that the best way to garner the benefits of kindness and sympathy is to actually be kind and sympathetic. He points out further that if altruism is being selected for at the level of phenotypes (the end-products of genetic processes) we should expect it to have an impact at the level of genes. In a sense, we’ve bred altruism into ourselves. Boehm writes,
If such generosity could be readily faked, then selection by altruistic reputation simply wouldn’t work. However, in an intimate band of thirty that is constantly gossiping, it’s difficult to fake anything. Some people may try, but few are likely to succeed. (189)
The result of the social selection dynamic that began all those millennia ago is that today generosity is in our bones. There are of course circumstances that can keep our generous impulses from manifesting themselves, and those impulses have a sad tendency to be directed toward members of our own cultural groups and no one else. But Boehm offers a slightly more optimistic formula than Ghiselin’s:
I do acknowledge that our human genetic nature is primarily egoistic, secondarily nepotistic, and only rather modestly likely to support acts of altruism, but the credo I favor would be "Scratch an altruist, and watch a vigilant and successful suppressor of free riders bleed. But watch out, for if you scratch him too hard, he and his group may retaliate and even kill you." (205)
A Crash Course in Multilevel Selection Theory, Part 2: Steven Pinker Falls Prey to the Averaging Fallacy Sober and Wilson Tried to Warn Him About
Elliott Sober and David Sloan Wilson's "Unto Others" lays out a theoretical framework for how selection at the level of the group could have led to the evolution of greater cooperation among humans. They point out the mistake many theorists make in thinking that, because evolution can be defined as changes in gene frequencies, it's only genes that matter. But that definition leaves aside the question of how traits and behaviors evolve, i.e. what dynamics lead to the changes in gene frequencies. Steven Pinker failed to grasp their point.
If you were a woman applying to graduate school at the University of California at Berkeley in 1973, you would have had a 35 percent chance of being accepted. If you were a man, your chances would have been significantly better. Forty-four percent of male applicants got accepted that year. Apparently, at this early stage of the feminist movement, even a school as notoriously progressive as Berkeley still discriminated against women. But not surprisingly, when confronted with these numbers, the women of the school were ready to take action to right the supposed injustice. After a lawsuit was filed charging admissions offices with bias, however, a department-by-department examination was conducted which produced a curious finding: not a single department admitted a significantly higher percentage of men than women. In fact, there was a small but significant trend in the opposite direction—a bias against men.
What this means is that somehow the aggregate probability of being accepted into grad school was dramatically different from the probabilities worked out through disaggregating the numbers with regard to important groupings, in this case the academic departments housing the programs assessing the applications. This discrepancy called for an explanation, and statisticians had had one on hand since 1951.
This paradoxical finding fell into place when it was noticed that women tended to apply to departments with low acceptance rates. To see how this can happen, imagine that 90 women and 10 men apply to a department with a 30 percent acceptance rate. This department does not discriminate and therefore accepts 27 women and 3 men. Another department, with a 60 percent acceptance rate, receives applications from 10 women and 90 men. This department doesn’t discriminate either and therefore accepts 6 women and 54 men. Considering both departments together, 100 men and 100 women applied, but only 33 women were accepted, compared with 57 men. A bias exists in the two departments combined, despite the fact that it does not exist in any single department, because the departments contribute unequally to the total number of applicants who are accepted. (25)
This is how the counterintuitive statistical phenomenon known as Simpson’s Paradox is explained by philosopher Elliott Sober and biologist David Sloan Wilson in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, in which they argue that the same principle can apply to the relative proliferation of organisms in groups with varying percentages of altruists and selfish actors. In this case, the benefit to the group of having more altruists is analogous to the higher acceptance rates for grad school departments which tend to receive a disproportionate number of applications from men. And the counterintuitive outcome is that, in an aggregated population of groups, altruists have an advantage over selfish actors—even though within each of those groups selfish actors outcompete altruists.
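For readers who want to check the arithmetic, here is a minimal sketch in Python using the two hypothetical departments from the passage quoted above (Sober and Wilson's illustrative figures, not Berkeley's actual 1973 numbers):

# Simpson's paradox with the two hypothetical departments described above.
# Each entry: (women applicants, men applicants, acceptance rate)
departments = [
    (90, 10, 0.30),  # low acceptance rate, mostly women apply
    (10, 90, 0.60),  # high acceptance rate, mostly men apply
]
women_applied = men_applied = 0
women_accepted = men_accepted = 0.0
for women, men, rate in departments:
    # Neither department discriminates: the same rate applies to both sexes.
    women_applied += women
    men_applied += men
    women_accepted += women * rate
    men_accepted += men * rate
print(f"women: {women_accepted:.0f}/{women_applied} accepted ({women_accepted / women_applied:.0%})")
print(f"men:   {men_accepted:.0f}/{men_applied} accepted ({men_accepted / men_applied:.0%})")
# Output: women 33/100 (33%), men 57/100 (57%), even though no single
# department treats the sexes differently.

Run it and the aggregate gap appears exactly as in the quoted example, even though every department applies identical standards to men and women.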
Sober and Wilson caution that this assessment is based on certain critical assumptions about the population in question. “This model,” they write, “requires groups to be isolated as far as the benefits of altruism are concerned but nevertheless to compete in the formation of new groups” (29). It also requires that altruists and nonaltruists somehow “become concentrated in different groups” (26) so the benefits of altruism can accrue to one while the costs of selfishness accrue to the other. One type of group that follows this pattern is a family, whose members resemble each other in terms of their traits—including a propensity for altruism—because they share many of the same genes. In humans, families tend to be based on pair bonds established for the purpose of siring and raising children, forming a unit that remains stable long enough for the benefits of altruism to be of immense importance. As the children reach adulthood, though, they disperse to form their own family groups. Therefore, assuming families live in a population with other families, group selection ought to lead to the evolution of altruism.
Sober and Wilson wrote Unto Others to challenge the prevailing approach to solving mysteries in evolutionary biology, which was to focus strictly on competition between genes. In place of this exclusive attention on gene selection, they advocate a pluralistic approach that takes into account the possibility of selection occurring at multiple levels, from genes to individuals to groups. This is where the term multilevel selection comes from. In certain instances, focusing on one level instead of another amounts to a mere shift in perspective. Looking at families as groups, for instance, leads to many of the same conclusions as looking at them in terms of vehicles for carrying genes.
William D. Hamilton, whose thinking inspired both Richard Dawkins's Selfish Gene and E.O. Wilson's Sociobiology, long ago explained altruism within families by setting forth the theory of kin selection, which posits that family members will at times behave in ways that benefit each other even at their own expense because the genes underlying the behavior don't make any distinction between the bodies which happen to be carrying copies of themselves. Sober and Wilson write,
As we have seen, however, kin selection is a special case of a more general theory—a point that Hamilton was among the first to appreciate. In his own words, “it obviously makes no difference if altruists settle with altruists because they are related… or because they recognize fellow altruists as such, or settle together because of some pleiotropic effect of the gene on habitat preference.” We therefore need to evaluate human social behavior in terms of the general theory of multilevel selection, not the special case of kin selection. When we do this, we may discover that humans, bees, and corals are all group-selected, but for different reasons. (134)
A general proclivity toward altruism based on selection at the level of family groups may look somewhat different from kin-selected altruism targeted solely at those who are recognized as close relatives. For obvious reasons, the possibility of group selection becomes even more important when it comes to explaining the evolution of altruism among unrelated individuals.
We have to bear in mind that Dawkins's selfish genes are selfish only in the sense that they concern themselves with nothing but ensuring their own continued existence—by calling them selfish he never meant to imply they must always be associated with selfishness as a trait of the bodies they provide the blueprints for. Selfish genes, in other words, can sometimes code for altruistic behavior, as in the case of kin selection. So the question of what level selection operates on is much more complicated than it would be if the gene-focused approach predicted selfishness while the multilevel approach predicted altruism. But many strict gene selection advocates argue that because selfish gene theory can account for altruism in myriad ways there's simply no need to resort to group selection. Evolution is, after all, changes over time in gene frequencies. So why should we look to higher levels?
Sober and Wilson demonstrate that if you focus on individuals in their simple model of predominantly altruistic groups competing against predominantly selfish groups you will conclude that altruism is adaptive because it happens to be the trait that ends up proliferating. You may add the qualifier that it's adaptive in the specified context, but the upshot is that from the perspective of individual selection altruism outcompetes selfishness. The problem is that this is the same reasoning underlying the misguided accusations against Berkeley; for any individual in that aggregate population, it was advantageous to be male—but no individual department ever selected against women. Sober and Wilson write,
The averaging approach makes “individual selection” a synonym for “natural selection.” The existence of more than one group and fitness differences between the groups have been folded into the definition of individual selection, defining group selection out of existence. Group selection is no longer a process that can occur in theory, so its existence in nature is settled a priori. Group selection simply has no place in this semantic framework. (32)
Thus, a strict focus on individuals, though it may appear to fully account for the outcome, necessarily obscures a crucial process that went into producing it. The same logic might be applicable to any analysis based on gene-level accounting. Sober and Wilson write that
if the point is to understand the processes at work, the resultant is not enough. Simpson’s paradox shows how confusing it can be to focus only on net outcomes without keeping track of the component causal factors. This confusion is carried into evolutionary biology when the separate effects of selection within and between groups are expressed in terms of a single quantity. (33)
They go on to label this approach “the averaging fallacy.” Acknowledging that nobody explicitly insists that group selection is somehow impossible by definition, they still find countless instances in which it is defined out of existence in practice. They write,
Even though the averaging fallacy is not endorsed in its general form, it frequently occurs in specific cases. In fact, we will make the bold claim that the controversy over group selection and altruism in biology can be largely resolved simply by avoiding the averaging fallacy. (34)
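To see concretely what gets lost in the averaging, here is a minimal sketch of a two-group model in the spirit of Sober and Wilson's example; the cost and benefit values are invented for illustration, not taken from their book. Within each group the selfish do better than the altruists, yet altruists gain ground in the population as a whole:

# Two groups, each a mix of altruists and selfish individuals. An altruist
# pays a cost and confers a benefit on every other member of its group;
# the selfish free-ride. All numbers are illustrative.
BASE, COST, BENEFIT = 1.0, 1.0, 0.2

def fitness(num_altruists):
    """Within-group fitness of an altruist and of a selfish individual."""
    w_altruist = BASE - COST + BENEFIT * (num_altruists - 1)
    w_selfish = BASE + BENEFIT * num_altruists
    return w_altruist, w_selfish

groups = [(80, 100), (20, 100)]  # (altruists, group size)
altruist_offspring = selfish_offspring = 0.0
for altruists, size in groups:
    w_a, w_s = fitness(altruists)
    print(f"group of {size}: altruist fitness {w_a:.1f} < selfish fitness {w_s:.1f}")
    altruist_offspring += altruists * w_a
    selfish_offspring += (size - altruists) * w_s
before = sum(a for a, _ in groups) / sum(size for _, size in groups)
after = altruist_offspring / (altruist_offspring + selfish_offspring)
print(f"altruist frequency: {before:.0%} before, {after:.0%} after one generation")
# Selfish types win inside every group, but the altruist-heavy group is so
# much more productive that altruists rise from 50% to about 64% overall.

Averaging fitness over the whole population reports only that final gain; it erases the fact that the altruists' advantage comes entirely from the difference between groups, never from within them.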
Unfortunately, this warning about the averaging fallacy continues to go unheeded by advocates of strict gene selection theories. Even intellectual heavyweights of the caliber of Steven Pinker fall into the trap. In a severely disappointing essay published just last month at Edge.org called “The False Allure of Group Selection,” Pinker writes
If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate. Individual human traits evolved in an environment that includes other humans, just as they evolved in environments that include day-night cycles, predators, pathogens, and fruiting trees.
Multilevel selectionists wouldn’t disagree with this point; they would readily explain traits that benefit everyone in the group at no cost to the individuals possessing them as arising through individual selection. But Pinker here shows his readiness to fold the process of group competition into some generic “context.” The important element of the debate, of course, centers on traits that benefit the group at the expense of the individual. Pinker writes,
Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.
But, as Sober and Wilson demonstrate, those self-sacrificial traits wouldn’t necessarily be selected against in the population. In fact, self-sacrifice would be selected for if that population is an aggregation of competing groups. Pinker fails to even consider this possibility because he’s determined to stick with the definition of natural selection as occurring at the level of genes.
Indeed, the centerpiece of Pinker’s argument against group selection in this essay is his definition of natural selection. Channeling Dawkins, he writes that evolution is best understood as competition between “replicators” to continue replicating. The implication is that groups, and even individuals, can’t be the units of selection because they don’t replicate themselves. He writes,
The theory of natural selection applies most readily to genes because they have the right stuff to drive selection, namely making high-fidelity copies of themselves. Granted, it's often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.
The underlying assumption is that, because genes rely on individuals as "vehicles" to replicate themselves, individuals can sometimes be used as shorthand for genes when discussing natural selection. Since gene competition within an individual would be to the detriment of all the genes that individual carries and strives to pass on, the genes collaborate to suppress conflicts amongst themselves. The further assumption underlying Pinker's and Dawkins's reasoning is that groups make for poor vehicles because suppressing within-group conflict would be too difficult. But, as Sober and Wilson write,
This argument does not evaluate group selection on a trait-by-trait basis. In addition, it begs the question of how individuals became such good vehicles of selection in the first place. The mechanisms that currently limit within-individual selection are not a happy coincidence but are themselves adaptations that evolved by natural selection. Genomes that managed to limit internal conflict presumably were more fit than other genomes, so these mechanisms evolve by between-genome selection. Being a good vehicle as Dawkins defines it is not a requirement for individual selection—it's a product of individual selection. Similarly, groups do not have to be elaborately organized "superorganisms" to qualify as a unit of selection with respect to particular traits. (97)
The idea of a “trait-group” is exemplified by the simple altruistic group versus selfish group model they used to demonstrate the potential confusion arising from Simpson’s paradox. As long as individuals with the altruism trait interact with enough regularity for the benefits to be felt, they can be defined as a group with regard to that trait.
Pinker makes several other dubious points in his essay, most of them based on the reasoning that group selection isn't "necessary" to explain this or that trait, only justifying his prejudice in favor of gene selection with reference to the selfish gene definition of evolution. Of course, it may be possible to imagine gene-level explanations for behaviors humans engage in predictably, like punishing cheaters in economic interactions even when doing so means the punisher incurs some cost to him or herself. But Pinker is so caught up with replicators he overlooks the potential of this type of punishment to transform groups into functional vehicles. As Sober and Wilson demonstrate, group competition can lead to the evolution of altruism on its own. But once altruism reaches a certain threshold, group selection can become even more powerful because the altruistic group members will, by definition, be better at behaving as a group. And one of the mechanisms we might expect to evolve through an ongoing process of group selection would operate to curtail within-group conflict and exploitation. The costly punishment Pinker dismisses as possibly explicable through gene selection is much more likely to have arisen through group selection. Sober and Wilson delight in the irony that, "The entire language of social interactions among individuals in groups has been borrowed to describe genetic interactions within individuals; 'outlaw' genes, 'sheriff' genes, 'parliaments' of genes, and so on" (147).
Unto Others makes such a powerful case against strict gene-level explanations and for the potentially crucial role of group selection that anyone who undertakes to argue that the appeal of multilevel selection theory is somehow false without even mentioning it risks serious embarrassment. Published fourteen years ago, it still contains a remarkably effective rebuttal to Pinker’s essay:
In short, the concept of genes as replicators, widely regarded as a decisive argument against group selection, is in fact totally irrelevant to the subject. Selfish gene theory does not invoke any processes that are different from the ones described in multilevel selection theory, but merely looks at the same processes in a different way. Those benighted group selectionists might be right in every detail; group selection could have evolved altruists that sacrifice themselves for the benefit of others, animals that regulate their numbers to avoid overexploiting their resources, and so on. Selfish gene theory calls the genes responsible for these behaviors “selfish” for the simple reason that they evolved and therefore replicated more successfully than other genes. Multilevel selection theory, on the other hand, is devoted to showing how these behaviors evolve. Fitness differences must exist somewhere in the biological hierarchy—between individuals within groups, between groups in the global population, and so on. Selfish gene theory can’t even begin to explore these questions on the basis of the replicator concept alone. The vehicle concept is its way of groping toward the very issues that multilevel selection theory was developed to explain. (88)
Sober and Wilson, in opening the field of evolutionary studies to forces beyond gene competition, went a long way toward vindicating Stephen Jay Gould, who throughout his career held that selfish gene theory was too reductionist—he even incorporated their arguments into his final book. But Sober and Wilson are still working primarily in the abstract realm of evolutionary modeling, although in the second half of Unto Others they cite multiple psychological and anthropological sources. A theorist even more after Gould’s own heart, one who synthesizes both models and evidence from multiple fields, from paleontology to primatology to ethnography, into a hypothetical account of the natural history of human evolution, from the ancestor we share with the great apes to modern nomadic foragers and beyond, is the anthropologist Christopher Boehm, whose work we’ll be exploring in part 3.
A Crash Course in Multi-Level Selection Theory: Part 1-The Groundwork Laid by Dawkins and Gould
What is the unit of selection? Richard Dawkins famously argues that it’s genes that are selected for over the course of evolutionary change. Stephen Jay Gould, meanwhile, maintained that it must be individuals and even sometimes groups of individuals. In their fascinating back and forth lies the foundation of today’s debates about multi-level selection theory.
Responding to Stephen Jay Gould’s criticisms of his then most infamous book, Richard Dawkins writes in a footnote to the 1989 edition of The Selfish Gene, “I find his reasoning wrong but interesting, which, incidentally, he has been kind enough to tell me, is how he usually finds mine” (275). Dawkins’s idea was that evolution is, at its core, competition between genes with success measured in continued existence. Genes are replicators. Evolution is therefore best thought of as the outcome of this competition between replicators to keep on replicating. Gould’s response was that natural selection can’t possibly act on genes because genes are always buried in bodies. Those replicators always come grouped with other replicators and have only indirect effects on the bodies they ultimately serve as blueprints for. Natural selection, as Gould suggests, can’t “see” genes; it can only see, and act on, individuals.
The image of individual genes, plotting the course of their own survival, bears little relationship to developmental genetics as we understand it. Dawkins will need another metaphor: genes caucusing, forming alliances, showing deference for a chance to join a pact, gauging probable environments. But when you amalgamate so many genes and tie them together in hierarchical chains of action mediated by environments, we call the resultant object a body. (91)
Dawkins’ rebuttal, in both later editions of The Selfish Gene and in The Extended Phenotype, is, essentially, Duh—of course genes come grouped together with other genes and only ever evolve in context. But the important point is that individuals never replicate themselves. Bodies don’t create copies of themselves. Genes, on the other hand, do just that. Bodies are therefore best thought of as vehicles for these replicators.
As a subtle hint of his preeminent critic's unreason, Dawkins quotes himself in his response to Gould, citing a passage Gould must've missed, in which the genes making up an individual organism's genome are compared to the members of a rowing team. Each contributes to the success or failure of the team, but it's still the individual members that are important. Dawkins describes how the concept of an "Evolutionarily Stable Strategy" can be applied to a matter
arising from the analogy of oarsmen in a boat (representing genes in a body) needing a good team spirit. Genes are selected, not as "good" in isolation, but as good at working against the background of the other genes in the gene pool. A good gene must be compatible with, and complementary to, the other genes with whom it has to share a long succession of bodies. A gene for plant-grinding teeth is a good gene in the gene pool of a herbivorous species, but a bad gene in the gene pool of a carnivorous species. (84)
Gould, in other words, isn’t telling Dawkins anything he hasn’t already considered. But does that mean Gould’s point is moot? Or does the rowing team analogy actually support his reasoning? In any case, they both agree that the idea of a “good gene” is meaningless without context.
The selfish gene idea has gone on to become the linchpin of research in many subfields of evolutionary biology, its main appeal being the ease with which it lends itself to mathematical modeling. If you want to know what traits are the most likely to evolve, you create a simulation in which individuals with various traits compete. Run the simulation and the outcome allows you to determine the relative probability of a given trait evolving in the context of individuals with other traits. You can then compare the statistical outcomes derived from the simulation with experimental data on how the actual animals behave. This sort of analysis relies on the assumption that the traits in question are discrete and can be selected for, and this reasoning usually rests on the further assumption that the traits are, beyond a certain threshold probability, the end-product of chemical processes set in motion by a particular gene or set of genes. In reality, everyone acknowledges that this one-to-one correspondence between gene and trait—or constellation of genes and trait—seldom occurs. All genes can do is make their associated traits more likely to develop in specific environments. But if the sample size is large enough, meaning that the population you're modeling is large enough, and if the interactions go through enough iterations, the complicating nuances will cancel out in the final statistical averaging.
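As a rough illustration of the kind of simulation just described, here is a minimal sketch with invented fitness values and a simple fitness-proportional reproduction scheme; it estimates how often a slightly advantageous trait ends up spreading through a finite population:

# Toy individual-based simulation: two traits compete, individuals reproduce
# in proportion to fitness, and we estimate how often the fitter trait wins.
# Fitness values and population size are invented for illustration.
import random

FITNESS = {"A": 1.02, "B": 1.00}  # trait A has a small, hypothetical edge

def run_once(pop_size=100, start_freq=0.5, generations=1000):
    population = ["A"] * int(pop_size * start_freq)
    population += ["B"] * (pop_size - len(population))
    for _ in range(generations):
        weights = [FITNESS[trait] for trait in population]
        # Fitness-proportional sampling with drift (Wright-Fisher style).
        population = random.choices(population, weights=weights, k=pop_size)
        if len(set(population)) == 1:  # one trait has reached fixation
            return population[0]
    return None  # no fixation within the allotted generations

runs = 1000
wins = sum(run_once() == "A" for _ in range(runs))
print(f"trait A took over the population in {wins} of {runs} runs")
# A 2% fitness edge biases the outcome heavily but doesn't guarantee it; the
# averaged result over many runs is the statistical outcome this style of
# modeling relies on.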
Gould's longstanding objection to this line of research—as productive as he acknowledged it could be—was that processes, and even events, like large-scale natural catastrophes, that occur at higher levels of analysis can be just as important as, or more important than, the shuffling of gene frequencies at the lowest level. It's hardly irrelevant that Dawkins and most of his fellow ethologists who rely on his theories primarily study insects—relatively simple-bodied species that produce huge populations and have rapid generational turnover. Gould, on the other hand, focused his research on the evolution of snail shells. And he kept his eye throughout his career on the big picture of how evolution worked over vast periods of time. As a paleontologist, he found himself looking at trends in the fossil record that didn't seem to follow the expected patterns of continual, gradual development within species. In fact, the fossil records of most lineages seem to be characterized by long periods of slow or no change followed by sudden disruptions—a pattern he and Niles Eldredge refer to as punctuated equilibrium. In working out an explanation for this pattern, Eldredge and Gould did Dawkins one better: sure, genes are capable of a sort of immortality, they reasoned, but then so are species. Evolution then isn't just driven by competition between genes or individuals; something like species selection must also be taking place.
Dawkins accepted this reasoning up to a point, seeing that it probably even goes some way toward explaining the patterns that often emerge in the fossil record. But whereas Gould believed there was so much randomness at play in large populations that small differences would tend to cancel out, and that “speciation events”—periods when displacement or catastrophe led to smaller group sizes—were necessary for variations to take hold in the population, Dawkins thought it unlikely that variations really do cancel each other out even in large groups. This is because he knows of several examples of “evolutionary arms races,” multigenerational exchanges in which a small change leads to a big advantage, which in turn leads to a ratcheting up of the trait in question as all the individuals in the population are now competing in a changed context. Sexual selection, based on competition for reproductive access to females, is a common cause of arms races. That’s why extreme traits in the form of plumage or body size or antlers are easy to point to. Once you allow for this type of change within populations, you are forced to conclude that gene-level selection is much more powerful and important than species-level selection. As Dawkins explains in The Extended Phenotype,
Accepting Eldredge and Gould’s belief that natural selection is a general theory that can be phrased on many levels, the putting together of a certain quantity of evolutionary change demands a certain minimum number of selective replicator-eliminations. Whether the replicators that are selectively eliminated are genes or species, a simple evolutionary change requires only a few replicator substitutions. A large number of replicator substitutions, however, are needed for the evolution of a complex adaptation. The minimum replacement cycle time when we consider the gene as replicator is one individual generation, from zygote to zygote. It is measured in years or months, or smaller time units. Even in the largest organisms it is measured in only tens of years. When we consider the species as replicator, on the other hand, the replacement cycle time is the interval from speciation event to speciation event, and may be measured in thousands of years, tens of thousands, hundreds of thousands. In any given period of geological time, the number of selective species extinctions that can have taken place is many orders of magnitude less than the number of selective allele replacements that can have taken place. (106)
This reasoning, however, applies only to features and traits that are under intense selection pressure. So in determining whether a given trait arose through a process of gene selection or species selection you would first have to know certain features about the nature of that trait: how much of an advantage it confers if any, how widely members of the population vary in terms of it, and what types of countervailing forces might cancel out or intensify the selection pressure.
The main difference between Dawkins’s and Gould’s approaches to evolutionary questions is that Dawkins prefers to frame answers in terms of the relative success of competing genes while Gould prefers to frame them in terms of historical outcomes. Dawkins would explain a wasp’s behavior by pointing out that behaving that way ensures copies of the wasp’s genes will persist in the population. Gould would explain the shape of some mammalian skull by pointing out how contingent that shape is on the skulls of earlier creatures in the lineage. Dawkins knows history is important. Gould knows gene competition is important. The difference is in the relative weights given to each. Dawkins might challenge Gould, “Gene selection explains self-sacrifice for the sake of close relatives, who carry many of the same genes”—an idea known as kin selection—“what does your historical approach say about that?” Gould might then point to the tiny forelimbs of a tyrannosaurus, or the original emergence of feathers (which were probably sported by some other dinosaur) and challenge Dawkins, “Account for that in terms of gene competition.”
The area where these different perspectives came into the most direct conflict was sociobiology, which later developed into evolutionary psychology. This is a field in which theorists steeped in selfish gene thinking look at human social behavior and see in it the end product of gene competition. Behaviors are treated as traits, traits are assumed to have a genetic basis, and, since the genes involved exist because they outcompeted other genes producing other traits, their continuing existence suggests that the traits are adaptive, i.e. that they somehow make the continued existence of the associated genes more likely. The task of the evolutionary psychologist is to work out how. This was in fact the approach ethologists had been applying, primarily to insects, for decades.
E.O. Wilson, a renowned specialist on ant behavior, was the first to apply it to humans in his book Sociobiology, and in a later book, On Human Nature, which won him the Pulitzer. But the assumption that human behavior is somehow fixed to genes and that it always serves to benefit those genes was anathema to Gould. If ever there were a creature for whom the causal chain from gene to trait or behavior was too long and complex for the standard ethological approaches to yield valid insights, it had to be humans.
Gould famously compared evolutionary psychological theories to the “Just-so” stories of Kipling, suggesting they relied on far too many shaky assumptions and made use of far too little evidence. From Gould’s perspective, any observable trait, in humans or any other species, was just as likely to have no effect on fitness at all as it was to be adaptive. For one thing, the trait could be a byproduct of some other trait that’s adaptive; it could have been selected for indirectly. Or it could emerge from essentially random fluctuations in gene frequencies that take hold in populations because they neither help nor hinder survival and reproduction. And in humans of course there are things like cultural traditions, forethought, and technological intervention (as when a gene for near-sightedness is rendered moot with contact lenses). The debate got personal and heated, but in the end evolutionary psychology survived Gould’s criticisms. Outsiders could even be forgiven for suspecting that Gould actually helped the field by highlighting some of its weaknesses. He, in fact, didn’t object in principle to the study of human behavior from the perspective of biological evolution; he just believed the earliest attempts were far too facile. Still, there are grudges being harbored to this day.
Another way to look at the debate between Dawkins and Gould, one which lies at the heart of the current debate over group selection, is that Dawkins favored reductionism while Gould preferred holism. Dawkins always wants to get down to the most basic unit. His “‘central theorem’ of the extended phenotype” is that “An animal’s behaviour tends to maximize the survival of genes ‘for’ that behaviour, whether or not those genes happen to be in the body of the particular animal performing it” (233). Reductionism, despite its bad name, is an extremely successful approach to arriving at explanations, and it has a central role in science. Gould’s holistic approach, while more inclusive, is harder to quantify and harder to model. But there are several analogues to natural selection that suggest ways in which higher-order processes might be important for changes at lower orders. Regular interactions between bodies—or even between groups or populations of bodies—may be crucial in accounting for changes in gene frequencies the same way software can impact the functioning of hardware or symbolic thoughts can determine patterns of neural connections.
The question becomes whether or not higher-level processes operate regularly enough that their effects can’t safely be assumed to average out over time. One pitfall of selfish gene thinking is that it lends itself to the conflation of definitions and explanations. Evolution can be defined as changes in gene frequencies. But assuming a priori that competition at the level of genes causes those changes means running the risk of overlooking measurable outcomes of processes at higher levels. The debate, then, isn’t over whether evolution occurs at the level of genes—it has to—but rather over what processes lead to the changes. It could be argued that Gould, in his magnum opus The Structure of Evolutionary Theory, which was finished shortly before his death, forced Dawkins into making just this mistake. Responding to the book in an essay in his own book A Devil’s Chaplain, Dawkins writes,
Gould saw natural selection as operating on many levels in the hierarchy of life. Indeed it may, after a fashion, but I believe that such selection can have evolutionary consequences only when the entities selected consist of “replicators.” A replicator is a unit of coded information, of high fidelity but occasionally mutable, with some causal power over its own fate. Genes are such entities… Biological natural selection, at whatever level we may see it, results in evolutionary effects only insofar as it gives rise to changes in gene frequencies in gene pools. Gould, however, saw genes only as “book-keepers,” passively tracking the changes going on at other levels. In my view, whatever else genes are, they must be more than book-keepers, otherwise natural selection cannot work. If a genetic change has no causal influence on bodies, or at least on something that natural selection can “see,” natural selection cannot favour or disfavour it. No evolutionary change will result. (221-222)
Thus we come full circle as Dawkins comes dangerously close to acknowledging Gould’s original point about the selfish gene idea. With the book-keeper metaphor, Gould wasn’t suggesting that genes are perfectly inert. Of course, they cause something—but they don’t cause natural selection. Genes build bodies and influence behaviors, but natural selection acts on bodies and behaviors. Genes are the passive book-keepers with regard to the effects of natural selection, even though they’re active agents with regard to bodies. Again, the question becomes, do the processes that happen at higher levels of analysis operate with enough regularity to produce measurable changes in gene frequencies that a strict gene-level analysis would miss or obscure? Yes, evolution is genetic change. But the task of evolutionary biologists is to understand how those changes come about.
Gould died in May of 2002, in the middle of a correspondence he had been carrying on with Dawkins regarding how best to deal with an emerging creationist propaganda campaign called intelligent design, a set of ideas they both agreed were contemptible nonsense. These men were in many ways the opposing generals of the so-called Darwin Wars in the 1990s, but, as exasperated as they clearly got with each other’s writing at times, they always seemed genuinely interested and amused with what the other had to say. In his essay on Gould’s final work, Dawkins writes,
The Structure of Evolutionary Theory is such a massively powerful last word, it will keep us all busy replying to it for years. What a brilliant way for a scholar to go. I shall miss him. (222)
[I’ve narrowed the scope of this post to make the ideas as manageable as possible. This account of the debate leaves out many important names and is by no means comprehensive. A good first step if you’re interested in Dawkins’s and Gould’s ideas is to read The Selfish Gene and Full House.]
The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini
If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become.
Psychologist Paul Ekman is renowned for his research on facial expressions, and he frequently studies and consults with law enforcement agencies, legal scholars, and gamblers on the topic of reading people who don’t want to be read. In his book Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage, Ekman focuses on three emotions would-be lie detectors should be on the lookout for subtle expressions of. The first two—detection apprehension and deception guilt—are pretty obvious. But the third is more interesting. Many people actually enjoy deceiving others because, for one thing, the threat of detection is more thrilling to them than terrifying, and, for another, being able to pull off the deception successfully can give them a sense of “pride in the achievement, or feelings of smug contempt toward the target” (76). Ekman calls this “Duping Delight,” and he suggests it leads many liars to brag about their crimes, which in turn leads to them being caught.
The takeaway insight is that knowing something others don’t, or having the skill to trick or deceive others, can give us an inherently rewarding feeling of empowerment.
Alex Stone, in his new book Fooling Houdini: Magicians, Mentalists, Math Geeks & the Hidden Powers of the Mind, suggests that duping delight is what drives the continuing development of the magician's trade. The title refers to a bit of lore that has reached the status of founding myth among aficionados of legerdemain. Houdini used to boast that he could figure out the secret behind any magic trick if he saw it performed three times. Time and again, he backed up his claim, sending defeated tricksters away to nurse their wounded pride. But then came Dai Vernon, who performed for Houdini what he called the Ambitious Card, a routine in which a card signed by a volunteer and then placed in the middle of a deck mysteriously appears at the top. After watching Vernon go through the routine seven times, Houdini turned around and walked away in a huff. Vernon went on to become a leading figure in twentieth-century magic, and every magician today (they're almost all male) has his own version of the Ambitious Card, which serves as a type of signature.
In Fooling Houdini, Stone explains that for practiced magicians, tricking the uninitiated loses its thrill over time. So they end up having to up the ante, and in the process novitiates find themselves getting deeper and deeper into the practice, tradition, culture, and society of magic and magicians. He writes,
Sure, it’s fun to fool laypeople, but they’re easy prey. It’s far more thrilling to hunt your own kind. As a result, magicians are constantly engineering new ways to dupe one another. A hierarchy of foolmanship, a who-fooled-whom pecking order, rules the conjuror’s domain. This gladiatorial spirit in turn drives considerable evolution in the art. (173)
Stone’s own story begins with a trip to Belgium to compete in the 2006 Magic Olympics. His interest in magic was, at the time, little more than an outgrowth of his interest in science. He’d been an editor at Discover magazine and had since gone on to graduate school in physics at Columbia University. But after the Magic Olympics, where he performed dismally and was left completely humiliated and averse to the thought of ever doing magic again, he gradually came to realize that one way or another he would have to face his demons by mastering the art he’d only so far dabbled in.
Fooling Houdini chronicles how Stone became obsessed with developing his own personalized act and tweaking it to perfection, and how he went from being a pathetic amateur to a respectable semi-professional. The progress of a magician, Stone learns from Jeff McBride, follows “four cardinal stations of magic: Trickster, Sorcerer, Oracle, and Sage” (41). And the resultant story as told in Stone’s book follows an eminently recognizable narrative course, from humiliation and defeat to ever-increasing mastery and self-discovery.
Fooling Houdini will likely appeal to the same audience as did Moonwalking with Einstein, Joshua Foer’s book about how he ended up winning the U.S. Memory Championships. Foer, in fact, makes a guest appearance in Fooling Houdini when Stone seeks out his expertise to help him memorize a deck of cards for an original routine of his own devising. (He also gave the book a nice plug for the back cover.) The appeal of both books comes not just from the conventional narrative arc but also from the promise of untapped potential, a sense that greater mastery, and even a better life, lie just beyond reach, accessible to anyone willing to put in those enchanted ten thousand hours of training made famous by Malcolm Gladwell. It’s the same thing people seem to love about TED lectures, the idea that ideas will almost inevitably change our lives. Nathan Heller, in a recent New Yorker article, attempts to describe the appeal of TED conferences and lectures in terms that apply uncannily well to books like Foer’s and Stone’s. Heller writes,
Debby Ruth, a Long Beach attendee, told me that she started going to TED after reaching a point in her life when “nothing excited me anymore”; she returns now for a yearly fix. TED may present itself as an ideas conference, but most people seem to watch the lectures not so much for the information as for how they make them feel. (73)
The way they make us feel is similar to the way a good magic show can make us feel—like anything is possible, like on the other side of this great idea that breaks down the walls of our own limitations is a better, fuller, more just, and happier life. “Should we be grateful to TED for providing this form of transcendence—and on the Internet, of all places?” Heller asks.
Or should we feel manipulated by one more product attempting to play on our emotions? It's tricky, because the ideas undergirding a TED talk are both the point and, for viewers seeking a generic TED-type thrill, essentially beside it: the appeal of TED comes as much from its presentation as from its substance. (73-4)
At their core, Fooling Houdini and Moonwalking with Einstein—and pretty much every TED lecture—are about transforming yourself, and to a somewhat lesser degree the world, either with new takes on deep-rooted traditions, reconnection with ancient wisdom, or revolutionary science.
Foer presumably funded the epic journey recounted in Moonwalking with his freelance articles and maybe with expense accounts from the magazines he wrote for. Still, it seems you could train to become a serviceable “mental athlete” without spending all that much money. Not so with magic. Stone’s prose is much more quirky and slightly more self-deprecatory than Foer’s, and in one of his funniest and most revealing chapters he discusses some of the personal and financial costs associated with his obsession. The title, “It’s Annoying and I Asked You to Stop,” is a quote from a girlfriend who was about to dump him. The chapter begins,
One of my biggest fears is that someday I’ll be audited. Not because my taxes aren’t in perfect order—I’m very OCD about saving receipts and keeping track of my expenses, a habit I learned from my father—but because it would bring me face-to-face with a very difficult and decidedly lose-lose dilemma in which I’d have to choose between going to jail for tax fraud and disclosing to another adult, in naked detail, just how much money I’ve spent on magic over the years. (That, and I’d have to fess up to eating at Arby’s multiple times while traveling to magic conventions.) (159)
Having originally found magic fun and mentally stimulating, Stone ends up being seduced into spending astronomical sums by the terrible slight he received from the magic community followed by a regimen of Pavlovian conditioning based on duping delight. Both Foer’s and Stone’s stories are essentially about moderately insecure guys who try to improve themselves by learning a new skill set.
The market for a renewed sense of limitless self-potential is booming. As children, it seems every future we can imagine for ourselves is achievable—that we can inhabit them all simultaneously—so whatever singular life we find ourselves living as adults inevitably falls short of our dreams. We may have good jobs, good relationships, good health, but we can’t help sometimes feeling like we’re missing out on something, like we’re trapped in overscheduled rutted routines of workaday compromise. After a while, it becomes more and more difficult to muster any enthusiasm for much of anything beyond the laziest indulgences like the cruises we save up for and plan months or years in advance, the three-day weekend at the lake cottage, a shopping date with an old friend, going out to eat with the gang. By modern, middle-class standards, this is the good life. What more can we ask for?
What if I told you, though, that there’s a training regimen that will make you so much more creative and intelligent that you’ll wonder after a few months how you ever managed to get by with a mind as dull as yours is now? What if I told you there’s a revolutionary diet and exercise program that is almost guaranteed to make you so much more attractive that even your friends won’t recognize you? What if I told you there’s a secret set of psychological principles that will allow you to seduce almost any member of the opposite sex, or prevail in any business negotiation you ever engage in? What if I told you you’ve been living in a small dark corner of the world, and that I know the way to a boundless life of splendor?
If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become. This is how Foer writes about his mindset at the outset of his memory training, after he’d read about the mythic feats of memory champion Ben Pridmore:
What would it mean to have all that otherwise-lost knowledge at my fingertips? I couldn’t help but think that it would make me more persuasive, more confident and, in some fundamental sense, smarter. Certainly I’d be a better journalist, friend, and boyfriend. But more than that, I imagined that having a memory like Ben Pridmore’s would make me an altogether more attentive, perhaps even wiser, person. To the extent that experience is the sum of our memories and wisdom the sum of experience, having a better memory would mean knowing not only more about the world, but also more about myself. (7)
Stone strikes a similar chord when he’s describing what originally attracted him to magic back when he was an awkward teenager. He writes,
In my mind, magic was also a disarming social tool, a universal language that transcended age and gender and culture. Magic would be a way out of my nerdiness. That’s right, I thought magic would make me less nerdy. Or at the very least it would allow me to become a different, more interesting nerd. Through magic, I would be able to navigate awkward social situations, escape the boundaries of culture and class, connect with people from all walks of life, and seduce beautiful women. In reality, I ended up spending most of my free time with pasty male virgins. (5)
This last line echoes Foer's observation that the people you find at memory competitions are "indistinguishable from those" you'd find at a "'Weird Al' Yankovic (five of spades) concert" (189).
Though Stone's openness about his nerdiness at times shades into some obnoxious playing up of his nerdy credentials, it does lend itself to some incisive observations. One of the lessons he has to learn to become a better magician is that his performances have to be more about the audiences than they are about the tricks—less about duping and more about connecting. What this means is that magic isn't the key to becoming more confident that he hoped it would be; instead, he has to be more confident before he can be good at magic. For Stone, this means embracing, not trying to overcome, his nerdish tendencies. He writes,
Magicians like to pretend that they’re cool and mysterious, cultivating the image of the smooth operator, the suave seducer. Their stage names are always things like the Great Tomsoni or the Amazing Randi or the International Man of Mystery—never Alex the Magical Superdoofus or the Incredible Nerdini. But does all this posturing really make them look cooler? Or just more ridiculous for trying to hide their true stripes? Why couldn’t more magicians own up to their own nerdiness? Magic was geeky. And that was okay. (243)
Stone reaches this epiphany largely through the inspiration of a clown who, in a surprising twist, ends up stealing the show from many of the larger-than-life characters featured in Fooling Houdini. In an effort to improve his comfort while performing before crowds and thus to increase his stage presence, Stone works with his actress girlfriend, takes improv classes, and attends a “clown workshop” led by “a handsome man in his early forties named Christopher Bayes,” who begins by telling the class that “The clown is the physical manifestation of the unsocialized self… It’s the essence of the playful spirit before you were defeated by society, by the world” (235). Stone immediately recognizes the connection with magic. Here you have that spark of joyful spontaneity, that childish enthusiasm before a world where everything is new and anything is possible.
"The main trigger for laughter is surprise," Bayes told us, speaking of how the clown gets his laughs. "There's lots of ways to find that trigger. Some of them are tricks. Some of them are math. And some of them come from building something with integrity and then smashing it. So you smash the expectation of what you think is going to happen." (239)
In smashing something you’ve devoted considerable energy to creating, you’re also asserting your freedom to walk away from what you’ve invested yourself in, to reevaluate your idea of what’s really important, to change your life on a whim. And surprise, as Bayes describes it, isn’t just the essential tool for comedians. Stone explains,
The same goes for the magician. Magic transports us to an absurd universe, parodying the laws of physics in a whimsical toying of cause and effect. "Magic creates tension," says Juan Tamariz, "a logical conflict that it does not resolve. That's why people often laugh after a trick, even when we haven't done anything funny." Tamariz is also fond of saying that magic holds a mirror up to our impossible desires. We all would like to fly, see the future, know another's thoughts, mend what has been broken. Great magic is anchored to a symbolic logic that transcends its status as an illusion. (239)
Stone's efforts to become a Sage magician have up till this point in the story entailed little more than a desperate stockpiling of tricks. But he comes to realize that not all tricks jibe with his personality, and if he tries to go too far outside of character his performances come across as strained and false. This is the stock ironic trope that this type of story turns on—he starts off trying to become something great only to discover that he first has to accept who he is. He goes on, relaying the lessons of the clown workshop,
"Try to proceed with a kind of playful integrity," Chris Bayes told us. "Because in that integrity we actually find more possibility of surprise than we do in an idea of how to trick us into laughing. You bring it from yourself. And we see this little gift that you brought for us, which is the gift of your truth. Not an idea of your truth, but the gift of your real truth. And you can play forever with that, because it's infinite." (244)
What’s most charming about the principle of proceeding with playful integrity is that it applies not just to clowning and magic, but to almost every artistic and dramatic endeavor—and even to science. “Every truly great idea, be it in art or science,” Stone writes, “is a kind of magic” (289). Aspirants may initially be lured into any of these creative domains by the promise of greater mastery over other people, but the true sages end up being the ones who are the most appreciative and the most susceptible to the power of the art to produce in them that sense of playfulness and boundless exuberance.
Being fooled is fun, too, because it’s a controlled way of experiencing a loss of control. Much like a roller coaster or a scary movie, it lets you loosen your grip on reality without actually losing your mind. This is strangely cathartic, and when it’s over you feel more in control, less afraid. For magicians, watching magic is about chasing this feeling—call it duped delight, the maddening ecstasy of being a layperson again, a novice, if only for a moment. (291)
“Just before Vernon,” the Man Who Fooled Houdini, “died,” Stone writes, “comedian and amateur magician Dick Cavett asked him if there was anything he wished for.” Vernon answered, “I wish somebody could fool me one more time” (291). Stone uses this line to wrap up his book, neatly bringing the story full-circle.
Fooling Houdini is unexpectedly engrossing. It reads quite differently from the 2010 book on magic by neuroscientists Stephen Macknik and Susana Martinez-Conde, Sleights of Mind: What the Neuroscience of Magic Reveals about Our Everyday Deceptions, which they wrote with Sandra Blakeslee. For one thing, Stone focuses much more on the people he comes to know on his journey and less on the underlying principles. And, though Macknik and Martinez-Conde also use their own education in the traditions and techniques of magic as a narrative frame, Stone gets much more personal. One underlying message of both books is that our sense of being aware of what’s going on around us is illusory, and that illusion makes us ripe for the duping. But Stone conveys more of the childlike wonder of magic, despite his efforts at coming across as a stylish hipster geek. Unfortunately, when I got to the end I was reminded of the title of a TED lecture that’s perennially on the most-watched list, Brené Brown’s “The Power of Vulnerability,” a talk I came away from scratching my head because it seemed utterly nonsensical.
It’s interesting that duped delight is a feeling few anticipate and many fail to recognize even as they’re experiencing it. It is the trick that ends up being played on the trickster. Like most magic, it takes advantage of a motivational system that most of us are only marginally aware of, if at all. But there’s another motivational system and another magic trick that makes things like TED lectures so thrilling. It’s also the trick that makes books like Moonwalking with Einstein and Fooling Houdini so engrossing. Arthur and Elaine Aron use what’s called “The Self-Expansion Model” to explain what attracts us to other people, and even what attracts us to groups of people we end up wanting to identify with. The basic idea is that we’re motivated to increase our efficacy, not with regard to achieving any specific goals but in terms of our general potential. One of the main ways we seek to augment our potential is by establishing close relationships with other people who have knowledge or skills or resources that would contribute to our efficacy. Foer learns countless mnemonic techniques from guys like Ed Cooke. Stone learns magic from guys like Wes James. Meanwhile, we readers are getting a glimpse of all of it through our connection with Foer and Stone.
Self-expansion theory is actually somewhat uplifting in its own right because it offers a more romantic perspective on our human desire to associate with high-status individuals and groups. But the triggers for a sense of self-expansion are probably pretty easy to mimic, even for those who lack the wherewithal or the intention to truly increase our potential or genuinely broaden our horizons. Indeed, it seems as though self-expansion has become one of the main psychological principles exploited by politicians, marketers, and P.R. agents. This isn’t to say we should discount every book or lecture that we find uplifting, but we should keep in mind that there are genuine experiences of self-expansion and counterfeit ones. Heller’s observation that TED lectures are more about presentation than substance, for instance, calls to mind an experiment done in the early 1970s in which a Dr. Myron Fox gave a lecture titled “Mathematical Game Theory as Applied to Physician Education.” His audience included psychologists, psychiatrists, educators, and graduate students, virtually all of whom rated his ideas and his presentation highly. But Dr. Fox wasn’t actually a doctor; he was the actor Michael Fox. And his lecture was designed to be completely devoid of meaningful content but high on expressiveness and audience connection.
The Dr. Fox Effect is the term now used to describe our striking inability to recognize nonsense coming from the mouths of charismatic speakers.
And if keeping our foolability in mind somehow makes that sense of self-transcendence elude us today, "tomorrow we will run faster, stretch out our arms farther....
And one fine morning—"
Also read:
Sympathizing with Psychos: Why We Want to See Alex Escape His Fate as A Clockwork Orange
Especially in this age when everything, from novels to social media profiles, is scrutinized for political wrongthink, it’s important to ask how so many people can enjoy stories with truly reprehensible protagonists. Anthony Burgess’s “A Clockwork Orange” provides a perfect test case. How can readers possibly sympathize with Alex?
Phil Connors, the narcissistic weatherman played by Bill Murray in Groundhog Day, is, in the words of Larry, the cameraman played by Chris Elliott, a “prima donna,” at least at the beginning of the movie. He’s selfish, uncharitable, and condescending. As the plot progresses, however, Phil undergoes what is probably the most plausible transformation in all of cinema—having witnessed what he’s gone through over the course of the movie, we’re more than willing to grant the possibility that even the most narcissistic of people might be redeemed through such an ordeal. The odd thing, though, is that when you watch Groundhog Day you don’t exactly hate Phil at the beginning of the movie. Somehow, even as we take note of his most off-putting characteristics, we’re never completely put off. As horrible as he is, he’s not really even unpleasant. The pleasure of watching the movie must to some degree stem from our desire to see Phil redeemed. We want him to learn his lesson so we don’t have to condemn him or write him off.
In a recent article for the New Yorker, Jonathan Franzen explores what he calls “the problem of sympathy” by considering his own responses to the novels of Edith Wharton, who herself strikes him as difficult to sympathize with. Lily Bart, the protagonist of The House of Mirth, is similar to Wharton in many respects, the main difference being that Lily is beautiful (and of course Franzen was immediately accused of misogyny for pointing this out). Of Lily, Franzen writes,
She is, basically, the worst sort of party girl, and Wharton, in much the same way that she didn’t even try to be soft or charming in her personal life, eschews the standard novelistic tricks for warming or softening Lily’s image—the book is devoid of pet-the-dog moments. So why is it so hard to stop reading Lily’s story? (63)
Franzen weighs several hypotheses: her beauty, her freedom to act on impulses we would never act on, her financial woes, her aging. But ultimately he settles on the conclusion that all of these factors are incidental.
What determines whether we sympathize with a fictional character, according to Franzen, is the strength and immediacy of his or her desire. What propels us through the story then is our curiosity about whether or not the character will succeed in satisfying that desire. He explains,
One of the great perplexities of fiction—and the quality that makes the novel the quintessentially liberal art form—is that we experience sympathy so readily for characters we wouldn’t like in real life. Becky Sharp may be a soulless social climber, Tom Ripley may be a sociopath, the Jackal may want to assassinate the French President, Mickey Sabbath may be a disgustingly self-involved old goat, and Raskolnikov may want to get away with murder, but I find myself rooting for each of them. This is sometimes, no doubt, a function of the lure of the forbidden, the guilty pleasure of imagining what it would be like to be unburdened by scruples. In every case, though, the alchemical agent by which fiction transmutes my secret envy or my ordinary dislike of “bad” people into sympathy is desire. Apparently, all a novelist has to do is give a character a powerful desire (to rise socially, to get away with murder) and I, as a reader, become helpless not to make that desire my own. (63)
While I think Franzen here highlights a crucial point about the intersection between character and plot, namely that it is easier to assess how well characters fare at the story’s end if we know precisely what they want—and also what they dread—it’s clear nonetheless that he’s being flip in his dismissal of possible redeeming qualities. Emily Gould, writing for The Awl, found her response to Lily quite different from Franzen’s, and she expostulates in a parenthetical, “she was so trapped! There were no right choices! How could anyone find watching that ‘delicious!’ I cry every time!”
Focusing on any single character in a story the way Franzen does leaves out important contextual cues about personality. In a story peopled with horrible characters, protagonists need only send out the most modest of cues signaling their altruism or redeemability for readers to begin to want to see them prevail. For Milton’s Satan to be sympathetic, readers have to see God as significantly less so. In Groundhog Day, you have creepy and annoying characters like Larry and Ned Ryerson to make Phil look slightly better. And here is Franzen on the denouement of The House of Mirth, describing his response to Lily’s reflections on the time limit placed on her youthful beauty:
But only at the book’s very end, when Lily finds herself holding another woman’s baby and experiencing a host of unfamiliar emotions, does a more powerful sort of urgency crash into view. The financial potential of her looks is revealed to have been an artificial value, in contrast to their authentic value in the natural scheme of human reproduction. What has been simply a series of private misfortunes for Lily suddenly becomes something larger: the tragedy of a New York City social world whose priorities are so divorced from nature that they kill the emblematically attractive female who ought, by natural right, to thrive. The reader is driven to search for an explanation of the tragedy in Lily’s appallingly deforming social upbringing—the kind of upbringing that Wharton herself felt deformed by—and to pity her for it, as, per Aristotle, a tragic protagonist must be pitied. (63)
As Gould points out, though, Franzen comes quite late to this appreciation of the tragedy, even though his absorption in Lily’s predicament suggests he feels sympathy for her all along. Launching into a list of all the qualities that supposedly make the character unsympathetic, he writes, “The social height that she’s bent on securing is one that she herself acknowledges is dull and sterile” (62), an admission of ambivalence that readers like Gould take as a hopeful sign that she might eventually be redeemed. In any case, few of the other characters seem willing to acknowledge anything of the sort.
Perhaps the most extreme instance in which a bad character wins the sympathy of readers and viewers by being cast with a character or two who are even worse is that of Alex in Anthony Burgess’s novella A Clockwork Orange and the Stanley Kubrick film based on it. (Patricia Highsmith’s Mr. Ripley is another clear contender.) How could we possibly like Alex? He’s a true sadist who matter-of-factly describes the joyous thrill he gets from committing acts of “ultraviolence” against his victims, and he’s a definite candidate for a clinical diagnosis of antisocial personality disorder.
He’s also probably the best evidence for Franzen’s theory that sympathy is reducible to desire. It should be noted, however, that, in keeping with William Flesch’s theory of narrative interest, A Clockwork Orange is nothing if not a story of punishment. In his book Comeuppance, Flesch suggests that when we become emotionally enmeshed with stories we’re monitoring the characters for evidence of either altruism or selfishness and then attending to the plot, anxious to see the altruists rewarded and the selfish get their comeuppance. Alex strains the theory, though, because all he seems to want to do is hurt people, and yet audiences tend to be more disturbed than gratified by his drawn-out, torturous punishment. For many, there’s even some relief at the end of the movie and the original American version of the book when Alex makes it through all of his ordeals with his taste for ultraviolence restored.
Many obvious factors mitigate the case against Alex, perhaps foremost among them the whimsical tone of his narration, along with the fictional dialect, which lends the story a dream-like quality, one the film conveys just as brilliantly. There’s something cartoonish about all the characters who suffer at the hands of Alex and his droogs, and almost all of them return to the story later to exact their revenge. You might even say there’s a Groundhogesque element of repetition in the plot. The audience quickly learns too that all the characters who should be looking out for Alex—he’s only fifteen, we find out after almost eighty pages—are either feckless zombies like his parents, who have been sapped of all vitality by their clockwork occupations, or schemers who see him only as a means of furthering their own ambitions. “If you have no consideration for your own horrible self you at least might have some for me, who have sweated over you,” his Post-Corrective Advisor P.R. Deltoid says to him. “A big black mark, I tell you in confidence, for every one we don’t reclaim, a confession of failure for every one of you that ends up in the stripy hole” (42). Even the prison charlie (he’s a chaplain, get it?) who serves as a mouthpiece to deliver Burgess’s message treats him as a means to an end. Alex explains,
The idea was, I knew, that this charlie was after becoming a very great holy chelloveck in the world of Prison Religion, and he wanted a real horrorshow testimonial from the Governor, so he would go and govoreet quietly to the Governor now and then about what dark plots were brewing among the plennies, and he would get a lot of this cal from me. (91)
Alex ends up receiving his worst punishment at the hands of the man against whom he’s committed his worst crime. F. Alexander is the author of the metafictionally titled A Clockwork Orange, a treatise against the repressive government, and in the first part of the story Alex and his droogs, wearing masks, beat him mercilessly before forcing him to watch them gang rape his wife, who ends up dying from wounds she sustains in the attack. Later, when Alex gets beaten up himself and inadvertently stumbles back to the house that was the scene of the crime, F. Alexander recognizes him only as the guinea pig for a government experiment in brainwashing criminals he’s read about in newspapers. He takes Alex in and helps him, saying, “I think you can be used, poor boy. I think you can help dislodge this overbearing Government” (175). After he recognizes Alex from his nadsat dialect as the ringleader of the gang who killed his wife, he decides the boy will serve as a better propaganda tool if he commits suicide. Locking him in a room and blasting the Beethoven music he once loved but was conditioned in prison to find nauseating to the point of wishing for death, F. Alexander leaves Alex no escape but to jump out of a high window.
The desire for revenge is understandable, but even before realizing who it is he’s dealing with, F. Alexander reveals himself to be conniving and manipulative, like almost every other adult Alex knows. When Alex wakes up in the hospital after his suicide attempt, he discovers that the Minister of the Inferior, as he calls him, has had the conditioning procedure he originally ordered for Alex reversed, and is now eager for Alex to tell everyone how F. Alexander and his fellow conspirators tried to kill him. Alex is nothing but a pawn to any of them. That’s why it’s possible to be relieved when his capacity for violent behavior has been restored.
Of course, the real villain of A Clockwork Orange is the Ludovico Technique, the treatment used to cure Alex of his violent impulses. Strapped into a chair with his head locked in place and his glazzies braced open, Alex is forced to watch recorded scenes of torture, murder, violence, and rape, the types of things he used to enjoy. Only now he’s been given a shot that makes him feel so horrible he wants to snuff it (kill himself), and over the course of the treatment sessions he becomes conditioned to associate his precious ultraviolence with this dreadful feeling. Next to the desecration of a man’s soul—the mechanistic control obviating his free will—the antisocial depredations of a young delinquent are somewhat less horrifying. As the charlie says to Alex, addressing him by his prison ID number,
It may not be nice to be good, little 6655321. It may be horrible to be good. And when I say that to you I realize how self-contradictory that sounds. I know I shall have many sleepless nights about this. What does God want? Does God want goodness or the choice of goodness? Is a man who chooses the bad perhaps in some way better than a man who has the good imposed upon him? Deep and hard questions, little 6655321. (107)
At the same time, though, one of the consequences of the treatment is that Alex becomes incapable not just of preying on others but also of defending himself. Immediately upon his release from prison, he finds himself at the mercy of everyone he’s wronged and everyone who feels justified in abusing or exploiting him owing to his past crimes. Before realizing who Alex is, F. Alexander says to him,
You’ve sinned, I suppose, but your punishment has been out of all proportion. They have turned you into something other than a human being. You have no power of choice any longer. You’re committed to socially acceptable acts, a little machine capable only of good. (175)
To tally the mitigating factors: Alex is young (though the actor in the movie was twenty-eight), he’s surrounded by other bizarre and unlikable characters, and he undergoes dehumanizing torture. But does this really make up for his participating in gang rape and murder? Personally, as strange and unsavory as F. Alexander seems, I have to say I can’t fault him in the least for taking revenge on Alex. As someone who believes all behaviors are ultimately determined by countless factors outside the individual’s control, from genes to education to social norms, I don’t have that much of a problem with the Ludovico Technique either. Psychopathy is a primarily genetic condition that makes people incapable of experiencing the moral emotions that would prevent them from harming others. If aversion therapy worked to endow psychopaths with negative emotions similar to those the rest of us feel in response to Alex’s brand of ultraviolence, then it wouldn’t seem like any more of a desecration than, say, a brain operation to remove a tumor with deleterious effects on moral reasoning. True, the prospect of a corrupt government administering the treatment is unsettling, but this kid was going around beating, raping, and killing people.
And yet, I also have to admit (confess?), my own response to Alex, even at the height of his delinquency, before his capture and punishment, was to like him and root for him—this despite the fact that, contra Franzen, I couldn’t really point to any one thing he desires more than anything else.
For those of us who sympathize with Alex, every instance in which he does something unconscionable induces real discomfort, like when he takes two young ptitsas back to his room after revealing they “couldn’t have been more than ten” (47) (though he earlier says the girl Billyboy’s gang is “getting ready to perform on” is “not more than ten” [18], so it’s hard to know how literally to take him). We don’t like him, in other words, because he does bad things but in spite of them. At some point near the beginning of the story, Alex must give some convincing indications that by the end he will have learned the error of his ways. He must provide readers with some evidence that he is at least capable of learning to empathize with other people’s suffering and willing to behave in such a way as to avoid it, so when we see him doing something horrible we view it as an anxiety-inducing setback rather than a deal-breaking harbinger of his true life trajectory. But what is it exactly that makes us believe this psychopath is redeemable?
Phil Connors in Groundhog Day has one obvious saving grace. When viewers are first introduced to him, he’s doing his weather report—and he has a uniquely funny way of doing it. “Uh-oh, look out. It’s one of these big blue things!” he jokes when the graphic of a storm front appears on the screen. “Out in California they're going to have some warm weather tomorrow, gang wars, and some very overpriced real estate,” he says drolly. You could argue he’s only being funny in an attempt to further his career, but he continues trying to make people laugh, usually at the expense of weird or annoying characters, even when the cameras are off (not those cameras). Successful humor requires some degree of social acuity, and the effort that goes into it suggests at least a modicum of generosity. You could say, in effect, Phil goes out of his way to give the other characters, and us, a few laughs. Alex, likewise, offers us a laugh before the end of the first page, as he describes how the Korova Milkbar, where he and his droogs hang out, doesn’t have a liquor license but can sell moloko with drugs added to it “which would give you a nice quiet horrorshow fifteen minutes admiring Bog And All His Holy Angels And Saints in your left shoe with lights bursting all over your mozg” (3-4). Even as he’s assaulting people, Alex keeps up his witty banter and dazzling repartee. He’s being cruel, but he’s having fun. Moreover, he seems to be inviting us to have fun with him.
Probably the single most important factor behind our desire (and I understand “our” here doesn’t include everyone in the audience) to see Alex redeemed is the fact that he’s being kind enough to tell us his story, to invite us into his life, as it were. This is the magic of first person narration. And like most magic it’s based on a psychological principle describing a mental process most of us go about our lives completely oblivious to. The Jewish psychologist Henri Tajfel was living in France at the beginning of World War II, and he was in a German prisoner-of-war camp for most of its duration. Afterward, he went to college in England, where in the 1960s and 70s he would conduct a series of experiments that are today considered classics in social psychology. Many other scientists at the time were trying to understand how an atrocity like the Holocaust could have happened. One theory was that the worst barbarism was committed by a certain type of individual who had what was called an authoritarian personality. Others, like Muzafer Sherif, pointed to a universal human tendency to form groups and discriminate on their behalf.
Tajfel knew about Sherif’s Robbers Cave Experiment, in which groups of young boys were made to compete with each other in sports and over territory. Under those conditions, the groups of boys quickly became antagonistic toward one another, so much so that the experiment had to be moved into its reconciliation phase earlier than planned to prevent violence. But Tajfel suspected that group rivalries could be sparked even without such an elaborate setup. To test his theory, he developed what is known as the minimal group paradigm, in which test subjects engage in some task or test of their preferences and are subsequently arranged into groups based on the outcome. In the original experiments, none of the participants knew anything about their groupmates aside from the fact that they’d been assigned to the same group. And yet, even when the group assignments were based on nothing but a coin toss, subjects asked how much money other people in the experiment deserved as a reward for their participation suggested much lower amounts for people in rival groups. “Apparently,” Tajfel writes in a 1970 Scientific American article about his experiments, “the mere fact of division into groups is enough to trigger discriminatory behavior” (96).
Once divisions into us and them have been established, considerations of fairness are reserved for members of the ingroup. While the subjects in Tajfel’s tests aren’t displaying fully developed tribal animosity, they do demonstrate that the seeds of tribalism are disturbingly easy to sow. As he explains,
Unfortunately it is only too easy to think of examples in real life where fairness would go out the window, since groupness is often based on criteria much more weighty than either preferring a painter one has never heard of before or resembling someone else in one's way of counting dots. (102)
I’m unaware of any studies on the effects of various styles of narration on perceptions of group membership, but I hypothesize that we can extrapolate the minimal group paradigm into the realm of first-person narrative accounts of violence. The reason some of us like Alex despite his horrendous behavior is that he somehow manages to make us think of him as a member of our tribe—or rather to think of ourselves as members of his—while everyone he wrongs belongs to a rival group. Even as he’s telling us about all the horrible things he’s done to other people, he takes time out to introduce us to his friends, describe places like the Korova Milkbar and the Duke of York, even the flat at Municipal Flatblock 18A where he and his parents live. He tells us jokes. He shares with us his enthusiasm for classical music. Oh yeah, he also addresses us, “Oh my brothers,” beginning seven lines down on the first page and again at intervals throughout the book, making us what anthropologists call his fictive kin.
There’s something altruistic, or at least generous, about telling jokes or stories. Alex really is our humble narrator, as he frequently refers to himself. Beyond that, though, most stories turn on some moral point, so when we encounter a narrator who immediately begins recounting his crimes we can’t help but anticipate the juncture in the story at which he experiences some moral enlightenment. In the twenty-first and last chapter of A Clockwork Orange, Alex does indeed undergo just this sort of transformation. But American publishers, along with Stanley Kubrick, cut this part of the book because it struck them as a somewhat cowardly turning away from the reality of human evil. Burgess defends the original version in an introduction to the 1986 edition,
The twenty-first chapter gives the novel the quality of genuine fiction, an art founded on the principle that human beings change. There is, in fact, not much point in writing a novel unless you can show the possibility of moral transformation, or an increase in wisdom, operating in your chief character or characters. Even trashy bestsellers show people changing. When a fictional work fails to show change, when it merely indicates that human character is set, stony, unregenerable, then you are out of the field of the novel and into that of the fable or the allegory. (xii)
Indeed, it’s probably this sense of the story being somehow unfinished or cut off in the middle that makes the film so disturbing and so nightmarishly memorable. With regard to the novel, readers could be forgiven for wondering what the hell Alex’s motivation was in telling his story in the first place if there was no lesson or intuitive understanding he thought he could convey with it.
But is the twenty-first chapter believable? Would it have been possible for Alex to transform into a good man? The Nobel Prize-winning psychologist Daniel Kahneman, in his book Thinking, Fast and Slow, shares with his own readers an important lesson from his student days that bears on Alex’s case:
As a graduate student I attended some courses on the art and science of psychotherapy. During one of these lectures our teacher imparted a morsel of clinical wisdom. This is what he told us: “You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several clinicians, and all failed him. The patient can lucidly describe how his therapists misunderstood him, but he has quickly perceived that you are different. You share the same feeling, are convinced that you understand him, and will be able to help.” At this point my teacher raised his voice as he said, “Do not even think of taking on this patient! Throw him out of the office! He is most likely a psychopath and you will not be able to help him.” (27-28)
Also read:
Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration
And:
Let’s Play Kill Your Brother: Fiction as a Moral Dilemma Game