
Dennis Junk

From the Blood of the Cannibal Moon: He Borara Part 3 Chapter 1

Origin myth of the Yąnomamö. Sample chapter of “He Borara: a Novel about an Anthropologist among the Yąnomamö.”

Salesian Mission Outpost

          The move into the mythic past encompasses a transition from existence to essence. When the events in these stories told by great shamans took place, the characters and places described in them had no beginnings and no ends. You must understand, if you want to appreciate the underlying truth of the stories, that the boundaries separating the mythic realm from the time-bound world are sometimes porous, but never more so than they were in the time of Moonblood. So there’s no contradiction when a shaman says, in a time before men, two men drew their bows and fired arrows at the moon.

            These two brothers, Uhudima and Suhirima, were not men as we know them today, men like you and me. They were no badabö, which means “those who are now dead,” but also means “the original humans,” and they were part human, part animal, and part spirit. The moon, Peribo, likewise partook of multiple essences, and he nightly stole away with members of the no badabö village to press them between two pieces of cassava bread and devour them, until the two brothers decided Peribo must be stopped. As the moon retreated toward the horizon, Uhudima aimed his bow and loosed an arrow. He missed. Then he missed again. The Yąnomamö say Uhudima was sina—a lousy shot.

            But his brother Suhirima was an excellent marksman. Even after Uhudima missed shot after shot, allowing Peribo to escape nearly all the way to the horizon, Suhirima was able to take steady aim with his own bow and shoot an arrow that planted itself deep in Peribo’s belly. The wound disgorged great gouts of blood that fell to the earth as Peribo’s screams echoed across the sky. Wherever the blood landed sprang up a Yąnomamö—a true human being like the ones in these villages today. Something of the moon’s essence, his fury and bloodthirst, transferred to these newly born beings, making them waiteri, fearless and fiercely protective of their honor.

            The first Yąnomamö no sooner sprang forth from the blood of the cannibal moon than they set about fighting and killing each other. They may have gone on to wipe themselves entirely out of existence, but in some parts of the hei kä misi, the layer of the cosmos we’re standing on now, the blood of the moon was diluted by water from streams and ponds and swamps. The Yąnomamö who arose from this washed blood were less warlike, fighting fiercely but not as frequently. The purest moon blood landed near the center of the hei kä misi, and even today you notice that the Yąnomamö in this area are far fiercer than those you encounter as you move toward the edges of the layer, a peripheral region where the ground is friable and crumbling.

            Even the more peaceful Yąnomamö far from where the purest moon blood landed would have died out eventually because there were no women among them. But one day, the headman from one of the original villages was out with his men gathering vines for making hammocks and for lashing together support beams for their shabono. He was pulling a vine from the tree it clung to when he noticed an odd-looking fruit. Picking it from the tree and turning it in his hand, he saw that the fruit, which is called wabu, had a pair of eyes that were looking back at him.

            The headman wondered aloud, “Is this what a woman looks like?” He satisfied his curiosity, examining the fruit closely for several long moments, but he knew there was still work to be done, so he tossed the wabu on the ground and went back to pulling vines from trees.

            What he didn’t see was that upon hitting the ground the wabu transformed into a woman, like the ones we see today, only this one’s vagina was especially large and hairy, traits that fire Yąnomamö men’s lust. When the men finished pulling down their vines and began dragging the bundles back along the jungle trail to their shabono, this original woman succumbed to her mischievous streak. She followed behind the men, running up to step on the trailing ends of the vines, causing the men to drop their entire bundle, then jumping behind trees whenever they turned back to look. They grew frustrated to the point of rage. Finally, when the men had nearly reached their shabono, the woman stepped on the end of a vine and remained standing there when the men turned around.

            Seeing this creature with strange curves and with her great, hairy vagina in place of an up-tied penis, the men felt their frustration commingle with their lust, whipping them into a frenzy. They surrounded her and took turns copulating with her. Once they’d each taken a turn, they brought her back to their shabono, where the rest of the men of the village were likewise overcome with lust and likewise took turns copulating with her. The woman stayed in the shabono for many months until her belly grew round and she eventually gave birth to a baby, a girl. As soon as this girl came of age, the men took turns copulating with her, just as they had with her mother. And so it went. Every daughter conceived through such mass couplings mothered her own girl, and the cycle continued.

            Now there are women in every shabono, and all Yąnomamö trace their ancestry back to both Moonblood and Wabu, though it’s the male line they favor.

            Around this time, back when time wasn’t fixed on its single-dimensional trajectory, a piece of the hedu kä misi—the sky layer, the underside of which we see whenever we look up—fell crashing into hei kä misi. The impact was so powerful that the shabono the piece of sky landed on, Amahiri-teri, was knocked all the way through and out the bottom of the layer, finally coming to form a subterranean layer of its own. Unfortunately, the part of hei kä misi that fell through the crater consisted only of the shabono and the surrounding gardens, so the Amahiri-teri have no jungles in which to hunt. Since these people are no badabö, they are able to send their spirits up through the layer separating us. And without jungles to hunt in, they’ve developed an insatiable craving for meat.

            Thus the Amahiri-teri routinely rise up from under the ground to snatch and devour the souls of children. Most of the spirits the shamans do battle with in their daily rituals are sent by shamans from rival villages to steal their children’s souls, but every once in a while they’re forced to contend with the cannibalistic Amahiri-teri.

            When Yąnomamö die, their buhii, their spirits, rise up through to the surface of the sky layer, hedu kä misi, to the bottom of which are fixed the daytime and nighttime skies as we see them, but the top surface of which mirrors the surface of this layer, with jungles, mountains, streams, and of course Yąnomamö, with their shabonos and gardens. Here too reside the no badabö, but since their essences are mixed they’re somewhat different. Their spirits, the hekura, regularly travel to this layer in forms part animal and part human. It is with the hekura that the shamans commune in their daily sessions with ebene, the green powder they shoot through blowguns into each other’s nostrils.

            The shamans imitate particular animals to call forth the corresponding hekura and make requests of them. They even invite the hekura to take up residence inside their bodies—as there seems to be an entirely separate cosmos within their chests and stomachs. This is possible because the hekura, who travel down to earth from high mountains on glittering hammock strings, are quite tiny. With the help of ebene, they appear as bright flashes flitting about like ecstatic butterflies over a summertime feast.

            “Sounds a bit like our old notion of fairies,” says the padre. “Fascinating. I’ve heard bits and pieces of this before, but it’s truly fantastic—and it’s quite impressive you were able to pick all of this up in just over a month.”

            “Oh, don’t write it down yet,” Lac says. “It’s only preliminary. Even within Bisaasi-teri, there’s all kinds of disagreement over the details. And I’m still struggling with the language—to put it mildly. Lucky for me, they do the rituals and reenact the myths every day when they take their hallucinogens. I think many of the details of the stories actually exist primarily because they’re fun to reenact, and fun to watch. You should see the shamans doing the bit about the first woman stepping on the ends of the vines. Or the brothers shooting their arrows at the moon.”

            “Sex and violence and cannibalism. The part about the woman and the fruit—wabu, did you call it?—is familiar-sounding to us Bible readers, no? But there’s no reference to, no awareness of sin or redemption. Sad really.”

            The two men sit in chairs, in an office with clean white walls, atop a finished wood floor.

            “They also have a story about a flood that rings a bell,” Lac says. “When they realized I was beginning to understand a lot of what they were saying, they started asking me if I had drowned and been reincarnated. They explained there was once a great flood that washed away entire villages. Some Yąnomamö survived by finding floating logs to cling to, but they were carried away to the edges of hei kä misi. When they didn’t return, everyone figured they must have drowned. But one of their main deities, Omawä, went to the edge and fished their bodies out of the water. He wrung them out, breathed life back into them, and sent them back home on their floating logs—which may be a reference to the canoes they see Ye’kwana traveling in. Of course, we come by canoe as well. They conclude we must be coming from regions farther from the center of this layer, because we’re even more degenerated from the original form they represent, and our speech is even more ‘crooked,’ as they call it.”

            The padre rolls his head back and laughs from his belly. Lac can’t help laughing along. Father Santa Claus here.

            “Their myths do seem to capture something of their character,” the padre says, “this theme of a free-for-all with regard to fighting and killing and sex, for instance.”

            Lac resists pointing out the ubiquity of this same theme throughout the Old Testament, which to him is evidence that both sources merely reflect the stage of their respective society’s evolution at the time of the stories’ conceptions. He says instead, “It’s not a total free-for-all. They find the Amahiri-teri truly frightening because they feel they’re always at risk of turning to cannibalism themselves—and they find the prospect absolutely loathsome and disgusting. I think that’s why they prefer their meat so well-done. I ate a bloody tenderloin I cut from a tapir I’d shot in front of some of the men. It was barely cooked—how I like it. The men were horrified, accusing me of wanting to become a jaguar, an eater of raw human flesh. So they do have their taboos.”

            Lac wishes he could add that the moral dimension of the story of Genesis is overstressed. By modern, civilized standards, the original sin stands out as a simple act of disobedience, defiance. You live in paradise, but a lordly presence commands you not to eat fruit from the Tree of Knowledge of Good and Evil: this sounds a bit like, “You have it made, just don’t ask questions.” Or else, “I’ve given you so much, don’t you dare question me!” One could argue you’d have a moral obligation to eat the fruit. Or you could even take morality out of the interpretation altogether and look at the story as an allegory of maturation from a stage of naïve innocence to one of more worldly cynicism, like when your parents can no longer protect you from the harsh realities of the world—in no small part because you insist on going forth to investigate them for yourself.

            Had Laura eaten from the Tree of Knowledge when she discovered the other women she encountered at U of M were only there to meet prize marriage prospects? Was that her banishment from paradise? Did I eat the same fruit by coming to Bisaasi-teri and witnessing firsthand how much of what I’d learned from my professors was in dire need of questioning and revision? Of course, I’d eaten that fruit before. We probably all have once or twice by the time we’re approaching our thirties.

            “Taboos and temptations, indeed,” says the padre. “I wonder,” he adds somberly, “what the final mix of beliefs will look like when my time in the territory has come to an end. Hermano Mertens says the Indians he speaks to across the river from you at Mavaca confuse the name Jesus with the name of one of the figures from their myths.”

            “Yes, Yoawä—he’s Omawä’s twin brother.” Lac forbears to add that Yoawä is usually the uglier, clumsier, and more foolish of the pair in the stories. But he does say, “Chuck Clemens did once tell me it was all but impossible to convince the Yąnomamö to reject their religion wholesale. The best he could hope for was to see them incorporate the Bible stories into their own stories about the no badabö.”

            The kindly padre chuckles. “Ah, that’s how it looks for the first generation. For the children, the balance will have shifted. For the grandchildren, the stories they tell now will be mere folktales—if they’re not entirely forgotten… I can tell that prospect disturbs you. You find fascination in their culture and their way of life. That’s only natural, you being an anthropologist. And your friend is a Protestant. Here we all are in the jungle, battling it out for the savages’ souls. It was ever thus.”

            It was not ever thus, Lac thinks. Those savages used to be exterminated by the hundreds for their land, and for their madohe if they had any. People like me were called heretics and burned at the stake. “One of my informants,” he says, “tells me the hekura find it repulsive when humans have sex. He says he’d like to become a shabori, a shaman, himself, but the initiation entails a year of fasting, which reduces the men to walking skeletons, and a year of sexual abstinence. See, to become hekura oneself, you must first invite the hekura spirits into your chest. And they won’t come if you’re fooling around in hammocks or in the back of the garden with women. The hekura believe sex is shami, filthy. You can’t help but note the similarity to the English word shame.”

            The padre laughs and Lac laughs easily alongside him. There’s no tension between him and this priest, obviously a beneficent man. “I suspect,” Lac says, “the older shabori tell the initiates the hekura think sex is shami because they want to neutralize the competition for a year. So many of their disputes are over jealousy, liberties taken with wives, refusals to deliver promised brides.”

            “They receive no divine injunction to seek peace and love their fellow man?” the padre asks, though the utterance hovers in the space between question and statement.

            “That’s not entirely true. One of my informants tells me they face judgment when they die. A figure named Wadawadariwä asks if they’ve been generous in mortal life. If they say yes, they’re admitted into hedu, the higher layer. But if not, they’re sent to Shobari Waka, a place of fire.” Lac pauses to let his Spanish catch up with his thoughts. The Yąnomamö terms keep getting him tangled up, making him lose track of which word fits with which language. He imagines that, while in his mind he’s toggling back and forth between each tongue somewhat seamlessly, in reality he’s probably speaking a nearly incoherent jumble. “But when I asked the man how seriously the Yąnomamö take this threat of a fiery afterlife, he laughed. You see, Wadawadariwä is a moron, and everyone knows to tell him what he wants to hear. They lie. He has no way of knowing the truth.”

            Now the padre’s attention is piercing; he’s taking note. Lac half expects him to get up from his straight-backed wicker chair and find a notepad to jot down what he’s heard. Nice work, Shackley. You just helped the Church make inroads toward frightening the Yąnomamö into accepting a new set of doctrines.

            “Curious that they’d have such a belief,” the padre says. “I wonder if it’s not a vestige of some long-ago contact with Christians—or some rumor passed along from neighboring tribes that got incorporated into their mythology in a lukewarm fashion.”

            “I had the same thought. The biggest question I have now though is whether my one informant is giving me reliable information. You work with the Yąnomamö; you know how mischievous they are. They all want to stay close to me for the prime access to trade goods, but I can tell they don’t think much of me. Ha! I’m no better off than Wadawadariwä, an idiot they’ll say anything to if it gets them what they want.”

            This sets the padre to shuddering with laughter again. “Oh, my friend, what do you expect? You show up, build a mud hut, and start following them around all day, pestering them with questions. You can see why they’d be confused about your relative standing. But I understand you want to report on their culture and their way of life. Maybe you really are doing that the most effective way, but maybe you could learn just as much while being much more comfortable.” He stretches and swings his arm in a gesture encompassing his own living conditions. “It’s hard to say. But since the Church’s goal is different—our goal is first to persuade them—we feel it best to establish clear boundaries, clear signals of where we stand, how our societies would fare if forced to battle it out, in a manner of speaking. And battle it out we must, for the sake of their immortal souls.” He makes a face and wiggles his fingers in accompaniment to this last sentiment.

            Lac appreciates him making light of such a haughty declamation, but he’s at a loss how to interpret the general message. Does the padre not really believe he has to demonstrate the superiority of his culture if he hopes to save the Yąnomamö’s souls? Or does he simply recognize how grandiose this explanation of his mission must sound to a layman, a scientist no less—or a man aspiring to be one at any rate?

            “But naturally we’ve had our difficulties,” the padre continues, pausing to scowl over his interlaced fingers. “I hope however that once we’ve established a regular flight schedule, landing and taking off from Esmeralda, many of those difficulties will be resolved.”

            “A regular flight schedule?”

            “Yes, we’re negotiating with the Venezuelan Air Force to start making regular flights out here, maybe make some improvements to the air strip. Ha ha. I’m afraid I’m not the adventurer you are, Dr. Shackley. I like my creature comforts, and those comforts are often critical to bringing the natives to God. As Hermano Mertens is discovering now at Boca Mavaca.”

            Lac remembers standing knee-deep in the Orinoco, watching smoke twist up in its gnarled narrow column, wondering who it could be, wondering if whoever it was might have some salvation on offer. He’d only been in Bisaasi-teri a few days. It was a rough time. The padre, naturally enough, asked after the lay brother setting up a new mission outpost across the river from Bisaasi-teri soon after they’d introduced themselves. “Honestly,” Lac answered, “I just managed to get my hands on the dugout because the Malarialogìa are trying to get pills to all the villages. It’s a really bad year for malaria. Many of the children of Bisaasi-teri are afflicted. I did hear some rumors about a construction project of some sort going on across the river, but I have yet to visit and check it out for myself.”

            “Hermano Mertens had such high hopes for what he could accomplish there,” the padre says now. He pauses. One of the traits that Lac has quickly taken to in the padre is his allowance for periods of silence in conversation. Back in the States, people are so desperate to fill gaps in dialogue that they pounce whenever you stop to mull over a detail of what’s been said. The Yąnomamö are worse still. With them, you can forget the difficulty of getting a word in; every syllable you utter throughout the whole conversation will be edgewise, if not completely overlain. No one ever speaks without two or three other people speaking simultaneously.

            The padre is thoughtful, curious, so he offers any interlocutor opportunities for contemplation. Lac is relieved that the one shortwave radio in the region isn’t guarded by a man who’s succumbed to the madness of the jungle, a man who’s filled with delusions and completely unpredictable in his demands and threats, like the ones the Venezuelans downriver had described to him in warning—to encourage him to both watch out for it in others and to avoid succumbing to it himself—a reprising of the warning he originally received from his Uncle Rob when they were trekking across the UP. Lac is glad to have instead found in Padre Morello a man who’s warm, friendly, thoughtful—thoughtful enough to speak clearly and at a measured pace to help Lac keep up with the Spanish—and kind, a perspiring Santa of the tropics, with a round belly, scraggly white beard, and exiguous hair thinning to a blur of floating mist over the crown of his head. The figure he cuts is disarming in every aspect, except the incongruously dark and sharp-angled eyebrows, a touch of Mephistopheles to his otherwise jolly visage.

            Still—first Clemens, now Morello, both hard to dislike, both hard to wish away from the jungle, away from the Yąnomamö, whose way of life it is their mission to destroy. Yet how many nights over the past month have I, he thinks, lain awake in my hammock listening to the futile bellicose chants of the village shabori trying to wrest the soul of some child back from the hekura sent by the shabori of some rival village? Every illness is for the Yąnomamö the result of witchcraft. And there’s a reason the demographic age pyramid is so wide at the base and narrow at the top. There are kids everywhere you go, everywhere you look, but how many of them will live long enough to reach the next age block?

            As much as Lac abhors the image of so many Yąnomamö kids sitting at desks lined up in neat rows, wearing the modest garb of the mission Indian, he’s begun to see that those kids at least won’t have to worry about missing out on their entire adolescence and adulthood because they picked up a respiratory infection that could easily be cured with the medicine a not-too-distant neighbor has readily on hand.

            “You know, every president in the history of Venezuela has attended a Catholic Salesian school,” the padre says. “It shouldn’t be too difficult convincing the officials in Caracas how important it is that we are able to supply ourselves.” He’s talking about his airstrip in Esmeralda. The influx has begun; now it will want to gather momentum. How long before the region bears not even the slightest resemblance to what it is now? How long before Yąnomamöland is a theme park for eager and ingenuous young Jesus lovers?

            Soon after he’d docked the new dugout—newly purchased anyway—here at the mission, the padre led him to the office with the shortwave. As both men suspected would be the case, no one answered their calls. Regular check-ins are scheduled for 6 am every day. Outside of that, you’re unlikely to reach anyone. Lac had left Bisaasi-teri with the Malarialogìa men at first light. They said they were returning to Puerto Ayacucho, so Lac agreed to take them as far as Esmeralda after buying their motorized canoe. They passed Ocamo about halfway through the trip, but Lac pressed on to fulfill his promise. He was tempted to stay in Esmeralda, but instead turned around to make sure he could make it back to the mission outpost, with its large black cross prominent against the white gable you could see from the river, before it was too late in the day. He had doubts about his welcome among the Salesians. Had they heard of his dealings with the New Tribes missionaries? Would they somehow guess, perhaps tipped off by his profession, that he was an atheist?

            He was in the canoe all day, still feels swimmy in his neck and knees, still feels the vibrating drone of the motor over every inch of his skin, but nowhere so much as in his skull, like a thousand microscopic termites boring into the bone, searching out the pulpy knot behind his eyes. He’s a wraith, wrung of substance, a quivering unsubstantial husk, with heavy eyelids. But all his vitality would return in an instant were he to hear Laura’s voice—or any mere confirmation of her existence on the other side of these machines connecting them through their invisible web of pulsing energy. Just an acknowledgement that while she may not be available at that particular instant, she is still at the compound, clean and safe and well provided for, her and the kids; that would pull him back from what he fears is the brink of being lost to this hollowed-out nonexistence forever. They’ll try the radio again before he retires to the hammock he’s hung in the shed where the good padre has let him store his canoe. Their best bet of reaching someone, though, will be in the morning. He can talk to Laura, say his goodbyes to the padre, perhaps set a time for his next visit, and be back to Bisaasi-teri well before noon, before the villagers are done with the day’s gardening, before it’s too sweltering to do anything but gossip and chat.

            He yawns. The padre is still talking about the airstrip, about how convenient it will be to them both, about how silly the persistent obstacles and objections are. They’re going to win, Lac thinks: the Catholics. A generation from now there will be but a few scattered villages in the remotest parts of the jungle. The rest of the Yąnomamö will be raised in or near mission schools, getting the same education as all the past presidents of Venezuela. Could I be doing more to stop this? Should I be? At least this man’s motives seem benign, and he’s offering so many children a better chance of reaching adulthood—maybe not the children of this generation but more surely those of the next.

            The padre has access to a small airstrip here at Ocamo too, and he’s always sending for more supplies to build up the compound, including the church, the school, the living quarters, and the comedor, which is like a cafeteria. You can’t really get much in, he complained, on the planes that can land here. But it’s the steady trickle that concerns Lac. Morello talks about his role here in the jungle as consisting mainly of helping to incorporate the Indian populations into the larger civilization. Not extermination, of course—we’re past that—but assimilation. It’s either that or they slowly die off as ranchers, loggers, and miners dispossess them of their territories, or poison their water, a piece at a time, introducing them all the while to diseases they have no antibodies to combat, and taking every act of self-defense as a provocation justifying mass slaughter.

            The padre wants the Indians to be treated the same as everyone else, afforded all the same rights: a tall order considering Venezuela as a country has a giant inferiority complex when it comes to its own general level of technological advancement. You take some amenity that’s totally lacking in whatever region you’re in, and that’s exactly what the officials, and even the poorest among the citizenry, will insist most vociferously they have on offer, more readily available than anywhere else you may visit in the world. Just say the word. The naked Indians running around in the forests are an embarrassment, so far beneath the lowermost rung on the social ladder they’d need another ladder to reach it, barely more than animals, more like overgrown, furless monkeys. That’s the joke you hear, according to the Malarialogìa men. The funny thing is, to the Yąnomamö, it’s us nabä who are subhuman. Look at all the hair we have on our arms and legs, our chests and backs. We’re the ones who look like monkeys—and feel like monkeys too after spending enough time in the company of these real humans.

            Without our dazzling and shiny, noisy and deadly technology, there’d be no way to settle the conflicting views. But we know it will be the nabä ways that spread unremittingly, steamrolling all of Yąnomamöland, not the other way around. Insofar as the padre and his friends are here to ease the transition, saving as many lives as possible from the merciless progress of civilization and all the attendant exploitation and blind destruction, who is Lac to fault him for being inspired by backward beliefs? Of course, it’s not the adoption of nabä ways in general the Salesians hope to facilitate; it’s the ways of the Catholic Church. The Salesians had no interest in the Indians’ plight—particularly not a foot people like the Yąnomamö, living far from the main waterways—until the New Tribes began proselytizing here. The Christians, Lac thinks, are plenty primitive in their own way; they’ve carried on their own internecine wars for centuries.

            “Don Pedro will be there when I check in at 6 tomorrow morning. I’ll have him try to reach the institute in Caracas by phone and then patch us through so you can talk to your wife.” The padre pauses thoughtfully, and then, donning a devilish grin, says, “I wonder: you said both you and your wife attended Catholic schools in Michigan. Did you also have a Catholic wedding ceremony?”

            Lac appreciates the teasing; the padre is charming enough to pass it off as part of a general spirit of play, one he infuses at well-timed points throughout the conversation. Ah, to speak to a civilized man, Lac savors, whose jokes are in no wise malicious. Smiling, he answers, “Oh, Laura’s mother would never have accepted anything else as binding.” The two men laugh together. “And you?” Lac counters. “How does the Church view your readiness to converse with people of other faiths?”

            “Other faiths?” the padre asks skeptically.

            So he has guessed I’m an atheist.

            “Miraculously enough,” the padre says without waiting for a response, “I’ve just read in our newsletter that the pope recently issued an edict declaring priests are free to pursue open dialogue with Protestants and nonbelievers, and that such exchanges may even bear spiritual fruit in our quest to become closer to God. What this means, my friend, is that I don’t have to feel guilty about enjoying this conversation so much.”

            “And many more thereafter I hope.”

            The padre smiles, his teeth flashing whiter than his scraggly beard. “You know,” he says, “throughout the war, I lived in the rectory of a church in my hometown near Turin. Now this was Northern Italy, so there were planes flying overhead all the time. We’d often hear their guns rattling like hellish thunder chains in the sky, and on many occasions we felt obliged to rush to the site of a crash. For years, the Germans had the upper hand, but if we found an Allied pilot at the crash site, we’d bring him back to the church, shelter him, and keep him hidden from any patrols. Had we been caught harboring these enemy pilots, feeding them, nursing their injuries, it would have meant the firing squad for us for sure. But what could we do?

            “When the tides shifted and it was the Allied forces who dominated the skies, we started finding Axis pilots at the crash sites, and now it was the Allied firing squads we feared.” The padre leans forward with a mischievous twinkle in his eye, holds a hand up to his mouth and whispers, “Here’s the best part: a couple years after the war, I started receiving a pension, an expression of gratitude from the military for what I’d done saving the lives of their pilots, in recognition of what I’d risked—first from the Axis side, then another one later from the Allied side.” He leans away, his head rolling back to release a booming peal of laughter.

            Lac too laughs from deep in his belly, wondering, could this story be true? The doubt somehow makes it funnier. Does it even matter? He’s already regretting his plans to leave the mission outpost tomorrow after talking to Laura, already looking forward to his next visit to Ocamo.

The padre has told him he’s writing a book about his life in the jungle, prominently featuring his mission work among the Yąnomamö, and he’s interested in any photographs Lac may be able to provide from his own fieldwork. In exchange, Lac will be free to visit the outpost at Ocamo anytime and make use of the shortwave. He will also be free to store extra supplies and fuel for his dugout’s motor—which will make it easier for him to reach all the towns downstream.

You help me with my book; I’ll help you with yours. Sounds like an excellent bargain to me. But now that you have all these ways of reaching and communicating with the world outside the Yąnomamö’s, you really need to forget about them and get back to work.

*

            “—chlan –ell me –u’re alight.” Laura’s voice. English. Bliss.

            One candle bowing across the vast distance to light another. The hollowness inside him fills with the warm dancing glow.

Padre Morello discreetly backs out of the room upon hearing the voice come through, and Lac is grateful because he has to choke back a sob and draw in a deliberately measured breath before he can say, “I’m alright, Laura. Healthy and in one piece. Though I’ve lost a bunch of weight. How are you and the kids?”

            “Healthy and in one piece. They miss you. I think we’re all feeling a little trapped here. There’s another family, though, the Hofstetters—they’ve been a godsend.”

            Lac’s mind seamlessly mends the lacunae in the transmission—one of the easier linguistic exercises he’s been put to lately—but every missing syllable elevates his heart rate. He leans forward until his cheek is almost touching the surface of the contraption. He asks, “Are you getting everything you need by way of supplies and groceries?” He feels a pang in recognition of his own pretense at having any influence whatsoever over his family’s provisioning; the question is really a plea for reassurance.

            “The Hofstetters have been taking us in their car every week to a grocery store down in the city.” Lac is already imagining a strapping husband, disenchanted with marriage, bored with his wife; he’d be some kind of prestigious scientist no doubt, handsome, over six feet. “Dominic had a fever last week, but it went down after we gave him some aspirin and put him to bed.” We? “He misses French fries, says the ones here aren’t right. He wants McDonalds.”

            Lac decides to break the news preemptively, before she has a chance to mention the plan for them to come live with him in the field. “Laura, I have some bad news. The conditions are more prim… it’s rougher than I anticipated.” He proceeds with a bowdlerized version of his misadventures among the Yąnomamö to date, adding that with time he should be able to learn the ropes of the culture and secure regular access to everything they need. “Don’t worry, honey, I’m through the most risky part myself, but even I have to be cautious at all times. I need to make absolutely sure you’ll all be safe before I bring you here.”

            “Are the Indians dangerous?” she asks innocently. “When they’re demanding your trade goods like you described, do they ever threaten you?”

“Oh yes.” She knows I’m holding back, he thinks. Am I just making her worry even more? He hurries to add, “They’re full of bluster and machismo. It was intimidating at first, but I picked up on the fact that they’re mostly bluffing.”

            “Mostly?”

“You have to understand, all the men are really short. If they get too aggressive I can stand looking down at them. Ha. It’s the first time in my life I’ve been the tallest guy around. And the key I’ve found is to stand your ground, not budge, make sure it’s known to everyone that you’re no easy mark. Now I worry more about the kids making off with anything I leave out in the open.” He turns away and curses himself. Until that last sentence, he’d managed to stick to the technical truth, however misleading the delivery. But he intuited a need in Laura for him to segue onto a more trivial threat, so he brought up the kids, even though it’s the grown men who have the stickiest fingers.

So now I’m officially lying to my wife; I got carried away weaving true threads into a curtain of falsehood and I lost sight of which threads were which. Now I can’t pull out that last thread without the whole thing unraveling, revealing the stark reality. He foresees being haunted by the guilt from his little fib for weeks, or until he’s able to show her firsthand that he really is safe, safe enough to keep her and the kids safe too.

He’s got his work cut out for him.

*

Padre Morello sees at a glance that Lac has no wish to speak; he makes no effort to continue the conversation from the night before, though doing so would be in keeping with his natural disposition. The men exchange a few words as the padre guides him part of the way back to the shed, where Lac will repack his belongings, do some preventative maintenance on his motor—or pretend to, as he knows embarrassingly little about engines, for a Shackley—and drag the dugout down to the riverbank for the trip back to Bisaasi-teri, back to his hut, back to Rowahirawa and all the others. Rowahirawa, formerly Waddu-ewantow, has taken over the role of chief informant, even though Lac still reckons the chances that he’ll turn violent toward him someday to be rather high.

The padre never asked him questions like that, about how much danger he felt he was in. That’s the other reason for recreating your own cultural surroundings, or at least a simulacrum of them, when you come to live in this territory—the safety. The dogs living at the Ocamo outpost would alert the inhabitants of any unwanted guests, and the offer of rich food would make it relatively easy to demand visitors disarm themselves before entering the area. Morello had focused on the symbolism, the message sent to the natives about how much more advanced our ways are than theirs, but what if the real reason was more practical, myopic even? You’d have to be insane to come out here and live next to one of their shabonos in a dank and gloomy mud hut. By contrast, even the creak of the wood floor beneath his feet in this place speaks of deliverance.

If it ever gets too bad, he thinks, I’m not too proud to come back here and hole up with the padre. He’ll be able to arrange transportation out of the territory—if it comes to that. “The people of Bisaasi-teri are talking about some trouble brewing to the south of them,” Lac says without having decided to speak.

“Ah, I’ve heard that the Yąnomamö often attack one another’s villages.” Lac can tell the padre has more to add but decides against it, maybe to let Lac finish his thought.

“My first day in the village, I ducked under the outside edge of the wall and stood up to see a dozen arrows drawn back and aimed at my face. I learned later that the Patanowä-teri were visiting to try to form a trading partnership with Bisaasi-teri, and that’s when the men from a third village, Monou-teri, attacked and stole seven of the Patanowä-teri women. The Patanowä-teri in turn went to Monou-teri, less than a day’s walk, and challenged them to a chest-pounding tournament, which they must’ve won because they returned to Bisaasi-teri with five of the seven women.”  

The padre nods in thoughtful silence as he walks alongside Lac, who has an inchoate sense of remorse at relaying these most unsavory of his research subjects’ deeds to a representative of the Church. “When Clemens and I arrived, we spooked them. The headman of the Monou-teri had been incensed and swore he’d take vengeance, and the two groups at Bisaasi-teri feared he was making good on his vow. For days, I looked around and it was obvious that something had them on edge, but it wasn’t like I had a baseline to compare their moods against. When the Patanowä-teri left after two days, though, I noted the diminished numbers.”

“You know,” the padre says, “if things get too tense at the village, you’re always welcome to stay here for a while.” This echo of his own earlier thought floods Lac with gratitude. He remembers once silently declaring that he’d never say a word against Chuck Clemens; he now has the same conviction about Padre Morello, though in this case it’s more of the moment, whereas with Clemens, well, he’s still sure he’ll never have anything bad to say about the man.

“I can’t tell you how much I appreciate that, Padre. I’m not expecting it to come to that, but it’s reassuring to know I have a friend to turn to if it does.”

The men shake hands. Lac continues on to the shed and his dugout canoe, while the padre goes back to his day, back to his routine as the head of the mission, directing the ghostly white-clad sisters, feeding and clothing the Indians, swaddled in his nimbus of mirth, like a saint from a bygone era. Isn’t everything out here from one bygone era or another, Lac thinks, including you? He chuckles at the thought, then comes abruptly to the verge of tears—because it’s a joke Laura would enjoy, but he’ll almost certainly have forgotten it by the time he talks to her again.

*

Lac returns in time for another commotion inside Bisaasi-teri’s main shabono. Until docking his canoe, he’d been considering landing on the far shore and journeying inland to introduce himself to the Dutch lay brother who’s building a comedor across the river to attract the Yąnomamö for food and proselytizing. It’ll have to wait for another day. Lac already feels guilty for having been away for so long, a day and a half, imagining the villagers to have engaged in myriad secret rites while he was downriver, or merely some magnificent ceremony no outsider had ever witnessed. Smiling bitterly, bracing himself for whatever chaos he’s about to thrust himself into, he thinks: every ceremony they perform has never been seen by outsiders; the big secret is that they’re not like you’d imagine; they’re like nothing so much as a bunch of overgrown boys getting high and playing an elaborate game of make-believe, boys who could throw a tantrum at random and end up maiming or killing someone, starting a war of axes and machetes, bows and arrows.

He squats under the outermost edge of the thatched roof, sidling and bobbing his way into the headman’s house, where he sees a very pregnant Nakaweshimi, the eldest wife. Since he’s begun addressing the headman with a term that means older brother, he’s obliged to likewise refer to Nakaweshimi as kin, as a sister. “Sister,” he calls to her. “I’ve returned to your shabono. I’m glad to see you. What is happening? What is causing excitement?” Not much like her fellow Yąnomamö speak to her, but they’ve learned to give him extra leeway in matters of speech and etiquette, like you would the village idiot. He’s even been trying to get everyone to tell him all their names, this addle-brained nabä.

Nakaweshimi nearly smiles upon seeing him—at least he thinks she does—but then waves him off. “Rowahirawa will tell you what Towahowä has done now,” she says. Her expression baffles him, showing an undercurrent of deep concern overlain with restrained merriment, like she may have almost laughed. Could she be that happy to see me, he wonders, maybe because she thinks I’ll ward off the raiders with my shotgun and other articles of nabä magic? Or maybe I’m such an object of derision the jokes following in my wake set people to laughing whenever they see me.

He continues through the house out into the plaza, sees the men, some squatting, others standing, pacing. High above them, a thick blur of white mist clings to the nearly black leaves of the otherworldly canopy; looking out at it, he’s struck by the devastating beauty, feeling a pang he can’t immediately source and doesn’t have the time to track down. The syllables and words of the men he’s approaching rise up in a cloud around him, a blinding vortex that simultaneously sweeps up the identifiable scents of individual men, bearing aloft the broken debris of shattered meanings, all the pieces just beyond his reach. Straining, he lays a finger on one, then another. He envies these men, naked but for strings, arm bands, sticks driven through their ears, their faces demoniacally distorted by the thick wads of green tobacco tucked behind their bottom lips—envies them because the words swirling away from him flow into their ears in orderly streams.

He sees a squatting man scratch the bottom side of his up-tied penis; another spits, almost hitting a companion’s foot. They’re discussing war and strategy, but we’re a long way from the likes of Churchill and Roosevelt—and yet, probably not as far as us nabä might like to think. Bahikoawa is telling them about his relationship to some man from Monou-teri: he lives in a different village and yet descends from the same male progenitor, meaning they’re of the same lineage. Lac reaches for his back pocket but finds it empty, his notebook, he then recalls, still tucked in a backpack full of items he brought along for the trip to Ocamo.

When the men finally notice Lac’s presence—or finally let on that they’ve noticed him—some of them walk over with greetings of shori, brother-in-law, asking after his efforts to procure more of his splendid nabä trade goods, which they’re sure he’ll want to be generous in divvying out among them. He haltingly replies that he was merely visiting Iyäwei-teri and the other nabä who lives there and is building a great house. He adds that he asked this other nabä to bring him back some medicines—a word borrowed from Spanish, with some distorting effects—he can administer to the Yąnomamö, but it will be some time before they arrive. They respond with aweis and tongue clicks. One man, a young boy really, tells Lac to give him a machete in the meantime—“and be quick about it!”

Ma. Get your own machete.

Among the Yąnomamö, Lac has learned, it’s seen as stingy, almost intolerably so, not to give someone an item he requests. What they normally give each other, though, is tobacco—often handing over the rolled wads already in their own mouths—germ theory still being millennia in the future, or at least a few years of acculturation at the hands of the missionaries. Lac has to appreciate that his madohe make him rich, after a fashion, but his refusal to give them away freely makes him a deviant, a sort of reprobate. They don’t exactly condemn him as such. They struggle to work out the proper attitude to have toward him, just as he does toward them. The culture has no categories to accommodate the bizarre scenario in which an outsider in possession of so many valuable goods comes to live among them.

For the most part, they make allowances; they’re flexible enough to recognize the special circumstance. They tolerate his egregious tight-fistedness—what do you expect from a subhuman? And this particular subhuman appears to be trying to learn what it means to be a real human, translated Yąnomamö. Why else would he be so determined to speak their language? Though they seem to think there’s only one language with varying degrees of crookedness. The Yąnomamö to the south speak a crooked dialect for instance, but at least it’s not so crooked as to be indecipherable. Lac must have traveled far beyond those southern villages when he was washed to the edge of the earth by the Great Flood. Really, though, Lac isn’t sure how to gauge what percentage of the villagers actually believes this story, or to what degree they believe it. He’s noted a few times that their beliefs in general seem to be malleable, changing according to the demands of the situation.

Their attitudes toward the Patanowä-teri, for example, have undergone a dramatic shift in the brief time he’s been among them. The Patanowä-teri had come to attend a feast at Bahikoawa’s invitation, hoping to establish regular trade in goods like bows, clay pots, dogs, tobacco, and ebene—or rather the hisiomo seeds used to make it. The Monou-teri, meanwhile, were invited to be fellow guests at the feast, but on their way to Bisaasi-teri they happened upon those seven Patanowä-teri women hiding in the forest, a common precaution, Lac’s been told, to keep them safe from the still suspect host villagers. The Monou-teri couldn’t resist. This led to the chest-pounding duel he’d heard about, though it must have been several separate duels, more like a tournament, the outcome of which was that the Patanowä-teri returned to Bisaasi-teri with five of the seven women, just before Lac and Clemens arrived. The Patanowä-teri then left the village early to avoid further trouble with the Monou-teri, who, if he’s hearing correctly, are now determined to raid the Patanowä-teri at their shabono on the Shanishani River. This despite their having come out ahead by two women.

Now, even though the Patanowä-teri have in no way wronged the people of Bisaasi-teri, Bahikoawa is considering whether he and some of the other men should accompany the Monou-teri on their raid. It seems Bahikoawa is related to the headman of Monou-teri, the one who’s causing all the trouble. This man is waiteri: angry or aggressive, eager to project an air of menace and invincibility, traits considered to be manly virtues rather than political liabilities.

Bahikoawa, it seems, is related to many of the Patanowä-teri as well, but more distantly. The men argue over whether Towahowä, the Monou-teri headman, is justified in launching the raid—not in the moral sense, but in a strategic one—and over whether they should send someone to participate. Bahikoawa, drawing on the juice from his tobacco, looks genuinely distraught, like he’s being forced to choose between two brothers, and Lac feels an upwelling of sympathy. He couldn’t speak for anyone else in any of these villages; he’d be loath to turn his back to any of them. Bahikoawa, on the other hand, is a good man; it’s plain for anyone to see. He also appears to be sick. He keeps clutching his side, as though he’s having sharp pains in his abdomen. Fortunately, the war council is breaking up, partly in response to Lac showing up to distract them.

The men have all kinds of questions about the villagers at Ocamo, the Iyäwei-teri, almost none of which Lac can answer: How are their gardens producing? Was everyone at the shabono, or were some of them off hunting? Did they appear well-supplied with tobacco? Lac, realizing he could have easily stopped to check in with the villagers—he’ll want their genealogical information too at some point—tries to explain that he merely went for a chance to speak with his wife and ask after his children. When they assume, naturally enough, his wife must be living at Iyäwei-teri, he’s at a loss as to how he can even begin to explain she’s somewhere else.

Lac asks where Rowahirawa is: out hunting for basho for his in-laws. So there’s little chance of clearing things up about where Laura is and why Lac could nonetheless speak to her from Ocamo. Lac decides to step away and go to his hut to relax for a bit before starting his interviews and surveys again. He’s shocked by his own oversight, not anticipating that the people of Bisaasi-teri would be eager to hear about what’s going on at Iyäwei-teri. Travelers are the Yąnomamö’s version of newspapers; it’s how they know what’s brewing at Monou-teri; it’s the only way for them to know what’s going on in the wider world—their own wider world anyway. But is what the Bisaasi-teri are after best characterized as news, gossip, or military intel?

At some point, it’s disturbingly easy to imagine, he could unwittingly instigate an intervillage attack simply by relaying the right information—or rather the wrong information. Lac is also amazed by the Yąnomamö’s agility in shifting alliances, and he can’t figure out how to square it with his knowledge about tribal societies vis-à-vis warfare, which is thought to begin when a society reaches a stage in its evolution at which people start to rely on certain types of key resources like cultivable land, potable water, or ready access to game. Intergroup conflicts then intensify when the key resources take on more symbolic than strategic meanings, as when they’re used as currency or as indicators of status. Think gold and diamonds. But Bisaasi-teri was working to establish trade relations with Patanowä-teri when the Monou-teri headman, in a brazen breach of diplomacy, instigated hostilities.

What resources, he wonders as he steps into his hut, can they possibly be fighting over?

If anything, the fighting seems to be further limiting their access to the goods they may otherwise procure through trade. And why should the Bisaasi-teri men consider sending a contingent to represent their village in the raid? When the first offense occurred, the Bisaasi-teri were seeking to strengthen their ties to Patanowä-teri, and Bahikoawa’s lineage is present in both villages, so why bother picking a side? Why not sit out the fight? What do they hope to gain?

Lac lies back in his hammock, trying to make sense of it. He could easily fall asleep.

As he was preparing to board the ship in New York with Laura and the kids and all the supplies he was going to take with him into the field, he’d read in a newspaper about the U.S. sending military advisors to Southeast Asia, another jungle region, to support a group of people resisting the advance of communism. The Soviet Union apparently has already established a foothold in the country by supporting a rival group, the Vietcong. The threat of a proxy war between the great powers looms.

What resources are the U.S. and the U.S.S.R. fighting over? It seems to Lac to be much more about ideas: capitalism vs. communism. Is it only nations at the most advanced stages of technological development that battle over ideologies and economic systems? Dozing off, Lac’s last thought is that for the Bisaasi-teri men at least, the real motivation seems to be the opportunity to assert their own impunity alongside their readiness to punish rivals for any minuscule offense. It’s about projecting an air of superiority, acquiring prestige, for yourself, your village, your lineage, your tribe, your nation, your very way of life.

*

He dozes for maybe twenty minutes, his eagerness to work blunting the edge of his now-chronic exhaustion. Overnighting at the mission afforded him a superb night of sleep, comparatively. The Yąnomamö don’t keep to the strict schedules Westerners do; they have no qualms about late-night visits, even if those visits require rousing the visitee from a deep slumber. They do fortunately happen to be adept at midday dozes, a skill they like to practice during the day’s most oppressively hot stretches, when doing much of anything else is a loathsome prospect. Lac has adapted quickly, hence the short nap when he really felt like a longer sleep.

One good stretch of uninterrupted sleep, he thinks, hardly makes up for the weeks of erratic events, bizarre occurrences, and anxiety-fueled insomnia. When I finally leave this place, in just over a year from now, I may spend the first month home doing nothing but sleeping. What bliss. For now, though, he has work to do, and thanks to the encroachments into Yąnomamöland by the New Tribes and the Salesians, he’s on a diminishing timetable. Already, it will be difficult to tell how closely the population structures he uncovers reflect outside influences versus the true nature of tribal life in the jungle. He keeps hearing about more remote villages to the South, near the headwaters of the Mavaca, where there’s supposedly a single shabono housing over twice as many people as Bisaasi-teri. According to his main informant at least. According to his whilom persecutor, his bully, his now sometimes friend—as reticent as he’s learned to be in applying that term, and as wary as he’s become in allowing for that sentiment.

It’s still hot. The trip from Ocamo took a little over six hours, twice as long coming upriver as it was going downstream. He’s hungry, but the lackluster range of choices on offer dulls his appetite. He guesses that in the little over a month he’s been in the field, he’s lost between ten and fifteen pounds. He may never successfully remove the crusty, bug-bitten, sticky film clinging to his body; he imagines Laura catching a first whiff of him when they’re finally reunited and bursting into tears.

But I’m on to something, he thinks as he stands and moves to the table. From his discussions with Rowahirawa, he’s learned that the proscription against voicing names isn’t a taboo per se; saying someone’s name aloud is more like taking a great liberty with that person, much the way Westerners would think of being groped in public, an outrageous gesture of disrespect. But, while it may be grossly offensive to run your hands over a stranger, or even an intimate if it’s in a public setting, you can often get away with more subtle displays of affection; you can touch another person’s arm, say, or her shoulder; you could reach over and touch her hand.

Remembering a night with Laura, back before Dominic was born, Lac has one of his rare flickers of sexual arousal, a flash of a scene overflowing with the promise of sensual indulgence. Lac looks around his hut; it’s only ever moments before someone arrives. He’s probably only been left alone this long because of the excitement in the shabono over the impending conflict to the east. He’s yet to see a single Yąnomamö man or woman masturbate, he notes, but the jokes and all the innuendo he can’t decode suggest that the gardening time of day is the time for trysting. Someone’s always gone missing, and then it’s discovered someone else has gone missing at the same time. Lac’s never witnessed these paired abscondings. Sexuality is notoriously tricky for ethnographers to study, a certain degree of discretion being universal across cultures. People like to do it in private. It seems this is particularly true of the Yąnomamö, if for no other reason than that they fear detection by a jealous husband.

And Lac himself?

He’s awoken in his hammock from dreams of lying alongside Laura on the smoothest, cleanest sheets he’s ever felt—awoken in a compromised state his unannounced Yąnomamö visitor took no apparent notice of. He feels an ever-present pressure crying out for release, but the conditions could hardly be less conducive to the proper performance of such routine maintenance tasks. He’s never alone, never anything less than filthy—sticky, slimy, and moderately uncomfortable—and the forced press of Yąnomamö bodies he keeps being subjected to has an effect that’s the opposite of sensual. So he feels the tension, physiologically, from having gone so long without release, but nothing in the field comes close to turning him on.

None of the Yąnomamö women? Not one?  

There’s something vaguely troubling to Lac about this, as it seems to have little to do with his devotion to Laura. His devotion to Laura manifests itself through his resistance to temptation, not its absence. So why is he, an aspiring acolyte of Boas, not attracted to, not sexually aroused by, any of the women among this unique group of his fellow humans?

Shunning the implications, his mind takes him back in time to some key moments in his budding intimacy with the woman he’d go on to marry. He would enjoy hanging out in his hut and reminiscing like this, but sensing its futility, he decides instead to get to work, get to making something worthwhile out of this expedition, this traumatizing debacle of a first crack at ethnographic fieldwork, get to securing his prospects for a decent career—oh, the bombshells he’ll be dropping on his colleagues—get to securing a living for his family and a valuable contribution to the discipline, his legacy. Maybe Bess and Laura are right, he thinks; maybe I’m incapable of accepting a cause as lost; maybe I have something to prove, a vestige of some unsettled conflict with my father and brothers. So be it. I may as well turn it into something worthwhile.

*

At last, he writes later that night, I’ve succeeded in getting some names, and I’ve begun to fill in some genealogical graphs. My work has begun, my real work. What I couldn’t have known when I decided to study the Yąnomamö was that I would be working with the most frustratingly recalcitrant people ever encountered by an anthropologist. My project of gathering names will be a complex and deeply fraught endeavor, and every mistake will put me in danger: my subjects getting angry with me at best, violent at worst. It doesn’t help that the Yąnomamö also happen to be consummate practical jokers, who see having an ignorant and dimwitted nabä around, someone who’s just learning the basics of their language, as an irresistible opportunity.

One man will casually point me toward another, instructing me to say, “Tokowanarawa, wa waridiwa no modohawa.” When I go to the second man and repeat the phrase, careful to get the diction right, he is furious and begins waving his arms and threatening me—and rightly so since I’ve just addressed him by name and told him he’s ugly. Here’s the peculiar thing: even though the man I’ve insulted witnessed the exchange with the first man—even though he knows I’m merely relaying a message—he directs his anger at me and not the actual source of the insult, a man who is by now in hysterics over the drama he’s instigated. (This prank, along with a few minor variations at other times, was pulled on me by one of my most reliable informants, Rowahirawa.)

The Yąnomamö love drama. They love trouble. And I have to be careful not to give in to my inclination to respond to angry subjects by offering them madohe to make amends, a response that would make (and to some degree has already made) me appear cowed and cowardly, encouraging and emboldening them to make more displays and more threats. But, as delicate as one must be in negotiating the intricacies of the name customs, I’ve managed to uncover some underlying threads of logic to them. The worst offense when it comes to names, for instance, is to publicly say those of recently deceased relatives. The second worst offense—which is probably just as dangerous to commit—is using the name of a browähäwä, a politically prominent man, most of whom (all?) are also waiteri, warriors.

What I’ve observed, however, is that when the Yąnomamö refer to one of these men, they usually do so through teknonymy: they imply his identity through his relationship to someone safe to name. They’d never say the headman Bahikoawa’s name out loud, but instead refer to him as “the father of Sarimi,” his daughter. I can therefore begin building out my genealogies in a similar fashion, starting with the names of children and working my way up. Over time, I may light on new methods that will bring me closer to the names of the browähäwä and the ancestors, but by then I hope to already have their relationships with all the other villagers mapped out using the same sort of teknonymy as the Yąnomamö use themselves.

My plan is to create a standard list of questions and then to interview as many of the villagers as possible, offering them fish hooks or nylon line or disinfectant eye drops as payment. I’ll interview them individually, so there will be no witnesses to any sharing of sensitive information, and I’ll encourage each interviewee to whisper the names in my ear, as a demonstration of how much I personally respect the individuals being named. Still, I don’t expect to be able to draw complete charts the first time around. This first round will be more like tryouts. I’ll be looking to identify the most helpful, articulate, and reliable informants, an exercise that will involve checking each candidate’s answers against the others’.

For round two, I’ll stick to the individuals who most readily provided me with the best information in round one. And I’ll offer them more valuable trade goods in exchange for their help: machetes, axes, game meat. Over time, as I build up some trust and establish rapport, I’ll start pressing them for the more sensitive names. I’m estimating that by mid-March I should have everyone’s name on record, along with a chart that fits each village member into the kinship network. I’ll be able to pass these charts along to Dr. Nelson when he and his team arrive for their genetic research next year. And the information will also form the basis of any theorizing on my own part about the nature and evolution of larger societal patterns. At the same time, it will give me a head start on the charts for neighboring villages, and subsequently for the more remote ones I hope to visit on future expeditions. 

Lac closes the notebook, leans his head back, and sighs. A lot of things that could very easily go wrong will need to go right for this plan to work. Thinking about all the variables is overwhelming. But what really scares him now is the thought of those future expeditions, of having to return to the jungle once he’s made it out.

If he makes it out. 

***


Posts on Napoleon Chagnon:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

JUST ANOTHER PIECE OF SLEAZE: THE REAL LESSON OF ROBERT BOROFSKY'S "FIERCE CONTROVERSY"


Why Tamsin Shaw Imagines the Psychologists Are Taking Power


Tamsin Shaw’s essay in the February 25th issue of The New York Review of Books, provocatively titled “The Psychologists Take Power,” is no more scholarly than your average political attack ad, nor is it any more credible. (The article is available online, but I won’t lend it further visibility to search engines by linking to it here.) Two of the psychologists maligned in the essay, Jonathan Haidt and Steven Pinker, recently contributed a letter to the editors which effectively highlights Shaw’s faulty reasoning and myriad distortions, describing how she “prosecutes her case by citation-free attribution, spurious dichotomies, and standards of guilt by association that make Joseph McCarthy look like Sherlock Holmes” (82).

Upon first reading Shaw’s piece, I dismissed it as a particularly unscrupulous bit of interdepartmental tribalism—a philosopher bemoaning the encroachment by pesky upstart scientists into what was formerly the bailiwick of philosophers. But then a line in Shaw’s attempted rebuttal of Haidt and Pinker’s letter sent me back to the original essay, and this time around I recognized it as a manifestation of a more widespread trend among scholars, and a rather unscholarly one at that.

Shaw begins her article by accusing a handful of psychologists of exceeding the bounds of their official remit. These researchers have risen to prominence in recent years through their studies into human morality. But now, instead of restricting themselves, as responsible scientists would, to describing how we make moral judgements and attempting to explain why we respond to moral dilemmas the way we do, these psychologists have begun arrogating moral authority to themselves. They’ve begun, in other words, trying to tell us how we should reason morally—according to Shaw anyway. Her article then progresses through shady innuendo and arguments based on what Haidt and Pinker call “guilt through imaginability” to connect this group of authors to the CIA’s program of “enhanced interrogation,” i.e. torture, which culminated in such atrocities as those committed in the prisons at Abu Ghraib and Guantanamo Bay.

Shaw’s sole piece of evidence comes from a report commissioned by the American Psychological Association. David Hoffman and his fellow investigators did indeed find that two members of the APA played a critical role in developing the interrogation methods used by the CIA, and that they had the sanction of top officials. Neither of the two, however, nor any of those officials, authored any of the books on moral psychology that Shaw is supposedly reviewing. In the report’s conclusion, the investigators describe the responses of clinical psychologists who “feel physically sick when they think about the involvement of psychologists intentionally using harsh interrogation techniques.” Shaw writes,

It is easy to imagine the psychologists who claim to be moral experts dismissing such a reaction as an unreliable “gut response” that must be overridden by more sophisticated reasoning. But a thorough distrust of rapid, emotional responses might well leave human beings without a moral compass sufficiently strong to guide them through times of crisis, when our judgement is most severely challenged, or to compete with powerful nonmoral motivations. (39)

What she’s referring to here is the two-system model of moral reasoning, which posits a rapid, intuitive system, programmed in large part by our genetic inheritance but with some cultural variation in its expression, matched against a more effortful, cerebral system that requires the application of complex reasoning.

But it must be noted that nowhere does any of the authors she’s reviewing make a case for a “thorough distrust of rapid, emotional responses.” Their positions are far more nuanced, and Haidt in fact argues in his book The Righteous Mind that liberals could benefit from paying more heed to some of their moral instincts—a case that Shaw herself summarizes in her essay when she’s trying to paint him as an overly “didactic” conservative.

            Haidt and Pinker’s response to Shaw’s argument by imaginability was simply to ask the five other authors she insinuates support torture, along with themselves, whether they indeed reacted the way she describes. They write, “The results: seven out of seven said ‘no’” (82). These authors’ further responses to the question offer a good opportunity to expose just how off-base Shaw’s simplistic characterizations are.

None of these psychologists believes that a reaction of physical revulsion must be overridden or should be thoroughly distrusted. But several pointed out that in the past, people have felt physically sick upon contemplating homosexuality, interracial marriage, vaccination, and other morally unexceptionable acts, so gut feelings alone cannot constitute a “moral compass.” Nor is the case against “enhanced interrogation” so fragile, as Shaw implies, that it has to rest on gut feelings: the moral arguments against torture are overwhelming. So while primitive physical revulsion may serve as an early warning signal indicating that some practice calls for moral scrutiny, it is “the more sophisticated reasoning” that should guide us through times of crisis. (82; emphasis in original)

One phrase that should stand out here is “the moral arguments against torture are overwhelming.” Shaw is supposedly writing about a takeover by psychologists who advocate torture—but none of them actually advocates torture. And, having read four of the six books she covers, I can aver that this response was entirely predictable based on what the authors had written. So why does Shaw attempt to mislead her readers?

            The false implication that the authors she’s reviewing support torture isn’t the only central premise of Shaw’s essay that’s simply wrong; if these psychologists really are trying to take power, as she claims, that’s news to them. Haidt and Pinker begin their rebuttal by pointing out that “Shaw can cite no psychologist who claims special authority or ‘superior wisdom’ on moral matters” (82). Every one of them, with a single exception, in fact includes an explanation of what separates the two endeavors—describing human morality on the one hand, and prescribing values or behaviors on the other—in the very books Shaw professes to find so alarming. The lone exception, Yale psychologist Paul Bloom, author of Just Babies: The Origins of Good and Evil, wrote to Haidt and Pinker, “The fact that one cannot derive morality from psychological research is so screamingly obvious that I never thought to explicitly write it down” (82).

Yet Shaw insists all of these authors commit the fallacy of moving from is to ought; you have to wonder if she even read the books she’s supposed to be reviewing—beyond mining them for damning quotes anyway. And didn’t any of the editors at The New York Review think to check some of her basic claims? Or were they simply hoping to bank on the publication of what amounts to controversy porn? (Think of the dilemma faced by the authors: do you respond and draw more attention to the piece, or do you ignore it and let some portion of the readership come away with a wildly mistaken impression?)

            Haidt and Pinker do a fine job of calling out most of Shaw’s biggest mistakes and mischaracterizations. But I want to draw attention to two more instances of her falling short of any reasonable standard of scholarship, because each one reveals something important about the beliefs Shaw uses as her own moral compass. The authors under review situate their findings on human morality in a larger framework of theories about human evolution. Shaw characterizes this framework as “an unverifiable and unfalsifiable story about evolutionary psychology” (38). Shaw has evidently attended the Ken Ham school of evolutionary biology, which preaches that science can only concern itself with phenomena occurring right before our eyes in a lab. The reality is that, while testing adaptationist theories is a complicated endeavor, there are usually at least two ways to falsify them. You can show that the trait or behavior in question is absent in many cultures, or you can show that it emerges late in life after some sort of deliberate training. One of the books Shaw is supposedly reviewing, Bloom’s Just Babies, focuses specifically on research demonstrating that many of our common moral intuitions emerge when we’re babies, in our first year of life, with no deliberate training whatsoever.

            Bloom comes in for some more targeted, if off-hand, criticism near the conclusion of Shaw’s essay for an article he wrote to challenge the increasingly popular sentiment that we can solve our problems as a society by encouraging everyone to be more empathetic. Empathy, Bloom points out, is a finite resource; we’re simply not capable of feeling for every single one of the millions of individuals in need of care throughout the world. So we need to offer that care based on principle, not feeling. Shaw avoids any discussion of her own beliefs about morality in her essay, but from the nature of her mischaracterization of Bloom’s argument we can start to get a sense of the ideology informing her prejudices. She insists that

when Paul Bloom, in his own Atlantic article, “The Dark Side of Empathy,” warns us that empathy for people who are seen as victims may be associated with violent, punitive tendencies toward those in authority, we should be wary of extrapolating from his psychological claims a prescription for what should and should not be valued, or inferring that we need a moral corrective to a culture suffering from a supposed excess of empathic feelings. (40-1)

The “supposed excess of empathic feelings” isn’t the only laughable distortion people who actually read Bloom’s essay will catch; the actual examples he cites of when empathy for victims leads to “violent, punitive tendencies” include Donald Trump and Ann Coulter stoking outrage against undocumented immigrants by telling stories of the crimes a few of them commit. This misrepresentation raises an important question: why would Shaw want to mislead her readers into believing Bloom’s intention is to protect those in authority? This brings us to the McCarthyesque part of Shaw’s attack ad.

            The sections of the essay that draw a web of guilt, connecting the two psychologists who helped develop torture methods for the CIA to all the authors she’d have us believe are complicit, focus mainly on Martin Seligman, whose theory of learned helplessness formed the basis of the CIA’s approach to harsh interrogation. Seligman is the founder of a subfield called Positive Psychology, which he developed as a counterbalance to what he perceived as an almost exclusive focus on all that can go wrong with human thinking, feeling, and behaving. His Positive Psychology Center at the University of Pennsylvania has received $31 million in recent years from the Department of Defense—a smoking gun by Shaw’s lights. And Seligman even admits that on several occasions he met with the two psychologists who participated in the torture program. The other authors Shaw writes about have in turn worked with Seligman on a variety of projects. Haidt even wrote a book on Positive Psychology called The Happiness Hypothesis.

            In Shaw’s view, learned helplessness theory is a potentially dangerous tool being wielded by a bunch of mad scientists and government officials corrupted by financial incentives and a lust for military dominance. To her mind, the notion that Seligman could simply want to help soldiers cope with the stresses of combat is all but impossible to even entertain. In this and every other instance when Shaw attempts to mislead her readers, it’s to put the same sort of negative spin on the psychologists’ explicitly stated positions. If Bloom says empathy has a dark side, then all the authors in question are against empathy. If Haidt argues that resilience—the flipside of learned helplessness—is needed to counteract a culture of victimhood, then all of these authors are against efforts to combat sexism and racism on college campuses. And, as we’ve seen, if these authors say we should question our moral intuitions, it’s because they want to be able to get away with crimes like torture. “Expertise in teaching people to override their moral intuitions is only a moral good if it serves good ends,” Shaw herself writes. “Those ends,” she goes on, “should be determined by rigorous moral deliberation” (40). Since this is precisely what the authors she’s criticizing say in their books, we’re left wondering what her real problem with them might be.

            In her reply to Haidt and Pinker’s letter, Shaw suggests her aim for the essay was to encourage people to more closely scrutinize the “doctrines of Positive Psychology” and the central principles underlying psychological theories about human morality. I was curious to see how she’d respond to being called out for mistakenly stating that the psychologists were claiming moral authority and that they were given to using their research to defend the use of torture. Her main response is to repeat the central aspects of her rather flimsy case against Seligman. But then she does something truly remarkable; she doesn’t deny using guilt by imaginability—she defends it.

Pinker and Haidt say they prefer reality to imagination, but imagination is the capacity that allows us to take responsibility, insofar as it is ever possible, for the ends for which our work will be used and the consequences that it will have in the world. Such imagination is a moral and intellectual virtue that clearly needs to be cultivated. (85)

So, regardless of what the individual psychologists themselves explicitly say about torture, for instance, as long as they’re equipping other people with the conceptual tools to justify torture, they’re still at least somewhat complicit. This was the line that first made me realize Shaw’s essay was something other than a philosopher munching on sour grapes.

            Shaw’s approach to connecting each of the individual authors to Seligman and then through him to the torture program is about as sophisticated, and about as credible, as any narrative concocted by your average online conspiracy theorist. But she believes that these connections are important and meaningful, a belief, I suspect, that derives from her own philosophy. Advocates of this philosophy, commonly referred to as postmodernism or poststructuralism, posit that our culture is governed by a dominant ideology that serves to protect and perpetuate the societal status quo, especially with regard to what are referred to as hegemonic relationships—men over women, whites over other ethnicities, heterosexuals over homosexuals. This dominant ideology finds expression in, while at the same time propagating itself through, cultural practices ranging from linguistic expressions to the creation of art to the conducting of scientific experiments.

            Inspired by figures like Louis Althusser and Michel Foucault, postmodern scholars reject many of the central principles of humanism, including its emphasis on the role of rational discourse in driving societal progress. This is because the processes of reasoning and research that go into producing knowledge can never be fully disentangled from the exercise of power, or so it is argued. We experience the world through the medium of culture, and our culture distorts reality in a way that makes hierarchies seem both natural and inevitable. So, according to postmodernists, not only does science fail to create true knowledge of the natural world and its inhabitants, but the ideas it generates must also be scrutinized to identify their hidden political implications.

            What such postmodern textual analyses look like in practice is described in sociologist Ullica Segerstrale’s book, Defenders of the Truth: The Sociobiology Debate. Segerstrale observed that postmodern critics of evolutionary psychology (which was more commonly called sociobiology in the late 90s) were outraged by what they presumed were the political implications of the theories, not by what evolutionary psychologists actually wrote. She explains,

In their analysis of their targets’ texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum guilt might be attributed to the perpetrator of this claim. (206)  

This is similar to the type of imagination Shaw faults psychologists today for insufficiently exercising. For the postmodernists, the sum total of our cultural knowledge is what sustains all the varieties of oppression and injustice that exist in our society, so unless an author explicitly decries oppression or injustice he’ll likely be held under suspicion. Five of the six books Shaw subjects to her moral reading were written by white males. The sixth was written by a male and a female, both white. The people the CIA tortured were not white. So you might imagine white psychologists telling everyone not to listen to their conscience to make it easier for them to reap the benefits of a history of colonization. Of course, I could be completely wrong here; maybe this scenario isn’t what was playing out in Shaw’s imagination at all. But that’s the problem—there are few limits to what any of us can imagine, especially when it comes to people we disagree with on hot-button issues.

            Postmodernism began in English departments back in the ’60s, where it was originally developed as an approach to analyzing literature. From there, it spread to several other branches of the humanities and is now making inroads into the social sciences. Cultural anthropology was the first field to be mostly overtaken. You can see precursors to Shaw’s rhetorical approach in attacks leveled against sociobiologists like E.O. Wilson and Napoleon Chagnon by postmodern anthropologists like Marshall Sahlins. In a review published in 2001, also in The New York Review of Books, Sahlins writes,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Demonstrating his own power has been not only a necessary condition of Chagnon’s fieldwork, but a main technique of investigation.

The first thing to note is that Sahlins’s characterization of Chagnon’s books as narratives of “gaining control over people” is just plain silly; Chagnon was more often than not at the mercy of the Yanomamö. The second is that, just as anyone who’s actually read the books by Haidt, Pinker, Greene, and Bloom will be shocked by Shaw’s claim that their writing somehow bolsters the case for torture, anyone familiar with Chagnon’s studies of the Yanomamö will likely wonder what the hell they have to do with Vietnam, a war that to my knowledge he never expressed an opinion of in writing.

However, according to postmodern logic—or we might say postmodern morality—Chagnon’s observation that the Yanomamö were often violent, along with his espousal of a theory that holds such violence to have been common among preindustrial societies, leads inexorably to the conclusion that he wants us all to believe violence is part of our fixed nature as humans. Through the lens of postmodernism, Chagnon’s work is complicit in making people believe working for peace is futile because violence is inevitable. Chagnon may counter that he believes violence is likely to occur only in certain circumstances, and that by learning more about what conditions lead to conflict we can better equip ourselves to prevent it. But that doesn’t change the fact that society needs high-profile figures to bring before our modern academic version of the inquisition, so that all the other white men lording it over the rest of the world will see what happens to anyone who deviates from right (actually far-left) thinking.

Ideas really do have consequences of course, some of which will be unforeseen. The place where an idea ends up may even be repugnant to its originator. But the notion that we can settle foreign policy disputes, eradicate racism, end gender inequality, and bring about world peace simply by demonizing artists and scholars whose work goes against our favored party line, scholars and artists who maybe can’t be shown to support these evils and injustices directly but can certainly be imagined to be doing so in some abstract and indirect way—well, that strikes me as far-fetched. It also strikes me as dangerously misguided, since it’s not like scholars, or anyone else, ever needed any extra encouragement to imagine people who disagree with them being guilty of some grave moral offense. We’re naturally tempted to do that as it is.

Part of becoming a good scholar—part of becoming a grownup—is learning to live with people whose beliefs are different from yours, and to treat them fairly. Unless a particular scholar is openly and explicitly advocating torture, ascribing such an agenda to her is either irresponsible, if we’re unwittingly misrepresenting her, or dishonest, if we’re doing so knowingly. Arguments from imagined adverse consequences can go both ways. We could, for instance, easily write articles suggesting that Shaw is a Stalinist, or that she advocates prosecuting perpetrators of what members of the far left deem to be thought crimes. What about the consequences of encouraging suspicion of science in an age of widespread denial of climate change? Postmodern identity politics is at this moment posing a threat to free speech on college campuses. And the tactics of postmodern activists begin and end with the stoking of moral outrage, so we could easily make a case that the activists are deliberately trying to instigate witch hunts. With each baseless accusation and counter-accusation, though, we’re getting farther and farther away from any meaningful inquiry, forestalling any substantive debate, and hamstringing any real moral or political progress.

Many people try to square the circle, arguing that postmodernism isn’t inherently antithetical to science, and that the supposed insights derived from postmodern scholarship ought to be assimilated somehow into science. When Thomas Huxley, the physician and biologist known as Darwin’s bulldog, said that science “commits suicide when it adopts a creed,” he was pointing out that by adhering to an ideology you’re taking its tenets for granted. Science, despite many critics’ desperate proclamations to the contrary, is not itself an ideology; science is an epistemology, a set of principles and methods for investigating nature and arriving at truths about the world. Even the most well-established of these truths, however, is considered provisional, open to potential revision or outright rejection as the methods, technologies, and theories that form the foundation of this collective endeavor advance over the generations.

In her essay, Shaw cites the results of a project attempting to replicate the findings of several seminal experiments in social psychology, counting the surprisingly low success rate as further cause for skepticism of the field. What she fails to appreciate here is that the replication project is being done by a group of scientists who are psychologists themselves, because they’re committed to honing their techniques for studying the human mind. I would imagine if Shaw’s postmodernist precursors had shared a similar commitment to assessing the reliability of their research methods, such as they are, and weighing the validity of their core tenets, then the ideology would have long since fallen out of fashion by the time she was taking up a pen to write about how scary psychologists are.  

The point Shaw's missing here is that it’s precisely this constant quest to check and recheck the evidence, refine and further refine the methods, test and retest the theories, that makes science, if not a source of superior wisdom, then still the most reliable approach to answering questions about who we are, what our place is in the universe, and what habits and policies will give us, as individuals and as citizens, the best chance to thrive and flourish. As Saul Perlmutter, one of the discoverers of dark energy, has said, “Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.” Shaw may be right that no experimental result could ever fully settle a moral controversy, but experimental results are often not just relevant to our philosophical deliberations but critical to keeping those deliberations firmly grounded in reality.

Popular relevant posts:

JUST ANOTHER PIECE OF SLEAZE: THE REAL LESSON OF ROBERT BOROFSKY'S "FIERCE CONTROVERSY"

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

(My essay on Greene’s book)

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

(My essay on Pinker’s book)

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

(My essay on Haidt’s book)

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND


Just Another Piece of Sleaze: The Real Lesson of Robert Borofsky's "Fierce Controversy"

Robert Borofsky and his cadre of postmodernist activists try desperately to resuscitate the case against scientific anthropologist Napoleon Chagnon after disgraced pseudo-journalist Patrick Tierney’s book “Darkness in El Dorado” is exposed as a work of fraud. The product is something only an ideologue can appreciate.

Robert Borofsky’s Yanomami: The Fierce Controversy and What We Can Learn from It is the source book participants on a particular side of the debate over Patrick Tierney’s Darkness in El Dorado would like everyone to read, even more than Tierney’s book itself. To anyone on the opposing side, however—and, one should hope, to those who have yet to take a side—there’s an unmissable element of farce running throughout Borofsky’s book, which ultimately amounts to little more than a transparent attempt at salvaging the campaign against anthropologist Napoleon Chagnon. That campaign had initially received quite a boost from the publication of Darkness in El Dorado, but then support began to crumble as various researchers went about exposing Tierney as a fraud. With The Fierce Controversy, Borofsky and some of the key members of the anti-Chagnon campaign are doing their best to dissociate themselves and their agenda from Tierney, while at the same time taking advantage of the publicity he brought to their favorite talking points.

The book is billed as an evenhanded back-and-forth between anthropologists on both sides of the debate. But, despite Borofsky’s pretensions to impartiality, The Fierce Controversy is about as fair and balanced as Fox News’s political coverage—there’s even a chapter titled “You Decide.” By giving the second half of the book over to an exchange of essays and responses by what he refers to as “partisans” for both sides, Borofsky makes himself out to be a disinterested mediator, and he wants us to see the book as an authoritative representation of some quasi-democratic collection of voices—think Occupy Wall Street’s human microphones, with all the repetition, incoherence, and implicit signaling of a lack of seriousness. “Objectivity does not lie in the assertions of authorities,” Borofsky insists in italics. “It lies in the open, public analysis of divergent perspectives” (18). In the first half of the book, however, Borofsky gives himself the opportunity to convey his own impressions of the controversy under the guise of providing necessary background. Unfortunately, he’s not nearly as subtle in pushing his ideology as he’d like to be.

Borofsky claims early on that his “book seeks, in empowering readers, to develop a new political constituency for transforming the discipline.” But is Borofsky empowering readers, or is he trying to foment a revolution? The only way the two goals could be aligned would be if readers already felt the need for the type of change Borofsky hopes to instigate. What does that change entail? He writes,

It is understandable that many anthropologists have had trouble addressing the controversy’s central issues because they are invested in the present system. These anthropologists worked their way through the discipline’s existing structures as they progressed from being graduate students to employed professionals. While they may acknowledge the limitations of the discipline, these structures represent the world they know, the world they feel comfortable with. One would not expect most of them to lead the charge for change. But introductory and advanced students are less invested in this system. If anything, they have a stake in changing it so as to create new spaces for themselves. (21)

In other words, Borofsky simultaneously wants his book to be open-ended—the outcome of the debate in the second half reflecting the merits of each side’s case, with the ultimate position taken by readers left to their own powers of critical thought—while at the same time inspiring those same readers to work for the goals he himself believes are important. He utterly neglects the possibility that anthropology students won’t share his markedly Marxist views. From this goal statement, you may expect the book to focus on the distribution of power and the channels for promotion in anthropology departments, but that’s not at all what Borofsky and his coauthors end up discussing. Even more problematically, though, Borofsky is taking for granted here the seriousness of “the controversy’s central issues,” the same issues whose validity is the very thing that’s supposed to be under debate in the second half of the book.  

The most serious charges in Tierney’s book were shown to be false almost as soon as it was published, and Tierney himself was thoroughly discredited when it was discovered that many of his copious citations bore little or no relation to the claims they were supposed to support. A taskforce commissioned by the American Society of Human Genetics, for instance, found that Tierney spliced together parts of different recorded conversations to mislead his readers about the actions and intentions of James V. Neel, a geneticist he accuses of unethical conduct. Reasonably enough, many supporters of Chagnon, who Tierney likewise accuses of grave ethical breaches, found such deliberately misleading tactics sufficient cause to dismiss any other claims by the author. But Borofsky treats this argument as an effort on the part of anthropologists to dodge inconvenient questions:

Instead of confronting the breadth of issues raised by Tierney and the media, many anthropologists focused on Tierney’s accusations regarding Neel… As previously noted, focusing on Neel had a particular advantage for those who wanted to continue sidestepping the role of anthropologists in all this. Neel was a geneticist, and soon after the book’s publication most experts realized that the accusation that Neel helped facilitate the spread of measles was false. Focusing on Neel allowed anthropologists to downplay the role of the discipline in the whole affair. (46)

When Borofsky accuses some commenters of “sidestepping the role of anthropologists in all this,” we’re left wondering, all what? The Fierce Controversy is supposed to be about assessing the charges Tierney made in his book, but again the book’s editor and main contributor is assuming that where there’s smoke there’s fire. It’s also important to note that the nature of the charges against Chagnon make them much more difficult to prove or disprove. A call to a couple of epidemiologists and vaccination experts established that what Tierney accused Neel of was simply impossible. It’s hardly sidestepping the issue to ask why anyone would trust Tierney’s reporting on more complicated matters.

Anyone familiar with the debates over postmodernism taking place among anthropologists over the past three decades will see at a glance that The Fierce Controversy is disingenuous in its very conception. Borofsky and the other postmodernist contributors desperately want to have a conversation about how Napoleon Chagnon’s approach to fieldwork, and even his conception of anthropology as a discipline, are no longer aligned with how most anthropologists conceive of and go about their work. Borofsky is explicit about this, writing in one of the chapters that’s supposed to merely provide background for readers new to the debate,

Chagnon writes against the grain of accepted ethical practice in the discipline. What he describes in detail to millions of readers are just the sorts of practices anthropologists claim they do not practice. (39)   

This comes in a section titled “A Painful Contradiction,” which consists of Borofsky straightforwardly arguing that Chagnon, whose first book on the Yanomamö is perhaps the most widely read ethnography in history, disregarded the principles of the American Anthropological Association by actively harming the people he studied and by violating their privacy (though most of Chagnon’s time in the field predated the AAA’s statements of the principles in question). In Borofsky’s opinion, these ethical breaches are attested to in Chagnon’s own works and hence beyond dispute. In reality, though, whether Chagnon’s techniques amount to ethical violations (by the standards of any era) is very much in dispute, as we see clearly in the second half of the book.

(Yanomamö was Chagnon’s original spelling, but his detractors can’t bring themselves to spell it the same way—hence Yanomami.)

Borofsky is of course free to write about his issues with Chagnon’s methods, but inserting his own argument into a book he’s promoting as an open and fair exchange between experts on both sides of the debate, especially when he’s responding to the others’ contributions after the fact, is a dubious sort of bait and switch. The second half of the book is already lopsided, with Bruce Albert, Leda Martins, and Terence Turner attacking Neel’s and Chagnon’s reputations, while Raymond Hames and Kim Hill argue for the defense. (The sixth contributor, John Peters, doesn’t come down clearly on either side.) When you factor in Borofsky’s own arguments, you’ve got four against two—and if you go by page count the imbalance is quite a bit worse; indeed, the inclusion of the two Chagnon defenders in the forum starts to look more like a ploy to gain a modicum of credibility for what’s best characterized as just another anti-Chagnon screed by a few of his most outspoken detractors.

Notably absent from the list of contributors is Chagnon himself, who probably reasoned that lending his name to the title page would give the book an undeserved air of legitimacy. Given the unmasked contempt that Albert, Martins, and Turner evince toward him in their essays, Chagnon was wise not to go anywhere near the project. It’s also far from irrelevant—though it goes unmentioned by Borofsky—that Martins and Tierney were friends at the time he was writing his book; on his acknowledgements page, Tierney writes,

I am especially indebted to Leda Martins, who is finishing her Ph.D. at Cornell University, for her support throughout this long project and for her and her family’s hospitality in Boa Vista, Brazil. Leda’s dossier on Napoleon Chagnon was an important resource for my research. (XVII)

(Martins later denied, in an interview given to ethicist and science historian Alice Dreger, that she was the source of the dossier Tierney mentions.) Equally relevant is that one of the professors at Cornell where Martins was finishing her Ph.D. was none other than Terence Turner, whom Tierney also thanks in his acknowledgements. To be fair, Hames is a former student of Chagnon’s, and Hill also knows Chagnon well. But the earlier collaboration with Tierney of at least two contributors to Borofsky’s book is suspicious to say the least.   

Confronted with the book’s inquisitorial layout and tone, undecided readers are, I believe, going to wonder whether it’s fair to focus a whole book on the charges laid out in another book that’s been so thoroughly discredited. Borofsky does provide an answer of sorts to this objection: The Fierce Controversy is not about Tierney’s book; it’s about anthropology as a discipline. He writes that

beyond the accusations surrounding Neel, Chagnon, and Tierney, there are critical—indeed, from my perspective, far more critical—issues that need to be addressed in the controversy: those involving relations with informants as well as professional integrity and competence. Given how central these issues are to anthropology, readers can understand, perhaps, why many in the discipline have sought to sidestep the controversy. (17)

With that rhetorical flourish, Borofsky makes any concern about Tierney’s credibility, along with any concern for treating the accused fairly, seem like an unwillingness to answer difficult questions. But, in reality, the stated goals of the book raise yet another important ethical question: is it right for a group of scholars to savage their colleagues’ reputations in furtherance of their reform agenda for the discipline? How do they justify their complete disregard for the principle of presumed innocence?   

What’s going on here is that Borofsky and his fellow postmodernists really needed The Fierce Controversy to be about the dramatis personae featured in Tierney’s book, because Tierney’s book is what got the whole discipline’s attention, along with the attention of countless people outside of anthropology. The postmodernists, in other words, are riding the scandal’s coattails. Turner had been making many of the allegations that later ended up in Tierney’s book for years, but he couldn’t get anyone to take him seriously. Now that headlines about anthropologists colluding in eugenic experiments were showing up in newspapers around the world, Turner and the other members of the anti-Chagnon campaign finally got their chance to be heard. Naturally enough, even after Tierney’s book was exposed as mostly a work of fiction, they still really wanted to discuss how terribly Chagnon and other anthropologists of his ilk behaved in the field so they could take control of the larger debate over what anthropology is and what anthropological fieldwork should consist of. This is why even as Borofsky insists the debate isn’t about the people at the center of the controversy, he has no qualms about arranging his book as a trial:

We can address this problem within the discipline by applying the model of a jury trial. In such a trial, jury members—like many readers—do not know all the ins and outs of a case. But by listening to people who do know these details argue back and forth, they are able to form a reasonable judgment regarding the case. (73)

But, if the book isn’t about Neel, Chagnon, and Tierney, then who exactly is being tried? Borofsky is essentially saying, we’re going to try these men in absentia (Neel died before Darkness in El Dorado was published) with no regard whatsoever for the effect repeating the likely bogus charges against them ad nauseam will have on their reputations, because it’s politically convenient for us to do so, since we hope it will help us achieve our agenda of discipline-wide reform, for which there’s currently either too little interest or too much resistance.

As misbegotten, duplicitous, and morally dubious as its goals and premises are, there’s a still more fatal shortcoming to The Fierce Controversy, and that’s the stance its editor, moderator, and chief contributor takes toward the role of evidence. Here again, it’s important to bear in mind the context out of which the scandal surrounding Darkness in El Dorado erupted. The reason so many of Chagnon’s colleagues responded somewhat gleefully to the lurid and appalling charges leveled against him by Tierney is that Chagnon stands as a prominent figure in the debate over whether anthropology should rightly be conceived of and conducted as a science. The rival view is that science is an arbitrary label used to give the appearance of authority. As Borofsky argues,

the issue is not whether a particular anthropologist’s work is scientific. It is whether that anthropologist’s work is credible. Calling particular research scientific in anthropology is often an attempt to establish credibility by name-dropping. (96)

What he’s referring to here as name-dropping the scientific anthropologists would probably describe as attempts at tying their observations to existing theories, as when Chagnon interprets aspects of Yanomamö culture in light of inclusive fitness theory, with reference to works by evolutionary biologists like W.D. Hamilton and G.C. Williams. But Borofsky’s characterization of how an anthropologist might collect and present data is even more cynical than his attitude toward citations of other scientists’ work. He writes of Chagnon’s descriptions of his field methods,  

To make sure readers understand that he was seriously at work during this time—because he could conceivably have spent much of his time lounging around taking in the sights—he reinforces his expertise with personal anecdotes, statistics, and photos. In Studying the Yanomamö, Chagnon presents interviews, detailed genealogies, computer printouts, photographs, and tables. All these data convey an important message: Chagnon knows what he’s talking about. (57-8)

Borofsky is either confused about or skeptical of the role evidence plays in science—or, more likely, a little of both. Anthropologists in the field could relay any number of vague impressions in their writings, as most of them do. Or those same anthropologists could measure and record details uncovered through systematic investigation. Analyzing the data collected in all those tables and graphs of demographic information could lead to the discovery of facts, trends, and correlations no amount of casual observation would reveal. Borofsky himself drops the names of some postmodern theorists in support of his cynical stance toward science—but it’s hard not to wonder if his dismissal of even the possibility of data leading to new discoveries has a lot to do with him simply not liking the discoveries Chagnon actually made.

            One of the central tenets of postmodernism is that any cultural artifact, including any scientific text, is less a reflection of facts about the real world than a product of, and an attempt to perpetuate, power disparities in the political environment which produces it. From the postmodern perspective, in other words, science is nothing but disguised political rhetoric—and its message is always reactionary. This is why Borofsky is so eager to open the debate to more voices; he believes scientific credentials are really just markers of hegemonic authority, and he further believes that creating a more just society would demand a commitment that no one be excluded from the debate for a lack of expertise.

As immediately apparent as the problems with this perspective are, the really scary thing is that The Fierce Controversy applies this conception of evidence not only to Chagnon’s anthropological field work, but to his and Neel’s culpability as well. And this is where it’s easiest to see how disastrous postmodern ideas would be if they were used as legal or governing principles. Borofsky writes,

in the jury trial model followed in part 2, it is not necessary to recognize (or remember) each and every citation, each and every detail, but rather to note how participants reply to one another’s criticisms [sic]. The six participants, as noted, must respond to critiques of their positions. Readers may not be able to assess—simply by reading certain statements—which assertions are closer to what we might term “the truth.” But readers can evaluate how well a particular participant responds to another’s criticisms as a way of assessing the credibility of that person’s argument. (110)

These instructions betray a frightening obliviousness to the dangers of moral panics and witch hunts. It’s all well and good to put the truth in scare quotes—until you stand falsely accused of some horrible offense and the exculpatory evidence is deemed inadmissible. Imagine if our legal system were set up this way; if you wanted to have someone convicted of a crime, all you’d have to do is stage a successful campaign against this person. Imagine if other prominent social issues were handled this way: climate change, early childhood vaccination, genetically modified foods.

            By essentially coaching readers to attend only to the contributors’ rhetoric and not to worry about the evidence they cite, Borofsky could reasonably be understood as conceding that the evidence simply doesn’t support the case he’s trying to make with the book. But the members of the anti-Chagnon camp seem to believe that the “issues” they want to discuss are completely separable from the question of whether the accusations against Chagnon are true. Kim Hill does a good job of highlighting just how insane this position is, writing,

Turner further observes that some people seem to feel that “if the critical allegations against Neel and Chagnon can be refuted on scientific grounds, then the ethical questions raised…about the effects of their actions on the Yanomami can be made to go away.” In fact, those of us who have criticized Tierney have refuted his allegations on factual and scientific grounds, and those allegations refuted are specifically about the actions of the two accused and their effects. There are no ethical issues to “dismiss” when the actions presented never took place and the effects on the Yanomamö were never experienced as described. Thus, the facts of the book are indeed central to some ethical discussions, and factual findings can indeed “obviate ethical issues” by rendering the discussions moot. But the discussion of facts reported by Tierney have been placed outside this forum of debate (we are to consider only ethical issues raised by the book, not evaluate each factual claim in the book). (180)

One wonders whether Hill knew that evaluations of factual claims would be out of bounds when he agreed to participate in the exchange. Turner, it should be noted, violates this proscription in the final round of the exchange when he takes advantage of his essays’ privileged place as the last contribution by listing the accusations in Tierney’s book he feels are independently supported. Reading this final essay, it’s hard not to think the debate is ending just where it ought to have begun. 

            Hill’s and Hames’s contributions in each round are sandwiched in between those of the three anti-Chagnon campaigners, but whatever value the book has as anything other than an illustration of how paranoid and bizarre postmodern rhetoric can be is to be found in their essays. These sections are like little pockets of sanity in a maelstrom of deranged moralizing. In scoring the back-and-forth, most readers will inevitably favor the side most closely aligned with their own convictions, but two moments really stand out as particularly embarrassing for the prosecution. One of them has Hames catching Martins doing some pretty egregious cherry-picking to give a misleading impression. He explains,

Martins in her second-round contribution cites a specific example of a highly visible and allegedly unflattering image of the Yanomamö created by Chagnon. In the much-discussed Veja interview (entitled “Indians Are Also People”), she notes that “When asked in Veja to define the ‘real Indians,’ Chagnon said, ‘The real Indians get dirty, smell bad, use drugs, belch after they eat, covet and sometimes steal each other’s women, fornicate and make war.’” This quote is accurate. However, in the next sentence after that quote she cites, Chagnon states: “They are normal human beings. And that is sufficient reason for them to merit care and attention.” This tactic of partial quotation mirrors a technique used by Tierney. The context of the statement and most of the interview was Chagnon’s observation that some NGOs and missionaries characterized the Yanomamö as “angelic beings without faults.” His goal was to simply state that the Yanomamö and other native peoples are human beings and deserve our support and sympathy. He was concerned that false portrayals could harm native peoples when later they were discovered to be just like us. (236)

Such deliberate misrepresentations raise the question of whether postmodern thinking justifies, and even encourages, playing fast and loose with the truth—since all writing is just political rhetoric without any basis in reality anyway. What’s clear either way is that an ideology that scants the importance of evidence simply can’t support a moral framework that recognizes individual human rights, because it makes every individual vulnerable to being falsely maligned for the sake of some political cause.   

            The other supremely embarrassing moment for the anti-Chagnon crowd comes in an exchange between Hill and Turner. Hill insists in his first essay that Tierney’s book and the ensuing controversy were born of ideological opposition to sociobiology, the theoretical framework Chagnon uses to interpret his data on the Yanomamö. On first encountering phrases like “ideological terrorism” (127) and “holy war of ideology” (135), you can’t help thinking that Hill has succumbed to hyperbole, but Turner’s response lends a great deal of credence to Hill’s characterization. Turner’s defense is the logical equivalent of a dangerously underweight teenager saying, “I’m not anorexic—I just need to lose about fifteen pounds.” He first claims his campaign against Chagnon has nothing to do with sociobiology, but then he tries to explain sociobiology as an outgrowth of eugenics, even going so far as to suggest that the theoretical framework somehow inspires adherents to undermine indigenous activists. Even Chagnon’s characterization of the Yanomamö as warlike, which the activists trying to paint a less unsavory picture of them take such issue with, is, according to Turner, more a requirement of sociobiological thinking than an observed reality. He writes,

“Fierceness” and the high level of violent conflict with which it is putatively associated are for Chagnon and like-minded sociobiologists the primary indexes of the evolutionary priority of the Yanomami as an earlier, and supposedly therefore more violent, phase of the development of human society. Most of the critics of Chagnon’s fixation on “fierceness” have had little idea of this integral connection of “fierceness” as a Yanomami trait and the deep structure of sociobiological-selectionist theory. (202)

Turner isn’t by any stretch making a good faith effort to explain the theory and its origins according to how it’s explicitly discussed in the relevant literature. He’s reading between the lines in precisely the way prescribed by his postmodernism, treating the theory as a covert effort at justifying the lower status of indigenous peoples. But his analysis is so far off-base that it not only casts doubt on his credibility on the topic of sociobiology; it calls into question his credibility as a scholarly researcher in general. As Hames points out,

Anyone who has basic knowledge of the origins of sociobiology in anthropology will quickly realize that Turner’s attempt to show a connection between Neel’s allegedly eugenic ideas and Chagnon’s analysis of the Yanomamö to be far-fetched. (238)

            Turner’s method of uncovering secret threads supposedly connecting scientific theories to abhorrent political philosophies is closer to the practices of internet conspiracy theorists than to those of academic researchers. He constructs a scary story with some prominent villains, and then he retrofits the facts to support it. The only problem is that anyone familiar with the theories and the people in the story he tells will recognize it as pure fantasy. As Hames attests,

I don’t know of any “sociobiologists” who regard the Yanomamö as any more or less representative of an “earlier, and supposedly therefore more violent, phase of the development of human society” than any other relatively isolated indigenous society. Some sociobiologists are interested in indigenous populations because they live under social and technological conditions that more closely resemble humanity for most of its history as a species than conditions found in urban population centers. (238)

And Hill, after noting that Turner denies his campaign against Chagnon is motivated by paranoid opposition to sociobiology, only to turn around and explain why attacking the reputations of sociobiologists is justified, takes on the charge that sociobiology somehow prohibits working with indigenous activists, writing,

Indeed he concludes by suggesting that sociobiological theory leads its adherents to reject legitimate modern indigenous leaders. This suggestion is malicious slander that has no basis in reality (where most sociobiologists not only accept modern indigenous leaders but work together with them to help solve modern indigenous problems). (250)

These are people Hill happens to work with and know personally. Unfortunately, Turner himself has yet to be put on trial for these arrant misrepresentations the way he and Borofsky put Chagnon on trial for the charges they’ve so clearly played a role in trumping up.

In explaining why a book like The Fierce Controversy is necessary, Borofsky repeatedly accuses the American Anthropological Association of using a few examples of sloppy reporting on Tierney’s part as an excuse to “sidestep” the ethical issues raised by Darkness in El Dorado. As we’ve seen, however, Tierney’s misrepresentations are far too extensive, and far too conveniently selective, to have resulted from anything but an intentional effort to deceive readers. In Borofsky’s telling, the issues Tierney raises were so important that pressure from several AAA members, along with hundreds of students who commented on the organization’s website, forced the leadership to commission the El Dorado Task Force to investigate. It turns out, though, that on this critical element of the story too Borofsky is completely mistaken. The Task Force wasn’t responding to pressure from inside its own ranks; its members were instead concerned about the reputation of American anthropologists, whose ability to do future work in Latin America was threatened by the scandal. In a 2002 email uncovered by Alice Dreger, the Chair of the Task Force, former AAA President Jane Hill, wrote of Darkness in El Dorado,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice. Whether we’re doing the right thing will have to be judged by posterity.

Far from the overdue examination of anthropological ethics he wants his book to be seen as, all Borofsky has offered us with The Fierce Controversy is another piece of sleaze, a sequel of sorts meant to rescue the original from its fatal, and highly unethical, distortions and wholesale fabrications. What Borofsky’s book is more than anything else, though, is a portrait of postmodernism’s powers of moral perversion. As such, and only as such, it is of some historical value.

            In a debate over teaching intelligent design in public schools, Richard Dawkins once called attention to what should have been an obvious truth. “When two opposite points of view are expressed with equal intensity,” he said, “the truth does not necessarily lie exactly halfway between them. It is possible for one side to be simply wrong.” This line came to mind again and again as I read The Fierce Controversy. If we take the presumption of innocence at all seriously, we can’t avoid concluding that the case brought by the anti-Chagnon crowd is simply wrong. The entire scandal began with a campaign of character assassination, which then blew up into a media frenzy, which subsequently induced a moral panic. It seems even some of Chagnon’s old enemies were taken aback by the mushrooming scale of the allegations. And yet many of the participants whose unscrupulous or outright dishonest scholarship and reporting originally caused the hysteria saw fit years later to continue stoking the controversy. Since they don’t appear to feel any shame, all we can do is agree that they’ve forfeited any right to be heard on the topic of Napoleon Chagnon and the Yanomamö.

            Still, the inquisitorial zealotry of the anti-Chagnon contributors notwithstanding, the most repugnant thing about Borofsky’s book is how the proclamations of concern for the Yanomamö come to seem pro forma through sheer repetition, as each side tries to paint itself as more focused on the well-being of indigenous peoples than the other. You know a book that’s supposed to address ethical issues has gone terribly awry when references to an endangered people start to seem like mere rhetorical maneuvers.

Other popular posts like this:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

SCIENCE’S DIFFERENCE PROBLEM: NICHOLAS WADE’S TROUBLESOME INHERITANCE AND THE MISSING MORAL FRAMEWORK FOR DISCUSSING THE BIOLOGY OF BEHAVIOR

You can also watch “Secrets of the Tribe,” José Padilha’s documentary about the controversy, online.


Medieval vs Enlightened: Sorry, Medievalists, Dan Savage Was Right


            A letter from an anonymous scholar of the medieval period to the sex columnist Dan Savage has been making the rounds of social media lately. Responding to a letter from a young woman asking how she should handle sex for the first time with her Muslim boyfriend, who happened to be a virgin, Savage wrote, “If he’s still struggling with the sex-negative, woman-phobic zap that his upbringing (and a medieval version of his faith) put on his head, he needs to work through that crap before he gets naked with you.” The anonymous writer bristles in bold lettering at Savage’s terminology: “I’m a medievalist, and this is one of the things about our current discourse on religion that drives me nuts. Contemporary radical Christianity, Judaism, and Islam are all terrible, but none of them are medieval, especially in terms of sexuality.” Oddly, however, the letter, published under the title, “A Medievalist Schools Dan on Medieval Attitudes toward Sex,” isn’t really as much about correcting popular misconceptions about sex in the Middle Ages as it is about promoting a currently fashionable but highly dubious way of understanding radical religion in the various manifestations we see today.

            While the medievalist’s overall argument is based far more on ideology than actual evidence, the letter does make one important and valid point. As citizens of a technologically advanced secular democracy, it’s tempting for us to judge other cultures by the standards of our own. Just as each of us expects every young person we encounter to follow a path to maturity roughly identical to the one we’ve taken ourselves, people in advanced civilizations tend to think of less developed societies as occupying one or another of the stages that brought us to our own current level of progress. This not only inspires a condescending attitude toward other cultures; it also often leads to an overly simplified understanding of our own culture’s history. The letter to Savage explains:

I’m not saying that the Middle Ages was a great period of freedom (sexual or otherwise), but the sexual culture of 12th-century France, Iraq, Jerusalem, or Minsk did not involve the degree of self-loathing brought about by modern approaches to sexuality. Modern sexual purity has become a marker of faith, which it wasn’t in the Middle Ages. (For instance, the Bishop of Winchester ran the brothels in South London—for real, it was a primary and publicly acknowledged source of his revenue—and one particularly powerful Bishop of Winchester was both the product of adultery and the father of a bastard, which didn’t stop him from being a cardinal and papal legate.) And faith, especially in modern radical religion, is a marker of social identity in a way it rarely was in the Middle Ages.

If we imagine the past as a bad dream of sexual repression from which our civilization has only recently awoken, historical tidbits about the prevalence and public acceptance of prostitution may come as a surprise. But do these revelations really undermine any characterization of the period as marked by religious suppression of sexual freedom?

            Obviously, the letter writer’s understanding of the Middle Ages is more nuanced than most of ours, but the argument reduces to pointing out a couple of random details to distract us from the bigger picture. The passage quoted above begins with an acknowledgement that the Middle Ages was not a time of sexual freedom, and isn’t it primarily that lack of freedom that Savage was referring to when he used the term medieval? The point about self-loathing is purely speculative if taken to apply to the devout generally, and simply wrong with regard to ascetics who wore hairshirts, flagellated themselves, or practiced other forms of mortification of the flesh. In addition, we must wonder how much those prostitutes enjoyed the status conferred on them by the society that was supposedly so accepting of their profession; we must also wonder if this medievalist is aware of what medieval Islamic scholars like Imam Malik (711-795) and Imam Shafi (767-820) wrote about homosexuality. The letter writer is on shaky ground yet again with regard to the claim that sexual purity wasn’t a marker of faith (though it’s hard to know precisely what the phrase even means). There were all kinds of strange prohibitions in Christendom against sex on certain days of the week, certain times of the year, and in any position outside of missionary. Anyone watching the BBC’s adaptation of Wolf Hall knows how much virginity was prized in women—as King Henry VIII could only be wed to a woman who’d never had sex with another man. And there’s obviously an Islamic tradition of favoring virgins, or else why would so many of them be promised to martyrs? Finally, of course faith wasn’t a marker of social identity—nearly everyone in every community was of the same faith. If you decided to take up another set of beliefs, chances are you’d have been burned as a heretic or beheaded as an apostate.

            The letter writer is eager to make the point that the sexual mores espoused by modern religious radicals are not strictly identical to the ones people lived according to in the Middle Ages. Of course, the varieties of religion in any one time aren’t ever identical to those in another, or even to others in the same era. Does anyone really believe otherwise? The important question is whether there’s enough similarity between modern religious beliefs on the one hand and medieval religious beliefs on the other for the use of the term to be apposite. And the answer is a definitive yes. So what is the medievalist’s goal in writing to correct Savage? The letter goes on,

The Middle Eastern boyfriend wasn’t taught a medieval version of his faith, and radical religion in the West isn’t a retreat into the past—it is a very modern way of conceiving identity. Even something like ISIS is really just interested in the medieval borders of their caliphate; their ideology developed out of 18th- and 19th-century anticolonial sentiment. The reason why this matters (beyond medievalists just being like, OMG no one gets us) is that the common response in the West to religious radicalism is to urge enlightenment, and to believe that enlightenment is a progressive narrative that is ever more inclusive. But these religions are responses to enlightenment, in fact often to The Enlightenment.

The Enlightenment, or Age of Reason, is popularly thought to have been the end of the Middle or so-called Dark Ages. The story goes that the medieval period was a time of Catholic oppression, feudal inequality, stunted innovation, and rampant violence. Then some brilliant philosophers woke the West up to the power of reason, science, and democracy, thus marking the dawn of the modern world. Historians and academics of various stripes like to sneer at this story of straightforward scientific and moral progress. It’s too simplistic. It ignores countless atrocities perpetrated by those supposedly enlightened societies. And it undergirds an ugly contemptuousness toward less advanced cultures. But is the story of the Enlightenment completely wrong?

            The medievalist letter writer makes no bones about the source of his ideas, writing in a parenthetical, “Michel Foucault does a great job of talking about these developments, and modern sexuality, including homosexual and heterosexual identity, as well—and I’m stealing and watering down his thoughts here.” Foucault, though he eschewed the label, is a leading figure in poststructuralist and postmodern schools of thought. His abiding interest throughout his career was with the underlying dynamics of social power as they manifested themselves in the construction of knowledge. He was one of those French philosophers who don’t believe in things like objective truth, human nature, or historical progress of any kind.

Foucault and the scores of scholars inspired by his work take it as their mission to expose all the hidden justifications for oppression in our culture’s various media for disseminating information. Why they would bother taking on this mission in the first place, though, is a mystery, beginning as they do from the premise that any notion of moral progress can only be yet another manifestation of one group’s power over another. If you don’t believe in social justice, why pursue it? If you don’t believe in truth, why seek it out? And what are Foucault’s ideas about the relationship between knowledge and power but theories of human nature? Despite this fundamental incoherence, many postmodern academics today zealously pounce on any opportunity to chastise scientists, artists, and other academics for alleged undercurrents in their work of sexism, racism, homophobia, Islamophobia, or some other oppressive ideology. Few sectors of academia remain untouched by this tradition, and its influence leads legions of intellectuals to unselfconsciously substitute sanctimony for real scholarship.

            So how do Foucault and the medievalist letter writer view the Enlightenment? The letter refers vaguely to “concepts of mass culture and population.” Already, it seems we’re getting far afield of how most historians and philosophers characterize the Enlightenment, not to mention how most Enlightenment figures themselves described their objectives. The letter continues,

Its narrative depends upon centralized control: It gave us the modern army, the modern prison, the mental asylum, genocide, and totalitarianism as well as modern science and democracy. Again, I’m not saying that I’d prefer to live in the 12th century (I wouldn’t), but that’s because I can imagine myself as part of that center. Educated, well-off Westerners generally assume that they are part of the center, that they can affect the government and contribute to the progress of enlightenment. This means that their identity is invested in the social form of modernity.

It’s true that Western scholars have long wielded the terms Enlightenment and Dark Ages as an exercise in self-congratulation, and it’s also true that any moral progress that was made over the period occurred alongside untold atrocities. But neither of these complications to the oversimplified version of the narrative establishes in any way that the Enlightenment never really occurred—as the letter writer’s repeated assurances that it’s preferable to be alive today ought to make clear. What’s also clear is that this medievalist is deliberately conflating enlightenment with modernity, so that all the tragedies and outrages of the modern world can be laid at the feet of enlightenment thinking. How else could he describe the Enlightenment as being simultaneously about both totalitarianism and democracy? But not everything that happened after the Enlightenment was necessarily caused by it, nor should every social institution that arose from the late 19th to the early 20th century be seen as representative of enlightenment thinking.

            The medievalist letter writer claims that being “part of the center” is what makes living in the enlightened West preferable to living in the 12th century. But there’s simply no way whoever wrote the letter actually believes this. If you happen to be poor, female, a racial or religious minority, a homosexual, or a member of any other marginalized group, you’d be far more loath to return to the Middle Ages than those of us comfortably ensconced in this notional center, just as you’d be loath to relocate to any society not governed by Enlightenment principles today.

The medievalist insists that groups like ISIS follow an ideology that dates to the 18th and 19th centuries and arose in response to colonialism, implying that Islamic extremism would be just another consequence of the inherently oppressive nature of the West and its supposedly enlightened ideas. “Radical religion,” from this Foucauldian perspective, offers a social identity to those excluded (or who feel excluded) from the dominant system of Western enlightenment capitalism. It is a modern response to a modern problem, and by making it seem like some medieval holdover, we cover up the way in which our own social power produces the conditions for this kind of identity, thus making violence appear to be the only response for these recalcitrant “holdouts.”

This is the position of scholars and journalists like Reza Aslan and Glenn Greenwald as well. It’s emblematic of the same postmodern ideology that forces on us the conclusion that if chimpanzees are violent to one another, it must be the result of contact with primatologists and other humans; if indigenous people in traditionalist cultures go to war with their neighbors, it must be owing to contact with missionaries and anthropologists; and if radical Islamists are killing their moderate co-religionists, kidnapping women, or throwing homosexuals from rooftops, well, it can only be the fault of Western colonialism. Never mind that these things are prescribed by holy texts dating from—you guessed it—the Middle Ages. The West, to postmodernists, is the source of all evil, because the West has all the power.

Directionality in Societal Development

But the letter writer’s fear that thinking of radical religion as a historical holdover will inevitably lead us to conclude that military action is the only solution rests on an obvious non sequitur. There’s simply no reason someone who sees religious radicalism as medieval must advocate further violence to stamp it out. And that brings up another vital question: what solution do the postmodernists propose for things like religious violence in the Middle East and Africa? They seem to think that if they can only convince enough people that Western culture is inherently sexist, racist, violent, and so on—basically a gargantuan engine of oppression—then every geopolitical problem will take care of itself somehow.

            If it’s absurd to believe that everything that comes from the West is good and pure and true just because it comes from the West, it’s just as absurd to believe that everything that comes from the West is evil and tainted and false for the same reason. Had the medievalist spent some time reading the webpage on the Enlightenment so helpfully hyperlinked in the letter, he or she might have realized how far off the mark Foucault’s formulation was. The letter writer gets it exactly wrong in the part about mass culture and population, since the movement is actually associated with individualism, including individual rights. But what best distinguishes enlightenment thinking from medieval thinking, in any region or era, is the conviction that knowledge, justice, and better lives for everyone in the society are achievable through the application of reason, science, and skepticism, while medieval cultures rely instead on faith, scriptural or hierarchical authority, and tradition. The two central symbols of the Enlightenment are Galileo declaring that the church was wrong to dismiss the idea of a heliocentric cosmos and the Founding Fathers appending the Bill of Rights to the U.S. Constitution. You can argue that it’s only owing to a history of colonialism that Western democracies today enjoy the highest standard of living among all the nations of the globe. But even the medievalist letter writer attests to how much better it is to live in enlightened countries today than in the same countries in the Middle Ages.

            The postmodernism of Foucault and his kindred academics is not now, and has not ever been, compelling on intellectual grounds, which leaves open the question of why so many scholars have turned against the humanist and Enlightenment ideals that once gave them their raison d’être. I can’t help suspecting that the appeal of postmodernism stems from certain religious qualities of the worldview, qualities that ironically make it resemble certain aspects of medieval thought: the bowing to the authority of celebrity scholars (mostly white males), the cloistered obsession with esoteric texts, rituals of expiation and self-abasement, and competitive finger-wagging. There’s even a core belief in something very like original sin; only in this case it consists of being born into the ranks of a privileged group whose past members were guilty of some unspeakable crime. Postmodern identity politics seems to appeal most strongly to whites with an overpowering desire for acceptance by those less fortunate, as if they were looking for some kind of forgiveness or redemption only the oppressed have the power to grant. That’s why these academics are so quick to be persuaded they should never speak up unless it’s on behalf of some marginalized group, as if good intentions were proof against absurdity. As safe and accommodating and well-intentioned as this stance sounds, though, in practice it amounts to little more than moral and intellectual cowardice.

Life really has gotten much better since the Enlightenment, and it really does continue to get better for an increasing number of formerly oppressed groups of people today. All this progress has been made, and continues being made, precisely because there are facts and ideas—scientific theories, human rights, justice, and equality—that transcend the social conditions surrounding their origins. Accepting this reality doesn’t in any way mean seeing violence as the only option for combatting religious extremism, despite many academics’ insistence to the contrary. Nor does it mean abandoning the cause of political, cultural, and religious pluralism. But, if we continue disavowing the very ideals that have driven this progress, however fitfully and haltingly it has occurred, if we continue denying that it can even be said to have occurred at all, then what hope can we possibly have of pushing it even further along in the future?   

Also read:

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

On ISIS's explicit avowal of adherence to medieval texts: “What ISIS Really Wants" by Graeme Wood of the Atlantic


Science’s Difference Problem: Nicholas Wade’s Troublesome Inheritance and the Missing Moral Framework for Discussing the Biology of Behavior

Nicholas Wade went there. In his book “A Troublesome Inheritance,” he argues that race is not only a real, biological phenomenon, but one with potentially important implications for our understanding of the fates of different peoples. Is it possible to even discuss such things without being justifiably labeled a racist? More importantly, if biological differences do show up in the research, how can we discuss them without being grossly immoral?

            No sooner had Nicholas Wade’s new book become available for free two-day shipping than a contest began to see who could pen the most devastating critical review of it, the one that best satisfies our desperate urge to dismiss Wade’s arguments and reinforce our faith in the futility of studying biological differences between human races, a faith backed up by a cherished official consensus ever so conveniently in line with our moral convictions. That Charles Murray, one of the authors of the evil tome The Bell Curve, wrote an early highly favorable review for the Wall Street Journal only upped the stakes for all would-be champions of liberal science. Even as the victor awaits crowning, many scholars are posting links to their favorite contender’s critiques all over social media to advertise their principled rejection of this book they either haven’t read yet or have no intention of ever reading.

You don’t have to go beyond the title, A Troublesome Inheritance: Genes, Race and Human History, to understand what all these conscientious intellectuals are so eager to distance themselves from—and so eager to condemn. History has undeniably treated some races much more poorly than others, so if their fates are in any small way influenced by genes, the implication of inferiority is unavoidable. Regardless of what he actually says in the book, Wade’s very program strikes many as racist from its inception.

            The going theories for the dawn of the European Enlightenment and the rise of Western culture—and Western people—to global ascendancy attribute the phenomenon to a combination of geographic advantages and historical happenstance. Wade, along with many other scholars, finds such explanations unsatisfying. Geography can explain why some societies never reached sufficient population densities to make the transition into states. “Much harder to understand,” Wade writes, “is how Europe and Asia, lying on much the same lines of latitude, were driven in the different directions that led to the West’s dominance” (223). Wade’s theory incorporates elements of geography—like the relatively uniform expanse of undivided territory between the Yangtze and Yellow rivers that facilitated the establishment of autocratic rule, and the diversity of fragmented regions in Europe preventing such consolidation—but he goes on to suggest that these different environments would have led to the development of different types of institutions. Individuals more disposed toward behaviors favored by these institutions, Wade speculates, would be rewarded with greater wealth, which would in turn allow them to have more children with behavioral dispositions similar to their own.

            After hundreds of years and multiple generations, Wade argues, the populations of diverse regions would respond to these diverse institutions by evolving subtly different temperaments. In China, for instance, favorable (and hence selected-for) traits may have included intelligence, conformity, and obedience. These behavioral propensities would subsequently play a role in determining the future direction of the institutions that fostered their evolution. Average differences in personality would, according to Wade, also make it more or less likely that certain new types of institution would arise within a given society, or that they could be successfully transplanted into it. And it’s a society’s institutions that ultimately determine its fate relative to other societies. To the objection that geography can, at least in principle, explain the vastly different historical outcomes among peoples of specific regions, Wade responds, “Geographic determinism, however, is as absurd a position as genetic determinism, given that evolution is about the interaction between the two” (222).

            East Asians score higher on average on IQ tests than people with European ancestry, but there’s no evidence that any advantage they enjoy in intelligence, or any proclivity they may display toward obedience and conformity—traits supposedly manifest in their long history of autocratic governance—is attributable to genetic differences as opposed to traditional attitudes toward schoolwork, authority, and group membership inculcated through common socialization practices. So we can rest assured that Wade’s just-so story about evolved differences between the races in social behavior is eminently dismissible. Wade himself at several points throughout A Troublesome Inheritance admits that his case is wholly speculative. So why, given the abominable history of racist abuses of evolutionary science, would Wade publish such a book?

It’s not because he’s unaware of the past abuses. Indeed, in his second chapter, titled “Perversions of Science,” which none of the critical reviewers deigns to mention, Wade chronicles the rise of eugenics and its culmination in the Holocaust. He concludes,

After the Second World War, scientists resolved for the best of reasons that genetics research would never again be allowed to fuel the racial fantasies of murderous despots. Now that new information about human races has been developed, the lessons of the past should not be forgotten and indeed are all the more relevant. (38)

The convention among Wade’s critics is to divide his book into two parts, acknowledge that the first is accurate and compelling enough, and then unload the full academic arsenal of both scientific and moral objections to the second. This approach necessarily scants a few important links in his chain of reasoning in an effort to reduce his overall point to its most objectionable elements. And for all their moralizing, the critics, almost to a one, fail to consider Wade’s expressed motivation for taking on such a fraught issue.

            Even granting that Wade’s case for the role of biological evolution in historical developments like the Industrial Revolution is weak, we may still examine his reasoning up to that point in the book, which may strike many as more firmly grounded. You can also start to get a sense of what was motivating Wade when you realize that the first half of A Troublesome Inheritance recapitulates his two previous books on human evolution. The first, Before the Dawn, chronicled the evolution and history of our ancestors from a species that resembled a chimpanzee through millennia as tribal hunter-gatherers to the first permanent settlements and the emergence of agriculture. Thus, we see that all along his scholarly interest has been focused on major transitions in human prehistory.

While critics of Wade’s latest book focus almost exclusively on his attempts at connecting genomics to geopolitical history, he begins his exploration of differences between human populations by emphasizing the critical differences between humans and chimpanzees, which we can all agree came about through biological evolution. Citing a number of studies comparing human infants to chimps, Wade writes in A Troublesome Inheritance,

Besides shared intentions, another striking social behavior is that of following norms, or rules generally agreed on within the “we” group. Allied with the rule following are two other basic principles of human social behavior. One is a tendency to criticize, and if necessary punish, those who do not follow the agreed-upon norms. Another is to bolster one’s own reputation, presenting oneself as an unselfish and valuable follower of the group’s norms, an exercise that may involve finding fault with others. (49)

What separates us from chimpanzees and other apes—including our ancestors—is our much greater sociality and our much greater capacity for cooperation. (Though primatologist Frans de Waal would object to leaving the much more altruistic bonobos out of the story.) The basis for these changes was the evolution of a suite of social emotions—emotions that predispose us toward certain types of social behaviors, like punishing those who fail to adhere to group norms (keeping mum about genes and race for instance). If there’s any doubt that the human readiness to punish wrongdoers and rule violators is instinctual, ongoing studies demonstrating this trait in children too young to speak make the claim that the behavior must be taught ever more untenable. The conclusion most psychologists derive from such studies is that, for all their myriad manifestations in various contexts and diverse cultures, the social emotions of humans emerge from a biological substrate common to us all.  

            After Before the Dawn, Wade came out with The Faith Instinct, which explores theories developed by biologist David Sloan Wilson and evolutionary psychologist Jesse Bering about the adaptive role of religion in human societies. In light of cooperation’s status as one of the most essential behavioral differences between humans and chimps, other behaviors that facilitate or regulate coordinated activity suggest themselves as candidates for having pushed our ancestors along the path toward several key transitions. Language, for instance, must have been an important development. Religion may have been another. As Wade argues in A Troublesome Inheritance:

The fact that every known society has a religion suggests that each inherited a propensity for religion from the ancestral human population. The alternative explanation, that each society independently invented and maintained this distinctive human behavior, seems less likely. The propensity for religion seems instinctual, rather than purely cultural, because it is so deeply set in the human mind, touching emotional centers and appearing with such spontaneity. There is a strong evolutionary reason, moreover, that explains why religion may have become wired in the neural circuitry. A major function of religion is to provide social cohesion, a matter of particular importance among early societies. If the more cohesive societies regularly prevailed over the less cohesive, as would be likely in any military dispute, an instinct for religious behavior would have been strongly favored by natural selection. This would explain why the religious instinct is universal. But the particular form that religion takes in each society depends on culture, just as with language. (125-6)

As is evident in this passage, Wade never suggests any one-to-one correspondence between genes and behaviors. Genes function in the context of other genes in the context of individual bodies in the context of several other individual bodies. But natural selection is only about outcomes with regard to survival and reproduction. The evolution of social behavior must thus be understood as taking place through the competition, not just of individuals, but also of institutions we normally think of as purely cultural.

            The evolutionary sequence Wade envisions begins with increasing sociability enforced by a tendency to punish individuals who fail to cooperate, and moves on to tribal religions which involve synchronized behaviors, unifying beliefs, and omnipresent but invisible witnesses who discourage would-be rule violators. Once humans began living in more cohesive groups, behaviors that influenced the overall functioning of those groups became the targets of selection. Religion may have been among the first institutions that emerged to foster cohesion, but others relying on the same substrate of instincts and emotions would follow. Tracing the trajectory of our prehistory from the origin of our species in Africa, to the peopling of the world’s continents, to the first permanent settlements and the adoption of agriculture, Wade writes,

The common theme of all these developments is that when circumstances change, when a new resource can be exploited or a new enemy appears on the border, a society will change its institutions in response. Thus it’s easy to see the dynamics of how human social change takes place and why such a variety of human social structures exists. As soon as the mode of subsistence changes, a society will develop new institutions to exploit its environment more effectively. The individuals whose social behavior is better attuned to such institutions will prosper and leave more children, and the genetic variations that underlie such a behavior will become more common. (63-4)

First a society responds to shifting pressures culturally, but a new culture amounts to a new environment for individuals to adapt to. Wade understands that much of this adaptation occurs through learning. Some of the challenges posed by an evolving culture will, however, be easier for some individuals to address than others. Evolutionary anthropologists tend to think of culture as a buffer between environments and genes. Many consider it more of a wall. To Wade, though, culture is merely another aspect of the environment individuals and their genes compete to thrive in.

If you’re a cultural anthropologist and you want to study how cultures change over time, the most convenient assumption you can make is that any behavioral differences you observe between societies or over periods of time are owing solely to the forces you’re hoping to isolate. Biological changes would complicate your analysis. If, on the other hand, you’re interested in studying the biological evolution of social behaviors, you will likely be inclined to assume that differences between cultures, if not based completely on genetic variance, at least rest on a substrate of inherited traits. Wade has quite obviously been interested in social evolution since his first book on anthropology, so it’s understandable that he would be excited about genome studies suggesting that human evolution has been operating recently enough to affect humans in distantly separated regions of the globe. And it’s understandable that he’d be frustrated by sanctions against investigating possible behavioral differences tied to these regional genetic differences. But this doesn’t stop his critics from insinuating that his true agenda is something other than solely scientific.

            On the technology and pop culture website io9, blogger and former policy analyst Annalee Newitz calls Wade’s book an “argument for white supremacy,” which goes a half-step farther than the critical review by Eric Johnson the post links to, titled "On the Origin of White Power." Johnson sarcastically states that Wade isn’t a racist and acknowledges that the author is correct in pointing out that considering race as a possible explanatory factor isn’t necessarily racist. But, according to Johnson’s characterization,

He then explains why white people are better because of their genes. In fairness, Wade does not say Caucasians are better per se, merely better adapted (because of their genes) to the modern economic institutions that Western society has created, and which now dominate the world’s economy and culture.

The clear implication here is that Wade’s mission is to prove that the white race is superior but that he also wanted to cloak this agenda in the garb of honest scientific inquiry. Why else would Wade publish his problematic musings? Johnson believes that scientists and journalists should self-censor speculations or as-yet unproven theories that could exacerbate societal injustices. He writes, “False scientific conclusions, often those that justify certain well-entrenched beliefs, can impact peoples’ lives for decades to come, especially when policy decisions are based on their findings.” The question this position raises is how certain we can be that any scientific “conclusion”—Wade would likely characterize it as an exploration—is indeed false before it’s been made public and become the topic of further discussion and research.

Johnson’s is the leading contender for the title of most devastating critique of A Troublesome Inheritance, and he makes several excellent points that severely undermine parts of Wade’s case for natural selection playing a critical role in recent historical developments. But, like H. Allen Orr’s critique in The New York Review, the first runner-up in the contest, Johnson’s essay is oozing with condescension and startlingly unselfconscious sanctimony. These reviewers profess to be standing up for science even as they ply their readers with egregious ad hominem rhetoric (Wade is just a science writer, not a scientist) and arguments from adverse consequences (racist groups are citing Wade’s book in support of their agendas), thereby underscoring another of Wade’s arguments—that the case against racial differences in social behavior is at least as ideological as it is scientific. Might the principle that researchers should go public with politically sensitive ideas or findings only after they’ve reached some threshold of wider acceptance end up stifling free inquiry? And, if Wade’s theories really are as unlikely to bear empirical or conceptual fruit as his critics insist, shouldn’t the scientific case against them be enough? Isn’t all the innuendo and moral condemnation superfluous—maybe even a little suspicious?

            White supremacists may get some comfort from parts of Wade’s book, but if they read from cover to cover they’ll come across plenty of passages to get upset about. In addition to the suggestion that Asians are more intelligent than Caucasians, there’s the matter of the entire eighth chapter, which describes a scenario for how Ashkenazi Jews became even more intelligent than Asians and even more creative and better suited to urban institutions than Caucasians of Northern European ancestry. Wade also points out more than once that the genetic differences between the races are based, not on the presence or absence of single genes, but on clusters of alleles occurring with varying frequencies. He insists that

the significant differences are those between societies, not their individual members. But minor variations in social behavior, though barely perceptible, if at all, in an individual, combine to create societies of very different character. (244)

In other words, none of Wade’s speculations, nor any of the findings he reports, justifies discriminating against any individual because of his or her race. At best, there would only ever be a slightly larger probability that an individual will manifest any trait associated with people of the same ancestry. You’re still much better off reading the details of the résumé. Critics may dismiss as mere lip service Wade’s disclaimers about how “Racism and discrimination are wrong as a matter of principle, not of science” (7), and how the possibility of genetic advantages in certain traits “does not of course mean that Europeans are superior to others—a meaningless term in any case from an evolutionary perspective” (238).  But if Wade is secretly taking delight in the success of one race over another, it’s odd how casually he observes that “the forces of differentiation seem now to have reversed course due to increased migration, travel and intermarriage” (71).
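To see just how small the individual-level stakes are, consider a back-of-the-envelope calculation of my own; the 0.2-standard-deviation gap in it is an assumed figure chosen purely for illustration, not a number Wade reports. If two groups’ distributions for some trait are normal with equal spread and means separated by \(d\) standard deviations, the probability that a randomly chosen member of the higher-scoring group outscores a randomly chosen member of the other group is

\[ P(X > Y) = \Phi\!\left(\frac{d}{\sqrt{2}}\right), \]

where \(\Phi\) is the standard normal cumulative distribution. For \(d = 0.2\), this gives \(\Phi(0.14) \approx 0.56\): the “advantaged” individual wins only about 56 percent of the time, barely better than a coin flip, which is exactly why reading the résumé remains the better bet.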

            Wade does of course have to cite some evidence, indirect though it may be, in support of his speculations. First, he covers several genomic studies showing that, contrary to much earlier scholarship, populations of various regions of the globe are genetically distinguishable. Race, in other words, is not merely a social construct, as many have insisted. He then moves on to research suggesting that a significant portion of the human genome reveals evidence of positive selection recently enough to have affected regional populations differently. Joshua Akey’s 2009 review of multiple studies on markers of recent evolution is central to his argument. Wade interprets Akey’s report as suggesting that as much as 14 percent of the human genome shows signs of recent selection. In his review, Orr insists this is a mistake, putting the number at 8 percent.

Steven Pinker, who discusses Akey’s paper in his 2011 book The Better Angels of Our Nature, likewise takes the number to be 8 and not 14. But even that lower proportion is significant. Pinker, an evolutionary psychologist, stresses just how revolutionary this finding might be.

Some journalists have uncomprehendingly lauded these results as a refutation of evolutionary psychology and what they see as its politically dangerous implication of a human nature shaped by adaptation to a hunter-gatherer lifestyle. In fact the evidence for recent selection, if it applies to genes with effects on cognition and emotion, would license a far more radical form of evolutionary psychology, one in which minds have been biologically shaped by recent environments in addition to ancient ones. And it could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate societies for millennia. (614)

Contra critics who paint him as a crypto-supremacist, it’s quite clearly that “far more radical form of evolutionary psychology” Wade is excited about. That’s why he’s exasperated by what he sees as Pinker’s refusal, out of fear of its political ramifications, to admit that the case for that form is strong enough to warrant pursuing it further. Pinker does consider much of the same evidence as Wade, but where Wade sees only clear support Pinker sees several intractable complications. Indeed, the section of Better Angels where Pinker discusses recent evolution is an important addendum to Wade’s book, and it must be noted Pinker doesn’t rule out the possibility of regional selection for social behaviors. He simply says that “for the time being, we have no need for that hypothesis” (622).

            Wade is also able to point to one gene that has already been identified whose alleles correspond to varying frequencies of violent behavior. The MAO-A gene comes in high- and low-activity varieties, and the low-activity version is more common among certain ethnic groups, like sub-Saharan Africans and Maoris. But, as Pinker points out, a majority of Chinese men also have the low-activity version of the gene, and they aren’t known for being particularly prone to violence. So the picture isn’t straightforward. Aside from the Ashkenazim, Wade cites another well-documented case in which selection for behavioral traits could have played an important role. In his book A Farewell to Alms, Gregory Clark presents an impressive collection of historical data suggesting that in the lead-up to the Industrial Revolution in England, people with personality traits that would likely have contributed to the rapid change were rewarded with more money, and people with more money had more children. The children of the wealthy would quickly overpopulate the ranks of the upper classes, and thus large numbers of them would inevitably descend into lower ranks. The effect of this “ratchet of wealth” (180), as Wade calls it, after multiple generations would be genes for behaviors like impulse control, patience, and thrift cascading throughout the population, priming it for the emergence of historically unprecedented institutions.

            Wade acknowledges that Clark’s theory awaits direct confirmation through the discovery of actual alleles associated with the behavioral traits he describes. But he points to experiments with artificial selection that suggest the time scale Clark considers, about 24 generations, would have been sufficient to effect measurable changes. In his critical review, though, Johnson counters that natural selection is much slower than artificial selection, and he shows that Clark’s own numbers demonstrate a rapid attenuation of the effects of selection. Pinker points to other shortcomings in the argument, like the number of cases in which institutions changed and populations exploded in periods too short to have seen any significant change in allele frequencies. Wade isn’t swayed by any of these objections, which he takes on one by one, contrary to Orr’s characterization of the disagreement. As of now, the debate is ongoing. It may not be settled conclusively until scientists have a much better understanding of how genes work to influence behavior, which Wade estimates could take decades.
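For a rough feel of the arithmetic behind claims like Clark’s, the standard back-of-the-envelope tool is the breeder’s equation from quantitative genetics. The sketch below is my own illustration, not a calculation from Wade, Clark, or their critics, and the heritability and selection figures in it are assumptions chosen purely for the sake of the example:

\[ R = h^{2} S \]

Here \(R\) is the change in a trait’s population mean per generation, \(h^{2}\) is the trait’s narrow-sense heritability, and \(S\) is the selection differential (how far, in standard deviations, the parents of the next generation sit above the population average). Under constant selection over \(t\) generations, the cumulative shift is roughly

\[ \Delta \bar{z} \approx t \, h^{2} S = 24 \times 0.3 \times 0.1 \approx 0.7 \ \text{SD}, \]

taking \(h^{2} = 0.3\), \(S = 0.1\) standard deviations per generation, and \(t = 24\). Even under these modest assumed values, two dozen generations move the mean by roughly two-thirds of a standard deviation, which is why the dispute turns less on the arithmetic than on whether selection of that strength and constancy was actually operating, the very premise Johnson’s and Pinker’s objections call into question.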

            Pinker is not known for being politically correct, but Wade may have a point when he accuses him of not following the evidence to the most likely conclusions. “The fact that a hypothesis is politically uncomfortable,” Pinker writes, “does not mean that it is false, but it does mean that we should consider the evidence very carefully before concluding that it is true” (614). This sentiment echoes the position taken by Johnson: Hold off going public with sensitive ideas until you’re sure they’re right. But how can we ever be sure whether an idea has any validity if we’re not willing to investigate it? Wade’s case for natural selection operating through changing institutions during recorded history isn’t entirely convincing, but neither is it completely implausible. The evidence that would settle the issue simply hasn’t been discovered yet. But neither is there any evidence in Wade’s book to support the conclusion that his interest in the topic is political as opposed to purely scientific. “Each gene under selection,” he writes, “will eventually tell a fascinating story about some historical stress to which the population was exposed and then adapted” (105). Fascinating indeed, however troubling they may be.

            Is the best way to handle troublesome issues like the possible role of genes in behavioral variations between races to declare them off-limits to scientists until the evidence is incontrovertible? Might this policy come with the risk that avoiding the topic now will make it all too easy to deny any evidence that does emerge later? If genes really do play a role in violence and impulse-control, then we may need to take that into account when we’re devising solutions to societal inequities.

Genes are not gods whose desires must be bowed to. But neither are they imaginary forces that will go away if we just ignore them. The challenge of dealing with possible biological differences also arises in the context of gender. Because women continue to earn smaller incomes on average than men and are underrepresented in science and technology fields, and because the discrepancy is thought to be the product of discrimination and sexism, many scholars argue that any research into biological factors that may explain these outcomes is merely an effort at rationalizing injustice. The problem is that the evidence for biological differences in behavior between the genders is much stronger than it is for those between populations from various regions. We can ignore these findings—and perhaps even condemn the scientists who conduct the studies—because they don’t jibe with our preferred explanations. But solutions based on willful ignorance have little chance of being effective.

            The sad fact is that scientists and academics have nothing even resembling a viable moral framework for discussing biological behavioral differences. Their only recourse is to deny and inveigh. The quite reasonable fear is that warnings like Wade’s about how the variations are subtle and may not exist at all in any given individual will go unheeded as the news of the findings is disseminated, and dumbed-down versions of the theories will be coopted in the service of reactionary agendas. A study reveals that women respond more readily to a baby’s vocalizations, and the headlines read “Genes Make Women Better Parents.” An allele associated with violent behavior is found to be more common in African Americans, and some politician cites it as evidence that the astronomical incarceration rate for black males is justifiable. But is censorship the answer? Average differences between genders in career preferences are directly relevant to any discussion of uneven representation in various fields. And it’s possible that people with a certain allele will respond differently to different types of behavioral intervention. As Carl Sagan explained, in a much different context, in his book The Demon-Haunted World, “we cannot have science in bits and pieces, applying it where we feel safe and ignoring it where we feel threatened—again, because we are not wise enough to do so” (297).

            Part of the reason the public has trouble understanding what differences between varying types of people may mean is that scientists are at odds with each other about how to talk about them. And with all the righteous declamations, they can start to sound a lot like the talking heads on cable news shows. Conscientious and well-intentioned scholars have so thoroughly poisoned the well when it comes to biological behavioral differences that their possible existence is treated as a moral catastrophe. How should we discuss the topic? Working to convey the importance of the distinction between average and absolute differences may be a good start. Efforts to encourage people to celebrate diversity and to challenge the equating of genes with destiny are already popularly embraced. In the realm of policy, we might shift our focus from equality of outcome to equality of opportunity. It’s all too easy to find clear examples of racial disadvantages—in housing, in schooling, in the job market—that go well beyond simple head counting at top schools and in executive boardrooms. Slight differences in behavioral propensities can’t justify such blatant instances of unfairness. Granted, that type of unfairness is much more difficult to find when it comes to gender disparities, but the lesson there is that policies and agendas based on old assumptions might need to give way to a new understanding, not that we should pretend the evidence doesn’t exist or has no meaning.

            Wade believes it was safe for him to write about race because “opposition to racism is now well entrenched” in the Western world (7). In one sense, he’s right about that. Very few people openly profess a belief in racial hierarchies. In another sense, though, it’s just as accurate to say that racism is itself well entrenched in our society. Will A Troublesome Inheritance put the brakes on efforts to bring about greater social justice? This seems unlikely if only because the publication of every Bell Curve occasions the writing of another Mismeasure of Man.

  The unfortunate result is that where you stand on the issue will become yet another badge of political identity as we form ranks on either side. Most academics will continue to consider speculation irresponsible, apply a far higher degree of scrutiny to the research, and direct the purest moral outrage they can muster, while still appearing rational and sane, at anyone who dares violate the taboo. This represents the triumph of politics over science. And it ensures the further entrenchment of views on either side of the divide.

Despite the few superficial similarities between Wade’s arguments and those of racists and eugenicists of centuries past, we have to realize that our moral condemnation of what we suppose are his invidious extra-scientific intentions is itself born of extra-scientific ideology. Whether race plays a role in behavior is a scientific question. Our attitude toward that question, and toward the parts of the answer that trickle in despite our best efforts to keep the topic taboo, may itself rest on assumptions that no longer apply. So we must recognize that succumbing to the temptation to moralize when faced with scientific disagreement automatically makes hypocrites of us all. And we should bear in mind as well that insofar as racial and gender differences really do exist it will only be through coming to a better understanding of them that we can hope to usher in a more just society for children of any and all genders and races. 

Also read: 

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM


The Spider-Man Stars' Dust-up over Pseudo-Sexism

You may have noticed that the term sexism has come to refer to any suggestion that there may be meaningful differences between women and men, or between what’s considered feminine and what’s considered masculine. But is the denial of sex differences really what’s best for women? Is it what’s best for anyone?

A new definition of the word sexism has taken hold in the English-speaking world, even to the point where it’s showing up in official definitions. No longer used merely to describe the belief that women are somehow inferior to men, sexism can now refer to any belief in gender differences. Case in point: when Spider-Man star Andrew Garfield fielded a question from a young boy about how the superhero came by his iconic costume, explaining that he sewed it himself, even though sewing is “kind of a feminine thing to do,” Emma Gray and The Huffington Post couldn’t resist griping about Garfield’s “Casual Sexism” and celebrating his girlfriend Emma Stone’s “Most Perfect Way” of calling it out. Gray writes,

Instead of letting the comment—which assumes that there is something fundamentally female about sewing, and that doing such a “girly” thing must be qualified with a “masculine” outcome—slide, Stone turned the Q&A panel into an important teachable moment. She stopped her boyfriend and asked: “It's feminine, how?”

Those three words are underwhelming enough to warrant suspicion that Gray is really just cheerleading for someone she sees as playing for the right team.  

            A few decades ago, people would express beliefs about the proper roles and places for women quite openly in public. Outside of a few bastions of radical conservatism, you’re unlikely to hear anyone say that women shouldn’t be allowed to run businesses or serve in high office today. But rather than being leveled with decreasing frequency, the charge of sexism is now applied to a wider and more questionable assortment of ideas and statements. Surprised at having fallen afoul of this broadening definition of sexism, Garfield responded to Stone’s challenge by saying,

It’s amazing how you took that as an insult. It’s feminine because I would say femininity is about more delicacy and precision and detail work and craftsmanship. Like my mother, she’s an amazing craftsman. She in fact made my first Spider-Man costume when I was three. So I use it as a compliment, to compliment the feminine in women but in men as well. We all have feminine in us, young men.

Gray sees that last statement as a result of how Stone “pressed Garfield to explain himself.” Watch the video, though, and you’ll see she did little pressing. He seemed happy to explain what he meant. And that last line was actually a reiteration of the point he’d made originally by saying, “It’s kind of a feminine thing to do, but he really made a very masculine costume”—the line that Stone pounced on. 

            Garfield’s handling of both the young boy’s question and Stone’s captious interruption is far more impressive than Stone’s supposedly perfect way of calling him out. Indeed, Stone’s response was crudely ideological, implying quite simply that her boyfriend had revealed something embarrassing about himself—gotcha!—and encouraging him to expound further on his unacceptable ideas so she and the audience could chastise him. She had, like Gray, assumed that any reference to gender roles was sexist by definition. But did Garfield’s original answer to the boy’s question really reveal that he “assumes that there is something fundamentally female about sewing, and that doing such a ‘girly’ thing must be qualified with a ‘masculine’ outcome,” as Gray claims? (Note her deceptively inconsistent use of scare quotes and actual quotes.)

Garfield’s thinking throughout the exchange was quite sophisticated. First, he tried to play up Spider-Man’s initiative and self-sufficiency because he knew the young fan would appreciate these qualities in his hero. Then he seems to have realized that the young boy might be put off by the image of his favorite superhero engaging in an activity that’s predominantly taken up by women. Finally, he realized he could use this potential uneasiness as an opportunity for making the point that just because a male does something generally considered feminine that doesn’t mean he’s any less masculine. This is the opposite of sexism. So why did Stone and Gray cry foul? 

One of the tenets of modern feminism is that gender roles are either entirely chimerical or, to the extent that they exist, socially constructed. In other words, they’re nothing but collective delusions. Accepting, acknowledging, or referring to gender roles then, especially in the presence of a young child, abets in the perpetuation of these separate roles. Another tenet of modern feminism that comes into play here is that gender roles are inextricably linked to gender oppression. The only way for us as a society to move toward greater equality, according to this ideology, is for us to do away with gender roles altogether. Thus, when Garfield or anyone else refers to them as if they were real or in any way significant, he must be challenged.

One of the problems with Stone’s and Gray’s charge of sexism is that there happens to be a great deal of truth in every aspect of Garfield’s answer to the boy’s question. Developmental psychologists consistently find that young children really are preoccupied with categorizing behaviors by gender and that the salience of gender to children arises so reliably and at so young an age that it’s unlikely to stem from socialization.

Studies have also consistently found that women tend to excel in tasks requiring fine motor skill, while men excel in most other dimensions of motor ability. And what percentage of men ever go beyond sewing buttons on their shirts—if they do even that? Why but for the sake of political correctness would anyone deny this difference? Garfield’s response to Stone’s challenge was also remarkably subtle. He didn’t act as though he’d been caught in a faux pas but instead turned the challenge around, calling Stone out for assuming he somehow intended to disparage women. He then proudly expounded on his original point. If anything, it looked a little embarrassing for Stone.

Modern feminism has grown over the past decade to include the push for LGBT rights. Historically, gender roles were officially sanctioned and strictly enforced, so it was understandable that anyone advocating for women’s rights would be inclined to question those roles. Today, countless people who don’t fit neatly into conventional gender categories are in a struggle with constituencies who insist their lifestyles and sexual preferences are unnatural. But even those of us who support equal rights for LGBT people have to ask ourselves if the best strategy for combating bigotry is an aggressive and wholesale denial of gender. Isn’t it possible to recognize gender differences, and even celebrate them, without trying to enforce them prescriptively? Can’t we accept the possibility that some average differences are innate without imposing definitions on individuals or punishing them for all the ways they upset expectations? And can’t we challenge religious conservatives for the asinine belief that nature sets up rigid categories and the idiotic assumption that biology is about order as opposed to diversity, instead of ignoring (or attacking) psychologists who study gender differences?

I think most people realize there’s something not just unbecoming but unfair about modern feminism’s anti-gender attitude. And most people probably don’t appreciate all the cheap gotchas liberal publications like The Huffington Post and The Guardian and Slate are so fond of touting. Every time feminists accuse someone of sexism for simply referring to obvious gender differences, they belie their own case that feminism is no more and no less than a belief in the equality of women. Only twenty percent of Americans identify themselves as feminists, while over eighty percent believe in equality for women. Feminism, like sexism, has clearly come to mean something other than what it used to. It may be that just as the gender roles of the past century gradually came to be seen as too rigid, so too are that century’s ideologies increasingly seen as lacking in nuance and their proponents as too quick to condemn. It may even be that we Americans and Brits no longer need churchy ideologies to tell us all people deserve to be treated equally. 

Also read: 

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And: 

SCIENCE’S DIFFERENCE PROBLEM: NICHOLAS WADE’S TROUBLESOME INHERITANCE AND THE MISSING MORAL FRAMEWORK FOR DISCUSSING THE BIOLOGY OF BEHAVIOR

And:

WHY I WON'T BE ATTENDING THE GENDER-FLIPPED SHAKESPEARE PLAY


The Better-than-Biblical History of Humanity Hidden in Tiny Cells and a Great Story of Science Hidden in Plain Sight

With “Neanderthal Man,” paleogeneticist Svante Pääbo has penned a deeply personal and sparely stylish paean to the field of paleogenetics and all the colleagues and supporters who helped him create it. The book offers an invaluable look behind the scenes of some of the most fascinating research in recent decades.

            Anthropology enthusiasts became acquainted with the name Svante Pääbo in books or articles published throughout the latter half of this century’s first decade about how our anatomically modern ancestors might have responded to the presence of other species of humans as they spread over new continents tens of thousands of years ago. The first bit of news associated with this unplaceable name was that humans probably never interbred with Neanderthals, a finding that ran counter to the multiregionalist theory of human evolution and lent support to the theory of a single origin in Africa. The significance of the Pääbo team’s findings in the context of this longstanding debate was a natural enough angle for science writers to focus on. But what’s shocking in hindsight is that so little of what was written during those few years conveyed any sense of wonder at the discovery that DNA from Neanderthals, a species that went extinct 30,000 years ago, was still retrievable—that snatches of it had in fact already been sequenced.

Then, in 2010, the verdict suddenly changed; humans really had bred with Neanderthals, and all people alive today who trace their ancestry to regions outside of Africa carry vestiges of those couplings in their genomes. The discrepancy between the two findings, we learned, was owing to the first being based on mitochondrial DNA and the second on nuclear DNA. Even those anthropology students whose knowledge of human evolution derived mostly from what can be gleaned from the shapes and ages of fossil bones probably understood that since hundreds of copies of mitochondrial DNA reside in every cell of a creature’s body, while each cell houses just two copies of its nuclear genome, this latest feat of gene sequencing must have been an even greater challenge. Yet, at least among anthropologists, the accomplishment got swallowed up in the competition between rival scenarios for how our species came to supplant all the other types of humans. Though, to be fair, there was a bit of marveling among paleoanthropologists at the implications of being some percentage Neanderthal.

            Fortunately for us enthusiasts, in his new book Neanderthal Man: In Search of Lost Genomes, Pääbo, a Swedish molecular biologist now working at the Max Planck Institute in Leipzig, goes some distance toward making it possible for everyone to appreciate the wonder and magnificence of his team’s monumental achievements. It would have been a great service to historians for him to simply recount the series of seemingly insurmountable obstacles the researchers faced at various stages, along with the technological advances and bursts of inspiration that saw them through. But what he’s done instead is pen a deeply personal and sparely stylish paean to the field of paleogenetics and all the colleagues and supporters who helped him create it.

It’s been over sixty years since Watson and Crick, with some help from Rosalind Franklin, revealed the double-helix structure of DNA. But the Human Genome Project, the massive effort to sequence all three billion base pairs that form the blueprint for a human, was completed just over ten years ago. As inexorable as the march of technological progress often seems, the jump from methods for sequencing the genes of living creatures to those of long-extinct species only strikes us as foregone in hindsight. At the time when Pääbo was originally dreaming of ancient DNA, which he first hoped to retrieve from Egyptian mummies, there were plenty of reasons to doubt it was possible. He writes,

When we die, we stop breathing; the cells in our body then run out of oxygen, and as a consequence their energy runs out. This stops the repair of DNA, and various sorts of damage rapidly accumulate. In addition to the spontaneous chemical damage that continually occurs in living cells, there are forms of damage that occur after death, once the cells start to decompose. One of the crucial functions of living cells is to maintain compartments where enzymes and other substances are kept separate from one another. Some of these compartments contain enzymes that break down DNA from various microorganisms that the cell may encounter and engulf. Once an organism dies and runs out of energy, the compartment membranes deteriorate, and these enzymes leak out and begin degrading DNA in an uncontrolled way. Within hours and sometimes days after death, the DNA strands in our body are cut into smaller and smaller pieces, while various other forms of damage accumulate. At the same time, bacteria that live in our intestines and lungs start growing uncontrollably when our body fails to maintain the barriers that normally contain them. Together these processes will eventually dissolve the genetic information stored in our DNA—the information that once allowed our body to form, be maintained, and function. When that process is complete, the last trace of our biological uniqueness is gone. In a sense, our physical death is then complete. (6)

The hope was that amid this nucleic carnage enough pieces would survive to reconstruct the entire genome. That meant Pääbo needed lots of organic remains and some really powerful extraction tools. It also meant that he’d need some well-tested and highly reliable methods for fitting the pieces of the puzzle together.
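For a rough sense of what “enough pieces” means, sequencing people talk in terms of coverage: with N recovered fragments of average length L from a genome of G base pairs, each position is sampled N * L / G times on average, and under the standard Poisson approximation from sequencing theory (the Lander-Waterman model), a position is missed entirely with probability about e^(-coverage). The figures in this back-of-the-envelope sketch are invented, not taken from Pääbo's project.

```python
import math

# Back-of-the-envelope coverage estimate (all figures invented for illustration).
genome_size = 3.2e9    # base pairs in a human-scale genome
fragment_len = 50      # ancient DNA survives only as short fragments
num_fragments = 1e8    # hypothetical count of usable endogenous fragments

coverage = num_fragments * fragment_len / genome_size
missed = math.exp(-coverage)  # Poisson chance that a given base is never sampled

print(f"mean coverage: {coverage:.2f}x")                  # 1.56x
print(f"fraction of genome never sampled: {missed:.0%}")  # 21%
```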

            Along with the sense of inevitability that follows fast on the heels of any scientific advance, the impact of the Neanderthal Genome Project’s success in the wider culture was also dampened by a troubling inability on the part of the masses to appreciate that not all ideas are created equal—that any particular theory is only as good as the path researchers followed to arrive at it and the methods they used to validate it. Sadly, it’s in all probability the very people who would have been the most thoroughly gobsmacked by the findings coming out of the Max Planck Institute whose amazement switches are most susceptible to hijacking at the hands of the charlatans and ratings whores behind shows like Ancient Aliens. More serious than the cheap fictions masquerading as science that abound in pop culture, though, is a school of thought in academia that not only fails to grasp, but outright denies, the value of methodological rigor, charging that the methods themselves are mere vessels for the dissemination of encrypted social and political prejudices.

Such thinking can’t survive even the most casual encounter with the realities of how science is conducted. Pääbo, for instance, describes his team’s frustration whenever rival researchers published findings based on protocols that failed to meet the standards they’d developed to rule out contamination from other sources of genetic material. He explains the common “dilemma in science” whereby

doing all the analyses and experiments necessary to tell the complete story leaves you vulnerable to being beaten to the press by those willing to publish a less complete story that nevertheless makes the major point you wanted to make. Even when you publish a better paper, you are seen as mopping up the details after someone who made the real breakthrough. (115)

The more serious challenge for Pääbo, however, was dialing back extravagant expectations on the part of prospective funders against the backdrop of popular notions propagated by the Jurassic Park movie franchise and extraordinary claims from scientists who should’ve known better. He writes,

As we were painstakingly developing methods to detect and eliminate contamination, we were frustrated by flashy publications in Nature and Science whose authors, on the surface of things, were much more successful than we were and whose accomplishments dwarfed the scant products of our cumbersome efforts to retrieve DNA sequences “only” a few tens of thousands of years old. The trend had begun in 1990, when I was still at Berkeley. Scientists at UC Irvine published a DNA sequence from leaves of Magnolia latahensis that had been found in a Miocene deposit in Clarkia, Idaho, and were 17 million years old. This was a breathtaking achievement, seeming to suggest that one could study DNA evolution on a time scale of millions of years, perhaps even going back to the dinosaurs! (56)

            In the tradition of the best scientists, Pääbo didn’t simply retreat to his own projects to await the inevitable retractions and failed replications but instead set out to apply his own more meticulous extraction methods to the fossilized plant material. He writes,

I collected many of these leaves and brought them back with me to Munich. In my new lab, I tried extracting DNA from the leaves and found they contained many long DNA fragments. But I could amplify no plant DNA by PCR. Suspecting that the long DNA was from bacteria, I tried primers for bacterial DNA instead, and was immediately successful. Obviously, bacteria had been growing in the clay. The only reasonable explanation was that the Irvine group, who worked on plant genes and did not use a separate “clean lab” for their ancient work, had amplified some contaminating DNA and thought it came from the fossil leaves. (57)

With the right equipment, it turns out, you can extract and sequence genetic material from pretty much any kind of organic remains, no matter how old. The problem is that sources of contamination are myriad, and whatever DNA you manage to read is almost sure to be from something other than the ancient creature you’re interested in.

            At the time when Pääbo was busy honing his techniques, many scientists thought genetic material from ancient plants and insects might be preserved in the fossilized tree resin known as amber. Sure enough, in the late 80s and early 90s, George Poinar and Raul Cano published a series of articles in which they claimed to have successfully extracted DNA through tiny holes drilled into chunks of amber to reach embedded bugs and leaves. These articles were in fact the inspiration behind Michael Crichton’s description of how the dinosaurs in Jurassic Park were cloned. But Pääbo had doubts about whether these researchers were taking proper precautions to rule out contamination, and no sooner had he heard about their findings than he started trying to find a way to get his hands on some amber specimens. He writes,

The opportunity to find out came in 1994, when Hendrik Poinar joined our lab. Hendrik was a jovial Californian and the son of George Poinar, then a professor at Berkeley and a well-respected expert on amber and the creatures found in it. Hendrik had published some of the amber DNA sequences with Raul Cano, and his father had access to the best amber in the world. Hendrik came to Munich and went to work in our new clean room. But he could not repeat what had been done in San Luis Obispo. In fact, as long as his blank extracts were clean, he got no DNA sequences at all out of the amber—regardless of whether he tried insects or plants. I grew more and more skeptical, and I was in good company. (58)

Those blank extracts were important not just to test for bacteria in the samples but to check for human cells as well. Indeed, one of the special challenges of isolating Neanderthal DNA is that it looks so much like the DNA of the anatomically modern humans handling the samples and the sequencing machines.

A high percentage of the dust that accumulates in houses is made up of our sloughed-off skin cells. And Polymerase Chain Reaction (PCR), the technique Pääbo’s team was using to increase the amount of target DNA, relies on a powerful amplification process that uses rapid heating and cooling to split double-helix strands up the middle so that a polymerase can build a complementary strand along each half, nucleotide by nucleotide, resulting in exponential replication. The result is that each fragment of a genome gets blown up, and it becomes impossible to tell what percentage of the specimen’s DNA it originally represented. Researchers then try to fit the fragments end-to-end based on repeating overlaps until they have an entire strand. If there’s a great deal of similarity between the individual you’re trying to sequence and the individual whose cells have contaminated the sample, you simply have no way to know whether you’re splicing together fragments of each individual’s genome. Much of the early work Pääbo did was with extinct mammals like giant ground sloths, which were easier to disentangle from humans. These early studies were what led to the development of practices like running blank extracts, which would later help his team ensure that their supposed Neanderthal DNA wasn’t really from modern human dust.
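To picture that fitting-together step, here is a deliberately naive sketch of overlap assembly: it greedily merges whichever two fragments share the longest exact end-to-end overlap until a single sequence remains. Real assemblers are vastly more sophisticated, contending with sequencing errors, repeats, and exactly the contamination problem just described; this toy version, run on a made-up nine-letter sequence, only illustrates the basic idea.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the longest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_k, best_i, best_j = 0, 0, 1
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        merged = frags[best_i] + frags[best_j][best_k:]
        frags = [f for n, f in enumerate(frags) if n not in (best_i, best_j)]
        frags.append(merged)
    return frags[0]

# Three overlapping reads drawn from the toy sequence "TTAGGACCA":
print(greedy_assemble(["TTAGGA", "AGGACC", "GACCA"]))  # -> TTAGGACCA
```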

Despite all the publicized claims of million-year-old DNA, Pääbo and his team eventually had to rein in their frustration and stop “playing the PCR police” (61) if they ever wanted to show their techniques could be applied to an ancient species of human. One of the major events in Pääbo’s life that would make this huge accomplishment a reality was the founding of the Max Planck Institute for Evolutionary Anthropology in 1997. As celebrated as the Max Planck Society is today, though, the idea of an institute devoted to scientific anthropology in Germany at the time had to overcome some resistance arising out of fears that history might repeat itself. Pääbo explains,

As do many contemporary German institutions, the MPS had a predecessor before the war. Its name was the Kaiser Wilhelm Society, and it was founded in 1911. The Kaiser Wilhelm Society had built up and supported institutes around eminent scientists such as Otto Hahn, Albert Einstein, Max Planck, and Werner Heisenberg, scientific giants active at a time when Germany was a scientifically dominant nation. That era came to an abrupt end when Hitler rose to power and the Nazis ousted many of the best scientists because they were Jewish. Although formally independent of the government, the Kaiser Wilhelm Society became part of the German war machine—doing, for example, weapons research. This was not surprising. Even worse was that through its Institute for Anthropology, Human Heredity, and Eugenics the Kaiser Wilhelm Society was actively involved in racial science and the crimes that grew out of that. In that institute, based in Berlin, people like Josef Mengele were scientific assistants while performing experiments on inmates at Auschwitz death camp, many of them children. (81-2)

Even without such direct historical connections, many scholars still automatically leap from any mention of anthropology or genetics to dubious efforts to give the imprimatur of science to racial hierarchies and clear the way for atrocities like eugenic culling or sterilizations, even though no scientist in any field would have truck with such ideas and policies after the lessons of the past century.

            Pääbo not only believed that anthropological science could be conducted without repeating the atrocities of the past; he insisted that allowing history to rule real science out of bounds would effectively defeat the purpose of the entire endeavor of establishing an organization for the study of human origins. Called on as a consultant to help steer a course for the institute he was simultaneously being recruited to work for, Pääbo recalls responding to the administrators’ historical concerns,

Perhaps it was easier for me as a non-German born well after the war to have a relaxed attitude toward this. I felt that more than fifty years after the war, Germany could not allow itself to be inhibited in its scientific endeavors by its past crimes. We should neither forget history nor fail to learn from it, but we should also not be afraid to go forward. I think I even said that fifty years after his death, Hitler should not be allowed to dictate what we could or could not do. I stressed that in my opinion any new institute devoted to anthropology should not be a place where one philosophized about human history. It should do empirical science. Scientists who were to work there should collect real hard facts about human history and test their ideas against them. (82-3)

As it turned out, Pääbo wasn’t alone in his convictions, and his vision of what the institute should be and how it should operate came to fruition with the construction of the research facility in Leipzig.

            Faced with Pääbo’s passionate enthusiasm, some may worry that he’s one of those mad scientists we know about from movies and books, willing to push ahead with his obsessions regardless of the moral implications or the societal impacts. But in fact Pääbo goes a long way toward showing that the popular conception of the socially oblivious scientist, who calculates but can’t think and who solves puzzles but is baffled by human emotions, is not just a caricature but a malicious fiction. For instance, even amid the excitement of his team’s discovery that humans reproduced with Neanderthals, Pääbo was keenly aware that his results revealed stark genetic differences between Africans, who have no Neanderthal DNA, and non-Africans, most of whose genomes are between one and four percent Neanderthal. He writes,

When we had come this far in our analyses, I began to worry about what the social implications of our findings might be. Of course, scientists need to communicate the truth to the public, but I feel that they should do so in ways that minimize the chance for it to be misused. This is especially the case when it comes to human history and human genetic variation, when we need to ask ourselves: Do our findings feed into prejudices that exist in society? Can our findings be misrepresented to serve racists’ purposes? Can they be deliberately or unintentionally misused in some other way? (199-200)

In light of the Neanderthals’ own caricature—hunched, brutish, dimwitted—their contribution to non-Africans’ genetic makeup may actually seem like more of a drawback than a basis for any claims of superiority. The trouble would come, however, if some of these genes turned out to confer adaptive advantages that made their persistence in our lineage more likely. There are already some indications, for instance, that Neanderthal-human hybrids had more robust immune responses to certain diseases. And the potential for further discoveries along these lines is limitless. Neanderthal Man explores the personal and political dimensions of a major scientific undertaking, but it’s Pääbo’s remembrances of what it was like to work with the other members of his team that bring us closest to the essence of what science is—or at least what it can be. At several points along the team’s journey, they were faced with a series of setbacks and technical challenges that threatened to sink the entire endeavor. Pääbo describes how, at one critical juncture when things looked especially dire, everyone put their heads together in weekly meetings to try to come up with solutions and assign tasks:

To me, these meetings were absorbing social and intellectual experiences: graduate students and postdocs know that their careers depend on the results they achieve and the papers they publish, so there is always a certain amount of jockeying for opportunity to do the key experiments and to avoid doing those that may serve the group’s aim but will probably not result in prominent authorship on an important publication. I had become used to the idea that budding scientists were largely driven by self-interest, and I recognized that my function was to strike a balance between what was good for someone’s career and what was necessary for a project, weighing individual abilities in this regard. As the Neanderthal crisis loomed over the group, however, I was amazed to see how readily the self-centered dynamic gave way to a more group-centered one. The group was functioning as a unit, with everyone eagerly volunteering for thankless and laborious chores that would advance the project regardless of whether such chores would bring any personal glory. There was a strong sense of common purpose in what all felt was a historic endeavor. I felt we had the perfect team. In my more sentimental moments, I felt love for each and every person around the table. This made the feeling that we’d achieved no progress all the more bitter. (146-7)

Those “more sentimental moments” of Pääbo’s occur quite frequently, and he just as frequently describes his colleagues, and even his rivals, in a way that reveals his fondness and admiration for them. Unlike James Watson, who in The Double Helix, his memoir of how he and Francis Crick discovered the underlying structure of DNA, often comes across as nasty and condescending, Pääbo reveals himself to be bighearted, almost to a fault.

            Alongside the passion and the drive, we see Pääbo again and again pausing to reflect with childlike wonder at the dizzying advancement of technology and the incredible privilege of being able to carry on such a transformative tradition of discovery and human progress. He shows at once the humility of recognizing his own limitations and the restless curiosity that propels him onward in spite of them. He writes,

My twenty-five years in molecular biology had essentially been a continuous technical revolution. I had seen DNA sequencing machines come on the market that rendered into an overnight task the toils that took me days and weeks as a graduate student. I had seen cumbersome cloning of DNA in bacteria be replaced by the PCR, which in hours achieved what had earlier taken weeks or months to do. Perhaps that was what had led me to think that within a year or two we would be able to sequence three thousand times more DNA than what we had presented in the proof-of-principle paper in Nature. Then again, why wouldn’t the technological revolution continue? I had learned over the years that unless a person was very, very smart, breakthroughs were best sought when coupled to big improvements in technologies. But that didn’t mean we were simply prisoners awaiting rescue by the next technical revolution. (143)

Like the other members of his team, and like so many other giants in the history of science, Pääbo demonstrates an important and rare mix of seemingly contradictory traits: a capacity for dogged, often mind-numbing meticulousness and a proclivity toward boundless flights of imagination.

What has been the impact of Pääbo and his team’s accomplishments so far? Their methods have already been applied to the remains of a 400,000-year-old human ancestor, led to the discovery of a completely new species of hominin known as the Denisovans (based on a tiny finger bone), and are helping settle a longstanding debate about the peopling of the Americas. The out-of-Africa hypothesis is, for now, the clear victor over the multiregionalist hypothesis, but of course the single origin theory has become more complicated. Many paleoanthropologists are now talking about what Pääbo calls the “leaky replacement” model (248). Aside from filling in some of the many gaps in the chronology of humankind’s origins and migrations—or rather fitting together more pieces in the vast mosaic of our species’ history—every new genome helps us to triangulate possible functions for specific genes. As Pääbo explains, “The dirty little secret of genomics is that we still know next to nothing about how a genome translates into the particularities of a living and breathing individual” (208). But knowing the particulars of how human genomes differ from chimp genomes, and how both differ from the genomes of Neanderthals, or Denisovans, or any number of living or extinct species of primates, gives us clues about how those differences contribute to making each of us who and what we are. The Neanderthal genome is not an end-point but rather a link in a chain of discoveries. Nonetheless, we owe Svante Pääbo a debt of gratitude for helping us to appreciate all that went into the forging of this particular, particularly extraordinary link. 

Also read: 

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And: 

THE FEMINIST SOCIOBIOLOGIST: AN APPRECIATION OF SARAH BLAFFER HRDY DISGUISED AS A REVIEW OF “MOTHERS AND OTHERS: THE EVOLUTIONARY ORIGINS OF MUTUAL UNDERSTANDING”

And: 

OLIVER SACKS’S GRAPHOPHILIA AND OTHER COMPENSATIONS FOR A LIFE LIVED “ON THE MOVE”


“The World until Yesterday” and the Great Anthropology Divide: Wade Davis’s and James C. Scott’s Bizarre and Dishonest Reviews of Jared Diamond’s Work

The field of anthropology is divided into two rival factions, the postmodernists and the scientists—though the postmodernists like to insist they’re being scientific as well. The divide can be seen in critiques of Jared Diamond’s “The World until Yesterday.”

Cultural anthropology has for some time been divided into two groups. The first attempts to understand cultural variation empirically by incorporating it into theories of human evolution and ecological adaptation. The second merely celebrates cultural diversity, and its members are quick to attack any findings or arguments by those in the first group that can in any way be construed as unflattering to the cultures being studied. (This dichotomy is intended to serve as a useful, and only slight, oversimplification.)

Jared Diamond’s scholarship in anthropology places him squarely in the first group. Yet he manages to thwart many of the assumptions held by those in the second group because he studiously avoids the sins of racism and biological determinism they insist every last member of the first group is guilty of. Rather than seeing his work as an exemplar or as evidence that the field is amenable to scientific investigation, however, members of the second group invent crimes and victims so they can continue insisting there’s something immoral about scientific anthropology (though the second group, oddly enough, claims that designation as well).

            Diamond is not an anthropologist by training, but his Pulitzer Prize-winning book Guns, Germs, and Steel, in which he sets out to explain why some societies became technologically advanced conquerors over the past 10,000 years while others maintained their hunter-gatherer lifestyles, became a classic in the field almost as soon as it was published in 1997. His interest in cultural variation arose in large part out of his experiences traveling through New Guinea, the most culturally diverse region of the planet, to conduct ornithological research. By the time he published his first book about human evolution, The Third Chimpanzee, at age 54, he’d spent more time among people from a more diverse set of cultures than many anthropologists do over their entire careers.

In his latest book, The World until Yesterday: What Can We Learn from Traditional Societies?, Diamond compares the lifestyles of people living in modern industrialized societies with those of people who rely on hunting and gathering or horticultural subsistence strategies. His first aim is simply to highlight the differences, since the way most of us live today is, evolutionarily speaking, a very recent development; his second is to show that certain traditional practices may actually lead to greater well-being, and may thus be advantageous if adopted by those of us living in advanced civilizations.

            Obviously, Diamond’s approach has certain limitations, chief among them that it affords him little space for in-depth explorations of individual cultures. Instead, he attempts to identify general patterns that apply to traditional societies all over the world. What this means in the context of the great divide in anthropology is that no sooner had Diamond set pen to paper than he’d fallen afoul of the most passionately held convictions of the second group, who bristle at any discussion of universal trends in human societies. The anthropologist Wade Davis’s review of The World until Yesterday in The Guardian is extremely helpful for anyone hoping to appreciate the differences between the two camps because it exemplifies nearly all of the features of this type of historical particularism, with one exception: it’s clearly, even gracefully, written. But this isn’t to say Davis is at all straightforward about his own positions, which you have to read between the lines to glean. Situating the commitment to avoid general theories and focus instead on celebrating the details in a historical context, Davis writes,

This ethnographic orientation, distilled in the concept of cultural relativism, was a radical departure, as unique in its way as was Einstein’s theory of relativity in the field of physics. It became the central revelation of modern anthropology. Cultures do not exist in some absolute sense; each is but a model of reality, the consequence of one particular set of intellectual and spiritual choices made, however successfully, many generations before. The goal of the anthropologist is not just to decipher the exotic other, but also to embrace the wonder of distinct and novel cultural possibilities, that we might enrich our understanding of human nature and just possibly liberate ourselves from cultural myopia, the parochial tyranny that has haunted humanity since the birth of memory.

This stance with regard to other cultures sounds viable enough—it even seems admirable. But Davis is saying something more radical than you may think at first glance. He’s claiming that cultural differences can have no explanations because they arise out of “intellectual and spiritual choices.” It must be pointed out as well that he’s profoundly confused about how relativity in physics relates to—or doesn’t relate to—cultural relativity in anthropology. Einstein discovered that time is relative to an observer’s velocity, while the speed of light remains constant for everyone, so the faster one travels, the more slowly time advances. Since this rule applies the same everywhere in the universe, the theory actually works much better as an analogy for the types of generalization Diamond tries to discover than it does for the idea that no such generalizations can be discovered. Cultural relativism is not a “revelation” about whether cultures can be said to exist in any absolute sense; it’s a principle that enjoins us to try to understand other cultures on their own terms, not as deviations from our own. Diamond appreciates this principle—he just doesn’t take it to as great an extreme as Davis and the other anthropologists in his camp.

            The idea that cultures don’t exist in any absolute sense implies that comparing one culture to another won’t result in any meaningful or valid insights. But this isn’t a finding or a discovery, as Davis suggests; it’s an a priori conviction. For anthropologists in Davis’s camp, as soon as you start looking outside of a particular culture for an explanation of how it became what it is, you’re no longer looking to understand that culture on its own terms; you’re instead imposing outside ideas and outside values on it. So the simple act of trying to think about variation in a scientific way automatically makes you guilty of a subtle form of colonization. Davis writes,

The very premise of Guns, Germs, and Steel is that a hierarchy of progress exists in the realm of culture, with measures of success that are exclusively material and technological; the fascinating intellectual challenge is to determine just why the west ended up on top. In the posing of this question, Diamond evokes 19th-century thinking that modern anthropology fundamentally rejects. The triumph of secular materialism may be the conceit of modernity, but it does very little to unveil the essence of culture or to account for its diversity and complexity.

For Davis, comparison automatically implies assignment of relative values. But, if we agree that two things can be different without one being superior, we must conclude that Davis is simply being dishonest, because you don’t have to read beyond the Prelude to Guns, Germs, and Steel to find Diamond’s explicit disavowal of this premise that supposedly underlies the entire book:

…don’t words such as “civilization,” and phrases such as “rise of civilization,” convey the false impression that civilization is good, tribal hunter-gatherers are miserable, and history for the past 13,000 years has involved progress toward greater human happiness? In fact, I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in happiness. My own impression, from having divided my life between United States cities and New Guinea villages, is that the so-called blessings of civilization are mixed. For example, compared with hunter-gatherers, citizens of modern industrialized states enjoy better medical care, lower risk of death by homicide, and a longer life span, but receive much less social support from friendships and extended families. My motive for investigating these geographic differences in human societies is not to celebrate one type of society over another but simply to understand what happened in history. (18)

            For Davis and those sharing his postmodern ideology, this type of dishonesty is acceptable because they believe the political ends of protecting indigenous peoples from exploitation justify their deceitful means. In other words, they’re placing their political goals before their scholarly or scientific ones. Davis argues that the only viable course is to let people from various cultures speak for themselves, since facts and theories in the wrong hands will inevitably lubricate the already slippery slope to colonialism and exploitation. Even Diamond’s theories about environmental influences, in this light, can be dangerous. Davis writes,

In accounting for their simple material culture, their failure to develop writing or agriculture, he laudably rejects notions of race, noting that there is no correlation between intelligence and technological prowess. Yet in seeking ecological and climatic explanations for the development of their way of life, he is as certain of their essential primitiveness as were the early European settlers who remained unconvinced that Aborigines were human beings. The thought that the hundreds of distinct tribes of Australia might simply represent different ways of being, embodying the consequences of unique sets of intellectual and spiritual choices, does not seem to have occurred to him.

Davis is rather deviously suggesting a kinship between Diamond and the evil colonialists of yore, but the connection rests on a non sequitur: that positing environmental explanations of cultural differences necessarily implies primitiveness on the part of the “lesser” culture.

Davis doesn’t explicitly say anywhere in his review that all scientific explanations are colonialist, but once you rule out biological, cognitive, environmental, and climatic theories, well, there’s not much left. Davis’s rival explanation, such as it is, posits a series of collective choices made over the course of history, which in a sense must be true. But it merely raises the question of what precisely led the people to make those choices, and this question inevitably brings us back to all those factors Diamond weighs as potential explanations. Davis could have made the point that not every aspect of every culture can be explained by ecological factors—but Diamond never suggests otherwise. Citing the example of Kaulong widow strangling in The World until Yesterday, Diamond writes that there’s no reason to believe the practice is in any way adaptive and admits that it can only be “an independent historical cultural trait that arose for some unknown reason in that particular area of New Britain” (21).

I hope we can all agree that harming or exploiting indigenous peoples in any part of the world is wrong and that we should support the implementation of policies that protect them and their ways of life (as long as those ways don’t involve violations of anyone’s rights as a human—yes, that moral imperative supersedes cultural relativism, fears of colonialism be damned). But the idea that trying to understand cultural variation scientifically always and everywhere undermines the dignity of people living in non-Western cultures is the logical equivalent of insisting that trying to understand variations in people’s personalities through empirical methods is an affront to their agency and freedom to make choices as individuals. If the position of these political-activist anthropologists had any validity, it would undermine the entire field of psychology, and for that matter the social sciences in general. It’s safe to assume that the opacity that typifies these anthropologists’ writing is meant to protect their ideas from obvious objections like this one. 

As well as Davis writes, it’s nonetheless difficult to figure out what his specific problems with Diamond’s book are. At one point he complains, “Traditional societies do not exist to help us tweak our lives as we emulate a few of their cultural practices. They remind us that our way is not the only way.” Fair enough—but then he concludes with a passage that seems startlingly close to a summation of Diamond’s own thesis.

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space… By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. This is a sentiment that Jared Diamond, a deeply humane and committed conservationist, would surely endorse.

On the surface, it seems like Davis isn’t even disagreeing with Diamond. What he’s not saying explicitly, however, but hopes nonetheless that we understand is that sampling or experiencing other cultures is great—but explaining them is evil.

            Davis’s review was published in January of 2013, and its main points have been echoed by several other anti-scientific anthropologists—but perhaps none so eminent as the Yale Professor of Anthropology and Political Science, James C. Scott, whose review, “Crops, Towns, Government,” appeared in the London Review of Books in November. After praising Diamond’s plea for the preservation of vanishing languages, Scott begins complaining about the idea that modern traditional societies offer us any evidence at all about how our ancestors lived. He writes of Diamond,

He imagines he can triangulate his way to the deep past by assuming that contemporary hunter-gatherer societies are ‘our living ancestors’, that they show what we were like before we discovered crops, towns and government. This assumption rests on the indefensible premise that contemporary hunter-gatherer societies are survivals, museum exhibits of the way life was lived for the entirety of human history ‘until yesterday’–preserved in amber for our examination.

Don’t be fooled by those lonely English quotation marks—Diamond never makes this mistake, nor does his argument rest on any such premise. Scott is simply being dishonest. In the first chapter of The World until Yesterday, Diamond explains why he wanted to write about the types of changes that took place in New Guinea between the first contact with Westerners in 1931 and today. “New Guinea is in some respects,” he writes, “a window onto the human world as it was until a mere yesterday, measured against a time scale of the 6,000,000 years of human evolution.” He follows this line with a parenthetical, “(I emphasize ‘in some respects’—of course the New Guinea Highlands of 1931 were not an unchanged world of yesterday)” (5-6). It’s clear he added this line because he was anticipating criticisms like Davis’s and Scott’s.

The confusion arises from Scott’s conflation of the cultures and lifestyles Diamond describes with the individuals representing them. Diamond assumes that factors like population size, social stratification, and level of technological advancement have a profound influence on culture. So, if we want to know about our ancestors, we need to look to societies living in conditions similar to the ones they must’ve lived in with regard to just these types of factors. In another bid to ward off the types of criticism he knows to expect from anthropologists like Scott and Davis, he includes a footnote in his introduction which explains precisely what he’s interested in.

By the terms “traditional” and “small-scale” societies, which I shall use throughout this book, I mean past and present societies living at low population densities in small groups ranging from a few dozen to a few thousand people, subsisting by hunting-gathering or by farming or herding, and transformed to a limited degree by contact with large, Westernized, industrial societies. In reality, all such traditional societies still existing today have been at least partly modified by contact, and could alternatively be described as “transitional” rather than “traditional” societies, but they often still retain many features and social processes of the small societies of the past. I contrast traditional small-scale societies with “Westernized” societies, by which I mean the large modern industrial societies run by state governments, familiar to readers of this book as the societies in which most of my readers now live. They are termed “Westernized” because important features of those societies (such as the Industrial Revolution and public health) arose first in Western Europe in the 1700s and 1800s, and spread from there overseas to many other countries. (6)

Scott goes on to take Diamond to task for suggesting that traditional societies are more violent than modern industrialized societies. This is perhaps the most incendiary point of disagreement between the factions on either side of the anthropology divide. The political activists worry that if anthropologists claim indigenous peoples are more violent outsiders will take it as justification to pacify them, which has historically meant armed invasion and displacement. Since the stakes are so high, Scott has no compunctions about misrepresenting Diamond’s arguments. “There is, contra Diamond,” he writes, “a strong case that might be made for the relative non-violence and physical well-being of contemporary hunters and gatherers when compared with the early agrarian states.” 

Well, no, not contra Diamond, who only compares traditional societies to modern Westernized states, like the ones his readers live in, not early agrarian ones. Scott is referring to Diamond's theories about the initial transition to states, claiming that interstate violence negates the benefits of any pacifying central authority. But it may still be better to live under the threat of infrequent state warfare than of much more frequent ambushes or retaliatory attacks by nearby tribes. Scott also suggests that records of high rates of enslavement in early states somehow undermine the case for more homicide in traditional societies, but again Diamond doesn’t discuss early states. Diamond would probably agree that slavery, in the context of his theories, is an interesting topic, but it's hardly the fatal flaw in his ideas Scott makes it out to be.

The misrepresentations extend beyond Diamond’s arguments to encompass the evidence he builds them on. Scott insists it’s all anecdotal, pseudoscientific, and extremely limited in scope. His biggest mistake here is to pull Steven Pinker into the argument. Pinker’s name alone may tar Diamond’s book in the eyes of anthropologists who share Scott’s ideology, but for anyone else, especially anyone who has actually read Pinker’s work, it lends further credence to Diamond’s case. (Pinker has actually done the math on whether your chances of dying a violent death are better or worse in different types of society.) Scott writes,

Having chosen some rather bellicose societies (the Dani, the Yanomamo) as illustrations, and larded his account with anecdotal evidence from informants, he reaches the same conclusion as Steven Pinker in The Better Angels of Our Nature: we know, on the basis of certain contemporary hunter-gatherers, that our ancestors were violent and homicidal and that they have only recently (very recently in Pinker’s account) been pacified and civilised by the state. Life without the state is nasty, brutish and short.

In reality, both Diamond and Pinker rely on evidence from a herculean variety of sources going well beyond contemporary ethnographies. To cite just one example Scott neglects to mention, an article by Samuel Bowles published in the journal Science in 2009 examines the rates of death by violence at several prehistoric sites and shows that they’re startlingly similar to those found among modern hunter-gatherers. Insofar as Scott even mentions archeological evidence, it's merely to insist on its worthlessness. Anyone who reads The World until Yesterday after reading Scott’s review will be astonished by how nuanced Diamond’s section on violence actually is. Taking up almost a hundred pages, it is far more insightful and better supported than the essay that purports to undermine it. The section also shows, contra Scott, that Diamond is well aware of all the difficulties and dangers of trying to arrive at conclusions based on any one line of evidence—which is precisely why he follows as many lines as are available to him.

However, even if we accept that traditional societies really are more violent, it could still be the case that tribal conflicts are caused, or at least intensified, through contact with large-scale societies. In order to make this argument, though, political-activist anthropologists must shift their position from claiming that no evidence of violence exists to claiming that the evidence is meaningless or misleading. Scott writes,

No matter how one defines violence and warfare in existing hunter-gatherer societies, the greater part of it by far can be shown to be an effect of the perils and opportunities represented by a world of states. A great deal of the warfare among the Yanomamo was, in this sense, initiated to monopolise key commodities on the trade routes to commercial outlets (see, for example, R. Brian Ferguson’s Yanomami Warfare: A Political History, a strong antidote to the pseudo-scientific account of Napoleon Chagnon on which Diamond relies heavily).

It’s true that Ferguson puts forth a rival theory for warfare among the Yąnomamö—and the political-activist anthropologists hold him up as a hero for doing so. (At least one Yąnomamö man insisted, in response to Chagnon’s badgering questions about why they fought so much, that it had nothing to do with commodities—they raided other villages for women.) But Ferguson’s work hardly settles the debate. Why, for instance, do the same patterns of violence appear in traditional societies all over the world, regardless of which state societies they’re supposedly in contact with? And state governments don’t just influence violence in an upward direction. As Diamond points out, “State governments routinely adopt a conscious policy of ending traditional warfare: for example, the first goal of 20th-Century Australian patrol officers in the Territory of Papua New Guinea, on entering a new area, was to stop warfare and cannibalism” (133-4).

What is the proper moral stance anthropologists should take with regard to people living in traditional societies? Should they make it their priority to report the findings of their inquiries honestly? Or should they prioritize their role as advocates for indigenous people’s rights? These are fair questions—and they take on a great deal of added gravity when you consider the history, not to mention the ongoing examples, of how indigenous peoples have suffered at the hands of peoples from Western societies. The answers hinge on how much influence anthropologists currently have on policies that impact traditional societies and on whether science, or Western culture in general, is by its very nature somehow harmful to indigenous peoples. Scott’s and Davis’s positions on both of these issues are clear. Scott writes,

Contemporary hunter-gatherer life can tell us a great deal about the world of states and empires but it can tell us nothing at all about our prehistory. We have virtually no credible evidence about the world until yesterday and, until we do, the only defensible intellectual position is to shut up.

Scott’s argument raises two further questions: when and from where can we count on the “credible evidence” to start rolling in? His “only defensible intellectual position” isn’t that we should reserve judgment or hold off trying to arrive at explanations; it’s that we shouldn’t bother trying to judge the merits of the evidence and that any attempts at explanation are hopeless. This isn’t an intellectual position at all—it’s an obvious endorsement of anti-intellectualism. What Scott really means is that he believes making questions about our hunter-gatherer ancestors off-limits is the only morally defensible position.

            It’s easy to conjure up mental images of the horrors inflicted on native peoples by Western explorers and colonial institutions. But framing the history of encounters between peoples with varying levels of technological advancement as one long Manichean tragedy of evil imperialists having their rapacious and murderous way with perfectly innocent noble savages risks trivializing important elements of both types of culture. Traditional societies aren’t peaceful utopias. Western societies and Western governments aren’t mere engines of oppression. Most importantly, while it may be true that science can be—and sometimes is—co-opted to serve oppressive or exploitative ends, there’s nothing inherently harmful or immoral about science, which can just as well be used to counter arguments for the mistreatment of one group of people by another. To anthropologists like Davis and Scott, human behavior is something to stand in spiritual awe of, indigenous societies something to experience religious guilt about, and in any case not anything to profane with dirty, mechanistic explanations. But, for all their declamations about the evils of thinking that any particular culture can in any sense be said to be inferior to another, they have a pretty dim view of our own.

            It may be simple pride that makes it hard for Scott to accept that gold miners in Brazil weren’t sitting around waiting for some prominent anthropologist at the University of Michigan, or UCLA, or Yale, to publish an article in Science about Yąnomamö violence to give them proper justification to use their superior weapons to displace the people occupying prime locations. The sad fact is, if the motivation to exploit indigenous peoples is strong enough, and if the moral and political opposition isn’t sufficient, justifications will be found regardless of which anthropologist decides to publish on which topics. But the crucial point Scott misses is that our moral and political opposition cannot be founded on dishonest representations or willful blindness regarding the behaviors, good or bad, of the people we would protect. To understand why this is so, and because Scott embarrassed himself with his childishness, embarrassed the London Review of Books, which failed to properly fact-check his article, and did a disservice to the discipline of anthropology by attempting to shout down an honest and humane scholar he disagrees with, it’s only fitting that we turn to a passage in The World until Yesterday that Scott should have paid more attention to. “I sympathize with scholars outraged by the mistreatment of indigenous peoples,” Diamond writes,

But denying the reality of traditional warfare because of political misuse of its reality is a bad strategy, for the same reason that denying any other reality for any other laudable political goal is a bad strategy. The reason not to mistreat indigenous people is not that they are falsely accused of being warlike, but that it’s unjust to mistreat them. The facts about traditional warfare, just like the facts about any other controversial phenomenon that can be observed and studied, are likely eventually to come out. When they do come out, if scholars have been denying traditional warfare’s reality for laudable political reasons, the discovery of the facts will undermine the laudable political goals. The rights of indigenous people should be asserted on moral grounds, not by making untrue claims susceptible to refutation. (153-4)

Also read:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality

Is violence really declining? How can that be true? What could be causing it? Why are so many of us convinced the world is going to hell in a hand basket? Steven Pinker attempts to answer these questions in his magnificent and mind-blowing book.


Steven Pinker is one of the few scientists who can write a really long book and still expect a significant number of people to read it. But I have a feeling many who might be vaguely intrigued by the buzz surrounding his 2011 book The Better Angels of Our Nature: Why Violence Has Declined wonder why he had to make it nearly seven hundred outsized pages long. Many curious folk likely also wonder why a linguist who proselytizes for psychological theories derived from evolutionary or Darwinian accounts of human nature would write a doorstop drawing on historical and cultural data to describe the downward trajectories of rates of the worst societal woes. The message that violence of pretty much every variety is at unprecedentedly low rates comes as quite a shock, as it runs counter to our intuitive, news-fueled sense of being on a crash course for Armageddon. So part of the reason behind the book’s heft is that Pinker has to bolster his case with lots of evidence to get us to rethink our views. But flipping through the book you find that somewhere between a third and a half of its mass is devoted, not to evidence of the decline, but to answering the questions of why the trend has occurred and why it gives every indication of continuing into the foreseeable future. So is this a book about how evolution has made us violent or about how culture is making us peaceful?

The first thing that needs to be said about Better Angels is that you should read it. Despite its girth, it’s at no point the least bit cumbersome to read, and at many points it’s so fascinating that, weighty as it is, you’ll have a hard time putting it down. Pinker has mastered a prose style that’s simple and direct to the point of feeling casual without ever wanting for sophistication. You can also rest assured that what you’re reading is timely and important because it explores aspects of history and social evolution that impact pretty much everyone in the world but that have gone ignored—if not censoriously denied—by most of the eminences contributing to the zeitgeist since the decades following the last world war.

            Still, I suspect many people who take the plunge into the first hundred or so pages are going to feel a bit disoriented as they try to figure out what the real purpose of the book is, and this may cause them to falter in their resolve to finish reading. The problem is that the resistance the book anticipates and responds to, the resistance that swells it to such a prodigious page count, doesn’t come from news media or the blinkered celebrities in the carnivals of sanctimonious imbecility that are political talk shows. It comes from Pinker’s fellow academics. The overall point of Better Angels remains obscure owing to some deliberate caginess on the author’s part when it comes to identifying the true targets of his arguments.

            This evasiveness doesn’t make the book difficult to read, but a quality of diffuseness to the theoretical sections, a multitude of strands left dangling, does at points make you doubt whether Pinker had a clear purpose in writing, which makes you doubt your own purpose in reading. With just a little tying together of those strands, however, you start to see that while on the surface he’s merely righting the misperception that over the course of history our species has been either consistently or increasingly violent, what he’s really after is something different, something bigger. He’s trying to instigate, or at least play a part in instigating, a revolution—or more precisely a renaissance—in the way scholars and intellectuals think not just about human nature but about the most promising ways to improve the lot of human societies.

The longstanding complaint about evolutionary explanations of human behavior is that, by focusing on our biology as opposed to our supposedly limitless capacity for learning, they imply a certain level of fixity to our nature, and this fixedness is thought to further imply a limit to what political reforms can accomplish. The reasoning goes, if the explanation for the way things are is to be found in our biology, then, unless our biology changes, the way things are is the way they’re going to remain. Since biological change occurs at the glacial pace of natural selection, we’re pretty much stuck with the nature we have.

            Historically, many scholars have made matters worse for evolutionary scientists today by applying ostensibly Darwinian reasoning to what seemed at the time obvious biological differences between human races in intelligence and capacity for acquiring the more civilized graces, making no secret of their conviction that the differences justified colonial expansion and other forms of oppressive rule. As a result, evolutionary psychologists of the past couple of decades have routinely had to defend themselves against charges that they’re secretly trying to advance some reactionary (or even genocidal) agenda. Considering Pinker’s choice of topic in Better Angels in light of this type of criticism, we can start to get a sense of what he’s up to—and why his efforts are discombobulating.

If you’ve spent any time on a university campus in the past forty years, particularly if it was in a department of the humanities, then you have been inculcated with an ideology that was once labeled postmodernism but that eventually became so entrenched in academia, and in intellectual culture more broadly, that it no longer requires a label. (If you took a class with the word "studies" in the title, then you got a direct shot to the brain.) Many younger scholars actually deny any espousal of it—“I’m not a pomo!”—with reference to a passé version marked by nonsensical tangles of meaningless jargon and the conviction that knowledge of the real world is impossible because “the real world” is merely a collective delusion or social construction put in place to perpetuate societal power structures. The disavowals notwithstanding, the essence of the ideology persists in an inescapable but unremarked obsession with those same power structures—the binaries of men and women, whites and blacks, rich and poor, the West and the rest—and the abiding assumption that texts and other forms of media must be assessed not just according to their truth content, aesthetic virtue, or entertainment value, but also with regard to what we imagine to be their political implications. Indeed, those imagined political implications are often taken as clear indicators of the author’s true purpose in writing, which we must sniff out—through a process called “deconstruction,” or its anemic offspring “rhetorical analysis”—lest we complacently succumb to the subtle persuasion.

In the late nineteenth and early twentieth centuries, faith in what we now call modernism inspired intellectuals to assume that the civilizations of Western Europe and the United States were on a steady march of progress toward improved lives for all their own inhabitants as well as the world beyond their borders. Democracy had brought about a new age of government in which rulers respected the rights and freedom of citizens. Medicine was helping ever more people live ever longer lives. And machines were transforming everything from how people labored to how they communicated with friends and loved ones. Everyone recognized that the driving force behind this progress was the juggernaut of scientific discovery. But jump ahead a hundred years to the early twenty-first century and you see a quite different attitude toward modernity. As Pinker explains in the closing chapter of Better Angels,

A loathing of modernity is one of the great constants of contemporary social criticism. Whether the nostalgia is for small-town intimacy, ecological sustainability, communitarian solidarity, family values, religious faith, primitive communism, or harmony with the rhythms of nature, everyone longs to turn back the clock. What has technology given us, they say, but alienation, despoliation, social pathology, the loss of meaning, and a consumer culture that is destroying the planet to give us McMansions, SUVs, and reality television? (692)

The social pathology here consists of all the inequities and injustices suffered by the people on the losing side of those binaries all us closet pomos go about obsessing over. Then of course there’s industrial-scale war and all the other types of modern violence. With terrorism, the War on Terror, the civil war in Syria, the Israel-Palestine conflict, genocides in the Sudan, Kosovo, and Rwanda, and the marauding bands of drugged-out gang rapists in the Congo, it seems safe to assume that science and democracy and capitalism have contributed to the construction of an unsafe global system with some fatal, even catastrophic design flaws. And that’s before we consider the two world wars and the Holocaust. So where the hell is this decline Pinker refers to in his title?

            One way to think about the strain of postmodernism or anti-modernism with the most currency today (and if you’re reading this essay you can just assume your views have been influenced by it) is that it places morality and politics—identity politics in particular—atop a hierarchy of guiding standards above science and individual rights. So, for instance, concerns over the possibility that a negative image of Amazonian tribespeople might encourage their further exploitation trump objective reporting on their culture by anthropologists, even though there’s no evidence to support those concerns. And evidence that the disproportionate number of men in STEM fields reflects average differences between men and women in lifestyle preferences and career interests is ignored out of deference to a political ideal of perfect parity. The urge to grant moral and political ideals veto power over science is justified in part by all the oppression and injustice that abounds in modern civilizations—sexism, racism, economic exploitation—but most of all it’s rationalized with reference to the violence thought to follow in the wake of any movement toward modernity. Pinker writes,

“The twentieth century was the bloodiest in history” is a cliché that has been used to indict a vast range of demons, including atheism, Darwin, government, science, capitalism, communism, the ideal of progress, and the male gender. But is it true? The claim is rarely backed up by numbers from any century other than the 20th, or by any mention of the hemoclysms of centuries past. (193)

He gives the question even more gravity when he reports that all those other areas in which modernity is alleged to be such a colossal failure tend to improve in the absence of violence. “Across time and space,” he writes in the preface, “the more peaceable societies also tend to be richer, healthier, better educated, better governed, more respectful of their women, and more likely to engage in trade” (xxiii). So the question isn’t just about what the story with violence is; it’s about whether science, liberal democracy, and capitalism are the disastrous blunders we’ve learned to think of them as or whether they still just might hold some promise for a better world.

*******

            It’s in about the third chapter of Better Angels that you start to get the sense that Pinker’s style of thinking is, well, way out of style. He seems to be marching to the beat not of his own drummer but of some drummer from the nineteenth century. In the chapter previous, he drew a line connecting the violence of chimpanzees to that in what he calls non-state societies, and the images he’s left you with are savage indeed. Now he’s bringing in the philosopher Thomas Hobbes’s idea of a government Leviathan that once established immediately works to curb the violence that characterizes us humans in states of nature and anarchy. According to sociologist Norbert Elias’s 1969 book, The Civilizing Process, a work whose thesis plays a starring role throughout Better Angels, the consolidation of a Leviathan in England set in motion a trend toward pacification, beginning with the aristocracy no less, before spreading down to the lower ranks and radiating out to the countries of continental Europe and onward thence to other parts of the world. You can measure your feelings of unease in response to Pinker’s civilizing scenario as a proxy for how thoroughly steeped you are in postmodernism.

            The two factors missing from his account of the civilizing pacification of Europe that distinguish it from the self-congratulatory and self-exculpatory sagas of centuries past are the innate superiority of the paler stock and the special mission of conquest and conversion commissioned by a Christian god. In a later chapter, Pinker violates the contemporary taboo against discussing—or even thinking about—the potential role of average group (racial) differences in a propensity toward violence, but he concludes the case for any such differences is unconvincing: “while recent biological evolution may, in theory, have tweaked our inclinations toward violence and nonviolence, we have no good evidence that it actually has” (621). The conclusion that the Civilizing Process can’t be contingent on congenital characteristics follows from the observation of how readily individuals from far-flung regions acquire local habits of self-restraint and fellow-feeling when they’re raised in modernized societies. As for religion, Pinker includes it in a category of factors that are “Important but Inconsistent” with regard to the trend toward peace, dismissing the idea that atheism leads to genocide by pointing out that “Fascism happily coexisted with Catholicism in Spain, Italy, Portugal, and Croatia, and though Hitler had little use for Christianity, he was by no means an atheist, and professed that he was carrying out a divine plan.” Though he cites several examples of atrocities incited by religious fervor, he does credit “particular religious movements at particular times in history” with successfully working against violence (677).

            Despite his penchant for blithely trampling on the taboos of the liberal intelligentsia, Pinker refuses to cooperate with our reflex to pigeonhole him with imperialists or far-right traditionalists past or present. He continually holds up to ridicule the idea that violence has any redeeming effects. In a section on the connection between increasing peacefulness and rising intelligence, he suggests that our violence-tolerant “recent ancestors” can rightly be considered “morally retarded” (658). He singles out George W. Bush as an unfortunate and contemptible counterexample to the trend toward more complex political rhetoric among our leaders. And if either gender comes out of Better Angels looking less than virtuous, it ain’t the distaff one. Pinker is difficult to categorize politically because he’s a scientist through and through. What he’s after are reasoned arguments supported by properly weighed evidence.

But there is something going on in Better Angels beyond a mere accounting for the ongoing decline in violence that most of us are completely oblivious to being the beneficiaries of. For one, there’s a challenge to the taboo status of topics like genetic differences between groups, or differences between individuals in IQ, or differences between genders. And there’s an implicit challenge as well to the complementary premises he took on more directly in his earlier book The Blank Slate: that biological theories of human nature always lead to oppressive politics and that theories of the infinite malleability of human behavior always lead to progress (communism relies on a blank slate theory, and it inspired guys like Stalin, Mao, and Pol Pot to murder untold millions). But the most interesting and important task Pinker has set for himself with Better Angels is a restoration of the Enlightenment, with its twin pillars of science and individual rights, to its rightful place atop the hierarchy of our most cherished guiding principles, the position we as a society misguidedly allowed to be usurped by postmodernism, with its own dual pillars of relativism and identity politics.

  But, while the book succeeds handily in undermining the moral case against modernism, it does so largely by stealth, with only a few explicit references to the ideologies whose advocates have dogged Pinker and his fellow evolutionary psychologists for decades. Instead, he explores how our moral intuitions and political ideals often inspire us to make profoundly irrational arguments for positions that rational scrutiny reveals to be quite immoral, even murderous. As one illustration of how good causes can be taken to silly, but as yet harmless, extremes, he gives the example of how “violence against children has been defined down to dodgeball” (415) in gym classes all over the US, writing that

The prohibition against dodgeball represents the overshooting of yet another successful campaign against violence, the century-long movement to prevent the abuse and neglect of children. It reminds us of how a civilizing offensive can leave a culture with a legacy of puzzling customs, peccadilloes, and taboos. The code of etiquette bequeathed to us by this and other Rights Revolutions is pervasive enough to have acquired a name. We call it political correctness. (381)

Such “civilizing offensives” are deliberately undertaken counterparts to the fortuitously occurring Civilizing Process Elias proposed to explain the jagged downward slope in graphs of relative rates of violence beginning in the Middle Ages in Europe. The original change Elias describes came about as a result of rulers consolidating their territories and acquiring greater authority. As Pinker explains,

Once Leviathan was in charge, the rules of the game changed. A man’s ticket to fortune was no longer being the baddest knight in the area but making a pilgrimage to the king’s court and currying favor with him and his entourage. The court, basically a government bureaucracy, had no use for hotheads and loose cannons, but sought responsible custodians to run its provinces. The nobles had to change their marketing. They had to cultivate their manners, so as not to offend the king’s minions, and their empathy, to understand what they wanted. The manners appropriate for the court came to be called “courtly” manners or “courtesy.” (75)

And this higher premium on manners and self-presentation among the nobles would lead to a cascade of societal changes.

Elias first lighted on his theory of the Civilizing Process as he was reading some of the etiquette guides that survived from that era. It’s striking to us moderns to see that knights of yore had to be told not to dispose of their snot by shooting it into their host’s tablecloth, but that simply shows how thoroughly people today internalize these rules. As Elias explains, they’ve become second nature to us. Of course, we still have to learn them as children. Pinker prefaces his discussion of Elias’s theory with a recollection of his bafflement at why it was so important for him as a child to abstain from using his knife as a backstop to help him scoop food off his plate with a fork. Table manners, he concludes, reside on the far end of a continuum of self-restraint at the opposite end of which are once-common practices like cutting off the nose of a dining partner who insults you. Likewise, protecting children from the perils of flying rubber balls is the product of a campaign against the once-common custom of brutalizing them. The centrality of self-control is the common underlying theme: we control our urge to misuse utensils, including their use in attacking our fellow diners, and we control our urge to throw things at our classmates, even if it’s just in sport. The effect of the Civilizing Process in the Middle Ages, Pinker explains, was that “A culture of honor—the readiness to take revenge—gave way to a culture of dignity—the readiness to control one’s emotions” (72). In other words, diplomacy became more important than deterrence.

            What we’re learning here is that even an evolved mind can adjust to changing incentive schemes. Chimpanzees have to control their impulses toward aggression, sexual indulgence, and food consumption in order to survive in hierarchical bands with other chimps, many of whom are bigger, stronger, and better-connected. Much of the violence in chimp populations takes the form of adult males vying for positions in the hierarchy so they can enjoy the perquisites males of lower status must forgo to avoid being brutalized. Lower-ranking males meanwhile bide their time, deferring their gratification until such time as they grow stronger or the alpha grows weaker. In humans, the capacity for impulse-control and the habit of delaying gratification are even more important because we live in even more complex societies. Those capacities can either lie dormant or they can be developed to their full potential depending on exactly how complex the society is in which we come of age. Elias noticed a connection between the move toward more structured bureaucracies, less violence, and an increasing focus on etiquette, and he concluded that self-restraint in the form of adhering to strict codes of comportment was both an advertisement of, and a type of training for, the impulse-control that would make someone a successful bureaucrat.

            Aside from children who can’t fathom why we’d futz with our forks trying to capture recalcitrant peas, we normally take our society’s rules of etiquette for granted, no matter how inconvenient or illogical they are, seldom thinking twice before drawing unflattering conclusions about people who don’t bother adhering to them, the ones for whom they aren’t second nature. And the importance we place on etiquette goes beyond table manners. We judge people according to the discretion with which they dispose of any and all varieties of bodily effluent, as well as the delicacy with which they discuss topics sexual or otherwise basely instinctual. 

            Elias and Pinker’s theory is that, while the particular rules are largely arbitrary, the underlying principle of transcending our animal nature through the application of will, motivated by an appreciation of social convention and the sensibilities of fellow community members, is what marked the transition of certain constituencies of our species from a violent non-state existence to a relatively peaceful, civilized lifestyle. To Pinker, the uptick in violence that ensued once the counterculture of the 1960s came into full blossom was no coincidence. The squares may not have been as exciting as the rock stars who sang their anthems to hedonism and the liberating thrill of sticking it to the man. But a society of squares has certain advantages—a lower probability for each of its citizens of getting beaten or killed foremost among them.

            The Civilizing Process, as Elias and Pinker, along with Immanuel Kant, understand it, picks up momentum as levels of peace conducive to increasingly complex forms of trade are achieved. To understand why the move toward markets or “gentle commerce” would lead to decreasing violence, us pomos have to swallow—at least momentarily—our animus for Wall Street and all the corporate fat cats in the top one percent of the wealth distribution. The basic dynamic underlying trade is that one person has access to more of something than they need, but less of something else, while another person has the opposite balance, so a trade benefits them both. It’s a win-win, or a positive-sum game. The hard part for educated liberals is to appreciate that economies work to increase the total wealth; there isn’t a set quantity everyone has to divvy up in a zero-sum game, an exchange in which every gain for one is a loss for another. And Pinker points to another benefit:

Positive-sum games also change the incentives for violence. If you’re trading favors or surpluses with someone, your trading partner suddenly becomes more valuable to you alive than dead. You have an incentive, moreover, to anticipate what he wants, the better to supply it to him in exchange for what you want. Though many intellectuals, following in the footsteps of Saints Augustine and Jerome, hold businesspeople in contempt for their selfishness and greed, in fact a free market puts a premium on empathy. (77)

The Occupy Wall Street crowd will want to jump in here with a lengthy list of examples of businesspeople being unempathetic in the extreme. But Pinker isn’t saying commerce always forces people to be altruistic; it merely encourages them to exercise their capacity for perspective-taking. Discussing the emergence of markets, he writes,

The advances encouraged the division of labor, increased surpluses, and lubricated the machinery of exchange. Life presented people with more positive-sum games and reduced the attractiveness of zero-sum plunder. To take advantage of the opportunities, people had to plan for the future, control their impulses, take other people’s perspectives, and exercise the other social and cognitive skills needed to prosper in social networks. (77)

And these changes, the theory suggests, will tend to make merchants less likely on average to harm anyone. As bad as bankers can be, they’re not out sacking villages.

            Once you have commerce, you also have a need to start keeping records. And once you start dealing with distant partners, it helps to have a mode of communication that travels. As writing moved out of the monasteries, and as technological advances in transportation brought more of the world within reach, ideas and innovations collided to inspire sequential breakthroughs and discoveries. Every advance could be preserved, dispersed, and ratcheted up. Pinker focuses on two relatively brief historical periods that witnessed revolutions in the way we think about violence, and both came in the wake of major advances in the technologies involved in transportation and communication. The first is the Humanitarian Revolution that occurred in the second half of the eighteenth century, and the second covers the Rights Revolutions in the second half of the twentieth. The Civilizing Process and gentle commerce weren’t sufficient to end age-old institutions like slavery and the torture of heretics. But then came the rise of the novel as a form of mass entertainment, and with all the training in perspective-taking readers were undergoing, the hitherto unimagined suffering of slaves, criminals, and swarthy foreigners became intolerably imaginable. People began to agitate and change ensued.

            The Humanitarian Revolution occurred at the tail end of the Age of Reason and is recognized today as part of the period known as the Enlightenment. According to some scholarly scenarios, the Enlightenment, for all its successes like the American Constitution and the abolition of slavery, paved the way for all those allegedly unprecedented horrors in the first half of the twentieth century. Notwithstanding all this ivory tower traducing, the Enlightenment emerged from dormancy after the Second World War and gradually gained momentum, delivering us into a period Pinker calls the New Peace. Just as the original Enlightenment was preceded by increasing cosmopolitanism, improving transportation, and an explosion of literacy, the transformations that brought about the New Peace followed a burst of technological innovation. For Pinker, this is no coincidence. He writes,

If I were to put my money on the single most important exogenous cause of the Rights Revolutions, it would be the technologies that made ideas and people increasingly mobile. The decades of the Rights Revolutions were the decades of the electronics revolutions: television, transistor radios, cable, satellite, long-distance telephones, photocopiers, fax machines, the Internet, cell phones, text messaging, Web video. They were the decades of the interstate highway, high-speed rail, and the jet airplane. They were the decades of the unprecedented growth in higher education and in the endless frontier of scientific research. Less well known is that they were also the decades of an explosion in book publishing. From 1960 to 2000, the annual number of books published in the United States increased almost fivefold. (477)

Violence got slightly worse in the 60s. But the Civil Rights Movement was underway, Women’s Rights were being extended into new territories, and people even began to acknowledge that animals could suffer, arguing that we shouldn’t make them suffer needlessly. Today the push for Gay Rights continues. By 1990, the uptick in violence was over, and so far the move toward peace is looking like an ever greater success. Ironically, though, all the new types of media bringing images from all over the globe into our living rooms and pockets contribute to the sense that violence is worse than ever.

*******

            Three factors, then, brought about a reduction in violence over the course of history: strong government, trade, and communications technology. These factors had the impact they did because they interacted with two of our innate propensities, impulse-control and perspective-taking, giving individuals both the motivation and the wherewithal to develop them to ever greater degrees. It’s difficult to draw a clear delineation between developments that were driven by chance or coincidence and those driven by deliberate efforts to transform societies. But Pinker does credit political movements based on moral principles with having played key roles:

Insofar as violence is immoral, the Rights Revolutions show that a moral way of life often requires a decisive rejection of instinct, culture, religion, and standard practice. In their place is an ethics that is inspired by empathy and reason and stated in the language of rights. We force ourselves into the shoes (or paws) of other sentient beings and consider their interests, starting with their interest in not being hurt or killed, and we ignore superficialities that may catch our eye such as race, ethnicity, gender, age, sexual orientation, and to some extent, species. (475)

Some of the instincts we must reject in order to bring about peace, however, are actually moral instincts.

Pinker is setting up a distinction here between different kinds of morality. The kind based on perspective-taking—which, evidence he presents later suggests, inspires sympathy—and “stated in the language of rights” is the one he credits with transforming the world for the better. Of the idea that superficial differences shouldn’t distract us from our common humanity, he writes,

This conclusion, of course, is the moral vision of the Enlightenment and the strands of humanism and liberalism that have grown out of it. The Rights Revolutions are liberal revolutions. Each has been associated with liberal movements, and each is currently distributed along a gradient that runs, more or less, from Western Europe to the blue American states to the red American states to the democracies of Latin America and Asia and then to the more authoritarian countries, with Africa and most of the Islamic world pulling up the rear. In every case, the movements have left Western cultures with excesses of propriety and taboo that are deservedly ridiculed as political correctness. But the numbers show that the movements have reduced many causes of death and suffering and have made the culture increasingly intolerant of violence in any form. (475-6)

So you’re not allowed to play dodgeball at school or tell off-color jokes at work, but that’s a small price to pay. The most remarkable part of this passage, though, is the gradient he describes; it suggests the most violent regions of the globe are also the ones where people are the most obsessed with morality, with things like Sharia and so-called family values. It also suggests that academic complaints about the evils of Western culture are unfounded and startlingly misguided. As Pinker casually points out in his section on Women’s Rights, “Though the United States and other Western nations are often accused of being misogynistic patriarchies, the rest of the world is immensely worse” (413).

The Better Angels of Our Nature came out about a year before Jonathan Haidt’s The Righteous Mind, but Pinker’s book beats Haidt’s to the punch by identifying a serious flaw in his reasoning. The Righteous Mind explores how liberals and conservatives conceive of morality differently, and Haidt argues that each conception is equally valid, so we should simply work to understand and appreciate opposing political views. It’s not like you’re going to change anyone’s mind anyway, right? But the liberal ideal of resisting certain moral intuitions tends to bring about a rather important change wherever it’s allowed to be realized. Pinker writes that

right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence. And that retraction is precisely the agenda of classical liberalism: a freedom of individuals from tribal and authoritarian force, and a tolerance of personal choices as long as they do not infringe on the autonomy and well-being of others. (637)

Classical liberalism—which Pinker distinguishes from contemporary political liberalism—can even be viewed as an effort to move morality away from the realm of instincts and intuitions into the more abstract domains of law and reason. The perspective-taking at the heart of Enlightenment morality can be said to consist of abstracting yourself from your identifying characteristics and immediate circumstances to imagine being someone else in unfamiliar straits. A man with a job imagines being a woman who can’t get one. A white man on good terms with law enforcement imagines being a black man who gets harassed. This practice of abstracting experiences and distilling individual concerns down to universal principles is the common thread connecting Enlightenment morality to science.

            So it’s probably no coincidence, Pinker argues, that as we’ve gotten more peaceful, people in Europe and the US have been getting better at abstract reasoning as well, a trend that has been going on for as long as researchers have had tests to measure it. Psychologists over the course of the twentieth century have had to renorm IQ tests by a few points every generation (the average is always set to 100) because raw scores on a few subsets of questions have kept going up. The regular rising of scores is known as the Flynn Effect, after psychologist James Flynn, who was one of the first researchers to realize the trend was more than methodological noise. Having posited a possible connection between scientific and moral reasoning, Pinker asks, “Could there be a moral Flynn Effect?” He explains,

We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence. And we have just seen that over the course of the 20th century, people’s reasoning abilities—particularly their ability to set aside immediate experience, detach themselves from a parochial vantage point, and think in abstract terms—were steadily enhanced. (656)

Pinker cites evidence from an array of studies showing that high-IQ people tend to have high moral IQs as well. One of them, an infamous study by psychologist Satoshi Kanazawa based on data from over twenty thousand young adults in the US, demonstrates that exceptionally intelligent people tend to hold a particular set of political views. And just as Pinker finds it necessary to distinguish between two different types of morality, he suggests we also need to distinguish between two different types of liberalism:

Intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself. Intelligence need not correlate with other ideologies that get lumped into contemporary left-of-center political coalitions, such as populism, socialism, political correctness, identity politics, and the Green movement. Indeed, classical liberalism is sometimes congenial to the libertarian and anti-political-correctness factions in today’s right-of-center coalitions. (662)

And Kanazawa’s findings bear this out. It’s not liberalism in general that increases steadily with intelligence, but a particular kind of liberalism, the type focusing more on fairness than on ideology.

*******

Following the chapters devoted to historical change, from the early Middle Ages to the ongoing Rights Revolutions, Pinker includes two chapters on psychology, the first on our “Inner Demons” and the second on our “Better Angels.” Ideology gets some prime real estate in the Demons chapter, because, he writes, “the really big body counts in history pile up” when people believe they’re serving some greater good. “Yet for all that idealism,” he explains, “it’s ideology that drove many of the worst things that people have ever done to each other.” Christianity, Nazism, communism—they all “render opponents of the ideology infinitely evil and hence deserving of infinite punishment” (556). Pinker’s discussion of morality, on the other hand, is more complicated. It begins, oddly enough, in the Demons chapter, but stretches into the Angels one as well. This is how the section on morality in the Angels chapter begins:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest. The human moral sense can excuse any atrocity in the minds of those who commit it, and it furnishes them with motives for acts of violence that bring them no tangible benefit. The torture of heretics and conversos, the burning of witches, the imprisonment of homosexuals, and the honor killing of unchaste sisters and daughters are just a few examples. (622)

The postmodern push to give precedence to moral and political considerations over science, reason, and fairness may seem like a good idea at first. But political ideologies can’t be defended on the grounds of their good intentions—they all have those. And morality has historically caused more harm than good. It’s only the minimalist, liberal morality that has any redemptive promise:

Though the net contribution of the human moral sense to human well-being may well be negative, on those occasions when it is suitably deployed it can claim some monumental advances, including the humanitarian reforms of the Enlightenment and the Rights Revolutions of recent decades. (622)

            One of the problems with ideologies Pinker explores is that they lend themselves too readily to for-us-or-against-us divisions which piggyback on all our tribal instincts, leading to dehumanization of opponents as a step along the path to unrestrained violence. But, we may ask, isn’t the Enlightenment just another ideology? If not, is there some reliable way to distinguish an ideological movement from a “civilizing offensive” or a “Rights Revolution”? Pinker doesn’t answer these questions directly, but it’s in his discussion of the demonic side of morality where Better Angels offers its most profound insights—and it’s also where we start to be able to piece together the larger purpose of the book. He writes,

In The Blank Slate I argued that the modern denial of the dark side of human nature—the doctrine of the Noble Savage—was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo. (488)

Since Pinker figured that what he and his fellow evolutionary psychologists kept running up against was akin to the repulsion people feel against poor table manners or kids winging balls at each other in gym class, he reasoned that he ought to be able to simply explain to the critics that evolutionary psychologists have no intention of justifying, or even encouraging complacency toward, the dark side of human nature. “But I am now convinced,” he writes after more than a decade of trying to explain himself, “that a denial of the human capacity for evil runs even deeper, and may itself be a feature of human nature” (488). That feature, he goes on to explain, makes us feel compelled to label as evil anyone who tries to explain evil scientifically—because evil as a cosmic force beyond the reach of human understanding plays an indispensable role in group identity.

            Pinker began to fully appreciate the nature of the resistance to letting biology into discussions of human harm-doing when he read about the work of psychologist Roy Baumeister exploring the wide discrepancies in accounts of anger-inducing incidents between perpetrators and victims. The first studies looked at responses to minor offenses, but Baumeister went on to present evidence that the pattern, which Pinker labels the “Moralization Gap,” can be scaled up to describe societal attitudes toward historical atrocities. Pinker explains,

The Moralization Gap consists of complementary bargaining tactics in the negotiation for recompense between a victim and a perpetrator. Like opposing counsel in a lawsuit over a tort, the social plaintiff will emphasize the deliberateness, or at least the depraved indifference, of the defendant’s action, together with the pain and suffering the plaintiff endures. The social defendant will emphasize the reasonableness or unavoidability of the action, and will minimize the plaintiff’s pain and suffering. The competing framings shape the negotiations over amends, and also play to the gallery in a competition for their sympathy and for a reputation as a responsible reciprocator. (491)

Another of the Inner Demons Pinker suggests plays a key role in human violence is the drive for dominance, which he explains operates not just at the level of the individual but at that of the group to which he or she belongs. We want our group, however we understand it in the immediate context, to rest comfortably atop a hierarchy of other groups. What happens is that the Moralization Gap gets mingled with this drive to establish individual and group superiority. You see this dynamic playing out even in national conflicts. Pinker points out,

The victims of a conflict are assiduous historians and cultivators of memory. The perpetrators are pragmatists, firmly planted in the present. Ordinarily we tend to think of historical memory as a good thing, but when the events being remembered are lingering wounds that call for redress, it can be a call to violence. (493)

Name a conflict, and with little effort you’ll likely also be able to recall contentions over the historical record associated with it.

            The outcome of the Moralization Gap being taken to the group historical level is what Pinker and Baumeister call the “Myth of Pure Evil.” Harm-doing narratives start to take on religious overtones as what began as a conflict between regular humans pursuing or defending their interests, in ways they probably reasoned were just, transforms into an eternal struggle against inhuman and sadistic agents of chaos. And Pinker has come to realize that it is this Myth of Pure Evil that behavioral scientists ineluctably end up blaspheming:

Baumeister notes that in the attempt to understand harm-doing, the viewpoint of the scientist or scholar overlaps with the viewpoint of the perpetrator. Both take a detached, amoral stance toward the harmful act. Both are contextualizers, always attentive to the complexities of the situation and how they contributed to the causation of the harm. And both believe that the harm is ultimately explicable. (495)

This is why evolutionary psychologists who study violence inspire what Pinker in The Blank Slate called “political paranoia and moral exhibitionism” (106) on the part of us naïve pomos, ravenously eager to showcase our valor by charging once more into the breach against the mythical malevolence. All the while, our impregnable assurance of our own righteousness is born of the conviction that we’re standing up for the oppressed. Pinker writes,

The viewpoint of the moralist, in contrast, is the viewpoint of the victim. The harm is treated with reverence and awe. It continues to evoke sadness and anger long after it was perpetrated. And for all the feeble ratiocination we mortals throw at it, it remains a cosmic mystery, a manifestation of the irreducible and inexplicable existence of evil in the universe. Many chroniclers of the Holocaust consider it immoral even to try to explain it. (495-6)

We simply can’t help inflating the magnitude of the crime in our attempt to convince our ideological opponents of their folly—though what we’re really inflating is our own, and our group’s, glorification—and so we can’t abide anyone puncturing our overblown conception because doing so lends credence to the opposition, making us look a bit foolish in the process for all our exaggerations.

            Reading Better Angels, you get the sense that Pinker experienced some genuine surprise and some real delight in discovering more and more corroboration for the idea that rates of violence have been trending downward in nearly every domain he explored. But things get tricky as you proceed through the pages because many of his arguments take on opposing positions he avoids naming. He seems to have seen the trove of evidence for declining violence as an opportunity to outflank the critics of evolutionary psychology in leftist, postmodern academia (to use a martial metaphor). Instead of calling them out directly, he circles around to chip away at the moral case for their political mission. We see this, for example, in his discussion of rape, which psychologists get into all kinds of trouble for trying to explain. After examining how scientists seem to be taking the perspective of perpetrators, Pinker goes on to write,

The accusation of relativizing evil is particularly likely when the motive the analyst imputes to the perpetrator appears to be venial, like jealousy, status, or retaliation, rather than grandiose, like the persistence of suffering in the world or the perpetuation of race, class, or gender oppression. It is also likely when the analyst ascribes the motive to every human being rather than to a few psychopaths or to the agents of a malignant political system (hence the popularity of the doctrine of the Noble Savage). (496)

In his earlier section on Women’s Rights and the decline of rape, he attributed the difficulty in finding good data on the incidence of the crime, as well as some of the “preposterous” ideas about what motivates it, to the same kind of overextensions of anti-violence campaigns that lead to arbitrary rules about the use of silverware and proscriptions against dodgeball:

Common sense never gets in the way of a sacred custom that has accompanied a decline in violence, and today rape centers unanimously insist that “rape or sexual assault is not an act of sex or lust—it’s about aggression, power, and humiliation, using sex as the weapon. The rapist’s goal is domination.” (To which the journalist Heather MacDonald replies: “The guys who push themselves on women at keggers are after one thing only, and it’s not a reinstatement of the patriarchy.”) (406)

Jumping ahead to Pinker’s discussion of the Moralization Gap, we see that the theory that rape is about power, as opposed to the much more obvious theory that it’s about sex, is an outgrowth of the Myth of Pure Evil, an inflation of the mundane drives that lead some pathetic individuals to commit horrible crimes into eternal cosmic forces, inscrutable and infinitely punishable.

            When feminists impute political motives to rapists, they’re crossing the boundary from Enlightenment morality to the type of moral ideology that inspires dehumanization and violence. The good news is that it’s not difficult to distinguish between the two. From the Enlightenment perspective, rape is indefensibly wrong because it violates the autonomy of the victim—it’s an act of violence perpetrated by one individual against another. From the ideological perspective, every rape must be understood in the context of the historical oppression of women by men; it transcends the individuals involved as a representation of a greater evil. The rape-as-a-political-act theory also comes dangerously close to implying a type of collective guilt, which is a clear violation of individual rights.

Scholars already make the distinction between three different waves of feminism. The first two fall within Pinker’s definition of Rights Revolutions; they encompassed pushes for suffrage, marriage rights, and property rights, and then the rights to equal pay and equal opportunity in the workplace. The third wave is avowedly postmodern, its advocates committed to the ideas that gender is a pure social construct and that suggesting otherwise is an act of oppression. What you come away from Better Angels realizing, even though Pinker doesn’t say it explicitly, is that somewhere between the second and third waves feminists effectively turned against the very ideas and institutions that had been most instrumental in bringing about the historical improvements in women’s lives from the Middle Ages to the turn of the twenty-first century. And so it is with all the other ideologies on the postmodern roster.

Another misguided propaganda tactic that dogged Pinker’s efforts to identify historical trends in violence can likewise be understood as an instance of inflating the severity of crimes on behalf of a moral ideology—and of the taboo against puncturing the bubble or vitiating the purity of evil with evidence and theories of venial motives. As he explains in the preface, “No one has ever recruited activists to a cause by announcing that things are getting better, and bearers of good news are often advised to keep their mouths shut lest they lull people into complacency” (xxii). Here again the objective researcher can’t escape the appearance of trying to minimize the evil, and therefore risks being accused of looking the other way, or even of complicity. But in an earlier section on genocide Pinker provides the quintessential Enlightenment rationale for the clear-eyed scientific approach to studying even the worst atrocities. He writes,

The effort to whittle down the numbers that quantify the misery can seem heartless, especially when the numbers serve as propaganda for raising money and attention. But there is a moral imperative in getting the facts right, and not just to maintain credibility. The discovery that fewer people are dying in wars all over the world can thwart cynicism among compassion-fatigued news readers who might otherwise think that poor countries are irredeemable hellholes. And a better understanding of what drove the numbers down can steer us toward doing things that make people better off rather than congratulating ourselves on how altruistic we are. (320)

This passage can be taken as the underlying argument of the whole book. And it gestures toward some far-reaching ramifications of the idea that exaggerated numbers are a product of the same impulse that causes us to inflate crimes to the status of pure evil.

Could it be that the nearly universal misperception that violence is getting worse all over the world, that we’re doomed to global annihilation, and that everywhere you look is evidence of the breakdown in human decency—could it be that the false impression Pinker set out to correct with Better Angels is itself a manifestation of a natural urge in all of us to seek out evil and aggrandize ourselves by unconsciously overestimating it? Pinker himself never goes so far as to suggest that the mass ignorance of waning violence is a byproduct of an instinct toward self-righteousness. Instead, he writes of the “gloom” about the fate of humanity,

I think it comes from the innumeracy of our journalistic and intellectual culture. The journalist Michael Kinsley recently wrote, “It is a crushing disappointment that Boomers entered adulthood with Americans killing and dying halfway around the world, and now, as Boomers reach retirement and beyond, our country is doing the same damned thing.” This assumes that 5,000 Americans dying is the same damned thing as 58,000 Americans dying, and that a hundred thousand Iraqis being killed is the same damned thing as several million Vietnamese being killed. If we don’t keep an eye on the numbers, the programming policy “If it bleeds it leads” will feed the cognitive shortcut “The more memorable, the more frequent,” and we will end up with what has been called a false sense of insecurity. (296)

Pinker probably has a point, but the self-righteous undertone of Kinsley’s “same damned thing” is unmistakable. He’s effectively saying, I’m such an outstanding moral being the outrageous evilness of the invasion of Iraq is blatantly obvious to me—why isn’t it to everyone else? And that same message seems to underlie most of the statements people make expressing similar sentiments about how the world is going to hell.

            Though Pinker neglects to tie all the strands together, he still manages to suggest that the drive to dominance, ideology, tribal morality, and the Myth of Pure Evil are all facets of the same disastrous flaw in human nature—an instinct for self-righteousness. Progress on the moral front—real progress like fewer deaths, less suffering, and more freedom—comes from something much closer to utilitarian pragmatism than activist idealism. Yet the activist tradition is so thoroughly enmeshed in our university culture that we’re taught to exercise our powers of political righteousness even while engaging in tasks as mundane as reading books and articles. 

            If the decline in violence and the improvement of the general weal in various other areas are attributable to the Enlightenment, then many of the assumptions underlying postmodernism are turned on their heads. If social ills like warfare, racism, sexism, and child abuse exist in cultures untouched by modernism—and they in fact not only exist but tend to be much worse—then science can’t be responsible for creating them; indeed, if they’ve all trended downward with the historical development of all the factors associated with male-dominated western culture, including strong government, market economies, runaway technology, and scientific progress, then postmodernism not only has everything wrong but threatens the progress achieved by the very institutions it depends on, emerged from, and squanders innumerable scholarly careers maligning.

Of course some Enlightenment figures and some scientists do evil things. Of course living even in the most Enlightened of civilizations is no guarantee of safety. But postmodernism is an ideology based on the premise that we ought to discard a solution to our societal woes for not working perfectly and immediately, substituting instead remedies that have historically caused more problems than they solved by orders of magnitude. The argument that there’s a core to the Enlightenment that some of its representatives have been faithless to when they committed atrocities may seem reminiscent of apologies for Christianity based on the fact that Crusaders and Inquisitors weren’t loving their neighbors as Christ enjoined. The difference is that the Enlightenment works—in just a few centuries it’s transformed the world and brought about a reduction in violence no religion has been able to match in millennia. If anything, the big monotheistic religions brought about more violence.

Embracing Enlightenment morality or classical liberalism doesn’t mean we should give up our efforts to make the world a better place. As Pinker describes the transformation he hopes to encourage with Better Angels,

As one becomes aware of the decline of violence, the world begins to look different. The past seems less innocent; the present less sinister. One starts to appreciate the small gifts of coexistence that would have seemed utopian to our ancestors: the interracial family playing in the park, the comedian who lands a zinger on the commander in chief, the countries that quietly back away from a crisis instead of escalating to war. The shift is not toward complacency: we enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to reduce it, and so we should work to reduce the violence that remains in our time. Indeed, it is a recognition of the decline of violence that best affirms that such efforts are worthwhile. (xxvi)

Since our task for the remainder of this century is to extend the reach of science, literacy, and the recognition of universal human rights farther and farther along the Enlightenment gradient until they're able to grant the same increasing likelihood of a long peaceful life to every citizen of every nation of the globe, and since the key to accomplishing this task lies in fomenting future Rights Revolutions while at the same time recognizing, so as to be better equipped to rein in, our drive for dominance as manifested in our more deadly moral instincts, I for one am glad Steven Pinker has the courage to violate so many of the outrageously counterproductive postmodern taboos while having the grace to resist succumbing himself, for the most part, to the temptation of self-righteousness.

Also read:

THE FAKE NEWS CAMPAIGN AGAINST STEVEN PINKER AND ENLIGHTENMENT NOW

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA


Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans de Waal

Frans de Waal’s work is always a joy to read: insightful, surprising, and superbly humane. Unfortunately, in his mostly wonderful book, “The Bonobo and the Atheist,” he trots out a familiar series of straw men to level an attack on modern critics of religion—with whom, had he been more diligent in reading their work, he’d have found much common ground.

            Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt into the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice actually mean by it is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity. It must be noted that many characters even outside of fiction are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, ones who also happen to be practiced authors.

The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the twin traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace.

The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.

De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does, and where it should, come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,

Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)

The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.

            What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps took the kind of delight in violence for its own sake that many people imagine them being disposed to. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.

            Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,

Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)

This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog of course—it would have to be de Waal’s leading character, the bonobo.

            De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble, to a remarkable degree, those of humans experiencing orgasm. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker suggested that the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,

Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)

But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)

            Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the facility de Waal currently directs at Emory University in Atlanta, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4 million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with fangs that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,

What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)

De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggests to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).

            In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,

This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)

What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.

            There’s still a long way to travel, however, from helpful rats and protesting capuchins before you get to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates that are even more closely related to us. Chimps, for instance, have been seen taking inequity aversion an important step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal does acknowledge though that there still remains an important step between these behaviors and human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.

This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)

Moral intuitions can often inspire some behaviors that to people in modern liberal societies seem appallingly immoral. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).

*****

            In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more misguided than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.

I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)

For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.

De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,

and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)

At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,

Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)

We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as a primatologist helped him arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.

             The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it will be offensive to atheists and people of actual faith alike. Here’s one example of the bizarre way he writes about religion:

Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)

He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.

            As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science to determine our morals and values instead of religion, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he thinks he has the psychological key to understanding what he refers to as his “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.

De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,

Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)

The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.

*****

            De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since they represent an attempt to answer, at least in part, the very question of what might take the place of religion in providing guidance in our lives that he poses again and again throughout The Bonobo and the Atheist. De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This is, however, a mischaracterization of Harris’s thesis, though de Waal seems not to realize it. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for all the analogies, however tenuous or tendentious, that can be drawn between it and historical evils. De Waal writes, for instance,

While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)

(Great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is that the “science of morality,” to which de Waal proudly contributes, should be brought to bear on the big moral issues our society faces.

            The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,

It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)

Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.

Harris intended for his book to begin a debate he was prepared to actively participate in. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark, and that it is suspiciously unoriginal:

How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.

And we have to ask further: what alternative source of ethical principles do self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer? In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.

De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them are anywhere near as devastating to his overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:

female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)

The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.

De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting another “utilitarian” for Harris, justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar; he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whim of the majority? It is in precisely this effort to discover—or determine—under which circumstances humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.

            Harris in fact points out that honoring family bonds probably leads to greater well-being on pages seventy-three and seventy-four of The Moral Landscape, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on “the especially low-hanging fruit of conservative Islam” (74). The incoherence of de Waal’s argument (and the carelessness of his research) is on full display here as he first responds to a point about the genital mutilation of young girls by asking, “Isn’t genital mutilation common in the United States, too, where newborn males are circumcised without their consent?” (90). So cutting off the foreskin of a male’s penis is morally equivalent to cutting off a girl’s clitoris? Supposedly, the equivalence implies that there can’t be any reliable way to determine the relative moral status of religious practices. “Could it be that religion and culture interact to the point that there is no universal morality?” Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.

*****

The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly, but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:

Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)

This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?

            De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?

            Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had he taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the theocratic dictator of Iran put a hit out on Hitchens’s friend, the author Salman Rushdie, for writing a novel he deemed blasphemous. Hitchens writes in Hitch-22,

When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)

Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.

            De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work they do for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,

The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)

            Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,

The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)

And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).

Also read:

TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND


Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia

Napoleon Chagnon was targeted by postmodern activists and anthropologists, who trumped up charges against him and hoped to sacrifice his reputation on the altar of social justice. In retrospect, his case looks like an early warning sign of what would come to be called "cancel culture." Fortunately, Chagnon was no pushover, and there were a lot of people who saw through the lies being spread about him. "Noble Savages" is in part a great adventure story and in part his response to the tragic degradation of the field of anthropology as it succumbs to the lures of ideology.

Noble Savages by Napoleon Chagnon

    When Arthur Miller adapted The Crucible, his 1953 play about the Salem witch trials, for the 1996 film version, he enjoyed additional freedom to work with the up-close visual dimensions of the tragedy. In one added scene, the elderly and frail George Jacobs, whom we first see lifting one of his two walking sticks to wave an unsteady greeting to a neighbor, sits before a row of assembled judges as the young Ruth Putnam stands accusing him of assaulting her. The girl, ostensibly shaken from the encounter and frightened lest some further terror ensue, dramatically recounts her ordeal, saying,

He come through my window and then he lay down upon me. I could not take breath. His body crush heavy upon me, and he say in my ear, “Ruth Putnam, I will have your life if you testify against me in court.”

This quote she delivers in a creaky imitation of the old man’s voice. When one of the judges asks Jacobs what he has to say about the charges, he responds with the glaringly obvious objection: “But, your Honor, I must have these sticks to walk with—how may I come through a window?” The problem with this defense, Jacobs comes to discover, is that the judges believe a person can be in one place physically and in another in spirit. This poor tottering old man has no defense against so-called “spectral evidence.” Indeed, as judges in Massachusetts realized the year after Jacobs was hanged, no one really has any defense against spectral evidence. That’s part of the reason why it was deemed inadmissible in their courts, and immediately thereafter convictions for the crime of witchcraft ceased entirely. 

            Many anthropologists point to the low cost of making accusations as a factor in the evolution of moral behavior. People in small societies like the ones our ancestors lived in for millennia, composed of thirty or forty profoundly interdependent individuals, would have had to balance any payoff that might come from immoral deeds against the detrimental effects to their reputations of having those deeds discovered and word of them spread. As the generations turned over and over again, human nature adapted in response to the social enforcement of cooperative norms, and individuals came to experience what we now recognize as our moral emotions—guilt, which is often preëmptive and prohibitive; shame; indignation; and outrage—along with the more positive feelings associated with empathy, compassion, and loyalty.

The legacy of this process of reputational selection persists in our prurient fascination with the misdeeds of others and our frenzied, often sadistic, delectation in the spreading of salacious rumors. What Miller so brilliantly dramatizes in his play is the irony that our compulsion to point fingers, which once created and enforced cohesion in groups of selfless individuals, can in some environments serve as a vehicle for our most viciously selfish and inhuman impulses. This is why it is crucial that any accusation, if we as a society are to take it at all seriously, must provide the accused with some reliable means of acquittal. Charges that can neither be proven nor disproven must be seen as meaningless—and should even be counted as strikes against the reputation of the one who levels them. 

            While this principle runs into serious complications with crimes that are as inherently difficult to prove as they are horrific, a simple rule proscribing any glib application of morally charged labels is a crucial yet all-too-commonly overlooked safeguard against unjust calumny. In this age of viral dissemination, the rapidity with which rumors spread, coupled with the absence of any reliable assurance of the validity of messages bearing on the reputations of our fellow citizens, demands that we deliberately work to establish as cultural norms the holding to account of those who make accusations based on insufficient, misleading, or spectral evidence—and the holding to account as well, to only a somewhat lesser degree, of those who help propagate rumors without doing due diligence in assessing their credibility.

            The commentary attending the publication of anthropologist Napoleon Chagnon's memoir of his research with the Yanomamö tribespeople in Venezuela calls to mind the insidious "Teach the Controversy" PR campaign spearheaded by intelligent design creationists. Coming out against the argument that students should be made aware of competing views on the value of intelligent design inevitably gives the impression of closed-mindedness or dogmatism. But only a handful of actual scientists have any truck with intelligent design, a dressed-up rehashing of the old God-of-the-Gaps argument, which rests on the logical fallacy of appealing to ignorance—and that ignorance, it so happens, is grossly exaggerated.

Teaching the controversy would therefore falsely imply epistemological equivalence between scientific views on evolution and those that are not-so-subtly religious. Likewise, in the wake of allegations against Chagnon about mistreatment of the people whose culture he made a career of studying, many science journalists and many of his fellow anthropologists still seem reluctant to stand up for him because they fear doing so would make them appear insensitive to the rights and concerns of indigenous peoples. Instead, they take refuge in what they hope will appear a balanced position, even though the evidence on which the accusations rested has proven to be entirely spectral.

Chagnon’s Noble Savages: My Life among Two Dangerous Tribes—the Yanomamö and the Anthropologists is destined to be one of those books that garners commentary by legions of outspoken scholars and impassioned activists who never find the time to actually read it. Science writer John Horgan, for instance, has published two blog posts on Chagnon in recent weeks, and neither of them features a single quote from the book. In the first, he boasts of his resistance to bullying, via email, by five prominent sociobiologists who had caught wind of his assignment to review Patrick Tierney’s book Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon and insisted that he condemn the work and discourage anyone from reading it. Against this pressure, Horgan wrote a positive review in which he repeats several horrific accusations that Tierney makes in the book before going on to acknowledge that the author should have worked harder to provide evidence of the wrongdoings he reports on.

But Tierney went on to become an advocate for Indian rights. And his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies.

Horgan probably couldn’t have known at the time (though those five scientists tried to warn him) that giving Tierney credit for prompting debates about Indian rights and ethnographic research methods was a bit like praising Abigail Williams, the original source of accusations of witchcraft in Salem, for sparking discussions about child abuse. But that he stands by his endorsement today, saying,

“I have one major regret concerning my review: I should have noted that Chagnon is a much more subtle theorist of human nature than Tierney and other critics have suggested,” as balanced as that sounds, casts serious doubt on his scholarship, not to mention his judgment.

            What did Tierney falsely accuse Chagnon of? There are over a hundred specific accusations in the book (Chagnon says his friend William Irons flagged 106 [446]), but the most heinous whopper comes in the fifth chapter, titled "Outbreak." In 1968, Chagnon was helping the geneticist James V. Neel collect blood samples from the Yanomamö—in exchange for machetes—so their DNA could be compared with that of people in industrialized societies. While they were in the middle of this project, a measles epidemic broke out. Neel had discovered through earlier research that the Indians lacked immunity to the disease, so the team immediately began trying to reach all of the Yanomamö villages to vaccinate everyone before the contagion did. Most people who knew about the episode considered what the scientists did heroic (and several investigations now support this view). But Tierney, by creating the appearance of pulling together multiple threads of evidence, weaves a much different story, one in which Neel and Chagnon are cast as villains instead of heroes. (The version of the book I'll quote here is somewhat incoherent because it went through revisions in attempts to deal with holes in the evidence that were already emerging pre-publication.)

First, Tierney misinterprets some passages from Neel’s books as implying an espousal of eugenic beliefs about the Indians, namely that by remaining closer to nature and thus subject to ongoing natural selection they retain all-around superior health, including better immunity. Next, Tierney suggests that the vaccine Neel chose, Edmonston B, which is usually administered with a drug called gamma globulin to minimize reactions like fevers, is so similar to the measles virus that in the immune-suppressed Indians it actually ended up causing a suite of symptoms that was indistinguishable from full-blown measles. The implication is clear. Tierney writes,

Chagnon and Neel described an effort to “get ahead” of the measles epidemic by vaccinating a ring around it. As I have reconstructed it, the 1968 outbreak had a single trunk, starting at the Ocamo mission and moving up the Orinoco with the vaccinators. Hundreds of Yanomami died in 1968 on the Ocamo River alone. At the time, over three thousand Yanomami lived on the Ocamo headwaters; today there are fewer than two hundred. (69)

At points throughout the chapter, Tierney seems to be backing off the worst of his accusations; he writes, “Neel had no reason to think Edmonston B could become transmissible. The outbreak took him by surprise.” But even in this scenario Tierney suggests serious wrongdoing: “Still, he wanted to collect data even in the midst of a disaster” (82).

Earlier in the chapter, though, Tierney makes a much more serious charge. Pointing to a time when Chagnon showed up at a Catholic mission after having depleted his stores of gamma globulin and nearly run out of Edmonston B, Tierney suggests the shortage of drugs was part of a deliberate plan. “There were only two possibilities,” he writes,

Either Chagnon entered the field with only forty doses of virus; or he had more than forty doses. If he had more than forty, he deliberately withheld them while measles spread for fifteen days. If he came to the field with only forty doses, it was to collect data on a small sample of Indians who were meant to receive the vaccine without gamma globulin. Ocamo was a good choice because the nuns could look after the sick while Chagnon went on with his demanding work. Dividing villages into two groups, one serving as a control, was common in experiments and also a normal safety precaution in the absence of an outbreak. (60)

Thus Tierney implies that Chagnon was helping Neel test his eugenics theory and in the process became complicit in causing—perhaps deliberately—an epidemic that killed hundreds of people. Tierney claims he isn't sure how much Chagnon knew about the experiment; he concedes at one point that "Chagnon showed genuine concern for the Yanomami," before adding, "At the same time, he moved quickly toward a cover-up" (75).

            Near the end of his "Outbreak" chapter, Tierney reports on a conversation with Mark Papania, a measles expert at the Centers for Disease Control in Atlanta. After running his hypothesis—that Neel and Chagnon caused the epidemic with the Edmonston B vaccine—by Papania, Tierney claims the expert responded, "Sure, it's possible." He goes on to say that while Papania informed him there were no documented cases of the vaccine becoming contagious, he also admitted that no studies of adequate sensitivity had been done. "I guess we didn't look very hard," Tierney has him saying (80). But evolutionary psychologist John Tooby got a much different answer when he called Papania himself. In an article published on Slate—nearly three weeks before Horgan published his review, incidentally—Tooby writes that the epidemiologist had a very different attitude toward the adequacy of past safety tests from the one Tierney reported:

it turns out that researchers who test vaccines for safety have never been able to document, in hundreds of millions of uses, a single case of a live-virus measles vaccine leading to contagious transmission from one human to another—this despite their strenuous efforts to detect such a thing. If attenuated live virus does not jump from person to person, it cannot cause an epidemic. Nor can it be planned to cause an epidemic, as alleged in this case, if it never has caused one before.

Tierney also cites Samuel Katz, the pediatrician who developed Edmonston B, at a few points in the chapter to support his case. But Katz responded to requests from the press to comment on Tierney’s scenario by saying,

the use of Edmonston B vaccine in an attempt to halt an epidemic was a justifiable, proven and valid approach. In no way could it initiate or exacerbate an epidemic. Continued circulation of these charges is not only unwarranted, but truly egregious.

Tooby included a link to Katz’s response, along with a report from science historian Susan Lindee of her investigation of Neel’s documents disproving many of Tierney’s points. It seems Horgan should’ve paid a bit more attention to those emails he was receiving.

Further investigations have shown that pretty much every aspect of Tierney’s characterization of Neel’s beliefs and research agenda was completely wrong. The report from a task force investigation by the American Society of Human Genetics gives a sense of how Tierney, while giving the impression of having conducted meticulous research, was in fact perpetrating fraud. The report states,

Tierney further suggests that Neel, having recognized that the vaccine was the cause of the epidemic, engineered a cover-up. This is based on Tierney’s analysis of audiotapes made at the time. We have reexamined these tapes and provide evidence to show that Tierney created a false impression by juxtaposing three distinct conversations recorded on two separate tapes and in different locations. Finally, Tierney alleges, on the basis of specific taped discussions, that Neel callously and unethically placed the scientific goals of the expedition above the humanitarian need to attend to the sick. This again is shown to be a complete misrepresentation, by examination of the relevant audiotapes as well as evidence from a variety of sources, including members of the 1968 expedition.

This report was published a couple of years after Tierney's book hit the shelves. But sufficient evidence was available, to anyone willing to do due diligence in checking out the credibility of the author and his claims, to warrant suspicion well before then; that the book nevertheless made it onto the shortlist for the National Book Award is indicative of a larger problem.

*******

With the benefit of hindsight and a perspective from outside the debate (though I've been following the sociobiology controversy for a decade and a half, I wasn't aware of Chagnon's longstanding personal battles with other anthropologists until after Tierney's book was published), it seems to me that once Tierney had been caught misrepresenting the evidence in support of such an atrocious accusation, his book should have been removed from the shelves and all his reporting dismissed entirely. Tierney himself should have been made to answer for his offense. But for some reason none of this happened.

The anthropologist Marshall Sahlins, for instance, to whom Chagnon has been a bête noire for decades, brushed off any concern for Tierney’s credibility in his review of Darkness in El Dorado, published a full month after Horgan’s, apparently because he couldn’t resist the opportunity to write about how much he hates his celebrated colleague. Sahlins’s review is titled “Guilty not as Charged,” which is already enough to cast doubt on his capacity for fairness or rationality. Here’s how he sums up the issue of Tierney’s discredited accusation in relation to the rest of the book:

The Kurtzian narrative of how Chagnon achieved the political status of a monster in Amazonia and a hero in academia is truly the heart of Darkness in El Dorado. While some of Tierney’s reporting has come under fire, this is nonetheless a revealing book, with a cautionary message that extends well beyond the field of anthropology. It reads like an allegory of American power and culture since Vietnam.

Sahlins apparently hasn’t read Conrad’s novel Heart of Darkness or he’d know Chagnon is no Kurtz. And Vietnam? The next paragraph goes into more detail about this “allegory,” as if Sahlins’s conscripting of him into service as a symbol of evil somehow establishes his culpability. To get an idea of how much Chagnon actually had to do with Vietnam, we can look at a passage early in Noble Savages about how disconnected from the outside world he was while doing his field work:

I was vaguely aware when I went into the Yanomamö area in late 1964 that the United States had sent several hundred military advisors to South Vietnam to help train the South Vietnamese army. When I returned to Ann Arbor in 1966 the United States had some two hundred thousand combat troops there. (36)

But Sahlins’s review, as bizarre as it is, is important because it’s representative of the types of arguments Chagnon’s fiercest anthropological critics make against his methods, his theories, but mainly against him personally. In another recent comment on how “The Napoleon Chagnon Wars Flare Up Again,” Barbara J. King betrays a disconcerting and unscholarly complacence with quoting other, rival anthropologists’ words as evidence of Chagnon’s own thinking. Alas, King too is weighing in on the flare-up without having read the book, or anything else by the author it seems. And she’s also at pains to appear fair and balanced, even though the sources she cites against Chagnon are neither, nor are they the least bit scientific. Of Sahlins’s review of Darkness in El Dorado, she writes,

The Sahlins essay from 2000 shows how key parts of Chagnon’s argument have been “dismembered” scientifically. In a major paper published in 1988, Sahlins says, Chagnon left out too many relevant factors that bear on Ya̧nomamö males’ reproductive success to allow any convincing case for a genetic underpinning of violence.

It’s a bit sad that King feels it’s okay to post on a site as popular as NPR and quote a criticism of a study she clearly hasn’t read—she could have downloaded the pdf of Chagnon’s landmark paper “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” for free. Did Chagnon claim in the study that it proved violence had a genetic underpinning? It’s difficult to tell what the phrase “genetic underpinning” even means in this context.

To lend further support to Sahlins’s case, King selectively quotes another anthropologist, Jonathan Marks. The lines come from a rant on his blog (I urge you to check it out for yourself if you’re at all suspicious about the aptness of the term rant to describe the post) about a supposed takeover of anthropology by genetic determinism. But King leaves off the really interesting sentence at the end of the remark. Here’s the whole passage explaining why Marks thinks Chagnon is an incompetent scientist:

Let me be clear about my use of the word “incompetent”. His methods for collecting, analyzing and interpreting his data are outside the range of acceptable anthropological practices. Yes, he saw the Yanomamo doing nasty things. But when he concluded from his observations that the Yanomamo are innately and primordially “fierce” he lost his anthropological credibility, because he had not demonstrated any such thing. He has a right to his views, as creationists and racists have a right to theirs, but the evidence does not support the conclusion, which makes it scientifically incompetent.

What Marks is saying here is not that he has evidence of Chagnon doing poor fieldwork; rather, Marks dismisses Chagnon merely because of his sociobiological leanings. Note too that the italicized words in the passage are not quotes. This is important because, along with the false equation of sociobiology with genetic determinism, this type of straw man underlies nearly all of the attacks on Chagnon. Finally, notice how Marks slips into the realm of morality as he tries to traduce Chagnon's scientific credibility. In case you think the link with creationism and racism is a simple analogy—like the one I used myself at the beginning of this essay—look at how Marks ends his rant:

So on one side you’ve got the creationists, racists, genetic determinists, the Republican governor of Florida, Jared Diamond, and Napoleon Chagnon–and on the other side, you’ve got normative anthropology, and the mother of the President. Which side are you on?

How can we take this at all seriously? And why did King misleadingly quote, on a prominent news site, such a seemingly level-headed criticism which in context reveals itself as anything but level-headed? I’ll risk another analogy here and point out that Marks’s comments about genetic determinism taking over anthropology are similar in both tone and intellectual sophistication to Glenn Beck’s comments about how socialism is taking over American politics.

             King also links to a review of Noble Savages that was published in the New York Times in February, and this piece is even harsher to Chagnon. After repeating Tierney’s charge about Neel deliberately causing the 1968 measles epidemic and pointing out it was disproved, anthropologist Elizabeth Povinelli writes of the American Anthropological Association investigation that,

The committee was split over whether Neel’s fervor for observing the “differential fitness of headmen and other members of the Yanomami population” through vaccine reactions constituted the use of the Yanomamö as a Tuskegee-­like experimental population.

Since this allegation has been completely discredited by the American Society of Human Genetics, among others, Povinelli's repetition of it is irresponsible, as was the Times' failure to properly vet the facts in the article.

Try as I might to remain detached from either side as I continue to research this controversy (and I've never met any of these people), I have to say I found Povinelli's review deeply offensive. The straw men she erects and the quotes she shamelessly takes out of context, all in the service of an absurdly self-righteous and substanceless smear, allow no room whatsoever for anything answering to the name of compassion for a man who was falsely accused of complicity in an atrocity. And in her zeal to impugn Chagnon she propagates a colorful and repugnant insult of her own creation, which she misattributes to him. She writes,

Perhaps it’s politically correct to wonder whether the book would have benefited from opening with a serious reflection on the extensive suffering and substantial death toll among the Yanomamö in the wake of the measles outbreak, whether or not Chagnon bore any responsibility for it. Does their pain and grief matter less even if we believe, as he seems to, that they were brutal Neolithic remnants in a land that time forgot? For him, the “burly, naked, sweaty, hideous” Yanomamö stink and produce enormous amounts of “dark green snot.” They keep “vicious, underfed growling dogs,” engage in brutal “club fights” and—God forbid!—defecate in the bush. By the time the reader makes it to the sections on the Yanomamö’s political organization, migration patterns and sexual practices, the slant of the argument is evident: given their hideous society, understanding the real disaster that struck these people matters less than rehabilitating Chagnon’s soiled image.

In other words, Povinelli’s response to Chagnon’s “harrowing” ordeal, is to effectively say, Maybe you’re not guilty of genocide, but you’re still guilty for not quitting your anthropology job and becoming a forensic epidemiologist. Anyone who actually reads Noble Savages will see quite clearly the “slant” Povinelli describes, along with those caricatured “brutal Neolithic remnants,” must have flown in through her window right next to George Jacobs.

            Povinelli does characterize one aspect of Noble Savages correctly when she complains about its "Manichean rhetorical structure," with the bad Rousseauian, Marxist, postmodernist cultural anthropologists—along with the corrupt and PR-obsessed Catholic missionaries—on one side, and the good Hobbesian, Darwinian, scientific anthropologists on the other, though it's really just the scientific part he's concerned with. I actually expected to find a more complicated, less black-and-white debate taking place when I began looking into the attacks on Chagnon's work—and on Chagnon himself. But what I ended up finding was that Chagnon's description of the division, at least with regard to the anthropologists (I haven't researched his claims about the missionaries), is spot-on, and Povinelli's repulsive review is a case in point.

This isn’t to say that there aren’t legitimate scientific disagreements about sociobiology. In fact, Chagnon writes about how one of his heroes is “calling into question some of the most widely accepted views” as early as his dedication page, referring to E.O. Wilson’s latest book The Social Conquest of Earth. But what Sahlins, Marks, and Povinelli offer is neither legitimate nor scientific. These commenters really are, as Chagnon suggests, representative of a subset of cultural anthropologists completely given over to a moralizing hysteria. Their scholarship is as dishonest as it is defamatory, their reasoning rests on guilt by free-association and the tossing up and knocking down of the most egregious of straw men, and their tone creates the illusion of moral certainty coupled with a longsuffering exasperation with entrenched institutionalized evils. For these hysterical moralizers, it seems any theory of human behavior that involves evolution or biology represents the same kind of threat as witchcraft did to the people of Salem in the 1690s, or as communism did to McCarthyites in the 1950s. To combat this chimerical evil, the presumed righteous ends justify the deceitful means.

The unavoidable conclusion with regard to the question of why Darkness in El Dorado wasn’t dismissed outright when it should have been is that even though it has been established that Chagnon didn’t commit any of the crimes Tierney accused him of, as far as his critics are concerned, he may as well have. Somehow cultural anthropologists have come to occupy a bizarre culture of their own in which charging a colleague with genocide doesn’t seem like a big deal. Before Tierney’s book hit the shelves, two anthropologists, Terence Turner and Leslie Sponsel, co-wrote an email to the American Anthropological Association which was later sent to several journalists. Turner and Sponsel later claimed the message was simply a warning about the “impending scandal” that would result from the publication of Darkness in El Dorado. But the hyperbole and suggestive language make it read more like a publicity notice than a warning. “This nightmarish story—a real anthropological heart of darkness beyond the imagining of even a Josef Conrad (though not, perhaps, a Josef Mengele)”—is it too much to ask of those who are so fond of referencing Joseph Conrad that they actually read his book?—“will be seen (rightly in our view) by the public, as well as most anthropologists, as putting the whole discipline on trial.” As it turned out, though, the only one who was put on trial, by the American Anthropological Association—though officially it was only an “inquiry”—was Napoleon Chagnon.

Chagnon’s old academic rivals, many of whom claim their problem with him stems from the alleged devastating impact of his research on Indians, fail to appreciate the gravity of Tierney’s accusations. Their blasé response to the author being exposed as a fraud gives the impression that their eagerness to participate in the pile-on has little to do with any concern for the Yanomamö people. Instead, they embraced Darkness in El Dorado because it provided good talking points in the campaign against their dreaded nemesis Napoleon Chagnon. Sahlins, for instance, is strikingly cavalier about the personal effects of Tierney’s accusations in the review cited by King and Horgan:

The brouhaha in cyberspace seemed to help Chagnon’s reputation as much as Neel’s, for in the fallout from the latter’s defense many academics also took the opportunity to make tendentious arguments on Chagnon’s behalf. Against Tierney’s brief that Chagnon acted as an anthro-provocateur of certain conflicts among the Yanomami, one anthropologist solemnly demonstrated that warfare was endemic and prehistoric in the Amazon. Such feckless debate is the more remarkable because most of the criticisms of Chagnon rehearsed by Tierney have been circulating among anthropologists for years, and the best evidence for them can be found in Chagnon’s writings going back to the 1960s.

Sahlins goes on to offer his own sinister interpretation of Chagnon’s writings, using the same straw man and guilt-by-free-association techniques common to anthropologists in the grip of moralizing hysteria. But I can’t help wondering why anyone would take a word he says seriously after he suggests that being accused of causing a deadly epidemic helped Neel’s and Chagnon’s reputations.

*******

            Marshall Sahlins recently made news by resigning from the National Academy of Sciences in protest against the organization’s election of Chagnon to its membership and its partnerships with the military. In explaining his resignation, Sahlins insists that Chagnon, based on the evidence of his own writings, did serious harm to the people whose culture he studied. Sahlins also complains that Chagnon’s sociobiological ideas about violence are so wrongheaded that they serve to “discredit the anthropological discipline.” To back up his objections, he refers interested parties to that same review of Darkness in El Dorado King links to on her post.

Though Sahlins explains his moral and intellectual objections separately, he seems to believe that theories of human behavior based on biology are inherently immoral, as if theorizing that violence has “genetic underpinnings” is no different from claiming that violence is inevitable and justifiable. This is why Sahlins can’t discuss Chagnon without reference to Vietnam. He writes in his review,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Sahlins doesn’t provide any citations to back up this charge—he’s quite clearly not the least bit concerned with fairness or solid scholarship—and based on what Chagnon writes in Noble Savages this fantasy of “gaining control” originates in the mind of Sahlins, not in the writings of Chagnon.

For instance, Chagnon writes of being made the butt of an elaborate joke several Yanomamö conspired to play on him by giving him fake names for people in their village (like Hairy Cunt, Long Dong, and Asshole). When he mentioned these names to people in a neighboring village, they found them hilarious. "My face flushed with embarrassment and anger as the word spread around the village and everybody was laughing hysterically." And this was no minor setback: "I made this discovery some six months into my fieldwork!" (66). Contrary, too, to the despicable caricature Povinelli provides, Chagnon writes admiringly of the Yanomamö's "wicked humor," and how "They enjoyed duping others, especially the unsuspecting and gullible anthropologist who lived among them" (67). Another gem comes from an episode in which he tries to treat a rather embarrassing fungal infection: "You can't imagine the hilarious reaction of the Yanomamö watching the resident fieldworker in a most indescribable position trying to sprinkle foot powder onto his crotch, using gravity as a propellant" (143).

            The bitterness, outrage, and outright hatred directed at Chagnon, alongside the utter absence of evidence that he's done anything wrong, seem completely insane until you consider that this preeminent anthropologist falls afoul of all the –isms that haunt the fantastical armchair obsessions of postmodern pseudo-scholars. Chagnon stands as a living symbol of the white colonizer exploiting indigenous people and resources (colonialism); he propagates theories that can be read as supportive of fantasies about individual and racial superiority (Social Darwinism, racism); he reports on tribal warfare and cruelty toward women, with the implication that these evils are encoded in our genes (neoconservatism, sexism, biological determinism). It should be clear that all of this is nonsense: any exploitation is merely alleged and likely outweighed by efforts at vaccination against diseases introduced by missionaries and gold miners; sociobiology doesn't focus on racial differences, and superiority is a scientifically meaningless term; and the fact that genes play a role in some behavior implies neither that the behavior is moral nor that it is inevitable. The truly evil –ism at play in the campaign against Chagnon is postmodernism—an ideology which functions as little more than a factory for the production of false accusations.

            There are two main straw men that are bound to be rolled out by postmodern critics of evolutionary theories of behavior in any discussion of morally charged topics. The first is the gene-for misconception.

Every anthropologist, sociobiologist, and evolutionary psychologist knows that there is no gene for violence and warfare in the sense that everyone born with a particular allele will inevitably grow up to be physically aggressive. Yet, in any discussion of the causes of violence, or any other issue in which biology is implicated, critics fall all over themselves trying to catch their opponents out for making this mistake, and they pretend that by doing so they're defeating an attempt to undermine efforts to make the world more peaceful. It so happens that scientists actually have discovered a gene variant, known popularly as "the warrior gene," that increases the likelihood that an individual carrying it will engage in aggressive behavior—but only if that individual experiences a traumatic childhood. Having a gene variant associated with a trait only ever means someone is more likely to express that trait, and there will almost always be other genes and several environmental factors contributing to the overall likelihood.

You can be reasonably sure that if a critic is taking a sociobiologist or an evolutionary psychologist to task for suggesting a direct one-to-one correspondence between a gene and a behavior, that critic is being either careless or purposely misleading. In trying to bring about a more peaceful world, it's far more effective to study the actual factors that contribute to violence than it is to write moralizing criticisms of scientific colleagues. The charge that evolutionary approaches can only be used to support conservative or reactionary views of society isn't just a misrepresentation of sociobiological theories; it's also empirically false—surveys demonstrate that grad students in evolutionary anthropology are overwhelmingly liberal in their politics, just as liberal, in fact, as anthropology students in non-evolutionary concentrations.

Another thing anyone who has taken a freshman anthropology course knows, but that anti-evolutionary critics routinely take sociobiologists to task for supposedly not understanding, is that people who live in foraging or tribal cultures cannot be treated as perfect replicas of our Pleistocene ancestors—or, as Povinelli calls them, "prehistoric time capsules." Hunter-gatherers are not "living fossils": they've been evolving just as long as people in industrialized societies, their histories and environments are unique, and it's almost impossible for them to avoid being impacted by outside civilizations. If you flew two groups of foragers from different regions each into the territory of the other, you would learn quite quickly that each group's culture is intricately adapted to the environment it originally inhabited. This does not mean, however, that evidence about how foraging and tribal peoples live is irrelevant to questions about human evolution.

As different as those two groups would be, both are probably living lives much more similar to those of our ancestors than anyone in an industrialized society. What evolutionary anthropologists and psychologists tend to be most interested in are the trends that emerge when several of these cultures are compared to one another. The Yanomamö actually subsist largely on slash-and-burn agriculture, and they live in groups much larger than those of most foraging peoples. Their culture and demographic patterns may therefore provide clues to how larger and more stratified societies developed after millennia of evolution in small, mobile bands. But, again, no one is suggesting the Yanomamö are somehow interchangeable with the people who first made the transition to more complex social organization.

The prehistoric time-capsule straw man often goes hand in hand with an implication that the anthropologists supposedly making the blunder see the people whose culture they study as somehow inferior, somehow less human than people who live in industrialized civilizations. It seems like a short step from this subtle dehumanization to the kind of wholesale exploitation indigenous peoples are often made to suffer. But the sad truth is that there are already plenty of economic, religious, and geopolitical forces working against the preservation of indigenous cultures and the protection of indigenous people's rights—enough to make scapegoating scientists who gather cultural and demographic information completely unnecessary. And you can bet Napoleon Chagnon is, if anything, more outraged by the mistreatment of the Yanomamö than most of the activists who falsely accuse him of complicity, because he knows so many of them personally. Chagnon is particularly critical of Brazilian gold miners and Salesian missionaries, both of whom, it seems, have far more incentive to disrespect the Yanomamö culture (by supplanting their religion and moving them closer to civilization) and to ravage the territory they inhabit. The Salesians' reprisals for his criticisms, which entailed pulling strings to keep him out of the territory and efforts to create a public image of him as a menace, eventually provided fodder for his critics back home as well.

*******

In an article titled "Guilt by Association," published in the journal American Anthropologist in 2004, about the American Anthropological Association's compromised investigation of Tierney's accusations against Chagnon, Thomas Gregor and Daniel Gross describe "chains of logic by which anthropological research becomes, at the end of an associative thread, an act of misconduct" (689). Quoting Defenders of the Truth, sociologist Ullica Segerstrale's indispensable 2000 book on the sociobiology debate, Gregor and Gross explain that Chagnon's postmodern accusers relied on a rhetorical strategy common among critics of evolutionary theories of human behavior—a strategy that produces something startlingly indistinguishable from spectral evidence. Segerstrale writes,

In their analysis of their target’s texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum moral guilt might be attributed to the perpetrator of this claim. (206)

She goes on to cite a “glaring” example of how a scholar drew an imaginary line from sociobiology to Nazism, and then connected it to fascist behavioral control, even though none of these links were supported by any evidence (207). Gregor and Gross describe how this postmodern version of spectral evidence was used to condemn Chagnon.

In the case at hand, for example, the Report takes Chagnon to task for an article in Science on revenge warfare, in which he reports that “Approximately 30% of Yanomami adult male deaths are due to violence”(Chagnon 1988:985). Chagnon also states that Yanomami men who had taken part in violent acts fathered more children than those who had not. Such facts could, if construed in their worst possible light, be read as suggesting that the Yanomami are violent by nature and, therefore, undeserving of protection. This reading could give aid and comfort to the opponents of creating a Yanomami reservation. The Report, therefore, criticizes Chagnon for having jeopardized Yanomami land rights by publishing the Science article, although his research played no demonstrable role in the demarcation of Yanomami reservations in Venezuela and Brazil. (689)

The task force had found that Chagnon was guilty—even though it was nominally just an “inquiry” and had no official grounds for pronouncing on any misconduct—of harming the Indians by portraying them negatively. Gregor and Gross, however, sponsored a ballot at the AAA to rescind the organization’s acceptance of the report; in 2005, it was voted on by the membership and passed by a margin of 846 to 338. “Those five years,” Chagnon writes of the time between that email warning about Tierney’s book and the vote finally exonerating him, “seem like a blurry bad dream” (450).

            Anthropological fieldwork has changed dramatically since Chagnon's early research in Venezuela. There was legitimate concern about the impact of trading manufactured goods like machetes for information, and you can read about some of the fracases the practice fomented among the Yanomamö in Noble Savages. It is now prohibited by the ethical guidelines of ethnographic field research. The dangers to isolated or remote populations from communicable diseases must also be considered when planning any expedition to study indigenous cultures. But Chagnon entered the Ocamo region after many missionaries and just before many gold miners, and we can't hold him accountable for disregarding rules that didn't exist at the time. Sahlins, however, echoing Tierney's perversion of Neel and Chagnon's race to immunize the Indians into evidence that the two men were the source of the contagion, accuses Chagnon of causing much of the violence he witnessed and reported by spreading around his goods.

Hostilities thus tracked the always-changing geopolitics of Chagnon-wealth, including even pre-emptive attacks to deny others access to him. As one Yanomami man recently related to Tierney: “Shaki [Chagnon] promised us many things, and that’s why other communities were jealous and began to fight against us.”

Aside from the fact that some Yanomamö men had just returned from a raid the very first time he entered one of their villages, and the fact that the source of this quote has been discredited, Sahlins is also basing his elaborate accusation on some pretty paltry evidence.

            Sahlins also insists that the “monster in Amazonia” couldn’t possibly have figured out a way to learn the names and relationships of the people he studied without aggravating intervillage tensions (thus implicitly conceding those tensions already existed). The Yanomamö have a taboo against saying the names of other adults, similar to our own custom of addressing people we’ve just met by their titles and last names, but with much graver consequences for violations. This is why Chagnon had to confirm the names of people in one tribe by asking about them in another, the practice that led to his discovery of the prank that was played on him. Sahlins uses Tierney’s reporting as the only grounds for his speculations on how disruptive this was to the Yanomamö. And, in the same way he suggested there was some moral equivalence between Chagnon going into the jungle to study the culture of a group of Indians and the US military going into the jungles to engage in a war against the Vietcong, he fails to distinguish between the Nazi practice of marking Jews and Chagnon’s practice of writing numbers on people’s arms to keep track of their problematic names. Quoting Chagnon, Sahlins writes,

“I began the delicate task of identifying everyone by name and numbering them with indelible ink to make sure that everyone had only one name and identity.” Chagnon inscribed these indelible identification numbers on people’s arms—barely 20 years after World War II.

This juvenile innuendo calls to mind Jon Stewart’s observation that it’s not until someone in Washington makes the first Hitler reference that we know a real political showdown has begun (and Stewart has had to make the point a few times again since then).

One of the things that makes this type of trashy pseudo-scholarship so insidious is that it often creates an indelible impression of its own. Anyone who reads Sahlins's essay could be forgiven for thinking that writing numbers on people might really be a sign that Chagnon was dehumanizing them. Fortunately, Chagnon's own accounts go a long way toward dispelling this suspicion. In one passage, he describes how he made the naming and numbering into a game for this group of people who knew nothing about writing:

I had also noted after each name the item that person wanted me to bring on my next visit, and they were surprised at the total recall I had when they decided to check me. I simply looked at the number I had written on their arm, looked the number up in my field book, and then told the person precisely what he had requested me to bring for him on my next trip. They enjoyed this, and then they pressed me to mention the names of particular people in the village they would point to. I would look at the number on the arm, look it up in my field book, and whisper his name into someone’s ear. The others would anxiously and eagerly ask if I got it right, and the informant would give an affirmative quick raise of the eyebrows, causing everyone to laugh hysterically. (157)

Needless to say, this is a far cry from using the labels to efficiently herd people into cargo trains to transport them to concentration camps and gas chambers. Sahlins disgraces himself by suggesting otherwise and by not distancing himself from Tierney when it became clear that his atrocious accusations were meritless.

            Which brings us back to John Horgan. One week after the post in which he bragged about standing up to five email bullies who were urging him not to endorse Tierney’s book and took the opportunity to say he still stands by the mostly positive review, he published another post on Chagnon, this time about the irony of how close Chagnon’s views on war are to those of Margaret Mead, a towering figure in anthropology whose blank-slate theories sociobiologists often challenge. (Both of Horgan’s posts marking the occasion of Chagnon’s new book—neither of which quote from it—were probably written for publicity; his own book on war was published last year.) As I read the post, I came across the following bewildering passage: 

Chagnon advocates have cited a 2011 paper by bioethicist Alice Dreger as further “vindication” of Chagnon. But to my mind Dreger’s paper—which wastes lots of verbiage bragging about all the research that she’s done and about how close she has gotten to Chagnon–generates far more heat than light. She provides some interesting insights into Tierney’s possible motives in writing Darkness in El Dorado, but she leaves untouched most of the major issues raised by Chagnon’s career.

Horgan's earlier post was one of the first things I'd read in years about Chagnon and Tierney's accusations against him. I read Alice Dreger's report on her investigation of those accusations, and of the "inquiry" by the American Anthropological Association that ensued from them, shortly afterward. I kept thinking back to Horgan's continuing endorsement of Tierney's book as I read the report, because she cites several other reports that establish, at the very least, that there was no evidence to support the worst of the accusations. My conclusion was that Horgan simply hadn't done his homework. How could he endorse a work featuring such horrific accusations if he knew most of them, the most horrific in particular, had been disproved? But with this second post he was revealing that he knew the accusations were false—and yet he still hasn't recanted his endorsement.

            If you only read two supplements to Noble Savages, I recommend Dreger's report and Emily Eakin's profile of Chagnon in the New York Times. The one qualm I have about Eakin's piece is that she too sacrifices the principle of presuming innocence in her effort to achieve journalistic balance, quoting Leslie Sponsel, one of the authors of the appalling email that sparked the AAA's investigation of Chagnon, as saying, "The charges have not all been disproven by any means." It should go without saying that the burden of proof is on the accuser. It should also go without saying that once the most atrocious of Tierney's accusations were disproven, the discussion of culpability should have shifted its focus away from Chagnon and onto Tierney and his supporters. That it didn't calls to mind the scene in The Crucible when an enraged John Proctor, whose wife is being arrested, shouts in response to an assurance that she'll be released if she's innocent—"If she is innocent! Why do you never wonder if Parris be innocent, or Abigail? Is the accuser always holy now? Were they born this morning as clean as God's fingers?" (73). Aside from Chagnon, Dreger is about the only one who realized that Tierney himself warranted some investigating.

            Eakin echoes Horgan a bit when she faults the “zealous tone” of Dreger’s report. Indeed, at one point, Dreger compares Chagnon’s trial to Galileo’s being called before the Inquisition. The fact is, though, there’s an important similarity. One of the most revealing discoveries of Dreger’s investigation was that the members of the AAA task force knew Tierney’s book was full of false accusations but continued with their inquiry anyway because they were concerned about the organization’s public image. In an email to the sociobiologist Sarah Blaffer Hrdy, Jane Hill, the head of the task force, wrote,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice.

How John Horgan could have read this and still claimed that Dreger’s report “generates more heat than light” is beyond me. I can only guess that his judgment has been distorted by cognitive dissonance.

        To Horgan's other complaints—that she writes too much about her methods and admits to having become friends with Chagnon—she might respond that there is so much real hysteria surrounding this controversy, along with so much commentary reminiscent of the ridiculous rhetoric one hears on cable news, that it was important to distinguish her report from all the groundless and recriminatory he-said-she-said. As for the friendship, it came about over the course of Dreger's investigation. This is important because, for one thing, it doesn't suggest any pre-existing bias, and, for another, one of the claims by critics of Chagnon's work is that the violence he reported was either provoked by the man himself or represented some kind of mental projection of his own bellicose character onto the people he was studying.

Dreger's friendship with Chagnon shows that he's not the monster portrayed by those in the grip of moralizing hysteria. And if parts of her report strike many as sententious, it's probably owing to their unfamiliarity with how ingrained that hysteria has become. It may seem odd that anyone would need to pronounce on the importance of evidence or fairness—but basic principles we usually take for granted were trampled in the frenzy to condemn Chagnon.

If his enemies are going to compare him to Mengele, then a comparison with Galileo seems less extreme.

  Dreger, it seems to me, deserves credit for bringing a sorely needed modicum of sanity to the discussion. And she deserves credit as well for being one of the only people commenting on the controversy who understands the devastating personal impact of such vile accusations. She writes,

Meanwhile, unlike Neel, Chagnon was alive to experience what it is like to be drawn-and-quartered in the international press as a Nazi-like experimenter responsible for the deaths of hundreds, if not thousands, of Yanomamö. He tried to describe to me what it is like to suddenly find yourself accused of genocide, to watch your life’s work be twisted into lies and used to burn you.

So let’s make it clear: the scientific controversy over sociobiology and the scandal over Tierney’s discredited book are two completely separate issues. In light of the findings from all the investigations of Tierney’s claims, we should all, no matter our theoretical leanings, agree that Darkness in El Dorado is, in the words of Jane Hill, who headed a task force investigating it, “just a piece of sleaze.” We should still discuss whether it was appropriate or advisable for Chagnon to exchange machetes for information—I’d be interested to hear what he has to say himself, since he describes all kinds of frustrations the practice caused him in his book. We should also still discuss the relative threat of contagion posed by ethnographers versus missionaries, weighed of course against the benefits of inoculation campaigns.

But we shouldn’t discuss any ethical or scientific matter with reference to Darkness in El Dorado or its disgraced author aside from questions like: Why was the hysteria surrounding the book allowed to go so far? Why were so many people willing to scapegoat Chagnon? Why doesn’t anyone—except Alice Dreger—seem at all interested in bringing Tierney to justice in some way for making such outrageous accusations based on misleading or fabricated evidence? What he did is far worse than what Jonah Lehrer or James Frey did, and yet both of those men have publicly acknowledged their dishonesty while no one has put even the slightest pressure on Tierney to publicly admit wrongdoing.

There’s some justice to be found in how easy Tierney and all the self-righteous pseudo-scholars like Sahlins have made it for future (and present) historians of science to cast them as deluded and unscrupulous villains in the story of a great—but flawed, naturally—anthropologist named Napoleon Chagnon. There’s also justice to be found in how snugly the hysterical moralizers’ tribal animosity toward Chagnon, their dehumanization of him, fits within a sociobiological framework of violence and warfare. One additional bit of justice might come from a demonstration of how easily Tierney’s accusatory pseudo-reporting can be turned inside-out. Tierney at one point in his book accuses Chagnon of withholding names that would disprove the central finding of his famous Science paper, and, reading much into the fact that the ascendant theories Chagnon criticized were openly inspired by Karl Marx’s ideas, he writes,

Yet there was something familiar about Chagnon’s strategy of secret lists combined with accusations against ubiquitous Marxists, something that traced back to his childhood in rural Michigan, when Joe McCarthy was king. Like the old Yanomami unokais, the former senator from Wisconsin was in no danger of death. Under the mantle of Science, Tailgunner Joe was still firing away—undefeated, undaunted, and blessed with a wealth of off-spring, one of whom, a poor boy from Port Austin, had received a full portion of his spirit. (180)

Tierney had no evidence that Chagnon kept any data out of his analysis. Nor did he have any evidence regarding Chagnon’s ideas about McCarthy aside from what he thought he could divine from knowing where he grew up (he cited no surveys of opinion from the town either). His writing is so silly it would be laughable if we didn’t know about all the anguish it caused. Tierney might just as easily have tried to divine Chagnon’s feelings about McCarthyism based on his alma mater. It turns out Chagnon began attending classes at the University of Michigan, the school where he’d write the famous PhD dissertation that would become the classic anthropology text The Fierce People, just two decades after another famous alumnus studied there, one who actually stood up to McCarthy at a time when he was enjoying the success of a historical play he’d written, an allegory on the dangers of moralizing hysteria, in particular the one we now call the Red Scare. His name was Arthur Miller.

Also read

Can't Win for Losing: Why There are So Many Losers in Literature and Why It Has to Change

And

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins

And

The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy

Read More
Dennis Junk

The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy Disguised as a Review of “Mothers and Others: The Evolutionary Origins of Mutual Understanding”

Sarah Blaffer Hrdy’s book “Mother Nature” was one of the first things I ever read about evolutionary psychology. With her new book, “Mothers and Others,” Hrdy lays out a theory for why humans are so cooperative compared to their ape cousins. Once again, she’s managed to pen a work that will stand the test of time, rewarding multiple readings well into the future.

One way to think of the job of anthropologists studying human evolution is to divide it into two basic components: the first is to arrive at a comprehensive and precise catalogue of the features and behaviors that make humans different from the species most closely related to us, and the second is to arrange all these differences in order of their emergence in our ancestral line. Knowing what came first is essential—though not sufficient—to the task of distinguishing between causes and effects. For instance, humans have brains that are significantly larger than those of any other primate, and we use these brains to fashion tools that are far more elaborate than the stones, sticks, leaves, and sponges used by other apes. Humans are also the only living ape that routinely walks upright on two legs. Since most of us probably give pride of place in the hierarchy of our species’ idiosyncrasies to our intelligence, we can sympathize with early Darwinian thinkers who felt sure brain expansion must have been what started our ancestors down their unique trajectory, making possible the development of increasingly complex tools, which in turn made having our hands liberated from locomotion duty ever more advantageous.

This hypothetical sequence, however, was dashed rather dramatically with the discovery in 1974 of Lucy, the 3.2-million-year-old skeleton of an Australopithecus afarensis, in Ethiopia. Lucy resembles a chimpanzee in most respects, including cranial capacity, except that her bones have all the hallmarks of a creature with a bipedal gait. Anthropologists like to joke that Lucy proved butts were more important to our evolution than brains. But, though intelligence wasn’t the first of our distinctive traits to evolve, most scientists still believe it was the deciding factor behind our current dominance. At least for now, humans go into the jungle and build zoos and research facilities to study apes, not the other way around. Other apes certainly can’t compete with humans in terms of sheer numbers. Still, intelligence is a catch-all term. We must ask what exactly it is that our bigger brains can do better than those of our phylogenetic cousins.

A couple decades ago, that key capacity was thought to be language, which makes symbolic thought possible. Or is it symbolic thought that makes language possible? Either way, though a handful of ape prodigies have amassed some high vocabulary scores in labs where they’ve been taught to use pictographs or sign language, human three-year-olds accomplish similar feats as a routine part of their development. As primatologist and sociobiologist (one of the few who unabashedly uses that term for her field) Sarah Blaffer Hrdy explains in her 2009 book Mothers and Others: The Evolutionary Origins of Mutual Understanding, human language relies on abilities and interests aside from a mere reporting on the state of the outside world, beyond simply matching objects or actions with symbolic labels. Honeybees signal the location of food with their dances, vervet monkeys have distinct signals for attacks by flying versus ground-approaching predators, and the list goes on. Where humans excel when it comes to language is not just in the realm of versatility, but also in our desire to bond through these communicative efforts. Hrdy writes,

The open-ended qualities of language go beyond signaling. The impetus for language has to do with wanting to “tell” someone else what is on our minds and learn what is on theirs. The desire to psychologically connect with others had to evolve before language. (38)

The question Hrdy attempts to answer in Mothers and Others—the difference between humans and other apes she wants to place within a theoretical sequence of evolutionary developments—is how we evolved to be so docile, tolerant, and nice as to be able to cram ourselves by the dozens into tight spaces like airplanes without conflict. “I cannot help wondering,” she recalls having thought in a plane preparing for flight,

what would happen if my fellow human passengers suddenly morphed into another species of ape. What if I were traveling with a planeload of chimpanzees? Any of us would be lucky to disembark with all ten fingers and toes still attached, with the baby still breathing and unmaimed. Bloody earlobes and other appendages would litter the aisles. Compressing so many highly impulsive strangers into a tight space would be a recipe for mayhem. (3)

Over the past decade, the human capacity for cooperation, and even for altruism, has been at the center of evolutionary theorizing. Some clever experiments in the field of economic game theory have revealed several scenarios in which humans can be counted on to act against their own interest. What survival and reproductive advantages could possibly accrue to creatures given to acting for the benefit of others?

When it comes to economic exchanges, of course, human thinking isn’t tied to the here-and-now the way the thinking of other animals tends to be. To explain why humans might, say, forgo a small payment in exchange for the opportunity to punish a trading partner for withholding a larger, fairer payment, many behavioral scientists point out that humans seldom think in terms of one-off deals. Any human living in a society of other humans needs to protect his or her reputation for not being someone who abides cheating. Experimental settings are well and good, but throughout human evolutionary history individuals could never have been sure they wouldn’t encounter exchange partners a second or third time in the future. It so happens that one of the dominant theories to explain ape intelligence relies on the need for individuals within somewhat stable societies to track who owes whom favors, who is subordinate to whom, and who can successfully deceive whom. This “Machiavellian intelligence” hypothesis explains the cleverness of humans and other apes as the outcome of countless generations vying for status and reproductive opportunities in intensely competitive social groups.

One of the difficulties in trying to account for the evolution of intelligence is that its advantages seem like such a no-brainer. Isn’t it always better to be smarter? But, as Hrdy points out, the Machiavellian intelligence hypothesis runs into a serious problem. Social competition may have been an important factor in making primates brainier than other mammals, but it can’t explain why humans are brainier than other apes. She writes,

We still have to explain why humans are so much better than chimpanzees at conceptualizing what others are thinking, why we are born innately eager to interpret their motives, feelings, and intentions as well as care about their affective states and moods—in short, why humans are so well equipped for mutual understanding. Chimpanzees, after all, are at least as socially competitive as humans are. (46)

To bolster this point, Hrdy cites research showing that infant chimps have some dazzling social abilities once thought to belong solely to humans. In 1977, developmental psychologist Andrew Meltzoff published his finding that newborn humans mirror the facial expressions of adults they engage with. It was thought that this tendency in humans relied on some neurological structures unique to our lineage which provided the raw material for the evolution of our incomparable social intelligence. But then in 1996 primatologist Masako Myowa replicated Meltzoff’s findings with infant chimps. This and other research suggests that other apes have probably had much the same raw material for natural selection to act on. Yet, whereas the imitative and empathic skills flourish in maturing humans, they seem to atrophy in apes. Hrdy explains,

Even though other primates are turning out to be far better at reading intentions than primatologists initially realized, early flickerings of empathic interest—what might even be termed tentative quests for intersubjective engagement—fade away instead of developing and intensifying as they do in human children. (58)

So the question of what happened in human evolution to make us so different remains.

*****

Sarah Blaffer Hrdy exemplifies a rare, possibly unique, blend of scientific rigor and humanistic sensitivity—the vision of a great scientist and the fine observation of a novelist (or the vision of a great novelist and fine observation of a scientist). Reading her 1999 book, Mother Nature: A History of Mothers, Infants, and Natural Selection, was a watershed experience for me. In going beyond the realm of the literate into that of the literary while hewing closely to strict epistemic principle, she may surpass the accomplishments of even such great figures as Richard Dawkins and Stephen Jay Gould. In fact, since Mother Nature was one of the books through which I was introduced to sociobiology—more commonly known today as evolutionary psychology—I was a bit baffled at first by much of the criticism leveled against the field by Gould and others who claimed it was founded on overly simplistic premises and often produced theories that were politically reactionary.

The theme to which Hrdy continually returns is the too-frequently overlooked role of women and their struggles in those hypothetical evolutionary sequences anthropologists string together. For inspiration in her battle against facile biological theories whose sole purpose is to provide a cheap rationale for the political status quo, she turned not to a scientist but to a novelist. The man most responsible for the misapplication of Darwin’s theory of natural selection to the justification of human societal hierarchies was the philosopher Herbert Spencer, in whose eyes women were no more than what Hrdy characterizes as “Breeding Machines.” Spencer and his fellow evolutionists in the Victorian age, she explains in Mother Nature,

took for granted that being female forestalled women from evolving “the power of abstract reasoning and that most abstract of emotions, the sentiment of justice.” Predestined to be mothers, women were born to be passive and noncompetitive, intuitive rather than logical. Misinterpretations of the evidence regarding women’s intelligence were cleared up early in the twentieth century. More basic difficulties having to do with this overly narrow definition of female nature were incorporated into Darwinism proper and linger to the present day. (17)

Many women over the generations have been unable to envision a remedy for this bias in biology. Hrdy describes the reaction of a literary giant whose lead many have followed.

For Virginia Woolf, the biases were unforgivable. She rejected science outright. “Science, it would seem, is not sexless; she is a man, a father, and infected too,” Woolf warned back in 1938. Her diagnosis was accepted and passed on from woman to woman. It is still taught today in university courses. Such charges reinforce the alienation many women, especially feminists, feel toward evolutionary theory and fields like sociobiology. (xvii)

But another literary luminary much closer to the advent of evolutionary thinking had a more constructive, and combative, response to short-sighted male biologists. And it is to her that Hrdy looks for inspiration. “I fall in Eliot’s camp,” she writes, “aware of the many sources of bias, but nevertheless impressed by the strength of science as a way of knowing” (xviii). She explains that George Eliot,

whose real name was Mary Ann Evans, recognized that her own experiences, frustrations, and desires did not fit within the narrow stereotypes scientists then prescribed for her sex. “I need not crush myself… within a mould of theory called Nature!” she wrote. Eliot’s primary interest was always human nature as it could be revealed through rational study. Thus she was already reading an advance copy of On the Origin of Species on November 24, 1859, the day Darwin’s book was published. For her, “Science has no sex… the mere knowing and reasoning faculties, if they act correctly, must go through the same process and arrive at the same result.” (xvii)

Eliot’s distaste for Spencer’s idea that women’s bodies were designed to divert resources away from the brain to the womb was as personal as it was intellectual. She had in fact met and quickly fallen in love with Spencer in 1851. She went on to send him a proposal which he rejected on eugenic grounds: “…as far as posterity is concerned,” Hrdy quotes, “a cultivated intelligence based upon a bad physique is of little worth, seeing that its descendants will die out in a generation or two.” Eliot’s retort came in the form of a literary caricature—though Spencer already seems a bit like his own caricature. Hrdy writes,

In her first major novel, Adam Bede (read by Darwin as he relaxed after the exertions of preparing Origin for publication), Eliot put Spencer’s views concerning the diversion of somatic energy into reproduction in the mouth of a pedantic and blatantly misogynist old schoolmaster, Mr. Bartle: “That’s the way with these women—they’ve got no head-pieces to nourish, and so their food all runs either to fat or brats.” (17)

A mother of three and an Emeritus Professor of Anthropology at the University of California, Davis, Hrdy is eloquent on the need for intelligence—and lots of familial and societal support—if one is to balance duties and ambitions like her own. Her first contribution to ethology came when she realized that the infanticide among Hanuman langurs, which she’d gone to Mount Abu in Rajasthan, India, to study at age 26 for her doctoral thesis, had nothing to do with overpopulation, as many had suspected. Instead, the pattern she observed was that whenever an outside male deposed a group’s main breeder, he immediately began exterminating all of the prior male’s offspring to induce the females to ovulate and give birth again—this time to the new male’s offspring. This was the selfish gene theory in action. But the females Hrdy was studying had an interesting response to this strategy.

In the early 1970s, it was still widely assumed by Darwinians that females were sexually passive and “coy.” Female langurs were anything but. When bands of roving males approached the troop, females would solicit them or actually leave their troop to go in search of them. On occasion, a female mated with invaders even though she was already pregnant and not ovulating (something else nonhuman primates were not supposed to do). Hence, I speculated that mothers were mating with outside males who might take over her troop one day. By casting wide the web of possible paternity, mothers could increase the prospects of future survival of offspring, since males almost never attack infants carried by females that, in the biblical sense of the word, they have “known.” Males use past relations with the mother as a cue to attack or tolerate her infant. (35)

Hrdy would go on to discover this was just one of myriad strategies primate females use to get their genes into future generations. The days of seeing females as passive vehicles while the males duke it out for evolutionary supremacy were now numbered.

I’ll never forget the Young-Goodman-Brown experience of reading the twelfth chapter of Mother Nature, titled “Unnatural Mothers,” which covers an impressive variety of evidence that simply devastates any notion of women as nurturing automatons, evolved for the sole purpose of serving as loving mothers. The verdict researchers arrive at whenever they take an honest look into the practices of women with newborns is that care is contingent. To give just one example, Hrdy cites the history of one of the earliest foundling homes in the world, the “Hospital of the Innocents” in Florence.

Founded in 1419, with assistance from the silk guilds, the Ospedale degli Innocenti was completed in 1445. Ninety foundlings were left there the first year. By 1539 (a famine year), 961 babies were left. Eventually five thousand infants a year poured in from all corners of Tuscany. (299)

What this means is that a troubling number of new mothers were realizing they couldn't care for their infants. Unfortunately, newborns without direct parental care seldom fare well. “Of 15,000 babies left at the Innocenti between 1755 and 1773,” Hrdy reports, “two thirds died before reaching their first birthday” (299). And there were fifteen other foundling homes in the Grand Duchy of Tuscany at the time.

The chapter amounts to a worldwide tour of infant abandonment, exposure, and killing. (I remember having a nightmare after reading it about being off-balance and unable to set a foot down without stepping on a dead baby.) Researchers studying sudden infant death syndrome in London set up hidden cameras to monitor mothers interacting with their babies but ended up videotaping some of the mothers trying to smother them. Cases like this have made it necessary for psychiatrists to warn doctors studying the phenomenon “that some undeterminable portion of SIDS cases might be infanticides” (292). Why do so many mothers abandon or kill their babies? Turning to the ethnographic data, Hrdy explains,

Unusually detailed information was available for some dozen societies. At a gross level, the answer was obvious. Mothers kill their own infants where other forms of birth control are unavailable. Mothers were unwilling to commit themselves and had no way to delegate care of the unwanted infant to others—kin, strangers, or institutions. History and ecological constraints interact in complex ways to produce different solutions to unwanted births. (296)

Many scholars see the contingent nature of maternal care as evidence that motherhood is nothing but a social construct. Consistent with the blank-slate view of human nature, this theory holds that every aspect of child-rearing, whether pertaining to the roles of mothers or fathers, is determined solely by culture and therefore must be learned. Others, who simply can’t let go of the idea of women as virtuous vessels, suggest that these women, as numerous as they are, must all be deranged.

Hrdy demolishes both the purely social constructivist view and the suggestion of pathology. And her account of the factors that lead women to infanticide goes to the heart of her arguments about the centrality of female intelligence in the history of human evolution. Citing the pioneering work of evolutionary psychologists Martin Daly and Margo Wilson, Hrdy writes,

How a mother, particularly a very young mother, treats one infant turns out to be a poor predictor of how she might treat another one born when she is older, or faced with improved circumstances. Even with culture held constant, observing modern Western women all inculcated with more or less the same post-Enlightenment values, maternal age turned out to be a better predictor of how effective a mother would be than specific personality traits or attitudes. Older women describe motherhood as more meaningful, are more likely to sacrifice themselves on behalf of a needy child, and mourn lost pregnancies more than do younger women. (314)

The takeaway is that a woman, to reproduce successfully, must assess her circumstances, including the level of support she can count on from kin, dads, and society. If she lacks the resources or the support necessary to raise the child, she may have to make a hard decision. But making that decision in the present unfavorable circumstances in no way precludes her from making the most of future opportunities to give birth to other children and raise them to reproductive age.

Hrdy goes on to describe an experimental intervention that took place in a hospital located across the street from a foundling home in 17th century France. The Hospice des Enfants Assistes cared for indigent women and assisted them during childbirth. It was the only place where poor women could legally abandon their babies. What the French reformers did was tell a subset of the new mothers that they had to stay with their newborns for eight days after birth.

Under this “experimental” regimen, the proportion of destitute mothers who subsequently abandoned their babies dropped from 24 to 10 percent. Neither cultural concepts about babies nor their economic circumstances had changed. What changed was the degree to which they had become attached to their breast-feeding infants. It was as though their decision to abandon their babies and their attachment to their babies operated as two different systems. (315)

Following the originator of attachment theory, John Bowlby, who set out to integrate psychiatry and developmental psychology into an evolutionary framework, Hrdy points out that the emotions underlying the bond between mothers and infants (and fathers and infants too) are as universal as they are consequential. Indeed, mothers forced to abandon their infants must be savvy enough to realize they need to do so before these emotions are engaged, or they will be unable to go through with the deed.

Female strategy plays a crucial role in reproductive outcomes in several domains beyond the choice of whether or not to care for infants. Women must form bonds with other women for support, procure the protection of men (usually from other men), and lay the groundwork for their children’s own future reproductive success. And that’s just what women have to do before choosing a mate—a task that involves striking a balance between good genes and a high level of devotion—getting pregnant, and bringing the baby to term. The demographic transition that occurs when an agrarian society becomes increasingly industrialized is characterized at first by huge population increases as infant mortality drops; growth then levels off as women gain more control over their life trajectories. Here again, the choices women tend to make are at odds with Victorian (and modern evangelical) conceptions of their natural proclivities. Hrdy writes,

Since, formerly, status and well-being tended to be correlated with reproductive success, it is not surprising that mothers, especially those in higher social ranks, put the basics first. When confronted with a choice between striving for status and striving for children, mothers gave priority to status and “cultural success” ahead of a desire for many children. (366)

And then of course come all the important tasks and decisions associated with actually raising any children the women eventually do give birth to. One of the basic skill sets women have to master to be successful mothers is making and maintaining friendships; they must be socially savvy because more than with any other ape the support of helpers, what Hrdy calls allomothers, will determine the fate of their offspring.

*****

Mother Nature is a massive work—541 pages before the endnotes—exploring motherhood through the lens of sociobiology and attachment theory. Mothers and Others is leaner, coming in at just under 300 pages, because its focus is narrower. Hrdy feels that in the past decade’s attempts to account for humans’ prosocial impulses, the role of women and motherhood has once again been scanted. She points to the prevalence of theories focusing on competition between groups, with the edge going to those made up of the most cooperative and cohesive members. Such theories once again give the leading role to males and their conflicts, leaving half the species out of the story—unless that other half’s only role is to tend to the children and forage for food while the “band of brothers” is out heroically securing borders.

Hrdy doesn’t weigh in directly on the growing controversy over whether group selection has operated as a significant force in human evolution. The problem she sees with intertribal warfare as an explanation for human generosity and empathy is that the timing isn’t right. What Hrdy is after are the selection pressures that led to the evolution of what she calls “emotionally modern humans,” the “people preadapted to get along with one another even when crowded together on an airplane” (66). And she argues that humans must have been emotionally modern before they could have further evolved to be cognitively modern. “Brains require care more than caring requires brains” (176). Her point is that bonds of mutual interest and concern came before language and the capacity for runaway inventiveness. Humans, Hrdy maintains, would have had to begin forming these bonds long before the effects of warfare were felt.

Apart from periodic increases in unusually rich locales, most Pleistocene humans lived at low population densities. The emergence of human mind reading and gift-giving almost certainly preceded the geographic spread of a species whose numbers did not begin to really expand until the past 70,000 years. With increasing population density (made possible, I would argue, because they were already good at cooperating), growing pressure on resources, and social stratification, there is little doubt that groups with greater internal cohesion would prevail over less cooperative groups. But what was the initial payoff? How could hypersocial apes evolve in the first place? (29)

In other words, what was it that took inborn capacities like mirroring an adult’s facial expressions, present in both human and chimp infants, and through generations of natural selection developed them into the intersubjective tendencies displayed by humans today?

Like so many other anthropologists before her, Hrdy begins her attempt to answer this question by pointing to a trait present in humans but absent in our fellow apes. “Under natural conditions,” she writes, “an orangutan, chimpanzee, or gorilla baby nurses for four to seven years and at the outset is inseparable from his mother, remaining in intimate front-to-front contact 100 percent of the day and night” (68). But humans allow others to participate in the care of their babies almost immediately after giving birth to them. Who besides Sarah Blaffer Hrdy would have noticed this difference, or given it more than a passing thought? (Actually, there are quite a few candidates among anthropologists—Kristen Hawkes for instance.) Ape mothers remain in constant contact with their infants, whereas human mothers often hand over their babies to other women to hold as soon as they emerge from the womb. The difference goes far beyond physical contact. Humans are what Hrdy calls “cooperative breeders,” meaning a child will in effect have several parents aside from the primary one. Help from alloparents opens the way for an increasingly lengthy development, which is important because the more complex the trait—and human social intelligence is about as complex as they come—the longer it takes to develop in maturing individuals. Hrdy writes,

One widely accepted tenet of life history theory is that, across species, those with bigger babies relative to the mother’s body size will also tend to exhibit longer intervals between births because the more babies cost the mother to produce, the longer she will need to recoup before reproducing again. Yet humans—like marmosets—provide a paradoxical exception to this rule. Humans, who of all the apes produce the largest, slowest-maturing, and most costly babies, also breed the fastest. (101)

Those marmosets turn out to be central to Hrdy’s argument because, along with their cousins in the family Callitrichidae, the tamarins, they make up almost the totality of the primate species that she classifies as “full-fledged cooperative breeders” (92). This and other similarities between humans and marmosets and tamarins have long been overlooked because anthropologists have understandably been focused on the great apes, as well as on other common research subjects like baboons and macaques.

Golden Lion Tamarins, by Sarah Landry

Callitrichidae, it so happens, engage in some uncannily human-like behaviors. Plenty of primate babies wail and shriek when they’re in distress, but infants who are frequently not in direct contact with their mothers would have to find a way to engage with them, as well as other potential caregivers, even when they aren’t in any trouble. “The repetitive, rhythmical vocalizations known as babbling,” Hrdy points out, “provided a particularly elaborate way to accomplish this” (122). But humans aren’t the only primates that babble “if by babble we mean repetitive strings of adultlike vocalizations uttered without vocal referents”; marmosets and tamarins do it too. Some of the other human-like patterns aren’t as cute though. Hrdy writes,

Shared care and provisioning clearly enhances maternal reproductive success, but there is also a dark side to such dependence. Not only are dominant females (especially pregnant ones) highly infanticidal, eliminating babies produced by competing breeders, but tamarin mothers short on help may abandon their own young, bailing out at birth by failing to pick up neonates when they fall to the ground or forcing clinging newborns off their bodies, sometimes even chewing on their hands or feet. (99)

It seems that the more cooperative infant care tends to be in a given species, the more conditional it is: the more likely it is to be refused when the necessary support of others can’t be counted on.

Hrdy’s cooperative breeding hypothesis is an outgrowth of George Williams and Kristen Hawkes’s so-called “Grandmother Hypothesis.” For Hawkes, the important difference between humans and apes is that human females go on living for decades after menopause, whereas very few female apes—or any other mammals for that matter—live past their reproductive years. Hawkes hypothesized that the help of grandmothers made possible ever longer periods of dependent development for children, which in turn allowed the incomparable social intelligence of humans to evolve. Until recently, though, this theory failed to convince most anthropologists because a renowned compendium of data compiled by George Peter Murdock in his Ethnographic Atlas revealed a strong trend toward patrilocal residence patterns in all the societies that had been studied. Since grandmothers are thought to be much more likely to help care for their daughters’ children than their sons’—owing to paternity uncertainty—the fact that most humans raise their children far from maternal grandmothers made any evolutionary role for them seem unlikely.

But then in 2004 anthropologist Helen Alvarez reexamined Murdock’s analysis of residence patterns and concluded that pronouncements about widespread patrilocality were based on a great deal of guesswork. After eliminating societies for which too little evidence existed to determine the nature of their residence practices, Alvarez calculated that the majority of the remaining societies were bilocal, which means couples move back and forth between the mother’s and the father’s groups. Citing “The Alvarez Corrective” and other evidence, Hrdy concludes,

Instead of some highly conserved tendency, the cross-cultural prevalence of patrilocal residence patterns looks less like an evolved human universal than a more recent adaptation to post-Pleistocene conditions, as hunters moved into northern climes where women could no longer gather wild plants year-round or as groups settled into circumscribed areas. (246)

But Hrdy extends the cast of alloparents to include a mother’s preadult daughters, as well as fathers and their extended families, although the male contribution is highly variable across cultures (and variable too of course among individual men).

With the observation that human infants rely on multiple caregivers throughout development, Hrdy suggests the mystery of why selection favored the retention and elaboration of mind reading skills in humans but not in other apes can be solved by considering the life-and-death stakes for human babies trying to understand the intentions of mothers and others. She writes,

Babies passed around in this way would need to exercise a different skill set in order to monitor their mothers’ whereabouts. As part of the normal activity of maintaining contact both with their mothers and with sympathetic alloparents, they would find themselves looking for faces, staring at them, and trying to read what they reveal. (121)

Mothers, of course, would also have to be able to read the intentions of others whom they might consider handing their babies over to. So the selection pressure occurs on both sides of the generational divide. And now that she’s proposed her candidate for the single most pivotal transition in human evolution Hrdy’s next task is to place it in a sequence of other important evolutionary developments.

Without a doubt, highly complex coevolutionary processes were involved in the evolution of extended lifespans, prolonged childhoods, and bigger brains. What I want to stress here, however, is that cooperative breeding was the pre-existing condition that permitted the evolution of these traits in the hominin line. Creatures may not need big brains to evolve cooperative breeding, but hominins needed shared care and provisioning to evolve big brains. Cooperative breeding had to come first. (277)

*****

Flipping through Mother Nature, a book I first read over ten years ago, I can feel some of the excitement I must have experienced as a young student of behavioral science, having graduated from the pseudoscience of Freud and Jung to the more disciplined—and in its way far more compelling—efforts of John Bowlby, on a path, I was sure, to becoming a novelist, and now setting off into this newly emerging field with the help of a great scientist who saw the value of incorporating literature and art into her arguments, not merely as incidental illustrations retrofitted to recently proposed principles, but as sources of data in their own right, and even as inspiration potentially lighting the way to future discovery. To perceive, to comprehend, we must first imagine. And stretching the mind to dimensions never before imagined is what art is all about.

Yet there is an inescapable drawback to massive books like Mother Nature—for writers and readers alike—which is that any effort to grasp and convey such a vast array of findings and theories comes with the risk of casual distortion, since the minutiae mastered by the experts in any subdiscipline will almost inevitably be heeded insufficiently in the attempt to conscript what appear to be basic points in the service of a broader perspective. Even more discouraging is the assurance that any intricate tapestry woven of myriad empirical threads will inevitably be unraveled by ongoing research. Your tapestry is really a snapshot taken from a distance of a field in flux, and no sooner does the shutter close than the beast continues along the path of its stubbornly unpredictable evolution.

When Mothers and Others was published just four years ago in 2009, for instance, reasoning based on the theory of kin selection led most anthropologists to assume, as Hrdy states, that “forager communities are composed of flexible assemblages of close and more distant blood relations and kin by marriage” (132).

This assumption seems to have been central to the thinking that led to the principal theory she lays out in the book, as she explains that “in foraging contexts the majority of children alloparents provision are likely to be cousins, nephews, and nieces rather than unrelated children” (158). But as theories evolve, old assumptions come under new scrutiny, and in an article published in the journal Science in March of 2011, anthropologist Kim Hill and his colleagues report that, after analyzing the residence and relationship patterns of 32 modern foraging societies, they concluded that “most individuals in residential groups are genetically unrelated” (1286). In science, two years can make a big difference. This same study does, however, bolster a different pillar of Hrdy’s argument by demonstrating that men relocate to their wives’ groups as often as women relocate to their husbands’, lending further support to Alvarez’s corrective of Murdock’s data.

Even if every last piece of evidence she marshals in her case for how pivotal the transition to cooperative breeding was in the evolution of mutual understanding in humans is overturned, Hrdy’s painstaking efforts to develop her theory and lay it out so comprehensively, so compellingly, and so artfully will not have been wasted. Darwin once wrote that “all observation must be for or against some view to be of any service,” but many scientists, trained as they are to keep their eyes on the data and to avoid the temptation of building grand edifices on foundations of inference and speculation, look askance at colleagues who dare to comment publicly on fields outside their specialties, especially in cases like Jared Diamond’s where their efforts end up winning them Pulitzers and guaranteed audiences for their future works.

But what use are legions of researchers with specialized knowledge hermetically partitioned by narrowly focused journals and conferences of experts with homogeneous interests? Science is contentious by nature, so whenever a book gains notoriety with a nonscientific audience we can count on groaning from the author’s colleagues as they rush to assure us what we’ve read is a misrepresentation of their field. But stand-alone findings, no matter how numerous, no matter how central they are to researchers’ daily concerns, can’t compete with the grand holistic visions of the Diamonds, Hrdys, or Wilsons, imperfect and provisional as they must be, when it comes to inspiring the next generation of scientists. Nor can any number of correlation coefficients or regression analyses spark anything like the same sense of wonder that comes from even a glimmer of understanding about how a new discovery fits within, and possibly transforms, our conception of life and the universe in which it evolved. The trick, I think, is to read and ponder books like the ones Sarah Blaffer Hrdy writes as soon as they’re published—but to be prepared all the while, as soon as you’re finished reading them, to read and ponder the next one, and the one after that.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

Read More
Dennis Junk

Let's Play Kill Your Brother: Fiction as a Moral Dilemma Game

Anthropologist Jean Briggs discovered one of the keys to Inuit peacekeeping in the style of play adults use to engage children. She describes the games in her famous essay, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” and in so doing, probably unknowingly, lays the groundwork for an understanding of how our love of fiction evolved, along with our moral sensibilities.

            Season 3 of Breaking Bad opens with two expressionless Mexican men in expensive suits stepping out of a Mercedes, taking a look around the peasant village they’ve just arrived in, and then dropping to the ground to crawl on their knees and elbows to a candlelit shrine where they leave an offering to Santa Muerte, along with a crude drawing of the meth cook known as Heisenberg, marking him for execution. We later learn that the two men, Leonel and Marco, who look almost identical, are in fact twins (played by Daniel and Luis Moncada), and that they are the cousins of Tuco Salamanca, a meth dealer and cartel affiliate they believe Heisenberg betrayed and killed. We also learn that they kill people themselves as a matter of course, without registering the slightest emotion and without uttering a word to each other to mark the occasion. An episode later in the season, after we’ve been made amply aware of how coldblooded these men are, begins with a flashback to a time when they were just boys fighting over an action figure as their uncle talks cartel business on the phone nearby. After Marco gets tired of playing keep-away, he tries to provoke Leonel further by pulling off the doll’s head, at which point Leonel runs to his Uncle Hector, crying, “He broke my toy!”

“He’s just having fun,” Hector says, trying to calm him. “You’ll get over it.”

“No! I hate him!” Leonel replies. “I wish he was dead!”

Hector’s expression turns grave. After a moment, he calls Marco over and tells him to reach into the tub of melting ice beside his chair to get him a beer. When the boy leans over the tub, Hector shoves his head into the water and holds it there. “This is what you wanted,” he says to Leonel. “Your brother dead, right?” As the boy frantically pulls on his uncle’s arm trying to free his brother, Hector taunts him: “How much longer do you think he has down there? One minute? Maybe more? Maybe less? You’re going to have to try harder than that if you want to save him.” Leonel starts punching his uncle’s arm but to no avail. Finally, he rears back and punches Hector in the face, prompting him to release Marco and rise from his chair to stand over the two boys, who are now kneeling beside each other. Looking down at them, he says, “Family is all.”

The scene serves several dramatic functions. By showing the ruthless and violent nature of the boys’ upbringing, it intensifies our fear on behalf of Heisenberg, who we know is actually Walter White, a former chemistry teacher and family man from a New Mexico suburb who only turned to crime to make some money for his family before his lung cancer kills him. It also goes some distance toward humanizing the brothers by giving us insight into how they became the mute, mechanical murderers they are when we’re first introduced to them. The bond between the two men and their uncle will be important in upcoming episodes as well. But the most interesting thing about the scene is that it represents in microcosm the single most important moral dilemma of the whole series.

Marco and Leonel are taught to do violence if need be to protect their family. Walter, the show’s central character, gets involved in the meth business for the sake of his own family, and as he continues getting more deeply enmeshed in the world of crime he justifies his decisions at each juncture by saying he’s providing for his wife and kids. But how much violence can really be justified, we’re forced to wonder, with the claim that you’re simply protecting or providing for your family? The entire show we know as Breaking Bad can actually be conceived of as a type of moral exercise like the one Hector puts his nephews through, designed to impart or reinforce a lesson, though the lesson of the show is much more complicated. It may even be the case that our fondness for fictional narratives more generally, like the ones we encounter in novels and movies and TV shows, originated in our need as a species to develop and hone complex social skills involving powerful emotions and difficult cognitive calculations.

Most of us watching Breaking Bad probably feel Hector went way too far with his little lesson, and indeed I’d like to think not too many parents or aunts and uncles would be willing to risk drowning a kid to reinforce the bond between him and his brother. But presenting children with frightening and stressful moral dilemmas to guide them through major lifecycle transitions—weaning, the birth of siblings, adoptions—which tend to arouse severe ambivalence, can be an effective way to encourage moral development and instill traditional values. The ethnographer Jean Briggs has found that among the Inuit peoples whose cultures she studies, adults frequently engage children in what she calls “playful dramas” (173), which entail hypothetical moral dilemmas that put the children on the hot seat as they struggle to come up with a solution. She writes about these lessons, which strike many outsiders as a cruel form of teasing by the adults, in “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” a chapter she contributed to a 1994 anthology of anthropological essays on peace and conflict. In one example Briggs recounts,

A mother put a strange baby to her breast and said to her own nursling: “Shall I nurse him instead of you?” The mother of the other baby offered her breast to the rejected child and said: “Do you want to nurse from me? Shall I be your mother?” The child shrieked a protest shriek. Both mothers laughed. (176)

This may seem like sadism on the part of the mothers, but it probably functioned to soothe the bitterness arising from the child’s jealousy of a younger nursling. It would also help to settle some of the ambivalence toward the child’s mother, which comes about inevitably as a response to disciplining and other unavoidable frustrations.

Another example Briggs describes seems even more pointlessly sadistic at first glance. A little girl’s aunt takes her hand and puts it on a little boy’s head, saying, “Pull his hair.” The girl doesn’t respond, so her aunt yanks on the boy’s hair herself, making him think the girl had done it. They quickly become embroiled in a “battle royal,” urged on by several adults who find it uproarious. These adults do, however, end up stopping the fight before any serious harm can be done. As horrible as this trick may seem, Briggs believes it serves to instill in the children a strong distaste for fighting because the experience is so unpleasant for them. They also learn “that it is better not to be noticed than to be playfully made the center of attention and laughed at” (177). What became clear to Briggs over time was that the teasing she kept witnessing wasn’t just designed to teach specific lessons but that it was also tailored to the child’s specific stage of development. She writes,

Indeed, since the games were consciously conceived of partly as tests of a child’s ability to cope with his or her situation, the tendency was to focus on a child’s known or expected difficulties. If a child had just acquired a sibling, the game might revolve around the question: “Do you love your new baby sibling? Why don’t you kill him or her?” If it was a new piece of clothing that the child had acquired, the question might be: “Why don’t you die so I can have it?” And if the child had been recently adopted, the question might be: “Who’s your daddy?” (172)

As unpleasant as these tests can be for the children, they never entail any actual danger—Inuit adults would probably agree Hector Salamanca went a bit too far—and they always take place in circumstances and settings where the only threats and anxieties come from the hypothetical, playful dilemmas and conflicts. Briggs explains,

A central idea of Inuit socialization is to “cause thought”: isumaqsayuq. According to [Arlene] Stairs, isumaqsayuq, in North Baffin, characterizes Inuit-style education as opposed to the Western variety. Warm and tender interactions with children help create an atmosphere in which thought can be safely caused, and the questions and dramas are well designed to elicit it. More than that, and as an integral part of thought, the dramas stimulate emotion. (173)

Part of the exercise then seems to be to introduce the children to their own feelings. Prior to having their sibling’s life threatened, the children may not have any idea how they’d feel in the event of that sibling’s death. After the test, however, it becomes much more difficult for them to entertain thoughts of harming their brother or sister—the thought alone will probably be unpleasant.

Briggs also points out that the games send the implicit message to the children that they can be trusted to arrive at the moral solution. Hector knows Leonel won’t let his brother drown—and Leonel learns that his uncle knows this about him. The Inuit adults who tease and tempt children are letting them know they have faith in the children’s ability to resist their selfish or aggressive impulses. Discussing Briggs’s work in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame, anthropologist Christopher Boehm suggests that evolution has endowed children with the social and moral emotions we refer to collectively as consciences, but these inborn moral sentiments need to be activated and shaped through socialization. He writes,

On the one side there will always be our usefully egoistic selfish tendencies, and on the other there will be our altruistic or generous impulses, which also can advance our fitness because altruism and sympathy are valued by our peers. The conscience helps us to resolve such dilemmas in ways that are socially acceptable, and these Inuit parents seem to be deliberately “exercising” the consciences of their children to make morally socialized adults out of them. (226)

The Inuit-style moral dilemma games seem strange, even shocking, to people from industrialized societies, and so it’s clear they’re not a normal part of children’s upbringing in every culture. They don’t even seem to be all that common among hunter-gatherers outside the region of the Arctic. Boehm writes, however,

Deliberately and stressfully subjecting children to nasty hypothetical dilemmas is not universal among foraging nomads, but as we’ll see with Nisa, everyday life also creates real moral dilemmas that can involve Kalahari children similarly. (226)

Boehm goes on to recount an episode from anthropologist Marjorie Shostak’s famous biography Nisa: The Life and Words of a !Kung Woman to show that parents all the way on the opposite side of the world from where Briggs did her fieldwork sometimes light on similar methods for stimulating their children’s moral development.

Nisa seems to have been a greedy and impulsive child. When her pregnant mother tried to wean her, she would have none of it. At one point, she even went so far as to sneak into the hut while her mother was asleep and try to suckle without waking her up. Throughout the pregnancy, Nisa continually expressed ambivalence toward the upcoming birth of her sibling, so much so that her parents anticipated there might be some problems. The !Kung resort to infanticide in certain dire circumstances, and Nisa’s parents probably reasoned she was at least somewhat familiar with the coping mechanism many other parents used when killing a newborn was necessary. What they’d do is treat the baby as an object, not naming it or in any other way recognizing its identity as a family member. Nisa explained to Shostak how her parents used this knowledge to impart a lesson about her baby brother.

After he was born, he lay there, crying. I greeted him, “Ho, ho, my baby brother! Ho, ho, I have a little brother! Some day we’ll play together.” But my mother said, “What do you think this thing is? Why are you talking to it like that? Now, get up and go back to the village and bring me my digging stick.” I said, “What are you going to dig?” She said, “A hole. I’m going to dig a hole so I can bury the baby. Then you, Nisa, will be able to nurse again.” I refused. “My baby brother? My little brother? Mommy, he’s my brother! Pick him up and carry him back to the village. I don’t want to nurse!” Then I said, “I’ll tell Daddy when he comes home!” She said, “You won’t tell him. Now, run back and bring me my digging stick. I’ll bury him so you can nurse again. You’re much too thin.” I didn’t want to go and started to cry. I sat there, my tears falling, crying and crying. But she told me to go, saying she wanted my bones to be strong. So, I left and went back to the village, crying as I walked. (The weaning episode occurs on pgs. 46-57)

Again, this may strike us as cruel, but by threatening her brother’s life, Nisa’s mother succeeded in triggering her natural affection for him, thus tipping the scales of her ambivalence to ensure the protective and loving feelings won out over the bitter and jealous ones. This example was extreme enough that Nisa remembered it well into adulthood, but Boehm sees it as evidence that real life reliably offers up dilemmas parents all over the world can use to instill morals in their children. He writes,

I believe that all hunter-gatherer societies offer such learning experiences, not only in the real-life situations children are involved with, but also in those they merely observe. What the Inuit whom Briggs studied in Cumberland Sound have done is to not leave this up to chance. And the practice would appear to be widespread in the Arctic. Children are systematically exposed to life’s typical stressful moral dilemmas, and often hypothetically, as a training ground that helps to turn them into adults who have internalized the values of their groups. (234)

One of the reasons such dilemmas, whether real or hypothetical or merely observed, are effective as teaching tools is that they bypass the threat to personal autonomy that tends to accompany direct instruction. Imagine Tío Salamanca simply scolding Leonel for wishing his brother dead—it would have only aggravated his resentment and sparked defiance. Leonel would probably also harbor some bitterness toward his uncle for unjustly defending Marco. In any case, he would have been stubbornly resistant to the lesson.

Winston Churchill nailed the sentiment when he said, “Personally, I am always ready to learn, although I don’t always like being taught.” The Inuit-style moral dilemmas force the children to come up with the right answer on their own, a task that requires the integration and balancing of short- and long-term desires, individual and group interests, and powerful albeit contradictory emotions. The skills that go into solving such dilemmas are indistinguishable from the qualities we recognize as maturity, self-knowledge, generosity, poise, and wisdom.

For the children Briggs witnessed being subjected to these moral tests, the understanding that the dilemmas were in fact only hypothetical developed gradually as they matured. For the youngest ones, the stakes were real and the solutions were never clear at the outset. Briggs explains that

while the interaction between small children and adults was consistently good-humored, benign, and playful on the part of the adults, it taxed the children to—or beyond—the limits of their ability to understand, pushing them to expand their horizons, and testing them to see how much they had grown since the last encounter. (173)

What this suggests is that there isn’t always a simple declarative lesson—a moral to the story, as it were—imparted in these games. Instead, the solutions to the dilemmas can often be open-ended, and the skills the children practice can thus be more general and abstract than some basic law or principle. Briggs goes on,

Adult players did not make it easy for children to thread their way through the labyrinth of tricky proposals, questions, and actions, and they did not give answers to the children or directly confirm the conclusions the children came to. On the contrary, questioning a child’s first facile answers, they turned situations round and round, presenting first one aspect then another, to view. They made children realize their emotional investment in all possible outcomes, and then allowed them to find their own way out of the dilemmas that had been created—or perhaps, to find ways of living with unresolved dilemmas. Since children were unaware that the adults were “only playing,” they could believe that their own decisions would determine their fate. And since the emotions aroused in them might be highly conflicted and contradictory—love as well as jealousy, attraction as well as fear—they did not always know what they wanted to decide. (174-5)

As the children mature, they become more adept at distinguishing between real and hypothetical problems. Indeed, Briggs suggests one of the ways adults recognize children’s budding maturity is that they begin to treat the dilemmas as a game, ceasing to take them seriously, and ceasing to take themselves as seriously as they did when they were younger.

In his book On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd theorizes that the fictional narratives humans engage one another with in every culture all over the world, be they in the form of religious myths, folklore, or plays and novels, can be thought of as a type of cognitive play—similar to the hypothetical moral dilemmas of the Inuit. He sees storytelling as an adaptation that encourages us to train the mental faculties we need to function in complex societies. The idea is that evolution ensures that adaptive behaviors tend to be pleasurable, and thus many animals playfully and joyously engage in activities in low-stakes, relatively safe circumstances that will prepare them to engage in similar activities that have much higher stakes and are much more dangerous. Boyd explains,

The more pleasure that creatures have in play in safe contexts, the more they will happily expend energy in mastering skills needed in urgent or volatile situations, in attack, defense, and social competition and cooperation. This explains why in the human case we particularly enjoy play that develops skills needed in flight (chase, tag, running) and fight (rough-and-tumble, throwing as a form of attack at a distance), in recovery of balance (skiing, surfing, skateboarding), and individual and team games. (92)

The skills most necessary to survive and thrive in human societies are the same ones Inuit adults help children develop with the hypothetical dilemmas Briggs describes. We should expect fiction, then, to feature similar types of moral dilemmas. Some stories may be designed to convey simple messages—“Don’t hurt your brother,” “Don’t stray from the path”—but others might be much more complicated; they may not even have any viable solutions at all. “Art prepares minds for open-ended learning and creativity,” Boyd writes; “fiction specifically improves our social cognition and our thinking beyond the here and now” (209).

One of the ways the cognitive play we call novels or TV shows differs from Inuit dilemma games is that the fictional characters take over center stage from the individual audience members. Instead of being forced to decide on a course of action ourselves, we watch characters we’ve become emotionally invested in try to come up with solutions to the dilemmas. When these characters are first introduced to us, our feelings toward them will be based on the same criteria we’d apply to real people who could potentially become a part of our social circles. Boyd explains,

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch. (130)

We favor characters who are good team players—who communicate honestly, who show concern for others, and who direct aggression toward enemies and cheats—for obvious reasons, but we also assess them in terms of what they might contribute to the group. Characters with exceptional strength, beauty, intelligence, or artistic ability are always especially attention-worthy. Of course, characters with qualities that make them sometimes an asset and sometimes a liability represent a moral dilemma all on their own—it’s no wonder such characters tend to be so compelling.

The most common fictional dilemma pits a character we like against one or more characters we hate—the good team player versus the power- or money-hungry egoist. We can think of the most straightforward plot as an encroachment of chaos on the providential moral order we might otherwise take for granted. When the bad guy is finally defeated, it’s like a toy that was snatched away from us has just been returned. We embrace the moral order all the more vigorously. But of course our stories aren’t limited to this one basic formula. Around the turn of the last century, the French writer Georges Polti, following up on the work of Italian playwright Carlo Gozzi, tried to write a comprehensive list of all the basic plots in plays and novels, and flipping through his book The Thirty-Six Dramatic Situations, you find that with few exceptions (“Daring Enterprise,” “The Enigma,” “Recovery of a Lost One”) the situations aren’t simply encounters between characters with conflicting goals, or characters who run into obstacles in chasing after their desires. The conflicts are nearly all moral, either between a virtuous character and a less virtuous one or between selfish or greedy impulses and more altruistic ones. Polti’s book could be called The Thirty-Odd Moral Dilemmas in Fiction. Hector Salamanca would be happy (not really) to see the thirteenth situation: “Enmity of Kinsmen,” the first example of which is “Hatred of Brothers” (49).

One type of fictional dilemma that seems to be particularly salient in American society today pits our impulse to punish wrongdoers against our admiration for people with exceptional abilities. Characters like Walter White in Breaking Bad win us over with qualities like altruism, resourcefulness, and ingenuity—but then they go on to behave in strikingly, though somehow not obviously, immoral ways. Variations on Conan Doyle’s Sherlock Holmes abound; he’s the supergenius who’s also a dick (get the double entendre?): the BBC’s Sherlock (by far the best), the movies starring Robert Downey Jr., the upcoming series featuring an Asian female Watson (Lucy Liu)—plus all the minor variations like The Mentalist and House.

Though the idea that fiction is a type of low-stakes training simulation to prepare people cognitively and emotionally to take on difficult social problems in real life may not seem all that earth-shattering, conceiving of stories as analogous to Inuit moral dilemmas designed to exercise children’s moral reasoning faculties can nonetheless help us understand why worries about the examples set by fictional characters are so often misguided. Many parents and teachers noisily complain about sex or violence or drug use in media. Academic literary critics condemn the way this or that author portrays women or minorities. Underlying these concerns is the crude assumption that stories simply encourage audiences to imitate the characters, that those audiences are passive receptacles for the messages—implicit or explicit—conveyed through the narrative. To be fair, these worries may be well placed when it comes to children so young they lack the cognitive sophistication necessary for separating their thoughts and feelings about protagonists from those they have about themselves, and are thus prone to take the hero for a simple model of emulation-worthy behavior. But, while Inuit adults communicate to children that they can be trusted to arrive at a right or moral solution, the moralizers in our culture betray their utter lack of faith in the intelligence and conscience of the people they try to protect from the corrupting influence of stories with imperfect or unsavory characters.

           This type of self-righteous and overbearing attitude toward readers and viewers strikes me as more likely by orders of magnitude to provoke defiant resistance to moral lessons than the North Baffin’s isumaqsayuq approach. In other words, a good story is worth a thousand sermons. But if the moral dilemma at the core of the plot has an easy solution—if you can say precisely what the moral of the story is—it’s probably not a very good story.

Also read

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

And

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

Dennis Junk

The Imp of the Underground and the Literature of Low Status

A famous scene in “Notes from the Underground” echoes a famous study comparing people’s responses to an offense. What are the implications for behavior and personality of having low social status, and how does that play out in fiction? Is Poe’s “Imp of the Perverse” really just an example of our inborn defiance, our raging against the machine?

The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason invites pondering. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about the woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.

            The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.

The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to discern between right and wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.

Unlike our hierarchical ape cousins, present-day peoples still living in small groups as nomadic foragers, in societies like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human, collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarian Behavior,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,

hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)

In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.

None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:

I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)

The billiard player's failure to acknowledge his autonomy outrages the narrator, who then considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,

I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)

The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.

            The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormentors’ reputations through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene which makes it still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,

And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)

So the indignity, it seems, was born not of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.

            The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricy purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.

Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)

But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.

            Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egoism” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interests. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society may satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,

That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)

The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,

One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)

Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,

Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)

This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.

            The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison between the cognitive and physiological responses of southern males to being bumped in a hallway and casually called an asshole and those of northern males. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measures before and after—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the “culture of honor” that emerged in southern regions originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,

Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)

            More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.

An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)

And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.

            Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided participants of his study into either high- or low-status categories and then had them respond to questions about how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:

When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)

The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.

            Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. In a group of nomadic foragers, though, a man endeavoring to knock a would-be alpha down a few pegs would be able to count on the endorsement of most of the other group members. And the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice.

            Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

and

CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE

Dennis Junk

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins – Part 3 of A Crash Course in Multilevel Selection Theory

In “Moral Origins,” anthropologist Christopher Boehm lays out the mind-blowing theory that humans evolved to be cooperative in large part by developing mechanisms to keep powerful men’s selfish impulses in check. These mechanisms included, in rare instances, capital punishment. Once the free-rider problem was addressed, groups could function more as a unit than as a collection of individuals.

In a 1969 account of her time in Labrador studying the culture of the Montagnais-Naskapi people, anthropologist Eleanor Leacock describes how a man named Thomas, who was serving as her guide and informant, responded to two men they encountered while far from home on a hunting trip. The men, whom Thomas recognized but didn’t know very well, were on the brink of starvation. Even though it meant ending the hunting trip early and hence bringing back fewer furs to trade, Thomas gave the hungry men all the flour and lard he was carrying. Leacock figured that Thomas must have felt at least somewhat resentful for having to cut short his trip and that he was perhaps anticipating some return favor from the men in the future. But Thomas didn’t seem the least bit reluctant to help or frustrated by the setback. Leacock kept pressing him for an explanation until he got annoyed with her probing. She writes,

This was one of the very rare times Thomas lost patience with me, and he said with deep, if suppressed anger, “suppose now, not to give them flour, lard—just dead inside.” More revealing than the incident itself were the finality of his tone and the inference of my utter inhumanity in raising questions about his action. (Quoted in Boehm 219)

The phrase “just dead inside” expresses how deeply internalized the ethic of sympathetic giving is for people like Thomas who live in cultures more similar to those our earliest human ancestors created at the time, around 45,000 years ago, when they began leaving evidence of engaging in all the unique behaviors that are the hallmarks of our species. The Montagnais-Naskapi don’t qualify as an example of what anthropologist Christopher Boehm labels Late Pleistocene Appropriate, or LPA, cultures because they had been involved in fur trading with people from industrialized communities going back long before their culture was first studied by ethnographers. But Boehm includes Leacock’s description in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame because he believes Thomas’s behavior is in fact typical of nomadic foragers and because, infelicitously for his research, standard ethnographies seldom cover encounters like the one Thomas had with those hungry acquaintances of his.

In our modern industrialized civilization, people donate blood, volunteer to fight in wars, sign over percentages of their income to churches, and pay to keep organizations like Doctors without Borders and Human Rights Watch in operation even though the people they help live in far-off countries most of us will never visit. One approach to explaining how this type of extra-familial generosity could have evolved is to suggest people who live in advanced societies like ours are, in an important sense, not in their natural habitat. Among evolutionary psychologists, it has long been assumed that in humans’ ancestral environments, most of the people individuals encountered would either be close kin who carried many genes in common, or at the very least members of a moderately stable group they could count on running into again, at which time they would be disposed to repay any favors. Once you take kin selection and reciprocal altruism into account, the consensus held, there was not much left to explain. Whatever small acts of kindness weren’t directed toward kin or done with an expectation of repayment were, in such small groups, probably performed for the sake of impressing all the witnesses and thus improving the social status of the performer. As the biologist Michael Ghiselin once famously put it, “Scratch an altruist and watch a hypocrite bleed.” But this conception of what evolutionary psychologists call the Environment of Evolutionary Adaptedness, or EEA, never sat right with Boehm.

One problem with the standard selfish gene scenario that has just recently come to light is that modern hunter-gatherers, no matter where in the world they live, tend to form bands made up of high percentages of non-related or distantly related individuals. In an article published in Science in March of 2011, anthropologist Kim Hill and his colleagues report the findings of their analysis of thirty-two hunter-gatherer societies. The main conclusion of the study is that the members of most bands are not closely enough related for kin selection to sufficiently account for the high levels of cooperation ethnographers routinely observe. Assuming present-day forager societies are representative of the types of groups our Late Pleistocene ancestors lived in, we can rule out kin selection as a likely explanation for altruism of the sort displayed by Thomas or by modern philanthropists in complex civilizations. Boehm offers us a different scenario, one that relies on hypotheses derived from ethological studies of apes and archeological records of our human prehistory as much as on any abstract mathematical accounting of the supposed genetic payoffs of behaviors.
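
To see why low relatedness is such a problem for the standard account, it helps to spell the accounting out. Kin selection is conventionally summarized by Hamilton’s rule (my gloss here, not something drawn from the Hill article), which says a gene for costly helping can spread only when

\[
r\,b \;>\; c,
\]

where r is the genetic relatedness between helper and recipient, b is the fitness benefit to the recipient, and c is the cost to the helper. For full siblings r is 1/2; for the distantly related or unrelated bandmates Hill and his colleagues found, r hovers near zero, so no realistic benefit-to-cost ratio can satisfy the inequality for the routine, costly sharing ethnographers observe.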

In three cave paintings discovered in Spain that probably date to the dawn of the Holocene epoch around 12,000 years ago, groups of men are depicted with what appear to be bows lifted above their heads in celebration while another man lies dead nearby with one arrow from each of them sticking out of his body. We can only speculate about what these images might have meant to the people who created them, but Boehm points out that all extant nomadic foraging peoples, no matter what part of the world they live in, are periodically forced to reenact dramas that resonate uncannily well with these scenes portrayed in ancient cave art. “Given enough time,” he writes, “any band society is likely to experience a problem with a homicide-prone unbalanced individual. And predictably band members will have to solve the problem by means of execution” (253). One of the more gruesome accounts of such an incident he cites comes from Richard Lee’s ethnography of !Kung Bushmen. After a man named /Twi had killed two men, Lee writes, “A number of people decided that he must be killed.” According to Lee’s informant, a man named =Toma (the symbols before the names represent clicks), the first attempt to kill /Twi was botched, allowing him to return to his hut, where a few people tried to help him. But he ended up becoming so enraged that he grabbed a spear and stabbed a woman in the face with it. When the woman’s husband came to her aid, /Twi shot him with a poisoned arrow, killing him and bringing his total body count to four. =Toma continues the story,

Now everyone took cover, and others shot at /Twi, and no one came to his aid because all those people had decided he had to die. But he still chased after some, firing arrows, but he didn’t hit any more…Then they all fired on him with poisoned arrows till he looked like a porcupine. Then he lay flat. All approached him, men and women, and stabbed his body with spears even after he was dead. (261-2)

The two most important elements of this episode for Boehm are the fact that the death sentence was arrived at through a partial group consensus which ended up being unanimous, and that it was carried out with weapons that had originally been developed for hunting. But this particular case of collectively enacted capital punishment was odd not just in how clumsy it was. Boehm writes,

In this one uniquely detailed description of what seems to begin as a delegated execution and eventually becomes a fully communal killing, things are so chaotic that it’s easy to understand why with hunter-gatherers the usual mode of execution is to efficiently delegate a kinsman to quickly kill the deviant by ambush. (261)

The prevailing wisdom among evolutionary psychologists has long been that any appearance of group-level adaptation, like the collective killing of a dangerous group member, must be an illusory outcome caused by selection at the level of individuals or families. As Steven Pinker explains, “If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate.” To demonstrate that some trait or behavior humans reliably engage in really is for the sake of the group as opposed to the individual engaging in it, there would have to be some conflict between the two motives—serving the group would have to entail incurring some kind of cost for the individual. Pinker explains,

It’s only when humans display traits that are disadvantageous to themselves while benefiting their group that group selection might have something to add. And this brings us to the familiar problem which led most evolutionary biologists to reject the idea of group selection in the 1960s. Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

The ever-present potential for cooperative or altruistic group norms to be subverted by selfish individuals keen on exploitation is known in game theory as the free rider problem. To see how strong selfish individuals can lord it over groups of their conspecifics, we can look to the hierarchically organized bands great apes naturally form.
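
A toy public-goods game makes the logic of the problem concrete. The sketch below is my own illustration with made-up payoff numbers, not anything taken from Boehm: cooperators pay a personal cost to produce a benefit that gets split evenly across the band, so a free rider always nets exactly the cost more than a cooperator does, whatever everyone else is doing.

    # A minimal public-goods sketch of the free-rider problem.
    # All numbers are illustrative assumptions, not data.
    COST, BENEFIT, GROUP_SIZE = 1.0, 3.0, 30

    def payoffs(n_cooperators):
        """Return (cooperator payoff, free-rider payoff) for one round."""
        pool = n_cooperators * BENEFIT / GROUP_SIZE   # everyone's equal share
        return pool - COST, pool                      # riders skip the cost

    for n in (29, 20, 10):
        c, f = payoffs(n)
        print(f"{n} cooperators: cooperator nets {c:.2f}, free rider nets {f:.2f}")

Run it and the free rider comes out ahead by the full cost in every case, which is why, absent the gossip, ostracism, and occasional executions Boehm describes, a disposition to ride should spread within any single group at the cooperators’ expense.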

In groups of chimpanzees, for instance, an alpha male gets to eat his fill of the most nutritious food, even going so far at times as seizing meat from the subordinates who hunted it down. The alpha chimp also works to secure, as best he can, sole access to reproductively receptive females. For a hierarchical species like this, status is a winner-take-all competition, and so genes for dominance and cutthroat aggression proliferate. Subordinates tolerate being bullied because they know the more powerful alpha will probably kill them if they try to stand up for themselves. If instead of mounting some ill-fated resistance, however, they simply bide their time, they may eventually grow strong enough to more effectively challenge for the top position. Meanwhile, they can also try to sneak off with females to couple behind the alpha’s back. Boehm suggests that two competing motives keep hierarchies like this in place: one is a strong desire for dominance and the other is a penchant for fear-based submission. What this means is that subordinates only ever submit ambivalently. They even have a recognizable vocalization, which Boehm transcribes as the “waa,” that they use to signal their discontent. In his 1999 book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Boehm explains,

When an alpha male begins to display and a subordinate goes screaming up a tree, we may interpret this as a submissive act of fear; but when that same subordinate begins to waa as the display continues, it is an open, hostile expression of insubordination. (167)

Since the distant ancestor humans shared in common with chimpanzees likely felt this same ambivalence toward alphas, Boehm theorizes that it served as a preadaptation for the type of treatment modern human bullies can count on in every society of nomadic foragers anthropologists have studied. “I believe,” he writes, “that a similar emotional and behavioral orientation underlies the human moral community’s labeling of domination behaviors as deviant” (167).

Boehm has found accounts of subordinate chimpanzees, bonobos, and even gorillas banding together with one or more partners to take on an excessively domineering alpha—though there was only one case in which this happened with gorillas, and the animals in question lived in captivity. But humans are much better at this type of coalition building. Two of the most crucial developments in our own lineage that led to the differences in social organization between ourselves and the other apes were likely an increased capacity for coordinated hunting and the invention of weapons designed to kill big game. As Boehm explains,

Weapons made possible not only killing at a distance, but far more effective threat behavior; brandishing a projectile could turn into an instant lethal attack with relatively little immediate risk to the attacker. (175)

Deadly weapons fundamentally altered the dynamic between lone would-be bullies and those they might try to dominate. As Boehm points out, “after weapons arrived, the camp bully became far more vulnerable” (177). With the advent of greater coalition-building skills and the invention of tools for efficient killing, the opportunities for an individual to achieve alpha status quickly vanished.

            It’s dangerous to assume that any one group of modern people provides the key to understanding our Pleistocene ancestors, but when every group living with technology and subsistence methods similar to those of our ancestors follows a similar pattern, it’s much more suggestive. “A distinctively egalitarian political style,” Boehm writes, “is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-6). This egalitarianism must be vigilantly guarded because “A potential bully always seems to be waiting in the wings” (68). Boehm explains what he believes is the underlying motivation,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

The methods used to prevent powerful or influential individuals from acquiring too much control include such collective behaviors as gossiping, ostracism, banishment, and even, in extreme cases, execution. “In egalitarian hierarchies the pyramid of power is turned upside down,” Boehm explains, “with a politically united rank and file dominating the alpha-male types” (66).

The implications for theories about our ancestors are profound. The groups humans were living in as they evolved the traits that made them what we recognize today as human were highly motivated and well-equipped to both prevent and, when necessary, punish the type of free-riding that evolutionary psychologists and other selfish gene theorists insist would undermine group cohesion. Boehm makes this point explicit, writing,

The overall hypothesis is straightforward: basically, the advent of egalitarianism shifted the balance of forces within natural selection so that within-group selection was substantially debilitated and between-group selection was amplified. At the same time, egalitarian moral communities found themselves uniquely positioned to suppress free-riding… at the level of phenotype. With respect to the natural selection of behavior genes, this mechanical formula clearly favors the retention of altruistic traits. (199)
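
Boehm’s verbal formula has a standard mathematical counterpart that he doesn’t invoke himself but that readers of this series may find clarifying: the Price equation, which partitions the total change in the frequency of a trait like altruism into a between-group term and a within-group term,

\[
\Delta\bar{z} \;=\; \underbrace{\frac{\operatorname{Cov}(W_g,\,\bar{z}_g)}{\bar{W}}}_{\text{between-group selection}} \;+\; \underbrace{\frac{\operatorname{E}\!\left[\operatorname{Cov}(w_i,\,z_i)\right]}{\bar{W}}}_{\text{within-group selection}},
\]

where z is the trait, W_g and z̄_g are a group’s average fitness and average trait value, and the inner covariance runs over individuals within each group. Suppressing free riders shrinks the fitness advantage egoists enjoy over altruists inside each group, driving the second term toward zero, while leaving differences between groups intact to power the first term: Boehm’s “debilitated” and “amplified” rendered in symbols.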

This is the point where he picks up the argument again in Moral Origins. The story of the homicidal man named /Twi is an extreme example of the predictable results of overly aggressive behaviors. Any nomadic forager who intransigently tries to throw his weight around the way alpha male chimpanzees do will probably end up getting “porcupined” (158) like /Twi and the three men depicted in the Magdalenian cave art in Spain.

Murder is an extreme example of the types of free-riding behavior that nomadic foragers reliably sanction. Any politically overbearing treatment of group mates, particularly the issuing of direct commands, is considered a serious moral transgression. But alongside this disapproval of bossy or bullying behavior there exists an ethic of sharing and generosity, so people who are thought to be stingy are equally disliked. As Boehm writes in Hierarchy in the Forest, “Politically egalitarian foragers are also, to a significant degree, materially egalitarian” (70). The image many of us grew up with of the lone prehistoric male hunter going out to stalk his prey, bringing it back as a symbol of his prowess in hopes of impressing beautiful and fertile females, turns out to be completely off-base. In most hunter-gatherer groups, the males hunt in teams, and whatever they kill gets turned over to someone else who distributes the meat evenly among all the men so each of their families gets an equal portion. In some cultures, “the hunter who made the kill gets a somewhat larger share,” Boehm writes in Moral Origins, “perhaps as an incentive to keep him at his arduous task” (185). But every hunter knows that most of the meat he procures will go to other group members—and the sharing is done without any tracking of who owes whom a favor. Boehm writes,

The models tell us that the altruists who are helping nonkin more than they are receiving help must be “compensated” in some way, or else they—meaning their genes—will go out of business. What we can be sure of is that somehow natural selection has managed to work its way around these problems, for surely humans have been sharing meat and otherwise helping others in an unbalanced fashion for at least 45,000 years. (184)

Following biologist Richard Alexander, Boehm sees this type of group beneficial generosity as an example of “indirect reciprocity.” And he believes it functions as a type of insurance policy, or, as anthropologists call it, “variance reduction.” It’s often beneficial for an individual’s family to pay in, as it were, but much of the time people contribute knowing full well the returns will go to others.
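
The insurance logic is easy to see with a little arithmetic. In the toy simulation below, which uses invented numbers rather than any real foraging data, each hunter succeeds on a fifth of days; pooling the meat leaves everyone’s average intake unchanged but dramatically smooths out the dry streaks that would otherwise be lethal.

    import random, statistics

    # Toy model of "variance reduction" through meat sharing.
    # All parameters are invented for illustration.
    random.seed(1)
    HUNTERS, DAYS, P_KILL, MEAT = 10, 1000, 0.2, 50.0

    solo, shared = [], []
    for _ in range(DAYS):
        kills = [MEAT if random.random() < P_KILL else 0.0 for _ in range(HUNTERS)]
        solo.append(kills[0])                # hunter 0 keeps only his own catch
        shared.append(sum(kills) / HUNTERS)  # the band splits everything evenly

    for label, xs in (("solo", solo), ("shared", shared)):
        print(f"{label}: mean {statistics.mean(xs):5.1f}, st. dev. {statistics.stdev(xs):5.1f}")

Both strategies yield the same mean (about 10 units a day), but sharing cuts the day-to-day standard deviation by roughly the square root of the band size, which is the whole point of paying into the pool even when the returns mostly go to others.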

Less extreme cases than the psychopaths who end up porcupined involve what Boehm calls “meat-cheaters.” A prominent character in Moral Origins is an Mbuti Pygmy man named Cephu, whose story was recounted in rich detail by the anthropologist Colin Turnbull. One of the cooperative hunting strategies the Pygmies use has them stretching a long net through the forest while other group members create a ruckus to scare animals into it. Each net holder is entitled to whatever runs into his section of the net, which he promptly spears to death. What Cephu did was sneak farther ahead of the other men to improve his chances of having an animal run into his section of the net before the others. Unfortunately for him, everyone quickly realized what was happening. Returning to the camp after depositing his ill-gotten gains in his hut, Cephu hears someone call out that he is an animal. Beyond that, everyone was silent. Turnbull writes,

Cephu walked into the group, and still nobody spoke. He went to where a youth was sitting in a chair. Usually he would have been offered a seat without his having to ask, and now he did not dare to ask, and the youth continued to sit there in as nonchalant a manner as he could muster. Cephu went to another chair where Amabosu was sitting. He shook it violently when Amabosu ignored him, at which point he was told, “Animals lie on the ground.” (Quoted 39)

Thus began the accusations. Cephu burst into tears and tried to claim that his repositioning himself in the line was an accident. No one bought it. Next, he made the even bigger mistake of trying to suggest he was entitled to his preferential position. “After all,” Turnbull writes, “was he not an important man, a chief, in fact, of his own band?” At this point, Manyalibo, who was taking the lead in bringing Cephu to task, decided that the matter was settled. He said that

there was obviously no use prolonging the discussion. Cephu was a big chief, and Mbuti never have chiefs. And Cephu had his own band, of which he was chief, so let him go with it and hunt elsewhere and be a chief elsewhere. Manyalibo ended a very eloquent speech with “Pisa me taba” (“Pass me the tobacco”). Cephu knew he was defeated and humiliated. (40)

The guilty verdict Cephu had to accept to avoid being banished from the band came with the sentence that he had to relinquish all the meat he brought home that day. His attempt at free-riding therefore resulted not only in a loss of food but also in a much longer-lasting blow to his reputation.

Boehm has built a large database from ethnographic studies like Lee’s and Turnbull’s, and it shows that in their handling of meat-cheaters and self-aggrandizers nomadic foragers all over the world use strategies similar to those of the Pygmies. First comes the gossip about your big ego, your dishonesty, or your cheating. Soon you’ll recognize a growing reluctance of others to hunt with you, or you’ll have a tough time wooing a mate. Next, you may be directly confronted by someone delegated by a quorum of group members. If you persist in your free-riding behavior, especially if it entails murder or serious attempts at domination, you’ll probably be ambushed and turned into a porcupine. Alexander put forth the idea of “reputational selection,” whereby individuals benefit in terms of survival and reproduction from being held in high esteem by their group mates. Boehm prefers the term “social selection,” however, because it encompasses the idea that people are capable of figuring out what’s best for their groups and codifying it in their culture. How well an individual internalizes a group’s norms has profound effects on his or her chances for survival and reproduction. Boehm’s theory is that our consciences are the mechanisms we’ve evolved for such internalization.

Though there remain quite a few chicken-or-egg conundrums to work out, Boehm has cobbled together archeological evidence from butchering sites, primatological evidence from observations of apes in the wild and in captivity, and quantitative analyses of ethnographic records to put forth a plausible history of how our consciences evolved and how we became so concerned for the well-being of people we may barely know. As humans began hunting larger game, demanding greater coordination and more effective long-distance killing tools, an already extant resentment of alphas expressed itself in collective suppression of bullying behavior. And as our developing capacity for language made it possible to keep track of each other’s behavior long-term, it started to become important for everyone to maintain a reputation for generosity, cooperativeness, and even-temperedness. Boehm writes,

Ultimately, the social preferences of groups were able to affect gene pools profoundly, and once we began to blush with shame, this surely meant that the evolution of conscientious self-control was well under way. The final result was a full-blown, sophisticated modern conscience, which helps us to make subtle decisions that involve balancing selfish interests in food, power, sex, or whatever against the need to maintain a decent personal moral reputation in society and to feel socially valuable as a person. The cognitive beauty of having such a conscience is that it directly facilitates making useful social decisions and avoiding negative social consequences. Its emotional beauty comes from the fact that we in effect bond with the values and rules of our groups, which means we can internalize our group’s mores, judge ourselves as well as others, and, hopefully, end up with self-respect. (173)

Social selection is actually a force that acts on individuals, selecting for those who can most strategically suppress their own selfish impulses. But in establishing a mechanism that guards the group norm of cooperation against free riders, it increased the potential of competition between groups and quite likely paved the way for altruism of the sort Leacock’s informant Thomas displayed. Boehm writes,

Thomas surely knew that if he turned down the pair of hungry men, they might “bad-mouth” him to people he knew and thereby damage his reputation as a properly generous man. At the same time, his costly generosity might very well be mentioned when they arrived back in their camp, and through the exchange of favorable gossip he might gain in his public esteem in his own camp. But neither of these socially expedient personal considerations would account for the “dead” feeling he mentioned with such gravity. He obviously had absorbed his culture’s values about sharing and in fact had internalized them so deeply that being selfish was unthinkable. (221)

In response to Ghiselin’s cynical credo, “Scratch an altruist and watch a hypocrite bleed,” Boehm points out that the best way to garner the benefits of kindness and sympathy is to actually be kind and sympathetic. He further argues that if altruism is being selected for at the level of phenotypes (the end-products of genetic processes), we should expect it to have an impact at the level of genes. In a sense, we’ve bred altruism into ourselves. Boehm writes,

If such generosity could be readily faked, then selection by altruistic reputation simply wouldn’t work. However, in an intimate band of thirty that is constantly gossiping, it’s difficult to fake anything. Some people may try, but few are likely to succeed. (189)

The result of the social selection dynamic that began all those millennia ago is that today generosity is in our bones. There are of course circumstances that can keep our generous impulses from manifesting themselves, and those impulses have a sad tendency to be directed toward members of our own cultural groups and no one else. But Boehm offers a slightly more optimistic formula than Ghiselin’s:

I do acknowledge that our human genetic nature is primarily egoistic, secondarily nepotistic, and only rather modestly likely to support acts of altruism, but the credo I favor would be “Scratch an altruist, and watch a vigilant and successful suppressor of free riders bleed. But watch out, for if you scratch him too hard, he and his group may retaliate and even kill you.” (205)

Read Part 1:

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

Also of interest:

THE FEMINIST SOCIOBIOLOGIST: AN APPRECIATION OF SARAH BLAFFER HRDY DISGUISED AS A REVIEW OF “MOTHERS AND OTHERS: THE EVOLUTIONARY ORIGINS OF MUTUAL UNDERSTANDING”

Dennis Junk

A Crash Course in Multilevel Selection Theory Part 2: Steven Pinker Falls Prey to the Averaging Fallacy Sober and Wilson Tried to Warn Him About

Elliott Sober and David Sloan Wilson’s “Unto Others” lays out a theoretical framework for how selection at the level of the group could have led to the evolution of greater cooperation among humans. They point out the mistake many theorists make in thinking that because evolution can be defined as changes in gene frequencies, only genes matter. But that definition leaves aside the question of how traits and behaviors evolve, i.e., what dynamics lead to the changes in gene frequencies. Steven Pinker failed to grasp their point.

If you were a woman applying to graduate school at the University of California at Berkeley in 1973, you would have had a 35 percent chance of being accepted. If you were a man, your chances would have been significantly better. Forty-four percent of male applicants got accepted that year. Apparently, at this early stage of the feminist movement, even a school as notoriously progressive as Berkeley still discriminated against women. But not surprisingly, when confronted with these numbers, the women of the school were ready to take action to right the supposed injustice. After a lawsuit was filed charging admissions offices with bias, however, a department-by-department examination was conducted which produced a curious finding: not a single department admitted a significantly higher percentage of men than women. In fact, there was a small but significant trend in the opposite direction—a bias against men.

What this means is that the aggregate probability of being accepted into grad school was dramatically different from the probabilities you find when you disaggregate the numbers by the relevant groupings, in this case the academic departments assessing the applications. This discrepancy called for an explanation, and statisticians had had one on hand since 1951.

This paradoxical finding fell into place when it was noticed that women tended to apply to departments with low acceptance rates. To see how this can happen, imagine that 90 women and 10 men apply to a department with a 30 percent acceptance rate. This department does not discriminate and therefore accepts 27 women and 3 men. Another department, with a 60 percent acceptance rate, receives applications from 10 women and 90 men. This department doesn’t discriminate either and therefore accepts 6 women and 54 men. Considering both departments together, 100 men and 100 women applied, but only 33 women were accepted, compared with 57 men. A bias exists in the two departments combined, despite the fact that it does not exist in any single department, because the departments contribute unequally to the total number of applicants who are accepted. (25)

This is how the counterintuitive statistical phenomenon known as Simpson’s Paradox is explained by philosopher Elliott Sober and biologist David Sloan Wilson in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, in which they argue that the same principle can apply to the relative proliferation of organisms in groups with varying percentages of altruists and selfish actors. In this case, the benefit to the group of having more altruists is analogous to the higher acceptance rates for grad school departments which tend to receive a disproportionate number of applications from men. And the counterintuitive outcome is that, in an aggregated population of groups, altruists have an advantage over selfish actors—even though within each of those groups selfish actors outcompete altruists.  
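
For anyone who wants to check the arithmetic, here is a minimal sketch in Python that reproduces the department example quoted above; every figure comes straight from Sober and Wilson’s passage.

```python
# A minimal sketch of Simpson's paradox using the figures quoted
# above from Unto Others (page 25).

# Each department: (women applying, men applying, acceptance rate).
# Neither department discriminates: one rate applies to both sexes.
departments = [
    (90, 10, 0.30),
    (10, 90, 0.60),
]

women_applied = men_applied = women_accepted = men_accepted = 0
for women, men, rate in departments:
    women_applied += women
    men_applied += men
    women_accepted += women * rate
    men_accepted += men * rate

# Aggregated, a "bias" appears that exists in no single department,
# because women disproportionately applied to the selective department.
print(f"women accepted: {women_accepted:.0f} of {women_applied}")  # 33 of 100
print(f"men accepted: {men_accepted:.0f} of {men_applied}")        # 57 of 100
```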

            Sober and Wilson caution that this assessment is based on certain critical assumptions about the population in question. “This model,” they write, “requires groups to be isolated as far as the benefits of altruism are concerned but nevertheless to compete in the formation of new groups” (29). It also requires that altruists and nonaltruists somehow “become concentrated in different groups” (26) so the benefits of altruism can accrue to one while the costs of selfishness accrue to the other. One type of group that follows this pattern is a family, whose members resemble each other in terms of their traits—including a propensity for altruism—because they share many of the same genes. In humans, families tend to be based on pair bonds established for the purpose of siring and raising children, forming a unit that remains stable long enough for the benefits of altruism to be of immense importance. As the children reach adulthood, though, they disperse to form their own family groups. Therefore, assuming families live in a population with other families, group selection ought to lead to the evolution of altruism.
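
Here is a minimal sketch of that model under the assumptions just described: altruists and nonaltruists concentrated in different groups, with the groups competing in the production of offspring. The baseline fitness, cost, and benefit values are illustrative assumptions, not Sober and Wilson’s own figures.

```python
# A minimal sketch of the two-group altruism model described above.
# The group sizes, cost, and benefit are illustrative assumptions.

BASE = 10.0    # baseline fitness (expected offspring)
COST = 1.0     # fitness cost an altruist pays
BENEFIT = 5.0  # benefit each altruist confers, shared among its groupmates

def offspring(n_alt, n_sel):
    """Offspring produced by altruists and selfish members of one group."""
    n = n_alt + n_sel
    w_alt = BASE - COST + BENEFIT * (n_alt - 1) / (n - 1)  # helped by other altruists
    w_sel = BASE + BENEFIT * n_alt / (n - 1)               # helped, pays no cost
    return n_alt * w_alt, n_sel * w_sel

groups = [(80, 20), (20, 80)]  # altruists are concentrated in the first group

tot_alt = tot_sel = 0.0
for i, (a, s) in enumerate(groups, 1):
    alt_off, sel_off = offspring(a, s)
    print(f"group {i}: altruist frequency {a / (a + s):.2f} -> "
          f"{alt_off / (alt_off + sel_off):.2f}")
    tot_alt += alt_off
    tot_sel += sel_off

# Selfishness wins inside every group, yet altruism gains ground overall:
start = sum(a for a, _ in groups) / sum(a + s for a, s in groups)
print(f"global: {start:.2f} -> {tot_alt / (tot_alt + tot_sel):.2f}")
```

Running the sketch shows the altruist frequency falling within each group (0.80 to 0.79, 0.20 to 0.18) while rising in the combined population (0.50 to 0.52): Simpson’s paradox doing the work of group selection.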

            Sober and Wilson wrote Unto Others to challenge the prevailing approach to solving mysteries in evolutionary biology, which was to focus strictly on competition between genes. In place of this exclusive attention on gene selection, they advocate a pluralistic approach that takes into account the possibility of selection occurring at multiple levels, from genes to individuals to groups. This is where the term multilevel selection comes from. In certain instances, focusing on one level instead of another amounts to a mere shift in perspective. Looking at families as groups, for instance, leads to many of the same conclusions as looking at them in terms of vehicles for carrying genes.

William D. Hamilton, whose thinking inspired both Richard Dawkins’s The Selfish Gene and E.O. Wilson’s Sociobiology, long ago explained altruism within families by setting forth the theory of kin selection, which posits that family members will at times behave in ways that benefit each other even at their own expense because the genes underlying the behavior don’t make any distinction between the bodies which happen to be carrying copies of themselves. Sober and Wilson write,

As we have seen, however, kin selection is a special case of a more general theory—a point that Hamilton was among the first to appreciate. In his own words, “it obviously makes no difference if altruists settle with altruists because they are related… or because they recognize fellow altruists as such, or settle together because of some pleiotropic effect of the gene on habitat preference.” We therefore need to evaluate human social behavior in terms of the general theory of multilevel selection, not the special case of kin selection. When we do this, we may discover that humans, bees, and corals are all group-selected, but for different reasons. (134)

A general proclivity toward altruism based on selection at the level of family groups may look somewhat different from kin-selected altruism targeted solely at those who are recognized as close relatives. For obvious reasons, the possibility of group selection becomes even more important when it comes to explaining the evolution of altruism among unrelated individuals.

            We have to bear in mind that Dawkins’s selfish genes are selfish only in the sense that they concern themselves with nothing but ensuring their own continued existence—by calling them selfish he never meant to imply they must always be associated with selfishness as a trait of the bodies they provide the blueprints for. Selfish genes, in other words, can sometimes code for altruistic behavior, as in the case of kin selection. So the question of what level selection operates on is much more complicated than it would be if the gene-focused approach predicted selfishness while the multilevel approach predicted altruism. But many strict gene selection advocates argue that because selfish gene theory can account for altruism in myriad ways there’s simply no need to resort to group selection. Evolution is, after all, changes over time in gene frequencies. So why should we look to higher levels?

            Sober and Wilson demonstrate that if you focus on individuals in their simple model of predominantly altruistic groups competing against predominantly selfish groups, you will conclude that altruism is adaptive because it happens to be the trait that ends up proliferating. You may add the qualifier that it’s adaptive in the specified context, but the upshot is that from the perspective of individual selection altruism outcompetes selfishness. The problem is that this is the same reasoning underlying the misguided accusations against Berkeley; for any individual in that aggregate population, it was advantageous to be a male—but there was never any individual selection pressure against females. Sober and Wilson write,

The averaging approach makes “individual selection” a synonym for “natural selection.” The existence of more than one group and fitness differences between the groups have been folded into the definition of individual selection, defining group selection out of existence. Group selection is no longer a process that can occur in theory, so its existence in nature is settled a priori. Group selection simply has no place in this semantic framework. (32)

Thus, a strict focus on individuals, though it may appear to fully account for the outcome, necessarily obscures a crucial process that went into producing it. The same logic might be applicable to any analysis based on gene-level accounting. Sober and Wilson write that

if the point is to understand the processes at work, the resultant is not enough. Simpson’s paradox shows how confusing it can be to focus only on net outcomes without keeping track of the component causal factors. This confusion is carried into evolutionary biology when the separate effects of selection within and between groups are expressed in terms of a single quantity. (33)

They go on to label this approach “the averaging fallacy.” Acknowledging that nobody explicitly insists that group selection is somehow impossible by definition, they still find countless instances in which it is defined out of existence in practice. They write,

Even though the averaging fallacy is not endorsed in its general form, it frequently occurs in specific cases. In fact, we will make the bold claim that the controversy over group selection and altruism in biology can be largely resolved simply by avoiding the averaging fallacy. (34)

            Unfortunately, this warning about the averaging fallacy continues to go unheeded by advocates of strict gene selection theories. Even intellectual heavyweights of the caliber of Steven Pinker fall into the trap. In a severely disappointing essay published just last month at Edge.org called “The False Allure of Group Selection,” Pinker writes

If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate. Individual human traits evolved in an environment that includes other humans, just as they evolved in environments that include day-night cycles, predators, pathogens, and fruiting trees.

Multilevel selectionists wouldn’t disagree with this point; they would readily explain traits that benefit everyone in the group at no cost to the individuals possessing them as arising through individual selection. But Pinker here shows his readiness to fold the process of group competition into some generic “context.” The important element of the debate, of course, centers on traits that benefit the group at the expense of the individual. Pinker writes,

Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

But, as Sober and Wilson demonstrate, those self-sacrificial traits wouldn’t necessarily be selected against in the population. In fact, self-sacrifice would be selected for if that population is an aggregation of competing groups. Pinker fails to even consider this possibility because he’s determined to stick with the definition of natural selection as occurring at the level of genes.

            Indeed, the centerpiece of Pinker’s argument against group selection in this essay is his definition of natural selection. Channeling Dawkins, he writes that evolution is best understood as competition between “replicators” to continue replicating. The implication is that groups, and even individuals, can’t be the units of selection because they don’t replicate themselves. He writes,

The theory of natural selection applies most readily to genes because they have the right stuff to drive selection, namely making high-fidelity copies of themselves. Granted, it's often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.

The underlying assumption is that, because genes rely on individuals as “vehicles” to replicate themselves, individuals can sometimes be used as shorthand for genes when discussing natural selection. Since gene competition within an individual would be to the detriment of all the genes that individual carries and strives to pass on, the genes collaborate to suppress conflicts amongst themselves. The further assumption underlying Pinker’s and Dawkins’s reasoning is that groups make for poor vehicles because suppressing within-group conflict would be too difficult. But, as Sober and Wilson write,

This argument does not evaluate group selection on a trait-by-trait basis. In addition, it begs the question of how individuals became such good vehicles of selection in the first place. The mechanisms that currently limit within-individual selection are not a happy coincidence but are themselves adaptations that evolved by natural selection. Genomes that managed to limit internal conflict presumably were more fit than other genomes, so these mechanisms evolve by between-genome selection. Being a good vehicle as Dawkins defines it is not a requirement for individual selection—it’s a product of individual selection. Similarly, groups do not have to be elaborately organized “superorganisms” to qualify as a unit of selection with respect to particular traits. (97)

The idea of a “trait-group” is exemplified by the simple altruistic group versus selfish group model they used to demonstrate the potential confusion arising from Simpson’s paradox. As long as individuals with the altruism trait interact with enough regularity for the benefits to be felt, they can be defined as a group with regard to that trait.

            Pinker makes several other dubious points in his essay, most of them based on the reasoning that group selection isn’t “necessary” to explain this or that trait, justifying his prejudice in favor of gene selection only by reference to the selfish gene definition of evolution. Of course, it may be possible to imagine gene-level explanations for behaviors humans engage in predictably, like punishing cheaters in economic interactions even when doing so means the punisher incurs some cost to him or herself. But Pinker is so caught up with replicators he overlooks the potential of this type of punishment to transform groups into functional vehicles. As Sober and Wilson demonstrate, group competition can lead to the evolution of altruism on its own. But once altruism reaches a certain threshold, group selection can become even more powerful because the altruistic group members will, by definition, be better at behaving as a group. And one of the mechanisms we might expect to evolve through an ongoing process of group selection would operate to curtail within-group conflict and exploitation. The costly punishment Pinker dismisses as possibly explicable through gene selection is much more likely to have arisen through group selection. Sober and Wilson delight in the irony that, “The entire language of social interactions among individuals in groups has been borrowed to describe genetic interactions within individuals; ‘outlaw’ genes, ‘sheriff’ genes, ‘parliaments’ of genes, and so on” (147).

Unto Others makes such a powerful case against strict gene-level explanations and for the potentially crucial role of group selection that anyone who undertakes to argue that the appeal of multilevel selection theory is somehow false without even mentioning the book risks serious embarrassment. Published fourteen years ago, it still contains a remarkably effective rebuttal to Pinker’s essay:

In short, the concept of genes as replicators, widely regarded as a decisive argument against group selection, is in fact totally irrelevant to the subject. Selfish gene theory does not invoke any processes that are different from the ones described in multilevel selection theory, but merely looks at the same processes in a different way. Those benighted group selectionists might be right in every detail; group selection could have evolved altruists that sacrifice themselves for the benefit of others, animals that regulate their numbers to avoid overexploiting their resources, and so on. Selfish gene theory calls the genes responsible for these behaviors “selfish” for the simple reason that they evolved and therefore replicated more successfully than other genes. Multilevel selection theory, on the other hand, is devoted to showing how these behaviors evolve. Fitness differences must exist somewhere in the biological hierarchy—between individuals within groups, between groups in the global population, and so on. Selfish gene theory can’t even begin to explore these questions on the basis of the replicator concept alone. The vehicle concept is its way of groping toward the very issues that multilevel selection theory was developed to explain. (88)

Sober and Wilson, in opening the field of evolutionary studies to forces beyond gene competition, went a long way toward vindicating Stephen Jay Gould, who throughout his career held that selfish gene theory was too reductionist—he even incorporated their arguments into his final book. But Sober and Wilson are still working primarily in the abstract realm of evolutionary modeling, although in the second half of Unto Others they cite multiple psychological and anthropological sources. A theorist even more after Gould’s own heart, one who synthesizes both models and evidence from multiple fields, from paleontology to primatology to ethnography, into a hypothetical account of the natural history of human evolution, from the ancestor we share with the great apes to modern nomadic foragers and beyond, is the anthropologist Christopher Boehm, whose work we’ll be exploring in part 3.

Read Part 1:

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

A Crash Course in Multi-Level Selection Theory: Part 1-The Groundwork Laid by Dawkins and Gould

What is the unit of selection? Richard Dawkins famously argues that it’s genes that are selected for over the course of evolutionary change. Stephen Jay Gould, meanwhile, maintained that it must be individuals and even sometimes groups of individuals. In their fascinating back and forth lies the foundation of today’s debates about multi-level selection theory.

Responding to Stephen Jay Gould’s criticisms of his then most infamous book, Richard Dawkins writes in a footnote to the 1989 edition of The Selfish Gene, “I find his reasoning wrong but interesting, which, incidentally, he has been kind enough to tell me, is how he usually finds mine” (275). Dawkins’s idea was that evolution is, at its core, competition between genes with success measured in continued existence. Genes are replicators. Evolution is therefore best thought of as the outcome of this competition between replicators to keep on replicating. Gould’s response was that natural selection can’t possibly act on genes because genes are always buried in bodies. Those replicators always come grouped with other replicators and have only indirect effects on the bodies they ultimately serve as blueprints for. Natural selection, as Gould suggests, can’t “see” genes; it can only see, and act on, individuals.

The image of individual genes, plotting the course of their own survival, bears little relationship to developmental genetics as we understand it. Dawkins will need another metaphor: genes caucusing, forming alliances, showing deference for a chance to join a pact, gauging probable environments. But when you amalgamate so many genes and tie them together in hierarchical chains of action mediated by environments, we call the resultant object a body. (91)

Dawkins’s rebuttal, in both later editions of The Selfish Gene and in The Extended Phenotype, is, essentially, Duh—of course genes come grouped together with other genes and only ever evolve in context. But the important point is that individuals never replicate themselves. Bodies don’t create copies of themselves. Genes, on the other hand, do just that. Bodies are therefore best thought of as vehicles for these replicators.

            As a subtle hint of his preeminent critic’s unreason, Dawkins quotes himself in his response to Gould, citing a passage Gould must’ve missed, in which the genes making up an individual organism’s genome are compared to the members of a rowing team. Each contributes to the success or failure of the team, but it’s still the individual members that are important. Dawkins describes how the concept of an “Evolutionarily Stable Strategy” can be applied to a matter

arising from the analogy of oarsmen in a boat (representing genes in a body) needing a good team spirit. Genes are selected, not as “good” in isolation, but as good at working against the background of the other genes in the gene pool. A good gene must be compatible with, and complementary to, the other genes with whom it has to share a long succession of bodies. A gene for plant-grinding teeth is a good gene in the gene pool of a herbivorous species, but a bad gene in the gene pool of a carnivorous species. (84)

Gould, in other words, isn’t telling Dawkins anything he hasn’t already considered. But does that mean Gould’s point is moot? Or does the rowing team analogy actually support his reasoning? In any case, they both agree that the idea of a “good gene” is meaningless without context.

            The selfish gene idea has gone on to become the linchpin of research in many subfields of evolutionary biology, its main appeal being the ease with which it lends itself to mathematical modeling. If you want to know what traits are the most likely to evolve, you create a simulation in which individuals with various traits compete. Run the simulation, and the outcome allows you to determine the relative probability of a given trait evolving in the context of individuals with other traits. You can then compare the statistical outcomes derived from the simulation with experimental data on how the actual animals behave. This sort of analysis relies on the assumption that the traits in question are discrete and can be selected for, and this reasoning usually rests on the further assumption that the traits are, beyond a certain threshold probability, the end-product of chemical processes set in motion by a particular gene or set of genes. In reality, everyone acknowledges that this one-to-one correspondence between gene and trait—or constellation of genes and trait—seldom occurs. All genes can do is make their associated traits more likely to develop in specific environments. But if the sample size is large enough, meaning that the population you’re modeling is large enough, and if the interactions go through enough iterations, the complicating nuances will cancel out in the final statistical averaging.
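
To make that procedure concrete, here is a minimal sketch of such a simulation in Python, using the hawk-dove game Dawkins employs to illustrate the “Evolutionarily Stable Strategy” concept mentioned above. The payoff values and the update rule are illustrative assumptions, not figures from The Selfish Gene.

```python
# A toy trait-competition simulation: hawks fight for a resource, doves
# posture and share. V and C are illustrative values.

V, C = 2.0, 4.0  # value of the contested resource; cost of losing a fight

def payoffs(p_hawk):
    """Average payoffs to each strategy when hawks have frequency p_hawk."""
    w_hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    w_dove = (1 - p_hawk) * V / 2  # doves get nothing against hawks
    return w_hawk, w_dove

p = 0.1  # start with hawks rare
for _ in range(200):
    w_h, w_d = payoffs(p)
    mean_w = p * w_h + (1 - p) * w_d
    # Replicator dynamics: a trait spreads in proportion to its edge
    # over the population's average fitness.
    p += 0.1 * p * (w_h - mean_w)

# The population settles at the mixed equilibrium p = V/C = 0.5,
# where neither pure strategy can invade the other.
print(f"equilibrium hawk frequency: {p:.2f}")
```

Start the simulation with hawks common instead of rare and it converges to the same mixed equilibrium, which is precisely what makes the strategy evolutionarily stable.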

            Gould’s longstanding objection to this line of research—as productive as he acknowledged it could be—was that processes, and even events, like large-scale natural catastrophes, that occur at higher levels of analysis can be just as important as, or more important than, the shuffling of gene frequencies at the lowest level. It’s hardly irrelevant that Dawkins and most of his fellow ethologists who rely on his theories primarily study insects—relatively simple-bodied species that produce huge populations and have rapid generational turnover. Gould, on the other hand, focused his research on the evolution of snail shells. And he kept his eye throughout his career on the big picture of how evolution worked over vast periods of time. As a paleontologist, he found himself looking at trends in the fossil record that didn’t seem to follow the expected patterns of continual, gradual development within species. In fact, the fossil records of most lineages seem to be characterized by long periods of slow or no change followed by sudden disruptions—a pattern he and Niles Eldredge refer to as punctuated equilibrium. In working out an explanation for this pattern, Eldredge and Gould did Dawkins one better: sure, genes are capable of a sort of immortality, they reasoned, but then so are species. Evolution then isn’t just driven by competition between genes or individuals; something like species selection must also be taking place.

            Dawkins accepted this reasoning up to a point, seeing that it probably even goes some way toward explaining the patterns that often emerge in the fossil record. But whereas Gould believed there was so much randomness at play in large populations that small differences would tend to cancel out, and that “speciation events”—periods when displacement or catastrophe led to smaller group sizes—were necessary for variations to take hold in the population, Dawkins thought it unlikely that variations really do cancel each other out even in large groups. This is because he knows of several examples of “evolutionary arms races,” multigenerational exchanges in which a small change leads to a big advantage, which in turn leads to a ratcheting up of the trait in question as all the individuals in the population are now competing in a changed context. Sexual selection, based on competition for reproductive access to females, is a common cause of arms races. That’s why extreme traits in the form of plumage or body size or antlers are easy to point to. Once you allow for this type of change within populations, you are forced to conclude that gene-level selection is much more powerful and important than species-level selection. As Dawkins explains in The Extended Phenotype,

Accepting Eldredge and Gould’s belief that natural selection is a general theory that can be phrased on many levels, the putting together of a certain quantity of evolutionary change demands a certain minimum number of selective replicator-eliminations. Whether the replicators that are selectively eliminated are genes or species, a simple evolutionary change requires only a few replicator substitutions. A large number of replicator substitutions, however, are needed for the evolution of a complex adaptation. The minimum replacement cycle time when we consider the gene as replicator is one individual generation, from zygote to zygote. It is measured in years or months, or smaller time units. Even in the largest organisms it is measured in only tens of years. When we consider the species as replicator, on the other hand, the replacement cycle time is the interval from speciation event to speciation event, and may be measured in thousands of years, tens of thousands, hundreds of thousands. In any given period of geological time, the number of selective species extinctions that can have taken place is many orders of magnitude less than the number of selective allele replacements that can have taken place. (106)

This reasoning, however, applies only to features and traits that are under intense selection pressure. So in determining whether a given trait arose through a process of gene selection or species selection, you would first have to know certain features about the nature of that trait: how much of an advantage it confers, if any; how widely members of the population vary in terms of it; and what types of countervailing forces might cancel out or intensify the selection pressure.
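
To get a feel for the orders of magnitude Dawkins has in mind in the passage above, here is a back-of-the-envelope calculation; the time spans are illustrative assumptions, not figures from The Extended Phenotype.

```python
# Rough arithmetic behind Dawkins's point: how many replacement cycles
# of each kind fit into the same stretch of geological time?
# All three spans below are illustrative assumptions.

geological_window = 10_000_000  # years
generation_time = 10            # years per zygote-to-zygote gene cycle
speciation_interval = 100_000   # years per speciation-to-speciation cycle

gene_cycles = geological_window // generation_time
species_cycles = geological_window // speciation_interval

print(f"gene-level replacement cycles:    {gene_cycles:,}")     # 1,000,000
print(f"species-level replacement cycles: {species_cycles:,}")  # 100
# Species selection gets several orders of magnitude fewer opportunities
# to assemble a complex adaptation.
```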

            The main difference between Dawkins’s and Gould’s approaches to evolutionary questions is that Dawkins prefers to frame answers in terms of the relative success of competing genes while Gould prefers to frame them in terms of historical outcomes. Dawkins would explain a wasp’s behavior by pointing out that behaving that way ensures copies of the wasp’s genes will persist in the population. Gould would explain the shape of some mammalian skull by pointing out how contingent that shape is on the skulls of earlier creatures in the lineage. Dawkins knows history is important. Gould knows gene competition is important. The difference is in the relative weights given to each. Dawkins might challenge Gould, “Gene selection explains self-sacrifice for the sake of close relatives, who carry many of the same genes”—an idea known as kin selection—“what does your historical approach say about that?” Gould might then point to the tiny forelimbs of a tyrannosaurus, or the original emergence of feathers (which were probably sported by some other dinosaur) and challenge Dawkins, “Account for that in terms of gene competition.”

            The area where these different perspectives came into the most direct conflict was sociobiology, which later developed into evolutionary psychology. This is a field in which theorists steeped in selfish gene thinking look at human social behavior and see in it the end product of gene competition. Behaviors are treated as traits, traits are assumed to have a genetic basis, and, since the genes involved exist because they outcompeted other genes producing other traits, their continuing existence suggests that the traits are adaptive, i.e. that they somehow make the continued existence of the associated genes more likely. The task of the evolutionary psychologist is to work out how. This was in fact the approach ethologists had been applying, primarily to insects, for decades.

E.O. Wilson, a renowned specialist on ant behavior, was the first to apply it to humans in his book Sociobiology, and in a later book, On Human Nature, which won him the Pulitzer. But the assumption that human behavior is somehow fixed to genes and that it always serves to benefit those genes was anathema to Gould. If ever there were a creature for whom the causal chain from gene to trait or behavior was too long and complex for the standard ethological approaches to yield valid insights, it had to be humans.

Gould famously compared evolutionary psychological theories to the “Just-so” stories of Kipling, suggesting they relied on far too many shaky assumptions and made use of far too little evidence. From Gould’s perspective, any observable trait, in humans or any other species, was just as likely to have no effect on fitness at all as it was to be adaptive. For one thing, the trait could be a byproduct of some other trait that’s adaptive; it could have been selected for indirectly. Or it could emerge from essentially random fluctuations in gene frequencies that take hold in populations because they neither help nor hinder survival and reproduction. And in humans of course there are things like cultural traditions, forethought, and technological intervention (as when a gene for near-sightedness is rendered moot with contact lenses). The debate got personal and heated, but in the end evolutionary psychology survived Gould’s criticisms. Outsiders could even be forgiven for suspecting that Gould actually helped the field by highlighting some of its weaknesses. He, in fact, didn’t object in principle to the study of human behavior from the perspective of biological evolution; he just believed the earliest attempts were far too facile. Still, there are grudges being harbored to this day.

            Another way to look at the debate between Dawkins and Gould, one which lies at the heart of the current debate over group selection, is that Dawkins favored reductionism while Gould preferred holism. Dawkins always wants to get down to the most basic unit. His “‘central theorem’ of the extended phenotype” is that “An animal’s behaviour tends to maximize the survival of genes ‘for’ that behaviour, whether or not those genes happen to be in the body of the particular animal performing it” (233). Reductionism, despite its bad name, is an extremely successful approach to arriving at explanations, and it has a central role in science. Gould’s holistic approach, while more inclusive, is harder to quantify and harder to model. But there are several analogues to natural selection that suggest ways in which higher-order processes might be important for changes at lower orders. Regular interactions between bodies—or even between groups or populations of bodies—may be crucial in accounting for changes in gene frequencies the same way software can impact the functioning of hardware or symbolic thoughts can determine patterns of neural connections.

            The question becomes whether or not higher-level processes operate regularly enough that their effects can’t safely be assumed to average out over time. One pitfall of selfish gene thinking is that it lends itself to the conflation of definitions and explanations. Evolution can be defined as changes in gene frequencies. But assuming a priori that competition at the level of genes causes those changes means running the risk of overlooking measurable outcomes of processes at higher levels. The debate, then, isn’t over whether evolution occurs at the level of genes—it has to—but rather over what processes lead to the changes. It could be argued that Gould, in his magnum opus The Structure of Evolutionary Theory, which was finished shortly before his death, forced Dawkins into making just this mistake. Responding to the book in an essay in his own book A Devil’s Chaplain, Dawkins writes,

Gould saw natural selection as operating on many levels in the hierarchy of life. Indeed it may, after a fashion, but I believe that such selection can have evolutionary consequences only when the entities selected consist of “replicators.” A replicator is a unit of coded information, of high fidelity but occasionally mutable, with some causal power over its own fate. Genes are such entities… Biological natural selection, at whatever level we may see it, results in evolutionary effects only insofar as it gives rise to changes in gene frequencies in gene pools. Gould, however, saw genes only as “book-keepers,” passively tracking the changes going on at other levels. In my view, whatever else genes are, they must be more than book-keepers, otherwise natural selection cannot work. If a genetic change has no causal influence on bodies, or at least on something that natural selection can “see,” natural selection cannot favour or disfavour it. No evolutionary change will result. (221-222)

Thus we come full circle as Dawkins comes dangerously close to acknowledging Gould’s original point about the selfish gene idea. With the book-keeper metaphor, Gould wasn’t suggesting that genes are perfectly inert. Of course, they cause something—but they don’t cause natural selection. Genes build bodies and influence behaviors, but natural selection acts on bodies and behaviors. Genes are the passive book-keepers with regard to the effects of natural selection, even though they’re active agents with regard to bodies. Again, the question becomes, do the processes that happen at higher levels of analysis operate with enough regularity to produce measurable changes in gene frequencies that a strict gene-level analysis would miss or obscure? Yes, evolution is genetic change. But the task of evolutionary biologists is to understand how those changes come about.

            Gould died in May of 2002, in the middle of a correspondence he had been carrying on with Dawkins regarding how best to deal with an emerging creationist propaganda campaign called intelligent design, a set of ideas they both agreed were contemptible nonsense. These men were in many ways the opposing generals of the so-called Darwin Wars in the 1990s, but, as exasperated as they clearly got with each other’s writing at times, they always seemed genuinely interested in, and amused by, what the other had to say. In his essay on Gould’s final work, Dawkins writes,

The Structure of Evolutionary Theory is such a massively powerful last word, it will keep us all busy replying to it for years. What a brilliant way for a scholar to go. I shall miss him. (222)

[I’ve narrowed the scope of this post to make the ideas as manageable as possible. This account of the debate leaves out many important names and is by no means comprehensive. A good first step if you’re interested in Dawkins’s and Gould’s ideas is to read The Selfish Gene and Full House.]  

Read Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

The Storytelling Animal: a Light Read with Weighty Implications

The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.

A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human

Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms inspired by the assumption that all behaviors are shaped solely by upbringing and culture can be. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,

Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)

Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.

The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to a temptation any writer faces when trying to explain one or another aspect of storytelling: making a few forays into penning abortive, experimental plots of his own.

None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which contra Freud are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read D.F. Wallace’s last novel about an IRS clerk). The effect is that instead of simply having a new idea to toss around we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.

This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)

If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.

Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.

William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work have failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I Am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with other people, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters’ interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.

Gottschall does his best to avoid dismissing the typical obsessive Larper (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:

Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)

But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glazy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.

Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

Dennis Junk

The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

Jonathan Haidt extends an olive branch to conservatives by acknowledging their morality has more dimensions than the morality of liberals. But is he mistaking what’s intuitive for what’s right? A critical, yet admiring review of The Righteous Mind.

A Review of Jonathan Haidt's new book,

The Righteous Mind: Why Good People are Divided by Politics and Religion

Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robber’s Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-o-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.

            So do conservatives.

           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that more dimensions is better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out of touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room

Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know that simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question: “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’re far more likely to recognize them in others than in yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image in an earlier book on happiness, so his choice of the GOP mascot was accidental—though, given the more intuitive nature of conservative beliefs, it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share it himself.

Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression is involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness

The divide over morality can largely be reduced to the differences between the urban educated and the less-educated poor. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say,

You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions, he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what’s known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order—not to the point of advocating hierarchy or rigid sex roles, but enough to see value in the complex network of interdependence.

The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our flavor palate. The two that everyone shares, but that liberals give priority to whenever two or more suggest conflicting responses, are Care/Harm—hurting people is wrong, and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description he would have no beef with the utilitarian ethicists like Mill, the deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the trendy psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

From The Moral Foundations Website

I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

“We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)

The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” when he was marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

Surprisingly, the altruism inspired by this sense of the sacred triggered by coordinated activity, though primarily directed at fellow group members—what’s known as parochial altruism—can also flow outward in ways that aren’t entirely tribal.

Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sanctity foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” that accounted for the difference between success and failure for religious groups: “the number of costly sacrifices that each commune demanded from its members” (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt’s work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to a line of theorizing that runs back through D. S. Wilson and Emile Durkheim all the way to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to observing the movements of a football in an attempt to understand why people love watching games. What matters, he suggests, is the coming together as a group, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important, then there’s no reason they have to be arbitrary—and there’s no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by wrongly assuming that anyone who doesn’t think flushing an American flag down the toilet is wrong has no sense of the sacred. He shakes his finger at them, effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers think is important just isn’t worthy—even though it’s exactly what I think is important too, what I’ve devoted my career, and this book you’re holding, to anyway.

As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred, ideal.

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals they had to cooperate to achieve—as when their bus got stuck on the side of the road and all the boys in both groups had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

WHY TAMSIN SHAW IMAGINES THE PSYCHOLOGISTS ARE TAKING POWER

Dennis Junk

The Adaptive Appeal of Bad Boys

From the intro to my master’s thesis where I explore the evolved psychological dynamics of storytelling and witnessing, with a special emphasis on the paradox that the most compelling characters are often less than perfect human beings. Why do audiences like Milton’s Satan, for instance? Why did we all fall in love with Tyler Durden from Fight Club? It turns out both of these characters give indications that they just may be more altruistic than they appear at first.

Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis

In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character, who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and a few treats are set before each of them. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to distinguish between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth-order reciprocation, so individuals will be considered culpable not only for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 

Game theory experiments conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group.

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.

If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice guy bona fides. On the other hand, Rochester was eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences that deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display.

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.

(Watch Hamlin discussing the research in an interview from earlier today.)

And check out this video of the experiments.
