READING SUBTLY
This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you’re looking for.
Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education
Maria Konnikova’s book “Mastermind: How to Think Like Sherlock Holmes” got me really excited because when psychology is brought up in discussions of literature at all, it’s usually the pseudoscience of Sigmund Freud. Konnikova, whose blog went a long way toward remedying that tragedy, wanted to offer up an alternative approach. The book shows great promise, but it’s ultimately disappointing.
Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.
In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knows the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.
What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.
There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,
the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.
The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”
The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.
I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible.
Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.
The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.
Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.
Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.
In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,
learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)
This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.
It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990’s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known replications of this finding was a study showing that chess experts have much better recall for the positions of pieces on a board than novices. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?
The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement of curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.
If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes,
How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)
So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980’s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill plays jazz for a hobby than that he’s an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the single point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
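Before getting to that question, it’s worth spelling out why the two-item choice can never actually be the more probable one. Plug in some made-up numbers (mine here are purely illustrative): say you figure there’s a 5 percent chance Linda is a bank teller, and that if she is a bank teller you’re 90 percent sure she’s also an active feminist. The chance that she’s both is then 0.05 × 0.9 = 0.045—still under 5 percent. However you set the numbers, a conjunction can never be more probable than either of its parts, because multiplying by a second probability can only shrink the first.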
The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (there he discusses Tom W rather than Bill) can seem more likely than shorter ones, despite the actual probabilities. He writes,
The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)
So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:
Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)
But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.
As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the conviction to stay motivated and remain alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.
Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and those same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly and organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,
How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?
The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates to creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.
Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,
Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)
The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).
That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:
Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.
But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:
You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)
Hard to disagree, isn’t it?
To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says he got tricked because he didn’t do what he knew how to do, it can happen to any of us, so be careful you don’t let it happen to you. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.
Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. Her tragic blunder was to eschew nuanced examinations of how stories work, how people relate to characters, and how authors create them in favor of a shallow, one-dimensional attempt at suggesting a hundred-year-old fictional character somehow divined groundbreaking research findings from the end of the Twentieth and beginning of the Twenty-First Centuries. The whole enterprise calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.
You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.
Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.
A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”
I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.
Also read
FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM
And
THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS
And
LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE
Also apropos is
The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini
If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become.
Psychologist Paul Ekman is renowned for his research on facial expressions, and he frequently studies and consults with law enforcement agencies, legal scholars, and gamblers on the topic of reading people who don’t want to be read. In his book Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage, Ekman focuses on three emotions whose subtle expressions would-be lie detectors should be on the lookout for. The first two—detection apprehension and deception guilt—are pretty obvious. But the third is more interesting. Many people actually enjoy deceiving others because, for one thing, the threat of detection is more thrilling to them than terrifying, and, for another, being able to pull off the deception successfully can give them a sense of “pride in the achievement, or feelings of smug contempt toward the target” (76). Ekman calls this “Duping Delight,” and he suggests it leads many liars to brag about their crimes, which in turn leads to their being caught.
The takeaway insight is that knowing something others don’t, or having the skill to trick or deceive others, can give us an inherently rewarding feeling of empowerment.
Alex Stone, in his new book Fooling Houdini: Magicians, Mentalists, Math Geeks & the Hidden Powers of the Mind, suggests that duping delight is what drives the continuing development of the magician’s trade. The title refers to a bit of lore that has reached the status of founding myth among aficionados of legerdemain. Houdini used to boast that he could figure out the secret behind any magic trick if he saw it performed three times. Time and again, he backed up his claim, sending defeated tricksters away to nurse their wounded pride. But then came Dai Vernon, who performed for Houdini what he called the Ambitious Card, a routine in which a card signed by a volunteer and then placed in the middle of a deck mysteriously appears at the top. After watching Vernon go through the routine seven times, Houdini turned around and walked away in a huff. Vernon went on to become a leading figure in Twentieth Century magic, and every magician today has his own version of the Ambitious Card (they’re almost all male), which serves as a type of signature.
In Fooling Houdini, Stone explains that for practiced magicians, tricking the uninitiated loses its thrill over time. So they end up having to up the ante, and in the process novitiates find themselves getting deeper and deeper into the practice, tradition, culture, and society of magic and magicians. He writes,
Sure, it’s fun to fool laypeople, but they’re easy prey. It’s far more thrilling to hunt your own kind. As a result, magicians are constantly engineering new ways to dupe one another. A hierarchy of foolmanship, a who-fooled-whom pecking order, rules the conjuror’s domain. This gladiatorial spirit in turn drives considerable evolution in the art. (173)
Stone’s own story begins with a trip to Belgium to compete in the 2006 Magic Olympics. His interest in magic was, at the time, little more than an outgrowth of his interest in science. He’d been an editor at Discover magazine and had since gone on to graduate school in physics at Columbia University. But after the Magic Olympics, where he performed dismally and was left completely humiliated and averse to the thought of ever doing magic again, he gradually came to realize that one way or another he would have to face his demons by mastering the art he’d only so far dabbled in.
Fooling Houdini chronicles how Stone became obsessed with developing his own personalized act and tweaking it to perfection, and how he went from being a pathetic amateur to a respectable semi-professional. The progress of a magician, Stone learns from Jeff McBride, follows “four cardinal stations of magic: Trickster, Sorcerer, Oracle, and Sage” (41). And the resultant story as told in Stone’s book follows an eminently recognizable narrative course, from humiliation and defeat to ever-increasing mastery and self-discovery.
Fooling Houdini will likely appeal to the same audience as did Moonwalking with Einstein, Joshua Foer’s book about how he ended up winning the U.S. Memory Championships. Foer, in fact, makes a guest appearance in Fooling Houdini when Stone seeks out his expertise to help him memorize a deck of cards for an original routine of his own devising. (He also gave the book a nice plug for the back cover.) The appeal of both books comes not just from the conventional narrative arc but also from the promise of untapped potential, a sense that greater mastery, and even a better life, lie just beyond reach, accessible to anyone willing to put in those enchanted ten thousand hours of training made famous by Malcolm Gladwell. It’s the same thing people seem to love about TED lectures, the idea that ideas will almost inevitably change our lives. Nathan Heller, in a recent New Yorker article, attempts to describe the appeal of TED conferences and lectures in terms that apply uncannily well to books like Foer’s and Stone’s. Heller writes,
Debby Ruth, a Long Beach attendee, told me that she started going to TED after reaching a point in her life when “nothing excited me anymore”; she returns now for a yearly fix. TED may present itself as an ideas conference, but most people seem to watch the lectures not so much for the information as for how they make them feel. (73)
The way they make us feel is similar to the way a good magic show can make us feel—like anything is possible, like on the other side of this great idea that breaks down the walls of our own limitations is a better, fuller, more just, and happier life. “Should we be grateful to TED for providing this form of transcendence—and on the Internet, of all places?” Heller asks.
Or should we feel manipulated by one more product attempting to play on our emotions? It’s tricky, because the ideas undergirding a TED talk are both the point and, for viewers seeking a generic TED-type thrill, essentially beside it: the appeal of TED comes as much from its presentation as from its substance. (73-4)
At their core, Fooling Houdini and Moonwalking with Einstein—and pretty much every TED lecture—are about transforming yourself, and to a somewhat lesser degree the world, either with new takes on deep-rooted traditions, reconnection with ancient wisdom, or revolutionary science.
Foer presumably funded the epic journey recounted in Moonwalking with his freelance articles and maybe with expense accounts from the magazines he wrote for. Still, it seems you could train to become a serviceable “mental athlete” without spending all that much money. Not so with magic. Stone’s prose is much more quirky and slightly more self-deprecatory than Foer’s, and in one of his funniest and most revealing chapters he discusses some of the personal and financial costs associated with his obsession. The title, “It’s Annoying and I Asked You to Stop,” is a quote from a girlfriend who was about to dump him. The chapter begins,
One of my biggest fears is that someday I’ll be audited. Not because my taxes aren’t in perfect order—I’m very OCD about saving receipts and keeping track of my expenses, a habit I learned from my father—but because it would bring me face-to-face with a very difficult and decidedly lose-lose dilemma in which I’d have to choose between going to jail for tax fraud and disclosing to another adult, in naked detail, just how much money I’ve spent on magic over the years. (That, and I’d have to fess up to eating at Arby’s multiple times while traveling to magic conventions.) (159)
Having originally found magic fun and mentally stimulating, Stone ends up being seduced into spending astronomical sums, first by the terrible slight he received from the magic community and then by a regimen of Pavlovian conditioning based on duping delight. Both Foer’s and Stone’s stories are essentially about moderately insecure guys who try to improve themselves by learning a new skill set.
The market for a renewed sense of limitless self-potential is booming. As children, it seems every future we can imagine for ourselves is achievable—that we can inhabit them all simultaneously—so whatever singular life we find ourselves living as adults inevitably falls short of our dreams. We may have good jobs, good relationships, good health, but we can’t help sometimes feeling like we’re missing out on something, like we’re trapped in overscheduled rutted routines of workaday compromise. After a while, it becomes more and more difficult to muster any enthusiasm for much of anything beyond the laziest indulgences like the cruises we save up for and plan months or years in advance, the three-day weekend at the lake cottage, a shopping date with an old friend, going out to eat with the gang. By modern, middle-class standards, this is the good life. What more can we ask for?
What if I told you, though, that there’s a training regimen that will make you so much more creative and intelligent that you’ll wonder after a few months how you ever managed to get by with a mind as dull as yours is now? What if I told you there’s a revolutionary diet and exercise program that is almost guaranteed to make you so much more attractive that even your friends won’t recognize you? What if I told you there’s a secret set of psychological principles that will allow you to seduce almost any member of the opposite sex, or prevail in any business negotiation you ever engage in? What if I told you you’ve been living in a small dark corner of the world, and that I know the way to a boundless life of splendor?
If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become. This is how Foer writes about his mindset at the outset of his memory training, after he’d read about the mythic feats of memory champion Ben Pridmore:
What would it mean to have all that otherwise-lost knowledge at my fingertips? I couldn’t help but think that it would make me more persuasive, more confident and, in some fundamental sense, smarter. Certainly I’d be a better journalist, friend, and boyfriend. But more than that, I imagined that having a memory like Ben Pridmore’s would make me an altogether more attentive, perhaps even wiser, person. To the extent that experience is the sum of our memories and wisdom the sum of experience, having a better memory would mean knowing not only more about the world, but also more about myself. (7)
Stone strikes a similar chord when he’s describing what originally attracted him to magic back when he was an awkward teenager. He writes,
In my mind, magic was also a disarming social tool, a universal language that transcended age and gender and culture. Magic would be a way out of my nerdiness. That’s right, I thought magic would make me less nerdy. Or at the very least it would allow me to become a different, more interesting nerd. Through magic, I would be able to navigate awkward social situations, escape the boundaries of culture and class, connect with people from all walks of life, and seduce beautiful women. In reality, I ended up spending most of my free time with pasty male virgins. (5)
This last line echoes Foer’s observation that the people you find at memory competitions are “indistinguishable from those” you’d find at a “‘Weird Al’ Yankovic (five of spades) concert” (189).
Though Stone’s openness about his nerdiness at times shades into some obnoxious playing up of his nerdy credentials, it does lend itself to some incisive observations. One of the lessons he has to learn to become a better magician is that his performances have to be more about the audiences than about the tricks—less about duping and more about connecting. What this means is that magic isn’t the key to becoming more confident that he’d hoped it would be; instead, he has to become more confident before he can be good at magic. For Stone, this means embracing, not trying to overcome, his nerdish tendencies. He writes,
Magicians like to pretend that they’re cool and mysterious, cultivating the image of the smooth operator, the suave seducer. Their stage names are always things like the Great Tomsoni or the Amazing Randi or the International Man of Mystery—never Alex the Magical Superdoofus or the Incredible Nerdini. But does all this posturing really make them look cooler? Or just more ridiculous for trying to hide their true stripes? Why couldn’t more magicians own up to their own nerdiness? Magic was geeky. And that was okay. (243)
Stone reaches this epiphany largely through the inspiration of a clown who, in a surprising twist, ends up stealing the show from many of the larger-than-life characters featured in Fooling Houdini. In an effort to improve his comfort while performing before crowds and thus to increase his stage presence, Stone works with his actress girlfriend, takes improv classes, and attends a “clown workshop” led by “a handsome man in his early forties named Christopher Bayes,” who begins by telling the class that “The clown is the physical manifestation of the unsocialized self… It’s the essence of the playful spirit before you were defeated by society, by the world” (235). Stone immediately recognizes the connection with magic. Here you have that spark of joyful spontaneity, that childish enthusiasm before a world where everything is new and anything is possible.
“The main trigger for laughter is surprise,” Bayes told us, speaking of how the clown gets his laughs. “There’s lots of ways to find that trigger. Some of them are tricks. Some of them are math. And some of them come from building something with integrity and then smashing it. So you smash the expectation of what you think is going to happen.” (239)
In smashing something you’ve devoted considerable energy to creating, you’re also asserting your freedom to walk away from what you’ve invested yourself in, to reevaluate your idea of what’s really important, to change your life on a whim. And surprise, as Bayes describes it, isn’t just the essential tool for comedians. Stone explains,
The same goes for the magician. Magic transports us to an absurd universe, parodying the laws of physics in a whimsical toying of cause and effect. “Magic creates tension,” says Juan Tamariz, “a logical conflict that it does not resolve. That’s why people often laugh after a trick, even when we haven’t done anything funny.” Tamariz is also fond of saying that magic holds a mirror up to our impossible desires. We all would like to fly, see the future, know another’s thoughts, mend what has been broken. Great magic is anchored to a symbolic logic that transcends its status as an illusion. (239)
Stone’s efforts to become a Sage magician have up till this point in the story entailed little more than a desperate stockpiling of tricks. But he comes to realize that not all tricks jibe with his personality, and if he tries to go too far outside of character his performances come across as strained and false. This is the stock ironic trope that this type of story turns on—he starts off trying to become something great only to discover that he first has to accept who he is. He goes on relaying the lessons of the clown workshop,
“Try to proceed with a kind of playful integrity,” Chris Bayes told us. “Because in that integrity we actually find more possibility of surprise than we do in an idea of how to trick us into laughing. You bring it from yourself. And we see this little gift that you brought for us, which is the gift of your truth. Not an idea of your truth, but the gift of your real truth. And you can play forever with that, because it’s infinite.” (244)
What’s most charming about the principle of proceeding with playful integrity is that it applies not just to clowning and magic, but to almost every artistic and dramatic endeavor—and even to science. “Every truly great idea, be it in art or science,” Stone writes, “is a kind of magic” (289). Aspirants may initially be lured into any of these creative domains by the promise of greater mastery over other people, but the true sages end up being the ones who are the most appreciative and the most susceptible to the power of the art to produce in them that sense of playfulness and boundless exuberance.
Being fooled is fun, too, because it’s a controlled way of experiencing a loss of control. Much like a roller coaster or a scary movie, it lets you loosen your grip on reality without actually losing your mind. This is strangely cathartic, and when it’s over you feel more in control, less afraid. For magicians, watching magic is about chasing this feeling—call it duped delight, the maddening ecstasy of being a layperson again, a novice, if only for a moment. (291)
“Just before Vernon,” the Man Who Fooled Houdini, “died,” Stone writes, “comedian and amateur magician Dick Cavett asked him if there was anything he wished for.” Vernon answered, “I wish somebody could fool me one more time” (291). Stone uses this line to wrap up his book, neatly bringing the story full-circle.
Fooling Houdini is unexpectedly engrossing. It reads quite differently from the 2010 book on magic by neuroscientists Stephen Macknik and Susana Martinez-Conde, which they wrote with Sandra Blakeslee, Sleights of Mind: What the Neuroscience of Magic Reveals about Our Everyday Deceptions. For one thing, Stone focuses much more on the people he comes to know on his journey and less on the underlying principles. And, though Macknik and Martinez-Conde also use their own education in the traditions and techniques of magic as a narrative frame, Stone gets much more personal. One underlying message of both books is that our sense of being aware of what’s going on around us is illusory, and that illusion makes us ripe for the duping. But Stone conveys more of the childlike wonder of magic, despite his efforts at coming across as a stylish hipster geek. Unfortunately, when I got to the end I was reminded of the title of a TED lecture that’s perennially on the most-watched list, Brené Brown’s “The Power of Vulnerability,” a talk I came away from scratching my head because it seemed utterly nonsensical.
It’s interesting that duping delight is a feeling few anticipate and many fail to recognize even as they’re experiencing it. It is the trick that ends up being played on the trickster. Like most magic, it takes advantage of a motivational system that most of us are only marginally aware of, if at all. But there’s another motivational system and another magic trick that makes things like TED lectures so thrilling. It’s also the trick that makes books like Moonwalking with Einstein and Fooling Houdini so engrossing. Arthur and Elaine Aron use what’s called “The Self-Expansion Model” to explain what attracts us to other people, and even what attracts us to groups of people we end up wanting to identify with. The basic idea is that we’re motivated to increase our efficacy, not with regard to achieving any specific goals but in terms of our general potential. One of the main ways we seek to augment our potential is by establishing close relationships with other people who have knowledge or skills or resources that would contribute to our efficacy. Foer learns countless mnemonic techniques from guys like Ed Cooke. Stone learns magic from guys like Wes James. Meanwhile, we readers are getting a glimpse of all of it through our connection with Foer and Stone.
Self-expansion theory is actually somewhat uplifting in its own right because it offers a more romantic perspective on our human desire to associate with high-status individuals and groups. But the triggers for a sense of self-expansion are probably pretty easy to mimic, even for those who lack the wherewithal or the intention to truly increase our potential or genuinely broaden our horizons. Indeed, it seems as though self-expansion has become the main psychological principle exploited by politicians, marketers, and P.R. agents. This isn’t to say we should discount every book or lecture that we find uplifting, but we should keep in mind that there are genuine experiences of self-expansion and counterfeit ones. Heller’s observation about how TED lectures are more about presentation than substance, for instance, calls to mind an experiment done in the early 1970s in which Dr. Myron Fox gave a lecture titled “Mathematical Game Theory as Applied to Physician Education.” His audience included psychologists, psychiatrists, educators, and graduate students, virtually all of whom rated his ideas and his presentation highly. But Dr. Fox wasn’t actually a doctor; he was the actor Michael Fox. And his lecture was designed to be completely devoid of meaningful content but high on expressiveness and audience connection.
The Dr. Fox Effect is the term now used to describe our striking inability to recognize nonsense coming from the mouths of charismatic speakers.
And if keeping our foolability in mind somehow makes that sense of self-transcendence elude us today, "tomorrow we will run faster, stretch out our arms farther....
And one fine morning—"
Also read:
Taking the GRE again after 10 Years
I aced the verbal reasoning section of the GRE the first time I took it. It ended up not being the worst thing that ever happened to me, but… distracting. Eleven years later, I had to take the test again to start trying to make my way back into school. How could I compete with my earlier perfection?
I had it timed: if I went to the bathroom at 8:25, I’d be finishing up the essay portion of the test about ten minutes after my bladder was full again. Caffeine being essential for me to get into the proper state of mind for writing, I’d woken up to three cans of Diet Mountain Dew and two and a half rather large cups of coffee. I knew I might not get called in to take the test precisely at 8:30, but I figured I could handle the pressure, as it were. The clock in the office of the test center read 8:45 when I walked in. Paperwork, signatures, getting a picture taken, turning out all my pockets (where I managed to keep my three talismans concealed)—by the time I was sitting down in the carrel—in a room that might serve as a meeting place for prisoners and their lawyers—it was after 9:00. And there were still more preliminaries to go through.
Test takers are allotted 45 minutes for an essay on the “Issue Topic” prompted by a short quote. The “Analysis of an Argument” essay takes a half hour. The need to piss got urgent with about ten minutes left on the clock for the issue essay. By the end of the second essay, I was squirming and dancing and pretty desperate. Of course, I had to wait for our warden to let me out of the testing room. And then I had to halt midway through the office to come back and sign myself out. Standing at the urinal—and standing and standing—I had plenty of time to consider how poorly designed my strategy had been. I won’t find out my scores for the essay portion for ten or so days.
**********************************
I’ve been searching my apartment for the letter with my official scores from the first time I took the GRE about ten years ago. I’d taken it near the end of the summer, at one of those times in life of great intellectual awakening. With bachelor’s degrees in both anthropology and psychology, and with only the most inchoate glimmerings of a few possible plans for the future, I lived in my dad’s enormous house with some roommates and my oldest brother, who had returned after graduating from Notre Dame and was now taking graduate courses at IPFW, my alma mater. I delivered pizzas in the convertible Mustang I bought as a sort of hand-me-down from that same brother. And I spent hours every day reading.
I’m curious about the specific date of the test because it would allow me to place it in the context of what I was reading. It would also help me ascertain the amount of time I spent preparing. If memory serves, I was doing things like poring over various books by Stephen Jay Gould and Richard Dawkins, trying to decide which one of them knew the real skinny on how evolution works. I think by then I’d read Frank Sulloway’s Born to Rebel, in which he applied complex statistics to data culled from historical samples and concluded that later-born siblings tend to be less conscientious but more open to new ideas and experiences. I was delighted to hear that the former president had read Jared Diamond’s Guns, Germs, and Steel, and thought it tragically unimaginable that the current president would ever read anything like that. At some point, I began circling words I didn’t recognize or couldn’t define so when I was finished with the chapter I could look them up and make a few flashcards.
I’m not even sure the flashcards were in anticipation of the GRE. Several of my classmates in both the anthropology and psychology departments had spoken to me by then of their dejection upon receiving their scores. I was scared to take it. The trend seemed to be that everyone was getting about a hundred points less on this test than they did on the SAT. I decided I only really cared about the verbal reasoning section, and a 620 on that really wasn’t acceptable. Beyond the flashcards, I got my hands on a Kaplan CD-ROM from a guy at school and started doing all the practice tests on it. The scores it gave me hovered in the mid-600s. It also gave me scads of unfamiliar words (like scad) to put in my stack of flashcards, which grew, ridiculously, to the height of about a foot.
I don’t remember much about the test itself. It was at a Sylvan Learning Center that closed a while back. One of the reading comprehension excerpts was on chimpanzees, which I saw as a good sign. When I was done, there was a screen giving me a chance to admit I cheated. It struck me as odd. Then came the screen with my scores—800 verbal reasoning. I looked around the room and saw nothing but the backs of silent test-takers. Could this be right? I never ace anything. It sank in when I was sitting down in the Mustang. Driving home on I-69, I sang along to “The Crush” by Dave Matthews, elated.
I got accepted into MIT’s program in science writing based on that score and a writing sample in which I defended Frank Sulloway’s birth order theory against Judith Rich Harris, the author of The Nurture Assumption, another great book. But Harris’s arguments struck me as petty and somewhat disgraceful. She was engaging in something akin to a political campaign against a competing theory, rather than making a good faith effort to discover the truth. Anyway, the article I wrote got long and unwieldy. Michael Shermer considered it for publication in Skeptic but ultimately declined because I just didn’t have my chops up when it came to writing about science. By then, I was a writer of fiction.
That’s why, upon discovering how expensive a year in Cambridge would be and how little financial aid I’d be getting, I declined MIT’s invitation to attend their program. If being a science writer was my dream, I’d have gone. But I decided to hold out for an acceptance to an MFA program in creative writing. I’d already applied two years in a row before stretching my net to include science writing. But the year I got accepted at MIT ended up being the third year of summary rejection on the fiction front. I had one more year before that perfect GRE score expired.
**********
Year four went the same way all the other years had gone. I was in my late twenties now and had the feeling that whatever opportunities were once open to me had slipped away. Next came a crazy job at a restaurant—Lucky’s—and a tumultuous relationship with the kitchen manager. After I had to move out of the apartment I shared with her in the wake of our second breakup (there would be a third), I was in a pretty bad place. But I made the smartest decision I’d made in a while and went back to school to get my master’s in English at IPFW.
The plan was to improve my qualifications for creative writing programs. And now that I’m nearly finished with the program I put re-taking the GRE at the top of my list for things to do this summer. In the middle of May, I registered to take it on June 22nd. I’d been dreading it ever since my original score expired, but now I was really worried. What would it mean if I didn’t get an 800 again? What if I got significantly lower than that? The MFA programs I’ll be applying to are insanely competitive: between five hundred and a thousand applicants for less than a dozen spaces. At the same time, though, there was a sense that a lower score would serve as this perfect symbol for just how far I’d let my life go off-track.
Without much conscious awareness of what I was doing, I started playing out a Rocky narrative, or some story like Muhammad Ali making his comeback after losing his boxing license for refusing to serve in Vietnam. I would prove I wasn’t a has-been, that whatever meager accomplishments I had under my belt weren’t flukes. Last semester I wrote a paper on how to practice being creative, and one of the books I read for it was K. Anders Ericsson’s The Road to Excellence. So, after signing up for the test, I created a regimen of what Ericsson calls “deliberate practice,” based on anticipation and immediate feedback. I got my hands on as many sample items and sample tests as I could find. I made little flashcards with the correct answers on them so the feedback would come as close as possible to the moment I hazarded an answer. I put hours and hours into it. And I came up with a strategy for each section, and for every possible contingency I could think of. I was going to beat the GRE, again, through sheer force of will.
***********
The order of the sections is variable. Ideally, the verbal section would have come right after the essay section so I wouldn’t have had to budget my stores of concentration. But sitting down again after relieving my bladder, I saw the quantitative section appear before me on the screen. Oh well, I planned for this too, I thought. I adhered pretty well to my strategy of working on each problem for a set length of time to see if I could get the answer and then guessing if it didn’t look promising. And I achieved my goal for this section by not embarrassing myself: I got a 650.
The trouble began almost immediately when the verbal questions started coming. My strategy for the analogies, the questions I most often missed in practice, was to work out the connection between the top pair of words, “the bridge,” before considering the five word pairs below to see which one had the same bridge. But because the screen was so large, and because I was still jittery from the caffeine, I couldn’t read the first word pair without seeing all the others. I abandoned the strategy with the first question.
Then disaster struck. I’d anticipated only two sets of reading comprehension questions, but then, with the five-minute warning already past, another impossibly long blurb appeared. I resigned myself at that point to giving up my perfect score. I said to myself, “Just read it quick and give the best answers you can.” I finished the section with about twenty seconds left. At least all the antonyms had been easy. Next came an experimental section I’d agreed to take since I no longer needed to worry about flagging concentration. For the entire eighteen minutes it took, I sat there feeling completely defeated. I doubt my answers for that section will be of much use.
Finally, I was asked if I wanted to abandon my scores—a ploy, I’m sure, to get skittish people to pay to take the test twice. I said no and clicked to see and record my scores. There it was at the top of the screen: my 800. I’d visualized the moment several times. I was to raise one arm in victory—but I couldn’t, because the warden would just think I was raising my hand to signal I needed something. I also couldn’t because I didn’t feel victorious. I still felt defeated. I was sure all the preparation I’d done had been completely pointless. I hadn’t boxed. I’d clenched my jaw, bunched up my fist, and brawled.
I listened to “The Crush” on the way home again, but as I detoured around all the construction downtown I wasn’t in a celebratory mood. I wasn’t elated. I was disturbed. The experience hadn’t been at all like a Rocky movie. It was a lot more like Gattaca. I’d come in, had my finger pricked so they could read my DNA, and had the verdict delivered to me. Any score could have come up on the screen. I had no control over it. That it turned out to be the one I was after was just an accident. A fluke.
**************
The week before I took the test, I’d met a woman at Columbia Street who used to teach seventh graders. After I told her I taught Intro Comp at IPFW, we discussed how teaching is a process of translation, from the way you understand something into a language that will allow others who lack your experience and knowledge to understand it too. Then you have to add some element of entertainment so you don’t lose their attention. The younger the students, the more patience it takes to teach them. Beginning when I was an undergrad working in the Writing Center, but really picking up pace as I got more and more experience as a TA, the delight I used to take in my own cleverness was being superseded by the nagging doubt that I could ever pass the method behind it along to anyone else.
When you’re young (or conservative), it’s easy to look with disdain at people who don’t do as well as you, as if their shortfall were a moral failing. You hold the conviction deep in your gut that if they merely did what you’ve done they’d have what you have or know what you know. Teaching disabuses you of this conviction (which might be why so many teachers are liberal). How many times did I sit with a sharp kid in the Writing Center, trying to explain some element of college writing, trying to think back to how I had figured it out myself, and realizing either that I’d simply understood it without much effort or that I’d arrived at an understanding through a process that had already failed this kid? You might expect such a realization to make someone feel really brilliant. But in fact it’s humbling. You wonder how many things there are, fascinating things, important things, that despite your own best efforts you’ll never really get. Someone, for instance, probably “just gets” how to relay complex information to freshman writers—just gets teaching.
And if, regardless of your efforts, you’re simply accorded a faculty for perceiving this or understanding that, then should you ever lose it, your prospects for recreating the same magic are dismal. What can be given can be taken away. Finally, there’s the question of desert, of what anyone actually deserves. That I can score an 800 on the verbal reasoning section of the GRE is not tied to my effort or my will. I like to read; I always have. It’s not work to me. My proficiency is morally arbitrary. And yet everyone will say of my accomplishments and accolades, “You deserve it.”
Really, though, this unsettled feeling notwithstanding, this is some stupid shit to complain about. I aced the GRE—again. It’s time to celebrate.
Also read:
Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 3
Psychologist K. Anders Ericsson’s central finding in his research on expert achievement is that what separates those who attain a merely sufficient level of proficiency in a performance domain from those who reach higher levels of excellence is the amount of time devoted over the course of training to deliberate practice. But, in a domain with criteria for success that can only be abstractly defined, like creative writing, what would constitute deliberate practice is difficult to define.
Start reading at part one.
But the question of what standards of success the instructor is to apply to students’ work, as well as the ones instructors will encourage the students to apply to each other’s work, has yet to be addressed. The skills students develop through practice at evaluating their own work will both be based on their evaluations of the works of others and be applied to them. The first step toward becoming a creative writer is recognizing how much one likes the writing of another. The work the student is initially exposed to will almost certainly have gone through a complex series of assessments, beginning with the author’s assessment of his own work, moving on to commenters and editors working on behalf of the author, then to editors working on behalf of publishers, and finally to the publishers themselves. Even upon publication, any given work is unlikely to be read by a majority of readers who appreciate the type of writing it represents until a critical threshold is reached, beyond which it becomes increasingly likely to be passed along as recommended reading. At some point in the process it may even reach the attention of critics and reviewers, who will themselves evaluate the work either positively or negatively. (This is leaving out the roles of branding and author reputation because they probably aren’t practicable skills.) Since feedback cannot be grounded in any absolute or easily measurable criteria, Ericsson advocates a “socially based definition of creativity” (330). And, since students develop their evaluative skills through internalizing and anticipating the evaluations of others, the choice of which workshop to attend is paramount. The student should seek out those most versed in and most appreciative of the type of writing he aspires to master.
Simply reading theoretical essays on poetry or storytelling, as Vakil has his students do, is probably far less effective than sampling a theorist’s or critic’s work and then trying to anticipate that evaluator’s response to a work he or she has written about. Some critics’ work lends itself to this type of exercise more readily than others’; those who focus on literary as opposed to political elements, and those who put more effort into using sound methods to ensure the validity of their psychological or sociological theories—if they must theorize—will be much more helpful than those who see each new work as an opportunity to reiterate their favorite ideas in a fresh context. It may be advisable, in other words, to concentrate on reviewers rather than critics and theorists. After having learned to anticipate the responses of a few reviewers whose work is influential, the student will be better equipped to evaluate his or her own work in terms of how it will be received in the social context that will be the final arbiter of success or failure.
Anticipation, as it allows for feedback, could form the basis for several types of practice exercises. Ericsson cites his own and others’ research demonstrating that chess players improve not as a function of how much time they spend playing chess but through studying past games between chess masters. “By trying to select the best move for each position of the game,” Ericsson writes, “and comparing their selected move to the actual move of the game, the players identify discrepancies where they must study the chess position more deeply to uncover the reason for the master’s move” (37). In a similar way, pausing in the process of reading to anticipate a successful author’s next move in a story or novel should offer an opportunity for creative writing students to compare their ideas with the author’s. Of course, a divergence between the reader’s idea for a next move and the one the author actually made need not be interpreted as a mistake on the reader’s part—the reader’s idea may even be better. However, in anticipating what will happen next in a story, the student is generating ideas and therefore getting practice in the area of productivity. And, whether or not the author’s ideas are better, the student will develop greater familiarity with the author’s methods through such active engagement with them. Finally, the students will be getting practice evaluating ideas as they compare their own to those of the author.
A possible objection to implementing this anticipatory reading method in a creative writing curriculum is that a student learning to anticipate an author’s moves would simply be learning to make moves like the ones that author makes—which amounts to reproduction, not creativity. Indeed, one of the theories Ericsson has explored to explain how expertise develops posits a sort of rote memorization of strategies and their proper application to a limited set of situations. “For a long time it was believed that experts acquired a large repertoire of patterns,” he explains, “and their superior performance could be attributed to simple pattern matching and recall of previously stored actions from memory in an effortless and automatic manner” (331). If expertise relies on memory and pattern recognition, though, then experts would fare no better in novel situations than non-experts. Ericsson has found just the opposite to be the case.
Superior expert performers in domains such as music, chess, and medicine can generate better actions than their less skilled peers even in situations they have never directly experienced. Expert performers have acquired refined mental representations that maintain access to relevant information about the situation and support more extensive, flexible reasoning to determine the appropriate actions demanded by the encountered situation. (331)
What creative writers would be developing through practice techniques such as anticipation-based reading likely goes beyond a simple accumulation of fixed strategies—a bigger bag of tricks appropriated from other authors. They would instead be developing a complex working model of storytelling, as well as a greater capacity for representing and manipulating the various aspects of their own stories in working memory.
Skepticism about whether literary writing of any sort can be taught—or learned in any mundane or systematic way—derives from a real and important insight: authors are judged not by how well they reproduce the formulas of poetry and storytelling but by how successful they are in reformulating the conventional techniques of the previous generation of writers. No one taught Cervantes his epic-absurd form of parody. No one taught Shakespeare how to explore the inner workings of his characters’ minds through monologues. No one taught Virginia Woolf how to shun external trappings and delve so exquisitely into the consciousness of her characters. Yet observations of where authors came to reside in relation to prevailing literary cultures don’t always offer clues to the mode of transportation. Woolf, for instance, wrote a great deal about the fashion for representing characters through references to their physical appearances and lists of their possessions in her reviews for the Times Literary Supplement. She didn’t develop her own approach oblivious of what she called “materialism”; fully understanding the method, she found it insufficient for what she hoped to accomplish with her own work. And she’d spent a lot of time in her youth reading Shakespeare, with those long, eminently revealing monologues (Wood 110). Supposing creative genius is born of mastery of conventions and techniques and not ignorance of or antipathy toward them, the emphasis on the works of established authors in creative writing pedagogy ceases to savor of hidebound conservatism.
The general pedagogical outline focused on practice devoted to productivity, as well as the general approach to reading based on anticipation, can be refined to accommodate any student’s proclivities or concerns. A student who wants to develop skill in describing characters’ physical appearances in a way that captures something of the essence of their personalities may begin by studying the work of authors from Charles Dickens to Saul Bellow. Though it’s difficult to imagine how such descriptions might be anticipated, the characters’ later development over the course of the plot does offer opportunities to test predictions. Coming away from studies of past works, the student need not be limited to exercises on blank screens or sheets of paper; practice might entail generating multiple ideas for describing some interesting individual he knows in real life, or describing multiple individuals he knows. He may set himself the task of coming up with a good description for everyone interviewed during the course of a television news program. He can practice describing random people who pass on a campus sidewalk, imagining details of their lives and personalities, or characters in shows and movies. By the time the aspiring author is sitting down to write about a character in a story or novel of her own, she will all but automatically produce a number of possible strategies for making that character come alive through words, increasing the likelihood that she’ll light on one that resonates strongly, first with her own memories and emotions and then with those of her readers. And, if Simonton’s theory has any validity, the works produced according to this strategy need not resemble each other any more than one species resembles another.
All of the conventional elements of craft—character, plot, theme, dialogue, point of view, and even higher-order dimensions like voice—readily lend themselves to this qualitative approach to practice. A creative writing instructor may coach a student who wants to be better able to devise compelling plots to read stories recognized as excelling in that dimension, encouraging her to pause along the way to write a list of possible complications, twists, and resolutions to compare with the ones she’ll eventually discover in the actual text. If the student fails to anticipate the author’s moves, she can then compare her ideas with the author’s, giving her a deeper sense of why one works better than the others. She may even practice anticipating the plots of television shows and movies, or try to conceive of how stories in the news might be rendered as fictional plots. To practice describing settings, students could be encouraged to come up with multiple metaphors and similes based on one set and then another of the physical features they observe in real places. How many ways, a student may be prompted, can you have characters exchange the same basic information in a dialogue? Which ones reveal more of the characters’ personalities? Which ones most effectively reprise and develop the themes you’re working with? Any single idea generated in these practice sessions is unlikely to represent a significant breakthrough. But the more ideas a writer generates, the better her chances of discovering one that strikes her as likely to garner wider recognition for its superior quality. The productivity approach can also be applied to revision, where it would consist of the writer identifying weak passages or scenes in an early draft and generating several new versions of each one so that a single, best version can be chosen for later drafts.
What I’ve attempted here is a sketch of one possible approach to teaching. It seems that, with so many worrying about the future of literature and fearing that the growing influence of workshops will lead to insularity and standardization, too few teachers are coming forward with ideas on how to help their students improve, as if whatever methods they admitted to using would inevitably lend credence to the image of workshops as assembly lines for the production of mediocre and tragically uninspired poems and short stories. But, if creative writing is in danger of being standardized into obsolescence, the publishing industry is the more likely culprit, as every starry-eyed would-be author knows full well that publication is the one irreducible factor underlying professional legitimacy. And research has pretty thoroughly ruled out the notion that familiarity with the techniques of the masters in any domain inevitably precludes original, truly creative thinking. The general outline for practice based on productivity and evaluation can be personalized and refined in countless ways, and students can be counted on to bring an endless variety of experiences and perspectives to workshops, variety that would be difficult, to say the least, to eradicate completely in the span of the two or three years allotted to MFA programs.
The productivity and evaluation model for creative writing pedagogy also holds a great deal of potential for further development. For instance, a survey of successful poets and fiction writers asking them how they practice—after providing them with a précis of Ericsson’s and Simonton’s findings on what constitutes practice—may lead to the development of an enormously useful and surprising repertoire of training techniques. How many authors engage in activities they think of as simple games or distractions but that in fact contribute to their ability to write engaging and moving stories or poems? Ultimately, though, the discovery of increasingly effective methods will rely on rigorously designed research comparing approaches to one another. The popular rankings of MFA programs based on the professional success of their graduates are a step in the direction of this type of research, but they suffer from the rather serious flaw of sampling bias, owing to higher-ranking schools having the advantage of larger applicant pools. At this stage, though, even the subjective ratings of individuals experimenting with several practice techniques would be a useful guide for adjusting and refining teaching methods.
Applying the expert performance framework developed by Ericsson, Simonton, Csikszentmihalyi, and their colleagues to creative writing pedagogy would probably not drastically revolutionize teaching and writing practices. It would rather represent a shift in focus from the evaluation-heavy workshop model to methods for generating ideas. And of course activities like brainstorming and free writing are as old as the hills. What may be new is the conception of these invention strategies as a form of practice to be engaged in for the purpose of developing skills, and the idea that this practice can and should be engaged in independent of any given writing project. Even if a writing student isn’t working on a story or novel, even if he doesn’t have an idea for one yet, he should still be practicing to be a better storyteller or novelist. It’s probably the case, too, that many or most professional writers already habitually engage in activities fitting the parameters of practice laid out by the expert performance model. Such activities probably already play at least some role in classrooms. Since the basic framework can be tailored to any individual’s interests, passions, weaknesses, and strengths, and since it stresses the importance of producing a large quantity of new ideas, it’s not inconceivable that more of this type of practice will lead to greater, rather than less, originality.
Also read:
WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?
HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT
PUTTING DOWN THE PEN: HOW SCHOOL TEACHES US THE WORST POSSIBLE WAY TO READ LITERATURE
Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 2
Psychologist K. Anders Ericsson’s central finding in his research on expert achievement is that what separates those who attain a merely sufficient level of proficiency in a performance domain from those who reach higher levels of excellence is the amount of time devoted over the course of training to deliberate practice. But, in a domain with criteria for success that can only be abstractly defined, like creative writing, what would constitute deliberate practice is difficult to define.
Of course, productivity alone cannot account for impactful ideas and works; at some point the most promising ideas must be culled from among the multitude. Since foresight seems to play little if any role in the process, Simonton, following D. T. Campbell, describes it as one of “blind variation and selective retention” (310). Simonton thus theorizes that creativity is Darwinian. Innovative and valuable ideas are often born of non-linear or “divergent” thinking, which means their future use may not be at all apparent when they are originally conceived. So Csikszentmihalyi follows his advice to produce multiple ideas with the suggestion, “Try to produce unlikely ideas” (369). Ignoring future utility, then, seems to be important for the creative process, at least until the stage is reached when one should “Shift from openness to closure.” Csikszentmihalyi explains:
Good scientists, like good artists, must let their minds roam playfully or they will not discover new facts, new patterns, new relationships. At the same time, they must also be able to evaluate critically every novelty they encounter, forget immediately the spurious ones, and then concentrate their minds on developing and realizing the few that are promising. (361)
So, two sets of skills appear to lie at the heart of creative endeavors, and they suggest themselves as focal areas for those hoping to build on their talents. In the domain of creative writing, it would seem the most important things to practice are producing multiple and unlikely ideas, and evaluating those ideas to see which are the most viable.
The workshop method prevalent in graduate writing programs probably involves at least some degree of practice in both of these skills. Novelist and teacher Ardashir Vakil, in his thoughtful and candid essay, “Teaching Creative Writing,” outlines what happens in his classrooms.
In the first part of the workshop students are given writing exercises. These vary from the most basic—write about a painful experience from your childhood—to more elaborate games in which you get pairs to tell stories to each other and then write the other person’s story with some bits invented. Then we look at texts by established writers and try to analyse what makes them work—what has the writer done?—in terms of character, language, voice and structure to capture our attention and how have they brought forth a visceral emotional response from the reader. (158)
These exercises amount to efforts to generate ideas by taking inspiration from life, the writer’s own or someone else’s, or from the work of successful writers. The analysis of noteworthy texts also shades into the practice of evaluating ideas and methods. Already, though, it seems the focus leans more toward evaluation, the ability to recognize useful ideas, than toward productivity. And this emphasis becomes even more pronounced as the course progresses.
Along the way, we read essays, interviews and extracts by established writers, reflecting on their practice. Sometimes I use an essay by Freud, Bakhtin or Benjamin on theories of storytelling. Finally, there is a group workshop in which we read and discuss each others’ writing. Each week, someone offers up a story or a few poems or an extract to the group, who go away and read, annotate and comment on it. (158)
Though the writers’ reflections on their practices may describe methods for generating ideas, those methods don’t seem to comprise an integral part of the class. Vakil reports that “with minor variations” this approach is common to creative writing programs all over England and America.
Before dismissing any of these practices, though, it is important to note that creative writing must rely on skills beyond the production and assessment of random ideas. One could have a wonderful idea for a story but not have the language or storytelling skills necessary to convey it clearly and movingly. Or one may have a great idea for how to string words together into a beautiful sentence but lack any sense of how to fit it into a larger plot or poem. In a critique of the blind-variation and selective-retention model, Ericsson points out that productivity in terms of published works, which is what Simonton used to arrive at his equal odds rule, takes a certain level of expertise for granted. Whether students learn to develop multiple new ideas by writing down each other’s stories or not, it is likely important that they get practice with the basic skills of stringing words together into narratives. As Ericsson explains, “Unless the individual has the technical mastery to develop his or her ideas or products fully, it is unlikely that judges will be able to recognize their value and potential” (330). Though mere productivity may be what separates the professional from the game-changer, to get to the level of expertise required to reliably produce professional-grade work of any quality takes more than a bunch of blindly conceived ideas. As if defending Vakil’s assigned reading of “established writers,” Ericsson argues that “anyone interested in being able to anticipate better what is valued by experts in a domain should study the teachings and recognized masterpieces of master teachers in that domain” (330).
Part of the disagreement between Simonton and Ericsson stems from their focusing on different levels of the creative process. In the domain of creative writing, even the skills underlying “technical mastery” are open to revision. Writers can experiment—and certainly have experimented—with every aspect of storytelling, from word choice and syntax at the fundamental level to perspective and voice at higher-order levels. The same is true for poetry. Assuming the equal odds principle can be extrapolated to each of these levels, teachers of creative writing might view their role as centering on assessing their students’ strengths and weaknesses and then encouraging them to practice in the areas where their skills are weakest. “Develop what you lack” (360) is another of Csikszentmihalyi’s prescriptions. The teacher also has at her disposal the evaluations of each student’s classmates, which she might collate into a more unified assessment. Rather than focusing solely on a student’s text, then, the teacher could ask for the class’s impressions of the writer’s skills as evidenced by the text in relation to others that student has offered in the past. Once a few weaknesses have been agreed upon, the student can then devote practice sessions to generating multiple ideas in those areas and subsequently evaluating them with reference to the works of successful authors.
The general outline for creative writing workshops based on the expert performance framework might entail the following: Get the workshop students to provide assessments, based on each of the texts submitted for the workshop, of their colleagues’ strengths and weaknesses to supplement those provided by the instructor. If a student learns from such assessments that, say, his plots are engaging but his characters are eminently forgettable, he can then devote practice sessions to characterization. These sessions should consist of 1) studying the work of authors particularly strong in this area, 2) brainstorming exercises in which the student generates a large number of ideas in the area, and 3) an exercise at a later time involving a comparative assessment of the ideas produced in the prior stage. This general formula can be applied to developing skills in any aspect of creative writing, from word choice and syntax to plot and perspective. As the workshop progresses and the student receives more feedback from the evaluations, he will get better at anticipating the responses of the instructor and his classmates, thus honing the evaluative skills necessary for the third practice phase.
The precedence of quantity of ideas over quality may be something of a dirty little secret for those with romantic conceptions of creative writing. Probably owing to these romantic or mystical notions about creativity, workshops focus on assessment and evaluation to the exclusion of productivity. One objection to applying the productivity approach within the expert performance framework, likely to be leveled by those with romantic leanings, is that such methods ignore the emotional aspects of creative writing. Where in the process of developing a slew of random words and images and characters does the heart come in? Many writers report that they are routinely struck with ideas and characters—some of whom are inspired by real people—that they simply have to write about. And these ideas that come equipped with preformed emotional resonance are the same ones that end up striking a chord with readers. Applying a formula to the process of conceiving ideas, even assuming it doesn’t necessarily lead to formulaic works, might simply crowd out or somehow dissipate the emotional pull of these moments of true inspiration.
This account may, however, do more to support the expert performance methods than to cast doubt on them. For one thing, it leaves unanswered the question of how many ideas the writer was developing when the big inspiration struck. How many other real people had the author considered as possible characters before lighting on the one deemed perfect? Like the dreamer who becomes convinced of her dreams’ prophetic powers by dint of forgetting the much greater proportion of dreams that don’t come true, these writers are likely discounting a large number of ideas they generated before settling on the one with the greatest potential. Far from being left out of the training methods, emotional resonance can just as easily take ascendant priority as an evaluative criterion. It can even play a central role in the other two phases of practice.
If the student reads a passage from another author’s work that evokes a strong emotion, she can then analyze the writing to see what made it so powerful. Also, since the emotion the passage evoked will likely prime the reader’s mind to recall experiences that aroused similar feelings—memories which resonate with the passage—it offers an opportunity for brainstorming ideas linked with those feelings, which will in turn have greater potential for evoking them in other readers. And of course the student need not limit herself to memories of actual events; she can elaborate on those events or extemporize to come up with completely new scenes. The fear that a cognitive exercise must preclude any emotional component is based on a false dichotomy between feeling and thought and the assumption that emotions are locked off from thinking and thus not amenable to deliberate training. Any good actor can attest that this assumption is wrong.
Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 1
Psychologist K. Anders Ericsson’s central finding in his research on expert achievement is that what separates those who attain a merely sufficient level of proficiency in a performance domain from those who reach higher levels of excellence is the amount of time devoted over the course of training to deliberate practice. But, in a domain with criteria for success that can only be abstractly defined, like creative writing, what would constitute deliberate practice is difficult to define.
Much of the pedagogy in creative writing workshops derives solely from tradition and rests on the assumption that the mind of the talented writer will adopt its own learned practices in the process of writing. The difficult question of whether mastery, or even expertise, can be inculcated through any process of instruction, along with the long-standing tradition of assuming the answer is an only somewhat qualified “no,” constitutes just one of several impediments to developing an empirically supported set of teaching methods for aspiring writers. Even the phrase “empirically supported” conjures for many the specter of formula, which they fear students will be encouraged to apply to their writing, robbing the products of some mysterious and ineffable quality of freshness and spontaneity. Since the criterion of originality is only one of several that are much easier to recognize than they are to define, the biggest hindrance to moving traditional workshop pedagogy onto firmer empirical ground may be the intractability of the question of what evaluative standards should be applied to student writing. Psychologist K. Anders Ericsson’s central finding in his research on expert achievement is that what separates those who attain a merely sufficient level of proficiency in a performance domain from those who reach higher levels of excellence is the amount of time devoted over the course of training to deliberate practice. But, in a domain with criteria for success that can only be abstractly defined, like creative writing, what would constitute deliberate practice is as difficult to describe in any detail as the standards by which work in that domain is evaluated.
Paul Kezle, in a review article whose title, “What Creative Writing Pedagogy Might Be,” promises more than its conclusions deliver, writes, “The Iowa Workshop model originally laid out by Paul Engle stands as the pillar of origination for all debate about creative writing pedagogy” (127). This model, which Kezle describes as one of “top-down apprenticeship,” involves a published author who’s achieved some level of acclaim—usually commensurate with the prestige of the school housing the program—and whose teaching method consists of little more than moderating evaluative class discussions on each student’s work in turn. The appeal of this method is two-fold. As Shirley Geok-lin Lim explains, it “reliev[es] the teacher of the necessity to offer teacher feedback to students’ writing, through editing, commentary, and other one-to-one, labor intensive, authority-based evaluation” (81), leaving the teacher more time to write his or her own work as the students essentially teach each other and, hopefully, themselves. This aspect of self-teaching is the second main appeal of the workshop method—it bypasses the pesky issue of whether creative writing can be taught, letting the gates of the sacred citadel of creative talent remain closed. Furthermore, as is made inescapably clear in Mark McGurl’s book The Program Era, which tracks the burgeoning of creative writing programs as their numbers go from fewer than eighty in 1975 to nearly nine hundred today, the method works, at least in terms of its own proliferation.
But what, beyond enrolling in a workshop, can a writer do to get better at writing? The answer to this question, assuming it can be reliably applied to other writers, holds the key to answering the question of what creative writing teachers can do to help their students improve. Lim, along with many other scholars and teachers with backgrounds in composition, suggests that pedagogy needs to get beyond “lore,” by which she means “the ad hoc strategies composing what today is widely accepted as standard workshop technique” (79). Unfortunately, the direction these theorists take is forbiddingly abstruse, focusing on issues of gender and ethnic identity in the classroom, or the negotiation of power roles (see Russel 109 for a review). Their prescription for creative writing pedagogy boils down to an injunction to introduce students to poststructuralist ways of thinking and writing. An example sentence from Lim will suffice to show why implementing this approach would be impractical:
As Kalamaras has argued, however, collective identities, socially constructed, historically circumscribed, uniquely experienced, call for a “socially responsible” engagement, not only on the level of theme and content but particularly on that of language awareness, whether of oral or dialectic-orthographic “voice,” lexical choice, particular idiolect features, linguistic registers, and what Mikhail Bakhtin called heteroglossic characteristics. (86)
Assuming the goal is not to help marginalized individuals find a voice and communicate effectively and expressively in society but rather to help a group of students demonstrating some degree of both talent and passion in the realm of creative writing to reach the highest levels of success possible—or even simply to succeed in finding a way to get paid for doing what they love—arcane linguistic theories are unlikely to be of much use. (Whether they’re of any real use even for the prior goal is debatable.)
Conceiving of creative writing as the product of a type of performance demanding several discrete skills, at least some of which are improvable through training, brings it into a realm that has been explored with increasing comprehensiveness and with ever more refined methods by psychologists. While University of Chicago professor Mihaly Csikszentmihalyi writes about the large group of highly successful people in creative fields interviewed for his book Creativity: Flow and the Psychology of Discovery and Invention as if they were a breed apart, even going so far as to devote an entire chapter to “The Creative Personality,” and in so doing reinforcing the idea that creative talent is something one is simply born with, he does manage to provide several potentially useful strategies for “Enhancing Personal Creativity” in a chapter by that name. “Just as a physician may look at the physical habits of the most healthy individuals,” Csikszentmihalyi writes, “to find in them a prescription that will help everyone else to be more healthy, so we may extract some useful ideas from the lives of a few creative persons about how to enrich the lives of everyone else” (343). The aspiring creative writer must understand, though, that “to move from personal to cultural creativity one needs talent, training, and an enormous dose of good luck” (344). This equation, as it suggests only one variable amenable to deliberate effort, offers a refinement to the question of what an effective creative writing pedagogy might entail. How does one train to be a better writer? Training as a determining factor underlying exceptional accomplishments is underscored by Ericsson’s finding that “amount of experience in a domain is often a weak predictor of performance” (20). Simply writing poems and stories may not be enough to ensure success in the realm of creative writing, especially considering the intense competition evidenced by those nearly nine hundred MFA programs.
Because writing stories and poems seldom entails a performance in real time, but instead involves multiple opportunities for inspiration and revision, the distinction Ericsson found between simply engaging in an activity and training for it may not be as stark for creative writing. Writing and training may overlap if the tasks involved in writing meet the requirements for effective training. Having identified deliberate practice as the most important predictor of expert performance, Ericsson breaks the concept down into three elements: “a well-defined task with an appropriate level of difficulty for the particular individual, informative feedback, and opportunities for repetition and corrections of errors” (21). Deliberate practice requires immediate feedback on performance. In a sense, success can be said to multiply in direct proportion to the accumulation of past failures. But how is a poet to know if the line she’s just written constitutes a success or failure? How does a novelist know if a scene or a chapter bears comparison to the greats of literature?
One possible way to get around the problem of indefinable evaluative standards is to focus on quantity instead of quality. Ericsson’s colleague, Dean Simonton, studies people in various fields in which innovation is highly valued in an attempt to discover what separates those who exhibit “received expertise,” mastering and carrying on dominant traditions in arts or sciences, from those who show “creative expertise” (228) by transforming or advancing those traditions. Contrary to the conventional view that some individuals possess a finely attuned sense of how to go about producing a successful creative work, Simonton finds that what he calls “the equal odds rule” holds in every creative field he’s studied. What the rule suggests is “that quality correlates positively with quantity, so that creativity becomes a linear statistical function of productivity” (235). Individuals working in creative fields can never be sure which of their works will have an impact, so the creators who have the greatest impact tend to be those who produce the greatest number of works. Simonton has discovered that this rule holds at every stage in the individual’s lifespan, leading him to conclude that success derives more from productivity and playing the odds than from sure-footed and far-seeing genius. “The odds of hitting a bull’s eye,” he writes, “is a probabilistic function of the number of shots” (234). Csikszentmihalyi discovered a similar quantitative principle among the creative people he surveyed; part of creativity, he suggests, is having multiple ideas where only one seems necessary, leading him to the prescription for enhancing personal creativity, “Produce as many ideas as possible” (368).
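To make the “linear statistical function” claim concrete, here is a minimal sketch, on the simplifying assumption (mine, not a full statement of Simonton’s model) that each finished work has the same small, independent chance p of becoming a hit:

```latex
% A minimal reading of the equal odds rule under a simplifying assumption:
% treat each of N finished works as an independent Bernoulli trial with the
% same hit probability p (p is an illustrative stand-in, not a value
% Simonton reports).
\[
  \mathbb{E}[\text{hits}] \;=\; N p,
  \qquad
  \frac{\mathbb{E}[\text{hits}]}{N} \;=\; p .
\]
% Expected hits grow linearly with output N, while the hit rate stays flat,
% so the most prolific creators land the most hits simply by taking more shots.
```

On this reading, a writer who produces three times as many works can expect roughly three times as many hits, even though her odds on any single work are no better than anyone else’s, which is just another way of saying that quality tracks quantity.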