ALL OF US. Not just the "smart" ones, not just the "unintelligent" ones; not just the well-educated, well-read, uneducated, or self-educated; not just the successful or the unsuccessful. ALL OF US, myself included.
Every single one of us is not only susceptible to, but regularly guilty of, the quirks of the human condition: mental shortcuts, common biases, habitual behavior, unexamined beliefs and assumptions, and more. I am not saying the human condition is a bad thing, not at all, but the good comes bundled with the same limitations for every single one of us in the way our brains function.
But what does this have to do with the best-selling author Malcolm Gladwell?
He has a new book out: David and Goliath (2013). And because he is a best-selling author and he has released a new book, it has naturally been reviewed in big-name publications.
Here is why I am re-thinking Malcolm Gladwell: I drank the Kool-Aid; I bought in. But after reading recent reviews of his books (and going back to earlier reviews), the Kool-Aid has soured, and I am re-thinking my fandom and paying much closer attention to how I evaluate his work.
Drinking the Kool-Aid:
I was a big fan of The Tipping Point back in 2002. I discussed it with friends and colleagues and even gave the book as a gift to a couple of people.
Then, during business school, my professors regularly assigned excerpts from both The Tipping Point (2000) and Blink (2005) as part of our course readings. I was riveted, and went beyond the assignments to read both books in their entirety.
When Outliers (2008) came out, it was read, discussed, and recommended regularly among my friends from business school and me.
When I got an iPad, I even went so far as to repurchase these books so I would have digital versions.
Also, concepts and ideas from Gladwell's books regularly came up (and still do) in the many client settings in which I have worked as a consultant. I have referenced his work in class papers, client proposals, and more.
Re-thinking the Kool-Aid:
Then came David and Goliath and its reviews, which led me back to reviews of his previously published books.
I won't repeat everything here; that is what the links are for, and I see no value in restating the authors' points verbatim. I am simply sharing what I have read that led to my re-evaluation of Gladwell's work, in case you are interested in doing the same.
As a consultant, I pride myself on trying to apply scientific findings that hold up in real-world work settings and on avoiding popular trends; as a PhD student, I pride myself on critical thinking; and as a person, I pride myself on trying to notice where mental shortcuts and biases might be influencing my own decisions, beliefs, and actions.
I consider the following links excellent food for thought, especially if you are a follower of Gladwell and a believer in his theories (I have also copied the content below for those who prefer not to follow links; simply scroll down). I won't purport to tell anyone what to think, but I believe these pieces are worth reading and well worth thinking about.
Links to articles:
Blog post by Christopher Chabris (which expands on and includes his recent David and Goliath book review in the Wall Street Journal):
http://blog.chabris.com/2013/10/why-malcolm-gladwell-matters-and-why.html?m=1
Steven Pinker's review of What the Dog Saw for The New York Times in 2009:
http://www.nytimes.com/2009/11/15/books/review/Pinker-t.html?pagewanted=all&_r=1&
Richard Posner's review of Blink for The New Republic in 2005:
http://www.law.uchicago.edu/node/3360
Full copy of articles:
Blog post by Christopher Chabris (which expands on and includes his recent David and Goliath book review in the Wall Street Journal):
http://blog.chabris.com/2013/10/why-malcolm-gladwell-matters-and-why.html?m=1
Friday, October 14, 2013
Why Malcolm Gladwell Matters (and Why That's Unfortunate)
Malcolm Gladwell, the New Yorker writer and perennial bestselling author, has a new book out. It's called David and Goliath: Misfits, Underdogs, and the Art of Battling Giants. I reviewed it (PDF) in last weekend's edition of The Wall Street Journal. (Other reviews have appeared in The Atlantic, The New York Times, The Guardian, and The Millions, to name a few.) Even though the WSJ editors kindly gave me about 2500 words to go into depth about the book, there were many things I did not have space to discuss or elaborate on. This post contains some additional thoughts about Malcolm Gladwell, David and Goliath, the general modus operandi of his writing, and how he and others conceive of what he is doing.
I noticed some interesting reactions to my review. Some people said I was a jealous hater. One even implied that as a cognitive scientist (rather than a neuroscientist) I somehow lacked the capacity or credibility to criticize anyone's logic or adherence to evidence. A more serious response, of which I saw several instances, came from people who said in essence "Why do you take Gladwell so seriously—it's obvious he is just an entertainer." For example, here's Jason Kottke:
I enjoy Gladwell's writing and am able to take it with the proper portion of salt ... I read (and write about) most pop science as science fiction: good for thinking about things in novel ways but not so great for basing your cancer treatment on.
The Freakonomics blog reviewer said much the same thing:
... critics have primarily focused on whether the argument they think Gladwell is making is valid. I am going to argue that this approach misses the fact that the stories Gladwell tells are simply well worth reading.
I say good for you to everyone who doesn't take Gladwell seriously. But the reason I take him seriously is because I take him and his publisher at their word. On their face, many of the assertions and conclusions in Gladwell's books are clearly meant to describe lawful regularities about the way human mental life and the human social world work. And this has always been the case with his writing.
In The Tipping Point (2000), Gladwell wrote of sociological regularities and even coined new ones, like "The Law of the Few." Calling patterns of behavior "laws" is a basic way of signaling that they are robust empirical regularities. Laws of human behavior aren't as mathematically precise as laws of physics, but asserting one is about the strongest claim that can be made in social science. To say something is a law is to say that it applies with (near) universality and can be used to predict, in advance, with a fair degree of certainty, what will happen in a situation. It says this is truth you can believe in, and act on to your benefit.
A blurb from the publisher of David and Goliath avers: "The author of Outliers explores the hidden rules governing relationships between the mighty and the weak, upending prevailing wisdom as he goes." A hidden rule is a counterintuitive, causal mechanism behind the workings of the world. If you say you are exploring hidden rules that govern relationships, you are promising to explicate social science. But we don't have to take the publisher's word for it. Here's the author himself, in the book, stating one of his theses:
The fact of being an underdog changes people in ways that we often fail to appreciate. It opens doors, and creates opportunities and educates and permits things that might otherwise have seemed unthinkable.
The emphasis on changes is in the original (at least in the version of the quote I saw on Gladwell's Facebook page). In an excerpt published in The Guardian, he wrote, "If you take away the gift of reading, you create the gift of listening." I added the emphasis on create to highlight the fact that Gladwell is here claiming a causal rule about the mind and brain, namely that having dyslexia causes one to become a better listener (something he says made superlawyer David Boies so successful).
I've gone on at length with these examples because I think they also run counter to another point I have seen made about Gladwell's writings recently: That he does nothing more than restate the obvious or banal. I couldn't disagree more here. Indeed, to his credit, what he writes about is the opposite of trivial. If Gladwell is right in his claims, we have all been acting unethically by watching professional football, and the sport will go the way of dogfighting, or at best boxing. If he is right about basketball, thousands of teams have been employing bad strategies for no good reason. If he is right about dyslexia, the world would literally be a worse place if everyone were able to learn how to read with ease, because we would lose the geniuses that dyslexia (and other "desirable difficulties") create. If he was right about how beliefs and fads spread through social networks in The Tipping Point, consumer marketing would have changed greatly in the years since. Actually, it did: firms spent great effort trying to find "influentials" and buy their influence, even though there was never good causal evidence that this would work. (See Duncan Watts's brilliant book Everything is Obvious, Once You Know the Answer—reviewed here—to understand why.) If Gladwell is right, also in The Tipping Point, about how much news anchors can influence our votes by deploying their smiles for and against their preferred candidates, then democracy as we know it is a charade (and not for the reasons usually given, but for the completely unsupported reason that subliminal persuaders can create any electoral results they want). And so on. These ideas are far from obvious, self-evident, or trivial. They do have the property of engaging a hindsight bias, of triggering a pleasurable rush of counterintuition, of seeming correct once you have learned about them. But an idea that people feel like they already knew is much different from an idea people really did know all along.
Janet Maslin's New York Times review of David and Goliath begins by succinctly stating the value proposition that Gladwell's work offers to his readers:
The world becomes less complicated with a Malcolm Gladwell book in hand. Mr. Gladwell raises questions — should David have won his fight with Goliath? — that are reassuringly clear even before they are answered. His answers are just tricky enough to suggest that the reader has learned something, regardless of whether that’s true.
(I would only add that the world becomes not just less complicated but better, which leaves the reader a little bit happier about life.) In a recent interview with The Guardian, Gladwell as much as agreed: "If my books appear to a reader to be oversimplified, then you shouldn't read them: you're not the audience!"
I don't think the main flaw is oversimplification (though that is a problem: Einstein was right when he—supposedly—advised that things be made as simple as possible, but no simpler). As I wrote in my own review, the main flaw is a lack of logic and proper evidence in the argumentation. But consider what Gladwell's quote means. He is saying that if you understand his topics enough to see what he is doing wrong, then you are not the reader he wants. At a stroke he has said that anyone equipped to properly review his work should not be reading it. How convenient! Those who are left are only those who do not think the material is oversimplified.
Who are those people? They are the readers who will take Gladwell's laws, rules, and causal theories seriously; they will tweet them to the world, preach them to their underlings and colleagues, write them up in their own books and articles (David Brooks relied on Gladwell's claims more than once in his last book), and let them infiltrate their own decision-making processes. These are the people who will learn to trust their guts (Blink), search out and lavish attention and money on fictitious "influencers" (The Tipping Point), celebrate neurological problems rather than treat them (David and Goliath), and fail to pay attention to talent and potential because they think personal triumph results just from luck and hard work (Outliers). It doesn't matter if these are misreadings or imprecise readings of what Gladwell is saying in these books—they are common readings, and I think they are more common among exactly those readers Gladwell says are his audience.
Not backing down, Gladwell said on the Brian Lehrer show that he really doesn't care about logic, evidence, and truth—or that he thinks discussions of the concerns of "academic research" in the sciences, i.e., logic, evidence, and truth—are "inaccessible" to his lowly readers:
I am a story-teller, and I look to academic research … for ways of augmenting story-telling. The reason I don’t do things their way is because their way has a cost: it makes their writing inaccessible. If you are someone who has as their goal ... to reach a lay audience ... you can't do it their way.
In this and another quote, from his interview in The Telegraph, about what readers "are indifferent to," the condescension and arrogance are in full view:
And as I’ve written more books I’ve realised there are certain things that writers and critics prize, and readers don’t. So we’re obsessed with things like coherence, consistency, neatness of argument. Readers are indifferent to those things.
Note, incidentally, that he mentions coherence, consistency, and neatness. But not correctness, or proper evidence. Perhaps he thinks that these are highfalutin cares for writers and critics, or perhaps he is some kind of postmodernist for whom they don't even exist in any cognizable form. In any case, I do not agree with Gladwell's implication that accuracy and logic are incompatible with entertainment. If anyone could make accurate and logical discussion of science entertaining, it is Malcolm Gladwell.
Perhaps ... perhaps I am the one who is naive, but I was honestly very surprised by these quotes. I had thought Gladwell was inadvertently misunderstanding the science he was writing about, and making sincere mistakes in the service of coming up with ever more "Gladwellian" insights to serve his audience. But according to his own account, he knows exactly what he is doing, and not only that, he thinks it is the right thing to do. Is there no sense of ethics that requires more fidelity to truth, especially when your audience is so vast—and, by your own admission, so benighted—as to need oversimplification and to be unmoved by little things like consistency and coherence? I think a higher ethic of communication should apply here, not a lower standard.
This brings me back to the question of why Gladwell matters so much. Why am I, an academic who is supposed to be keeping his head down and toiling away on inaccessible stuff, spending so much time on reading his interviews, reviewing his book, and writing this blog post? What Malcolm Gladwell says matters because, whether academics like it or not, he is incredibly influential.
As Gladwell himself might put it: "We tend to think that people who write popular books don't have much influence. But we are wrong." Sure, Gladwell has huge sales figures and is said to command big speaking fees, and his TED talks are among the most watched. But James Patterson has huge sales too, and he isn't driving public opinion or belief. I know Gladwell has influence for multiple reasons. One is that even highly-educated people in leadership positions in academia—a field where I have experience—are sometimes more familiar with and more likely to cite Gladwell's writings than those of the top scholars in their own fields, even when those top scholars have put their ideas into trade-book form like Gladwell does.
Another data point: David and Goliath has only been out for a few days, but already there's an article online about its "business lessons." A sample assertion:
Gladwell proves that not only do many successful people have dyslexia, but that they have become successful in large part because of having to deal with their difficulty. Those diagnosed with dyslexia are forced to explore other activities and learn new skills that they may have otherwise pursued.
Of course this is nonsense—there is no "proof" of anything in this book, much less a proof that dyslexia causes success. I wonder if the author of this article even has an idea what proper evidence in support of these assertions would be, or if he knows that these kinds of assertions cannot be "proved."
One final indicator of Malcolm Gladwell's influence—and I'll be upfront and say this is an utterly non-scientific and imprecise methodology—that suggests why he matters. I Googled the phrases "Malcolm Gladwell proved" and "Malcolm Gladwell showed" and compared the results to the similar "Steven Pinker proved" and "Steven Pinker showed" (adding in the results of redoing the Pinker search with the incorrect "Stephen"). I chose Steven Pinker not because he is an academic, but because he has published a lot of bestselling books and widely-read essays and is considered a leading public intellectual, like Gladwell. Pinker is surely much more influential than most other academics. It just so happens that he published a critical review of Gladwell's previous book—but this also is an indicator of the fact that Pinker chooses to engage the public rather than just his professional colleagues. The results, in total number of hits:
Gladwell: proved 5300, showed 19200 = 24500 total
Pinker: proved 9, showed 625 = 634 total
So the total influence ratio as measured by this crude technique is 24500/634, or over 38-to-1 in favor of Gladwell. I wasn't expecting it to be nearly this high myself. (Interestingly, those "influenced" by Pinker are only 9/634, or 1.4% likely to think he "proved" something as opposed to the arguably more correct "showed" it. Gladwell's influencees are 5300/24500 or 21.6% likely to think their influencer "proved" something.) Refining the searches, adding "according to Gladwell" versus "according to Pinker" and so on will change the numbers, but I doubt enough corrections will significantly redress a 38:1 difference.
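As a quick sanity check on this back-of-the-envelope calculation, here is a minimal Python sketch (not part of Chabris's original post) that reproduces the ratio and the "proved" shares from the hit counts reported above:

```python
# Reproduce the rough "influence ratio" arithmetic from the reported hit counts.
gladwell = {"proved": 5300, "showed": 19200}
pinker = {"proved": 9, "showed": 625}

gladwell_total = sum(gladwell.values())   # 24500
pinker_total = sum(pinker.values())       # 634

influence_ratio = gladwell_total / pinker_total              # about 38.6 to 1
gladwell_proved_share = gladwell["proved"] / gladwell_total  # about 21.6%
pinker_proved_share = pinker["proved"] / pinker_total        # about 1.4%

print(f"Influence ratio: {influence_ratio:.1f} : 1")
print(f"Gladwell 'proved' share: {gladwell_proved_share:.1%}")
print(f"Pinker 'proved' share: {pinker_proved_share:.1%}")
```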
When someone with this much influence on what people seem to really believe (as indexed by my dashed-off method) says that he is just a storyteller who just uses research to "augment" the stories—who places the stories first and the science in a supporting role, rather than the other way around—he's essentially placing his work in the category of inspirational books like The Secret. As Dan Simons and I noted in a New York Times essay, such books sprinkle in references and allusions to science as a rhetorical strategy. Accessorizing your otherwise inconsistent or incoherent story-based argument with pieces of science is a profitable rhetorical strategy because references to science are crucial touchpoints that help readers maintain their default instinct to believe what they are being told. They help because when readers see "science" they can suppress any skepticism that might be bubbling up in response to the inconsistencies and contradictions.
In his Telegraph interview, Gladwell again played down the seriousness of his own ideas: "The mistake is to think these books are ends in themselves. My books are gateway drugs – they lead you to the hard stuff." And David and Goliath does cite scholarly works, books and journal articles, and journalism, in its footnotes and endnotes. But I wonder how many of its readers will follow those links, as compared to the number who will take its categorical claims at face value. And of those that do follow the links, how many will realize that many of the most important links are missing?
This leads to my last topic, the psychology experiment Gladwell deploys in David and Goliath to explain what he means by "desirable difficulties." The difficulties he talks about are serious challenges, like dyslexia or the death of a parent during one's childhood. But the experiment is a 40-person study on Princeton students who solved three mathematical reasoning problems presented in either a normal typeface or a difficult-to-read typeface. Counterintuitively, the group that read in a difficult typeface scored higher on the reasoning problems than the group that read in a normal typeface.
In my review, I criticized Gladwell for describing this experiment at length without also mentioning that a replication attempt with a much larger and more representative sample of subjects did not find an advantage for difficult typefaces. One of the original study's authors wrote to me to argue that his effect is robust when the test questions are at an appropriate level of difficulty for the participants in the experiment, and that his effect has in fact been replicated “conceptually” by other researchers. However, I cannot find any successful direct replications—repetitions of the experiment that use the same methods and get the same results—and direct replication is the evidence that I believe is most relevant.
This may be an interesting controversy for cognitive psychologists, but it's not the point here. The point is that Gladwell says absolutely nothing about the controversy over whether this effect is reliable. All he does is cite the original 2007 study of 40 subjects and rest his case. Even those who have been hooked by his prose and look to the endnotes of this chapter for a new fix will find no sources for the "hard stuff"—e.g., the true state of the science of "desirable difficulty"—that he claims to be promoting. And if the hard stuff has value, why does Gladwell not wade into it himself and let it inform his writing? When discussing the question of how to pick the right college, why not discuss the intriguing research that debates whether going to an elite school really adds economic value (over going to a lesser-ranked school) for those people who get admitted to both. Or, when discussing dyslexia, instead of claiming it is a gift to society, how about devoting the space to a serious consideration of the hypothesis that this kind of early life difficulty jars the course of development, adding uncertainty (increasing the chances of both success and failure, though probably not in equal proportions) rather than directionality. There was so much more he could have done with these fascinating and important topics.
But at least the difficulty finding a simple experiment to serve as metaphor might have jarred Gladwell into realizing that the connection between the typeface effect, however robust it might turn out to be, and the effect of a neurological condition or loss of a parent, is in fact just metaphorical. There is no relevant nexus between reading faint type and losing a parent at an early age, and pretending there is just loosens the threads of logic to the point of breaking. But perhaps Gladwell already knows this. After all, in his Telegraph interview, he said readers don't care about stuff like consistency and coherence, only critics and writers do.
I can certainly think of one gifted writer with a huge audience who doesn't seem to care that much. I think the effect is the propagation of a lot of wrong beliefs among a vast audience of influential people. And that's unfortunate.
Steven Pinker's review of What the Dog Saw for The New York Times in 2009:
http://www.nytimes.com/2009/11/15/books/review/Pinker-t.html?pagewanted=all&_r=1&
Malcolm Gladwell, Eclectic Detective
WHAT THE DOG SAW
By Malcolm Gladwell
410 pp. Little, Brown & Company. $27.99
Have you ever wondered why there are so many kinds of mustard but only one kind of ketchup? Or what Cézanne did before painting his first significant works in his 50s? Have you hungered for the story behind the Veg-O-Matic, star of the frenetic late-night TV ads? Or wanted to know where Led Zeppelin got the riff in “Whole Lotta Love”?
Neither had I, until I began this collection by the indefatigably curious journalist Malcolm Gladwell. The familiar jacket design, with its tiny graphic on a spare background, reminds us that Gladwell has become a brand. He is the author of the mega-best sellers “The Tipping Point,” “Blink” and “Outliers”; a popular speaker on the Dilbert circuit; and a prolific contributor to The New Yorker, where the 19 articles in “What the Dog Saw” were originally published. This volume includes prequels to those books and other examples of Gladwell’s stock in trade: counterintuitive findings from little-known experts.
A third of the essays are portraits of “minor geniuses” — impassioned oddballs loosely connected to cultural trends. We meet the feuding clan of speed-talking pitchmen who gave us the Pocket Fisherman, Hair in a Can, and other it-slices!-it-dices! contraptions. There is the woman who came up with the slogan “Does she or doesn’t she?” and made hair coloring (and, Gladwell suggests, self-invention) respectable to millions of American women. The investor Nassim Taleb explains how markets can be blindsided by improbable but consequential events. A gourmet ketchup entrepreneur provides Gladwell the opportunity to explain the psychology of taste and to recount the history of condiments.
Another third are on the hazards of statistical prediction, especially when it comes to spectacular failures like Enron, 9/11, the fatal flight of John F. Kennedy Jr., the explosion of the space shuttle Challenger, the persistence of homelessness and the unsuccessful targeting of Scud missile launchers during the Persian Gulf war of 1991. For each debacle, Gladwell tries to single out a fallacy of reasoning behind it, such as that more information is always better, that pictures offer certainty, that events are distributed in a bell curve around typical cases, that clues available in hindsight should have been obvious before the fact and that the risk of failure in a complex system can be reduced to zero.
The final third are also about augury, this time about individuals rather than events. Why, he asks, is it so hard to prognosticate the performance of artists, teachers, quarterbacks, executives, serial killers and breeds of dogs?
The themes of the collection are a good way to characterize Gladwell himself: a minor genius who unwittingly demonstrates the hazards of statistical reasoning and who occasionally blunders into spectacular failures.
Gladwell is a writer of many gifts. His nose for the untold back story will have readers repeatedly muttering, “Gee, that’s interesting!” He avoids shopworn topics, easy moralization and conventional wisdom, encouraging his readers to think again and think different. His prose is transparent, with lucid explanations and a sense that we are chatting with the experts ourselves. Some chapters are masterpieces in the art of the essay. I particularly liked “Something Borrowed,” a moving examination of the elusive line between artistic influence and plagiarism, and “Dangerous Minds,” a suspenseful tale of criminal profiling that shows how self-anointed experts can delude their clients and themselves with elastic predictions.
An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.
The banalities come from a gimmick that can be called the Straw We. First Gladwell disarmingly includes himself and the reader in a dubious consensus — for example, that “we” believe that jailing an executive will end corporate malfeasance, or that geniuses are invariably self-made prodigies or that eliminating a risk can make a system 100 percent safe. He then knocks it down with an ambiguous observation, such as that “risks are not easily manageable, accidents are not easily preventable.” As a generic statement, this is true but trite: of course many things can go wrong in a complex system, and of course people sometimes trade off safety for cost and convenience (we don’t drive to work wearing crash helmets in Mack trucks at 10 miles per hour). But as a more substantive claim that accident investigations are meaningless “rituals of reassurance” with no effect on safety, or that people have a “fundamental tendency to compensate for lower risks in one area by taking greater risks in another,” it is demonstrably false.
The problem with Gladwell’s generalizations about prediction is that he never zeroes in on the essence of a statistical problem and instead overinterprets some of its trappings. For example, in many cases of uncertainty, a decision maker has to act on an observation that may be either a signal from a target or noise from a distractor (a blip on a screen may be a missile or static; a blob on an X-ray may be a tumor or a harmless thickening). Improving the ability of your detection technology to discriminate signals from noise is always a good thing, because it lowers the chance you’ll mistake a target for a distractor or vice versa. But given the technology you have, there is an optimal threshold for a decision, which depends on the relative costs of missing a target and issuing a false alarm. By failing to identify this trade-off, Gladwell bamboozles his readers with pseudoparadoxes about the limitations of pictures and the downside of precise information.
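The trade-off Pinker describes is standard signal detection theory. The sketch below is only an illustration with invented numbers (it is not from the review): the cost-minimizing threshold depends on the prior probability of a target and on the relative costs of a miss versus a false alarm.

```python
# Minimal signal-detection sketch: an observation comes from either a "noise"
# or a "target" distribution, and the best decision threshold depends on the
# prior and on the costs of a miss versus a false alarm. Numbers are invented.
from math import erf, sqrt

def gaussian_cdf(x: float, mean: float, sd: float) -> float:
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

P_TARGET = 0.1          # prior probability the blip is a real target
COST_MISS = 10.0        # cost of calling a real target "noise"
COST_FALSE_ALARM = 1.0  # cost of calling noise a "target"

def expected_cost(threshold: float) -> float:
    p_miss = gaussian_cdf(threshold, mean=2.0, sd=1.0)               # target falls below threshold
    p_false_alarm = 1.0 - gaussian_cdf(threshold, mean=0.0, sd=1.0)  # noise rises above it
    return (P_TARGET * p_miss * COST_MISS
            + (1.0 - P_TARGET) * p_false_alarm * COST_FALSE_ALARM)

# Sweep candidate thresholds and keep the cheapest one.
candidates = [-2.0 + 0.01 * i for i in range(601)]
best = min(candidates, key=expected_cost)
print(f"Cost-minimizing threshold: {best:.2f}")
# Raising COST_MISS pushes the threshold down (more false alarms tolerated);
# raising COST_FALSE_ALARM pushes it up. Better detection technology (more
# separation between the two distributions) lowers cost at any threshold.
```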
Another example of an inherent trade-off in decision-making is the one that pits the accuracy of predictive information against the cost and complexity of acquiring it. Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”
But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).
The common thread in Gladwell’s writing is a kind of populism, which seeks to undermine the ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition. For an apolitical writer like Gladwell, this has the advantage of appealing both to the Horatio Alger right and to the egalitarian left. Unfortunately he wildly overstates his empirical case. It is simply not true that a quarterback’s rank in the draft is uncorrelated with his success in the pros, that cognitive skills don’t predict a teacher’s effectiveness, that intelligence scores are poorly related to job performance or (the major claim in “Outliers”) that above a minimum I.Q. of 120, higher intelligence does not bring greater intellectual achievements.
The reasoning in “Outliers,” which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle. Fortunately for “What the Dog Saw,” the essay format is a better showcase for Gladwell’s talents, because the constraints of length and editors yield a higher ratio of fact to fancy. Readers have much to learn from Gladwell the journalist and essayist. But when it comes to Gladwell the social scientist, they should watch out for those igon values.
Richard Posner's review of Blink for The New Republic in 2005:
http://www.law.uchicago.edu/node/3360
Posner Reviews Blink
Blinkered
The New Republic
January 24, 2005
Blink: The Power of Thinking Without Thinking
By Malcolm Gladwell
(Little, Brown, 277 pp., $25.95)
There are two types of thinking, to oversimplify grossly. We may call them intuitive and articulate. The first is the domain of hunches, snap judgments, emotional reactions, and first impressions--in short, instant responses to sensations. Obviously there is a cognitive process involved in such mental processes; one is responding to information. But there is no conscious thought, because there is no time for it. The second type of thinking is the domain of logic, deliberation, reasoned discussion, and scientific method. Here thinking is conscious: it occurs in words or sentences or symbols or concepts or formulas, and so it takes time. Articulate thinking is the model of rationality, while intuitive thinking is often seen as primitive, "emotional" in a derogatory sense, the only type of thinking of which animals are capable; and so it is articulate thinking that distinguishes human beings from the "lower" animals.
When, many years ago, a judge confessed that his decisions were based largely on hunch, this caused a bit of a scandal; but there is increasing recognition that while judicial opinions, in which the judge explains his decision, are models of articulate thinking, the decision itself--the outcome, the winner--will often come to the judge in a flash. But finally the contrast between intuitive and articulate thinking is overdrawn: it ignores the fact that deliberative procedures can become unconscious simply by becoming habitual, without thereby being intuitive in the sense of pre-verbal or emotional; and that might be the case with judicial decisions, too.
Malcolm Gladwell, a journalist, wishes to bring to a popular audience the results of recent research in psychology and related disciplines, such as neuroscience, which not only confirm the importance of intuitive cognition in human beings but also offer a qualified vindication of it. He argues that intuition is often superior to articulate thinking. It often misleads, to be sure; but with an awareness of the pitfalls we may be able to avoid them.
As Exhibit A for the superiority of intuitive to articulate thinking, Gladwell offers the case of a purported ancient Greek statue that was offered to the Getty Museum for $10 million. Months of careful study by a geologist (to determine the age of the statue) and by the museum's lawyers (to trace the statue's provenance) convinced the museum that it was genuine. But when historians of ancient art looked at it, they experienced an "intuitive revulsion," and indeed it was eventually proved to be a fake.
The example is actually a bad one for Gladwell's point, though it is a good illustration of the weakness of this book, which is a series of loosely connected anecdotes, rich in "human interest" particulars but poor in analysis. There is irony in the book's blizzard of anecdotal details. One of Gladwell's themes is that clear thinking can be overwhelmed by irrelevant information, but he revels in the irrelevant. An anecdote about food tasters begins: "One bright summer day, I had lunch with two women who run a company in New Jersey called Sensory Spectrum." The weather, the season, and the state are all irrelevant. And likewise that hospital chairman Brendan Reilly "is a tall man with a runner's slender build." Or that "inside, JFCOM [Joint Forces Command] looks like a very ordinary office building.... The business of JFCOM, however, is anything but ordinary." These are typical examples of Gladwell's style, which is bland and padded with cliches.
But back to the case of the Greek statue. It illustrates not the difference between intuitive thinking and articulate thinking, but different articulate methods of determining the authenticity of a work of art. One method is to trace the chain of title, ideally back to the artist himself (impossible in this case); another is to perform chemical tests on the material of the work; and a third is to compare the appearance of the work to that of works of art known to be authentic. The fact that the first two methods happened to take longer in the particular case of the Getty statue is happenstance. Had the seller produced a bill of sale from Phidias to Cleopatra, or the chemist noticed that the statue was made out of plastic rather than marble, the fake would have been detected in the blink of an eye. Conversely, had the statue looked more like authentic statues of its type, the art historians might have had to conduct a painstakingly detailed comparison of each feature of the work with the corresponding features of authentic works. Thus the speed with which the historians spotted this particular fake is irrelevant to Gladwell's thesis. Practice may not make perfect, but it enables an experienced person to arrive at conclusions more quickly than a neophyte. The expert's snap judgment is the result of a deliberative process made unconscious through habituation.
As one moves from anecdote to anecdote, the reader of Blink quickly realizes, though its author does not, that a variety of interestingly different mental operations are being crammed unhelpfully into the "rapid cognition" pigeonhole. In one anecdote, Dr. Lee Goldman discovers that the most reliable quick way of determining whether a patient admitted to a hospital with chest pains is about to have a heart attack is by using an algorithm based on just four data: the results of the patient's electrocardiogram, the pain being unstable angina, the presence of fluid in the lungs, and systolic blood pressure below one hundred. There is no diagnostic gain, Goldman found, from also knowing whether the patient has the traditional risk factors for heart disease, such as being a smoker or suffering from diabetes. In fact, there is a diagnostic loss, because an admitting doctor who gave weight to these factors (which are indeed good long-term predictors of heart disease) would be unlikely to admit a patient who had none of the traditional risk factors but was predicted by the algorithm to be about to have a heart attack.
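The rule Posner summarizes is a checklist over just four findings. The sketch below is only an illustration of that structure, with hypothetical risk tiers; it is not the actual Goldman clinical algorithm and is not medical guidance.

```python
# Illustrative checklist-style decision rule built on the four factors Posner
# lists (ECG evidence, unstable angina, fluid in the lungs, systolic BP below
# 100). The tiers are hypothetical; note that traditional long-term risk
# factors such as smoking or diabetes are deliberately left out of the rule.
from dataclasses import dataclass

@dataclass
class ChestPainPatient:
    ecg_ischemia: bool     # acute changes on the electrocardiogram
    unstable_angina: bool  # the pain is unstable angina
    pulmonary_fluid: bool  # fluid in the lungs
    systolic_bp: float     # systolic blood pressure in mmHg

def risk_tier(p: ChestPainPatient) -> str:
    """Count the high-risk findings and map the count to a coarse tier."""
    findings = [
        p.ecg_ischemia,
        p.unstable_angina,
        p.pulmonary_fluid,
        p.systolic_bp < 100,
    ]
    count = sum(findings)
    if count >= 3:
        return "high risk: intensive care"
    if count >= 1:
        return "intermediate risk: monitored bed"
    return "low risk: observation"

print(risk_tier(ChestPainPatient(True, False, True, 92.0)))  # three findings -> high risk
```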
To illustrate where rapid cognition can go wrong, Gladwell introduces us to Bob Golomb, an auto salesman who attributes his success to the fact that "he tries never to judge anyone on the basis of his or her appearance." More unwitting irony here, for Gladwell himself is preoccupied with people's appearances. Think of Reilly, with his runner's build; or John Gottman, who claims to be able by listening to a married couple talk for fifteen minutes to determine with almost 90 percent accuracy whether they will still be married in fifteen years, and whom Gladwell superfluously describes as "a middle-aged man with owl-like eyes, silvery hair, and a neatly trimmed beard. He is short and very charming...." And then there is "Klin, who bears a striking resemblance to the actor Martin Short, is half Israeli and half Brazilian, and he speaks with an understandably peculiar accent." Sheer clutter.
Golomb, the successful auto salesman, is contrasted with the salesmen in a study in which black and white men and women, carefully selected to be similar in every aspect except race and sex, pretended to shop for cars. The blacks were quoted higher prices than the whites, and the women higher prices than the men. Gladwell interprets this to mean that the salesmen lost out on good deals by judging people on the basis of their appearance. But the study shows no such thing. The authors of the study did not say, and Gladwell does not show, and Golomb did not suggest, that auto salesmen are incorrect in believing that blacks and women are less experienced or assiduous or pertinacious car shoppers than white males and therefore can be induced to pay higher prices. The Golomb story contained no mention of race or sex. (Flemington, where Golomb works, is a small town in central New Jersey that is only 3 percent black.) And when he said he tries not to judge a person on the basis of the person's appearance, it seems that all he meant was that shabbily dressed and otherwise unprepossessing shoppers are often serious about buying a car. "Now, if you saw this man [a farmer], with his coveralls and his cow dung, you'd figure he was not a worthy customer. But in fact, as we say in the trade, he's all cashed up."
It would not occur to Gladwell, a good liberal, that an auto salesman's discriminating on the basis of race or sex might be a rational form of the "rapid cognition" that he admires. If two groups happen to differ on average, even though there is considerable overlap between the groups, it may be sensible to ascribe the group's average characteristics to each member of the group, even though one knows that many members deviate from the average. An individual's characteristics may be difficult to determine in a brief encounter, and a salesman cannot afford to waste his time in a protracted one, and so he may quote a high price to every black shopper even though he knows that some blacks are just as shrewd and experienced car shoppers as the average white, or more so. Economists use the term "statistical discrimination" to describe this behavior. It is a better label than stereotyping for what is going on in the auto-dealer case, because it is more precise and lacks the distracting negative connotation of stereotype, defined by Gladwell as "a rigid and unyielding system." But is it? Think of how stereotypes of professional women, Asians, and homosexuals have changed in recent years. Statistical discrimination erodes as the average characteristics of different groups converge.
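Posner's "statistical discrimination" argument is, at bottom, an expected-value calculation: individually assessing a shopper takes time, so a seller may fall back on the group average. A toy sketch with entirely invented numbers:

```python
# Toy expected-value comparison of "quote the group average" versus "spend
# time sizing up the individual." All figures are invented for illustration.
def profit_using_group_average(avg_markup: float) -> float:
    return avg_markup

def profit_with_individual_assessment(avg_markup: float,
                                      tailoring_gain: float,
                                      assessment_cost: float) -> float:
    return avg_markup + tailoring_gain - assessment_cost

avg_markup = 500.0       # hypothetical average markup for a group, in dollars
tailoring_gain = 80.0    # hypothetical extra profit from reading the individual
assessment_cost = 120.0  # hypothetical cost of the salesman's time

if profit_using_group_average(avg_markup) >= profit_with_individual_assessment(
        avg_markup, tailoring_gain, assessment_cost):
    print("Quote by group average")   # statistical discrimination is the cheaper strategy
else:
    print("Assess the individual")
```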
Gladwell reports an experiment in which some students are told before a test to think about professors and other students are told to think about soccer hooligans, and the first group does better on the test. He thinks this result shows the fallacy of stereotypical thinking. The experimenter claimed it showed that people are so suggestible that they can be put in a frame of mind in which they feel smarter and therefore perform smarter. The claim is undermined by a literature of which Gladwell seems unaware, which finds that self-esteem is correlated negatively rather than positively with academic performance. Yet, true or false, the claim is unrelated to statistical discrimination, which is a matter of basing judgments on partial information.
The average male CEO of a Fortune 500 company is significantly taller than the average American male, and Gladwell offers this as another example of stereotypical thinking. That is not very plausible; a CEO is selected only after a careful search to determine the candidate's individual characteristics. Gladwell ignores the possibility that tall men are disproportionately selected for leadership positions because of personality characteristics that are correlated with height, notably self-confidence and a sense of superiority perhaps derived from experiences in childhood, when tall boys lord it over short ones. Height might be a tiebreaker, but it would be unlikely to land the job for a candidate whom an elaborate search process revealed to be less qualified than a shorter candidate.
Gladwell applauds the rule that a police officer who stops a car driven by someone thought to be armed should approach the seated driver from the rear on the driver's side but pause before he reaches the driver, so that he will be standing slightly behind where the driver is sitting. The driver, if he wants to shoot the officer, will have to twist around in his seat, and this will give the officer more time to react. Gladwell says that this rule is designed to prevent what he calls "temporary autism." This is one of many cutesy phrases and business-guru slogans in which this book abounds. Others include "mind-blindness," "listening with your eyes," "thin slicing"--which means basing a decision on a small amount of the available information--and the "Warren Harding error," which is thinking that someone who looks presidential must have the qualities of a good president.
Autistic people treat people as inanimate objects rather than as thinking beings like themselves, and as a result they have trouble predicting behavior. Gladwell argues that a police officer who fears that his life is in danger will be unable to read the suspect's face and gestures for reliable clues to intentions (Gladwell calls this "mind reading") and is therefore likely to make a mistake; he is "mind-blinded," as if he were autistic. The rule gives him more time to decide what the suspect's intentions are. It seems a sensible rule, but the assessment of it gains nothing from a reference to autism. Obviously you are less likely to shoot a person in mistaken self-defense the more time you have in which to assess his intentions.
When, many years ago, a judge confessed that his decisions were based largely on hunch, this caused a bit of a scandal; but there is increasing recognition that while judicial opinions, in which the judge explains his decision, are models of articulate thinking, the decision itself--the outcome, the winner--will often come to the judge in a flash. But finally the contrast between intuitive and articulate thinking is overdrawn: it ignores the fact that deliberative procedures can become unconscious simply by becoming habitual, without thereby being intuitive in the sense of pre-verbal or emotional; and that might be the case with judicial decisions, too.
Malcolm Gladwell, a journalist, wishes to bring to a popular audience the results of recent research in psychology and related disciplines, such as neuroscience, which not only confirm the importance of intuitive cognition in human beings but also offer a qualified vindication of it. He argues that intuition is often superior to articulate thinking. It often misleads, to be sure; but with an awareness of the pitfalls we may be able to avoid them.
As Exhibit A for the superiority of intuitive to articulate thinking, Gladwell offers the case of a purported ancient Greek statue that was offered to the Getty Museum for $10 million. Months of careful study by a geologist (to determine the age of the statue) and by the museum's lawyers (to trace the statue's provenance) convinced the museum that it was genuine. But when historians of ancient art looked at it, they experienced an "intuitive revulsion," and indeed it was eventually proved to be a fake.
The example is actually a bad one for Gladwell's point, though it is a good illustration of the weakness of this book, which is a series of loosely connected anecdotes, rich in "human interest" particulars but poor in analysis. There is irony in the book's blizzard of anecdotal details. One of Gladwell's themes is that clear thinking can be overwhelmed by irrelevant information, but he revels in the irrelevant. An anecdote about food tasters begins: "One bright summer day, I had lunch with two women who run a company in New Jersey called Sensory Spectrum." The weather, the season, and the state are all irrelevant. And likewise that hospital chairman Brendan Reilly "is a tall man with a runner's slender build." Or that "inside, JFCOM [Joint Forces Command] looks like a very ordinary office building.... The business of JFCOM, however, is anything but ordinary." These are typical examples of Gladwell's style, which is bland and padded with cliches.
But back to the case of the Greek statue. It illustrates not the difference between intuitive thinking and articulate thinking, but different articulate methods of determining the authenticity of a work of art. One method is to trace the chain of title, ideally back to the artist himself (impossible in this case); another is to perform chemical tests on the material of the work; and a third is to compare the appearance of the work to that of works of art known to be authentic. The fact that the first two methods happened to take longer in the particular case of the Getty statue is happenstance. Had the seller produced a bill of sale from Phidias to Cleopatra, or the chemist noticed that the statue was made out of plastic rather than marble, the fake would have been detected in the blink of an eye. Conversely, had the statue looked more like authentic statues of its type, the art historians might have had to conduct a painstakingly detailed comparison of each feature of the work with the corresponding features of authentic works. Thus the speed with which the historians spotted this particular fake is irrelevant to Gladwell's thesis. Practice may not make perfect, but it enables an experienced person to arrive at conclusions more quickly than a neophyte. The expert's snap judgment is the result of a deliberative process made unconscious through habituation.
As one moves from anecdote to anecdote, the reader of Blink quickly realizes, though its author does not, that a variety of interestingly different mental operations are being crammed unhelpfully into the "rapid cognition" pigeonhole. In one anecdote, Dr. Lee Goldman discovers that the most reliable quick way of determining whether a patient admitted to a hospital with chest pains is about to have a heart attack is by using an algorithm based on just four data: the results of the patient's electrocardiogram, the pain being unstable angina, the presence of fluid in the lungs, and systolic blood pressure below one hundred. There is no diagnostic gain, Goldman found, from also knowing whether the patient has the traditional risk factors for heart disease, such as being a smoker or suffering from diabetes. In fact, there is a diagnostic loss, because an admitting doctor who gave weight to these factors (which are indeed good long-term predictors of heart disease) would be unlikely to admit a patient who had none of the traditional risk factors but was predicted by the algorithm to be about to have a heart attack.
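To make the structure of that rule concrete, here is a minimal sketch in Python of a four-variable triage rule of the kind the review describes. The factor names track the paragraph above, but the way they are combined, the three output buckets, and the names ChestPainPatient and triage are placeholder assumptions for illustration, not Goldman's published decision tree.

```python
# Illustrative only: a four-variable triage rule of the sort described above.
# The factor names follow the review's description; the combination logic and
# the output buckets are assumptions, not Goldman's actual criteria.

from dataclasses import dataclass

@dataclass
class ChestPainPatient:
    ecg_shows_acute_ischemia: bool   # ECG evidence of an evolving event
    unstable_angina: bool            # the pain is unstable angina
    fluid_in_lungs: bool             # pulmonary congestion on exam
    systolic_bp_below_100: bool      # systolic blood pressure < 100 mmHg

def triage(patient: ChestPainPatient) -> str:
    """Return a coarse risk bucket using only the four factors above.

    Note what is absent: smoking, diabetes, cholesterol, age. The review's
    point is that adding those long-term risk factors does not improve --
    and can degrade -- the short-term prediction.
    """
    urgent_factors = sum([
        patient.unstable_angina,
        patient.fluid_in_lungs,
        patient.systolic_bp_below_100,
    ])
    if patient.ecg_shows_acute_ischemia and urgent_factors >= 1:
        return "coronary care unit"
    if patient.ecg_shows_acute_ischemia or urgent_factors >= 2:
        return "intermediate / monitored bed"
    return "regular bed with observation"

# A patient with none of the traditional risk factors is still flagged:
print(triage(ChestPainPatient(True, True, False, False)))  # -> "coronary care unit"
```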
To illustrate where rapid cognition can go wrong, Gladwell introduces us to Bob Golomb, an auto salesman who attributes his success to the fact that "he tries never to judge anyone on the basis of his or her appearance." More unwitting irony here, for Gladwell himself is preoccupied with people's appearances. Think of Reilly, with his runner's build; or John Gottman, who claims to be able by listening to a married couple talk for fifteen minutes to determine with almost 90 percent accuracy whether they will still be married in fifteen years, and whom Gladwell superfluously describes as "a middle-aged man with owl-like eyes, silvery hair, and a neatly trimmed beard. He is short and very charming...." And then there is "Klin, who bears a striking resemblance to the actor Martin Short, is half Israeli and half Brazilian, and he speaks with an understandably peculiar accent." Sheer clutter.
Golomb, the successful auto salesman, is contrasted with the salesmen in a study in which black and white men and women, carefully selected to be similar in every aspect except race and sex, pretended to shop for cars. The blacks were quoted higher prices than the whites, and the women higher prices than the men. Gladwell interprets this to mean that the salesmen lost out on good deals by judging people on the basis of their appearance. But the study shows no such thing. The authors of the study did not say, and Gladwell does not show, and Golomb did not suggest, that auto salesmen are incorrect in believing that blacks and women are less experienced or assiduous or pertinacious car shoppers than white males and therefore can be induced to pay higher prices. The Golomb story contained no mention of race or sex. (Flemington, where Golomb works, is a small town in central New Jersey that is only 3 percent black.) And when he said he tries not to judge a person on the basis of the person's appearance, it seems that all he meant was that shabbily dressed and otherwise unprepossessing shoppers are often serious about buying a car. "Now, if you saw this man [a farmer], with his coveralls and his cow dung, you'd figure he was not a worthy customer. But in fact, as we say in the trade, he's all cashed up."
It would not occur to Gladwell, a good liberal, that an auto salesman's discriminating on the basis of race or sex might be a rational form of the "rapid cognition" that he admires. If two groups happen to differ on average, even though there is considerable overlap between the groups, it may be sensible to ascribe the group's average characteristics to each member of the group, even though one knows that many members deviate from the average. An individual's characteristics may be difficult to determine in a brief encounter, and a salesman cannot afford to waste his time in a protracted one, and so he may quote a high price to every black shopper even though he knows that some blacks are just as shrewd and experienced car shoppers as the average white, or more so. Economists use the term "statistical discrimination" to describe this behavior. It is a better label than stereotyping for what is going on in the auto-dealer case, because it is more precise and lacks the distracting negative connotation of stereotype, defined by Gladwell as "a rigid and unyielding system." But is it? Think of how stereotypes of professional women, Asians, and homosexuals have changed in recent years. Statistical discrimination erodes as the average characteristics of different groups converge.
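The expected-value logic behind that argument can be made concrete with a toy calculation. Every number below is invented, and the two abstract groups are placeholders; the point is only to show why quoting by group average can maximize a seller's expected margin even when most individuals in a group deviate from the average, which is what "statistical discrimination" means here. Nothing in the sketch comes from the car-dealer study Posner cites.

```python
# Purely illustrative, made-up numbers: why quoting by group average can be
# privately rational when assessing each shopper individually is too costly.

def expected_margin_high_quote(p_accept: float, high_margin: float) -> float:
    """Expected margin from opening high, assuming a shopper who rejects the
    high quote simply walks away (margin 0)."""
    return p_accept * high_margin

LOW_MARGIN = 500.0    # guaranteed margin if the seller opens with the low price
HIGH_MARGIN = 1500.0  # margin if a high opening quote is accepted

# The seller's belief (accurate or not) about how often members of each
# group accept the high opening quote:
beliefs = {"group A": 0.20, "group B": 0.40}

for group, p_accept in beliefs.items():
    high = expected_margin_high_quote(p_accept, HIGH_MARGIN)
    choice = "quote high" if high > LOW_MARGIN else "quote low"
    print(f"{group}: expected {high:.0f} vs guaranteed {LOW_MARGIN:.0f} -> {choice}")

# Result: group A gets the low quote, group B the high one -- even though,
# at p = 0.40, most individual members of group B would refuse the high quote.
```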
Gladwell reports an experiment in which some students are told before a test to think about professors and other students are told to think about soccer hooligans, and the first group does better on the test. He thinks this result shows the fallacy of stereotypical thinking. The experimenter claimed it showed that people are so suggestible that they can be put in a frame of mind in which they feel smarter and therefore perform smarter. The claim is undermined by a literature of which Gladwell seems unaware, which finds that self-esteem is correlated negatively rather than positively with academic performance. Yet, true or false, the claim is unrelated to statistical discrimination, which is a matter of basing judgments on partial information.
The average male CEO of a Fortune 500 company is significantly taller than the average American male, and Gladwell offers this as another example of stereotypical thinking. That is not very plausible; a CEO is selected only after a careful search to determine the candidate's individual characteristics. Gladwell ignores the possibility that tall men are disproportionately selected for leadership positions because of personality characteristics that are correlated with height, notably self-confidence and a sense of superiority perhaps derived from experiences in childhood, when tall boys lord it over short ones. Height might be a tiebreaker, but it would be unlikely to land the job for a candidate whom an elaborate search process revealed to be less qualified than a shorter candidate.
Gladwell applauds the rule that a police officer who stops a car driven by someone thought to be armed should approach the seated driver from the rear on the driver's side but pause before he reaches the driver, so that he will be standing slightly behind where the driver is sitting. The driver, if he wants to shoot the officer, will have to twist around in his seat, and this will give the officer more time to react. Gladwell says that this rule is designed to prevent what he calls "temporary autism." This is one of many cutesy phrases and business-guru slogans in which this book abounds. Others include "mind-blindness," "listening with your eyes," "thin slicing"--which means basing a decision on a small amount of the available information--and the "Warren Harding error," which is thinking that someone who looks presidential must have the qualities of a good president.
Autistic people treat people as inanimate objects rather than as thinking beings like themselves, and as a result they have trouble predicting behavior. Gladwell argues that a police officer who fears that his life is in danger will be unable to read the suspect's face and gestures for reliable clues to intentions (Gladwell calls this "mind reading") and is therefore likely to make a mistake; he is "mind-blinded," as if he were autistic. The rule gives him more time to decide what the suspect's intentions are. It seems a sensible rule, but the assessment of it gains nothing from a reference to autism. Obviously you are less likely to shoot a person in mistaken self-defense the more time you have in which to assess his intentions.
Gladwell endorses a claim by the psychologist Paul Ekman that careful study of a person's face while he is speaking will reveal unerringly whether he is lying. Were this true, the implications would be revolutionary. The CIA could discard its lie detectors. Psychologists trained by Ekman could be hired to study videotapes of courtroom testimony and advise judges and jurors whom to believe and whom to convict of perjury. Ekman's "Facial Action Coding System" would dominate the trial process. Gladwell is completely credulous about Ekman's claims. Ekman told him that he studied Bill Clinton's facial expressions during the 1992 campaign and told his wife, "This is a guy who wants to be caught with his hand in the cookie jar, and have us love him for it anyway." This self-serving testimony is no evidence of anything. The natural follow-up question for Gladwell to have asked would have been whether, when Ekman saw the videotape of Clinton's deposition during the run-up to his impeachment, he realized that Clinton was lying. He didn't ask that question. Nor does he mention the flaws that critics have found in Ekman's work.
As with Gladwell's other tales, the Ekman story is not actually about the strengths and the weaknesses of rapid cognition. It took Ekman years to construct his Facial Action Coding System, which Gladwell tells us fills a five-hundred-page binder. Now, it is perfectly true that we can often infer a person's feelings, intentions, and other mental dispositions from a glance at his face. But people are as skillful at concealing their feelings and intentions as they are at reading them in others--hence the need for the FACS, which is itself a product of articulate thinking.
So Gladwell should not have been surprised by the results of an experiment to test alternative methods of discovering certain personal characteristics of college kids, such as emotional stability. One method was to ask the person's best friends; another was to ask strangers to peek inside the person's room. The latter method proved superior. People conceal as well as reveal themselves in their interactions with their friends. In arranging their rooms, they are less likely to be trying to make an impression, so the stranger will not be fooled by prior interactions with the person whose room it is. The better method happened to be the quicker one. But it wasn't better because it was quicker.
Remember JFCOM? In 2002, it conducted a war game called "Millennium Challenge" in anticipation of the U.S. invasion of Iraq. As commander of the "Red Team" (the adversary in a war game), JFCOM chose a retired Marine general named Paul Van Riper. Oddly, Gladwell never mentions that Van Riper was a general. This omission, I think, is owed to Gladwell's practice of presenting everyone who gets the psychology right as an enemy of the establishment, and it is hard to think of a general in that light, though in fact Van Riper is something of a maverick.
The Blue Team was equipped with an elaborate computerized decision-making tool called "Operational Net Assessment." Van Riper beat the Blue Team in the war game using low-tech, commonsense tactics: when the Blue Team knocked out the Red Team's electronic communications, for example, he used couriers on motorcycles to deliver messages. Was Van Riper's strategy a triumph of rapid cognition, as Gladwell portrays it? Operational Net Assessment was and is an experimental program for integrating military intelligence from all sources in order to dispel the "fog of war." The military is continuing to work on it. That Van Riper beat it two years ago is no more surprising than that chess champions easily beat the earliest chess-playing computers: today, in a triumph of articulate "thinking" over intuition, it is the computers that are the champs.
Gladwell also discusses alternative approaches in dating. (The procession of his anecdotes here becomes dizzying.) One is to make a list of the characteristics one desires in a date and then go looking for possible dates that fit the characteristics. The other, which experiments reveal, plausibly, to be superior, is to date a variety of people until you find someone with whom you click. The distinction is not between articulate thinking and intuitive thinking, but between deduction and induction. If you have never dated, you will not have a good idea of what you are looking for. As you date, you will acquire a better idea, and eventually you will be able to construct a useful checklist of characteristics. So this is yet another little tale that doesn't fit the ostensible subject of his book. Gladwell does not discuss "love at first sight," which would be a good illustration of the unreliability of rapid cognition.
In discussing racial discrimination, Gladwell distinguishes between "unconscious attitudes" and "conscious attitudes. That is what we choose to believe." But beliefs are not chosen. You might think it very nice to believe in the immortality of the soul, but you could not will yourself (at least if you are intellectually honest) to believe it. Elsewhere he remarks of someone that when he is excited "his eyes light up and open even wider." But eyes don't light up; it is only by opening them wider that one conveys a sense of excitement. The metaphor of eyes lighting up is harmless, but one is surprised to find it being used by a writer who is at pains to explain exactly how we read intentions in facial expressions--and it is not by observing ocular flashes.
This book also succumbs to the fallacy that people with good ideas must be good people. Everyone in the book who gets psychology right is not only or mainly a bright person, he is also a noble human being; so there is much emphasis, Kerry-like, on Van Riper's combat performance in the Vietnam War, without explicitly mentioning that he went on to become a lieutenant-general. Such pratfalls, together with the inaptness of the stories that constitute the entirety of the book, make me wonder how far Gladwell has actually delved into the literatures that bear on his subject, which is not a new one. These include a philosophical literature illustrated by the work of Michael Polanyi on tacit knowledge and on "know how" versus "know that"; a psychological literature on cognitive capabilities and distortions; a literature in both philosophy and psychology that explores the cognitive role of the emotions; a literature in evolutionary biology that relates some of these distortions to conditions in the "ancestral environment" (the environment in which the human brain reached approximately its current level of development); a psychiatric literature on autism and other cognitive disturbances; an economic literature on the costs of acquiring and absorbing information; a literature at the intersection of philosophy, statistics, and economics that explores the rationality of basing decisions on subjective estimates of probability (Bayes's Theorem); and a literature in neuroscience that relates cognitive and emotional states to specific parts of and neuronal activities in the brain.
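For readers who want the formula behind that last reference: Bayes's Theorem is the rule for updating a subjective probability in light of new evidence, P(H | E) = P(E | H) x P(H) / P(E), where H is the hypothesis and E the evidence; the literature Posner points to asks when decisions based on such subjective estimates are rational.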
Taken together, these literatures demonstrate the importance of unconscious cognition, but their findings are obscured rather than elucidated by Gladwell's parade of poorly understood yarns. He wants to tell stories rather than to analyze a phenomenon. He tells them well enough, if you can stand the style. (Blink is written like a book intended for people who do not read books.) And there are interesting and even illuminating facts scattered here and there, such as the blindfold "sip" test that led Coca-Cola into the disastrous error of changing the formula for Coke so that it would taste more like Pepsi. As Gladwell explains, people do not decide what food or beverage to buy solely on the basis of taste, let alone taste in the artificial setting of a blindfold test; the taste of a food or a drink is influenced by its visual properties. So that was a case in which less information really was less, and not more. And of course he is right that we may drown in information, so that to know less about a situation may sometimes be to know more about it. It is a lesson he should have taken to heart.
Richard A. Posner is a judge of the United States Court of Appeals for the Seventh Circuit and a senior lecturer at the University of Chicago Law School.
Copyright 2005 The New Republic, LLC