The Limits of Expertise | The Scholar's Stage
The trouble with this advice is that there are plenty of perfectly rational reasons to distrust those with political expertise. Mr. Nichols's wiser readers, for example, may have heard of the research conducted by Philip Tetlock, presented most compellingly in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?. Louis Menand describes the results of Dr. Tetlock's research program in a book review for the New Yorker:
"Tetlock is a psychologist—he teaches at Berkeley—and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate."
Tetlock's experts came from all sorts of backgrounds: media personalities, tenured academics, professional analysts in Washington think tanks, and employees of numerous government agencies (including those with access to classified materials). The study also covered a wide range of political beliefs and styles of analysis: registered Republicans and Democrats, Austrian economists and their Keynesian counterparts, and specialists in game theory, realist international relations, area studies, and every other analytic model that gained popularity during the test period. What were the results of their 82,000 predictions?
"The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices. Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.... The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious." (emphasis added)
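The scoring logic behind the "dart-throwing monkey" baseline is easy to see with numbers. Tetlock graded probability forecasts with calibration-style scoring rules; the sketch below uses the standard multi-category Brier score (a common proper scoring rule, not necessarily Tetlock's exact formula) to show why a uniform one-third forecast is hard to beat, and why an overconfident forecast is punished badly when it misses. The probabilities are invented for illustration.

```python
def brier_score(forecast, outcome_index):
    """Multi-category Brier score: sum of squared errors between the
    forecast probabilities and the realized outcome vector.
    0.0 is a perfect forecast; 2.0 is the worst possible."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# The "dart-throwing monkey": equal probability on each of the three
# futures (e.g. status quo, more of the trend, less of the trend).
uniform = [1/3, 1/3, 1/3]

# A hypothetical overconfident expert who puts 90% on one outcome.
confident = [0.9, 0.05, 0.05]

# Suppose outcome 1 (not the expert's favorite) actually occurs.
print(round(brier_score(uniform, 1), 4))    # 0.6667 -- same whatever happens
print(round(brier_score(confident, 1), 4))  # 1.715  -- far worse than chance
```

The uniform forecast scores 0.6667 no matter which outcome occurs, so any forecaster who averages worse than that is being systematically penalized for confident misses — which is exactly the pattern Tetlock reports for experts in demand.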
In other words, the expert is not more likely to be right than you are! Rigorous examination of actual expert track records shows the opposite to be true: the more famous, confident, and specialized an expert is, the less accurate their political judgments tend to be. It turns out that what experts truly excel at is explaining away their own horrible records. Dr. Tetlock describes what happened at the end of the study when participants were asked to explain their errors:
"When we recontacted experts to gauge their reactions to confirmation or disconfirmation of their predictions, we frequently ran into an awkward problem. Our records of the probability judgments made at the beginning of the forecast periods often disagreed with experts' recollections of what they predicted. Experts claimed that they assigned higher probabilities to outcomes that materialized than they actually did. From a narrowly Bayesian perspective, this 20/20 hindsight effect was a methodological nuisance: it is hard to ask someone why they got it wrong when they think they got it right. But from a psychological perspective the hindsight effect is intriguing in its own right..., so we decided, in six cases, to ask experts to recollect their positions prior to receiving the reminder from our records. When we asked experts to recall their original likelihood judgments, experts, especially hedgehogs, often claimed that they attached higher probabilities to what subsequently happened than they did.... Experts [also] shortchanged competition. When experts recalled the probabilities they once thought their most influential rivals would assign to the future that materialized, they imputed lower probabilities after the fact than before the fact. In effect, experts displayed both the classic hindsight effect (claiming more credit for predicting the future than they deserved) and the mirror image effect (giving less credit to their opponents for anticipating the future than they deserved)."
When confronted with these results, very few experts would admit that their reasoning or their methods were wrong. Instead they would create elaborate justifications for each failed prediction, claiming that their predictions were "almost right," that events had truly been "close calls," or that their prediction was only thrown off by some "out of the blue" or "fluke" occurrence no one could have seen coming. Each failed prediction was thus turned into a compelling story that not only justified the expert's failures but made it seem as if they and their methods had been right all along.

Dr. Tetlock's research deserves to be better known than it is. In addition to Louis Menand's review in the New Yorker excerpted above, interested readers are encouraged to read the Cato Unbound issue devoted to the book, view Tetlock's hour-long presentation for the Long Now Foundation, or purchase the book itself. It is difficult to delve into his work and ever think about experts the same way again.