In Thinking, Fast and Slow, author Daniel Kahneman wrote of a study conducted by psychologist Philip Tetlock at the University of Pennsylvania (my alma mater). Tetlock gathered predictions from 284 people who made their living commenting or advising on political and economic trends, then compared the accuracy of their predictions to random guessing.
The results were devastating. The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes. In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys. Even in the regions they knew best, experts were not significantly better than nonspecialists.
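Tetlock scored his experts' probabilistic forecasts quantitatively. A minimal sketch of how such a comparison works (the Brier score is a standard way of grading probability forecasts; the numbers below are illustrative, not Tetlock's data) shows why uniform one-third probabilities are a surprisingly hard baseline to beat:

```python
# Illustrative sketch (not Tetlock's data): Brier-style scoring of
# three-outcome forecasts. A "dart-throwing" forecaster assigns 1/3
# to every outcome; a hypothetical overconfident expert piles
# probability onto one outcome and is badly wrong when it misses.

def brier(forecast, outcome):
    """Multi-category Brier score: sum of squared errors between the
    forecast probabilities and the 0/1 outcome vector. Lower is better."""
    actual = [1.0 if i == outcome else 0.0 for i in range(len(forecast))]
    return sum((f - a) ** 2 for f, a in zip(forecast, actual))

# Ten hypothetical events; the true outcome is one of 0, 1, or 2.
outcomes = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1]

uniform = [1/3, 1/3, 1/3]          # the dart-throwing baseline
confident = [0.9, 0.05, 0.05]      # always 90% sure of outcome 0

uniform_avg = sum(brier(uniform, o) for o in outcomes) / len(outcomes)
confident_avg = sum(brier(confident, o) for o in outcomes) / len(outcomes)

print(f"uniform forecaster:   {uniform_avg:.3f}")   # 0.667 on every event
print(f"overconfident expert: {confident_avg:.3f}") # 1.205 on this sample
```

The uniform forecaster scores a constant 0.667 per event; the overconfident expert is rewarded on the three events he calls correctly but punished so heavily on the seven he misses that his average is far worse, which is the arithmetic behind "worse than dart-throwing monkeys."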
Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of his skill and becomes unrealistically overconfident. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” Tetlock writes. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals (distinguished political scientists, area study specialists, economists, and so on) are any better than journalists or attentive readers of The New York Times in ‘reading’ emerging situations.” The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts. “Experts in demand,” he writes, “were more overconfident than their colleagues who eked out existences far from the limelight.”
Tetlock also found that experts resisted admitting that they had been wrong, and when they were compelled to admit error, they had a large collection of excuses.
One of the great disservices we have accepted is the illusion that scientific principles apply to realms that are not sciences. Endless statistics are researched and catalogued for application to economics, psychology, and political science, but human behavior defies scientific categorization. Such data may be dependable most of the time, within a range of normal outcomes, but they fail miserably at the extremes, precisely when we depend on them most.
A sound theory in physics is not right merely most of the time. When a scientist tests his theory against reality, he adjusts the theory to fit reality. But political ideologues do not develop theories to describe reality as it exists; they describe it as they want it to exist. When reality does not respond the way their theory predicts (and a theory’s value lies in its ability to predict), they attempt to adjust reality to fit their theory.
Adjusting reality requires a lot of force. This is how utopian masters of the universe develop into tyrants.