What if you had the opportunity to learn how to improve the quality of your forecasts, measured as the distance between forecasts and outcomes, by 60 percent? Interested?

...

Phil Tetlock is a professor of psychology and political science at the University of Pennsylvania who has spent decades studying the predictions of experts. Specifically, he enticed 284 experts to make more than 27,000 predictions on political, social, and economic outcomes over a 21-year span ending in 2004. The period included six presidential elections and three wars. These forecasters had crack credentials, including more than a dozen years of relevant work experience and lots of advanced degrees—nearly all had postgraduate training and half had PhDs.

...

Overall, Tetlock’s results provide lethal ammunition for those who debunk the value of experts.

...

While famous experts had among the worst records of prediction, they demonstrated “skill at telling a compelling story.” To gain fame it helps to tell “tight, simple, clear stories that grab and hold audiences.” These pundits are often wrong but never in doubt.

...

foresight is a real and measurable skill. One test of skill is persistence. High persistence means that you do consistently well over time and are not a one-hit wonder. About 70 percent of superforecasters remain in those elite ranks from one year to the next, far more than chance would dictate.

...

second is that foresight “is the product of particular ways of thinking, of gathering information, of updating beliefs.” Importantly, the essential ingredients of being a superforecaster can be learned and cultivated.

...

Tetlock and his colleagues found four drivers behind the success of the superforecasters:
- Find the right people. You get a 10-15 percent boost from screening forecasters on fluid intelligence and active open-mindedness.
- Manage interaction. You get a 10-20 percent enhancement by allowing the forecasters to work collaboratively in teams or competitively in prediction markets.
- Train effectively. Cognitive debiasing exercises lift results by 10 percent.
- Overweight elite forecasters or extremize estimates. Results improve by 15-30 percent if you give more weight to forecasters with better track records and push aggregate forecasts toward the extremes to compensate for the conservatism that averaging introduces.
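The last driver can be made concrete with a small sketch. This is an illustration, not Tetlock's actual aggregation method: the weights and the extremizing exponent `a` below are hypothetical, and the transform `p^a / (p^a + (1-p)^a)` is one common way to push a pooled probability away from 0.5.

```python
def weighted_forecast(probs, weights):
    """Weighted average of individual probability forecasts."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

def extremize(p, a=2.5):
    """Push an aggregate probability away from 0.5.

    Uses the transform p^a / (p^a + (1 - p)^a); any a > 1
    extremizes. The value a=2.5 is purely illustrative.
    """
    return p ** a / (p ** a + (1 - p) ** a)

# Three hypothetical forecasters; the first has the best track
# record, so we overweight her estimate.
probs = [0.80, 0.65, 0.60]
weights = [3.0, 1.0, 1.0]

agg = weighted_forecast(probs, weights)   # 0.73
ext = extremize(agg)                      # pushed above 0.73, toward 1
print(round(agg, 2), round(ext, 2))
```

The intuition: averaging many independent forecasts washes out information each forecaster holds privately, so the pooled estimate hugs 0.5 too closely; extremizing partially restores the confidence the group would have if its members could share everything they know.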