Professionals in many organizations are assigned arbitrarily to cases: appraisers in credit-rating agencies, physicians in emergency rooms, underwriters of loans and insurance, and others. Organizations expect consistency from these professionals: Identical cases should be treated similarly, if not identically. The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies.

...

The prevalence of noise has been demonstrated in several studies. Academic researchers have repeatedly confirmed that professionals often contradict their own prior judgments when given the same data on different occasions.

...

The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.

...

It has long been known that predictions and decisions generated by simple statistical algorithms are often more accurate than those made by experts, even when the experts have access to more information than the formulas use. It is less well known that the key advantage of algorithms is that they are noise-free: Unlike humans, a formula will always return the same output for any given input. Superior consistency allows even simple and imperfect algorithms to achieve greater accuracy than human professionals.

...

One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make.

...

The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them.
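The "simplest possible rule" the passage describes is an equal-weight combination of a few common-sense variables. A minimal sketch of what such a rule might look like, assuming hypothetical underwriting variables and invented values (none of these names or numbers come from the text):

```python
# Sketch of an equal-weight linear rule: pick common-sense predictors,
# standardize them, and average them with equal weights.
# All variable names and values below are hypothetical illustrations.

def equal_weight_score(case, variables):
    """Combine standardized predictor values with equal weights."""
    return sum(case[v] for v in variables) / len(variables)

# A hypothetical loan-underwriting case, with predictors already
# expressed as z-scores (0 = average applicant, positive = better).
case = {"income_z": 0.8, "debt_ratio_z": -0.3, "payment_history_z": 1.1}
variables = ["income_z", "debt_ratio_z", "payment_history_z"]

score = equal_weight_score(case, variables)

# The rule is noise-free: unlike a human judge, it returns exactly
# the same output every time it sees the same input.
assert equal_weight_score(case, variables) == score
```

Note that the rule needs no outcome data to be built, only a common-sense choice of variables and their direction; its advantage over human judgment comes from perfect consistency, not from sophistication.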
I’m sure some of the criticism of people who claim to be using data to find knowledge, and to exploit inefficiencies in their industries, has some truth to it. But whatever it is in the human psyche that the Oakland A’s exploited for profit—this hunger for an expert who knows things with certainty, even when certainty is not possible—has a talent for hanging around.

...

How did this pair of Israeli psychologists come to have so much to say about these matters of the human mind that they more or less anticipated a book about American baseball written decades in the future? What possessed two guys in the Middle East to sit down and figure out what the mind was doing when it tried to judge a baseball player, or an investment, or a presidential candidate? And how on earth does a psychologist win a Nobel Prize in economics?

...

Amos was now what people referred to, a bit confusingly, as a “mathematical psychologist.” Non-mathematical psychologists, like Danny, quietly viewed much of mathematical psychology as a series of pointless exercises conducted by people who were using their ability to do math as camouflage for how little of psychological interest they had to say.

...

Students who once wondered why the two brightest stars of Hebrew University kept their distance from each other now wondered how two so radically different personalities could find common ground, much less become soulmates.

...

Danny was always sure he was wrong. Amos was always sure he was right. Amos was the life of every party; Danny didn’t go to the parties.

...

Both were grandsons of Eastern European rabbis, for a start. Both were explicitly interested in how people functioned when they were in a “normal” unemotional state. Both wanted to do science. Both wanted to search for simple, powerful truths.