Humans are social and generally want to be part of the crowd. Studies of social conformity suggest that the group’s view may shape how we perceive a situation. Those individuals who remain independent show activity in a part of the brain associated with fear. ... We are natural pattern seekers and see them even where none exist. Our brains are keen to make causal inferences, which can lead to faulty conclusions. ... Standard economic theory assumes that one discount rate allows us to translate value in the future to value in the present, and vice versa. Yet humans often use a high discount rate in the short term and a low one in the long term. This may be because different parts of the brain mediate short- and long-term decisions. ... We suffer losses more than we enjoy gains of comparable size. But the magnitude of loss aversion varies across the population and even for each individual based on recent experience. As a result, we sometimes forgo attractive opportunities because the fear of loss looms too large.
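The short-term/long-term split in discount rates described above can be made concrete with a toy comparison. Below is a minimal sketch (all rates are assumed for illustration, not taken from the excerpt): exponential discounting applies one constant rate at every horizon, as standard theory assumes, while hyperbolic discounting, a common model of the human pattern, discounts steeply at short horizons and gently at long ones.

```python
# Present value of $100 under two discounting models.

def exponential_discount(value, rate, years):
    """Standard theory: one constant rate, v / (1 + r)^t."""
    return value / (1 + rate) ** years

def hyperbolic_discount(value, k, years):
    """Hyperbolic model: v / (1 + k*t) -- steep early, shallow later."""
    return value / (1 + k * years)

for t in (1, 5, 20):
    exp_v = exponential_discount(100, 0.10, t)
    hyp_v = hyperbolic_discount(100, 0.25, t)
    print(f"year {t:2d}: exponential ${exp_v:6.2f}   hyperbolic ${hyp_v:6.2f}")
```

With these assumed parameters, the hyperbolic discounter values next year's $100 less than the exponential discounter does (a high effective short-term rate), yet values the 20-year payoff more (a low effective long-term rate), which is exactly the reversal the excerpt describes.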
Hard realities in these three fields are inconvenient for vested interests, and because the day of reckoning can always be seen as “later,” politicians can always find a way to postpone necessary actions, as can we all: “Because markets are efficient, these high prices must be reflecting the remarkable potential of the internet”; “the U.S. housing market largely reflects a strong U.S. economy”; “the climate has always changed”; “how could mere mortals change something as immense as the weather”; “we have nearly infinite resources, it is only a question of price”; “the infinite capacity of the human brain will always solve our problems.” ... Having realized the seriousness of this bias over the last few decades, I have noticed how hard it is to effectively pass on a warning for the same reason: No one wants to hear this bad news. So a while ago I came up with a list of propositions that are widely accepted by an educated business audience. They are widely accepted but totally wrong. It is my attempt to bring home how extreme is our preference for good news over accurate news. When you have run through this list you may be a little more aware of how dangerous our wishful thinking can be in investing and in the much more important fields of resource (especially food) limitations and the potentially life-threatening risks of climate damage. Wishful thinking and denial of unpleasant facts are simply not survival characteristics.
The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do ... Most of us have heard of at least one of these methods: subliminal stimulation, or what Packard called ‘subthreshold effects’ – the presentation of short messages that tell us what to do but that are flashed so briefly we aren’t aware we have seen them. In 1958, propelled by public concern about a theatre in New Jersey that had supposedly hidden messages in a movie to increase ice cream sales, the National Association of Broadcasters – the association that set standards for US television – amended its code to prohibit the use of subliminal messages in broadcasting. ... Subliminal stimulation is probably still in wide use in the US – it’s hard to detect, after all, and no one is keeping track of it – but it’s probably not worth worrying about. ... what would happen if new sources of control began to emerge that had little or no competition? And what if new means of control were developed that were far more powerful – and far more invisible – than any that have existed in the past? And what if new types of control allowed a handful of people to exert enormous influence not just over the citizens of the US but over most of the people on Earth? ... It might surprise you to hear this, but these things have already happened. ... The shift we had produced, which we called the Search Engine Manipulation Effect (or SEME, pronounced ‘seem’), appeared to be one of the largest behavioural effects ever discovered.
Should we make decisions based on intuition and emotion, or should we make decisions more rationally, with data, analytics, and numbers? The best process for making decisions under pressure is to use the data and numbers to inform our intuition. In addition, leaders must recognize and avoid falling prey to a number of mind tricks and biases. Power dynamics can also lead to poor decisions, and leaders do best to pursue an inquiry-based—rather than advocacy-based—approach. ... When making decisions under pressure, leaders face four tensions. Any decision in an organization generally has an ethical issue, a strategic issue, a financial issue, and a legal issue. Sometimes, there is tension among those issues. What makes perfect sense strategically might not make sense legally, or what makes the best sense financially might not make sense ethically. Part of the decision-making process is having the ability to recognize and manage the fundamental tensions that exist in most of the decisions we face. ... The way to do that is by answering three questions. First, how do I motivate and encourage the people and the organization to be aligned with what we are trying to achieve? Second, operationally, when we are under threat, how do I make sure that the business will be able to continue during these threatening circumstances? Third, how do I communicate the decision that I am about to make?
Cheese-rolling. Pole-vaulting. Wife-carrying. Figure skating. Cup-stacking. Bobsledding. Ferret-legging. Golf. Somewhere someone right now is endeavoring to become more proficient at every one of these activities. Half the sports on that list are imbued with the prestige and promise of an Olympic medal, but is there anything more intrinsically worthy about performing a triple salchow than there is about keeping an angry ferret inside your trousers for two minutes? ... The upcoming Summer Olympics in Rio de Janeiro will feature 306 different events in 42 sports, or so the official Rio2016.com site tells us. But how many of those, such as synchronized swimming or the equestrian events, do you actually consider a sport?
Risk scores, generated by algorithms, are an increasingly common factor in sentencing. Computers crunch data—arrests, type of crime committed, and demographic information—and a risk rating is generated. The idea is to create a guide that’s less likely to be subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to put inmates in prison, and who to let out on parole. Supporters of these tools claim they’ll help solve historical inequities, but their critics say they have the potential to aggravate them, by hiding old prejudices under the veneer of computerized precision. ... Computer scientists have a maxim, “Garbage in, garbage out.” In this case, the garbage would be decades of racial and socioeconomic disparities in the criminal justice system. Predictions about future crimes based on data about historical crime statistics have the potential to equate past patterns of policing with the predisposition of people in certain groups—mostly poor and nonwhite—to commit crimes.
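The "garbage in, garbage out" problem can be shown with a deliberately simple hypothetical simulation (all numbers below are assumptions for illustration, not data from the article): two neighborhoods with identical offending rates, one patrolled twice as heavily, so offenses there are twice as likely to become recorded arrests. Any risk score trained on those arrest records would learn the patrol pattern, not the behavior.

```python
import random

random.seed(0)

OFFENSE_RATE = 0.10                 # same true rate in both neighborhoods
DETECTION = {"A": 0.2, "B": 0.4}    # arrest probability per offense (assumed)

def arrest_rate(neigh, people=100_000):
    """Simulate recorded arrests per capita in one neighborhood."""
    arrests = 0
    for _ in range(people):
        if random.random() < OFFENSE_RATE:          # same behavior...
            if random.random() < DETECTION[neigh]:  # ...different policing
                arrests += 1
    return arrests / people

# A risk score trained on these records sees B as roughly twice as
# risky, even though underlying behavior is identical by construction.
print("arrest rate A:", arrest_rate("A"))
print("arrest rate B:", arrest_rate("B"))
```

The point of the sketch is the one the critics make: the data fed to the algorithm already encodes past patterns of policing, so the "prediction" reproduces them under a veneer of precision.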
Professionals in many organizations are assigned arbitrarily to cases: appraisers in credit-rating agencies, physicians in emergency rooms, underwriters of loans and insurance, and others. Organizations expect consistency from these professionals: Identical cases should be treated similarly, if not identically. The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies. ... The prevalence of noise has been demonstrated in several studies. Academic researchers have repeatedly confirmed that professionals often contradict their own prior judgments when given the same data on different occasions. ... The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow. ... It has long been known that predictions and decisions generated by simple statistical algorithms are often more accurate than those made by experts, even when the experts have access to more information than the formulas use. It is less well known that the key advantage of algorithms is that they are noise-free: Unlike humans, a formula will always return the same output for any given input. Superior consistency allows even simple and imperfect algorithms to achieve greater accuracy than human professionals. ... One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make. ... The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them.
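The two claims above, that a formula is noise-free and that even the simplest combination rule can beat human judgment, can be illustrated with a small simulation. Everything here (the true cue weights, the size of the judge's occasion-to-occasion noise) is an assumed toy setup, not the authors' study design.

```python
import random

random.seed(1)

def make_case():
    """A case is two observable cues; the outcome combines them noiselessly."""
    cues = [random.gauss(0, 1), random.gauss(0, 1)]
    truth = 0.6 * cues[0] + 0.4 * cues[1]   # true weights, unknown to anyone
    return cues, truth

def judge(cues):
    """Roughly correct weights, plus judgment noise (mood, weather, lunch)."""
    return 0.6 * cues[0] + 0.4 * cues[1] + random.gauss(0, 0.8)

def formula(cues):
    """Simplest possible rule: equal weights, and always the same output."""
    return (cues[0] + cues[1]) / 2

cases = [make_case() for _ in range(10_000)]
judge_mse = sum((judge(c) - t) ** 2 for c, t in cases) / len(cases)
formula_mse = sum((formula(c) - t) ** 2 for c, t in cases) / len(cases)
print(f"judge MSE:   {judge_mse:.3f}")
print(f"formula MSE: {formula_mse:.3f}")
```

Even though the judge here uses the *correct* weights and the formula uses naive equal weights, the formula's consistency wins: its only error comes from slightly wrong weights, while the judge pays the full noise penalty on every case. Note also that no outcome data was needed to build the formula, only common-sense variable selection, which is the article's closing point.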
Scientists are beginning to understand why these ‘mini Wall Streets’ work so well at forecasting election results — and how they sometimes fail. ... Experiments such as this are a testament to the power of prediction markets to turn individuals’ guesses into forecasts of sometimes startling accuracy. That uncanny ability ensures that during every US presidential election, voters avidly follow the standings for their favoured candidates on exchanges such as Betfair and the Iowa Electronic Markets (IEM). But prediction markets are increasingly being used to make forecasts of all kinds, on everything from the outcomes of sporting events to the results of business decisions. Advocates maintain that they allow people to aggregate information without the biases that plague traditional forecasting methods, such as polls or expert analysis. ... sceptics point out that prediction markets are far from infallible. ... prediction-market supporters argue that even imperfect forecasts can be helpful. ... People have been betting on future events for as long as they have played sports and raced horses. But in the latter half of the nineteenth century, US efforts to set betting odds through marketplace supply and demand became centralized on Wall Street, where wealthy New York City businessmen and entertainers were using informal markets to bet on US elections as far back as 1868. ... Friedrich Hayek. He argued that markets in general could be viewed as mechanisms for collecting vast amounts of information held by individuals and synthesizing it into a useful data point — namely the price that people are willing to pay for goods or services.
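Hayek's point, that a market synthesizes dispersed beliefs into a single price, is easiest to see in an automated prediction market. The sketch below uses Hanson's logarithmic market scoring rule, one standard mechanism for running such markets (the article does not name a specific mechanism, and the liquidity parameter is an arbitrary assumption): traders buy shares in an outcome, and the instantaneous price can be read as the market's current probability estimate.

```python
import math

B = 100.0            # liquidity parameter (assumed); higher = slower-moving prices
shares = [0.0, 0.0]  # shares outstanding: [outcome happens, outcome doesn't]

def cost(q):
    """LMSR cost function: total amount paid in so far."""
    return B * math.log(sum(math.exp(s / B) for s in q))

def price(q, i):
    """Current price of outcome i; behaves like a probability."""
    total = sum(math.exp(s / B) for s in q)
    return math.exp(q[i] / B) / total

def buy(q, i, amount):
    """Buy `amount` shares of outcome i; returns what the trader pays."""
    before = cost(q)
    q[i] += amount
    return cost(q) - before

print(f"opening price: {price(shares, 0):.2f}")  # 0.50: no information yet
buy(shares, 0, 60)   # an informed trader backs outcome 0
print(f"after trade:   {price(shares, 0):.2f}")  # price moves toward 1
```

Each trade moves the price exactly as far as the trader is willing to pay for, so the standing price aggregates every participant's information, which is why exchanges like the IEM can be read as forecasts.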
Americans are bad at saving. In an annual survey by the Fed, almost half said they couldn’t come up with $400 in an emergency. The savings rate of the bottom 90 percent of American households hovers just above 1 percent. ... There are many theories for why Americans don’t save, from poverty to debt to conspicuous consumption. But the most enticing comes from behavioral economics: It’s easier not to. Inertia is strong, and putting money away requires overcoming what economists call present bias. ... The good news, according to behavioral economists, is that we can just as easily be tricked into overcoming that psychology with “nudges” that reframe incentives. Just post calorie counts next to unhealthy food, and people won’t order cheeseburgers. Or, make 401(k) plans opt-out, and more people will save money for retirement. Suddenly, with one oh-so-simple tweak, making bad decisions becomes the harder option. ... At every step of the way, the study ran into a web of competing incentives and pesky human flaws that hurt its goal of getting poor people to save money. ... The problem goes beyond a sheer lack of funds. The psychology of poverty is hard to overcome with a dainty nudge. ... the study’s preliminary results were muddy. They suggested that the nudge method did get some people to save more: Deposits increased when people got some kind of reminder. But they didn’t show whether one type of nudge worked better than any other (possibly because of teller error), and they provided no evidence that the savings accounts helped people build up money over time.
I’m sure some of the criticism of people who claim to be using data to find knowledge, and to exploit inefficiencies in their industries, has some truth to it. But whatever it is in the human psyche that the Oakland A’s exploited for profit—this hunger for an expert who knows things with certainty, even when certainty is not possible—has a talent for hanging around. ... How did this pair of Israeli psychologists come to have so much to say about these matters of the human mind that they more or less anticipated a book about American baseball written decades in the future? What possessed two guys in the Middle East to sit down and figure out what the mind was doing when it tried to judge a baseball player, or an investment, or a presidential candidate? And how on earth does a psychologist win a Nobel Prize in economics? ... Amos was now what people referred to, a bit confusingly, as a “mathematical psychologist.” Non-mathematical psychologists, like Danny, quietly viewed much of mathematical psychology as a series of pointless exercises conducted by people who were using their ability to do math as camouflage for how little of psychological interest they had to say. ... students who once wondered why the two brightest stars of Hebrew University kept their distance from each other now wondered how two so radically different personalities could find common ground, much less become soulmates. ... Danny was always sure he was wrong. Amos was always sure he was right. Amos was the life of every party; Danny didn’t go to the parties. ... Both were grandsons of Eastern European rabbis, for a start. Both were explicitly interested in how people functioned when they were in a “normal” unemotional state. Both wanted to do science. Both wanted to search for simple, powerful truths.
Like a number of up-and-coming researchers in his generation, Nosek was troubled by mounting evidence that science itself—through its systems of publication, funding, and advancement—had become biased toward generating a certain kind of finding: novel, attention-grabbing, but ultimately unreliable. The incentives to produce positive results were so great, Nosek and others worried, that some scientists were simply locking their inconvenient data away. ... The problem even had a name: the file drawer effect. ... The aim was to redo about 50 studies from three prominent psychology journals, to establish an estimate of how often modern psychology turns up false positive results. ... He wasn't promising novel findings; he was promising to question them. So he ran his projects on a shoestring budget, self-financing them with his own earnings from corporate speaking engagements on his research about bias. ... researchers involved in similar rounds of soul-searching and critique in their own fields, who have loosely amounted to a movement to fix science. ... The problem, they claim, isn't that scientists don't want to do the right thing. On the contrary, Arnold says he believes that most researchers go into their work with the best of intentions, only to be led astray by a system that rewards the wrong behaviors.
Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer. ... Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics. ... When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” ... Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. ... Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them.
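The "mathematical equivalent of a spellchecker" can be sketched in a few lines. This is an illustration of the kind of check Statcheck automates, not its actual code: pull a reported result of the form "t(df) = value, p = value" out of the text, recompute the two-tailed p-value from the t statistic (here by numerically integrating the t density, using only the standard library), and compare it with what the authors reported.

```python
import math
import re

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def two_tailed_p(t, df, steps=20_000, upper=60.0):
    """Two-tailed p: numerically integrate the density from |t| outward."""
    t = abs(t)
    h = (upper - t) / steps
    area = sum(t_pdf(t + (k + 0.5) * h, df) for k in range(steps)) * h
    return 2 * area

def check(reported):
    """Extract a reported t-test and return (reported p, recomputed p)."""
    m = re.search(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*([\d.]+)", reported)
    df, t, p = int(m.group(1)), float(m.group(2)), float(m.group(3))
    return p, round(two_tailed_p(t, df), 3)

print(check("t(28) = 2.20, p = .03"))   # reported p vs. recomputed p
```

A mismatch between the two numbers does not prove misconduct, of course; like a spellchecker, it only flags lines worth a human look, which is exactly why some scientists found the mass email unsettling.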
Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy. The more choices you make throughout the day, the harder each one becomes for your brain, and eventually it looks for shortcuts, usually in one of two very different ways. One shortcut is to become reckless: to act impulsively instead of expending the energy to first think through the consequences. (Sure, tweet that photo! What could go wrong?) The other shortcut is the ultimate energy saver: do nothing. Instead of agonizing over decisions, avoid any choice. Ducking a decision often creates bigger problems in the long run, but for the moment, it eases the mental strain. You start to resist any change, any potentially risky move ... experiments confirmed the 19th-century notion of willpower being like a muscle that was fatigued with use, a force that could be conserved by avoiding temptation. ... Any decision, whether it’s what pants to buy or whether to start a war, can be broken down into what psychologists call the Rubicon model of action phases, in honor of the river that separated Italy from the Roman province of Gaul.
Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way? ... new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. ... point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context. ... Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to cooperate. Cooperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups. ... Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.
As long as the public delights in seeing pompous winemakers and critics humbled, journalists will keep writing Schadenfreude-laden stories about the latest “Gotcha!” study. But these articles generally confuse absence of evidence with evidence of absence: they presume that if a handful of researchers did not find that one group of connoisseurs possessed statistically significant tasting ability, any claim to wine expertise must be a hoax. The truly interesting question is the opposite one: whether it’s possible for a critic to look smart rather than silly. ... Unfortunately, designing an experiment that gives tasters a chance to succeed requires the scientist to understand wine. The experimenters need to give the drinkers plenty of time on a small number of wines, in an odourless room with appropriate stemware; to taste the bottles and ensure they are not flawed; to choose wines that are representative of a well-known style; and to serve them at the age where they best strut their stuff. In other words, what you would need is the Oxford-Cambridge Varsity match.
How do experts go wrong? There are several kinds of expert failure. The most innocent and most common are what we might think of as the ordinary failures of science. Individuals, or even entire professions, get important questions wrong because of error or because of the limitations of a field itself. They observe a phenomenon or examine a problem, come up with theories and solutions, and then test them. Sometimes they’re right, and sometimes they’re wrong. ... Science is learning by doing. Laypeople are uncomfortable with ambiguity, and they prefer answers rather than caveats. But science is a process, not a conclusion. Science subjects itself to constant testing by a set of careful rules under which theories can be displaced only by other theories. Laypeople cannot expect experts to never be wrong; if they were capable of such accuracy, they wouldn’t need to do research and run experiments in the first place. If policy experts were clairvoyant or omniscient, governments would never run deficits, and wars would break out only at the instigation of madmen. ... The most important point is that failed predictions do not mean very much in terms of judging expertise. Experts usually cover their predictions (and an important part of their anatomy) with caveats, because the world is full of unforeseeable accidents that can have major ripple effects down the line. ... The goal of expert advice and prediction is not to win a coin toss, it is to help guide decisions about possible futures.