I’d like to tell the story of a paradox: How do we bring the right people to the right place at the right time to discover something new, when we don’t know who or where or when that is, let alone what it is we’re looking for? This is the paradox of innovation: If so many discoveries — from penicillin to plastics — are the product of serendipity, why do we insist breakthroughs can somehow be planned? Why not embrace serendipity instead? Because here’s an example of what happens when you don’t. ... By one estimate, the rate of new drugs developed per dollar spent by the industry has fallen by roughly a factor of 100 over the last 60 years. Patent statistics tell a similar story across industry after industry, from chemistry to metalworking to clean energy, in which top-down innovation has only grown more expensive and less efficient over time. ... Instead of speeding up the pace of discovery, large hierarchical organizations are slowing down — a stagflationary principle known as “Eroom’s Law,” which is “Moore’s Law” spelled backwards. ... Any society that values novelty and new ideas (like our innovation-obsessed one) will invariably trend toward greater serendipity over time. The push toward greater diversity, better public spaces, and an expanded public sphere all increase the potential for fortuitous discoveries.
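To make the hundredfold figure concrete: if R&D efficiency decays exponentially, a 100x drop over 60 years implies a steady halving time. A back-of-the-envelope calculation (an illustration of the cited estimate, not a computation from the source) runs:

```python
import math

# Eroom's Law, illustrated: if new drugs per R&D dollar fell ~100x over
# ~60 years, exponential decay implies a fixed "halving time" for efficiency.
decline_factor = 100.0  # approximate overall drop cited in the text
years = 60.0

# Solve decline_factor = 2 ** (years / halving_time) for halving_time.
halving_time = years * math.log(2) / math.log(decline_factor)
print(f"R&D efficiency halves roughly every {halving_time:.1f} years")
```

This yields a halving time of about nine years, which is the figure usually quoted for Eroom's Law in drug development.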
The divergent tales of Syria and Lebanon demonstrate that the best early warning signs of instability are found not in historical data but in underlying structural properties. Past experience can be extremely effective when it comes to detecting risks of cancer, crime, and earthquakes. But it is a bad bellwether of complex political and economic events, particularly so-called tail risks—events, such as coups and financial crises, that are highly unlikely but enormously consequential. For those, the evidence of risk comes too late to do anything about it, and a more sophisticated approach is required. ... Thus, instead of trying in vain to predict such “Black Swan” events, it’s much more fruitful to focus on how systems can handle disorder—in other words, to study how fragile they are. Although one cannot predict what events will befall a country, one can predict how events will affect a country. Some political systems can sustain an extraordinary amount of stress, while others fall apart at the onset of the slightest trouble. The good news is that it’s possible to tell which are which by relying on the theory of fragility. ... The first marker of a fragile state is a concentrated decision-making system. On its face, centralization seems to make governments more efficient and thus more stable. But that stability is an illusion. Except in the military—the only sector that needs to be unified into a single structure—centralization contributes to fragility. ... The second soft spot is the absence of economic diversity. Economic concentration can be even more harmful than political centralization. Economists since David Ricardo have touted the gains in efficiency to be had if countries specialize in the sectors in which they hold a comparative advantage. But specialization makes a state more vulnerable in the face of random events. ... The third source of fragility is also economic in nature: being highly indebted and highly leveraged.
Debt is perhaps the single most critical source of fragility. It makes an entity more sensitive to shortfalls in revenue, and all the more so as those shortfalls accelerate. ... The fourth source of fragility is a lack of political variability. Contrary to conventional wisdom, genuinely stable countries experience moderate political changes, continually switching governments and reversing their political orientations. ... The fifth marker of fragility takes the proposition that there is no stability without volatility a step further: it is the lack of a record of surviving big shocks. States that have experienced a worst-case scenario in the recent past (say, within the previous two decades) and recovered from it are likely to be more stable than those that haven’t.
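The claim that debt heightens sensitivity to revenue shortfalls can be shown with a toy calculation (all numbers invented for illustration; nothing here is from the source): a fixed debt-service obligation turns a modest revenue shock into a much larger swing in what remains.

```python
# Toy illustration of leverage amplifying shocks (all numbers invented).
def surplus(revenue: float, debt_service: float) -> float:
    """What remains after fixed debt payments are made."""
    return revenue - debt_service

rev_normal, rev_shock = 100.0, 80.0  # a 20% revenue shortfall

# Without debt, the surplus falls by the same 20% as revenue: 100 -> 80.
unlevered = surplus(rev_normal, 0.0), surplus(rev_shock, 0.0)

# With a fixed debt service of 70, the identical shock cuts the surplus
# from 30 to 10 — a drop of about 67%, not 20%.
levered = surplus(rev_normal, 70.0), surplus(rev_shock, 70.0)

print(unlevered)  # (100.0, 80.0)
print(levered)    # (30.0, 10.0)
```

The mechanism is the same one that makes leveraged firms and leveraged states fragile: fixed obligations do not shrink when revenue does.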
Consider the most familiar random shape, the random walk, which shows up everywhere from the movement of financial asset prices to the path of particles in quantum physics. These walks are described as random because no knowledge of the path up to a given point can allow you to predict where it will go next. ... Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common. ... Sheffield and Miller have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness. ... “You take the most natural objects — trees, paths, surfaces — and you show they’re all related to each other,” Sheffield said. “And once you have these relationships, you can prove all sorts of new theorems you couldn’t prove before.” ... incoherent is not the same as incomprehensible. ... In practical terms, the results by Sheffield and Miller can be used to describe the random growth of real phenomena like snowflakes, mineral deposits, and dendrites in caves, but only when that growth takes place in the imagined world of random surfaces.
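The one-dimensional random walk described above is simple enough to simulate directly. A minimal sketch (the function name and parameters are my own, not from the source) makes the memorylessness explicit: each step is drawn fresh, independent of the entire path so far.

```python
import random

def random_walk(n_steps: int, seed=None) -> list:
    """Simulate a simple one-dimensional random walk: at each step,
    move +1 or -1 with equal probability, independent of the past."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        # The next step depends only on a fresh coin flip, never on `path` —
        # this is the memorylessness that makes the walk unpredictable.
        position += rng.choice((-1, 1))
        path.append(position)
    return path

path = random_walk(1000, seed=42)
print(path[:10])  # first few positions of one sample path
```

No matter how the first thousand steps went, the next step is still an even coin flip; knowing the path confers no predictive power, which is exactly the sense in which the text calls these walks "random."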