Around 540 million years ago, the ancestors of most modern animal groups suddenly appeared on the scene, in an outburst of speciation known as the Cambrian explosion. Many of these pioneering creatures left fossils behind. Some are so well preserved that scientists have been able to use scanning electron microscope images to piece together their inner anatomy, eyes included, and reconstruct their owners’ view of the world. ... But these eyes were already complex, and there are no traces of their simpler precursors. The fossil record tells us nothing about how sightless animals first came to see the world. This mystery flustered Charles Darwin. “To suppose that the eye, with all its inimitable contrivances ... could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree,” he wrote in Origin of Species. ... in the very next sentence, Darwin solves his own dilemma: “Yet reason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist … then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real.” ... The gradations he spoke of can be shown to exist. Living animals illustrate every possible intermediate between the primitive light-sensitive patches on an earthworm and the supersharp camera eyes of eagles. ... Even under the most pessimistic conditions, with the eye improving by just 0.005 percent each generation, it takes just 364,000 years for the simple sheet to become a fully functioning camera-like organ. As far as evolution goes, that’s a blink of an eye. ... But simple eyes should not be seen as just stepping-stones along a path toward greater complexity. Those that exist today are tailored to the needs of their users. ... 
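The 364,000-year figure can be sanity-checked with a few lines of arithmetic. The estimate comes from Nilsson and Pelger's well-known 1994 model, in which a flat light-sensitive patch becomes a camera-type eye through a sequence of roughly 1,829 successive 1% improvements; the step count is from the published model, not this excerpt, so treat it as an assumption here. A minimal sketch:

```python
import math

# Back-of-the-envelope check, assuming the Nilsson & Pelger (1994) model:
# ~1,829 successive 1% improvements turn a flat patch into a camera eye,
# while selection delivers only 0.005% improvement per generation
# (the "pessimistic" rate quoted in the text), one generation per year.
steps = 1829               # 1% steps (from the published model, an assumption)
per_gen = 0.00005          # 0.005% improvement per generation

# Solve (1 + per_gen)^n = 1.01^steps for n, the number of generations.
generations = steps * math.log(1.01) / math.log(1 + per_gen)
print(round(generations))  # ~364,000 generations, i.e. ~364,000 years
```

The result lands on the quoted figure: a few hundred thousand years, which is indeed a blink of an eye on geological timescales.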
Nothing that sees does so without proteins called opsins—the molecular basis of all eyes. Opsins work by embracing a chromophore, a molecule that can absorb the energy of an incoming photon. The energy rapidly snaps the chromophore into a different shape, forcing its opsin partner to likewise contort. This transformation sets off a series of chemical reactions that ends with an electrical signal.
Blue is a rarity among plants and animals. When it does occur in nature, it often isn’t a true blue pigment at all, but a trick of optics: light scattered or diffracted by fine structures, as in bird feathers, the sky, ice, water and iridescent butterfly wings. ... In response to growing pressure from consumers across the globe, Mars announced in February that over the next five years it would remove artificial colors from all the processed foods it makes for human consumption, and that pigments found in natural substances would take their place. ... In 2013, the Food and Drug Administration approved Mars’s petition to use the microscopic algae spirulina to make the first natural blue dye approved for use in the United States. As a result, any food manufacturer in the country can legally use spirulina as a colorant. Mars spent years researching spirulina’s safety; in order to overhaul 1,700 or so recipes and update its global manufacturing capabilities, the company desperately needs a substitute for synthetic Blue No. 1, as does the rest of the industry. But right now, there isn’t nearly enough spirulina dye to go around — and in any case, sometimes it doesn’t yield just the right blue, or the color degrades and comes out blotchy, or it tastes odd. ... Humans are color-seeking animals, and food companies learned to manipulate that trait early. ... One Mars executive told me that to convert only its blue M&Ms to spirulina blue, the company would, in his estimation, need twice the current global supply. ... last year the global market in natural colors was worth an estimated $970 million, up 60 percent since 2011. Natural colors now represent more than half the food-colors market in dollar terms.
The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. ... Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. ... That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. ... The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. ... Think of deep learning as a subset of a subset. 
“Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, which is the name for a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.
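The idea that a machine "figures out for itself" how to map inputs to outputs can be made concrete with the smallest possible learner: a single perceptron, the 1950s-era ancestor of today's deep nets. This sketch (plain Python, illustrative only, not deep learning itself) learns the logical AND function from labeled examples rather than from a hand-written rule:

```python
# A single perceptron learning AND from data. Nobody codes the rule;
# the weights are found by repeated correction against labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w0, w1, b = 0.0, 0.0, 0.0   # start knowing nothing
lr = 0.1                    # learning rate: size of each corrective nudge

for _ in range(20):         # a few passes over the training data
    for (x0, x1), target in data:
        pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
        err = target - pred             # -1, 0, or +1
        w0 += lr * err * x0             # nudge weights toward the answer
        w1 += lr * err * x1
        b += lr * err

print([1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data])
# → [0, 0, 0, 1], matching the truth table
```

After a handful of passes the weights settle and the predictions match every example. Deep learning stacks many layers of such units and tunes millions of weights the same way, by correction against data — which is why the terabytes of images and speech described above, and the GPUs to churn through them, were the missing ingredients.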
- Also: FiveThirtyEight - Some Like It Bot < 5min
- Also: Vox - Venture capitalist Marc Andreessen explains how AI will change the world 5-15min
- Also: Nautilus - Moore’s Law Is About to Get Weird < 5min
- Also: Edge - AI & The Future Of Civilization < 5min
- Also: Medium - Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 1 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 2 5-15min
The capital of the Kunene region, Opuwo lies in the heartland of the Himba, a semi-nomadic people who spend their days herding cattle. Long after many of the world’s other indigenous populations had begun to migrate to cities, the Himba had mostly avoided contact with modern culture, quietly continuing their traditional life. But that is slowly changing, with younger generations feeling the draw of Opuwo, where they will encounter cars, brick buildings, and writing for the first time. ... How does the human mind cope with all those novelties and new sensations? By studying people like the Himba, at the start of their journey into modernity, scientists are now hoping to understand the ways that modern life may have altered all of our minds. ... Like an irregular lens, our modern, urban brains distort the images hitting our retina, magnifying some parts of the scene and shrinking others.