With its flying buttresses and domed roof, the Terascale Simulation Facility at the Lawrence Livermore National Laboratory was built as a cathedral to the supercomputer. … The design may suggest some religious reverence for the all-powerful machines. More practically, though, it means they are unencumbered by supporting columns, saving space and creating a more direct route for the arteries of cables to feed and cool the world’s fastest supercomputers. But less than 10 years after being built, the building is already in danger of becoming outdated. … “When we designed this building, we thought it would be good for 50 years but already it’s only adequate and not robust,” says Mike McCoy, who leads the supercomputing effort at Lawrence Livermore as director of the Advanced Simulation and Computing programme. … Even its name risks becoming an anachronism. Terascale computing has been superseded by petascale computing – 1,000 times faster. By 2020, we will be in the exascale age – a thousand times faster again.
Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and its subsequent hiring of Geoffrey Hinton, the godfather of computer neural nets, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.
“Why, when I want to turn on my software and computer, do I need to have three fingers: control, alt, delete?” Rubenstein asked the living tech legend. “Whose idea was that?” The crowd laughed as Gates shifted his weight and scratched his ear sheepishly. His response began with some hemming and hawing, but he eventually wound his way to a straight answer.
Matt Ginsberg’s training is in astrophysics. He got his Ph.D. from Oxford when he was 24 years old. His doctoral advisor there was the famed mathematical physicist Roger Penrose, and he recalls rubbing elbows with the academic rock stars Stephen Hawking and the late Richard Feynman. He created an artificial intelligence crossword puzzle solver called Dr. Fill and a computer bridge world champion called GIB. ... Unsurprisingly, there’s pretty heavy math involved to make this real-time sports predictor work. For one element of the system’s calculations, Ginsberg sent me a PDF with eight dense pages of physics diagrams and systems of equations and notes on derivations. It uses something called the Levenberg-Marquardt algorithm. It requires Jacobians and the taking of partial derivatives and the solving of quartics, and code efficient enough to calculate it all up to the split second. If predicting the future were easy, I suppose everybody would do it. ... One thing this project can’t predict, however, is its own future. Its uses are, so far, largely speculative, and cashing in on a minor superpower might not be easy. Even gamblers who bet during play would struggle to make much money from a half-second heads-up that a shot is going in. But Ginsberg’s system would find a natural place in the long line of sports technologies that have been used for a singular end — TV.
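For a flavor of the technique, here is a minimal sketch (not Ginsberg’s code): Levenberg-Marquardt, via SciPy’s least_squares, fitting a ballistic arc to noisy ball observations and extrapolating to the rim. The release point, velocities, noise level, and rim distance are all invented for illustration.

```python
# A minimal sketch of the kind of real-time fit the article describes:
# Levenberg-Marquardt on noisy (t, x, y) ball positions, then
# extrapolation to the rim. Illustrative only, not Ginsberg's model.
import numpy as np
from scipy.optimize import least_squares

G = 9.81  # gravity, m/s^2

def residuals(params, t, x_obs, y_obs):
    """Ballistic model minus observations, stacked into one vector."""
    x0, vx, y0, vy = params
    x_pred = x0 + vx * t
    y_pred = y0 + vy * t - 0.5 * G * t**2
    return np.concatenate([x_pred - x_obs, y_pred - y_obs])

# Synthetic shot: released at (0, 2.1) m with velocity (6.0, 7.5) m/s.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 15)  # first half-second of flight
x_obs = 6.0 * t + rng.normal(0, 0.02, t.size)
y_obs = 2.1 + 7.5 * t - 0.5 * G * t**2 + rng.normal(0, 0.02, t.size)

# method="lm" selects MINPACK's Levenberg-Marquardt, which estimates
# the Jacobian of the residuals by finite differences.
fit = least_squares(residuals, x0=[0, 5, 2, 5], method="lm",
                    args=(t, x_obs, y_obs))
x0, vx, y0, vy = fit.x

# Predict the ball's height when it reaches a rim 7 m away.
t_rim = (7.0 - x0) / vx
y_rim = y0 + vy * t_rim - 0.5 * G * t_rim**2
print(f"predicted height at the rim: {y_rim:.2f} m")
```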
It took years for the Internet to reach its first 100 computers. Today, 100 new ones join each second. And running deep within the silicon souls of most of these machines is the work of a technical wizard of remarkable power, a man described as a genius and a bully, a spiritual leader and a benevolent dictator. ... Linus Torvalds — who in person could be mistaken for just another paunchy, middle-aged suburban dad who happens to have a curiously large collection of stuffed penguin dolls — looms over the future of computing much as Bill Gates and the late Steve Jobs loom over its past and present. For Linux, the operating system that Torvalds created and named after himself, has come to dominate the exploding online world, making it more popular overall than rivals from Microsoft and Apple. ... But while Linux is fast, flexible and free, a growing chorus of critics warn that it has security weaknesses that could be fixed but haven’t been. Worse, as Internet security has surged as a subject of international concern, Torvalds has engaged in an occasionally profane standoff with experts on the subject. ... Linux has thrived in part because of Torvalds’s relentless focus on performance and reliability, both of which could suffer if more security features were added. Linux works on almost any chip in the world and is famously stable as it manages the demands of many programs at once, allowing computers to hum along for years at a time without rebooting. ... Yet even among Linux’s many fans there is growing unease about vulnerabilities in the operating system’s most basic, foundational elements — housed in something called “the kernel,” which Torvalds has personally managed since its creation in 1991.
Patrick Soon-Shiong wants to turn cancer treatment upside down. On January 12, Soon-Shiong and a consortium of industry, government, and academia announced the launch of the Cancer MoonShot 2020, an ambitious program aiming to replace a long history of blunt trial-and-error treatment with what amounts to a training regimen for the body’s own immune system. That system, Soon-Shiong argues, is perfectly adept at finding and eliminating cancer with exquisite precision—if it can recognize the mutated cells in the first place. Helping it to do so could represent a powerful new treatment for the disease, akin to a flu vaccine. ... Soon-Shiong has hit home runs before. This past July, one of his firms completed the highest-value biotech IPO in history. A cancer drug he developed, called Abraxane, is approved to fight breast, lung, and pancreatic cancers in more than 40 countries. Soon-Shiong’s path from medical school in South Africa through residency in Canada to roles as UCLA professor, NASA researcher, and corporate CEO has given him the bird’s-eye view necessary to take on a project this ambitious, as well as the resources to marshal the world-class computing and genome-sequencing facilities that it requires.
- Also: Science Alert - Scientists report "unprecedented" success using T-cells to treat cancer < 5min
- Also: Aeon - Death of cancer 5-15min
- Also: The Conversation - The equation that will help us decode cancer’s secrets < 5min
- Also: The New Yorker - Tough Medicine: A disturbing report from the front lines of the war on cancer. < 5min
- Repeat: Mosaic - What’s wrong with Craig Venter? 5-15min
The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. ... Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. ... That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. ... The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. ... Think of deep learning as a subset of a subset. “Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, which is the name for a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.
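To make that paradigm concrete, here is a minimal from-scratch sketch of the training loop the passage describes: a tiny neural net that is never given the rule for XOR, only four labeled examples, and recovers the function by gradient descent. Layer sizes, learning rate, and step count are arbitrary choices for illustration.

```python
# No rule for the task is programmed; a generic learning algorithm
# (gradient descent on a two-layer net) extracts it from examples.
import numpy as np

rng = np.random.default_rng(42)

# Toy data: XOR, a mapping no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of four units, randomly initialized.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(10000):
    # Forward pass: compute the net's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

Deep learning scales this same loop up: more layers, millions of parameters, terabytes of data, and GPUs doing the matrix arithmetic.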

- Also: FiveThirtyEight - Some Like It Bot < 5min
- Also: Vox - Venture capitalist Marc Andreessen explains how AI will change the world 5-15min
- Also: Nautilus - Moore’s Law Is About to Get Weird < 5min
- Also: Edge - AI & The Future Of Civilization < 5min
- Also: Medium - Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 1 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 2 5-15min
Sensors gave machines the ability to perceive things like light, altitude, and moisture by converting stimuli into ones and zeros. The coming revolution will be filled with what are called “actuators,” which do the reverse. They allow machines to simplify our world by converting those ones and zeros back into some form of force, such as light or magnetic waves, or even physical pressure that can push objects. The actuator, like the sensor before it, is part of technology’s relentless quest to make machines do more and more things with greater and greater efficiency, as epitomized by the microprocessor, the most efficient information device ever made. ... whole industries will be reshaped. The market for fossil fuels, for example, will suffer a new setback, as power for your electric vehicle can be delivered from a simple charging plate that works in much the same way your Apple Watch gets juiced up in its cradle. The life-sciences market will have to adjust to a world where tests can be performed and therapies delivered from a capsule you swallow to detect cancer. And robots that use actuators to move parts with great precision—and can be recharged wirelessly—will take on more manufacturing tasks. ... One of the most promising is made of a compound of gallium and nitrogen, referred to as GaN. It’s far more efficient than silicon at converting the movement of electrons into energy radiating outward.
In a rare interview, Abovitz says Magic Leap has spent a billion dollars perfecting a prototype and has begun constructing manufacturing lines in Florida, ahead of the release of a consumer version of its technology. When it arrives – best guess is within the next 18 months – it could usher in a new era of computing, a next-generation interface we’ll use for decades to come. ... Magic Leap’s innovation isn’t just a high-tech display – it’s a disruption machine. This technology could affect every business that uses screens or computers and many that don’t. It could kill the $120 billion market for flat-panel displays and shake the $1 trillion global consumer-electronics business to its core. ... The centerpiece of Magic Leap’s technology is a head-mounted display, but the final product should fit into a pair of spectacles. When you’re wearing the device, it doesn’t block your view of the world; the hardware projects an image directly onto your retina through an optics system built into a piece of semitransparent glass (the product won’t fry your eyeballs; it’s replicating the way we naturally observe the world instead of forcing you to stare at a screen). The hardware also constantly gathers information, scanning the room for obstacles, listening for voices, tracking eye movements and watching hands.
The difference between the 4004 and the Skylake is the difference between computer behemoths that occupy whole basements and stylish little slabs 100,000 times more powerful that slip into a pocket. It is the difference between telephone systems operated circuit by circuit with bulky electromechanical switches and an internet that ceaselessly shuttles data packets around the world in their countless trillions. It is a difference that has changed everything from metal-bashing to foreign policy, from the booking of holidays to the designing of H-bombs. ... Moore’s law is not a law in the sense of, say, Newton’s laws of motion. But Intel, which has for decades been the leading maker of microprocessors, and the rest of the industry turned it into a self-fulfilling prophecy. ... That fulfilment was made possible largely because transistors have the unusual quality of getting better as they get smaller; a small transistor can be turned on and off with less power and at greater speeds than a larger one. ... “There’s a law about Moore’s law,” jokes Peter Lee, a vice-president at Microsoft Research: “The number of people predicting the death of Moore’s law doubles every two years.” ... making transistors smaller has no longer been making them more energy-efficient; as a result, the operating speed of high-end chips has been on a plateau since the mid-2000s ... while the benefits of making things smaller have been decreasing, the costs have been rising. This is in large part because the components are approaching a fundamental limit of smallness: the atom. ... One idea is to harness quantum mechanics to perform certain calculations much faster than any classical computer could ever hope to do. Another is to emulate biological brains, which perform impressive feats using very little energy. Yet another is to diffuse computer power rather than concentrating it, spreading the ability to calculate and communicate across an ever greater range of everyday objects in the nascent internet of things. ... in 2012 the record for maintaining a quantum superposition without the use of silicon stood at two seconds; by last year it had risen to six hours. ... For a quantum algorithm to work, the machine must be manipulated in such a way that the probability of obtaining the right answer is continually reinforced while the chances of getting a wrong answer are suppressed.
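The textbook instance of that reinforce-the-right-answer manipulation is Grover’s search algorithm (our example; the article does not name one). A toy classical simulation of the register’s amplitudes shows the mechanism; the qubit count and target index are arbitrary.

```python
# Classical simulation of Grover's search on a 4-qubit register:
# each iteration boosts the amplitude of the marked item and
# suppresses the rest. Illustration only; no quantum hardware here.
import numpy as np

n = 4                  # qubits
N = 2 ** n             # search space of 16 items
target = 11            # the "right answer"

state = np.full(N, 1 / np.sqrt(N))  # uniform superposition

# Optimal number of iterations is about (pi/4) * sqrt(N).
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    state[target] *= -1               # oracle: flip the target's sign
    state = 2 * state.mean() - state  # diffusion: invert about the mean

print(f"probability of the right answer: {state[target] ** 2:.3f}")
# ~0.96 after 3 iterations, versus 1/16 for a blind guess
```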

Yet the mystery of the mechanism is only partly solved. No one knows who made it, how many others like it were made, or where it was going when the ship carrying it sank. ... What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now. ... Scholars have long wrestled with “undiscovered public knowledge,” a problem that occurs when researchers arrive at conclusions independently from one another, creating fragments of understanding that are “logically related but never retrieved, brought together, [or] interpreted,” as Don Swanson wrote in an influential 1986 essay introducing the concept. ... In other words, on top of everything we don’t know, there’s everything we don’t know that we already know. ... Discovery in the online realm is powered by a mix of human curiosity and algorithmic inquiry, a dynamic that is reflected in the earliest language of the internet. The web was built to be explored not just by people, but by machines. As humans surf the web, they’re aided by algorithms doing the work beneath the surface, sequenced to monitor and rank an ever-swelling current of information for pluckable treasures. The search engine’s cultural status has evolved with the dramatic expansion of the web. ... Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built.
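The best-known of those ranking algorithms, though the article never names it, is PageRank, which treats a link as a vote and a random surfer as the voter. A minimal power-iteration sketch on an invented four-page web:

```python
# PageRank by power iteration on a toy link graph (invented data).
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page -> pages it links to
N = len(links)
d = 0.85  # damping: probability the surfer follows a link vs. jumps anywhere

# Column-stochastic transition matrix of the link graph.
M = np.zeros((N, N))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

rank = np.full(N, 1.0 / N)
for _ in range(50):  # iterate to the stationary distribution
    rank = (1 - d) / N + d * M @ rank

print(np.round(rank, 3))  # page 2, linked from every other page, scores highest
```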
- Also: Quartz - Inside the secret meeting where Apple revealed the state of its AI research < 5min
- Also: The Library Quarterly - Undiscovered Public Knowledge > 15min
- Also: AAAI - Undiscovered Public Knowledge: a Ten-Year Update 5-15min
- Also: Wired - Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free 5-15min
A mathematical prodigy, he worked out how to “beat the dealer” at blackjack while a postdoctoral student at MIT. After he published a book in 1962 revealing how to count cards, he became so famous that casinos banned him from playing — he says one even resorted to drugging him. Many changed their rules to thwart people using his counting system. ... Next came an attempt to beat roulette, using a contraption tied to his foot that is now described as the world’s first wearable computer; after that, an expedition into Wall Street that netted hundreds of millions of dollars. ... Thorp’s then revolutionary use of mathematics, options-pricing and computers gave him a huge advantage. ... “Adam Smith’s market is a whole lot different from our markets. He imagined a market with lots of buyers and sellers of things, nobody had market dominance or could impose things on the market, and there was a lot of competition. The market we have now is nothing like that. The players are so big that they control the levers of financial policy.” ... “One of the things that’s served me very well in life is having an extraordinary bullsh*t detector.”
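For flavor, here is a minimal sketch of card counting in the Hi-Lo style, a later simplification of the ten-count system Thorp published in Beat the Dealer; Thorp’s own system weighted cards differently, so treat this as illustrative.

```python
# Hi-Lo card counting: low cards leaving the shoe favor the player.
HI_LO = {**{r: +1 for r in "23456"},
         **{r: 0 for r in "789"},
         **{r: -1 for r in ["10", "J", "Q", "K", "A"]}}

def running_count(cards_seen):
    """Sum of Hi-Lo values over every card dealt so far."""
    return sum(HI_LO[c] for c in cards_seen)

def true_count(cards_seen, decks_remaining):
    """Running count normalized by the number of decks still in play."""
    return running_count(cards_seen) / decks_remaining

# Mostly low cards have come out, so the remaining shoe is rich in
# tens and aces: the count is positive and the player raises the bet.
seen = ["2", "5", "K", "3", "6", "4", "9", "A", "2"]
print(running_count(seen), true_count(seen, decks_remaining=5))  # 4, 0.8
```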
Everyone knows that modern computers are better than old ones. But it is hard to convey just how much better, for no other consumer technology has improved at anything approaching a similar pace. The standard analogy is with cars: if the car from 1971 had improved at the same rate as computer chips, then by 2015 new models would have had top speeds of about 420 million miles per hour. ... There have been roughly 22 ticks of Moore’s law since the launch of the 4004 in 1971 through to mid-2016. For the law to hold until 2050, there will have to be 17 more, in which case engineers would have to figure out how to build computers from components smaller than an atom of hydrogen, the smallest element there is. ... a consensus among Silicon Valley’s experts that Moore’s law is near its end.
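The arithmetic behind those claims fits on a napkin. A rough reconstruction, assuming one tick of Moore’s law is one doubling of transistor count, a 1971 car topping out near 100 mph, a 14 nm process as the 2016 state of the art, and a √2 linear shrink per doubling (all assumptions of this sketch, not figures from the article):

```latex
% Car analogy: 22 doublings since 1971.
\[
2^{22} \approx 4.2 \times 10^{6},
\qquad
100~\text{mph} \times 2^{22} \approx 4.2 \times 10^{8}~\text{mph}.
\]
% Feature size: each doubling halves transistor area, so linear
% dimensions shrink by a factor of sqrt(2) per tick.
\[
\frac{14~\text{nm}}{(\sqrt{2})^{17}} \approx \frac{14~\text{nm}}{362}
\approx 0.04~\text{nm},
\]
% which is below the ~0.1 nm diameter of a hydrogen atom.
```

Both lines land on the article’s numbers: roughly 420 million miles per hour, and components smaller than the smallest atom.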
Reversing Paralysis: Scientists are making remarkable progress at using brain implants to restore the freedom of movement that spinal cord injuries take away.
Self-Driving Trucks: Tractor-trailers without a human at the wheel will soon barrel onto highways near you. What will this mean for the nation’s 1.7 million truck drivers?
Paying with Your Face: Face-detecting systems in China now authorize payments, provide access to facilities, and track down criminals. Will other countries follow?
Practical Quantum Computers: Advances at Google, Intel, and several research groups indicate that computers with previously unimaginable power are finally within reach.
The 360-Degree Selfie: Inexpensive cameras that make spherical images are opening a new era in photography and changing the way people share stories.
Hot Solar Cells: By converting heat to focused beams of light, a new solar device could create cheap and continuous power.
Gene Therapy 2.0: Scientists have solved fundamental problems that were holding back cures for rare hereditary disorders. Next we’ll see if the same approach can take on cancer, heart disease, and other common illnesses.
The Cell Atlas: Biology’s next mega-project will find out what we’re really made of.
Botnets of Things: The relentless push to add connectivity to home gadgets is creating dangerous side effects that figure to get even worse.
Reinforcement Learning: By experimenting, computers are figuring out how to do things that no programmer could teach them (see the sketch after this list).
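As flagged in the reinforcement-learning item above, here is a toy illustration of learning by experiment: tabular Q-learning on a five-cell corridor where only the rightmost cell pays a reward. The environment and hyperparameters are invented for illustration.

```python
# Tabular Q-learning: the agent is never told the goal is to the
# right; it discovers that by trial, error, and reward.
import numpy as np

N_STATES = 5                          # cells 0..4; cell 4 pays reward 1
Q = np.zeros((N_STATES, 2))           # action 0 = left, action 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise act on current estimates.
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward plus the discounted value
        # of the best action available in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))  # learned policy: go right in every cell -> [1 1 1 1]
```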
Whatever the truth of actual brainwashing incidents, the battle for people’s minds loomed large in the late 1950s, and was the subject of serious Pentagon discussions. The US and the Soviet Union were engaged in an ideological – and psychological – battle. Eager to exploit the science of human behaviour as it had physics and chemistry, the Pentagon commissioned a high-level panel at the Smithsonian Institution to recommend the best course of action. ... Psychology during the Cold War had fast become a darling of the military. ... That recommendation was translated by Pentagon officials into two separate assignments handed down to ARPA: one in the behavioural sciences, which would include everything from the psychology of brainwashing to quantitative modelling of society, and a second in command-and-control, to focus on computers. ... Licklider envisioned the modern conception of interactive computing: a future where people worked on personal consoles at their desks, rather than having to walk into a large room and feed punch cards into machines to crunch numbers. ... Licklider wanted people to understand that, more than any specific application, what he was describing was an entire metamorphosis of man and machine interaction. Personal consoles, time-sharing, and networking – the article essentially spelled out all the underpinnings of the modern internet.