Wired - How Ray Kurzweil Will Help Google Make the Ultimate AI Brain < 5min

Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and its subsequent hiring of Geoffrey Hinton, the godfather of computer neural nets, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.

Institutional Investor - The Rise of the Tech Model May Soon Make You Obsolete 5-15min

Machine learning, artificial intelligence and other technological advances are transforming how pensions, endowments, sovereign funds and other institutions manage their assets. ... Will the financial services industry soon be challenged by technology entrepreneurs with little initial - or no exclusive - interest in the investment business? ... The hot technologies being developed today will offer unparalleled insight into the complex world around us, and the applications to the entire domain of finance and investing are countless. ... One example: The ascendance of nonbiological intelligence means computing systems will learn and process many types of inputs far faster than even the most-expert individuals. Once experts partner with the systems, these man-machine teams will become extremely competent at rules-based goal seeking. The days of using scarce computing resources to model complex systems - backcasting, calibrating, validating and eventually forecasting - are nearly over. ... a growing number of computing systems and technologies will empower people, organizations, networks and information in transformative ways. Service industries will be particularly affected, as they often require human, labor-intensive analytics and networking scale. But if technologies can help people network and analyze faster and better, some of the companies in the industries that provide these services will face an existential challenge. As with the rise of computing and the Internet, we expect new technologies in the coming decade to challenge service industries, such as finance, in ways that few people today appreciate.

Financial Times - Breakfast with the FT: Ray Kurzweil < 5min

Over a specially prepared breakfast, the inventor and futurist details his plans to live for ever ... Kurzweil, who invented the first print-to-speech reading machine for the blind, the flatbed scanner and a music synthesiser capable of reproducing the sound of a grand piano, has been thinking about artificial intelligence (AI) for 50 years. In The Age of Intelligent Machines (1990), he predicted the internet’s ubiquity and the rise of mobile devices. The Singularity is Near, his 2005 bestseller, focused on AI and the future of mankind. In 2012 he joined Google as a director of engineering to develop machine intelligence. ... Kurzweil’s supporters hail him as “the ultimate thinking machine” and “the rightful heir to Thomas Edison”. Microsoft co-founder Bill Gates has called him “the best person I know at predicting the future of artificial intelligence”. To his critics, he is “one of the greatest hucksters of the age”, and a “narcissistic crackpot obsessed with longevity”. ... His interest in health goes back to when he was 15 and his father, Fredric, had a heart attack. “He died when I was 22. He was 58.” Kurzweil realised he could inherit his father’s dispositions. In his thirties, he was diagnosed with type 2 diabetes. Frustrated by conventional treatments, he “approached this as an inventor”. It has not returned. “You can overcome your genetic disposition. The common wisdom is it’s 80 per cent genes, 20 per cent lifestyle. If you’re diligent, it’s 90 per cent intervention and 10 per cent genes,” he claims.

The Atlantic - The Man Who Would Teach Machines to Think 5-15min

Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind. ... “It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.” ... Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think. ... Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.

The New Yorker - The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? > 15min

Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude. ... Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. ... Bostrom is arguably the leading transhumanist philosopher today, a position achieved by bringing order to ideas that might otherwise never have survived outside the half-crazy Internet ecosystem where they formed. He rarely makes concrete predictions, but, by relying on probability theory, he seeks to tease out insights where insights seem impossible. ... The people who say that artificial intelligence is not a problem tend to work in artificial intelligence.

Bloomberg - The First Person to Hack the iPhone Built a Self-Driving Car. In His Garage 5-15min

He says it’s a self-driving car that he had built in about a month. The claim seems absurd. But when I turn up that morning, in his garage there’s a white 2016 Acura ILX outfitted with a laser-based radar (lidar) system on the roof and a camera mounted near the rearview mirror. A tangle of electronics is attached to a wooden board where the glove compartment used to be, a joystick protrudes where you’d usually find a gearshift, and a 21.5-inch screen is attached to the center of the dash. “Tesla only has a 17-inch screen,” Hotz says. ... Hotz was the first person to hack Apple’s iPhone, allowing anyone—well, anyone with a soldering iron and some software smarts—to use the phone on networks other than AT&T’s. He later became the first person to run through a gantlet of hard-core defense systems in the Sony PlayStation 3 and crack that open, too. ... The technology he’s building represents an end run on much more expensive systems being designed by Google, Uber, the major automakers, and, if persistent rumors and numerous news reports are true, Apple. More short term, he thinks he can challenge Mobileye, the Israeli company that supplies Tesla Motors, BMW, Ford Motor, General Motors, and others with their current driver-assist technology. ... Hotz plans to best the Mobileye technology with off-the-shelf electronics. He’s building a kit consisting of six cameras—similar to the $13 ones found in smartphones—that would be placed around the car. ... The goal is to sell the camera and software package for $1,000 a pop either to automakers or, if need be, directly to consumers who would buy customized vehicles at a showroom run by Hotz. ... There are two breakthroughs that make Hotz’s system possible. The first comes from the rise in computing power since the days of the Grand Challenge. He uses graphics chips that normally power video game consoles to process images pulled in by the car’s camera and speedy Intel chips to run his AI calculations. ... The second advance is deep learning, an AI technology that has taken off over the past few years. It allows researchers to assign a task to computers and then sit back as the machines in essence teach themselves how to accomplish and finally master the job. ... Instead of the hundreds of thousands of lines of code found in other self-driving vehicles, Hotz’s software is based on about 2,000 lines.
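
Hotz’s 2,000-line figure is plausible because in an end-to-end learned system the driving “rules” live in a network’s weights rather than in hand-written code. Below is a minimal sketch of that idea, assuming a supervised setup that pairs camera frames with the steering angles a human driver chose; this is not Hotz’s actual system, and every layer size, shape, and hyperparameter is illustrative.

```python
# A minimal sketch of end-to-end learned driving, NOT Hotz's actual code:
# a small convolutional network maps a camera frame directly to a steering
# angle, trained on examples of what a human driver did. All layer sizes,
# image shapes, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to one value per channel
        )
        self.head = nn.Linear(32, 1)          # single output: steering angle

    def forward(self, frames):
        return self.head(self.features(frames).flatten(1))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batch: 8 RGB frames plus the angles a human driver chose.
frames = torch.randn(8, 3, 120, 160)
angles = torch.randn(8, 1)

for step in range(10):   # a real system would loop over hours of driving logs
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)
    loss.backward()
    optimizer.step()
```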

Nautilus - The Man Who Tried to Redeem the World with Logic 5-15min

Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense. ... Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover—nearly 2,000 pages in all—and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though—he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again. ... Though they started at opposite ends of the socioeconomic spectrum, McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence. But this is more than a story about a fruitful research collaboration. It is also about the bonds of friendship, the fragility of the mind, and the limits of logic’s ability to redeem a messy and imperfect world. ... “He was absolutely incomparable in the scholarship of chemistry, physics, of everything you could talk about history, botany, etc. When you asked him a question, you would get back a whole textbook … To him, the world was connected in a very complex and wonderful fashion.”

Aeon - Hive consciousness 5-15min

New research puts us on the cusp of brain-to-brain communication. Could the next step spell the end of individual minds? ... we’ve moved beyond merely thinking orders at machinery. Now we’re using that machinery to wire living brains together. Last year, a team of European neuroscientists headed by Carles Grau of the University of Barcelona reported a kind of – let’s call it mail-order telepathy – in which the recorded brainwaves of someone thinking a salutation in India were emailed, decoded and implanted into the brains of recipients in Spain and France (where they were perceived as flashes of light). ... What are the implications of a technology that seems to be converging on the sharing of consciousness? ... It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. ... Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention). ... If physics is right – if everything ultimately comes down to matter, energy and numbers – then any sufficiently accurate copy of a thing will manifest the characteristics of that thing. Sapience should therefore emerge from any physical structure that replicates the relevant properties of the brain.

Wired - Andy Rubin Unleashed Android On The World. Now Watch Him Do The Same With AI 5-15min

Rubin has a theory that humanity is on the cusp of a new computing age. Just as MS-DOS gave way to Macintosh and Windows, which gave way to the web, which gave way to smartphones, he thinks the forces are in place to begin a decades-long transition to the next great platform: artificial intelligence. ... Google, Facebook, and Microsoft have collectively spent billions to fund the development of neural networks that can understand human speech or recognize faces in photos. And over the next decade AI is bound to grow more powerful, capable of tasks we can’t imagine today. Soon, Rubin figures, it will be available as a cloud service, powering thousands of gadgets and machines. Just as practically every device today contains software of some kind, it could soon be nearly impossible to buy a device without some kind of AI inside. It’s hard to imagine precisely what that future will look like, but for a rough idea, think about the difference between your car and a self-driving car; now apply that difference to every object you own. ... Rubin wants Playground to become the factory that creates the standard building blocks—the basic quartermaster’s inventory of components—for the AI-infused future. And he wants to open up this platform of hardware and software tools so that anyone, not just the companies he works with directly, can create an intelligent device. If he’s successful, Playground stands to have the same kind of impact on smart machines that Android had on smartphones, providing the technological infrastructure for thousands of products and giving a generation of entrepreneurs the ability to build a smart drone. ... The fundamental idea, Rubin says, is to create what he calls an idea amplifier—a system that quickly turns concepts into products with maximum impact. ... For AI to reach its true potential, Rubin argues, we need to bring it into the physical world. And the way to do that is to create thousands of devices that pull information from their environment.

Popular Science - Fooling The Machine 5-15min

An accelerating field of research suggests that most of the artificial intelligence we’ve created so far has learned enough to give a correct answer, but without truly understanding the information. And that means it’s easy to deceive. ... Machine learning algorithms have quickly become the all-seeing shepherds of the human flock. This software connects us on the internet, monitors our email for spam or malicious content, and will soon drive our cars. To deceive them would be to shift the tectonic underpinnings of the internet, and could pose even greater threats to our safety and security in the future. ... Small groups of researchers—from Pennsylvania State University to Google to the U.S. military—are devising and defending against potential attacks that could be carried out on artificially intelligent systems. In theories posed in the research, an attacker could change what a driverless car sees. Or, it could activate voice recognition on any phone and make it visit a website with malware, only sounding like white noise to humans. Or let a virus travel through a firewall into a network. ... Instead of taking the controls of a driverless car, this method shows it a kind of hallucination—images that aren’t really there. ... “We show you a photo that’s clearly a photo of a school bus, and we make you think it’s an ostrich,” says Ian Goodfellow, a researcher at Google who has driven much of the work on adversarial examples.
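
The bus-to-ostrich demonstration rests on following the gradient of the classifier’s loss with respect to the input pixels. A minimal sketch of that gradient-sign idea is below; the untrained stand-in model, random image, and perturbation budget are all placeholders, since a real attack targets a trained network.

```python
# A minimal sketch of the gradient-sign attack behind the bus-to-ostrich
# demonstration: nudge each pixel slightly in the direction that increases
# the classifier's loss. The untrained stand-in model, random image, and
# epsilon budget below are placeholders; a real attack targets a trained
# network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()                                # gradient of loss w.r.t. pixels

epsilon = 0.05                                 # small enough to be invisible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(image).argmax().item(), model(adversarial).argmax().item())
# The two predictions can differ even though the images look the same to us.
```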

Business Insider - The inside story of how Amazon created Echo < 5min

The reaction was understandable given the lofty goals outlined in the Echo's original plan: It envisioned an intelligent, voice-controlled household appliance that could play music, read the news aloud and order groceries — all by simply letting users talk to it from anywhere in the house. ... the Echo's path into consumers' homes was hardly a sure thing. The gadget was stuck in Amazon's in-house labs for years, subject to the perfectionist demands of Amazon CEO Jeff Bezos and lengthy internal debates about its market appeal. And in the wake of the high-profile failure of Amazon's smartphone, the industry rumors that circulated for years about a speaker product languishing within Amazon's labs seemed like more confirmation that the ecommerce giant lacked the chops to create a game-changing hardware device. ... The story of the Echo's origins, recounted by several insiders, reflects the ambitions and challenges within Amazon as it quietly set its sights on the tech industry's next big battleground. ... The key to getting latency down was to collect as much data as possible and constantly apply it to improve the product. The team did thousands of internal tests and weekly data analysis with speech scientists. Eventually, the team was able to bring latency down to below 1.5 seconds, far faster than its competitors.

Wired - The End Of Code: Soon We Won’t Program Computers. We’ll Train Them Like Dogs 5-15min

The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. ... This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. ... Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. ... In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. ... whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

Global Challenges Foundation - Global Catastrophic Risks 2016 [Executive Summary] 5-15min

The global catastrophic risks in this report can be divided into two categories. Some are ongoing and could potentially occur in any given year. Others are emerging and may be very unlikely today but will become significantly more likely in the coming decades. The most significant ongoing risks are natural pandemics and nuclear war, whereas the most significant emerging risks are catastrophic climate change and risks stemming from emerging technologies. Even where risks remain in the future, there are things we can do today to address them. ... The relative likelihood and urgency of the different risks matters when deciding how to respond. Even though the level of uncertainty is extreme, rational action requires explicit assessments of how much attention the different risks deserve, and how likely they are. The views of the authors on these vexed questions, based on our reading of the scientific evidence, are summarised in the following table. More information can be found in the full version of this report.

The Verge - Why Microsoft is betting its future on AI 5-15min

No matter where we work in the future, Nadella says, Microsoft will have a place in it. The company’s "conversation as a platform" offering, which it unveiled in March, represents a bet that chat-based interfaces will overtake apps as our primary way of using the internet: for finding information, for shopping, and for accessing a range of services. And apps will become smarter thanks to "cognitive APIs," made available by Microsoft, that let them understand faces, emotions, and other information contained in photos and videos. ... Microsoft argues that it has the best "brain," built on nearly two decades of advancements in machine learning and natural language processing, for delivering a future powered by artificial intelligence. It has a head start in building bots that resonate with users emotionally, thanks to an early experiment in China. And among the giants, Microsoft was first to release a true platform for text-based chat interfaces ... The company, as ever, talks a big game. Microsoft's historical instincts about where technology is going have been spot-on. But the company has a record of dropping the ball when it comes to acting on that instinct. It saw the promise in smartphones and tablets, for example, long before its peers. ... Xiaoice, which Microsoft introduced on the Chinese messaging app WeChat in 2014, can answer simple questions, just like Microsoft's virtual assistant Cortana. Where Xiaoice excels, though, is in conversation. The bot is programmed to be sensitive to emotions, and to remember your previous chats.

BuzzFeed - Attack of the Killer Robots 27min

One afternoon this spring at the United Nations in Geneva, I sat behind Wareham in a large wood-paneled, beige-carpeted assembly room that hosted the Convention on Certain Conventional Weapons (CCW), a group of 121 countries that have signed the agreement to restrict weapons that “are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”— in other words, weapons humanity deems too cruel to use in war. ... The UN moves at a glacial pace, but the CCW is even worse. There’s no vote at the end of meetings; instead, every contracting party needs to agree in order to get anything done. (Its last and only successful prohibitive weapons ban was in 1995.) It was the start of five days of meetings to discuss lethal autonomous weapons systems (LAWS): weapons that have the ability to independently select and engage targets, i.e., machines that can make the decision to kill humans, i.e., killer robots. The world slept through the advent of drone attacks. ... Yet it’s important to get one thing clear: This isn’t a conversation about drones. By now, drone warfare has been normalized — at least 10 countries have them. ... LAWS are generally broken down into three categories. Most simply, there’s humans in the loop — where the machine performs the task under human supervision, arriving at the target and waiting for permission to fire. Humans on the loop — where the machine gets to the place and takes out the target, but the human can override the system. And then, humans out of the loop — where the human releases the machine to perform a task and that’s it — no supervision, no recall, no stop function. The debate happening at the UN is which of these to preemptively ban, if any at all.

Fortune - Why Deep Learning is Suddenly Changing Your Life 13min

The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. ... Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. ... That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. ... The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. ... Think of deep learning as a subset of a subset. “Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.
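
To make that nesting concrete, here is a toy contrast, with invented spam features, between a hand-coded rules-based system (the broad “artificial intelligence” layer) and a model that fits the same decision from labelled examples (the “machine learning” layer). Deep learning keeps the same fit-from-data loop but swaps the simple model for a many-layered neural net, which is why it sits innermost.

```python
# A toy contrast, with invented spam features, between the rules-based
# systems the article files under AI broadly and the machine-learning layer:
# the first encodes the decision by hand, the second fits it from examples.
from sklearn.linear_model import LogisticRegression

def is_spam_rule(num_links, mentions_free):
    # Rules-based: a human wrote the decision boundary explicitly.
    return num_links > 5 or mentions_free

# Machine learning: the boundary is inferred from labelled data instead.
X = [[0, 0], [1, 0], [7, 1], [9, 1], [2, 0], [8, 0]]  # (links, "free" flag)
y = [0, 0, 1, 1, 0, 1]                                # 1 = spam
clf = LogisticRegression().fit(X, y)

print(is_spam_rule(7, True), clf.predict([[7, 1]])[0])
```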

The Economist - Technology Quarterly: After Moore’s Law 31min

The difference between the 4004 and the Skylake is the difference between computer behemoths that occupy whole basements and stylish little slabs 100,000 times more powerful that slip into a pocket. It is the difference between telephone systems operated circuit by circuit with bulky electromechanical switches and an internet that ceaselessly shuttles data packets around the world in their countless trillions. It is a difference that has changed everything from metal-bashing to foreign policy, from the booking of holidays to the designing of H-bombs. ... Moore’s law is not a law in the sense of, say, Newton’s laws of motion. But Intel, which has for decades been the leading maker of microprocessors, and the rest of the industry turned it into a self-fulfilling prophecy. ... That fulfilment was made possible largely because transistors have the unusual quality of getting better as they get smaller; a small transistor can be turned on and off with less power and at greater speeds than a larger one. ... “There’s a law about Moore’s law,” jokes Peter Lee, a vice-president at Microsoft Research: “The number of people predicting the death of Moore’s law doubles every two years.” ... making transistors smaller has no longer been making them more energy-efficient; as a result, the operating speed of high-end chips has been on a plateau since the mid-2000s ... while the benefits of making things smaller have been decreasing, the costs have been rising. This is in large part because the components are approaching a fundamental limit of smallness: the atom. ... One idea is to harness quantum mechanics to perform certain calculations much faster than any classical computer could ever hope to do. Another is to emulate biological brains, which perform impressive feats using very little energy. Yet another is to diffuse computer power rather than concentrating it, spreading the ability to calculate and communicate across an ever greater range of everyday objects in the nascent internet of things. ... in 2012 the record for maintaining a quantum superposition without the use of silicon stood at two seconds; by last year it had risen to six hours. ... For a quantum algorithm to work, the machine must be manipulated in such a way that the probability of obtaining the right answer is continually reinforced while the chances of getting a wrong answer are suppressed.
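
That last sentence describes amplitude amplification, the mechanism at the heart of Grover’s search algorithm. A classical toy simulation (plain numpy, no quantum hardware; the register size and marked index are arbitrary) shows the probability of the right answer being reinforced on each iteration.

```python
# A classical toy simulation (plain numpy, no quantum hardware) of the
# amplitude-amplification idea: each Grover iteration reinforces the
# probability of the marked "right answer" and suppresses the rest.
import numpy as np

n_states = 8    # a 3-qubit register has 2^3 basis states
marked = 5      # index of the right answer; arbitrary choice

state = np.full(n_states, 1 / np.sqrt(n_states))   # uniform superposition

for _ in range(2):   # about (pi/4) * sqrt(8) iterations is optimal here
    state[marked] *= -1                    # oracle: flip the marked amplitude
    state = 2 * state.mean() - state       # diffusion: reflect about the mean
    print(round(state[marked] ** 2, 3))    # probability of the right answer

# Prints 0.781, then 0.945: the right answer's probability climbs while the
# wrong answers' probabilities are suppressed.
```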

The Atlantic - Searching for Lost Knowledge in the Age of Intelligent Machines 21min

Yet the mystery of the mechanism is only partly solved. No one knows who made it, how many others like it were made, or where it was going when the ship carrying it sank. ... What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now. ... Scholars have long wrestled with “undiscovered public knowledge,” a problem that occurs when researchers arrive at conclusions independently from one another, creating fragments of understanding that are “logically related but never retrieved, brought together, [or] interpreted,” as Don Swanson wrote in an influential 1986 essay introducing the concept. ... In other words, on top of everything we don’t know, there’s everything we don’t know that we already know. ... Discovery in the online realm is powered by a mix of human curiosity and algorithmic inquiry, a dynamic that is reflected in the earliest language of the internet. The web was built to be explored not just by people, but by machines. As humans surf the web, they’re aided by algorithms doing the work beneath the surface, sequenced to monitor and rank an ever-swelling current of information for pluckable treasures. The search engine’s cultural status has evolved with the dramatic expansion of the web. ... Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built.

MIT Technology Review - 10 Breakthrough Technologies 2017 5-15min

Reversing Paralysis: Scientists are making remarkable progress at using brain implants to restore the freedom of movement that spinal cord injuries take away.
Self-Driving Trucks: Tractor-trailers without a human at the wheel will soon barrel onto highways near you. What will this mean for the nation’s 1.7 million truck drivers?
Paying with Your Face: Face-detecting systems in China now authorize payments, provide access to facilities, and track down criminals. Will other countries follow?
Practical Quantum Computers: Advances at Google, Intel, and several research groups indicate that computers with previously unimaginable power are finally within reach.
The 360-Degree Selfie: Inexpensive cameras that make spherical images are opening a new era in photography and changing the way people share stories.
Hot Solar Cells: By converting heat to focused beams of light, a new solar device could create cheap and continuous power.
Gene Therapy 2.0: Scientists have solved fundamental problems that were holding back cures for rare hereditary disorders. Next we’ll see if the same approach can take on cancer, heart disease, and other common illnesses.
The Cell Atlas: Biology’s next mega-project will find out what we’re really made of.
Botnets of Things: The relentless push to add connectivity to home gadgets is creating dangerous side effects that figure to get even worse.
Reinforcement Learning: By experimenting, computers are figuring out how to do things that no programmer could teach them. (A minimal sketch follows this list.)
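
As a concrete instance of that last item, here is a minimal tabular Q-learning sketch; the corridor world, reward, and hyperparameters are invented for illustration. The agent is never told the route, only rewarded on arrival, and discovers the policy by trial and error.

```python
# A minimal tabular Q-learning sketch; the corridor world, reward, and
# hyperparameters are invented for illustration.
import random

n_states, goal = 5, 4
actions = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != goal:
        if random.random() < epsilon:                     # explore...
            a = random.choice(actions)
        else:                                             # ...or exploit
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)             # walls at both ends
        reward = 1.0 if s2 == goal else 0.0
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

print([max(actions, key=lambda act: Q[(cell, act)]) for cell in range(goal)])
# After training, every non-goal cell's learned action is +1: head for the goal.
```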

TechRepublic - Inside Amazon's clickworker platform: How half a million people are being paid pennies to train AI 17min

The number of active workers, who live across the globe, is estimated to run between 15,000 and 20,000 per month, according to Panos Ipeirotis, a computer scientist and professor at New York University's business school. Turkers work anywhere from a few minutes to 24 hours a day. ... American Turkers are mostly women. In India, they're mostly men. Globally, they're most likely to have been born between 1980-1990. About 75% are Americans, roughly 15-20% are from India, and the remaining 10% are from other countries. ... "Requesters"—the people, businesses, and organizations that outsource the work—set prices for each task, and the tasks vary widely. ... what do Turkers make, on average? It's hard to say. But Adrien Jabbour, in India, said "it's an achievement to make $700 in 2 months of work, working 4-5 hours every day." Milland reported that she recently made $25 for 8 hours of work, and called that "a good day."
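
Those quoted figures imply hourly rates far below the US federal minimum wage, as a quick back-of-the-envelope check shows (assuming 60 days for “2 months” and the midpoint of 4-5 hours a day).

```python
# Back-of-the-envelope check on the quoted pay figures, assuming 60 days
# for "2 months" and the 4.5-hour midpoint of "4-5 hours every day".
jabbour_hourly = 700 / (60 * 4.5)   # about $2.59 per hour
milland_good_day = 25 / 8           # about $3.13 per hour on a "good day"
print(round(jabbour_hourly, 2), round(milland_good_day, 2))
```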

Popular Mechanics - It’ll Take An Army To Kill The Emperor 33min

The men and women who are trying to bring down cancer are starting to join forces rather than work alone. Together, they are winning a few of the battles against the world's fiercest disease. ... It's not like you don't have cancer and then one day you just do. Cancer—or, really, cancers, because cancer is not a single disease—happens when glitches in genes cause cells to grow out of control until they overtake the body, like a kudzu plant. Genes develop glitches all the time: There are roughly twenty thousand genes in the human body, any of which can get misspelled or chopped up. Bits can be inserted or deleted. Whole copies of genes can appear and disappear, or combine to form mutants. ... Cancer is not an ordinary disease. Cancer is the disease—a phenomenon that contains the whole of genetics and biology and human life in a single cell. It will take an army of researchers to defeat it.

1843 Magazine - Teaching robots right from wrong 8min

Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests? Should we teach them to think for themselves? And if so, how are we to teach them right from wrong? ... In 2017, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, which was launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000 the US Army has deployed thousands of robots equipped with machine guns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).

Smithsonian - A Visit to Seoul Brings Our Writer Face-to-Face With the Future of Robots 17min

Striving for perfection in mind, body and spirit is a Korean way of life, and the cult of endless self-improvement begins as early as the hagwons, the cram schools that keep the nation’s children miserable and sleep-deprived, and sends a sizable portion of the population under the plastic surgeon’s knife. ... I have come to South Korea to find out just how close humanity is to transforming everyday life by relying on artificial intelligence and the robots that increasingly possess it, and by insinuating smart technology into every aspect of life, bit by bit. Fifty years ago, the country was among the poorest on earth, devastated after a war with North Korea. Today South Korea feels like an outpost from the future, while its conjoined twin remains trapped inside a funhouse mirror, unable to function as a modern society, pouring everything it has into missile tests and bellicose foreign policy. Just 35 miles south of the fragile DMZ, you’ll find bins that ask you (very politely) to fill them with trash, and automated smart apartments that anticipate your every need. ... The automation of society seems to feed directly into the longing for perfection; a machine will simply do things better and more efficiently, whether scanning your license plate or annihilating you at a Go tournament. ... the mood is not one of luxury and happy success but of exhaustion and insecurity.

The Atlantic - How Checkers Was Solved 14min

Marion Tinsley—math professor, minister, and the best checkers player in the world—sat across a game board from a computer, dying. ... Tinsley had been the world’s best for 40 years, a time during which he'd lost a handful of games to humans, but never a match. It's possible no single person had ever dominated a competitive pursuit the way Tinsley dominated checkers. But this was a different sort of competition, the Man-Machine World Championship. ... His opponent was Chinook, a checkers-playing program built by Jonathan Schaeffer, a round, frizzy-haired professor from the University of Alberta, who operated the machine. Through obsessive work, Chinook had become very good. It hadn't lost a game in its last 125—and since they’d come close to defeating Tinsley in 1992, Schaeffer’s team had spent thousands of hours perfecting his machine. ... The two men were slated to play 30 matches over the next two weeks. The year was 1994, before Garry Kasparov and Deep Blue or Lee Sedol and AlphaGo. ... With Tinsley gone, the only way to prove that Chinook could have beaten the man was to beat the game itself. The results would be published July 19, 2007, in Science with the headline: Checkers Is Solved. ... At the highest levels, checkers is a game of mental attrition. Most games are draws. In serious matches, players don’t begin with the standard initial starting position. Instead, a three-move opening is drawn from a stack of approved beginnings, which give some tiny advantage to one or the other player. They play that out, then switch colors. The primary way to lose is to make a mistake that your opponent can jump on.
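
“Solved” has a precise meaning here: the game’s value under perfect play from both sides has been computed, which for checkers meant grappling with a search space of roughly 5 x 10^20 positions. The same exhaustive game-tree idea fits in a few lines on a toy game (the Nim variant below is illustrative, not from the article).

```python
# "Solving" a game means computing its value under perfect play from both
# sides. The same exhaustive game-tree idea on a toy game small enough to
# finish instantly (this Nim variant is illustrative, not from the article):
# 6 stones, each player removes 1 or 2, and taking the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win."""
    if stones == 0:
        return False   # the previous player took the last stone and won
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

print(wins(6))   # False: 6 stones is a forced loss for the player to move
```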

Berenberg Bank - Patiently waiting: the productivity super-cycle 11min

The rate of productivity growth – the major determinant of long-run economic growth – has slowed sharply since the start of the century. The so-called ‘productivity puzzle’ is one of the most pertinent macroeconomic questions of our time. Are fast rates of economic growth, the hallmark of the 20th century, a thing of the past? Or is it just another ‘bad attack of economic pessimism’ as economist John Maynard Keynes wrote nearly a century ago? ... Our chart below shows two distinct ‘super-cycles’ in UK productivity growth since the First Industrial Revolution. The first cycle brought about an acceleration in productivity growth over an approximate 70-year period that peaked in around 1870 before a 30-year deceleration thereafter. It ended at the turn of the 20th century during the middle of the Second Industrial Revolution. The cycle of the 20th century followed a similar pattern to the 19th, but with much, much bigger gains in productivity and well-being. ... Comparing the western world’s current struggle for productivity gains against the ongoing fast rates of discovery in energy, artificial intelligence and robotics, to name but a few, suggests that we may be back at stage one and could be heading for stage two with rapid productivity advances in a while. ... Once we fully exploit the potential of the current wave of new technologies, the risks to the future seem skewed to the upside.