It’s hard to imagine an encryption machine more sophisticated than the human brain. This three-pound blob of tissue holds an estimated 86 billion neurons, cells that rapidly fire electrical pulses in split-second response to whatever stimuli our bodies encounter in the external environment. Each neuron, in turn, has thousands of spindly branches that reach out to nodes, called synapses, which transmit those electrical messages to other cells. Somehow the brain interprets this impossibly noisy code, allowing us to effectively respond to an ever-changing world. ... Given the complexity of the neural code, it’s not surprising that some neuroscientists are borrowing tricks from more experienced hackers: cryptographers, the puzzle-obsessed who draw on math, logic, and computer science to make and break secret codes. That’s precisely the approach of two neuroscience labs at the University of Pennsylvania, whose novel use of cryptography has distinguished them among other labs around the world, which are hard at work deciphering how the brain encodes complex behaviors, abstract thinking, conscious awareness, and all of the other things that make us human.
This summer, Jordan Elpern-Waxman had a revelation. He’d quit his job in order to start a company that markets craft beer, and, as most new entrepreneurs do, he’d been paying for the whole thing himself. “I had gone through my savings and put everything on my credit card, and I woke up one morning and looked at the balance and said, ‘Holy s**t, how am I ever going to pay this thing off?’ ” Elpern-Waxman told me. So he did something unusual: he sold off a share of his future. ... He went to a new site called Upstart. Founded last year by former Google employees, it’s a crowdfunding marketplace where people looking to start a business, say, or pursue more education can raise cash from investors. In exchange, they pay some of what they earn over the next five or ten years—what percentage you have to pay is determined by how much you want to raise and by the Upstart algorithm’s assessment of your earnings potential. For thirty thousand dollars today, you might end up paying out, say, two per cent of your income for the next five years.
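As a rough sense of what that arrangement means in dollars, here is a toy calculation using the figures quoted above (a 2 percent share for five years). The income path is an invented assumption, not anything Upstart publishes, and the real terms include caps and pricing details not shown here.

```python
# Toy arithmetic only: total repaid at a 2% income share over five years, for an
# invented income path. Upstart's real terms, caps, and pricing model differ.

def income_share_payments(share, incomes):
    """Yearly payments and total repaid for a given projected income path."""
    payments = [share * income for income in incomes]
    return payments, sum(payments)

projected_incomes = [90_000, 95_000, 100_000, 110_000, 120_000]  # assumed
payments, total = income_share_payments(0.02, projected_incomes)
print(payments)  # [1800.0, 1900.0, 2000.0, 2200.0, 2400.0]
print(total)     # 10300.0 -- what the investor actually receives depends
                 # entirely on how the borrower's income turns out
```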
Machine learning, artificial intelligence and other technological advances are transforming how pensions, endowments, sovereign funds and other institutions manage their assets. ... Will the financial services industry soon be challenged by technology entrepreneurs with little initial - or no exclusive - interest in the investment business? ... The hot technologies being developed today will offer unparalleled insight into the complex world around us, and the applications to the entire domain of finance and investing are countless. ... One example: The ascendance of nonbiological intelligence means computing systems will learn and process many types of inputs far faster than even the most-expert individuals. Once experts partner with the systems, these man-machine teams will become extremely competent at rules-based goal seeking. The days of using scarce computing resources to model complex systems - backcasting, calibrating, validating and eventually forecasting - are nearly over. ... a growing number of computing systems and technologies will empower people, organizations, networks and information in transformative ways. Service industries will be particularly affected, as they often require human, labor-intensive analytics and networking scale. But if technologies can help people network and analyze faster and better, some of the companies in the industries that provide these services will face an existential challenge. As with the rise of computing and the Internet, we expect new technologies in the coming decade to challenge service industries, such as finance, in ways that few people today appreciate.
How much should you charge someone to live in your house? Or how much would you pay to live in someone else’s house? Would you pay more or less for a planned vacation or for a spur-of-the-moment getaway? ... In focus groups, we watched people go through the process of listing their properties on our site—and get stumped when they came to the price field. Many would take a look at what their neighbors were charging and pick a comparable price; this involved opening a lot of tabs in their browsers and figuring out which listings were similar to theirs. Some people had a goal in mind before they signed up, maybe to make a little extra money to help pay the mortgage or defray the costs of a vacation. So they set a price that would help them meet that goal without considering the real market value of their listing. And some people, unfortunately, just gave up. ... Clearly, Airbnb needed to offer people a better way—an automated source of pricing information to help hosts come to a decision. That’s why we started building pricing tools in 2012 and have been working to make them better ever since. ... we’ve added what we think is a unique approach to machine learning that lets our system not only learn from its own experience but also take advantage of a little human intuition when necessary.
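The "open a lot of tabs and compare with the neighbors" step that stumps hosts is easy to caricature in code. The sketch below is only that, a nearest-neighbour median over comparable listings with invented features and weights; it is not Airbnb's actual pricing model.

```python
# Illustration only, not Airbnb's pricing system: automate the neighbor-comparison
# step as a nearest-neighbour median over comparable listings.

from statistics import median

def suggest_price(listing, comps, k=3):
    """Suggest a nightly price from the k most similar listings with known prices."""
    def distance(a, b):
        return (abs(a["bedrooms"] - b["bedrooms"])
                + 0.5 * abs(a["capacity"] - b["capacity"])
                + (0 if a["neighborhood"] == b["neighborhood"] else 2))
    nearest = sorted(comps, key=lambda c: distance(listing, c))[:k]
    return median(c["price"] for c in nearest)

comps = [
    {"bedrooms": 2, "capacity": 4, "neighborhood": "Mission", "price": 180},
    {"bedrooms": 1, "capacity": 2, "neighborhood": "Mission", "price": 120},
    {"bedrooms": 3, "capacity": 6, "neighborhood": "Noe Valley", "price": 260},
    {"bedrooms": 2, "capacity": 5, "neighborhood": "Mission", "price": 200},
]
print(suggest_price({"bedrooms": 2, "capacity": 4, "neighborhood": "Mission"}, comps))
# 180
```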
Netflix’s video algorithms team had developed a number of quality levels, or recipes, as they’re called in the world of video encoding. Each video file on Netflix’s servers was being prepared with these same recipes to make multiple versions necessary to serve users at different speeds. ... Netflix’s service has been dynamically delivering these versions based on a consumer’s bandwidth needs, which is why the quality of a stream occasionally shifts in the middle of a binge-watching session. But across its entire catalog of movies and TV shows, the company has been using the same rules — which didn’t really make sense. ... they decided that each title should get its own set of rules. This allows the company to stream visually simple videos like “My Little Pony” in a 1080p resolution with a bitrate of just 1.5 Mbps. In other words: Even someone with a very slow broadband or mobile internet connection can watch the animated show in full HD quality under the new approach. Previously, the same consumer would have been able to watch the show only at a resolution of 720×480, while still using more data.
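A stripped-down sketch of the per-title idea, not Netflix's actual recipes: try candidate encodes of one title and keep the cheapest one that still clears a quality bar. The candidate ladder, the quality metric, and the threshold below are assumptions.

```python
# Per-title encoding in miniature: pick the lowest-bitrate encode that still
# looks good for this particular title. Ladder, metric, and threshold assumed.

CANDIDATE_ENCODES = [(1080, 1500), (1080, 2350), (1080, 4300)]  # (height, kbps)

def cheapest_acceptable_encode(measure_quality, candidates=CANDIDATE_ENCODES,
                               good_enough=93.0):
    """Return the lowest-bitrate encode whose measured quality clears the bar."""
    for height, kbps in sorted(candidates, key=lambda c: c[1]):
        if measure_quality(height, kbps) >= good_enough:  # e.g. a VMAF-style score
            return height, kbps
    return max(candidates, key=lambda c: c[1])            # otherwise, best available

# For a visually simple cartoon, the 1.5 Mbps encode may already clear the bar,
# which is how a title like "My Little Pony" ends up in full HD at 1.5 Mbps.
```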
CargoMetrics, a start-up investment firm, is not your typical money manager or hedge fund. It was originally set up to supply information on cargo shipping to commodities traders, among others. Now it links satellite signals, historical shipping data and proprietary analytics for its own trading in commodities, currencies and equity index futures. ... There was an air of excitement in the office that day because the signals were continuing to show a slowdown in shipping that had earlier triggered the firm's automated trading system to short West Texas Intermediate (WTI) oil futures. Two days later the U.S. Department of Energy's official report came out, confirming the firm's hunch, and the oil futures market reacted accordingly. ... in this era of globalization 50,000 ships carry 90 percent of the $18.5 trillion in annual world trade. ... "My vision is to map historically and in real time what's really going on in economic supply and demand across the planet" ... building a "learning machine" that will be able to automatically profit from spotting any publicly traded security that is mispriced, using what he refers to as systematic fundamental macro strategies. ... CargoMetrics was one of the first maritime data analytics companies to seize the potential of the global Automatic Identification System. Ships transmit AIS signals via very high frequency (VHF) radio to receiver devices on other ships or land.
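As a toy illustration of what an AIS-derived signal might look like, the sketch below counts ships seen leaving a port's bounding box per week. The field names, the bounding box, and the aggregation are assumptions; CargoMetrics' actual pipeline and analytics are proprietary.

```python
# Turn raw AIS position reports into a crude weekly shipping-activity series.
# Field names and the port box are assumed; real pipelines do far more.

from collections import Counter
from datetime import datetime, timezone

def weekly_departures(ais_reports, port_box):
    """Count ship departures from the bounding box, grouped by ISO week."""
    lat_min, lat_max, lon_min, lon_max = port_box
    last_inside = {}
    departures = Counter()
    for r in sorted(ais_reports, key=lambda r: r["timestamp"]):
        inside = lat_min <= r["lat"] <= lat_max and lon_min <= r["lon"] <= lon_max
        mmsi = r["mmsi"]  # the ship identifier broadcast in every AIS message
        if last_inside.get(mmsi) and not inside:
            when = datetime.fromtimestamp(r["timestamp"], tz=timezone.utc)
            departures[when.isocalendar()[:2]] += 1   # key: (year, week)
        last_inside[mmsi] = inside
    return departures
```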
An accelerating field of research suggests that most of the artificial intelligence we’ve created so far has learned enough to give a correct answer, but without truly understanding the information. And that means it’s easy to deceive. ... Machine learning algorithms have quickly become the all-seeing shepherds of the human flock. This software connects us on the internet, monitors our email for spam or malicious content, and will soon drive our cars. To deceive them would be to shift the tectonic underpinnings of the internet, and could pose even greater threats for our safety and security in the future. ... Small groups of researchers—from Pennsylvania State University to Google to the U.S. military—are devising and defending against potential attacks that could be carried out on artificially intelligent systems. In theories posed in the research, an attacker could change what a driverless car sees. Or, it could activate voice recognition on any phone and make it visit a website with malware, only sounding like white noise to humans. Or let a virus travel through a firewall into a network. ... Instead of taking the controls of a driverless car, this method shows it a kind of hallucination—images that aren’t really there. ... “We show you a photo that’s clearly a photo of a school bus, and we make you think it’s an ostrich,” says Ian Goodfellow, a researcher at Google who has driven much of the work on adversarial examples.
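One published recipe for such images is the fast gradient sign method from Goodfellow's own work: nudge every pixel slightly in the direction that increases the classifier's loss. A minimal PyTorch sketch follows; the model, image tensor, and label are assumed to exist and are not specified here.

```python
# FGSM in miniature: the kind of perturbation behind the school-bus-to-ostrich
# demonstration. Model, image, and label are assumptions, not defined here.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image` (shape [N, C, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by at most epsilon, invisible to a human viewer,
    # yet the model's predicted class can flip to something unrelated.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```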
Many companies already have the ability to run keyword searches of employees’ emails, looking for worrisome words and phrases like embezzle and I loathe this job. But the Stroz Friedberg software, called Scout, aspires to go a giant step further, detecting indirectly, through unconscious syntactic and grammatical clues, workers’ anger, financial or personal stress, and other tip-offs that an employee might be about to lose it. ... To measure employees’ disgruntlement, for instance, it uses an algorithm based on linguistic tells found to connote feelings of victimization, anger, and blame. ... It’s not illegal to be disgruntled. But today’s frustrated worker could engineer tomorrow’s hundred-million-dollar data breach. Scout is being marketed as a cutting-edge weapon in the growing arsenal that helps corporations combat “insider threat,” the phenomenon of employees going bad. Workers who commit fraud or embezzlement are one example, but so are “bad leavers”—employees or contractors who, when they depart, steal intellectual property or other confidential data, sabotage the information technology system, or threaten to do so unless they’re paid off. Workplace violence is a growing concern too. ... Though companies have long been arming themselves against cyberattack by external hackers, often presumed to come from distant lands like Russia and China, they’re increasingly realizing that many assaults are launched from within—by, say, the quiet guy down the hall whose contract wasn’t renewed.
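Scout's actual linguistics are proprietary, but the general approach of scoring text against word lists tied to victimization, anger, and blame can be caricatured in a few lines. The phrase lists and categories below are invented for illustration.

```python
# Not Scout's algorithm: a toy tally of "linguistic tells" per category.

TELLS = {
    "victimization": {"unfair", "ignored", "passed over", "singled out"},
    "anger":         {"furious", "fed up", "sick of", "loathe"},
    "blame":         {"their fault", "because of them", "management ruined"},
}

def disgruntlement_score(message):
    """Count matched tells per category in a single message."""
    text = message.lower()
    return {category: sum(phrase in text for phrase in phrases)
            for category, phrases in TELLS.items()}

print(disgruntlement_score(
    "I loathe this job; I was passed over again and it's their fault."))
# {'victimization': 1, 'anger': 1, 'blame': 1}
```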
Risk scores, generated by algorithms, are an increasingly common factor in sentencing. Computers crunch data—arrests, type of crime committed, and demographic information—and a risk rating is generated. The idea is to create a guide that’s less likely to be subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to put inmates in prison, and who to let out on parole. Supporters of these tools claim they’ll help solve historical inequities, but their critics say they have the potential to aggravate them, by hiding old prejudices under the veneer of computerized precision. ... Computer scientists have a maxim, “Garbage in, garbage out.” In this case, the garbage would be decades of racial and socioeconomic disparities in the criminal justice system. Predictions about future crimes based on data about historical crime statistics have the potential to equate past patterns of policing with the predisposition of people in certain groups—mostly poor and nonwhite—to commit crimes.
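A schematic of why "garbage in, garbage out" bites here: a model fit on historical arrest records reproduces whatever patterns produced those records. The features, data, and labels below are invented and this is not any vendor's actual tool.

```python
# Schematic only: the score inherits whatever disparities are baked into the data.

from sklearn.linear_model import LogisticRegression

# Each row: [prior_arrests, age, lives_in_heavily_policed_area]
X = [[3, 19, 1], [0, 45, 0], [1, 23, 1], [0, 37, 0], [5, 21, 1], [0, 52, 0]]
y = [1, 0, 1, 0, 1, 0]  # "re-arrested within two years": itself a product of
                        # where and whom police chose to arrest in the past

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[1, 24, 1]])[0][1]
# Whatever patterns of policing produced X and y come out again in `risk`.
```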
Professionals in many organizations are assigned arbitrarily to cases: appraisers in credit-rating agencies, physicians in emergency rooms, underwriters of loans and insurance, and others. Organizations expect consistency from these professionals: Identical cases should be treated similarly, if not identically. The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies. ... The prevalence of noise has been demonstrated in several studies. Academic researchers have repeatedly confirmed that professionals often contradict their own prior judgments when given the same data on different occasions. ... The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow. ... It has long been known that predictions and decisions generated by simple statistical algorithms are often more accurate than those made by experts, even when the experts have access to more information than the formulas use. It is less well known that the key advantage of algorithms is that they are noise-free: Unlike humans, a formula will always return the same output for any given input. Superior consistency allows even simple and imperfect algorithms to achieve greater accuracy than human professionals. ... One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make. ... The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them.
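A minimal sketch of that closing advice, with placeholder variables: standardize a few sensibly chosen predictors and combine them with equal weights. Whatever its accuracy, the formula is noise-free; identical inputs always produce identical outputs.

```python
# Common-sense variables, the simplest possible combining rule: equal weights
# over standardized predictors. Variable names and norms are placeholders.

def standardize(x, mean, std):
    return (x - mean) / std

def simple_rating(candidate, norms):
    """Equal-weight average of standardized predictors."""
    z_scores = [standardize(candidate[name], *norms[name]) for name in norms]
    return sum(z_scores) / len(z_scores)

norms = {"test_score": (70, 10), "years_experience": (5, 3), "interview": (3, 1)}
applicant = {"test_score": 85, "years_experience": 7, "interview": 4}
print(simple_rating(applicant, norms))  # the same applicant always gets the same rating
```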
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. ... The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse. ... The rarer the exception gets, as with fly-by-wire, the less gracefully we are likely to deal with it. We assume that the computer is always right, and when someone says the computer made a mistake, we assume they are wrong or lying. ... For all the power and the genuine usefulness of data, perhaps we have not yet acknowledged how imperfectly a tidy database maps on to a messy world. We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. ... If you occasionally need human skill at short notice to navigate a hugely messy situation, it may make sense to artificially create smaller messes, just to keep people on their toes.
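That last claim is just a rate calculation, written out below with $e$ as the human error rate per decision.

```latex
% The machine errs at rate $e/100$ but makes $10^{6}$ decisions in the time a
% human makes one:
\[
\frac{\text{machine mistakes per unit time}}{\text{human mistakes per unit time}}
  = \frac{(e/100) \times 10^{6}}{e \times 1}
  = 10^{4}.
\]
```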
Mass, who is 64, has become the most widely recognized critic of weather forecasting in the United States — and specifically the National Oceanic and Atmospheric Administration, which manages the National Weather Service and its underling agencies, including the National Centers for Environmental Prediction, where the nation’s weather models are run. Mass argues that these models are significantly flawed in comparison with commercial and European alternatives. American forecasting also does poorly at data assimilation, the process of integrating information about atmospheric conditions into modeling programs; in the meantime, a lack of available computing power precludes the use of more advanced systems already operating at places like the European Center for Medium-Range Weather Forecasts, based in Reading, England. And there are persistent management challenges, perhaps best represented by the legions of NOAA scientists whose innovations remain stranded in research labs and out of the hands of the National Weather Service operational forecasters who make the day-to-day predictions in 122 regional offices around the country. ... accuracy is everything, often the difference between life and death, given that extreme weather ... Industries like shipping, energy, agriculture and utilities lose money when predictions fail. Even slightly more precise wind-speed projections would help airlines greatly reduce fuel costs. ... the Weather Service interface was so primitive — the protocol was originally designed for the telegraph — it could only accommodate uppercase type.
The difference between the 4004 and the Skylake is the difference between computer behemoths that occupy whole basements and stylish little slabs 100,000 times more powerful that slip into a pocket. It is the difference between telephone systems operated circuit by circuit with bulky electromechanical switches and an internet that ceaselessly shuttles data packets around the world in their countless trillions. It is a difference that has changed everything from metal-bashing to foreign policy, from the booking of holidays to the designing of H-bombs. ... Moore’s law is not a law in the sense of, say, Newton’s laws of motion. But Intel, which has for decades been the leading maker of microprocessors, and the rest of the industry turned it into a self-fulfilling prophecy. ... That fulfilment was made possible largely because transistors have the unusual quality of getting better as they get smaller; a small transistor can be turned on and off with less power and at greater speeds than a larger one. ... “There’s a law about Moore’s law,” jokes Peter Lee, a vice-president at Microsoft Research: “The number of people predicting the death of Moore’s law doubles every two years.” ... making transistors smaller has no longer been making them more energy-efficient; as a result, the operating speed of high-end chips has been on a plateau since the mid-2000s ... while the benefits of making things smaller have been decreasing, the costs have been rising. This is in large part because the components are approaching a fundamental limit of smallness: the atom. ... One idea is to harness quantum mechanics to perform certain calculations much faster than any classical computer could ever hope to do. Another is to emulate biological brains, which perform impressive feats using very little energy. Yet another is to diffuse computer power rather than concentrating it, spreading the ability to calculate and communicate across an ever greater range of everyday objects in the nascent internet of things. ... in 2012 the record for maintaining a quantum superposition without the use of silicon stood at two seconds; by last year it had risen to six hours. ... For a quantum algorithm to work, the machine must be manipulated in such a way that the probability of obtaining the right answer is continually reinforced while the chances of getting a wrong answer are suppressed.
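Grover's search algorithm is the textbook instance of that "reinforce the right answer" step; the formula below is the standard result for that algorithm, included only as an illustration of what the reinforcement buys.

```latex
% Unstructured search over $N$ items: with $\sin\theta = 1/\sqrt{N}$, the
% probability of measuring the correct item after $k$ amplification steps is
\[
P(k) = \sin^{2}\bigl((2k+1)\,\theta\bigr),
\]
% which approaches 1 after roughly $k \approx \tfrac{\pi}{4}\sqrt{N}$ steps,
% far fewer than the $\sim N/2$ guesses an exhaustive classical search expects.
```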

The fund almost never loses money. Its biggest drawdown in one five-year period was half a percent. ... Few firms are the subject of so much fascination, rumor, or speculation. Everyone has heard of Renaissance; almost no one knows what goes on inside. ... For outsiders, the mystery of mysteries is how Medallion has managed to pump out annualized returns of almost 80 percent a year, before fees. ... Competitors have identified some likely reasons for the fund’s success, though. Renaissance’s computers are some of the world’s most powerful, for one. Its employees have more—and better—data. They’ve found more signals on which to base their predictions and have better models for allocating capital. They also pay close attention to the cost of trades and to how their own trading moves the markets. ... At their core, such models usually fall into one of two camps, trend-following or mean-reversion. ... “You need to build a system that is layered and layered,” Simons said in a 2000 interview with Institutional Investor, explaining some of the philosophy behind the firm and the Medallion model. “And with each new idea, you have to determine: Is this really new, or is this somehow embedded in what we’ve done already?”
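The two model families named there reduce, in their most stripped-down textbook form, to a comparison against a trailing average. The sketch below is generic and says nothing about Renaissance's actual models, which remain secret.

```python
# Textbook caricatures of the two families: both compare the latest price with
# a trailing average and differ only in which way they bet.

def trend_signal(prices, lookback=50):
    """+1 when price is above its trailing average (ride the trend), else -1."""
    window = prices[-lookback:]
    avg = sum(window) / len(window)
    return 1 if prices[-1] > avg else -1

def mean_reversion_signal(prices, lookback=50):
    """+1 when price is below its trailing average (bet on a snap back), else -1."""
    return -trend_signal(prices, lookback)
```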
Wu believes Opendoor can buy and sell homes, in quantity, by employing the type of data analysis that has powered so many Silicon Valley companies and by targeting the broad middle of the market. It deals in single-family homes built after 1960, priced between $125,000 and $500,000. It has no interest in distressed properties, which require too much work, or in luxury properties, which are harder to value. ... Of course, buying up houses to make a market is capital-intensive, and the risks are great. Opendoor has raised $110 million in equity from Khosla Ventures, GGV Capital and Access Industries, among others, most recently at a valuation of $580 million earlier this year. And it has also raised more than $400 million in debt to buy the homes. To succeed, it has to price the homes it buys accurately, without seeing them, and it has to sell them quickly to minimize the costs of carrying them. ... Opendoor is a big, bold play in a market with $1.4 trillion in annual transaction volume that’s been largely undisturbed for decades. ... the model has yet to be tested by a recession or a market crash, which can catch even the smartest players by surprise. Wu says he modeled the business through the 2008 subprime crisis to understand the risk.
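Opendoor's valuation model is proprietary; the generic starting point for pricing a home sight unseen is a hedonic regression of observed sale prices on observable features. The features and numbers below are invented.

```python
# Generic hedonic-pricing sketch, not Opendoor's model: fit prices to features,
# then estimate an unseen home from its features alone.

from sklearn.linear_model import LinearRegression

# [square_feet, bedrooms, year_built, lot_size]
X = [[1500, 3, 1985, 6000], [2200, 4, 2001, 7500],
     [1100, 2, 1968, 5000], [1800, 3, 1995, 6500]]
sale_prices = [195_000, 310_000, 140_000, 245_000]

model = LinearRegression().fit(X, sale_prices)
estimate = model.predict([[1700, 3, 1990, 6200]])[0]
# In practice, carrying costs and resale speed matter as much as the point
# estimate, which is why mispricing is the core risk named above.
```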
Yet the mystery of the mechanism is only partly solved. No one knows who made it, how many others like it were made, or where it was going when the ship carrying it sank. ... What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now. ... Scholars have long wrestled with “undiscovered public knowledge,” a problem that occurs when researchers arrive at conclusions independently from one another, creating fragments of understanding that are “logically related but never retrieved, brought together, [or] interpreted,” as Don Swanson wrote in an influential 1986 essay introducing the concept. ... In other words, on top of everything we don’t know, there’s everything we don’t know that we already know. ... Discovery in the online realm is powered by a mix of human curiosity and algorithmic inquiry, a dynamic that is reflected in the earliest language of the internet. The web was built to be explored not just by people, but by machines. As humans surf the web, they’re aided by algorithms doing the work beneath the surface, sequenced to monitor and rank an ever-swelling current of information for pluckable treasures. The search engine’s cultural status has evolved with the dramatic expansion of the web. ... Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built.
- Also: Quartz - Inside the secret meeting where Apple revealed the state of its AI research < 5min
- Also: The Library Quarterly - Undiscovered Public Knowledge > 15min
- Also: AAAI - Undiscovered Public Knowledge: a Ten-Year Update 5-15min
- Also: Wired - Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free 5-15min
Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer. ... Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics. ... When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” ... Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. ... Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them.
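The core check is easy to sketch: parse a reported result, recompute the p-value from the test statistic and degrees of freedom, and flag mismatches. The miniature below handles only two-sided t-tests, a fraction of what statcheck actually parses, and its tolerance rule is an assumption.

```python
# Statcheck in miniature: recompute a reported p-value and flag inconsistencies.

import re
from scipy import stats

PATTERN = re.compile(r"t\((\d+)\)\s*=\s*(-?[\d.]+),\s*p\s*[=<]\s*([\d.]+)")

def check_t_result(reported, tolerance=0.005):
    df, t_value, reported_p = PATTERN.search(reported).groups()
    recomputed_p = 2 * stats.t.sf(abs(float(t_value)), int(df))
    consistent = abs(recomputed_p - float(reported_p)) <= tolerance
    return round(recomputed_p, 4), consistent

print(check_t_result("t(28) = 2.20, p = .04"))  # consistent: recomputed p is about .04
print(check_t_result("t(28) = 2.20, p = .20"))  # flagged: recomputed p is nowhere near .20
```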
For more than a decade, Wiseguy was the biggest name in ticket scalping. The company fundamentally broke Ticketmaster, using one of the first ever automated "ticket bots" to buy and flip millions of tickets between 1999 and founder Ken Lowson's eventual arrest on wire fraud charges in 2010. ... The scourge of ticket bots and the immorality of the shady ticket scalpers using them is conventional wisdom that's so ingrained in the public consciousness and so politically safe that a law to ban ticket bots passed both houses of Congress unanimously late last year, in part thanks to a high-profile public relations campaign spearheaded by Hamilton creator Lin-Manuel Miranda. ... But no one actually involved in the ticket scalping industry thinks that banning bots will do much to slow down the secondary market. ... Between 2001 and 2010, the company bought and resold roughly 1.5 million tickets, amassing more than $25 million in profits overall
Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests? Should we teach them to think for themselves? And if so, how are we to teach them right from wrong? ... In 2017, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, which was launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000 the US Army has deployed thousands of robots equipped with machineguns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).
- Also: The New Yorker - A.I. versus M.D. > 15min
- Also: Fast Company - Here’s The Unofficial Silicon Valley Explainer On Artificial Intelligence 5-15min
- Also: Fortune - How AI Is Changing Your Job Hunt 5-15min
- Also: Vanity Fair - Elon Musk’s Billion-Dollar Crusade To Stop The A.I. Apocalypse 5-15min
- Also: Backchannel - The AI Cargo Cult: The Myth of Superhuman AI 5-15min
They use phones to record video of a vulnerable machine in action, then transmit the footage to an office in St. Petersburg. There, Alex and his assistants analyze the video to determine when the games’ odds will briefly tilt against the house. They then send timing data to a custom app on an agent’s phone; this data causes the phones to vibrate a split second before the agent should press the “Spin” button. By using these cues to beat slots in multiple casinos, a four-person team can earn more than $250,000 a week. ... Determined to find a way to score one last payday before shutting down his enterprise, Alex reached out to Aristocrat Leisure, an Australian slot machine manufacturer whose vulnerable products have been his chief targets. ... ideally, a PRNG should approximate the utter unpredictability of radioactive decay.
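The underlying weakness is generic to any deterministic pseudorandom number generator: once the internal state is known or inferred, every future output is predictable. A toy linear congruential generator makes the point; real slot machines use different, proprietary PRNGs, and the seed here is arbitrary.

```python
# A deterministic PRNG has state; recover the state and you know the future.

class ToyLCG:
    def __init__(self, seed):
        self.state = seed
    def next(self):
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state

machine = ToyLCG(seed=20170206)
observed = [machine.next() for _ in range(5)]   # outputs an observer records

clone = ToyLCG(seed=20170206)                   # attacker who has recovered the state
for _ in range(5):
    clone.next()                                # replay what was already observed
predicted_next = clone.next()                   # forecast the machine's next output
assert predicted_next == machine.next()
# True randomness -- like radioactive decay -- has no internal state to recover.
```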