Conceptually, bioelectronics is straightforward: Get the nervous system to tell the body to heal itself. But of course it’s not that simple. “What we’re trying to do here is completely novel,” says Pedro Irazoqui, a professor of biomedical engineering at Purdue University, where he’s investigating bioelectronic therapies for epilepsy. Jay Pasricha, a professor of medicine and neurosciences at Johns Hopkins University who studies how nerve signals affect obesity, diabetes and gastrointestinal-motility disorders, among other digestive diseases, says, “What we’re doing today is like the precursor to the Model T.” ... The biggest challenge is interpreting the conversation between the body’s organs and its nervous system, according to Kris Famm, who runs the newly formed Bioelectronics R&D Unit at GlaxoSmithKline, the world’s seventh-largest pharmaceutical company. “No one has really tried to speak the electrical language of the body,” he says. Another obstacle is building small implants, some of them as tiny as a cubic millimeter, robust enough to run powerful microprocessors. Should scientists succeed and bioelectronics become widely adopted, millions of people could one day be walking around with networked computers hooked up to their nervous systems. And that prospect highlights yet another concern the nascent industry will have to confront: the possibility of malignant hacking. As Anand Raghunathan, a professor of electrical and computer engineering at Purdue, puts it, bioelectronics “gives me a remote control to someone’s body.”
It’s hard to imagine an encryption machine more sophisticated than the human brain. This three-pound blob of tissue holds an estimated 86 billion neurons, cells that rapidly fire electrical pulses in split-second response to whatever stimuli our bodies encounter in the external environment. Each neuron, in turn, has thousands of spindly branches that reach out to nodes, called synapses, which transmit those electrical messages to other cells. Somehow the brain interprets this impossibly noisy code, allowing us to effectively respond to an ever-changing world. ... Given the complexity of the neural code, it’s not surprising that some neuroscientists are borrowing tricks from more experienced hackers: cryptographers, the puzzle-obsessed who draw on math, logic, and computer science to make and break secret codes. That’s precisely the approach of two neuroscience labs at the University of Pennsylvania, whose novel use of cryptography sets them apart from the many other labs around the world hard at work deciphering how the brain encodes complex behaviors, abstract thinking, conscious awareness, and all of the other things that make us human.
New research puts us on the cusp of brain-to-brain communication. Could the next step spell the end of individual minds? ... we’ve moved beyond merely thinking orders at machinery. Now we’re using that machinery to wire living brains together. Last year, a team of European neuroscientists headed by Carles Grau of the University of Barcelona reported a kind of – let’s call it mail-order telepathy – in which the recorded brainwaves of someone thinking a salutation in India were emailed, decoded and implanted into the brains of recipients in Spain and France (where they were perceived as flashes of light). ... What are the implications of a technology that seems to be converging on the sharing of consciousness? ... It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. ... Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention). ... If physics is right – if everything ultimately comes down to matter, energy and numbers – then any sufficiently accurate copy of a thing will manifest the characteristics of that thing. Sapience should therefore emerge from any physical structure that replicates the relevant properties of the brain.
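The digital leg of that experiment is, in outline, simple enough to sketch. Below is a toy illustration (mine, not the study’s actual protocol; Grau’s team used its own binary encoding, with EEG classifying the sender’s intent and transcranial magnetic stimulation producing the receiver’s flashes): a greeting is reduced to bits, shipped over the internet, and rendered as a sequence of flash / no-flash percepts.

```python
# Toy sketch of the brain-to-brain pipeline described above. Plain ASCII is
# used as the bit encoding purely for illustration; the actual study used its
# own scheme on both the sending and receiving ends.

def encode(word: str) -> str:
    """Sender side: the thought, once classified, ends up as a bit string."""
    return "".join(f"{ord(ch):08b}" for ch in word)

def to_flashes(bits: str) -> list:
    """Receiver side: each 1-bit is delivered as a perceived flash of light."""
    return ["flash" if b == "1" else "no-flash" for b in bits]

def decode(bits: str) -> str:
    """What the receiver (laboriously) reconstructs from the flash sequence."""
    return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

bits = encode("hola")        # recorded in India...
flashes = to_flashes(bits)   # ...perceived in Spain and France
assert decode(bits) == "hola"
```

Everything between the two skulls is ordinary networking; the hard, novel parts of the experiment sit entirely at the two biological endpoints.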
The root cause of fear, and how to treat it, has been one of modern psychology’s central questions. In the early twentieth century, Sigmund Freud argued phobias were “protective structures” springing from a patient’s “repressed longing” for his mother. In 1920, however, the American psychologist John B. Watson put forward a simpler theory: People develop fears through negative experiences. To test his hypothesis, he sought to condition an infant, whom he called “Little Albert,” to fear a white rat by presenting the rat to the child and simultaneously striking a steel bar. ... Different types of memories consolidate in different parts of the brain. Explicit memories of life events, for instance, consolidate in the hippocampus, a long, podlike structure near the center of the brain. Emotional memories, including fear, consolidate nearby in the amygdala, which activates the fight-or-flight response when it senses danger. The subjective experience of fear often involves both of these memory systems—a person will consciously remember past experiences while also undergoing several automatic physiological responses, such as increased heart rate—but they operate independently of each other.
The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’. ... Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. ... Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers? ... A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active.
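For contrast, here is what “store,” “retrieve,” and “process” literally mean in a machine. A minimal sketch (mine, not the essay’s):

```python
# A computer's memory really is an addressable store: symbols written at an
# address come back bit-for-bit identical, every time, by design.
memory = {}                    # a literal address -> value mapping

def store(address, value):
    memory[address] = value    # the exact symbolic representation is written

def retrieve(address):
    return memory[address]     # ...and read back unchanged

store(0x2A, "an exact, persistent symbolic representation")
assert retrieve(0x2A) == "an exact, persistent symbolic representation"
# On the essay's account, a brain has no such address, no stored copy, and no
# guarantee of an unchanged read-back -- which is the point of the contrast.
```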
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ... The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. ... If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy. ... What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ... When people stop believing they are free agents, they stop seeing themselves as blameworthy for their actions.
- Also: Aeon - Getting smarter 5-15min
It seemed absolutely crazy. The idea that an Iowa housewife, equipped with the cutting-edge medical tool known as Google Images, would make a medical discovery about a pro athlete who sees doctors and athletic trainers as part of her job? ... First, it was with her family’s Emery-Dreifuss muscular dystrophy, then when she thought they had lipodystrophy, and now she thought that she and Priscilla just must have a mutant gene in common because of the exact same pattern of missing fat. But how, then, did Priscilla get a double-helping of muscle while Jill’s muscles were scarcely there?
Our world had spun around the sun more than 30 times since, though Henry’s world had stayed still, frozen in orbit. This is because 1953 was the year he received an experimental operation, one that destroyed most of several deep-seated structures in his brain, including his hippocampus, his amygdala and his entorhinal cortex. The operation, performed on both sides of his brain and intended to treat Henry’s epilepsy, rendered him profoundly amnesiac, unable to hold on to the present moment for more than 30 seconds or so. That outcome, devastating to Henry, was a boon to science: By 1986, Patient H.M. — as he was called in countless journal articles and textbooks — had become arguably the most important human research subject of all time, revolutionizing our understanding of how memory works. ... Of course, Henry didn’t know that. No matter how many times the scientists told him he was famous, he’d always forget. ... one of the things about Henry that fascinated scientists: His amnesia often appeared, as they termed it, pure. There was an abyss in his brain that all the passing events of his life tumbled into, but on the surface he could seem almost normal. ... Even as a nonscientist, I couldn’t help noticing that some of the unpublished data I came across while reporting my book went against the grain of the established narrative of Patient H.M. For example, unpublished parts of a three-page psychological assessment of Henry provided evidence that even before the operation that transformed Henry Molaison into the amnesiac Patient H.M., his memory was already severely impaired. The causes and significance of Henry’s preoperative memory deficits can be debated, but their existence only underscores the importance of preserving the complete record of the most important research subject in the history of memory science.
- Also: Aeon - My spotless mind 5-15min
Learning math and then science as an adult gave me passage into the empowering world of engineering. But these hard-won, adult-age changes in my brain have also given me an insider’s perspective on the neuroplasticity that underlies adult learning. ... In the current educational climate, memorization and repetition in the STEM disciplines (as opposed to in the study of language or music) are often seen as demeaning and a waste of time for students and teachers alike. Many teachers have long been taught that conceptual understanding in STEM trumps everything else. And indeed, it’s easier for teachers to induce students to discuss a mathematical subject (which, if done properly, can do much to help promote understanding) than it is for that teacher to tediously grade math homework. What this all means is that, despite the fact that procedural skills and fluency, along with application, are supposed to be given equal emphasis with conceptual understanding, all too often it doesn’t happen. Imparting a conceptual understanding reigns supreme—especially during precious class time. ... The problem with focusing relentlessly on understanding is that math and science students can often grasp essentials of an important idea, but this understanding can quickly slip away without consolidation through practice and repetition. Worse, students often believe they understand something when, in fact, they don’t. ... Chunking was originally conceptualized in the groundbreaking work of Herbert Simon in his analysis of chess—chunks were envisioned as the varying neural counterparts of different chess patterns. Gradually, neuroscientists came to realize that experts such as chess grand masters are experts because they have stored thousands of chunks of knowledge about their area of expertise in their long-term memory. ... As studies of chess masters, emergency room physicians, and fighter pilots have shown, in times of critical stress, conscious analysis of a situation is replaced by quick, subconscious processing as these experts rapidly draw on their deeply ingrained repertoire of neural subroutines—chunks. ... Understanding doesn’t build fluency; instead, fluency builds understanding.
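A loose computational analogy for chunking (mine, not the article’s): the expert’s long-term memory behaves like a fast lookup of stored patterns, and deliberate reasoning is only the fallback when no pattern matches.

```python
# Toy analogy: stored "chunks" map recognized patterns straight to responses,
# bypassing slow, conscious analysis. A real expert holds thousands of these.
chunks = {
    "back-rank weakness": "double rooks on the open file",
    "knight fork pattern": "move the king off the forking square",
}

def slow_conscious_analysis(position: str) -> str:
    """The novice path: calculate candidate moves one by one."""
    return f"deliberately analyzing '{position}'..."

def expert_move(position: str) -> str:
    # Recognition first: a matching chunk surfaces its response immediately.
    if position in chunks:
        return chunks[position]
    return slow_conscious_analysis(position)

print(expert_move("knight fork pattern"))   # instant, "subconscious" recall
print(expert_move("novel position"))        # falls back to slow analysis
```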
The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. ... Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. ... That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high of more than $1 billion last quarter, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. ... The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. ... Think of deep learning as a subset of a subset. “Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, which is the name for a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.
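The “learning algorithm plus data” recipe in that first paragraph fits in a few dozen lines at toy scale. The sketch below (mine; a stand-in for the industrial-scale systems the article describes) trains a tiny two-layer network on the XOR function by gradient descent: the programmer supplies only the update rule and the labeled examples, and the network finds its own weights.

```python
# Minimal neural-net training loop: nobody programs the solution; the
# network is shown labeled data and adjusts its weights to reduce error.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight a little way down the error gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]: learned, not programmed
```

Scale the same loop up by many orders of magnitude in data and parameters, swap the sigmoid layers for deeper architectures running on GPUs, and you have the “deep” in deep learning.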

- Also: FiveThirtyEight - Some Like It Bot < 5min
- Also: Vox - Venture capitalist Marc Andreessen explains how AI will change the world 5-15min
- Also: Nautilus - Moore’s Law Is About to Get Weird < 5min
- Also: Edge - AI & The Future Of Civilization < 5min
- Also: Medium - Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 1 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 2 5-15min
If this election cycle is a mirror, then it is reflecting a society choked with fear. It's not just threats of terrorism, economic collapse, cyberwarfare and government corruption – each of which some 70 percent of our citizenry is afraid of, according to the Chapman University Survey on American Fears. It's the stakes of the election itself, with Hillary Clinton at last month's debate conjuring images of an angry Donald Trump with his finger on the nuclear codes, while Trump warned "we're not going to have a country" if things don't change. ... Meanwhile, the electorate is commensurately terrified of its potential leaders. According to a September Associated Press poll, 56 percent of Americans said they'd be afraid if Trump won the election, while 43 percent said they'd be afraid if Clinton won – with 18 percent of respondents saying they're afraid of either candidate winning. ... Around the globe, household wealth, longevity and education are on the rise, while violent crime and extreme poverty are down. In the U.S., life expectancy is higher than ever, our air is the cleanest it's been in a decade, and despite a slight uptick last year, violent crime has been trending down since 1991. ... For mass media, insurance companies, Big Pharma, advocacy groups, lawyers, politicians and so many more, your fear is worth billions. And fortunately for them, your fear is also very easy to manipulate. We're wired to respond to it above everything else. If we miss an opportunity for abundance, life goes on; if we miss an important fear cue, it doesn't. ... in order to resist being manipulated by those who spread fear for personal, political and corporate gain, it's necessary to understand it.
When I returned to addiction, it was as a scientist studying the addicted brain. The data were indisputable: brains change with addiction. I wanted to understand how – and why. I wanted to understand addiction with fastidious objectivity, but I didn’t want to lose touch with its subjectivity – how it feels, how hard it is – in the process. ... One explanation is that addiction is a brain disease. The United States National Institute on Drug Abuse, the American Society of Addiction Medicine, and the American Medical Association ubiquitously define addiction as a ‘chronic disease of brain reward, motivation, memory and related circuitry’ ... If only the disease model worked. Yet, more and more, we find that it doesn’t. First of all, brain change alone isn’t evidence for brain disease. Brains are designed to change. ... we now know that drugs don’t cause addiction. ... One idea is that addicts voluntarily choose to remain addicted: if they don’t quit, it’s because they don’t want to. ... The view that addiction arises through learning, in the context of environmental forces, appears to be gathering momentum.
But after stopping on a desolate gravel road next to a sign for a gas station, Santillan got the feeling that the voice might be steering him wrong. He’d already been driving for nearly an hour, yet the ETA on the GPS put his arrival time at around 5:20 P.M., eight hours later. He reentered his destination and got the same result. Though he sensed that something was off, he made a conscious choice to trust the machine. He had come here for an adventure, after all, and maybe it knew where he was really supposed to go. ... It’s comforting to know where you are, to see yourself distilled into a steady blue icon gliding smoothly along a screen. With a finger tap or a short request to Siri or Google Now—which, like other smartphone tools, rely heavily on data from cell towers and Wi-Fi hot spots as well as satellites—a wonderful little trail appears on your device, beckoning you to follow. ... The convenience comes at a price, however. There’s the creepy Orwellian fact of Them always knowing where We are (or We always knowing where They are). More concerning are the navigation-fail horror stories that have become legend. ... Enough people have been led astray by their GPS in Death Valley that the area’s former wilderness coordinator called the phenomenon “death by GPS.” ... By turning on a GPS every time we head somewhere new, we’re also cutting something fundamental out of the experience of traveling: the adventures and surprises that come with finding—and losing—our way. ... Individuals who frequently navigate complex environments the old-fashioned way, by identifying landmarks, literally grow their brains.
Hrusovsky’s pitch to me is roughly the same as the one he just gave Jeff Miller, the NFL’s senior vice president for health and safety—skittering from drones, to driverless cars, to Tesla, to heart attacks and diabetes. “I’m still addicted to pastries at night,” Hrusovsky says before circling back to his thesis: Quanterix’s machines are on the brink of delivering a revolution in medicine, as scientists use them to detect diseases earlier, target them more precisely, and create breakthrough treatments for cancer, heart disease, diabetes, and Alzheimer’s, to name a few. ... Discovering, for instance, that half its linemen show signs of CTE could starve the league of talent or force changes that make it unrecognizable to fans. And football isn’t alone: CTE presents similarly dire questions for hockey, soccer, and ultimate fighting, among other contact sports. ... The method is a thousand times more sensitive than the ELISA, capable of detecting molecules in concentrations as low as 30,000 per drop—the equivalent, Hrusovsky says, of finding a grain of sand in 2,000 swimming pools.
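As a rough sanity check on that sensitivity figure (my arithmetic; the ~50 µL drop volume is an assumption, not from the article), 30,000 molecules per drop works out to roughly a femtomolar concentration:

```python
# Back-of-the-envelope: what "30,000 molecules per drop" means in molar terms.
AVOGADRO = 6.022e23           # molecules per mole
molecules_per_drop = 30_000   # figure quoted in the article
drop_volume_liters = 50e-6    # assumed ~50 microliter drop

concentration = molecules_per_drop / AVOGADRO / drop_volume_liters
print(f"{concentration:.1e} M")   # ~1.0e-15 M, i.e. roughly femtomolar
```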
Whether it takes the form of a touch of the Holy Spirit at a Florida revival meeting or a dip in the water of the Ganges, the healing power of belief is all around us. Studies suggest that regular attendance at religious services may improve the immune system, decrease blood pressure, add years to our lives. ... Religious faith is hardly the only kind of belief that has the ability to make us feel inexplicably better. ... just as a good performance in a theater can draw us in until we feel we’re watching something real, the theater of healing is designed to draw us in by creating powerful expectations in our brains. These expectations drive the so-called placebo effect, which can affect what happens in our bodies as well. Scientists have known about the placebo effect for decades and have used it as a control in drug trials. Now they are seeing placebos as a window into the neurochemical mechanisms that connect the mind with the body, belief with experience. ... How does a belief become so potent it can heal? ... Most astonishingly, placebos can work even when the person taking them knows they are placebos.
- Also: Aeon - The lizard inside 5-15min
Maps are for humans, but how do animals, which began navigating millions of years before parchment was invented, manage to find their way around? Do animal (and human) brains contain a map, and if so does it have islands and capes, North Poles and Equators, reference lines and so on? And if they do, where is it, and how does it work? How could a jelly-like blob of protoplasm contain anything as structured as a map? ... These questions have intrigued biologists for many decades, particularly because animals can perform astonishing feats such as navigating their way from the North Pole to the South and back again, like the Arctic tern; or returning home after being transported hundreds of miles away, like the homing pigeon. How animals (both human and non-human) work out their location is just beginning to be understood by brain scientists. There are maps in the brain, as it happens. The properties of these maps, which neuroscientists call ‘cognitive maps’, have turned out to be highly intriguing, and are helping us to understand not just how animals navigate, but also more general principles about how the brain forms, stores and retrieves knowledge.
Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy. The more choices you make throughout the day, the harder each one becomes for your brain, and eventually it looks for shortcuts, usually in one of two very different ways. One shortcut is to become reckless: to act impulsively instead of expending the energy to first think through the consequences. (Sure, tweet that photo! What could go wrong?) The other shortcut is the ultimate energy saver: do nothing. Instead of agonizing over decisions, avoid any choice. Ducking a decision often creates bigger problems in the long run, but for the moment, it eases the mental strain. You start to resist any change, any potentially risky move ... experiments confirmed the 19th-century notion of willpower being like a muscle that was fatigued with use, a force that could be conserved by avoiding temptation. ... Any decision, whether it’s what pants to buy or whether to start a war, can be broken down into what psychologists call the Rubicon model of action phases, in honor of the river that separated Italy from the Roman province of Gaul.
Why do some people so clearly have it and others don’t? Why do we fall so easily under its influence? Charismatics can make us feel charmed and great about ourselves. They can inspire us to excel. But they can also be dangerous. They use charisma for their own purposes, to enhance their power, to manipulate others. ... Individuals with charisma tap our unfettered emotions and can shut down our rational minds. They hypnotize us. But studies show charisma is not just something a person alone possesses. It’s created by our own perceptions, particularly when we are feeling vulnerable in politically tense times. I’m going to tell you about these studies and spotlight the opinions of the neuroscientists, psychologists, and sociologists who conducted them. ... Antonakis has identified a series of what he calls Charismatic Leadership Tactics (CLTs), which range from the use of metaphors and storytelling to nonverbal methods of communication like open posture and animated, representative gestures at key moments. When taken together, he has shown, they have helped decide eight of the last 10 presidential elections.
The capital of the Kunene region, Opuwo lies in the heartland of the Himba, a semi-nomadic people who spend their days herding cattle. Long after many of the world’s other indigenous populations had begun to migrate to cities, the Himba had mostly avoided contact with modern culture, quietly continuing their traditional life. But that is slowly changing, with younger generations feeling the draw of Opuwo, where they will encounter cars, brick buildings, and writing for the first time. ... How does the human mind cope with all those novelties and new sensations? By studying people like the Himba, at the start of their journey into modernity, scientists are now hoping to understand the ways that modern life may have altered all of our minds. ... Like an irregular lens, our modern, urban brains distort the images hitting our retina, magnifying some parts of the scene and shrinking others.
Consider Einstein’s impact on physics. With no tools at his disposal other than the force of his own thoughts, he predicted in his general theory of relativity that massive accelerating objects—like black holes orbiting each other—would create ripples in the fabric of space-time. It took one hundred years, enormous computational power, and massively sophisticated technology to definitively prove him right, with the physical detection of such gravitational waves less than two years ago. ... Einstein revolutionized our understanding of the very laws of the universe. But our understanding of how a mind like his works remains stubbornly earthbound. What set his brainpower, his thought processes, apart from those of his merely brilliant peers? What makes a genius? ... Genius is too elusive, too subjective, too wedded to the verdict of history to be easily identified. And it requires the ultimate expression of too many traits to be simplified into the highest point on one human scale. Instead we can try to understand it by unraveling the complex and tangled qualities—intelligence, creativity, perseverance, and simple good fortune, to name a few—that entwine to create a person capable of changing the world.
As space exploration geared up in the 1960s, scientists were faced with a new dilemma. How could they recognize life on other planets, where it may have evolved very differently—and therefore have a different chemical signature—than it has on Earth? James Lovelock, father of the Gaia theory, gave this advice: Look for order. Every organism is a brief upwelling of structure from chaos, a self-assembled wonder that must jealously defend its order until the day it dies. Sophisticated information processing is necessary to preserve and pass down the rules for maintaining this order, yet life is built out of the messiest materials: tumbling chemicals, soft cells, and tangled polymers. Shouldn’t, therefore, information in biological systems be handled messily, and wasted? In fact, many biological computations are so perfect that they bump up against the mathematical limits of efficiency; genius is our inheritance.
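One concrete example of such a limit (my gloss; the excerpt doesn’t name one) is Landauer’s bound, the minimum thermodynamic cost of erasing a single bit of information. At body temperature it comes to roughly

$$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(310\,\mathrm{K})(0.693) \approx 3\times10^{-21}\,\mathrm{J\ per\ bit},$$

a scale that biological information processing is often said to approach far more closely than today’s silicon does.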
The most intriguing part of the antenna, though, is that it gives him an ability the rest of us don’t have. He looked at the lamps on the roof deck and sensed that the infrared lights that activate them were off. He glanced at the planters and could “see” the ultraviolet markings that show where nectar is located at the centers of the flowers. He has not just matched ordinary human skills; he has exceeded them. ... He is, then, a first step toward the goal that visionary futurists have always had, an early example of what Ray Kurzweil in his well-known book The Singularity Is Near calls “the vast expansion of human potential.” ... But are we on the way to redefining how we evolve? Does evolution now mean not just the slow grind of natural selection spreading desirable genes, but also everything that we can do to amplify our powers and the powers of the things we make—a union of genes, culture, and technology? And if so, where is it taking us? ... Conventional evolution is alive and well in our species. Not long ago we knew the makeup of only a handful of the roughly 20,000 protein-encoding genes in our cells; today we know the function of about 12,000. But genes are only a tiny percentage of the DNA in our genome. More discoveries are certain to come—and quickly. From this trove of genetic information, researchers have already identified dozens of examples of relatively recent evolution. ... In our world now, the primary mover for reproductive success—and thus evolutionary change—is culture, and its weaponized cousin, technology. ... One human trait with a strong genetic component continues to increase in value, even more so as technology grows more dominant. The universal ambition of humanity remains greater intelligence.