Studies in the past two decades indicate that people often understand and remember text on paper better than on a screen. Screens may inhibit comprehension by preventing people from intuitively navigating and mentally mapping long texts. … In general, screens are also more cognitively and physically taxing than paper. Scrolling demands constant conscious effort, and LCD screens on tablets and laptops can strain the eyes and cause headaches by shining light directly on people's faces. … Preliminary research suggests that even so-called digital natives are more likely to recall the gist of a story when they read it on paper because enhanced e-books and e-readers themselves are too distracting. Paper's greatest strength may be its simplicity.
It’s hard to imagine an encryption machine more sophisticated than the human brain. This three-pound blob of tissue holds an estimated 86 billion neurons, cells that rapidly fire electrical pulses in split-second response to whatever stimuli our bodies encounter in the external environment. Each neuron, in turn, has thousands of spindly branches that reach out to nodes, called synapses, which transmit those electrical messages to other cells. Somehow the brain interprets this impossibly noisy code, allowing us to effectively respond to an ever-changing world. ... Given the complexity of the neural code, it’s not surprising that some neuroscientists are borrowing tricks from more experienced hackers: cryptographers, the puzzle-obsessed who draw on math, logic, and computer science to make and break secret codes. That’s precisely the approach of two neuroscience labs at the University of Pennsylvania, whose novel use of cryptography has distinguished them among other labs around the world, which are hard at work deciphering how the brain encodes complex behaviors, abstract thinking, conscious awareness, and all of the other things that make us human.
Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and its subsequent hiring of Geoffrey Hinton, the godfather of computer neural nets, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.
In the year 1723, a French merchant ship sat becalmed halfway across the Atlantic Ocean. For over a month, she drifted with the currents, sails loose and flapping, waiting for a steady breeze. More than two hundred years had passed since Columbus made the same journey, and transatlantic travel was now a matter of course. But sometimes the fate and consequence of a voyage still hinged on seeds. By some accounts, the drifting ship had already faced a troubled passage—outrunning a deadly storm off Gibraltar, and narrowly avoiding capture by Tunisian pirates. Now, stuck in that windless equatorial zone known as the doldrums, the ship had run so low on fresh water that the captain ordered strict rationing for crew and passengers alike. Among those travelers, one gentleman felt particularly parched, because he was sharing his small allotment with a thirsty tropical shrub. ... “It serves no purpose to go into the details of the infinite care I had to provide that delicate plant,” he wrote, long after the wind picked up and the ship docked safely at the Caribbean island of Martinique. And long after the descendants of his spindly sapling were well on their way to changing economies throughout Central and South America. The plant, of course, was coffee, but just how a young naval officer named Gabriel-Mathieu de Clieu got his hands on it remains a matter of debate.
Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind. ... “It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.” ... Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think. ... Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
Was human evolution inevitable, or do we owe our existence to a once-in-a-universe stroke of luck? ... At first glance, everything that’s happened during the 3.8-billion-year history of life on our planet seems to have depended quite critically on all that came before. And Homo sapiens arrived on the scene only 200,000 years ago. The world got along just fine without us for billions of years. Gould didn’t mention chaos theory in his book, but he described it perfectly: ‘Little quirks at the outset, occurring for no particular reason, unleash cascades of consequences that make a particular future seem inevitable in retrospect,’ he wrote. ‘But the slightest early nudge contacts a different groove, and history veers into another plausible channel, diverging continually from its original pathway.’ ... One of the first lucky breaks in our story occurred at the dawn of biological complexity, when unicellular life evolved into multicellular organisms. ... Throughout human prehistory, biological change and technological change ran in parallel. Brains were increasing in size – but this was not unique to our ancestors, and can be seen across multiple hominin species. Something very complicated was going on – a kind of arms race, Tattersall suggests, in which cognitive capacity and technology reinforced each other. At the same time, each branch of the human evolutionary tree was forced to adapt to an ever-changing climate.
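Gould’s ‘slightest early nudge’ is the signature of sensitive dependence on initial conditions. As a minimal illustration (mine, not the article’s), the logistic map, a standard toy system from chaos theory, shows two nearly identical starting points diverging into unrelated histories within a few dozen steps:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), chaotic for r = 4.0.
# Two trajectories that differ by one part in a billion at the start
# become completely uncorrelated after a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # the "slightest early nudge"

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
```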
Exposing the reasons we fail to understand the minds of others. ... No human being succeeds in life alone. Getting along and getting ahead requires coordinating with others, either in cooperation as friends, spouses, teammates, and coworkers, or in competition as adversaries, opponents, or rivals. Arguably our brain’s greatest skill is its ability to think about the minds of others to understand them better. ... the ways in which our sixth sense works well, but not nearly as well as we might think. The truth is that you are likely to understand much less about the minds of your family members, friends, neighbors, coworkers, competitors, and fellow citizens than you would guess. ... One of the biggest barriers to understanding others is excessive egocentrism. You can’t see into the mind of others because you can’t get over yourself. You can’t overcome your own experiences, beliefs, attitudes, emotions, knowledge, and visual perspective to recognize that others may view the world differently. Copernicus may have removed the Earth from the center of the universe, but every person on this planet is still at the center of his or her own universe. ... The important point is to relax a bit when others don’t seem to appreciate you as much as you think they should. ... The point here is that few of us are quite the celebrity that our own experience suggests we might be; nor are we under as much careful scrutiny from others as we might expect. ... Knowledge is a curse because once you have it, you can’t imagine what it’s like not to possess it.
Humans are social and generally want to be part of the crowd. Studies of social conformity suggest that the group’s view may shape how we perceive a situation. Those individuals who remain independent show activity in a part of the brain associated with fear. ... We are natural pattern seekers and see them even where none exist. Our brains are keen to make causal inferences, which can lead to faulty conclusions. ... Standard economic theory assumes that one discount rate allows us to translate value in the future to value in the present, and vice versa. Yet humans often use a high discount rate in the short term and a low one in the long term. This may be because different parts of the brain mediate short- and long-term decisions. ... We suffer losses more than we enjoy gains of comparable size. But the magnitude of loss aversion varies across the population and even for each individual based on recent experience. As a result, we sometimes forgo attractive opportunities because the fear of loss looms too large.
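The short-term/long-term discount-rate claim is the textbook contrast between exponential and hyperbolic discounting. A minimal sketch with made-up numbers (my illustration, not from the excerpt) shows why a hyperbolic discounter flips preferences, impatient between today and tomorrow yet patient between day 30 and day 31, while a single exponential rate never flips:

```python
# Exponential discounting: value = amount * delta ** days   (one constant rate)
# Hyperbolic discounting:  value = amount / (1 + k * days)  (steep now, flat later)

def exponential(amount, days, delta=0.95):
    return amount * delta ** days

def hyperbolic(amount, days, k=0.2):
    return amount / (1 + k * days)

for model in (exponential, hyperbolic):
    now   = (model(100, 0),  model(110, 1))    # $100 today vs $110 tomorrow
    later = (model(100, 30), model(110, 31))   # the same choice pushed out a month
    pick = lambda pair: "smaller-sooner" if pair[0] > pair[1] else "larger-later"
    print(f"{model.__name__:>11}: today -> {pick(now)}, in a month -> {pick(later)}")
```

The exponential agent makes the same choice in both frames; the hyperbolic agent grabs the smaller, sooner reward today but waits patiently when the whole decision is a month away.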
Discovery used to mean going out and coming across stuff - now it seems to mean turning inwards and gazing at screens. We've become reliant on machines to help us get around, so much so that it's changing the way we behave, particularly among younger people who have no experience of a time before GPS. ... Experts believe that making maps in our heads, by working out routes and remembering them, is a vital cognitive function for developing minds. ... the maze and its leafy purlieus were also vital as an escape from the overwhelming busy-ness of court life. Kings and courtesans fled the info-babble of their own day to this soothing oasis of flower power, centuries before the hippies were even thought of.
The brain’s craving for novelty, constant stimulation and immediate gratification creates something called a “compulsion loop.” Like lab rats and drug addicts, we need more and more to get the same effect. ... Endless access to new information also easily overloads our working memory. When we reach cognitive overload, our ability to transfer learning to long-term memory significantly deteriorates. ... we humans have a very limited reservoir of will and discipline. We’re far more likely to succeed by trying to change one behavior at a time, ideally at the same time each day, so that it becomes a habit, requiring less and less energy to sustain.
Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense. ... Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover—nearly 2,000 pages in all—and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though—he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again. ... Though they started at opposite ends of the socioeconomic spectrum, McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence. But this is more than a story about a fruitful research collaboration. It is also about the bonds of friendship, the fragility of the mind, and the limits of logic’s ability to redeem a messy and imperfect world. ... “He was absolutely incomparable in the scholarship of chemistry, physics, of everything you could talk about history, botany, etc. When you asked him a question, you would get back a whole textbook … To him, the world was connected in a very complex and wonderful fashion.”
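The “mechanistic theory of the mind” that McCulloch and Pitts went on to publish in 1943 treats each neuron as an all-or-nothing threshold unit, which is already enough to compute Boolean logic. A minimal sketch of that idea (a paraphrase of the model, not their original notation):

```python
# A McCulloch-Pitts unit fires (outputs 1) when the number of active excitatory
# inputs reaches its threshold and no inhibitory input is active.

def mp_unit(excitatory, threshold, inhibitory=()):
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Basic logic gates, each built from a single unit:
AND = lambda a, b: mp_unit([a, b], threshold=2)
OR  = lambda a, b: mp_unit([a, b], threshold=1)
NOT = lambda a:    mp_unit([1], threshold=1, inhibitory=[a])

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```

Chaining such units yields arbitrary logical circuits, which is why the excerpt can credit the pair with both a computational approach to neuroscience and the logical design of modern computers.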
New research puts us on the cusp of brain-to-brain communication. Could the next step spell the end of individual minds? ... we’ve moved beyond merely thinking orders at machinery. Now we’re using that machinery to wire living brains together. Last year, a team of European neuroscientists headed by Carles Grau of the University of Barcelona reported a kind of – let’s call it mail-order telepathy – in which the recorded brainwaves of someone thinking a salutation in India were emailed, decoded and implanted into the brains of recipients in Spain and France (where they were perceived as flashes of light). ... What are the implications of a technology that seems to be converging on the sharing of consciousness? ... It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. ... Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention). ... If physics is right – if everything ultimately comes down to matter, energy and numbers – then any sufficiently accurate copy of a thing will manifest the characteristics of that thing. Sapience should therefore emerge from any physical structure that replicates the relevant properties of the brain.
There is no universally accepted definition of boredom. But whatever it is, researchers argue, it is not simply another name for depression or apathy. It seems to be a specific mental state that people find unpleasant — a lack of stimulation that leaves them craving relief, with a host of behavioural, medical and social consequences. ... Researchers hope to turn such hints into a deep understanding of what boredom is, how it manifests in the brain and how it relates to factors such as self-control. ... The scientific study of boredom dates back to at least 1885, when the British polymath Francis Galton published a short note in Nature on 'The Measure of Fidget' — his account of how restless audience members behaved during a scientific meeting. But decades passed with only a few people taking a serious interest in the subject.
Meditation and mindfulness are the new rage in Silicon Valley. And it’s not just about inner peace—it’s about getting ahead. … Across the Valley, quiet contemplation is seen as the new caffeine, the fuel that allegedly unlocks productivity and creative bursts. Classes in meditation and mindfulness—paying close, nonjudgmental attention—have become staples at many of the region’s most prominent companies. There’s a Search Inside Yourself Leadership Institute now teaching the Google meditation method to whoever wants it. The cofounders of Twitter and Facebook have made contemplative practices key features of their new enterprises, holding regular in-office meditation sessions and arranging for work routines that maximize mindfulness. Some 1,700 people showed up at a Wisdom 2.0 conference held in San Francisco this winter, with top executives from LinkedIn, Cisco, and Ford featured among the headliners. … These companies are doing more than simply seizing on Buddhist practices. Entrepreneurs and engineers are taking millennia-old traditions and reshaping them to fit the Valley’s goal-oriented, data-driven, largely atheistic culture. Forget past lives; never mind nirvana. The technology community of Northern California wants return on its investment in meditation. … It can be tempting to dismiss the interest in these ancient practices as just another neo-spiritual fad from a part of the country that’s cycled through one New Age after another. But it’s worth noting that the prophets of this new gospel are in the tech companies that already underpin so much of our lives.
Virtual reality overlaid on the real world in this manner is called mixed reality, or MR. (The goggles are semitransparent, allowing you to see your actual surroundings.) It is more difficult to achieve than the classic fully immersive virtual reality, or VR, where all you see are synthetic images, and in many ways MR is the more powerful of the two technologies. ... Magic Leap is not the only company creating mixed-reality technology, but right now the quality of its virtual visions exceeds all others. Because of this lead, money is pouring into this Florida office park. ... At the beginning of this year, the company completed what may be the largest C-round of financing in history: $793.5 million. To date, investors have funneled $1.4 billion into it. ... to really understand what’s happening at Magic Leap, you need to also understand the tidal wave surging through the entire tech industry. All the major players—Facebook, Google, Apple, Amazon, Microsoft, Sony, Samsung—have whole groups dedicated to artificial reality, and they’re hiring more engineers daily. Facebook alone has over 400 people working on VR. Then there are some 230 other companies, such as Meta, the Void, Atheer, Lytro, and 8i, working furiously on hardware and content for this new platform. To fully appreciate Magic Leap’s gravitational pull, you really must see this emerging industry—every virtual-reality and mixed-reality headset, every VR camera technique, all the novel VR applications, beta-version VR games, every prototype VR social world. ... The recurring discovery I made in each virtual world I entered was that although every one of these environments was fake, the experiences I had in them were genuine. ... The technology forces you to be present—in a way flatscreens do not—so that you gain authentic experiences, as authentic as in real life.
According to scientists I spoke with, the quality of your slumber has more repercussions on your happiness, intelligence, and health than what you eat, where you live, or how much money you make. Not to be a downer, but chronic sleep deprivation, which Amnesty International designates a form of torture, has been linked to diabetes, cancer, high blood pressure, heart disease, stroke, learning difficulties, colds, gastrointestinal problems, depression, execution (the sleep-starved defense minister of North Korea is rumored to have been shot after dozing in the presence of Kim Jong-un), world disasters (the Challenger explosion, the Three Mile Island meltdown), and non-disasters ... Many scientists have come to believe that while we sleep the space between our neurons expands, allowing a cranial sewage network—the glymphatic system—to flush the brain of waste products that might otherwise not only prevent memory formation but muck up our mental machinery and perhaps eventually lead to Alzheimer’s. Failing to get enough sleep is like throwing a party and then firing the cleanup crew. ... A National Institutes of Health study showed that twenty-five to thirty per cent of American adults have periodic episodes of sleeplessness and twenty per cent suffer from chronic insomnia. On the advice of sleep doctors, fatigue-management specialists, and know-it-alls on wellness blogs, these tossers and turners drink cherry juice, eat Atlantic perch, set the bedroom thermostat between sixty-seven and seventy degrees, put magnets under the pillow, curl their toes, uncurl their toes, and kick their partners out of bed, usually to little avail. ... The ancient Romans smeared mouse fat onto the soles of their feet, and the Lunesta of the Dark Ages was a smoothie made from the gall of castrated boars.
As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like. ... Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
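Hoffman’s case rests on evolutionary game simulations in which agents tuned to fitness payoffs outcompete agents tuned to the true state of the world whenever payoff is not a monotonic function of that state. A toy sketch of the logic (my own illustration, not Hoffman’s actual models): each resource has a true quantity, but payoff peaks at an intermediate amount, so the “see the truth, take more” strategy loses.

```python
import random

# Payoff is non-monotonic in the true quantity: too little or too much is bad
# (think water, salt, or sunlight), peaking at an intermediate amount.
def payoff(quantity):
    return max(0.0, 100 - (quantity - 50) ** 2 / 25)

random.seed(0)
truth_score = fitness_score = 0.0

for _ in range(10_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)   # two available resources
    truth_score   += payoff(max(a, b))                       # "truth" agent sees quantities, takes more
    fitness_score += max(payoff(a), payoff(b))               # "fitness" agent sees only payoff

print(f"truth-guided total:   {truth_score:,.0f}")
print(f"fitness-guided total: {fitness_score:,.0f}")  # reliably higher
```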
The root cause of fear, and how to treat it, has been one of modern psychology’s central questions. In the early twentieth century, Sigmund Freud argued phobias were “protective structures” springing from a patient’s “repressed longing” for his mother. In 1920, however, the American psychologist John B. Watson put forward a simpler theory: People develop fears through negative experiences. To test his hypothesis, he sought to condition an infant, whom he called “Little Albert,” to fear a white rat by presenting the rat to the child and simultaneously striking a steel bar with a hammer to produce a frightening noise. ... Different types of memories consolidate in different parts of the brain. Explicit memories of life events, for instance, consolidate in the hippocampus, a long, podlike structure near the center of the brain. Emotional memories, including fear, consolidate nearby in the amygdala, which activates the fight-or-flight response when it senses danger. The subjective experience of fear often involves both of these memory systems—a person will consciously remember past experiences while also undergoing several automatic physiological responses, such as increased heart rate—but they operate independently of each other.
The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’. ... Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. ... Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers? ... A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active.
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ... The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. ... If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy. ... What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ... When people stop believing they are free agents, they stop seeing themselves as blameworthy for their actions.
- Also: Aeon - Getting smarter 5-15min
Indulging in undirected positive flights of fancy isn’t always in our interest. Positive thinking can make us feel better in the short term, but over the long term it saps our motivation, preventing us from achieving our wishes and goals, and leaving us feeling frustrated, stymied and stuck. If we really want to move ahead in our lives, engage with the world and feel energised, we need to go beyond positive thinking and connect as well with the obstacles that stand in our way. By bringing our dreams into contact with reality, we can unleash our greatest energies and make the most progress in our lives. ... Now, you might wonder if positive thinking is really as harmful as I’m suggesting. In fact, it is. In a number of studies over two decades, my colleagues and I have discovered a powerful link between positive thinking and poor performance. ... Positive thinking impedes performance because it relaxes us and drains the energy we need to take action. After having participants in one study positively fantasise about the future for as little as a few minutes, we observed declines in systolic blood pressure, a standard measure of a person’s energy level. These declines were significant: whereas smoking a cigarette will typically raise a person’s blood pressure by five or ten points, engaging in positive fantasies lowers it by about half as much. ... Such relaxation occurs because positive fantasies fool our minds into thinking that we’ve already achieved our goals – what psychologists call ‘mental attainment’. ... WOOP – Wish, Outcome, Obstacle, Plan. Defining the Wish and identifying and visualising the desired Outcome and Obstacle are the mental contrasting part; forming implementation intentions represents the final step: the Plan.
Our world had spun around the sun more than 30 times since, though Henry’s world had stayed still, frozen in orbit. This is because 1953 was the year he received an experimental operation, one that destroyed most of several deep-seated structures in his brain, including his hippocampus, his amygdala and his entorhinal cortex. The operation, performed on both sides of his brain and intended to treat Henry’s epilepsy, rendered him profoundly amnesiac, unable to hold on to the present moment for more than 30 seconds or so. That outcome, devastating to Henry, was a boon to science: By 1986, Patient H.M. — as he was called in countless journal articles and textbooks — had become arguably the most important human research subject of all time, revolutionizing our understanding of how memory works. ... Of course, Henry didn’t know that. No matter how many times the scientists told him he was famous, he’d always forget. ... one of the things about Henry that fascinated scientists: His amnesia often appeared, as they termed it, pure. There was an abyss in his brain that all the passing events of his life tumbled into, but on the surface he could seem almost normal. ... Even as a nonscientist, I couldn’t help noticing that some of the unpublished data I came across while reporting my book went against the grain of the established narrative of Patient H.M. For example, unpublished parts of a three-page psychological assessment of Henry provided evidence that even before the operation that transformed Henry Molaison into the amnesiac Patient H.M., his memory was already severely impaired. The causes and significance of Henry’s preoperative memory deficits can be debated, but their existence only underscores the importance of preserving the complete record of the most important research subject in the history of memory science.
- Also: Aeon - My spotless mind 5-15min
London has more than eight million residents; unless somebody recognizes a suspect, CCTV footage is effectively useless. Investigators circulated photographs of the man with the mustache, but nobody came forward with information. So they turned to a tiny unit that had recently been established by London’s Metropolitan Police Service. In Room 901 of New Scotland Yard, the police had assembled half a dozen officers who shared an unusual talent: they all had a preternatural ability to recognize human faces. ... Most police precincts have an officer or two with a knack for recalling faces, but the Met (as the Metropolitan Police Service is known) is the first department in the world to create a specialized unit. The team is called the super-recognizers, and each member has taken a battery of tests, administered by scientists, to establish this uncanny credential. Glancing at a pixelated face in a low-resolution screen grab, super-recognizers can identify a crook with whom they had a chance encounter years earlier, or whom they recognize from a mug shot. ... By some estimates, as many as a million CCTV cameras are installed in London, making it the most surveilled metropolis on the planet. ... Prosopagnosics often have strange stories about how they cope with their condition. The subjects had their own curious tales about being on the other end of the spectrum. They not only recognized character actors in movies—they recognized the extras, too. In social situations, prosopagnosics often smiled blandly and behaved as if they had previously encountered everyone they met, rather than risk offending acquaintances. Russell’s subjects described the opposite adaptation: they often pretended that they were meeting for the first time people whom they knew they’d met before.
For a decade and a half, I’d been a web obsessive, publishing blog posts multiple times a day, seven days a week, and ultimately corralling a team that curated the web every 20 minutes during peak hours. Each morning began with a full immersion in the stream of internet consciousness and news, jumping from site to site, tweet to tweet, breaking news story to hottest take, scanning countless images and videos, catching up with multiple memes. Throughout the day, I’d cough up an insight or an argument or a joke about what had just occurred or what was happening right now. And at times, as events took over, I’d spend weeks manically grabbing every tiny scrap of a developing story in order to fuse them into a narrative in real time. I was in an unending dialogue with readers who were caviling, praising, booing, correcting. My brain had never been so occupied so insistently by so many different subjects and in so public a way for so long. ... I was, in other words, a very early adopter of what we might now call living-in-the-web. And as the years went by, I realized I was no longer alone. Facebook soon gave everyone the equivalent of their own blog and their own audience. More and more people got a smartphone — connecting them instantly to a deluge of febrile content, forcing them to cull and absorb and assimilate the online torrent as relentlessly as I had once. ... Then the apps descended, like the rain, to inundate what was left of our free time. It was ubiquitous now, this virtual living, this never-stopping, this always-updating. ... the insanity was now banality ... We almost forget that ten years ago, there were no smartphones, and as recently as 2011, only a third of Americans owned one. Now nearly two-thirds do. That figure reaches 85 percent when you’re only counting young adults. And 46 percent of Americans told Pew surveyors last year a simple but remarkable thing: They could not live without one. ... By rapidly substituting virtual reality for reality, we are diminishing the scope of this interaction even as we multiply the number of people with whom we interact.
Learning math and then science as an adult gave me passage into the empowering world of engineering. But these hard-won, adult-age changes in my brain have also given me an insider’s perspective on the neuroplasticity that underlies adult learning. ... In the current educational climate, memorization and repetition in the STEM disciplines (as opposed to in the study of language or music) are often seen as demeaning and a waste of time for students and teachers alike. Many teachers have long been taught that conceptual understanding in STEM trumps everything else. And indeed, it’s easier for teachers to induce students to discuss a mathematical subject (which, if done properly, can do much to help promote understanding) than it is for that teacher to tediously grade math homework. What this all means is that, despite the fact that procedural skills and fluency, along with application, are supposed to be given equal emphasis with conceptual understanding, all too often it doesn’t happen. Imparting a conceptual understanding reigns supreme—especially during precious class time. ... The problem with focusing relentlessly on understanding is that math and science students can often grasp essentials of an important idea, but this understanding can quickly slip away without consolidation through practice and repetition. Worse, students often believe they understand something when, in fact, they don’t. ... Chunking was originally conceptualized in the groundbreaking work of Herbert Simon in his analysis of chess—chunks were envisioned as the varying neural counterparts of different chess patterns. Gradually, neuroscientists came to realize that experts such as chess grand masters are experts because they have stored thousands of chunks of knowledge about their area of expertise in their long-term memory. ... As studies of chess masters, emergency room physicians, and fighter pilots have shown, in times of critical stress, conscious analysis of a situation is replaced by quick, subconscious processing as these experts rapidly draw on their deeply ingrained repertoire of neural subroutines—chunks. ... Understanding doesn’t build fluency; instead, fluency builds understanding.