How an artificial language from 1887 is finding new life online ... Like its vastly more successful digital cousins — C++, HTML, Python, Linux — Esperanto is an artificial language, designed to have perfectly regular grammar, with none of the messy exceptions of natural tongues. Out loud, all that regularity creates strange cadences, like someone speaking Italian slowly while chewing gum. William Auld, the Modernist Scottish poet who wrote his greatest work in Esperanto, was nominated for the Nobel Prize multiple times, but never won. But it is supremely easy to learn, like a puzzle piece formed to fit into the human brain. ... Decades before Couchsurfing became a website (or the word website existed), Esperantists had an international homestay service called Pasaporta Servo, in which friendly hosts around the world listed their phone numbers and home addresses in a central directory available to traveling Esperantists. It may be a small, widely dispersed, and self-selected diaspora, but wherever you go, there are Esperantists who are excited that you exist. ... There’s no money, no power, no marketing, no prestige — Esperanto speakers speak Esperanto because they believe in it, and because it’s fun to speak a foreign language almost instantly, after a couple months of rolling the words around in your mouth. ... Esperanto was invented in 1887 by a Polish ophthalmologist named L.L. Zamenhof, who hoped his creation would bring about world peace. Zamenhof saw a turbulent world divided by language, and concluded that the situation was too complicated, essentially unfair, and ultimately doomed. He believed that the languages people already spoke were oversaturated with history, politics, and power, making it impossible to communicate clearly. Esperanto was a fresh start, a technology that would allow its speakers to sidestep the difficulties of natural languages altogether.
If we want to safeguard our languages, stories and ideas against extinction, we had better study Egyptology ... The scientific community has recently begun to think hard about natural and technological existential risks to human beings: a wandering asteroid, an unfortunately timed gamma-ray burst, a warming planet. But we should also begin to think about the possibility of cultural apocalypse. The Egyptian case is instructive: an epoch of stunning continuity, followed by abrupt extinction. This is a decline and fall worth keeping in mind. ... for all its carven glyphs, Egypt cannot claim to have passed down its dreams, memories and hopes for the future. Some of its civilisation has been recovered, but some was lost irretrievably. This is sobering enough on its own terms. When you examine our beloved present day from an Egyptological distance, you see that we are vulnerable to a similar fate. ... Imagine the pharaohs’ frustration at all the bits of language lost, the prayers and tributes especially. This was a civilisation that had its eyes fixed on eternity. Its civil calendar was apparently keyed to the heliacal rising of Sothis, whose astronomical cycle has a period of some 1,460 years. By dint of longevity, the first Egyptologists were Egyptian, and ditto the first tomb robbers. Is it a bridge too far to say the first futurists were Egyptian too? ... Cretan hieroglyphs remain impenetrable, Olmec – the language of the first major civilisation in Mexico – is largely a mystery, and only within the past half-century or so has meaning been teased from the Mayan script. For every civilisation retrieved, another remains substantially beyond our comprehension.
Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and a subsequent hiring of the godfather of computer neural nets Geoffrey Hinton, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.
Paleogenetics is helping to solve the great mystery of prehistory: how did humans spread out over the earth? ... Before the Second World War, prehistory was seen as a series of invasions, with proto-Celts and Indo-Aryans swooping down on unsuspecting swaths of Europe and Asia like so many Vikings, while megalith builders wandered between continents in indecisive meanders. After the Second World War, this view was replaced by the processual school, which attributed cultural changes to internal adaptations. Ideas and technologies might travel, but people by and large stayed put. Today, however, migration is making a comeback. ... Much of this shift has to do with the introduction of powerful new techniques for studying ancient DNA. ... Whole-genome sequencing yields orders of magnitude more data than organelle-based testing, and allows geneticists to make detailed comparisons between individuals and populations. Those comparisons are now illuminating new branches of the human family tree. ... In five years, we’ve gone from thinking we shared no DNA with Neanderthals, to realising that there was widespread interbreeding, to pinpointing it (for one individual) within 200 years – almost the span of a family album. But the use of ancient DNA isn’t limited to our near-human relatives. It is also telling us about the dispersal of humans out of Africa, and the origin and spread of agriculture, and the peopling of the Americas. It is also helping archaeologists crack one of the great mysteries of prehistory: the origins of the Indo-Europeans.
Words have power, especially in meetings. A new study from MIT’s Sloan School of Management finds that saying “yeah”, “give”, “start” and even “meeting” can boost a person’s persuasive powers among co-workers. … Statisticians Cynthia Rudin and Been Kim studied 95 meetings for the vocabulary used in proposals that were accepted by the group. They concluded that the most persuasive words are those that build consensus. … “Yeah” signals agreement with a previous idea, the authors posit. Using the word “start” in sentences like “I think we should start with the basics” is useful for building alliances early; group participants want to appear interested in being productive. The word “give” indicates some benefit to the group.
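The study’s method, as described, comes down to comparing how often certain words appear in proposals the group accepted versus ones it rejected. A minimal sketch of that idea — the word list comes from the article, but the sample utterances and the `rate` helper are invented for illustration:

```python
CANDIDATES = {"yeah", "give", "start", "meeting"}  # words the study found persuasive

# Toy transcripts, invented for illustration.
accepted = ["yeah let's start with the basics",
            "this would give the group a head start"]
rejected = ["I disagree with the whole premise"]

def rate(utterances):
    """Fraction of all words that belong to the candidate set."""
    words = [w for u in utterances for w in u.lower().split()]
    return sum(1 for w in words if w in CANDIDATES) / len(words)

print(rate(accepted) > rate(rejected))  # consensus words are denser in accepted proposals
```

The real analysis worked over transcripts of 95 meetings rather than toy strings, but the underlying comparison is this simple.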
As Nadella, a 24-year veteran of the company, would have known, the process of turning a Microsoft Research project into a product would often happen slowly, if at all. That's partly by design. The company's research group was set up in isolation from the product teams to allow researchers to envision the future without worrying about how their inventions would make money or fit into the company's mission. ... But Nadella's tight deadline left executives with no time to debate the separation of church and state. ... Microsoft is overhauling its research arm and the way it works with the rest of the company. The goal is to quickly identify technology with the most potential and get it into customers' hands before a competitor replicates it. ... To break down the walls between its research group and the rest of the company, Microsoft reassigned about half of its more than 1,000 research staff in September 2014 to a new group called MSR NExT. Its focus is on projects with greater impact on the company rather than pure research. Meanwhile, the other half of Microsoft Research is getting pushed to find more significant ways it can contribute to the company's products.
More so than the average American sitcom, Seinfeld has had difficulty reaching global audiences. While it’s popular in Latin America, it hasn’t been widely accepted in Germany, France, Italy, and the Netherlands. Two decades after it went off the air, Seinfeld remains relevant to American audiences — thanks in part to omnipresent syndicated reruns — but in much of Europe it is considered a cult hit, and commonly relegated to deep-late-night time slots. Its humor, it seems, is just too complicated, too cultural and word-based, to make for easy translation. ... Jokes are the hardest things to translate into another language, another culture, another world. A good script for dubbing an American sitcom for foreign consumption does more than literally translate. It manages to convey the same meaning, the same feeling, the same story — the same direct hit to the lower frontal lobes of the brain that produces a laugh, even though those frontal lobes are steeped in a completely different cultural brew. ... Lip-synch dubbing, despite its ultimate benefits, can get very complicated. It’s not just that the lines may not translate directly — they also have to take just as long to say in both languages and approximate, as closely as possible, the lip movements of the original actors. That can pose an added challenge when translating from laconic languages like English into verbose languages like German.
It’s easy to dismiss emoji. They are, at first glance, ridiculous. They are a small invasive cartoon army of faces and vehicles and flags and food and symbols trying to topple the millennia-long reign of words. Emoji are intended to illustrate, or in some cases replace altogether, the words we send each other digitally, whether in a text message, email, or tweet. ... And yet, if you have a smartphone, emoji are now available to you as an optional written language, just like any global language, such as Arabic and Catalan and Cherokee and Tamil and Tibetan and English. You’ll find an emoji keyboard on your iPhone, nestled right between Dutch and Estonian. The current set is limited to 722 symbols—these are the ones that have been officially encoded into Unicode, which is an international text-encoding standard that allows one operating system to recognize text from another. ... Emoji were born in a true eureka moment, from the mind of a single man: Shigetaka Kurita, an employee at the Japanese telecom company NTT Docomo. Back in the late 1990s, the company was looking for a way to distinguish its pager service from its competitors in a very tight market. Kurita hit on the idea of adding simplistic cartoon images to its messaging functions as a way to appeal to teens. The first round of what came to be called emoji—a Japanese neologism that means, more or less, “picture word”—were designed by Kurita, using a pencil and paper, as drawings on a 12-by-12-pixel grid and were inspired by pictorial Japanese sources, like manga (Japanese comic books) and kanji (Japanese characters borrowed from written Chinese).
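That Unicode detail is easy to see from any programming language: each emoji is just a numbered character, and the shared encoding is what lets one system display text produced by another. A minimal illustration in Python (the particular emoji chosen here is arbitrary):

```python
grin = "😀"                    # U+1F600 GRINNING FACE, one of the encoded symbols
print(hex(ord(grin)))          # 0x1f600: its Unicode code point, the same on every OS
print(grin.encode("utf-8"))    # b'\xf0\x9f\x98\x80': the bytes actually sent between devices
```

Any device that understands Unicode maps those four bytes back to the same code point, which is why a face typed on an iPhone survives the trip to an Android phone.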
Many companies already have the ability to run keyword searches of employees’ emails, looking for worrisome words and phrases like embezzle and I loathe this job. But the Stroz Friedberg software, called Scout, aspires to go a giant step further, detecting indirectly, through unconscious syntactic and grammatical clues, workers’ anger, financial or personal stress, and other tip-offs that an employee might be about to lose it. ... To measure employees’ disgruntlement, for instance, it uses an algorithm based on linguistic tells found to connote feelings of victimization, anger, and blame. ... It’s not illegal to be disgruntled. But today’s frustrated worker could engineer tomorrow’s hundred-million-dollar data breach. Scout is being marketed as a cutting-edge weapon in the growing arsenal that helps corporations combat “insider threat,” the phenomenon of employees going bad. Workers who commit fraud or embezzlement are one example, but so are “bad leavers”—employees or contractors who, when they depart, steal intellectual property or other confidential data, sabotage the information technology system, or threaten to do so unless they’re paid off. Workplace violence is a growing concern too. ... Though companies have long been arming themselves against cyberattack by external hackers, often presumed to come from distant lands like Russia and China, they’re increasingly realizing that many assaults are launched from within—by, say, the quiet guy down the hall whose contract wasn’t renewed.
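The article contrasts Scout’s statistical approach with the simpler keyword searches many companies already run. That baseline is easy to sketch; everything below (the flag list, the messages, the `flagged` helper) is invented for illustration, and Scout’s actual linguistic models are proprietary:

```python
# Phrases a compliance team might search for, per the article's examples.
FLAGS = ["embezzle", "i loathe this job"]

def flagged(message: str) -> bool:
    """Case-insensitive substring match against the flag list."""
    text = message.lower()
    return any(phrase in text for phrase in FLAGS)

inbox = ["Quarterly numbers attached.",
         "Honestly, I loathe this job and everyone here."]
print([flagged(m) for m in inbox])  # [False, True]
```

The limitation the article implies is visible even here: a literal match catches the explicit phrase but says nothing about mood, which is the gap the unconscious syntactic and grammatical cues are meant to fill.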
Kos-Read, who is known in China only as Cao Cao, is by far the leading foreign actor working in the country today, having appeared in about 100 movies and television programs since his career began in 1999. He is famous throughout the mainland, and his career has been on a steady upward trajectory. Last December he appeared in the action film “Mojin — The Lost Legend,” currently the fifth-highest-grossing movie in Chinese history. ... just as Hollywood has begun to crack the market, Chinese cinema has come into its own. In recent years, Chinese studios have started shifting away from the agitprop that defined their cinematic output for generations and are instead focusing on genres that draw viewers to theaters in any country: action, adventure, comedy. In February, a sci-fi comedy called “The Mermaid” became the highest-grossing movie ever in China within 12 days of its release, earning more than $430 million. Increasingly, Chinese cinemagoers are opting to buy tickets for movies made specifically for them — like those in the “Ip Man” series — not those that pander to them or lecture them. It is in this sort of film that Kos-Read has finally had the chance to act, rather than portray a stand-in for Western imperiousness. If the Hollywood studios really want to understand how to succeed in China, Kos-Read’s journey makes for a kind of accidental guide.
Learning math and then science as an adult gave me passage into the empowering world of engineering. But these hard-won, adult-age changes in my brain have also given me an insider’s perspective on the neuroplasticity that underlies adult learning. ... In the current educational climate, memorization and repetition in the STEM disciplines (as opposed to in the study of language or music) are often seen as demeaning and a waste of time for students and teachers alike. Many teachers have long been taught that conceptual understanding in STEM trumps everything else. And indeed, it’s easier for teachers to induce students to discuss a mathematical subject (which, if done properly, can do much to help promote understanding) than it is for that teacher to tediously grade math homework. What this all means is that, despite the fact that procedural skills and fluency, along with application, are supposed to be given equal emphasis with conceptual understanding, all too often it doesn’t happen. Imparting a conceptual understanding reigns supreme—especially during precious class time. ... The problem with focusing relentlessly on understanding is that math and science students can often grasp essentials of an important idea, but this understanding can quickly slip away without consolidation through practice and repetition. Worse, students often believe they understand something when, in fact, they don’t. ... Chunking was originally conceptualized in the groundbreaking work of Herbert Simon in his analysis of chess—chunks were envisioned as the varying neural counterparts of different chess patterns. Gradually, neuroscientists came to realize that experts such as chess grand masters are experts because they have stored thousands of chunks of knowledge about their area of expertise in their long-term memory. ... 
As studies of chess masters, emergency room physicians, and fighter pilots have shown, in times of critical stress, conscious analysis of a situation is replaced by quick, subconscious processing as these experts rapidly draw on their deeply ingrained repertoire of neural subroutines—chunks. ... Understanding doesn’t build fluency; instead, fluency builds understanding.
My time in China has taught me the pleasure and value of craftsmanship, simply because it’s so rare. To see somebody doing a job well, not just for its own reward, but for the satisfaction of good work, thrills my heart; it doesn’t matter whether it’s cooking or candle-making or fixing a bike. ... the prevailing attitude is chabuduo, or ‘close enough’. It’s a phrase you’ll hear with grating regularity, one that speaks to a job 70 per cent done, a plan sketched out but never completed, a gauge unchecked or a socket put in the wrong size. ... implies that to put any more time or effort into a piece of work would be the act of a fool. China is the land of the cut corner, of ‘good enough for government work’. ... sometimes there’s a brilliance to chabuduo. One of the daily necessities of life under Maoism was improvisation; finding ways to keep irreplaceable luxuries such as tractors or machine tools going, despite missing parts or broken supply chains. ... More usually, chabuduo is the domain of a village uncle who grew up with nothing and can whip up a solution to anything out of two bits of wire and some tape.
The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. ... Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. ... That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high of more than $1 billion last quarter, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014. ... The hardware world is feeling the tremors. The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations. ... Think of deep learning as a subset of a subset. 
“Artificial intelligence” encompasses a vast range of technologies—like traditional logic and rules-based systems—that enable computers and robots to solve problems in ways that at least superficially resemble thinking. Within that realm is a smaller category called machine learning, which is the name for a whole toolbox of arcane but important mathematical techniques that enable computers to improve at performing tasks with experience. Finally, within machine learning is the smaller subcategory called deep learning.
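The defining property of that middle category, improving at a task with experience, fits in a few lines. A toy sketch, not any production system: a single weight is nudged by gradient descent until predictions fit the data, so measured error falls as “experience” accumulates:

```python
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]   # toy training data: y = 3x

w = 0.0          # the entire "model": predict y ≈ w * x
lr = 0.02        # learning rate: how far each correction moves the weight

def loss(weight):
    """Mean squared error of the model's predictions on the data."""
    return sum((weight * x - y) ** 2 for x, y in data) / len(data)

before = loss(w)
for _ in range(200):   # "experience": repeated passes over the data
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad     # nudge the weight downhill on the error surface
after = loss(w)

print(round(w, 2))      # close to 3.0, the relationship hidden in the data
print(after < before)   # True: the task is performed better after experience
```

Deep learning stacks millions of such weights into many layers, but the loop is the same: compute the error, nudge the weights downhill, repeat.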
- Also: FiveThirtyEight - Some Like It Bot < 5min
- Also: Vox - Venture capitalist Marc Andreessen explains how AI will change the world 5-15min
- Also: Nautilus - Moore’s Law Is About to Get Weird < 5min
- Also: Edge - AI & The Future Of Civilization < 5min
- Also: Medium - Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 1 5-15min
- Also: Rolling Stone - Inside the Artificial Intelligence Revolution: Pt. 2 5-15min
Residents here still speak Sardo, the closest living form of Latin. Grandmothers gaze warily at outsiders from under embroidered veils. And, in a modest apartment in the town of Nuoro, a slight 62-year-old named Paola Abraini wakes up every day at 7 am to begin making su filindeu – the rarest pasta in the world. ... In fact, there are only two other women on the planet who still know how to make it: Abraini’s niece and her sister-in-law, both of whom live in this far-flung town clinging to the slopes of Monte Ortobene. ... No one can remember how or why the women in Nuoro started preparing su filindeu (whose name means “the threads of God”), but for more than 300 years, the recipe and technique have only been passed down through the women in Abraini’s family – each of whom has guarded it tightly before teaching it to her daughters. ... Last year, a team of engineers from Barilla pasta came to see if they could reproduce her technique with a machine. They couldn’t.
Becoming a rapper today might seem as easy as signing up for SoundCloud and visiting your neighborhood face-tattoo parlor, but only a few artists get to travel the country playing to sold-out arenas. Whichever end of this vast spectrum you find yourself on, it helps to be young and unattached, and able to tour constantly. E-40 is none of those things: he is 49, happily married with two sons. His rap career began when cassette tapes still seemed pretty novel, and now that many of us don’t even have a way of listening to CDs, he’s returned to making music the way he did back in the late ’80s: completely independently, selling his raps more or less directly to his fans.
Aching, throbbing, searing, excruciating – pain is difficult to describe and impossible to see. So how can doctors measure it? ... During that period of convalescence, as I watched her grimace and clench her teeth and let slip little cries of anguish until a long regimen of combined ibuprofen and codeine finally conquered the pain, several questions came into my head. Chief among them was: Can anyone in the medical profession talk about pain with any authority? From the family doctor to the surgeon, their remarks and suggestions seemed tentative, generalised, unknowing – and potentially dangerous: Was it right for the doctor to tell my wife that her level of pain didn’t sound like appendicitis when the doctor didn’t know whether she had a high or low pain threshold? Should he have advised her to stay in bed and risk her appendix exploding into peritonitis? How could surgeons predict that patients would feel only ‘discomfort’ after such an operation when she felt agony – an agony that was aggravated by fear that the operation had been a failure? ... There seemed to be a chasm of understanding in human discussions of pain. I wanted to find out how the medical profession apprehends pain – the language it uses for something that’s invisible to the naked eye, that can’t be measured except by asking for the sufferer’s subjective description, and that can be treated only by the use of opium derivatives that go back to the Middle Ages.
He was, she remembered, preoccupied with the math problems he worked over in the evenings, and he was prone to writing down stray equations on napkins at restaurants in the middle of meals. He had few strong opinions about the war or politics, but many about this or that jazz musician. ... Oliver, Pierce, and Shannon—a genius clique, each secure enough in his own intellect to find comfort in the company of the others. They shared a fascination with the emerging field of digital communication and co-wrote a key paper explaining its advantages in accuracy and reliability. ... Partly, it seems, the distance between Shannon and his colleagues was a matter of sheer processing speed. ... Shannon’s response to colleagues who could not keep pace was simply to forget about them. ... George Henry Lewes once observed that “genius is rarely able to give an account of its own processes.” This seems to have been true of Shannon, who neither could explain himself to others nor cared to. In his work life, he preferred solitude and kept his professional associations to a minimum. ... Shannon wouldn’t have been the first genius with an inward-looking temperament, but even among the brains of Bell Labs, he was a man apart. ... It was Shannon who made the final synthesis, who defined the concept of information and effectively solved the problem of noise. It was Shannon who was credited with gathering the threads into a new science. But he had important predecessors at Bell Labs, two engineers who had shaped his thinking since he discovered their work as an undergraduate at the University of Michigan, who were the first to consider how information might be put on a scientific footing, and whom Shannon’s landmark paper singled out as pioneers.