Thursday, March 29, 2007

How Not to Talk to Your Kids


The Inverse Power of Praise.



What do we make of a boy like Thomas?

Thomas (his middle name) is a fifth-grader at the highly competitive P.S. 334, the Anderson School on West 84th. Slim as they get, Thomas recently had his long sandy-blond hair cut short to look like the new James Bond (he took a photo of Daniel Craig to the barber). Unlike Bond, he prefers a uniform of cargo pants and a T-shirt emblazoned with a photo of one of his heroes: Frank Zappa. Thomas hangs out with five friends from the Anderson School. They are “the smart kids.” Thomas’s one of them, and he likes belonging.

Since Thomas could walk, he has heard constantly that he’s smart. Not just from his parents but from any adult who has come in contact with this precocious child. When he applied to Anderson for kindergarten, his intelligence was statistically confirmed. The school is reserved for the top one percent of all applicants, and an IQ test is required. Thomas didn’t just score in the top one percent. He scored in the top one percent of the top one percent.

But as Thomas has progressed through school, this self-awareness that he’s smart hasn’t always translated into fearless confidence when attacking his schoolwork. In fact, Thomas’s father noticed just the opposite. “Thomas didn’t want to try things he wouldn’t be successful at,” his father says. “Some things came very quickly to him, but when they didn’t, he gave up almost immediately, concluding, ‘I’m not good at this.’ ” With no more than a glance, Thomas was dividing the world into two—things he was naturally good at and things he wasn’t.

For instance, in the early grades, Thomas wasn’t very good at spelling, so he simply demurred from spelling out loud. When Thomas took his first look at fractions, he balked. The biggest hurdle came in third grade. He was supposed to learn cursive penmanship, but he wouldn’t even try for weeks. By then, his teacher was demanding homework be completed in cursive. Rather than play catch-up on his penmanship, Thomas refused outright. Thomas’s father tried to reason with him. “Look, just because you’re smart doesn’t mean you don’t have to put out some effort.” (Eventually, he mastered cursive, but not without a lot of cajoling from his father.)

Why does this child, who is measurably at the very top of the charts, lack confidence about his ability to tackle routine school challenges?

Thomas is not alone. For a few decades, it’s been noted that a large percentage of all gifted students (those who score in the top 10 percent on aptitude tests) severely underestimate their own abilities. Those afflicted with this lack of perceived competence adopt lower standards for success and expect less of themselves. They underrate the importance of effort, and they overrate how much help they need from a parent.

When parents praise their children’s intelligence, they believe they are providing the solution to this problem. According to a survey conducted by Columbia University, 85 percent of American parents think it’s important to tell their kids that they’re smart. In and around the New York area, according to my own (admittedly nonscientific) poll, the number is more like 100 percent. Everyone does it, habitually. The constant praise is meant to be an angel on the shoulder, ensuring that children do not sell their talents short.

But a growing body of research—and a new study from the trenches of the New York public-school system—strongly suggests it might be the other way around. Giving kids the label of “smart” does not prevent them from underperforming. It might actually be causing it.

For the past ten years, psychologist Carol Dweck and her team at Columbia (she’s now at Stanford) have studied the effect of praise on students in a dozen New York schools. Her seminal work—a series of experiments on 400 fifth-graders—paints the picture most clearly.

Dweck sent four female research assistants into New York fifth-grade classrooms. The researchers would take a single child out of the classroom for a nonverbal IQ test consisting of a series of puzzles—puzzles easy enough that all the children would do fairly well. Once the child finished the test, the researchers told each student his score, then gave him a single line of praise. Randomly divided into groups, some were praised for their intelligence. They were told, “You must be smart at this.” Other students were praised for their effort: “You must have worked really hard.”

Why just a single line of praise? “We wanted to see how sensitive children were,” Dweck explained. “We had a hunch that one line might be enough to see an effect.”

Then the students were given a choice of test for the second round. One choice was a test that would be more difficult than the first, but the researchers told the kids that they’d learn a lot from attempting the puzzles. The other choice, Dweck’s team explained, was an easy test, just like the first. Of those praised for their effort, 90 percent chose the harder set of puzzles. Of those praised for their intelligence, a majority chose the easy test. The “smart” kids took the cop-out.



Why did this happen? “When we praise children for their intelligence,” Dweck wrote in her study summary, “we tell them that this is the name of the game: Look smart, don’t risk making mistakes.” And that’s what the fifth-graders had done: They’d chosen to look smart and avoid the risk of being embarrassed.

In a subsequent round, none of the fifth-graders had a choice. The test was difficult, designed for kids two years ahead of their grade level. Predictably, everyone failed. But again, the two groups of children, divided at random at the study’s start, responded differently. Those praised for their effort on the first test assumed they simply hadn’t focused hard enough on this test. “They got very involved, willing to try every solution to the puzzles,” Dweck recalled. “Many of them remarked, unprovoked, ‘This is my favorite test.’ ” Not so for those praised for their smarts. They assumed their failure was evidence that they weren’t really smart at all. “Just watching them, you could see the strain. They were sweating and miserable.”

Having artificially induced a round of failure, Dweck’s researchers then gave all the fifth-graders a final round of tests that were engineered to be as easy as the first round. Those who had been praised for their effort significantly improved on their first score—by about 30 percent. Those who’d been told they were smart did worse than they had at the very beginning—by about 20 percent.

Dweck had suspected that praise could backfire, but even she was surprised by the magnitude of the effect. “Emphasizing effort gives a child a variable that they can control,” she explains. “They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.”

In follow-up interviews, Dweck discovered that those who think that innate intelligence is the key to success begin to discount the importance of effort. I am smart, the kids’ reasoning goes; I don’t need to put out effort. Expending effort becomes stigmatized—it’s public proof that you can’t cut it on your natural gifts.

Repeating her experiments, Dweck found this effect of praise on performance held true for students of every socioeconomic class. It hit both boys and girls—the very brightest girls especially (they collapsed the most following failure). Even preschoolers weren’t immune to the inverse power of praise.

Jill Abraham is a mother of three in Scarsdale, and her view is typical of those in my straw poll. I told her about Dweck’s research on praise, and she flatly wasn’t interested in brief tests without long-term follow-up. Abraham is one of the 85 percent who think praising her children’s intelligence is important. Her kids are thriving, so she’s proved that praise works in the real world. “I don’t care what the experts say,” Jill says defiantly. “I’m living it.”

Even those who’ve accepted the new research on praise have trouble putting it into practice. Sue Needleman is both a mother of two and an elementary-school teacher with eleven years’ experience. Last year, she was a fourth-grade teacher at Ridge Ranch Elementary in Paramus, New Jersey. She has never heard of Carol Dweck, but the gist of Dweck’s research has trickled down to her school, and Needleman has learned to say, “I like how you keep trying.” She tries to keep her praise specific, rather than general, so that a child knows exactly what she did to earn the praise (and thus can get more). She will occasionally tell a child, “You’re good at math,” but she’ll never tell a child he’s bad at math.

But that’s at school, as a teacher. At home, old habits die hard. Her 8-year-old daughter and her 5-year-old son are indeed smart, and sometimes she hears herself saying, “You’re great. You did it. You’re smart.” When I press her on this, Needleman says that what comes out of academia often feels artificial. “When I read the mock dialogues, my first thought is, Oh, please. How corny.”

No such qualms exist for teachers at the Life Sciences Secondary School in East Harlem, because they’ve seen Dweck’s theories applied to their junior-high students. Last week, Dweck and her protégée, Lisa Blackwell, published a report in the academic journal Child Development about the effect of a semester-long intervention conducted to improve students’ math scores.

Life Sciences is a health-science magnet school with high aspirations but 700 students whose main attributes are being predominantly minority and low achieving. Blackwell split her kids into two groups for an eight-session workshop. The control group was taught study skills, and the others got study skills and a special module on how intelligence is not innate. These students took turns reading aloud an essay on how the brain grows new neurons when challenged. They saw slides of the brain and acted out skits. “Even as I was teaching these ideas,” Blackwell noted, “I would hear the students joking, calling one another ‘dummy’ or ‘stupid.’ ” After the module was concluded, Blackwell tracked her students’ grades to see if it had any effect.



It didn’t take long. The teachers—who hadn’t known which students had been assigned to which workshop—could pick out the students who had been taught that intelligence can be developed. They improved their study habits and grades. In a single semester, Blackwell reversed the students’ longtime trend of decreasing math grades.

The only difference between the control group and the test group was two lessons, a total of 50 minutes spent teaching not math but a single idea: that the brain is a muscle. Giving it a harder workout makes you smarter. That alone improved their math scores.

“These are very persuasive findings,” says Columbia’s Dr. Geraldine Downey, a specialist in children’s sensitivity to rejection. “They show how you can take a specific theory and develop a curriculum that works.” Downey’s comment is typical of what other scholars in the field are saying. Dr. Mahzarin Banaji, a Harvard social psychologist who is an expert in stereotyping, told me, “Carol Dweck is a flat-out genius. I hope the work is taken seriously. It scares people when they see these results.”

Since the 1969 publication of The Psychology of Self-Esteem, in which Nathaniel Branden opined that self-esteem was the single most important facet of a person, the belief that one must do whatever he can to achieve positive self-esteem has become a movement with broad societal effects. Anything potentially damaging to kids’ self-esteem was axed. Competitions were frowned upon. Soccer coaches stopped counting goals and handed out trophies to everyone. Teachers threw out their red pencils. Criticism was replaced with ubiquitous, even undeserved, praise.

Dweck and Blackwell’s work is part of a larger academic challenge to one of the self-esteem movement’s key tenets: that praise, self-esteem, and performance rise and fall together. From 1970 to 2000, there were over 15,000 scholarly articles written on self-esteem and its relationship to everything—from sex to career advancement. But results were often contradictory or inconclusive. So in 2003 the Association for Psychological Science asked Dr. Roy Baumeister, then a leading proponent of self-esteem, to review this literature. His team concluded that self-esteem research was polluted with flawed science. Only 200 of those 15,000 studies met their rigorous standards.

After reviewing those 200 studies, Baumeister concluded that having high self-esteem didn’t improve grades or career achievement. It didn’t even reduce alcohol usage. And it especially did not lower violence of any sort. (Highly aggressive, violent people happen to think very highly of themselves, debunking the theory that people are aggressive to make up for low self-esteem.) At the time, Baumeister was quoted as saying that his findings were “the biggest disappointment of my career.”

Now he’s on Dweck’s side of the argument, and his work is going in a similar direction: He will soon publish an article showing that for college students on the verge of failing in class, esteem-building praise causes their grades to sink further. Baumeister has come to believe the continued appeal of self-esteem is largely tied to parents’ pride in their children’s achievements: It’s so strong that “when they praise their kids, it’s not that far from praising themselves.”

By and large, the literature on praise shows that it can be effective—a positive, motivating force. In one study, University of Notre Dame researchers tested praise’s efficacy on a losing college hockey team. The experiment worked: The team got into the playoffs. But all praise is not equal—and, as Dweck demonstrated, the effects of praise can vary significantly depending on the praise given. To be effective, researchers have found, praise needs to be specific. (The hockey players were specifically complimented on the number of times they checked an opponent.)

Sincerity of praise is also crucial. Just as we can sniff out the true meaning of a backhanded compliment or a disingenuous apology, children, too, scrutinize praise for hidden agendas. Only young children—under the age of 7—take praise at face value: Older children are just as suspicious of it as adults.

Psychologist Wulf-Uwe Meyer, a pioneer in the field, conducted a series of studies where children watched other students receive praise. According to Meyer’s findings, by the age of 12, children believe that earning praise from a teacher is not a sign you did well—it’s actually a sign you lack ability and the teacher thinks you need extra encouragement. And teens, Meyer found, discounted praise to such an extent that they believed it’s a teacher’s criticism—not praise at all—that really conveys a positive belief in a student’s aptitude.

In the opinion of cognitive scientist Daniel T. Willingham, a teacher who praises a child may be unwittingly sending the message that the student reached the limit of his innate ability, while a teacher who criticizes a pupil conveys the message that he can improve his performance even further.

New York University professor of psychiatry Judith Brook explains that the issue for parents is one of credibility. “Praise is important, but not vacuous praise,” she says. “It has to be based on a real thing—some skill or talent they have.” Once children hear praise they interpret as meritless, they discount not just the insincere praise, but sincere praise as well.

Scholars from Reed College and Stanford reviewed over 150 praise studies. Their meta-analysis determined that praised students become risk-averse and lack perceived autonomy. The scholars found consistent correlations between a liberal use of praise and students’ “shorter task persistence, more eye-checking with the teacher, and inflected speech such that answers have the intonation of questions.”

Dweck’s research on overpraised kids strongly suggests that image maintenance becomes their primary concern—they are more competitive and more interested in tearing others down. A raft of very alarming studies illustrate this.

In one, students are given two puzzle tests. Between the first and the second, they are offered a choice between learning a new puzzle strategy for the second test or finding out how they did compared with other students on the first test: They have only enough time to do one or the other. Students praised for intelligence choose to find out their class rank, rather than use the time to prepare.

In another, students get a do-it-yourself report card and are told these forms will be mailed to students at another school—they’ll never meet these students and don’t know their names. Of the kids praised for their intelligence, 40 percent lie, inflating their scores. Of the kids praised for effort, few lie.

When students transition into junior high, some who’d done well in elementary school inevitably struggle in the larger and more demanding environment. Those who equated their earlier success with their innate ability surmise they’ve been dumb all along. Their grades never recover because the likely key to their recovery—increasing effort—they view as just further proof of their failure. In interviews many confess they would “seriously consider cheating.”

Students turn to cheating because they haven’t developed a strategy for handling failure. The problem is compounded when a parent ignores a child’s failures and insists he’ll do better next time. Michigan scholar Jennifer Crocker studies this exact scenario and explains that the child may come to believe failure is something so terrible, the family can’t acknowledge its existence. A child deprived of the opportunity to discuss mistakes can’t learn from them.

My son, Luke, is in kindergarten. He seems supersensitive to the potential judgment of his peers. Luke justifies it by saying, “I’m shy,” but he’s not really shy. He has no fear of strange cities or talking to strangers, and at his school, he has sung in front of large audiences. Rather, I’d say he’s proud and self-conscious. His school has simple uniforms (navy T-shirt, navy pants), and he loves that his choice of clothes can’t be ridiculed, “because then they’d be teasing themselves too.”

After reading Carol Dweck’s research, I began to alter how I praised him, but not completely. I suppose my hesitation was that the mind-set Dweck wants students to have—a firm belief that the way to bounce back from failure is to work harder—sounds awfully clichéd: Try, try again.

But it turns out that the ability to repeatedly respond to failure by exerting more effort—instead of simply giving up—is a trait well studied in psychology. People with this trait, persistence, rebound well and can sustain their motivation through long periods of delayed gratification. Delving into this research, I learned that persistence turns out to be more than a conscious act of will; it’s also an unconscious response, governed by a circuit in the brain. Dr. Robert Cloninger at Washington University in St. Louis located the circuit in a part of the brain called the orbital and medial prefrontal cortex. It monitors the reward center of the brain, and like a switch, it intervenes when there’s a lack of immediate reward. When it switches on, it’s telling the rest of the brain, “Don’t stop trying. There’s dopa [the brain’s chemical reward for success] on the horizon.” While putting people through MRI scans, Cloninger could see this switch lighting up regularly in some. In others, barely at all.

What makes some people wired to have an active circuit?

Cloninger has trained rats and mice in mazes to have persistence by carefully not rewarding them when they get to the finish. “The key is intermittent reinforcement,” says Cloninger. The brain has to learn that frustrating spells can be worked through. “A person who grows up getting too frequent rewards will not have persistence, because they’ll quit when the rewards disappear.”

That sold me. I’d thought “praise junkie” was just an expression—but suddenly, it seemed as if I could be setting up my son’s brain for an actual chemical need for constant reward.

What would it mean, to give up praising our children so often? Well, if I am one example, there are stages of withdrawal, each of them subtle. In the first stage, I fell off the wagon around other parents when they were busy praising their kids. I didn’t want Luke to feel left out. I felt like a former alcoholic who continues to drink socially. I became a Social Praiser.

Then I tried to use the specific-type praise that Dweck recommends. I praised Luke, but I attempted to praise his “process.” This was easier said than done. What are the processes that go on in a 5-year-old’s mind? In my impression, 80 percent of his brain processes lengthy scenarios for his action figures.

But every night he has math homework and is supposed to read a phonics book aloud. Each takes about five minutes if he concentrates, but he’s easily distracted. So I praised him for concentrating without asking to take a break. If he listened to instructions carefully, I praised him for that. After soccer games, I praised him for looking to pass, rather than just saying, “You played great.” And if he worked hard to get to the ball, I praised the effort he applied.

Just as the research promised, this focused praise helped him see strategies he could apply the next day. It was remarkable how effective this new form of praise was.

Truth be told, while my son was getting along fine under the new praise regime, it was I who was suffering. It turns out that I was the real praise junkie in the family. Praising him for just a particular skill or task felt like I left other parts of him ignored and unappreciated. I recognized that praising him with the universal “You’re great—I’m proud of you” was a way I expressed unconditional love.

Offering praise has become a sort of panacea for the anxieties of modern parenting. Out of our children’s lives from breakfast to dinner, we turn it up a notch when we get home. In those few hours together, we want them to hear the things we can’t say during the day—We are in your corner, we are here for you, we believe in you.

In a similar way, we put our children in high-pressure environments, seeking out the best schools we can find, then we use the constant praise to soften the intensity of those environments. We expect so much of them, but we hide our expectations behind constant glowing praise. The duplicity became glaring to me.

Eventually, in my final stage of praise withdrawal, I realized that not telling my son he was smart meant I was leaving it up to him to make his own conclusion about his intelligence. Jumping in with praise is like jumping in too soon with the answer to a homework problem—it robs him of the chance to make the deduction himself.

But what if he makes the wrong conclusion?

Can I really leave this up to him, at his age?

I’m still an anxious parent. This morning, I tested him on the way to school: “What happens to your brain, again, when it gets to think about something hard?”

“It gets bigger, like a muscle,” he responded, having aced this one before.

Additional reporting by Ashley Merryman

Thursday, March 22, 2007

Mixed Feelings

Wired, Issue 15.04 - April 2007

See with your tongue. Navigate with your skin. Fly by the seat of your pants (literally). How researchers can tap the plasticity of the brain to hack our 5 senses — and build a few new ones.
By Sunny Bains

For six weird weeks in the fall of 2004, Udo Wächter had an unerring sense of direction. Every morning after he got out of the shower, Wächter, a sysadmin at the University of Osnabrück in Germany, put on a wide beige belt lined with 13 vibrating pads — the same weight-and-gear modules that make a cell phone judder. On the outside of the belt were a power supply and a sensor that detected Earth's magnetic field. Whichever buzzer was pointing north would go off. Constantly.
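
To picture the mechanics, here is a minimal Python sketch of the mapping such a belt needs: compass heading in, pad index out. The 13-pad clockwise layout, the function name, and the rounding are assumptions for illustration; the article doesn't describe the actual firmware.

    # Hypothetical sketch: which of 13 belt pads currently points north?
    # Assumes pads are evenly spaced clockwise, pad 0 at the wearer's front.
    NUM_PADS = 13
    PAD_SPACING_DEG = 360.0 / NUM_PADS

    def pad_pointing_north(heading_deg: float) -> int:
        """heading_deg: the direction the wearer faces, clockwise from north."""
        # North sits (360 - heading) degrees clockwise from the wearer's front.
        north_relative = (360.0 - heading_deg) % 360.0
        return round(north_relative / PAD_SPACING_DEG) % NUM_PADS

    for heading in (0, 90, 180, 270):
        print(heading, pad_pointing_north(heading))  # 0 -> 0, 90 -> 10, ...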

"It was slightly strange at first," Wächter says, "though on the bike, it was great." He started to become more aware of the peregrinations he had to make while trying to reach a destination. "I finally understood just how much roads actually wind," he says. He learned to deal with the stares he got in the library, his belt humming like a distant chain saw. Deep into the experiment, Wächter says, "I suddenly realized that my perception had shifted. I had some kind of internal map of the city in my head. I could always find my way home. Eventually, I felt I couldn't get lost, even in a completely new place."

The effects of the "feelSpace belt" — as its inventor, Osnabrück cognitive scientist Peter König, dubbed the device — became even more profound over time. König says while he wore it he was "intuitively aware of the direction of my home or my office. I'd be waiting in line in the cafeteria and spontaneously think: I live over there." On a visit to Hamburg, about 100 miles away, he noticed that he was conscious of the direction of his hometown. Wächter felt the vibration in his dreams, moving around his waist, just like when he was awake.

Direction isn't something humans can detect innately. Some birds can, of course, and for them it's no less important than taste or smell are for us. In fact, lots of animals have cool, "extra" senses. Sunfish see polarized light. Loggerhead turtles feel Earth's magnetic field. Bonnethead sharks detect subtle changes (less than a nanovolt) in small electrical fields. And other critters have heightened versions of familiar senses — bats hear frequencies outside our auditory range, and some insects see ultraviolet light.

We humans get just the five. But why? Can our senses be modified? Expanded? Given the right prosthetics, could we feel electromagnetic fields or hear ultrasound? The answers to these questions, according to researchers at a handful of labs around the world, appear to be yes.

It turns out that the tricky bit isn't the sensing. The world is full of gadgets that detect things humans cannot. The hard part is processing the input. Neuroscientists don't know enough about how the brain interprets data. The science of plugging things directly into the brain — artificial retinas or cochlear implants — remains primitive.

So here's the solution: Figure out how to change the sensory data you want — the electromagnetic fields, the ultrasound, the infrared — into something that the human brain is already wired to accept, like touch or sight. The brain, it turns out, is dramatically more flexible than anyone previously thought, as if we had unused sensory ports just waiting for the right plug-ins. Now it's time to build them.

How do we sense the world around us? It seems like a simple question. Eyes collect photons of certain wavelengths, transduce them into electrical signals, and send them to the brain. Ears do the same thing with vibrations in the air — sound waves. Touch receptors pick up pressure, heat, cold, pain. Smell: chemicals contacting receptors inside the nose. Taste: buds of cells on the tongue.

There's a reasonably well-accepted sixth sense (or fifth and a half, at least) called proprioception. A network of nerves, in conjunction with the inner ear, tells the brain where the body and all its parts are and how they're oriented. This is how you know when you're upside down, or how you can tell the car you're riding in is turning, even with your eyes closed.

When computers sense the world, they do it in largely the same way we do. They have some kind of peripheral sensor, built to pick up radiation, let's say, or sound, or chemicals. The sensor is connected to a transducer that can change analog data about the world into electrons, bits, a digital form that computers can understand — like recording live music onto a CD. The transducer then pipes the converted data into the computer.

But before all that happens, programmers and engineers make decisions about what data is important and what isn't. They know the bandwidth and the data rate the transducer and computer are capable of, and they constrain the sensor to provide only the most relevant information. The computer can "see" only what it's been told to look for.
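
To make that concrete, here is a toy Python sketch of the pipeline just described: sensor, transducer, and the programmer's constraint. Every name and number is invented; only the shape of the pipeline comes from the text.

    # Toy machine-sensing pipeline: sample -> digitize -> constrain.
    def transduce(analog_value: float, bits: int = 8) -> int:
        """Quantize an analog reading in [0, 1] to one of 2**bits levels."""
        levels = 2 ** bits
        return min(int(analog_value * levels), levels - 1)

    def constrain(samples, lo: int, hi: int):
        """Keep only the readings the engineers decided were relevant."""
        return [s for s in samples if lo <= s <= hi]

    raw = [0.02, 0.45, 0.51, 0.97]         # analog samples from the sensor
    digital = [transduce(r) for r in raw]  # [5, 115, 130, 248]
    seen = constrain(digital, 100, 200)    # the computer "sees" only [115, 130]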

The brain, by contrast, has to integrate all kinds of information from all five and a half senses all the time, and then generate a complete picture of the world. So it's constantly making decisions about what to pay attention to, what to generalize or approximate, and what to ignore. In other words, it's flexible.

In February, for example, a team of German researchers confirmed that the auditory cortex of macaques can process visual information. Similarly, our visual cortex can accommodate all sorts of altered data. More than 50 years ago, Austrian researcher Ivo Kohler gave people goggles that severely distorted their vision: The lenses turned the world upside down. After several weeks, subjects adjusted — their vision was still tweaked, but their brains were processing the images so they'd appear normal. In fact, when people took the glasses off at the end of the trial, everything seemed to move and distort in the opposite way.

Later, in the '60s and '70s, Harvard neurobiologists David Hubel and Torsten Wiesel figured out that visual input at a certain critical age helps animals develop a functioning visual cortex (the pair shared a 1981 Nobel Prize for their work). But it wasn't until the late '90s that researchers realized the adult brain was just as changeable, that it could redeploy neurons by forming new synapses, remapping itself. That property is called neuroplasticity.

This is really good news for people building sensory prosthetics, because it means that the brain can change how it interprets information from a particular sense, or take information from one sense and interpret it with another. In other words, you can use whatever sensor you want, as long as you convert the data it collects into a form the human brain can absorb.

Paul Bach-y-Rita built his first "tactile display" in the 1960s. Inspired by the plasticity he saw in his father as the older man recovered from a stroke, Bach-y-Rita wanted to prove that the brain could assimilate disparate types of information. So he installed a 20-by-20 array of metal rods in the back of an old dentist chair. The ends of the rods were the pixels — people sitting in the chairs could identify, with great accuracy, "pictures" poked into their backs; they could, in effect, see the images with their sense of touch.

By the 1980s, Bach-y-Rita's team of neuroscientists — now located at the University of Wisconsin — was working on a much more sophisticated version of the chair. Bach-y-Rita died last November, but his lab and the company he cofounded, Wicab, are still using touch to carry new sensory information. Having long ago abandoned the vaguely Marathon Man-like dentist chair, the team now uses a mouthpiece studded with 144 tiny electrodes. It's attached by ribbon cable to a pulse generator that induces electric current against the tongue. (As a sensing organ, the tongue has a lot going for it: nerves and touch receptors packed close together and bathed in a conducting liquid, saliva.)

So what kind of information could they pipe in? Mitch Tyler, one of Bach-y-Rita's closest research colleagues, literally stumbled upon the answer in 2000, when he got an inner ear infection. If you've had one of these (or a hangover), you know the feeling: Tyler's world was spinning. His semicircular canals — where the inner ear senses orientation in space — weren't working. "It was hell," he says. "I could stay upright only by fixating on distant objects." Struggling into work one day, he realized that the tongue display might be able to help.

The team attached an accelerometer to the pulse generator, which they programmed to produce a tiny square. Stay upright and you feel the square in the center of your tongue; move to the right or left and the square moves in that direction, too. In this setup, the accelerometer is the sensor and the combination of mouthpiece and tongue is the transducer, the doorway into the brain.
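
A minimal sketch of that mapping, assuming a 12-by-12 grid for the mouthpiece's 144 electrodes and a two-axis tilt reading in [-1, 1]; the scaling is invented, since the actual firmware isn't described here.

    # Map a two-axis tilt reading onto a 12x12 electrode grid (144 electrodes).
    GRID = 12

    def square_position(tilt_x: float, tilt_y: float) -> tuple[int, int]:
        """Tilt of (0, 0) centers the felt square; leaning moves it."""
        def to_cell(t: float) -> int:
            cell = int((t + 1.0) / 2.0 * GRID)
            return max(0, min(GRID - 1, cell))  # clamp to the grid edge
        return to_cell(tilt_x), to_cell(tilt_y)

    print(square_position(0.0, 0.0))  # (6, 6): upright, square centered
    print(square_position(0.5, 0.0))  # (9, 6): leaning right, square moves right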

The researchers started testing the device on people with damaged inner ears. Not only did it restore their balance (presumably by giving them a data feed that was cleaner than the one coming from their semicircular canals) but the effects lasted even after they'd removed the mouthpiece — sometimes for hours or days.

The success of that balance therapy, now in clinical trials, led Wicab researchers to start thinking about other kinds of data they could pipe to the mouthpiece. During a long brainstorm session, they wondered whether the tongue could actually augment sight for the visually impaired. I tried the prototype; in a white-walled office strewn with spare electronics parts, Wicab neuroscientist Aimee Arnoldussen hung a plastic box the size of a brick around my neck and gave me the mouthpiece. "Some people hold it still, and some keep it moving like a lollipop," she said. "It's up to you."

Arnoldussen handed me a pair of blacked-out glasses with a tiny camera attached to the bridge. The camera was cabled to a laptop that would relay images to the mouthpiece. The look was pretty geeky, but the folks at the lab were used to it.

She turned it on. Nothing happened.

"Those buttons on the box?" she said. "They're like the volume controls for the image. You want to turn it up as high as you're comfortable."

I cranked up the voltage of the electric shocks to my tongue. It didn't feel bad, actually — like licking the leads on a really weak 9-volt battery. Arnoldussen handed me a long white foam cylinder and spun my chair toward a large black rectangle painted on the wall. "Move the foam against the black to see how it feels," she said.

I could see it. Feel it. Whatever — I could tell where the foam was. With Arnoldussen behind me carrying the laptop, I walked around the Wicab offices. I managed to avoid most walls and desks, scanning my head from side to side slowly to give myself a wider field of view, like radar. Thinking back on it, I don't remember the feeling of the electrodes on my tongue at all during my walkabout. What I remember are pictures: high-contrast images of cubicle walls and office doors, as though I'd seen them with my eyes. Tyler's group hasn't done the brain imaging studies to figure out why this is so — they don't know whether my visual cortex was processing the information from my tongue or whether some other region was doing the work.

I later tried another version of the technology meant for divers. It displayed a set of directional glyphs on my tongue intended to tell a diver which way to swim. A flashing triangle on the right would mean "turn right," vertical bars moving right meant "float right but keep going straight," and so on. At the University of Wisconsin lab, Tyler set me up with the prototype, a joystick, and a computer screen depicting a rudimentary maze. After a minute of bumping against the virtual walls, I asked Tyler to hide the maze window, closed my eyes, and successfully navigated two courses in 15 minutes. It was like I had something in my head magically telling me which way to go.

In the 1970s, the story goes, a Navy flight surgeon named Angus Rupert went skydiving nude. And on his way down, in (very) free fall, he realized that with his eyes closed, the only way he could tell he was plummeting toward earth was from the feel of the wind against his skin (well, that and the flopping). He couldn't sense gravity at all.

The experience gave Rupert the idea for the Tactical Situational Awareness System, a suitably macho name for a vest loaded with vibration elements, much like the feelSpace belt. But the TSAS doesn't tell you which way is north; it tells you which way is down.

In an airplane, the human proprioceptive system gets easily confused. A 1-g turn could set the plane perpendicular to the ground but still feel like straight and level flight. On a clear day, visual cues let the pilot's brain correct for errors. But in the dark, a pilot who misreads the plane's instruments can end up in a death spiral. Between 1990 and 2004, 11 percent of US Air Force crashes — and almost a quarter of crashes at night — resulted from spatial disorientation.

TSAS technology might fix that problem. At the University of Iowa's Operator Performance Laboratory, actually a hangar at a little airfield in Iowa City, director Tom Schnell showed me the next-generation garment, the Spatial Orientation Enhancement System.

First we set a baseline. Schnell sat me down in front of OPL's elaborate flight simulator and had me fly a couple of missions over some virtual mountains, trying to follow a "path" in the sky. I was awful — I kept oversteering. Eventually, I hit a mountain.

Then he brought out his SOES, a mesh of hard-shell plastic, elastic, and Velcro that fit over my arms and torso, strung with vibrating elements called tactile stimulators, or tactors. "The legs aren't working," Schnell said, "but they never helped much anyway."

Flight became intuitive. When the plane tilted to the right, my right wrist started to vibrate — then the elbow, and then the shoulder as the bank sharpened. It was like my arm was getting deeper and deeper into something. To level off, I just moved the joystick until the buzzing stopped. I closed my eyes so I could ignore the screen.
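
The cueing scheme is easy to sketch. Assuming three tactors per arm and made-up bank-angle thresholds (the article gives the wrist-elbow-shoulder progression but no numbers), the logic looks something like this:

    # SOES-style bank cue: the steeper the bank, the farther up the arm it buzzes.
    TACTORS = ["wrist", "elbow", "shoulder"]
    THRESHOLDS_DEG = [5.0, 20.0, 45.0]  # assumed values, not from the article

    def active_tactors(bank_deg: float) -> list[str]:
        """Positive bank = right; return the tactors that should vibrate."""
        side = "right" if bank_deg > 0 else "left"
        return [f"{side} {t}" for t, th in zip(TACTORS, THRESHOLDS_DEG)
                if abs(bank_deg) >= th]

    print(active_tactors(25.0))   # ['right wrist', 'right elbow']
    print(active_tactors(-50.0))  # ['left wrist', 'left elbow', 'left shoulder']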

Finally, Schnell set the simulator to put the plane into a dive. Even with my eyes open, he said, the screen wouldn't help me because the visual cues were poor. But with the vest, I never lost track of the plane's orientation. I almost stopped noticing the buzzing on my arms and chest; I simply knew where I was, how I was moving. I pulled the plane out.

When the original feelSpace experiment ended, Wächter, the sysadmin who started dreaming in north, says he felt lost; like the people wearing the weird goggles in those Austrian experiments, his brain had remapped in expectation of the new input. "Sometimes I would even get a phantom buzzing." He bought himself a GPS unit, which today he glances at obsessively. One woman was so dizzy and disoriented for her first two post-feelSpace days that her colleagues wanted to send her home from work. "My living space shrank quickly," says König. "The world appeared smaller and more chaotic."

I wore a feelSpace belt for just a day or so, not long enough to have my brain remapped. In fact, my biggest worry was that as a dark-complexioned person wearing a wide belt bristling with wires and batteries, I'd be mistaken for a suicide bomber in charming downtown Osnabrück.

The puzzling reactions of the longtime feelSpace wearers are characteristic of the problems researchers are bumping into as they play in the brain's cross-modal spaces. Nobody has done the imaging studies yet; the areas that integrate the senses are still unmapped.

Success is still a long way off. The current incarnations of sensory prosthetics are bulky and low-resolution — largely impractical. What the researchers working on this technology are looking for is something transparent, something that users can (safely) forget they're wearing. But sensor technology isn't the main problem. The trick will be to finally understand more about how the brain processes the information, even while seeing the world with many different kinds of eyes.

Sunny Bains (www.sunnybains.com/blog) wrote about self-repairing micromachines in issue 13.09.

Thursday, March 15, 2007

The Thinking Machine

Wired, Issue 15.03 - March 2007

Jeff Hawkins created the Palm Pilot and the Treo. Now he says he’s got the ultimate invention: software that mimics the human brain.
By Evan Ratliff

“When you are born, you know nothing.”

This is the kind of statement you expect to hear from a philosophy professor, not a Silicon Valley executive with a new company to pitch and money to make. Yet Jeff Hawkins drops this epistemological axiom while sitting at a coffee shop downstairs from his latest startup. A tall, rangy man who is almost implausibly cheerful, Hawkins created the Palm and Treo handhelds and cofounded Palm Computing and Handspring. His is the consummate high-tech success story, the brilliant, driven engineer who beat the critics to make it big. Now he’s about to unveil his entrepreneurial third act: a company called Numenta. But what Hawkins, 49, really wants to talk about — in fact, what he has really wanted to talk about for the past 30 years — isn’t gadgets or source code or market niches. It’s the human brain. Your brain. And today, the most important thing he wants you to know is that, at birth, your brain is completely clueless.

After a pause, he corrects himself. “You know a few basic things, like how to poop.” His point, though, is that your brain starts out with no actual understanding or even representation of the world or its objects. “You don’t know anything about tables and language and buildings and cars and computers,” he says, sweeping his hand to represent the world at large. “The brain has to, on its own, discover that these things are out there. To me,” he adds, “that’s a fascinating idea.”

It’s this fascination with the human mind that drove Hawkins, in the flush of his success with Palm, to create the nonprofit Redwood Neuroscience Institute and hire top neuroscientists to pursue a grand unifying theory of cognition. It drove him to write On Intelligence, the 2004 book outlining his theory of how the brain works. And it has driven him to what has been his intended destination all along: Numenta. Here, with longtime business partner Donna Dubinsky and 12 engineers, Hawkins has created an artificial intelligence program that he believes is the first software truly based on the principles of the human brain. Like your brain, the software is born knowing nothing. And like your brain, it learns from what it senses, builds a model of the world, and then makes predictions based on that model. The result, Hawkins says, is a thinking machine that will solve problems that humans find trivial but that have long confounded our computers — including, say, sight and robot locomotion.

Hawkins believes that his program, combined with the ever-faster computational power of digital processors, will also be able to solve massively complex problems by treating them just as an infant’s brain treats the world: as a stream of new sensory data to interpret. Feed information from an electrical power network into Numenta’s system and it builds its own virtual model of how that network operates. And just as a child learns that a glass dropped on concrete will break, the system learns to predict how that network will fail. In a few years, Hawkins boasts, such systems could capture the subtleties of everything from the stock market to the weather in a way that computers now can’t.

Numenta is close to issuing a “research release” of its platform, which has three main components: the core problem-solving engine, which works sort of like an operating system based on Hawkins’ theory of the cortex; a set of open source software tools; and the code for the learning algorithms themselves, which users can alter as long as they make their creations available to others. Numenta will earn its money by owning and licensing the basic platform, and Hawkins hopes a new industry will grow up around it, with companies customizing and reselling the intelligence in unexpected and dazzling ways. To Hawkins, the idea that we’re born knowing nothing leads to a technology that will be vastly more important than his Palm or Treo — and perhaps as lucrative.

But wait, your no-longer clueless brain is warning you, doesn’t this sound familiar? Indeed, Hawkins joins a long line of thinkers claiming to have unlocked the secrets of the mind and coded them into machines. So thoroughly have such efforts failed that AI researchers have largely given up the quest for the kind of general, humanlike intelligence that Hawkins describes. “There have been all those others,” he acknowledges, “the Decade of the Brain, the 5th Generation Computing Project in Japan, fuzzy logic, neural networks, all flavors of AI. Is this just another shot in the dark?” He lets the question hang for a moment. “No,” he says. “It’s quite different, and I can explain why.”



How Numenta’s Software IDs a Chopper
Scan and match

1) The system is shown a poor-quality image of a helicopter moving across a screen. It’s read by low-level nodes that each see a 4 x 4-pixel section of the image.
2) The low-level nodes pass the pattern they see up to the next level.
3) Intermediate nodes aggregate input from the low-level nodes to form shapes.
4) The top-level node compares the shapes against a library of objects and selects the best match.


Predict and refine
5) That info is passed back down to the intermediate-level nodes so they can better predict what shape they’ll see next.
6) Data from higher-up nodes allows the bottom nodes to clean up the image by ignoring pixels that don’t match the expected pattern (indicated above by an X). This entire process repeats until the image is crisp.
Greta Lorge

Jeff Hawkins grew up on Long Island, the son of a ceaseless inventor. While working at a company called Sperry Gyroscope in the 1960s, Robert Hawkins created the Sceptron, a device that could be used to (among other things) decode the noises of marine animals. It landed him on the cover of Weekly Reader magazine. “I was in the third grade, and there was my dad standing by this pool holding a microphone,” Hawkins recalls. “And a dolphin is sticking its nose out of the water, speaking into it.”

As a teenager, Hawkins became intrigued by the mysteries of human intelligence. But as he recounts in On Intelligence, cowritten with New York Times reporter Sandra Blakeslee, for 25 years he pursued, as an amateur, his dream to develop a theory of how the brain works and to create a machine that can mimic it. Rejected from graduate school at MIT, where he had hoped to enter the AI lab, he enrolled in the biophysics PhD program at UC Berkeley in the mid-1980s, only to drop out after the school refused to let him forgo lab work to pursue his own theories.

Instead, Hawkins found success in business at Intel, at GRiD Systems, and eventually at Palm and then Handspring. But all along, Hawkins says, his ultimate goal was to generate the resources to pursue his neuroscience research. Even while raising the first investments for Palm, he says, “I had to tell people, ‘I really want to work on brains.’” In 2002, he finally was able to focus on brain work. He founded the Redwood Neuroscience Institute, a small think tank that’s now part of UC Berkeley, and settled in to write his book.

It was while he was in his PhD program that Hawkins stumbled upon the central premise of On Intelligence: that prediction is the fundamental component of intelligence. In a flash of insight he had while wondering how he would react if a blue coffee cup suddenly appeared on his desk, he realized that the brain is not only constantly absorbing and storing information about its surroundings — the objects in a room, its sounds, brightness, temperature — but also making predictions about what it will encounter next.

On Intelligence elucidates this intelligence-as-prediction function, which Hawkins says derives almost entirely from the cortex, a portion of the brain that’s basically layers of neurons stacked on top of one another. In a highly simplified sense, the cortex acquires information (about letters on this page, for example) from our senses in fractional amounts, through a large number of neurons at the lowest level. Those inputs are fed upward to a higher layer of neurons, which make wider interpretations (about the patterns those letters form as words) and are then passed higher up the pyramid. Simultaneously, these interpretations travel back down, helping the lower-level neurons predict what they are about to experience next. Eventually, the cortex decodes the sentence you are seeing and the article you are reading.

Considering it came from an outsider, On Intelligence received surprising accolades from neuroscientists. What critics there were argued not that the book was wrong but that it rehashed old research. “Still,” says Michael Merzenich, a neuroscientist at UC San Francisco, “no one has expressed it in such a cogent way. Hawkins is damn clever.”

As Hawkins was writing On Intelligence, an electrical engineering graduate student named Dileep George was working part-time at the Redwood Neuroscience Institute, looking for a PhD topic involving the brain. He heard Hawkins lecture about the cortex and immediately thought that he might be able to re-create its processes in software.

George built his original demonstration program, a basic representation of the process used in the human visual cortex, over several weekends. Most modeling programs are linear; they process data and make calculations in one direction. But George designed multiple, parallel layers of nodes — each representing thousands of neurons in cortical columns and each a small program with its own ability to process information, remember patterns, and make predictions.

George and Hawkins called the new technology hierarchical temporal memory, or HTM. An HTM consists of a pyramid of nodes, each encoded with a set of statistical formulas. The whole HTM is pointed at a data set, and the nodes create representations of the world the data describes — whether a series of pictures or the temperature fluctuations of a river. The temporal label reflects the fact that in order to learn, an HTM has to be fed information with a time component — say, pictures moving across a screen or temperatures rising and falling over a week. Just as with the brain, the easiest way for an HTM to learn to identify an object is by recognizing that its elements — the four legs of a dog, the lines of a letter in the alphabet — are consistently found in similar arrangements. Other than that, an HTM is agnostic; it can form a model of just about any set of data it’s exposed to. And, just as your cortex can combine sound with vision to confirm that you are seeing a dog instead of a fox, HTMs can also be hooked together. Most important, Hawkins says, an HTM can do what humans start doing from birth but that computers never have: not just learn, but generalize.

At Numenta’s Menlo Park, California, offices one afternoon this winter, George showed off the latest version of his original picture-recognition demo. He had trained the HTM by feeding it a series of simple black-and-white pictures — dogs, coffee cups, helicopters — classified for the HTM into 91 categories and shown zigzagging all over the screen in randomly chosen directions. The nodes at the bottom level of the HTM sense a small fraction of each image, a four- by four-pixel patch that the node might assess as a single line or curve. That information is passed to the second-level node, which combines it with the output of other first-level nodes and calculates a probability, based on what it has seen before, that it is seeing a cockpit or a chopper blade. The highest-level node combines these predictions and then, like a helpful parent, tells the lower-level nodes what they’re seeing: a helicopter. The lower-level nodes then know, for example, that the fuzzy things they can’t quite make out are landing skids and that the next thing they see is more apt to be a rear rotor than a tennis racket handle.
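
As a rough illustration of that traffic pattern (evidence passed up, conclusions fed back down), here is a toy two-level hierarchy in Python. A real HTM node learns temporal groups and runs statistical belief propagation; everything here beyond the up-then-down message flow is invented for illustration.

    # Toy two-level hierarchy: children report features upward, the parent
    # classifies, and its verdict flows back down to resolve fuzzy patches.
    class ChildNode:
        def __init__(self):
            self.expectation = None  # top-down hint from the parent

        def see(self, patch: str) -> str:
            # Ambiguous input ("?") is resolved toward the parent's hint.
            if patch == "?" and self.expectation:
                return self.expectation
            return patch

    class ParentNode:
        LIBRARY = {"helicopter": ["blade", "cockpit", "skid"],
                   "dog": ["leg", "tail", "snout"]}

        def classify(self, features):
            # Pick the stored object sharing the most features with the report.
            return max(self.LIBRARY, key=lambda name:
                       len(set(self.LIBRARY[name]) & set(features)))

    children = [ChildNode() for _ in range(3)]
    parent = ParentNode()

    # Bottom-up pass: one patch is too fuzzy for its node to identify.
    features = [c.see(p) for c, p in zip(children, ["blade", "cockpit", "?"])]
    verdict = parent.classify(features)  # 'helicopter'

    # Top-down pass: the verdict tells the fuzzy node what to expect.
    children[2].expectation = "skid"
    features = [c.see(p) for c, p in zip(children, ["blade", "cockpit", "?"])]
    print(verdict, features)  # helicopter ['blade', 'cockpit', 'skid']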

On his laptop, George had several pictures the HTM had never seen before, images of highly distorted helicopters oriented in various directions. To human eyes, each was still easily recognizable. Computers, however, haven’t traditionally been able to handle such deviations from what they’ve been programmed to detect, which is why spambots are foiled by strings of fuzzy letters that humans easily type in. George clicked on a picture, and after a few seconds the program spit out the correct identification: helicopter. It also cleaned up the image, just as our visual cortex does when it turns the messy data arriving from our retinas into clear images in our mind. The HTM even seems to handle optical illusions much like the human cortex. When George showed his HTM a capital A without its central horizontal line, the software filled in the missing information, just as our brains would.

George’s results with images are impressive. But the challenge facing any machine intelligence is to expand such small-scale experiments to massively complex problems like complete visual scenes (say, a helicopter rescuing someone on top of a building) or the chaotic dynamics of weather. Tomaso Poggio, a computational neuroscientist at MIT’s McGovern Institute for Brain Research, says he’s intrigued by the theory but that “it would be nice to see a demonstration on more challenging problems.”

Similar criticism was aimed at the last promising AI technology supposedly based on the brain: neural networks. That technology rose to prominence in the 1980s. But despite some successes in pattern recognition, it never scaled to more complex problems. Hawkins argues that such networks have traditionally lacked “neuro-realism”: Although they use the basic principle of interconnected neurons, they don’t employ the information-processing hierarchy used by the cortex. Whereas HTMs continually pass information up and down a hierarchy, from large collections of nodes at the bottom to a few at the top and back down again, neural networks typically send information through their layers of nodes in one direction — and if they send information in both directions, it’s often just to train the system. In other words, while HTMs attempt to mimic the way the brain learns — for instance, by recognizing that the common elements of a car occur together — neural networks use static input, which prevents prediction.

Hawkins is relying on his own fidelity to the brain to overcome the scale problems of the past. “If you believe this is the mechanism that’s actually being used in the cortex — which I do,” he says, “then we know it can scale to a certain size because the cortex does. Now, I haven’t proven that. The proof comes in, you know, doing it.”

Unlike most startups, Numenta has no marketing department, nor even a discernible strategy for recruiting customers. But who needs marketing when you are deluged by daily emails — many from researchers, engineers, and executives who have read On Intelligence — asking for your technology?

Already, Numenta is in discussion with automakers who want to use HTMs in smart cars. Analyzing the data from a host of cameras and sensors inside and outside the car, the system would do what a human passenger can do if a driver’s eyelids droop or a car drifts from its lane: realize the driver is too drowsy and sound a warning.

Numenta is also working with Edsa Micro, a company that designs software to monitor power supplies for operations like offshore oil platforms and air-traffic controllers. The firm currently models a power system down to the smallest detail, ensuring it can continue operating during contingencies like power spikes and explosions. Edsa’s software also collects data from thousands of temperature, voltage, current, and other sensors. If that information could be analyzed in real time, it could signal potential power failures.

That kind of analysis is what the company expects Numenta’s software will do, and Edsa is now setting up customized HTMs for each of the electrical system’s “senses.” Engineers program the bottom-level nodes to accept information about the power system. After that, the HTM is fed Edsa’s historical sensory data, representing the electrical system’s normal state of affairs. When the system goes live, most likely in about a year, Edsa hopes it will be able to generalize the sensory data into an understanding of whether an electrical network is running smoothly or is overtaxed. If the latter, an HTM might send out a signal: Explosion risk high. “We’ve seen some incredible speed improvements,” says Adib Nasle, Edsa’s president, about the work done so far. “Some approaches, you give too many examples and they get dumber. HTM seems not to suffer from that. It’s pretty impressive.”

The graveyard of AI, of course, is littered with impressive technologies that died on the development table. One can’t help thinking of the Sceptron and of Hawkins’ father holding its microphone up to a dolphin’s snout. In 1964, Sceptron makers promised it would be “a self-programming, small-size, and potentially inexpensive device operating in real time to recognize complex frequency patterns” — a device that could someday be used to translate the language of dolphins.

Asked whether, given the fate of past AI promises, he has even the smallest doubt about the theories behind Numenta, Hawkins is unflinching. “Is the platform that we are shipping going to be around in 10 years? Probably not. Will the ideas be around? Absolutely.” He breaks into a wide smile. “The core principle, the hierarchical temporal memory component, I cannot imagine being wrong.”

Contributing editor Evan Ratliff (www.atavistic.org) wrote about machine translation in issue 14.12.