Abnormal neural activity recorded from the deep brain of Parkinson's disease and dystonia patients

Movement disorders such as Parkinson's disease and dystonia are caused by abnormal neural activity of the basal ganglia, located deep in the brain. The basal ganglia are connected to the cerebral cortex on the brain surface through complex neural circuits. Their basic structure and connections, as well as their dysfunction in movement disorders, have been examined extensively in experimental animals. In contrast, little is known about the far more complex human brain, in either its normal or diseased states.

An international joint research team led by Professor Toru Itakura and Assistant Professor Hiroki Nishibayashi from Wakayama Medical University, Japan, Professor Atsushi Nambu from the National Institute for Physiological Sciences, Japan, and Professor Hitoshi Kita from The University of Tennessee Health Science Center, TN, succeeded, for the first time, in recording cortically induced neural activity of the basal ganglia in patients with Parkinson's disease and dystonia during stereotactic neurosurgery for deep brain stimulation (DBS).

This research has been reported in the journal Movement Disorders.

With the consent of patients and based on the ethical guidelines of Wakayama Medical University, the team recorded the neural activity of the globus pallidus, one of the nuclei of the basal ganglia, and examined how that activity changed in response to stimulation of the primary motor cortex. Typical triphasic responses were observed in patients with Parkinson's disease, and enhanced inhibitory responses were observed in a dystonia patient. The results confirmed previous data obtained in experimental animals. These results suggest that: 1) cortically evoked neural responses in the basal ganglia can be useful for determining the target location of the DBS electrodes, and 2) enhanced inhibitory neural responses in the globus pallidus may cause the abnormal movements observed in dystonia.

This research was supported by Grants-in-Aid for Scientific Research from MEXT, Japan.


Journal Reference:

  1. Hiroki Nishibayashi, Mitsuhiro Ogura, Koji Kakishita, Satoshi Tanaka, Yoshihisa Tachibana, Atsushi Nambu, Hitoshi Kita, Toru Itakura. Cortically evoked responses of human pallidal neurons recorded during stereotactic neurosurgery. Movement Disorders, 2011; DOI: 10.1002/mds.23502

How do we combine faces and voices?

Human social interactions are shaped by our ability to recognise people. Faces and voices are known to be some of the key features that enable us to identify individuals, and they are rich in information, such as gender, age, and body size, that contributes to a person's unique identity. A large body of neuropsychological and neuroimaging research has already determined the various brain regions responsible for face recognition and voice recognition separately, but exactly how our brain goes about combining the two different types of information (visual and auditory) is still unknown.

Now a new study, published in the March 2011 issue of Elsevier's Cortex, has revealed the brain networks involved in this "cross-modal" person recognition.

A team of researchers in Belgium used functional magnetic resonance imaging (fMRI) to measure brain activity in 14 participants while they performed a task in which they recognised previously learned faces, voices, and voice-face associations. Dr Frédéric Joassin, Dr Salvatore Campanella, and colleagues compared the brain areas activated when recognising people using information from only their faces (visual areas), or only their voices (auditory areas), to those activated when using the combined information. They found that voice-face recognition activated specific "cross-modal" regions of the brain, located in areas known as the left angular gyrus and the right hippocampus. Further analysis also confirmed that the right hippocampus was connected to the separate visual and auditory areas of the brain.

Recognising a person from the combined information of their face and voice therefore relies not only on the same brain networks involved in using only visual or only auditory information, but also on brain regions associated with attention (left angular gyrus) and memory (hippocampus). According to the authors, the findings support a dynamic view of cross-modal interactions, in which the areas involved in processing both face and voice information are not simply the final stage of a hierarchical model, but rather may work in parallel and influence each other.


Journal Reference:

  1. Frédéric Joassin, Mauro Pesenti, Pierre Maurage, Emilie Verreckt, Raymond Bruyer, Salvatore Campanella. Cross-modal interactions between human faces and voices involved in person recognition. Cortex, 2011; 47 (3): 367 DOI: 10.1016/j.cortex.2010.03.003

Reading in two colors at the same time: Patterns of synesthesia brain activity revealed

People with synesthesia often report perceiving letters as appearing in different colors. But how do their brains accomplish this feat?

The Nobel prize-winning physicist Richard Feynman once wrote in his autobiographical book (What Do You Care What Other People Think?): "When I see equations, I see letters in colors — I don't know why […] And I wonder what the hell it must look like to the students." This neurological phenomenon is known to psychologists as synesthesia, and Feynman's experience of "seeing" the letters in color was a specific form known today as "grapheme-color" synesthesia. What is perhaps most puzzling about this condition is that people actually claim to see two colors simultaneously when reading letters or numbers: the real color of the ink (e.g. black) and an additional — synesthetic — color.

Now a new study, published in the March 2011 issue of Elsevier's Cortex, has revealed the patterns of brain activity that allow some people to experience the sensation of "seeing" two colors at the same time.

A group of researchers in Norway used functional magnetic resonance imaging (fMRI) to investigate the brain activity patterns of two grapheme-color synesthetes, as they looked at letters written in different colors, presented on a screen while inside an MRI scanner. The participants had previously been asked to indicate the synesthetic colors that they associated with given letters and were then presented with single letters whose physical color sometimes corresponded to the synesthetic color and other times was clearly different.

Prof. Bruno Laeng from the University of Oslo, along with colleagues Kenneth Hugdahl and Karsten Specht from the University of Bergen, reasoned that increasing the similarity between the physical and synesthetic colors should affect the level of activity in brain areas known to be important for color processing. Their results confirmed this expectation: the strength of the observed brain activity correlated with the similarity of the colors.

The authors concluded that the same brain areas that support the conscious experience of color also support the experience of synesthetic colors, allowing the two to be "seen" at the same time. This supports the view that the phenomenon of color synesthesia is perceptual in nature.


Journal Reference:

  1. Bruno Laeng, Kenneth Hugdahl, Karsten Specht. The neural correlate of colour distances revealed with competing synaesthetic and real colours. Cortex, 2011; 47 (3): 320 DOI: 10.1016/j.cortex.2009.09.004

Tobacco smoking impacts teens' brains, study shows

Tobacco smoking is the leading preventable cause of death and disease in the U.S., with more than 400,000 deaths each year attributable to smoking or its consequences. And yet teens still smoke. Indeed, smoking usually begins in the teen years, and approximately 80 percent of adult smokers became hooked by the time they were 18. Conversely, people who do not take up smoking as teens usually never do.

While studies have linked cigarette smoking to deficits in attention and memory in adults, UCLA researchers wanted to compare brain function in adolescent smokers and non-smokers, with a focus on the prefrontal cortex, the area of the brain that guides "executive functions" like decision-making and that is still developing structurally and functionally in adolescents.

They found a disturbing correlation: The greater a teen's addiction to nicotine, the less active the prefrontal cortex was, suggesting that smoking can affect brain function.

The research appears in the current online edition of the journal Neuropsychopharmacology.

The finding is obviously not good news for smokers, said the study's senior author, Edythe London, a professor of psychiatry at the Semel Institute for Neuroscience and Human Behavior at UCLA.

"As the prefrontal cortex continues to develop during the critical period of adolescence, smoking may influence the trajectory of brain development and affect the function of the prefrontal cortex," London said.

In the study, 25 smokers and 25 non-smokers between the ages of 15 and 21 were asked to perform a test that activated the prefrontal cortex and required them to inhibit a response.

The test, called the Stop-Signal Task (SST), was done while the participants were undergoing functional magnetic resonance imaging (fMRI). The Stop-Signal Task involves pressing a button as quickly as possible every time a lighted arrow appears — unless an auditory tone is played, in which case the participant must prevent himself from pressing the button. It is a test of a person's ability to inhibit an action.

Prior to the fMRI test, the researchers used the Heaviness of Smoking Index (HSI) to measure the level of nicotine dependence in the smoking group. The HSI takes into account how many cigarettes a teen smokes in a day and how soon after waking he or she takes the first smoke.
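As a point of reference, here is a minimal sketch of how an HSI score is typically computed. The 0-to-6 bands below are the commonly published cut-points and should be read as an illustration, not necessarily the exact scoring used in this study.

```python
# A minimal, assumed scoring sketch of the Heaviness of Smoking Index (HSI):
# two items, each scored 0-3, summed to a 0-6 dependence score.
def heaviness_of_smoking_index(cigs_per_day, minutes_to_first_cig):
    if cigs_per_day <= 10:
        cig_score = 0
    elif cigs_per_day <= 20:
        cig_score = 1
    elif cigs_per_day <= 30:
        cig_score = 2
    else:
        cig_score = 3

    if minutes_to_first_cig <= 5:
        time_score = 3
    elif minutes_to_first_cig <= 30:
        time_score = 2
    elif minutes_to_first_cig <= 60:
        time_score = 1
    else:
        time_score = 0

    return cig_score + time_score   # 0 (low dependence) to 6 (high dependence)

print(heaviness_of_smoking_index(15, 10))   # 15 cigarettes/day, first one 10 min after waking -> 3
```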

The results of the tests, London said, were interesting — and surprising. Among smokers, the researchers found that the higher the HSI — that is, the more a teen smoked — the lesser the activity in the prefrontal cortex. And yet, despite these lower levels of activation, the smoking group and the non-smoking group performed roughly the same with respect to inhibition on the Stop-Signal Task.

"The finding that there was little difference on the Stop-Signal Task between smokers and non-smokers was a surprise," said London, who is also a professor of molecular and medical pharmacology at the David Geffen School of Medicine at UCLA and a member of the UCLA Brain Research Institute. "That suggested to us that the motor response of smokers may be maintained through some kind of compensation from other brain areas."

Protracted development of the prefrontal cortex, and the immature cognitive control that comes with it, has been implicated as a cause of poor decision-making in teens, London said.

"Such an effect can influence the ability of youth to make rational decisions regarding their well-being, and that includes the decision to stop smoking," she said.

The key finding, London noted, is that "as the prefrontal cortex continues to develop during the critical period of adolescence, smoking may influence the trajectory of brain development, affecting the function of the prefrontal cortex. In turn, if the prefrontal cortex is negatively impacted, a teen may be more likely to start smoking and to keep smoking — instead of making the decision that would favor a healthier life."

On the other hand, the fact that adolescent smokers and non-smokers performed equally well during a response-inhibition test suggests that early interventions during the teen years may keep the occasional cigarette smoked in response to peer pressure from becoming an addiction in later adolescence.

In addition to London, study authors included lead author Adriana Galván, Christine M. Baker and Kristine M. McGlennen of UCLA, and Russell A. Poldrack, of the University of Texas at Austin.

Funding for this study was provided by Philip Morris USA, an endowment from the Thomas P. and Katherine K. Pike Chair in Addiction Studies, and a gift from the Marjorie M. Greene Trust. None of the sponsors had any involvement in the design, collection, analysis or interpretation of data, the writing of the manuscript, or the decision to submit the manuscript for publication.


Journal Reference:

  1. Adriana Galván, Russell A Poldrack, Christine M Baker, Kristine M McGlennen, Edythe D London. Neural Correlates of Response Inhibition and Cigarette Smoking in Late Adolescence. Neuropsychopharmacology, 2011; DOI: 10.1038/npp.2010.235

Popular psychology theories on self-esteem not backed up by serious research, study finds

Low self-esteem is associated with a greater risk of mental health problems such as eating disorders and depression. From a public health perspective, it is important for staff in various health-related professions to know about self-esteem. However, there is a vast difference between the research-based knowledge on self-esteem and the simplified popular psychology theories that are disseminated through books and motivational talks, reveals research from the University of Gothenburg.

Current popular psychology books distinguish between self-esteem and self-confidence. They also suggest that it is possible to improve self-esteem without any link to how competent people perceive themselves to be in areas they consider important.

This is in stark contrast to the results of a new study carried out by researcher Magnus Lindwall from the University of Gothenburg's Department of Psychology and colleagues from the UK, Turkey and Portugal.

"I think it's important that people have a more balanced idea of what self-esteem actually is," says Lindwall. "Our results show that self-esteem is generally linked most strongly to people's perceived competence in areas that they consider to be important."

The flip side is that the researchers show that people are most vulnerable to low self-esteem when they fail or feel less competent in areas that are important to them.

"Self-esteem is also closely linked to self-confidence and perceived competence in different areas, primarily those areas that a person considers to be important," says Lindwall.

The study builds on one of the dominant theories in the field, formulated over a hundred years ago by the American philosopher William James. It states that self-esteem is actually the result of perceived success, or competence, in an area relative to how important this area is.
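A minimal sketch of this importance-weighted idea is shown below. The rating scale, variable names and weighting scheme are illustrative assumptions, not the measures or statistics used in the published study.

```python
# A toy, importance-weighted self-perception score in the spirit of James's idea:
# competence in a domain counts for more when that domain matters to the person.
def weighted_self_perception(domains):
    """domains: list of (perceived_competence, importance) pairs, each rated 1-10 (assumed scale)."""
    total_importance = sum(importance for _, importance in domains)
    if total_importance == 0:
        return 0.0
    return sum(competence * importance for competence, importance in domains) / total_importance

# Failing where it matters drags the score down far more than failing where it doesn't.
print(weighted_self_perception([(3, 9), (8, 2)]))   # low competence in an important domain -> ~3.9
print(weighted_self_perception([(8, 9), (3, 2)]))   # high competence in an important domain -> ~7.1
```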

The current study, involving 1,831 university students from the four countries, focuses specifically on self-perception of the body — for example, how strong or fit the test subjects consider themselves to be and how attractive they believe their bodies to be.

"The results show that self-perception of the body, primarily in those areas that were considered to be important, is linked with general self-esteem," says Lindwall.

"In general, our study — along with plenty of other research in the field — paints a completely different and more complicated picture of self-esteem than that set out in best-selling popular psychology books," says Lindwall. "These books are often based on a person's own experiences and anecdotes rather than systematic research. Self-esteem is just not as simple as that, otherwise interest in the concept wouldn't be so great."

The results will be published in the Journal of Personality.


Journal Reference:

  1. Magnus Lindwall, F. Hülya Aşçı, Antonio Palmeira, Kenneth R. Fox, Martin S. Hagger. The Importance of Importance in the Physical Self: Support for the Theoretically Appealing but Empirically Elusive Model of James. Journal of Personality, 2010; DOI: 10.1111/j.1467-6494.2010.00678.x

Staring contests are automatic: People lock eyes to establish dominance

Imagine that you're in a bar and you accidentally knock over your neighbor's beer. He turns around and stares at you, looking for confrontation. Do you buy him a new drink, or do you try to outstare him to make him back off? New research published in Psychological Science suggests that the dominance behavior exhibited by staring someone down can be reflexive.

Our primate relatives certainly get into dominance battles; they mostly resolve the dominance hierarchy not through fighting, but through staring contests. And humans are like that, too. David Terburg, Nicole Hooiveld, Henk Aarts, J. Leon Kenemans, and Jack van Honk of the University of Utrecht in the Netherlands wanted to examine something that's been assumed in a lot of research: that staring for dominance is automatic for humans.

For the study, participants watched a computer screen while a series of colored ovals appeared. Below each oval were blue, green, and red dots; they were supposed to look away from the oval to the dot with the same color. What they didn't know was that for a split-second before the colored oval appeared, a face of the same color appeared, with either an angry, happy, or neutral expression. So the researchers were testing how long it took for people to look away from faces with different emotions. Participants also completed a questionnaire that reflected how dominant they were in social situations.

People who were more motivated to be dominant were also slower to look away from angry faces, while people who were motivated to seek rewards gazed at the happy faces longer. In other words, the assumptions were correct — for people who are dominant, engaging in gaze contests is a reflex.

"When people are dominant, they are dominant in a snap of a second," says Terburg. "From an evolutionary point of view, it's understandable — if you have a dominance motive, you can't have the reflex to look away from angry people; then you have already lost the gaze contest."

Your best bet in the bar, though, might just be to buy your neighbor a new beer.


Journal Reference:

  1. D. Terburg, N. Hooiveld, H. Aarts, J. L. Kenemans, J. van Honk. Eye Tracking Unconscious Face-to-Face Confrontations: Dominance Motives Prolong Gaze to Masked Angry Faces. Psychological Science, 2011; DOI: 10.1177/0956797611398492

Mind over matter: EECoG may finally allow enduring control of a prosthetic or a paralyzed arm by thought alone

Daniel Moran has dedicated his career to developing the best brain-computer interface, or BCI, he possibly can. His motivation is simple but compelling. "My sophomore year in high school," Moran says, "a good friend and I were on the varsity baseball team. I broke my arm and was out for the season. I was feeling sorry for myself when he slid into home plate head first and broke his neck.

"So I knew what I wanted to do when I was 15 years old, and all my career is just based on that."

Moran, PhD, associate professor of biomedical engineering and neurobiology in the School of Engineering & Applied Science at Washington University in St. Louis, is young enough that his career has coincided with the rapid development of the field of brain interfaces. When he began, scientists struggled to achieve lasting control over the movement of a cursor in two dimensions. These days, his aspirational goal is mind control of the nerves and muscles in a paralyzed arm.

A typical primate arm uses 38 independent muscles to control the positions of the shoulder and elbow joints, the forearm and the wrist. To fully control the arm, a BCI system would need 38 independent control channels.

The latest from Moran's lab

There are four types of brain-computer interfaces: EEGs, where the electrodes are outside the skull; microelectrodes, where the electrodes are inserted into the brain; ECoGs, grids of disk-like electrodes that lie directly on the brain; and, Moran's choice, EECoGs, grids of disk-like electrodes that lie inside the skull but outside the dura mater, a membrane that covers and protects the brain.

Moran has just completed a set of experiments with MD/PhD student Adam Rouse to define the minimum spacing between the EECoG electrodes that preserves the independence of control channels. Together with Justin Williams at the University of Wisconsin, he has built a 32-channel EECoG grid small enough to fit within the boundaries of the sensorimotor cortex of the brain.

His next step is to slip the thin, flexible grid under a macaque's skull and to train the monkey to control — strictly by thinking about it — a computational model of a macaque arm that he published in the Journal of Neural Engineering in 2006.

This might sound like science fiction, but in 2006, Moran and his long-time collaborator Eric Leuthardt, MD, a Washington University neurosurgeon at Barnes-Jewish Hospital, demonstrated that a young patient, in the hospital for surgery to treat intractable epilepsy, could play the video game Space Invaders just by thinking about it.

Of course the virtual arm is a much more ambitious project. Only two degrees of freedom (two independent control channels) are required to move the Space Invaders' cursor in a two-dimensional plane.

The arm, on the other hand, will have seven degrees of freedom, including rotation about the shoulder joint, flexion and extension of the elbow, pronation and supination of the lower forelimb, and flexion, extension, abduction and adduction of the wrist.

(The monkey will not be harmed in this experiment, but instead will be persuaded by a virtual reality simulator into treating the virtual arm as though it were its own.)

Using the virtual arm, Moran showed that the classic task that has been used to study motor control for 20 years, called center-out reaching, does not adequately separate out the control signals that add up to an arm motion, making it difficult to determine which part of the brain is controlling which element of the motion.

So the monkey will instead be asked to trace with its virtual hand three circles that intersect in space at 45 degrees to one another, like interlocked embroidery hoops. Because this task better separates degrees of freedom, it will make it easier for the scientists to map cortical activity to details of movement, such as joint angular velocity or hand velocity.
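The article does not spell out a decoding algorithm, but the mapping step is typically some form of regression from neural features to kinematics. The sketch below uses a generic ridge-regression decoder on synthetic data to show the idea; the 32-channel count is simply borrowed from the grid described earlier, and nothing about the data or model should be read as the lab's actual pipeline.

```python
# A minimal, assumed sketch: a linear (ridge) map from cortical feature vectors
# (e.g. per-electrode gamma-band power) to a movement variable such as joint angular velocity.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 5000, 32
X = rng.standard_normal((n_samples, n_channels))              # stand-in for band power on each electrode
true_weights = rng.standard_normal(n_channels)
y = X @ true_weights + 0.5 * rng.standard_normal(n_samples)   # stand-in for measured angular velocity

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w = ridge_fit(X, y)
y_hat = X @ w
print(np.corrcoef(y, y_hat)[0, 1])   # how closely the decoded velocity tracks the target
```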

Should this experiment be successful, and Moran fully expects it will be, he would eventually like to connect his EECoG BCI to a new peripheral nerve-stimulating electrode he is developing together with MD/PhD student Matthew R. MacEwan. By connecting these two devices they will create a neuroprosthetic arm: that is, a paralyzed arm that can move again because the mind is sending signals to peripheral nerves that stimulate the muscles to contract.

Neuroprosthetics like the one Moran and colleagues are designing may one day help people suffering from spinal cord injury, brainstem stroke or amyotrophic lateral sclerosis, which paralyzes the body while leaving the mind intact.

The background

BCI has been slow to develop in part because early scientists worked with two "platforms" that have turned out to have serious limitations: EEG systems that measure brain signals through the skull and arrays of microelectrodes inserted directly into the brain.

EEG systems have a series of drawbacks related to the distance between the electrodes and the scene of the action. They have poor spatial resolution, the signals do not contain detailed information, and the signals are weak.

"Here's the deal," says Moran. "The brain is about an inch below the surface of your scalp, which in recording terms is a long, long way away. When you're on the surface of the scalp, it's kind of like being five blocks from Busch Stadium. You can't hear anything unless someone hits a home run and all 60,000 fans scream simultaneously.

"For an EEG, you need the neurons in a chunk of cortex about the size of a quarter screaming at the same time in order to record anything. And the primary motor cortex, the thin strip of the brain that controls the skeletal muscles, is so small you're only going to get a few control channels up there."

There are other drawbacks to EEG as well. For example, it takes many training sessions (roughly 20 to 50 half-hour sessions) to learn to control an EEG BCI.

Still, Moran and colleagues write in a review article in Neurosurgical Focus, EEG BCIs perform better than is sometimes supposed. They allow accurate control of a computer cursor in two or three dimensions and so far they are the only systems that have achieved clinical use (in patients with amyotrophic lateral sclerosis and spinal cord injury).

Microelectrode arrays

The traditional alternative to an EEG platform has been an array of microelectrodes whose tiny tips are implanted a few millimeters into the motor cortex.

Microelectrodes were implanted in both monkeys (in the 1970s) and in humans (in 1998), and were very successful — but only for a short time.

They suffer from what is probably a fatal drawback: The insertion of the electrodes initiates a reactive cell response that promotes the formation of a sheath around the electrode that electrically isolates it from the surrounding neural tissue.

Some labs are investigating biomaterial coatings for electrodes or drug delivery systems that would prevent this foreign body response, but these efforts are still preliminary.

No needles

In working with penetrating microelectrodes, scientists made several discoveries that had interesting implications.

The first systems recorded the action potentials in single neurons, but in the 1980s, scientists discovered that populations of neurons in the motor cortex could be used to control the direction and speed of movements in three-dimensional space.
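A standard way to illustrate that 1980s finding is the population-vector idea: each neuron is tuned to a preferred movement direction, and summing the preferred directions weighted by how much each neuron's firing deviates from baseline recovers the intended direction and speed. The sketch below is a minimal synthetic illustration of that principle, not a reconstruction of any particular study's analysis; the tuning model and numbers are assumptions.

```python
# A minimal population-vector sketch with cosine-tuned, synthetic motor cortex neurons.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 200
preferred = rng.standard_normal((n_neurons, 3))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)   # unit 3-D preferred directions

def simulate_rates(movement, baseline=10.0, gain=8.0):
    """Cosine-tuned firing rates for a given 3-D movement vector."""
    return baseline + gain * preferred @ movement

def population_vector(rates, baseline=10.0):
    """Weight each preferred direction by the neuron's modulation above baseline and sum."""
    return (rates - baseline) @ preferred / len(rates)

true_movement = np.array([0.5, -0.2, 0.8])
decoded = population_vector(simulate_rates(true_movement))
print(decoded / np.linalg.norm(decoded))                       # decoded direction ~ true direction
print(np.linalg.norm(true_movement), np.linalg.norm(decoded))  # speed is carried by the vector's length
```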

These small assemblies of cortical cells synchronize their activity to produce high-frequency local field potentials, called gamma waves, that resemble signals from nearby single-unit microelectrodes.

In short, the gamma waves from neuronal populations can substitute for the action potentials from individual neurons. This meant it wasn't necessary to poke anything into the brain to get a useful signal. Instead, a sheet of disc-like electrodes could be laid on the surface of the brain.

Moran was able to piggyback his first ECoG experiments on human epilepsy monitoring taking place at Barnes-Jewish Hospital.

"Our first ECoG experiments in 2004 were done with people," he says. "Patients with focal (localized) seizures that cannot be controlled with medication are regularly implanted with ECoG grids so that surgeons can pinpoint damaged portions of the brain for removal without disturbing healthy tissue. "

In 2006, Moran and Leuthardt attached an ECoG grid that had been implanted in a 15-year-old boy to monitor seizures to a computer running the game Space Invaders.

In order to move the cannon right, the subject thought about wiggling his fingers; to make it move left, he thought about wiggling his tongue. "He could duck and dive and had pretty elegant control of the video game," Moran says, "and he made it to level three on the first day."

In the video the subject can be seen wiggling his fingers, but this behavior soon drops away, Moran says. The brain adapts and instead of imagining "wiggle fingers" the boy imagines "cursor right."

Intuitively you would think that signals from the motor cortex would provide the best control for tasks involving movement. But even this turned out not to be true. In 2007, scientists at the University of Wisconsin-Madison reported that patients were able to teach themselves to modulate gamma band activity either by imagining hand, foot or tongue movements, or by imagining a phone ring tone, a song or the voice of a relative. In other words, they were able to train neurons in the auditory as well as the motor cortex to control movement.

A thin sheet slipped under the skull

All of this was very exciting. What if, Moran wondered, the brain was completely plastic and populations of neurons could arbitrarily be reassigned to control movement in different directions in space?

If neuronal populations could be reassigned, maybe more electrodes could be crowded into a grid without losing independent control of movement along different axes in space.

If the electrodes were shrunk as well as moved closer together, he wondered, how far could you go? How many degrees of freedom could you bring under the brain's control?

And why not make the implants safer as well? Instead of laying the electrode grids on the brain's surface, why not lay them on the dura mater, the outermost of the three membranes surrounding and protecting the brain and spinal cord?

In 2009, Moran published the first studies of epidural electrocorticography (EECoG — not to be confused with ECoG). Recording sites over the motor cortex of monkeys were arbitrarily assigned to control a cursor's motion in the horizontal and vertical directions as the monkey traced circles on a computer screen.

In the latest set of experiments, Moran sought to define the minimal separation between electrodes that preserved independence of control. Once a monkey gained control of the cursor, the initial electrodes were abandoned and control was given to two electrodes that were closer together. The next week, the control electrodes were closer still.

Moran found that the electrodes, which were initially a centimeter apart, maintained their independence until they were only a few millimeters apart. "So now that we know how many electrodes we can pack into an area, we have some idea how many degrees of freedom we'll be able to control," he says.

Together with Williams, he designed a 32-channel EECoG supported on a sheet of plastic thinner than Saran Wrap that sucks down to the dura and sticks like glue. He can hardly wait to test it with the virtual arm.

"I like doing basic research and I want to continue to do basic research," Moran says, "but I also really want to solve the problem and help people. Someone's got to get the technology translated to the marketplace, so we're trying to do that as well.

"Eventually," he says, "we'll have a little piece of Saran Wrap with telemetry. We'll drill a small hole in the skull, pop the bone out, drop the device in, replace the bone, sew up the scalp and you'll have what amounts to Bluetooth in your head that translates your thoughts into actions.

"My passion is for paralyzed individuals," he says, "but you can see down the road that a lot of people will want one of these devices."

What a rat can tell us about touch

In her search to understand one of the most basic human senses — touch — Mitra Hartmann turns to what is becoming one of the best studied model systems in neuroscience: the whiskers of a rat. In her research, Hartmann, associate professor of biomedical engineering and mechanical engineering in the McCormick School of Engineering and Applied Science at Northwestern University, uses the rat whisker system as a model to understand how the brain seamlessly integrates the sense of touch with movement.

Hartmann discussed her research in a daylong seminar "Body and Machine" at the American Association for the Advancement of Science (AAAS) annual meeting in Washington, D.C. Her presentation was part of the session, "Linking Mechanics, Robotics, and Neuroscience: Novel Insights from Novel Systems," held on Feb. 18.

Rats are nocturnal, burrowing animals that move their whiskers rhythmically to explore the environment by touch. Using only tactile information from its whiskers, a rat can determine all of an object's spatial properties, including size, shape, orientation and texture. Hartmann's research group is particularly interested in characterizing the mechanics of sensory behaviors, and how mechanics influences perception.

"The big question our laboratory is interested in is how do animals, including humans, actively move their sensors through the environment, and somehow turn that sensory data into a stable perception of the world," Hartmann says.

Hundreds of papers are published each year that use the rat whisker system as a model to understand neural processing. But there is a big missing piece that prevents a full understanding of the neural signals recorded in these studies: no one knows how to represent the "touch" of a whisker in terms of mechanical variables. "We don't understand touch nearly as well as other senses," Hartmann says. "We know that visual and auditory stimuli can be quantified by the intensity and frequency of light and sound, but we don't fully understand the mechanics that generate our sense of touch."

In order to gain a better understanding of how the rat uses its whiskers to sense its world, Hartmann's group works both to better understand the rat's behavior and to create models of the system that enable the creation of artificial whisker arrays.

To determine how a rat can sense the shape of an object, Hartmann's team developed a light sheet to monitor the precise locations of the whiskers as they came in contact with the object. Using high-speed video, the team can also analyze how the rat moves its head to explore different shapes.

More recently, Hartmann's team has created a model that establishes the full structure of the rat head and whisker array. This means that the team can now simulate the rat "whisking" into different objects, and predict the full range of inputs into the whisker system as a rat encounters an object. The simulations can then be compared against real behavior, as monitored with the light sheet.

These advances will provide insight into the sense of touch, but may also enable new technologies that could make use of the whisker system. For example, Hartmann's lab created arrays of robotic whiskers that can, in several respects, mimic the capabilities of mammalian whiskers. The researchers demonstrated that these arrays can sense information about both object shape and fluid flow.

"We show that the bending moment, or torque, at the whisker base can be used to generate three-dimensional spatial representations of the environment," Hartmann says. "We used this principle to make arrays of robotic whiskers that replicate much of the basic mechanics of rat whiskers." The technology, she said, could be used to extract the three-dimensional features of almost any solid object.

Hartmann envisions that a better understanding of the whisker system may be useful for engineering applications in which vision is limited. But most importantly, a better understanding of the rat whisker system could translate into a better understanding of ourselves.

"Although whiskers and hands are very different, the basic neural pathways that process tactile information are in many respects similar across mammals," Hartmann says. "A better understanding of neural processing in the whisker system may provide insights into how our own brains process information."

The real avatar: Swiss researchers use virtual reality and brain imaging to hunt for the science of the self

That feeling of being in, and owning, your own body is a fundamental human experience. But where does it originate and how does it come to be? Now, Professor Olaf Blanke, a neurologist with the Brain Mind Institute at EPFL and the Department of Neurology at the University of Geneva in Switzerland, announces an important step in decoding the phenomenon. By combining techniques from cognitive science with those of Virtual Reality (VR) and brain imaging, he and his team are narrowing in on the first experimental, data-driven approach to understanding self-consciousness.

In recent unpublished work, Blanke and his fellow researchers performed a series of studies in which they immersed subjects, via VR settings, into the body of an avatar, or virtual human. Each subject was fitted with an electrode-studded skullcap to monitor brain activity and exposed to different digital, 3D environments through a head-mounted stereoscopic visor or projections on a large screen.

Blanke and his colleagues then perturbed the most fundamental aspects of consciousness in their subjects, such as "Where am I localized in space?" and "What is my body?", by physically touching their real-life volunteers either in or out of sync with the avatar. They even swapped perspectives from first to third person and put their male subjects inside female avatars, all the while measuring the change in brain activity. Use of electrical brain signals meant subjects could stand, move their heads, and (in the most recent experiments) walk with the VR on. Other techniques such as fMRI would have required them to remain still.

The team's results expand on clinical studies done in neurological patients reporting out-of-body experiences. And the data show marked changes in the response of the brain's temporo-parietal and frontal regions — the parts of the brain responsible for integrating touch and vision into a coherent perception — compared to a series of control conditions.

"Traditional approaches have not been looking at the right information in order to understand the notion of the 'I' of conscious feeling and thinking," Blanke says. "Our research approaches the self first of all as the way the body is represented in the brain and how this affects the conscious mind. And this concept of the bodily self most likely came before more developed notions of 'I' in the evolutionary development of man."

A deeper understanding of the neurobiological basis for the self could lead to advances in the fields of touch and balance perception, neuro-rehabilitation, and pain treatments, contribute to the understanding of neurological and psychiatric disease, and have impacts on the fields of robotics and virtual reality.

But finding basic brain response to VR is just the beginning. Next up for the researchers is to induce stronger illusions of the self by altering signals of balance and limb position — two very powerful bodily cues. Once subjects can no longer distinguish between the real and the virtual self, cognitive science and brain imaging may be able to glimpse the causal mechanisms of self-consciousness and solve the mystery of the "I" once and for all.

JPEG for the mind: How the brain compresses visual information

Most of us are familiar with the idea of image compression in computers. File extensions like ".jpg" or ".png" signify that millions of pixel values have been compressed into a more efficient format, reducing file size by a factor of 10 or more with little or no apparent change in image quality. The full set of original pixel values would occupy too much space in computer memory and take too long to transmit across networks.

The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel. The brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world.

In the February 10 online issue of Current Biology, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.

They found that cells in area "V4," a midlevel stage in the primate brain's object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.

To understand how selectivity for acute curvature might help with compression of visual information, co-author Russell Rasquinha (now at University of Toronto) created a computer model of hundreds of V4-like cells, training them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells — the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curvatures, just the opposite of what was observed for real V4 cells.

The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file size reduction achieved by compressing photographs into the .jpeg format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.
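The published model is not described here in enough detail to reproduce, but the general strategy of learning a code while capping how many units may respond to each image can be sketched as a simple k-sparse learner. In the toy example below, random synthetic patches stand in for the natural object images, and the cap k plays the role of the response limit described above; nothing about it should be read as the authors' actual model.

```python
# A minimal, assumed sketch of "limited responders" coding: at most k units may
# respond to each input, and the dictionary is updated to reconstruct the input.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_units, n_images = 64, 100, 2000
images = rng.standard_normal((n_images, n_pixels))   # synthetic stand-in for natural image patches

def train_k_sparse(images, n_units, k, n_epochs=5, lr=0.01):
    """Learn a dictionary in which at most k units respond to each image."""
    W = rng.standard_normal((n_units, images.shape[1])) * 0.1
    for _ in range(n_epochs):
        for x in images:
            a = W @ x                          # unit responses to this image
            a[np.argsort(np.abs(a))[:-k]] = 0  # enforce the cap: keep only the k strongest responses
            err = x - W.T @ a                  # reconstruction error
            W += lr * np.outer(a, err)         # gradient step toward better reconstruction
            W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W

# Tightening the cap (smaller k) forces each unit to specialize on rarer features,
# loosely analogous to the shift toward acute curvature reported for the real V4 cells.
W_loose = train_k_sparse(images, n_units, k=50)
W_tight = train_k_sparse(images, n_units, k=5)
print(W_loose.shape, W_tight.shape)
```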

Why would focusing on acute curvature regions produce such savings? Because, as the group's analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.

Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.

"Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult," he explained.

Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.

"Computers can beat us at math and chess," said Connor, "but they can't match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world." This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.


Journal Reference:

  1. Eric T. Carlson, Russell J. Rasquinha, Kechen Zhang and Charles E. Connor. A Sparse Object Coding Scheme in Area V4. Current Biology, (in press) DOI: 10.1016/j.cub.2011.01.013