Even positive stereotypes can hinder performance, researchers report

Does hearing that you are a member of an elite group — of chess players, say, or scholars — enhance your performance on tasks related to your alleged area of expertise? Not necessarily, say researchers who tested how sweeping pronouncements about the skills or likely success of social groups can influence children's performance.

The researchers found that broad generalizations about the likely success of a social group — of boys or girls, for example — actually undermined both boys' and girls' performance on a challenging activity.

The new study appears in the journal Psychological Science.

"Some children believe that their ability to perform a task is dictated by the amount of natural talent they possess for that task," said University of Illinois psychology professor Andrei Cimpian, who led the study. "Previous studies have demonstrated that this belief can undermine their performance. It is important, therefore, to understand what leads children to adopt this belief."

The researchers hypothesized that exposure to broad generalizations about the abilities of social groups induces children to believe that success depends on "natural talent." If the hypothesis were correct, then hearing messages such as "girls are very good at this task" should impair children's performance by leading them to believe that success depends primarily on innate talent and has little to do with factors under their control, such as effort.

In line with this hypothesis, two experiments with 4- to 7-year-olds demonstrated that the children performed more poorly after they were exposed to information that associated success on a given task with membership in a certain social group, regardless of whether the children themselves belonged to that group.

"These findings suggest we should be cautious in making pronouncements about the abilities of social groups such as boys and girls," Cimpian said. "Not only is the truth of such statements questionable, but they also send the wrong message about what it takes to succeed, thereby undermining achievement — even when they are actually meant as encouragement."

The research team also included scientists from Sun Yat-sen University in Guangdong, China, and Carnegie Mellon University.


Journal Reference:

  1. A. Cimpian, Y. Mu, L. C. Erickson. Who Is Good at This Game? Linking an Activity to a Social Category Undermines Children's Achievement. Psychological Science, 2012; DOI: 10.1177/095679761142980
 

Multitasking: Not so bad for you after all?

Our obsession with multiple forms of media is not necessarily all bad news, according to a new study by Kelvin Lui and Alan Wong from The Chinese University of Hong Kong. Their work shows that those who frequently use different types of media at the same time appear to be better at integrating information from multiple senses — vision and hearing in this instance — when asked to perform a specific task. This may be due to their experience of spreading their attention to different sources of information while media multitasking.

Their study is published online in Springer's Psychonomic Bulletin & Review.

To date, there has been a lot of publicity about the detrimental aspects of media multitasking — using more than one form of media or technology simultaneously. Especially prevalent in young people, this could be instant messaging, music, web surfing, e-mail, online videos, computer games or social networking. Research has demonstrated impairments during certain cognitive tasks involving task switching, selective attention and working memory, both in the laboratory and in real-life situations. This type of cognitive impairment may be due to the fact that multitaskers tend to pay attention to various sources of information available in their environment, without sufficient focus on the information most relevant to the task at hand.

But does this cognitive style have any advantages? Lui and Wong's study explored how media multitaskers differ in their tendency and ability to pick up information from seemingly irrelevant sources. In particular, they assessed how well two different groups (frequent multitaskers and light multitaskers) could integrate visual and auditory information automatically.

A total of 63 participants, aged 19 to 28 years, took part in the experiment. They completed questionnaires about their media usage, covering both the time spent using various media and the extent to which they used more than one medium at a time. The participants were then given a visual search task, performed with and without a synchronous sound: a short auditory pip that contained no information about the visual target's location but indicated the instant it changed color.

On average, participants regularly received information from at least three media at the same time. Those who media multitasked the most tended to be more efficient at multisensory integration: they performed better in the task when the tone was present than when it was absent, but worse than light media multitaskers in the task without the tone. It appears that their routine intake of information from multiple sources made it easier for them to use the unexpected auditory signal when it was available.
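To make the pattern concrete, here is a minimal sketch, in Python, of the group-by-condition comparison described above. The numbers are hypothetical stand-ins, not the study's data; the point is simply that the tone benefit (search time without the tone minus search time with it) comes out larger for heavy multitaskers.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical search times in ms for each group and condition; the
    # actual values in Lui and Wong's data will differ.
    conditions = {
        ("heavy", "tone"):    rng.normal(950, 80, 30),
        ("heavy", "no_tone"): rng.normal(1150, 80, 30),
        ("light", "tone"):    rng.normal(1000, 80, 30),
        ("light", "no_tone"): rng.normal(1050, 80, 30),
    }

    for group in ("heavy", "light"):
        benefit = (conditions[(group, "no_tone")].mean()
                   - conditions[(group, "tone")].mean())
        print(f"{group} multitaskers: tone benefit of about {benefit:.0f} ms")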

The authors conclude: "Although the present findings do not demonstrate any causal effect, they highlight an interesting possibility of the effect of media multitasking on certain cognitive abilities, multisensory integration in particular. Media multitasking may not always be a bad thing."


Journal Reference:

  1. Kelvin F. H. Lui, Alan C.-N. Wong. Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration. Psychonomic Bulletin & Review, 2012; DOI: 10.3758/s13423-012-0245-7
 

Music training has biological impact on aging process

Age-related delays in neural timing are not inevitable and can be avoided or offset with musical training, according to a new study from Northwestern University. The study is the first to provide biological evidence that lifelong musical experience has an impact on the aging process.

Measuring the automatic brain responses of younger and older musicians and non-musicians to speech sounds, researchers in the Auditory Neuroscience Laboratory discovered that older musicians had a distinct neural timing advantage.

"The older musicians not only outperformed their older non-musician counterparts, they encoded the sound stimuli as quickly and accurately as the younger non-musicians," said Northwestern neuroscientist Nina Kraus. "This reinforces the idea that how we actively experience sound over the course of our lives has a profound effect on how our nervous system functions."

Kraus, professor of communication sciences in the School of Communication and professor of neurobiology and physiology in the Weinberg College of Arts and Sciences, is co-author of "Musical experience offsets age-related delays in neural timing," published online in the journal Neurobiology of Aging.

"These are very interesting and important findings," said Don Caspary, a nationally known researcher on age-related hearing loss at Southern Illinois University School of Medicine. "They support the idea that the brain can be trained to overcome, in part, some age-related hearing loss."

"The new Northwestern data, with recent animal data from Michael Merzenich and his colleagues at University of California, San Francisco, strongly suggest that intensive training even late in life could improve speech processing in older adults and, as a result, improve their ability to communicate in complex, noisy acoustic environments," Caspary added.

Previous studies from Kraus' Auditory Neuroscience Laboratory suggest that musical training also offsets losses in memory and difficulties hearing speech in noise — two common complaints of older adults. The lab has been extensively studying the effects of musical experience on brain plasticity across the life span in normal and clinical populations, and in educational settings.

However, Kraus warns that the current study's findings were not pervasive and do not demonstrate that musicians have a neural timing advantage in every neural response to sound. "Instead, this study showed that musical experience selectively affected the timing of sound elements that are important in distinguishing one consonant from another."

The automatic neural responses to speech sounds were measured in 87 normal-hearing, native English-speaking adults as they watched a captioned video. "Musician" participants began musical training before age 9 and engaged consistently in musical activities throughout their lives, while "non-musicians" had three years or less of musical training.


Journal Reference:

  1. Alexandra Parbery-Clark, Samira Anderson, Emily Hittner, Nina Kraus. Musical experience offsets age-related delays in neural timing. Neurobiology of Aging, 2012; DOI: 10.1016/j.neurobiolaging.2011.12.015
 

Brain makes call on which ear is used for cell phone

If you're a left-brain thinker, chances are you use your right hand to hold your cell phone up to your right ear, according to a new study from Henry Ford Hospital in Detroit.

The study finds a strong correlation between brain dominance and the ear used to listen to a cell phone, with more than 70 percent of participants holding their cell phone up to the ear on the same side as their dominant hand.

Left-brain dominant people — those whose speech and language center is on the left side of the brain — are more likely to use their right hand for writing and other everyday tasks.

Likewise, the Henry Ford study shows most left brain dominant people also use the phone in their right ear, despite there being no perceived difference in their hearing in the left or right ear. And, right brain dominant people are more likely to use their left hand to hold the phone in their left ear.

"Our findings have several implications, especially for mapping the language center of the brain," says Michael Seidman, M.D., FACS, director of the division of otologic and neurotologic surgery in the Department of Otolaryngology — Head and Neck Surgery at Henry Ford.

"By establishing a correlation between cerebral dominance and sidedness of cell phone use, it may be possible to develop a less-invasive, lower-cost option to establish the side of the brain where speech and language occurs rather than the Wada test, a procedure that injects an anesthetic into the carotid artery to put half of the brain to sleep in order to map activity."

Dr. Seidman notes that the study also may offer additional evidence that cell phone use and tumors of the brain, head and neck may not be linked.

If there were a strong connection, he says, far more people would be diagnosed with cancer on the right side of the brain, head and neck — the dominant side for cell phone use. But it's likely that there is a time and "dose-dependence" to the development of tumors, he notes.

Study results will be presented Feb. 26 in San Diego at the 25th Mid-Winter Meeting of the Association for Research in Otolaryngology.

The study began with the simple observation that most people use their right hand to hold a cell phone to their right ear. This practice, Dr. Seidman says, is illogical since it is challenging to listen on the phone with the right ear and take notes with the right hand.

To determine whether there is an association between the sidedness of cell phone use and auditory or language hemispheric dominance, the Henry Ford team developed an online survey using modifications of the Edinburgh Handedness protocol, a tool used for more than 40 years to assess handedness and predict cerebral dominance.

The Henry Ford survey included questions about which hand was used for tasks such as writing; time spent talking on cell phone; whether the right or left ear is used to listen to phone conversations; and if respondents had been diagnosed with a brain or head and neck tumor.
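The Edinburgh inventory is conventionally scored as a laterality quotient. The article does not give the survey's exact items or weights, so the following Python sketch shows only the standard formula, as one plausible reading of the modified protocol:

    def laterality_quotient(right, left):
        """Standard Edinburgh Handedness score: -100 (fully left-handed)
        to +100 (fully right-handed)."""
        return 100.0 * (sum(right) - sum(left)) / (sum(right) + sum(left))

    # Hypothetical tallies across items such as writing, throwing, etc.:
    # 9 items favoring the right hand, 1 favoring the left.
    print(laterality_quotient(right=[9], left=[1]))  # 80.0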

It was distributed to 5,000 individuals, either members of an online otology group or patients undergoing Wada testing and MRI for localization of language function. More than 700 responded to the online survey.

On average, respondents' cell phone usage was 540 minutes per month.

The majority of respondents (90 percent) were right-handed, 9 percent were left-handed, and 1 percent were ambidextrous.

Among those who are right-handed, 68 percent reported that they hold the phone to their right ear, while 25 percent used the left ear and 7 percent used both right and left ears. For those who are left-handed, 72 percent said they used their left ear for cell phone conversations, while 23 percent used their right ear and 5 percent had no preference.
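A standard way to check whether such an association is statistically reliable is a chi-square test on the handedness-by-ear contingency table. The sketch below reconstructs approximate cell counts from the percentages reported above, assuming roughly 700 respondents; the study's exact counts and test are not given in the article.

    from scipy.stats import chi2_contingency

    # Columns: right ear, left ear, both/no preference. Rows use the
    # reported 90% right-handed / 9% left-handed split of ~700 respondents.
    table = [
        [0.68 * 630, 0.25 * 630, 0.07 * 630],  # right-handed
        [0.23 * 63,  0.72 * 63,  0.05 * 63],   # left-handed
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")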

The study also revealed that having a hearing difference can impact ear preference for cell phone use.

In all, the study found a correlation between brain dominance and the laterality of cell phone use, with a significantly higher probability of using the ear on the same side as the dominant hand.

Funding: Henry Ford Hospital

Along with Dr. Seidman, study authors from Henry Ford are Bianca Siegel, M.D.; Priyanka Shah; and Susan M. Bowyer, Ph.D.

 

Schizophrenia: When hallucinatory voices suppress real ones, new electronic application may help

When a patient afflicted with schizophrenia hears inner voices, something takes place inside the brain that prevents the individual from perceiving real voices. A simple electronic application may help the patient learn to shift focus.

"The patient experiences the inner voices as 100 per cent real, just as if someone was standing next to him and speaking" explains Professor Kenneth Hugdahl of the University of Bergen. "At the same time, he can't hear voices of others actually present in the same room."

Auditory hallucinations are one of the most common symptoms associated with schizophrenia.

Neural activity ceases

Dr Hugdahl's research group has used a variety of neuroimaging techniques, including functional magnetic resonance imaging (fMRI), to see quite literally what happens inside the brain when the inner voices make their presence known. The project received funding under NevroNor, the national initiative on neuroscience research administered under the auspices of the Research Council of Norway.

Images of patients' brains reveal a spontaneous activation of neurons in a particular area of the brain — specifically the rear, upper region of the left temporal lobe. This is the area responsible for speech perception, and when healthy people hear speech it becomes activated. So what happens when patients with schizophrenia hear a real voice and a hallucinatory one at the same time?

"It would be natural to assume that neural activity would increase somewhat — even twofold. But quite the opposite takes place; we actually observed that the activity ceased altogether," states Professor Hugdahl.

Losing contact with the outside world

In order to learn more about what was happening, Hugdahl and his colleagues Kristiina Kompus and René Westerhausen carried out a meta-analysis of 23 studies. These studies focused either on the spontaneous neural activation triggered by inner voices in subjects with schizophrenia, or on the response prompted by actual sounds in both healthy and schizophrenic subjects.

It emerged that many researchers had observed either that a spontaneous activation of neurons occurs in patients hearing inner voices or that the patients' perception of actual voices becomes suppressed when these are heard simultaneously with inner voices. No one had seen the connection between these findings.

"Previously, we thought these were two separate phenomena. But our analyses revealed that the one causes the other: when neurons become activated by inner voices it inhibits perception of outside speech. The neurons become 'preoccupied' and can't 'process' voices from the outside," explains Professor Hugdahl.

"This may explain why schizophrenic patients close themselves off so completely and lose touch with the outside world when experiencing hallucinations," he purports.

Electronic app designed to improve impulse control

Hugdahl and his colleagues made yet another discovery that may well help explain how the lives of these individuals become consumed by inner voices. It turns out that the frontal lobe in the brains of schizophrenia patients does not function exactly the way it should. As a result, these patients have a lesser degree of impulse control and are unable to filter out their inner voices.

"Every one of us hears inner voices or melodies from time to time. The difference between non-afflicted individuals and schizophrenia patients is that the former manage to tune these out better," the professor points out.

If patients could learn to stifle inner noise, it could have a huge impact on our ability to treat schizophrenia, he states. To this end, Professor Hugdahl's research group has developed an application that can be used on mobile phones and other simple electronic devices to help patients improve this filtering.

Wearing headphones, the patient is exposed to simple speech sounds, with a different sound played in each ear. The task is to practice attending to the sound in one ear while blocking out the sound in the other. The application has been tested on only two patients with schizophrenia so far, but their response is promising, Dr Hugdahl relates.
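For illustration, the following Python sketch generates a dichotic stimulus of the general kind described: a stereo file with a different sound in each ear. The app itself uses simple speech sounds; two pure tones stand in for them here.

    import wave
    import numpy as np

    rate, dur = 44100, 1.0
    t = np.linspace(0, dur, int(rate * dur), endpoint=False)
    left = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz in the left ear
    right = 0.5 * np.sin(2 * np.pi * 554 * t)  # 554 Hz in the right ear

    # Interleave the channels and write a 16-bit stereo WAV file.
    stereo = (np.column_stack([left, right]) * 32767).astype(np.int16)
    with wave.open("dichotic.wav", "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(stereo.tobytes())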

"The voices are still there, but the test subjects feel that they have control over the voices instead of the other way around. The patient feels it is a breakthrough since it means he can actively shift his focus from the inner voices over to the sounds coming from the outside," the professor explains.

 

Internet-based therapy relieves persistent tinnitus, study suggests

Those suffering from nagging tinnitus can benefit from internet-based therapy just as much as patients who take part in group therapy sessions. These are the findings of a German-Swedish study in which patients with moderate to severe tinnitus tried out various forms of therapy over a ten-week period. Both the internet-based therapy and the group therapy sessions produced significantly better outcomes than a control condition in which participants only took part in an online discussion forum, demonstrating that both are effective methods of managing the symptoms of this irritating ringing in the ears.

The study was conducted by the Clinical Psychology and Psychotherapy division of the Institute of Psychology at Johannes Gutenberg University Mainz (JGU) and the Department of Behavioral Sciences and Learning at Linköping University in Sweden. According to the German Tinnitus League (Deutsche Tinnitus-Liga, DTL), two percent of the population suffer from moderate to unbearable tinnitus. The symptoms can, however, be successfully managed by means of cognitive behavioral therapy, although not everyone has the opportunity or the desire to take a course of psychotherapy.

As shown by the German-Swedish study, those affected by tinnitus can now achieve the same level of outcome with the help of an internet-based therapy program, which encourages them to adopt individual and active strategies to combat their tinnitus. For the purposes of the study, the training program developed in Sweden was adapted so that it could be used for German patients and then be evaluated for its effectiveness. The study showed that distress measured using the Tinnitus Handicap Inventory was reduced on average from moderate (40 points) to mild (29 points) in participants who completed the internet-based training course.

The results for subjects in the cognitive behavioral therapy group were also very good, with distress levels reduced from 44 to 29 points. In contrast, there was hardly any change among the control group subjects participating in the online discussion forum: their average distress level was 40 points at the beginning of the study and had dropped only to 37 points by the end.

"Our internet-based therapy concept was very effective when it came to the reduction of tinnitus-related distress or, to put it another way, at increasing the tolerance levels of subjects with regard to their tinnitus," concludes Dr. Maria Kleinstäuber of the Clinical Psychology and Psychotherapy division at JGU. At the same time, another interesting result was produced with regard to the preferred method of therapy. A significant number of subjects were initially skeptical with regard to the internet-based therapy concept and expressed a preference for the group therapy course. However, they were randomly assigned to the groups.

To everyone's surprise it turned out on the completion of treatment that there was no difference in the effectiveness of the two strategies. "This means that the internet-based therapy concept produced as positive a result as group therapy despite the initial skepticism," says Kleinstäuber. Initial evaluations indicate that the effects of both therapy forms were still persisting after six months.

The authors of the study propose that internet-based forms of therapy should be increasingly used in the psychotherapeutic treatment of tinnitus patients. Furthermore, they call for additional research on patients' skepticism of internet-based therapy, particularly in view of the long waiting times and the lack of outpatient forms of therapy.

 

Deafening affects vocal nerve cells within hours

Portions of a songbird's brain that control how it sings have been shown to decay within 24 hours of the animal losing its hearing.

The findings, by researchers at Duke University Medical Center, show that deafness penetrates much more rapidly and deeply into the brain than previously thought. As the size and strength of nerve cell connections visibly changed under a microscope, researchers could even predict which songbirds would have worse songs in coming days.

"When hearing was lost, we saw rapid changes in motor areas in that control song, the bird's equivalent of speech," said senior author Richard Mooney, PhD, professor of neurobiology at Duke. "This study provided a laser-like focus on what happens in the living songbird brain, narrowed down to the particular cell type involved."

The study was published online in the journal Neuron on March 7, 2012.

Like humans, songbirds depend on hearing to learn their mating songs — males that sing poorly don't attract mates, so hearing a song, learning it, and singing correctly are all critical for songbird survival. Songbirds also resemble humans and differ from most other animals in that their songs fall apart when they lose their hearing, and this feature makes them an ideal organism to study how hearing loss may affect the parts of the brain that control vocalization, Mooney said.

"I will go out on a limb and say that I think similar changes also occur in human brains after hearing loss, specifically in Broca's area, a part of the human brain that plays an important role in generating speech and that also receives inputs from the auditory system," Mooney said.

About 30 million Americans are hard of hearing or deaf. This study could shed light on why and how some people's speech changes as their hearing starts to decline, Mooney said.

"Our vocal system depends on the auditory system to create intelligible speech. When people suffer profound hearing loss, their speech often becomes hoarse, garbled, and harder to understand, so not only do they have trouble hearing, they often can't speak fluently any more," Mooney said.

The nerve cells that showed changes after deafening send signals to the basal ganglia, a part of the brain that plays a role in learning and initiating motor sequences, including the complex vocal sequences that make up birdsong and speech.

Although other studies had looked at the effects of deafening on neurons in auditory brain areas, this is the first time that scientists have been able to watch how deafening affects connections between nerve cells in a vocal motor area of the brain in a living animal, said Katie Tschida, PhD, a postdoctoral research associate in the Mooney laboratory who led the study.

Using a protein isolated from jellyfish (green fluorescent protein) that makes songbird nerve cells glow bright green when viewed under a laser-powered microscope, they were able to determine that deafening triggered rapid changes to the tiny connections between nerve cells, called synapses, which are only one thousandth of a millimeter across.

"I was very surprised that the weakening of connections between nerve cells was visible and emerged so rapidly — over the course of days these changes allowed us to predict which birds' songs would fall apart most dramatically," Tschida said. "Considering that we were only tracking a handful of neurons in each bird, I never thought we'd get information specific enough to predict such a thing."

The research was supported by the National Science Foundation and the National Institute on Deafness and Other Communication Disorders.


Journal Reference:

  1. Katherine A. Tschida, Richard Mooney. Deafening Drives Cell-Type-Specific Changes to Dendritic Spines in a Sensorimotor Nucleus Important to Learned Vocalizations. Neuron, 2012; 73(5): 1028-1039. DOI: 10.1016/j.neuron.2011.12.038
 

Biologists locate brain's processing point for acoustic signals essential to human communication

In both animals and humans, vocal signals used for communication contain a wide array of different sounds that are determined by the vibrational frequencies of vocal cords. For example, the pitch of someone's voice, and how it changes as they are speaking, depends on a complex series of varying frequencies. Knowing how the brain sorts out these different frequencies — which are called frequency-modulated (FM) sweeps — is believed to be essential to understanding many hearing-related behaviors, like speech. Now, a pair of biologists at the California Institute of Technology (Caltech) has identified how and where the brain processes this type of sound signal.

Their findings are outlined in a paper published in the March 8 issue of the journal Neuron.

Knowing the direction of an FM sweep — if it is rising or falling, for example — and decoding its meaning, is important in every language. The significance of the direction of an FM sweep is most evident in tone languages such as Mandarin Chinese, in which rising or dipping frequencies within a single syllable can change the meaning of a word.

In their paper, the researchers pinpointed the brain region in rats where the task of sorting FM sweeps begins.

"This type of processing is very important for understanding language and speech in humans," says Guangying Wu, principal investigator of the study and a Broad Senior Research Fellow in Brain Circuitry at Caltech. "There are some people who have deficits in processing this kind of changing frequency; they experience difficulty in reading and learning language, and in perceiving the emotional states of speakers. Our research might help us understand these types of disorders, and may give some clues for future therapeutic designs or designs for prostheses like hearing implants."

The researchers — including co-author Richard I. Kuo, a research technician in Wu's laboratory at the time of the study (now a graduate student at the University of Edinburgh) — found that the processing of FM sweeps begins in the midbrain, an area located below the cerebral cortex near the center of the brain — which, Wu says, was actually a surprise.

"Some people thought this type of sorting happened in a different region, for example in the auditory nerve or in the brain stem," says Wu. "Others argued that it might happen in the cortex or thalamus. "

To acquire high-quality in-vivo measurements in the midbrain, which is located deep within the brain, the team designed a novel technique using two paired — or co-axial — electrodes. Previously, it had been very difficult for scientists to acquire recordings in hard-to-access brain regions such as the midbrain, thalamus, and brain stem, says Wu, who believes the new method will be applicable to a wide range of deep-brain research studies.

In addition to finding the site where FM sweep selectivity begins, the researchers discovered how auditory neurons in the midbrain respond to these frequency changes. Combining physical measurements with computational models confirmed that the recorded neurons were able to selectively respond to FM sweeps based on their directions. For example, some neurons were more sensitive to upward sweeps, while others responded more to downward sweeps.
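As a signal-level illustration (not the authors' neural model), the Python sketch below generates rising and falling FM sweeps and reads out their direction by comparing the dominant frequency in the first and second halves of each sound:

    import numpy as np
    from scipy.signal import chirp

    rate = 44100
    t = np.linspace(0, 0.1, int(rate * 0.1), endpoint=False)
    up = chirp(t, f0=1000, t1=0.1, f1=4000)    # rising sweep, 1 -> 4 kHz
    down = chirp(t, f0=4000, t1=0.1, f1=1000)  # falling sweep, 4 -> 1 kHz

    def direction(x):
        """Classify sweep direction from the peak FFT frequency of each half."""
        half = len(x) // 2
        peak = lambda seg: np.abs(np.fft.rfft(seg)).argmax() * rate / len(seg)
        return "up" if peak(x[half:]) > peak(x[:half]) else "down"

    print(direction(up), direction(down))  # prints: up down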

"Our findings suggest that neural networks in the midbrain can convert from non-selective neurons that process all sounds to direction-selective neurons that help us give meanings to words based on how they are spoken. That's a very fundamental process," says Wu.

Wu says he plans to continue this line of research, with an eye — or ear — toward helping people with hearing-related disorders. "We might be able to target this area of the midbrain for treatment in the near future," he says.


Journal Reference:

  1. Richard I. Kuo, Guangying K. Wu. The Generation of Direction Selectivity in the Auditory System. Neuron, 2012; 73(5): 1016. DOI: 10.1016/j.neuron.2011.11.035
 

Research aims for better diagnosis of language impairments

Recent studies by a UT Dallas researcher aim to find better ways to diagnose young children with language impairments.

Dr. Christine Dollaghan, a professor at the Callier Center for Communication Disorders and the School of Behavioral and Brain Sciences, is the author of a paper in the Journal of Speech, Language, and Hearing Research. The study evaluated data collected from a large sample of about 600 children, some of whom had specific language impairment, or SLI. She wanted to determine whether SLI should be regarded as a discrete diagnostic category.

"One of the most basic and long-standing questions about SLI is whether children with the disorder have language skills that differ qualitatively and nonarbitrarily from those of other children or whether their language skills simply fall at the lower end of a continuous distribution, below some arbitrary threshold but not otherwise unique," she wrote in the October article titled "Taxometric Analyses of Specific Language Impairment in 6-Year-Old Children."

Dollaghan previously reported on this sample of children when they were 3 and 4 years old. The new study included some test results that were not available at the earlier ages. She focused on four common indicators of SLI — receptive vocabulary, expressive utterance length, expressive vocabulary diversity and nonsense word repetition.

As in the earlier investigation, she found that the 6-year-olds with SLI did not represent a distinct group with unique characteristics. Instead, they fell at the lower end of a continuous distribution of language skills.
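As a toy analog of this categorical-versus-continuous question (the study's actual taxometric procedures are more involved), one can ask whether scores are fit better by one latent group or two; with truly continuous data, the one-component model should win. A minimal sketch with hypothetical scores:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Hypothetical language scores from a single continuous distribution.
    scores = rng.normal(100, 15, size=600).reshape(-1, 1)

    for k in (1, 2):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
        print(f"{k} component(s): BIC = {gmm.bic(scores):.0f}")
    # The 1-component fit has the lower (better) BIC, indicating a
    # continuum rather than a distinct low-scoring category.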

The results of the study could help in developing diagnostic protocols for children with language impairment and tailoring treatments to the characteristics of individual children. Dollaghan said the categorical-continuous question is being examined by investigators interested in many other diagnostic categories, including autism, schizophrenia and ADHD.

Dollaghan also co-authored an article in the November edition of Artificial Intelligence in Medicine with colleagues from UT Dallas' Erik Jonsson School of Engineering and Computer Science, including lead author Keyur Gabani, Yang Liu and Khairun-nisa Hassanali. The team examined the use of automated machine learning and natural language processing methods for diagnosing language impairment in children based on samples of their language.

In "Exploring a corpus-based approach for detecting language impairment in monolingual English-speaking children," the team reported that automated methods performed well. The findings suggested future collaborations between researchers in computer science and communication disorders will likely be useful, Dollaghan said.

 

Discovery of hair-cell roots suggests the brain modulates sound sensitivity

The hair cells of the inner ear have a previously unknown "root" extension that may allow them to communicate with nerve cells and the brain to regulate sensitivity to sound vibrations and head position, researchers at the University of Illinois at Chicago College of Medicine have discovered.

Their finding is reported online in advance of print in the Proceedings of the National Academy of Sciences.

The hair-like structures, called stereocilia, are fairly rigid and are interlinked at their tops by structures called tip-links.

When you move your head, or when a sound vibration enters your ear, motion of fluid in the ear causes the tip-links to get displaced and stretched, opening up ion channels and exciting the cell, which can then relay information to the brain, says Anna Lysakowski, professor of anatomy and cell biology at the UIC College of Medicine and principal investigator on the study.

The stereocilia are rooted in a gel-like cuticle on the top of the cell that is believed to act as a rigid platform, helping the hairs return to their resting position.

Lysakowski and her colleagues were interested in a part of the cell called the striated organelle, which lies underneath this cuticle plate and is believed to be responsible for its stability. Using a high-voltage electron microscope at the National Center for Microscopy and Imaging Research at the University of California, San Diego, Florin Vranceanu, a recent doctoral student in Lysakowski's UIC lab and first author of the paper, was able to construct a composite picture of the entire top section of the hair cell.

"When I saw the pictures, I was amazed," said Lysakowski.

Textbooks, she said, describe the roots of the stereocilia ending in the cuticular plate. But the new pictures showed that the roots continue through, make a sharp 110-degree angle, and extend all the way to the membrane at the opposite side of the cell, where they connect with the striated organelle.

For Lysakowski, this suggested a new way to envision how hair cells work. Just as the brain adjusts the sensitivity of retinal cells in the eye to light, it may also modulate the sensitivity of hair cells in the inner ear to sound and head position.

When the eye detects light, there is feedback from the brain to the eye. "If it's too bright the brain can say, okay, I'll detect less light — or, it's not bright enough, let me detect more," Lysakowski said.

Because the striated organelle connects the rootlets to the cell membrane, it creates the possibility of feedback from the cell to the very detectors that sense motion. Feedback from the brain could alter the tension on the rootlets and thus their sensitivity to stimuli. The striated organelle may also tip the whole cuticular plate at once to modulate the entire process.

"This may revolutionize the way we think about the hair cells in the inner ear," Lysakowski said.

The study was supported by grants from the National Institute on Deafness and Other Communication Disorders, the American Hearing Research Foundation, the National Center for Research Resources, and the 2008 Tallu Rosen Grant in Auditory Science from the National Organization for Hearing Research Foundation.

Graduate student Robstein Chidavaenzi and Steven Price, an electron microscope technologist, also contributed by identifying three of the proteins composing the striated organelle and demonstrating how they arise during development. Guy Perkins, Masako Terada and Mark Ellisman from the National Center for Microscopy and Imaging Research in Biological Systems, University of California, San Diego, also contributed to the study.


Journal Reference:

  1. F. Vranceanu, G. A. Perkins, M. Terada, R. L. Chidavaenzi, M. H. Ellisman, A. Lysakowski. Striated organelle, a cytoskeletal structure positioned to modulate hair-cell transduction. Proceedings of the National Academy of Sciences, 2012; DOI: 10.1073/pnas.1101003109