Study to test new tinnitus 'treatment'

A new clinical trial is to test whether a pocket-sized device that uses sound stimulation to reboot faulty 'wiring' in the brain could cure people of the debilitating hearing disorder tinnitus.

The CR® neuromodulation device delivers specific sequences of sounds to disrupt the pattern of neurons firing in the brain. It is believed that conditions such as hearing loss can cause neurons in the brain to fire simultaneously instead of in a random pattern, which can cause an overload and lead to a ringing or buzzing in the ear, the classic symptom of tinnitus.

The study is being led by the National Biomedical Research Unit in Hearing (NBRUH), which is funded by the National Institute for Health Research (NIHR). The unit is a partnership bringing together expertise from researchers at The University of Nottingham and the Medical Research Council Institute of Hearing Research with leading clinicians from Nottingham University Hospitals NHS Trust.

Dr Derek Hoare, a research fellow at the NBRUH, said: "In the UK, around five million people suffer from tinnitus, a debilitating condition which can be exceptionally difficult to treat due to the huge variation in symptoms and severity between individual patients.

"We know there are very many people out there suffering with tinnitus who have tried a number of different treatments including hearing aids, sound therapies, counselling and other alternative medicines such as acupuncture but to no avail.

"We want to scientifically establish whether this new method of sound simulation could offer patients a new hope for treating tinnitus, which can have such a distressing impact on people's day to day lives."

Tinnitus is a secondary symptom usually resulting from damage to the ears, including hearing loss following exposure to loud noises, congenital hearing loss, ear infections and the death of ear hair cells caused by exposure to a number of different drugs.

The revolutionary CR® neuromodulation device is already being marketed by the private healthcare sector both in the UK and in Germany, where it was originally manufactured and where an exploratory study produced promising results.

Funded with just over £345,000 from The Tinnitus Clinic, a specialist private audiology practice in London, the study will also involve collaboration with experts at the Ear Institute at University College London (UCL).

The scientists will be looking to recruit patients who have suffered from bothersome tinnitus for at least three months but are not currently receiving any treatment for the condition. Those with associated hearing loss will need to forgo the use of their normal hearing aid for the four to six hours per day during which the device needs to be worn.

The study will involve two groups of participants, one of which will be fitted with the CR® neuromodulation device and the other with a placebo device. Over a period of three months, the researchers will monitor the effect of wearing the device on each patient's condition through a series of hearing tests, questionnaires and EEG recordings of the electrical activity of the brain.

After three months, all patients — even those who previously received a placebo — will be fitted with a working device which they will be free to keep.

The researchers hope to demonstrate that, by disrupting the abnormal firing of neurons in the brain, the device can encourage them to return to a normal, healthy pattern, eradicating the symptoms of tinnitus. In some cases, patients may find the device has permanently improved their symptoms, with potentially no further treatment needed in the future.

The National Biomedical Research Unit in Hearing was established in 2008 as part of the National Institute for Health Research and is the only biomedical research unit funded to conduct pure translational research in deafness and hearing problems, taking new medical discoveries into a clinical setting for the benefit of patients.


Deaf sign language users pick up faster on body language

Deaf people who use sign language are quicker at recognizing and interpreting body language than hearing non-signers, according to new research from investigators at UC Davis and UC Irvine.

The work suggests that deaf people may be especially adept at picking up on subtle visual traits in the actions of others, an ability that could be useful for some sensitive jobs, such as airport screening.

"There are a lot of anecdotes about deaf people being better able to pick up on body language, but this is the first evidence of that," said David Corina, professor in the UC Davis Department of Linguistics and Center for Mind and Brain.

Corina and graduate student Michael Grosvald, now a postdoctoral researcher at UC Irvine, measured the response times of both deaf and hearing people to a series of video clips showing people making American Sign Language signs or "non-language" gestures, such as stroking the chin. Their work was published online Dec. 6 in the journal Cognition.

"We expected that deaf people would recognize sign language faster than hearing people, as the deaf people know and use sign language daily, but the real surprise was that deaf people also were about 100 milliseconds faster at recognizing non-language gestures than were hearing people," Corina said.

This work is important because it suggests that the human ability for communication is modifiable and is not limited to speech, Corina said. Deaf people show us that language can be expressed by the hands and be perceived through the visual system. When this happens, deaf signers get the added benefit of being able to recognize non-language actions better than hearing people who do not know a sign language, Corina said.

The study supports the idea that sign language is based on a modification of the system that all humans use to recognize gestures and body language, rather than working through a completely different system, Corina said.

The research was supported by grants from the National Institutes of Health and National Science Foundation.


Journal Reference:

  1. David P. Corina, Michael Grosvald. Exploring perceptual processing of ASL and human actions: Effects of inversion and repetition priming. Cognition, 2011; DOI: 10.1016/j.cognition.2011.10.011

Using MP3 players at high volume puts teens at risk for early hearing loss, say researchers

Today's ubiquitous MP3 players permit users to listen to crystal-clear tunes at high volume for hours on end — a marked improvement on the days of the Walkman. But according to Tel Aviv University research, these advances have also turned personal listening devices into a serious health hazard, with teenagers as the most at-risk group.

One in four teens is in danger of early hearing loss as a direct result of these listening habits, says Prof. Chava Muchnik of TAU's Department of Communication Disorders in the Stanley Steyer School of Health Professions at the Sackler Faculty of Medicine and the Sheba Medical Center. With her colleagues Dr. Ricky Kaplan-Neeman, Dr. Noam Amir, and Ester Shabtai, Prof. Muchnik studied teens' music listening habits and took acoustic measurements of preferred listening levels.

The results, published in the International Journal of Audiology, demonstrate clearly that teens have harmful music-listening habits when it comes to iPods and other MP3 devices. "In 10 or 20 years it will be too late to realize that an entire generation of young people is suffering from hearing problems much earlier than expected from natural aging," says Prof. Muchnik.

Hearing loss before middle age

Hearing loss caused by continuous exposure to loud noise is a slow and progressive process. People may not notice the harm they are causing until years of accumulated damage begin to take hold, warns Prof. Muchnik. Those who are misusing MP3 players today might find that their hearing begins to deteriorate as early as their 30s and 40s — much earlier than in past generations.

The first stage of the study included 289 participants aged 13 to 17. They were asked to answer questions about their listening habits with personal listening devices (PLDs) — specifically, their preferred listening levels and the duration of their listening. In the second stage, these listening levels were measured for 74 teens in both quiet and noisy environments. The measured volume levels were used to calculate the potential risk to hearing according to damage risk criteria laid out by industrial health and safety regulations.

The study's findings are worrisome, says Prof. Muchnik. Eighty percent of teens use their PLDs regularly, with 21 percent listening from one to four hours daily, and eight percent listening more than four hours consecutively. Taken together with the acoustic measurement results, the data indicate that a quarter of the participants are at severe risk for hearing loss.

Dangerous decibels

Currently, industry-related health and safety regulations are the only benchmark for measuring the harm caused by continuous exposure to high-volume noise. But there is a real need for additional music risk criteria in order to prevent music-induced hearing loss, Prof. Muchnik says. In the meantime, she recommends that manufacturers adopt the European standards that limit the output of PLDs to 100 decibels. Maximum decibel levels differ from model to model, but some devices can go up to 129 decibels.
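
To make the risk arithmetic concrete, the short sketch below shows how permissible listening time shrinks as volume rises under an industrial-style damage-risk criterion. The parameters are assumptions chosen for illustration (an 85 dBA limit over 8 hours with a 3 dB exchange rate, as in the NIOSH recommendation); the study's exact criteria are not specified here.

```python
# Illustrative only: permissible daily listening time under an assumed
# NIOSH-style damage-risk criterion (85 dBA for 8 h, 3 dB exchange rate).
# These parameters are our assumption, not the study's published criteria.

def permissible_hours(level_db: float,
                      criterion_db: float = 85.0,
                      criterion_hours: float = 8.0,
                      exchange_rate_db: float = 3.0) -> float:
    """Allowed daily exposure halves for every 3 dB above the criterion."""
    return criterion_hours / 2 ** ((level_db - criterion_db) / exchange_rate_db)

def daily_dose_percent(level_db: float, hours: float) -> float:
    """Actual exposure as a percentage of the allowed daily dose (100% = limit)."""
    return 100.0 * hours / permissible_hours(level_db)

if __name__ == "__main__":
    for level in (85, 94, 100, 129):
        print(f"{level} dBA: {permissible_hours(level) * 60:.2f} min/day allowed")
    # Four hours daily at 94 dBA is four times the allowed dose:
    print(f"Dose at 94 dBA for 4 h/day: {daily_dose_percent(94.0, 4.0):.0f}%")
```

Under these assumptions, the 100-decibel European cap corresponds to roughly 15 minutes of permissible listening per day, while a device driven at 129 decibels would exceed the daily allowance in about one second.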

Steps can also be taken by schools and parents, she says. Some school boards are developing programs to increase awareness of hearing health, such as the "Dangerous Decibels" program in Oregon schools, which provides early education on the subject. Teens could also choose over-the-ear headphones instead of the earbuds that commonly come with an iPod.

In the near future, the researchers will focus on the music listening habits of younger children, including pre-teens, and the development of advanced technological solutions to enable the safe use of PLDs.


Journal Reference:

  1. Chava Muchnik, Noam Amir, Ester Shabtai, Ricky Kaplan-Neeman. Preferred listening levels of personal listening devices in young teenagers: Self reports and physical measurements. International Journal of Audiology, 2011; DOI: 10.3109/14992027.2011.631590

Gene therapy for ears

Normal hearing depends on the presence of healthy hair cells in the inner ear. Gene therapy has the potential to slow the loss of hair cells and promote the regrowth of those that have already been damaged.

In gene therapy, genetic material — DNA or RNA — is transported by a carrier into cells, where it provides new instructions or replaces damaged genes. The carrier must protect its genetic package and help it make its way through the membranes that enclose and protect cells. The carrier should also be able to transport the genetic material precisely to the cells that need help.

For the first time ever, chitosan nanoparticles have been used as a carrier for gene therapy in the ear. Chitosan is produced from shrimp shells.

“Gene therapy may someday be an alternative to using surgery to implant cochlear implants (CIs) in the deaf and hard of hearing,” says Sabina Strand at NTNU’s Department of Biotechnology.

Basic research promising

Strand studies the use of chitosan in gene therapy; she conducted this basic research, now concluded, in cooperation with the Karolinska Institutet in Sweden. There, researchers attempted to use chitosan as a carrier to deliver drugs and genes to the inner ear in guinea pigs. Chitosan was able to deliver drugs through the membrane that covers the tiny gap between the middle ear and the inner ear, and it was also able to deliver genes to the hair cells. Whether the results from guinea pigs can be transferred to human ears remains uncertain.

“However, chitosan is non-toxic and is not harmful to cells. Chitosan is therefore better than other carriers and has characteristics that mean it could potentially be used with patients,” says Strand.

Tidy packages

Chitosan is produced from powdered shrimp shells. Acid removes salts, minerals and calcium carbonate. Strong alkalis and heat remove proteins. What remains is chitosan.

Extremely small nanoparticles in the range of 50-200 nm (nanometres) form spontaneously when the positively charged chitosan and negatively charged genes are mixed. Chitosan does a good job of packaging up the relatively large DNA and RNA molecules.

Tailored therapy

In the body, chitosan attaches itself to molecules, cells and membranes. Once the nanoparticles have passed through a membrane, the chitosan releases the gene molecules, which return to their normal size. Chitosan also creates gaps between cells, which facilitates the absorption of medicine.

Different forms of gene therapy require nanoparticles with different properties. The properties of nanoparticles are controlled by the way in which researchers tailor the chitosan structure, its molecular size and 3D architecture. But whether or not researchers will find the perfect mix of medicines for our ears and hair cells remains to be seen — and heard.


Story Source:

The above story is reprinted from materials provided by The Norwegian University of Science and Technology (NTNU), via AlphaGalileo.



People with DFNA2 hearing loss show increased touch sensitivity, study shows

People with a certain form of inherited hearing loss have increased sensitivity to low frequency vibration, according to a study by Professor Thomas Jentsch of the Leibniz-Institut für Molekulare Pharmakologie (FMP)/Max Delbrück Center for Molecular Medicine (MDC) Berlin-Buch and Professor Gary Lewin (MDC), conducted in cooperation with clinicians from Madrid, Spain and Nijmegen, the Netherlands.

The research findings, which were published in Nature Neuroscience, reveal previously unknown relationships between hearing loss and touch sensitivity: In order to be able to 'feel', specialized cells in the skin must be tuned like instruments in an orchestra.

The members of the Spanish and Dutch families who participated in the study were quite amazed when the researchers from Berlin unpacked their testing equipment. Many of the family members suffer from hereditary DFNA2 hearing loss, but the researchers were less interested in their hearing ability than in their sense of touch. The hearing impairment is caused by a mutation which disrupts the function of many hair cells in the inner ear. This mutation, the researchers suspected, might also affect the sense of touch.

Tiny, delicate hairs in our inner ear vibrate to the pressure of sound waves. The vibrations cause an influx of positively charged potassium ions into the hair cells. This electric current produces a nerve signal that is transmitted to the brain — we hear. The potassium ions then flow back out of the hair cells through a channel in the cell membrane. This potassium channel, a protein molecule called KCNQ4, is destroyed by the mutation in hearing-impaired people, and the sensory cells gradually die off due to overload. "But we have found that KCNQ4 is present not only in the ear, but also in some sensory cells of the skin," Thomas Jentsch explained. "This gave us the idea that the mutation might also affect the sense of touch. And this is exactly what we were able to show in our research, which we conducted in close collaboration with the lab of Gary Lewin, a colleague from the MDC who specializes in touch sensation."

Whether we caress our child, search in our bag for a certain object or hold a pen in our hand — each touch conveys a variety of precise and important information about our environment. We distinguish between a rough and a smooth surface by the vibrations that occur in the skin when the surface is stroked. For the different touch stimuli there are sensory cells in the skin with different structures — through the deformation of these delicate structures, electric nerve signals are generated. Exactly how this happens is still a mystery — of Aristotle's five senses, the sense of touch is the least understood.

Clearly there are parallels to hearing, as the findings of Matthias Heidenreich and Stefan Lechner from the research groups of Thomas Jentsch and Gary Lewin show. As a first step, the researchers in the Jentsch lab created a mouse model for deafness by generating a mouse line that carries the same mutation in the potassium channel as a patient with this form of genetic hearing loss. Unlike the hair cells of the ear, the touch receptors in the skin where the KCNQ4 potassium channel is found did not die off due to the defective channel; instead, in the mutant mice they showed an altered electrical response to mechanical stimuli, reacting much more sensitively to vibration stimuli in the low frequency range. The outlet valve for potassium ions normally functions here as a filter that damps the excitability of the cells preferentially at low frequencies, tuning these mechanoreceptors to moderately high frequencies. In mice lacking functional KCNQ4 channels, these receptors can no longer distinguish between low and high frequencies.

The deaf patients with mutations in the potassium channel who were examined by Stefan Lechner and Matthias Heidenreich showed exactly the same effect. They could even perceive very slow vibrations that their healthy siblings could not perceive. Due to mutations in the KCNQ4 channel gene, the fine tuning of the mechanoreceptors for normal touch sensation was altered.

The sensation of touch varies greatly from person to person — some people are much more sensitive to touch than others. DFNA2 patients are extremely sensitive to vibrations, according to Gary Lewin and Thomas Jentsch. "The skin has several different types of mechanoreceptors, which respond to different qualities of stimuli, especially to different frequency ranges. The interaction of different receptor classes is important for the touch sensation. Although the receptors we studied became more sensitive due to the loss of the potassium channel, this may be outweighed by the disadvantage of being 'tuned' to the wrong frequencies. With KCNQ4 we have for the first time identified a human gene that changes the traits of the touch sensation."

The research group led by Thomas Jentsch belongs both to the FMP and the MDC in Berlin and studies ion transport and its role in disease. The group led by Gary Lewin belongs to the MDC and is specialized in peripheral sensory perception.


Journal Reference:

  1. Matthias Heidenreich, Stefan G Lechner, Vitya Vardanyan, Christiane Wetzel, Cor W Cremers, Els M De Leenheer, Gracia Aránguez, Miguel Ángel Moreno-Pelayo, Thomas J Jentsch, Gary R Lewin. KCNQ4 K+ channels tune mechanoreceptors for normal touch sensation in mouse and man. Nature Neuroscience, 2011; DOI: 10.1038/nn.2985

Even unconsciously, sound helps us see

"Imagine you are playing ping-pong with a friend. Your friend makes a serve. Information about where and when the ball hit the table is provided by both vision and hearing. Scientists have believed that each of the senses produces an estimate relevant for the task (in this example, about the location or time of the ball's impact) and then these votes get combined subconsciously according to rules that take into account which sense is more reliable. And this is how the senses interact in how we perceive the world.

"However, our findings show that the senses of hearing and vision can also interact at a more basic level, before they each even produce an estimate," says Ladan Shams, a UCLA professor of psychology and the senior author of a new study appearing in the December issue of Psychological Science, a journal published by the Association for Psychological Science. "If we think of the perceptual system as a democracy where each sense is like a person casting a vote and all votes are counted (albeit with different weights) to reach a decision, what our study shows is that the voters talk to one another and influence one another even before each casts a vote."

"The senses affect each other in many ways," says cognitive neuroscientist Robyn Kim. There are connections between the auditory and visual portions of the brain and at the cognitive level. When the information from one sense is ambiguous, another sense can step in and clarify or ratify the perception. Now, for the first time, Kim, Megan Peters, and Ladan Shams, working at the University of California Los Angeles, have shown behavioral evidence that this interplay happens in the earliest workings of perception — not just before that logical decision-making stage, but before the pre-conscious combination of sensory information.

To demonstrate that one sense can affect another even before perception, the researchers showed 63 participants a bunch of dots on a screen, in two phases with a pause between them. In one phase, the dots moved around at random; in the other, some proportion moved together from right to left. The participants had to indicate in which phase the dots moved together horizontally. In experiment 1, the subjects were divided into three groups. While they looked at the dots, one group heard sound moving in the same direction as the right-to-left dots, and stationary sound in the random phase. A second group heard the same right-to-left sound in both phases. The third group heard the identical sound in both phases, but it moved in the opposite direction of the dots. In the second and third conditions, because the sound was exactly the same in both phases, it added no cognitively useful information about which phase had the leftward-moving dots. In experiment 2, each participant experienced trials in all three conditions.
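
To make the logic of the design explicit, the sketch below restates the three sound conditions as data and checks which of them carry task-relevant information. This is a hypothetical restatement of the conditions described above, not code or terminology from the study.

```python
# Hypothetical restatement of the three sound conditions. A sound is
# informative only if it differs between the two phases, since only then
# could it cue which phase contained the coherent right-to-left motion.

CONDITIONS = {
    # name: (sound in coherent-motion phase, sound in random-motion phase)
    "informative, congruent":   ("moving right-to-left", "stationary"),
    "uninformative, congruent": ("moving right-to-left", "moving right-to-left"),
    "uninformative, opposite":  ("moving left-to-right", "moving left-to-right"),
}

def sound_is_informative(name: str) -> bool:
    """The sound can cue the answer only if it differs across the phases."""
    coherent_sound, random_sound = CONDITIONS[name]
    return coherent_sound != random_sound

for name in CONDITIONS:
    print(f"{name}: informative = {sound_is_informative(name)}")
```

Only the first condition carries information a participant could use; the surprise reported below is that the second, uninformative condition still improved visual performance.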

The results: All did best under the first condition — when the sound moved only in the leftward-motion phase. The opposite-moving sound neither enhanced nor worsened the visual perception. But surprisingly, the uninformative sound — the one that traveled leftward both with the leftward-moving dots and also when the dots moved randomly — helped people correctly perceive when the dots were moving from one side to the other. Hearing enhanced seeing, even though the added sense couldn't help them make the choice.

The study, says Kim, should add to our appreciation of the complexity of our senses. "Most of us understand that smell affects taste. But people tend to think that what they see is what they see and what they hear is what they hear." The findings of this study offer "further evidence that, even at a non-conscious level, visual and auditory processes are not so straightforward," she says. "Perception is actually a very complex thing affected by many factors."

"This study shows that at least in regards to perception of moving objects, hearing and sight are deeply intertwined, to the degree that even when sound is completely irrelevant to the task, it still influences the way we see the world," Shams says.

The article is entitled, "How adding non-informative sound improves performance on a visual task."

Key molecules for hearing and balance discovered: Can hearing be restored?

National Institutes of Health-funded researchers have identified two proteins that may be the key components of the long-sought mechanotransduction channel in the inner ear — the place where the mechanical stimulation of sound waves is transformed into electrical signals that the brain recognizes as sound.

The findings are published in the Nov. 21 online issue of The Journal of Clinical Investigation.

The study used mice in which two genes, TMC1 and TMC2, had been deleted. The researchers revealed a specific functional deficit in the mechanotransduction channels of the mice's stereocilia (bristly projections that perch atop the sensory cells of the inner ear, called hair cells), while the rest of the hair cells' structure and function remained normal.

These genes and the proteins they encode are the strongest candidates yet in a decades-long search for the transduction channel that is at the center of the inner ear's ability to receive sound and transfer it to the brain. Andrew J. Griffith, M.D., Ph.D., chief of the molecular biology and genetics section and the otolaryngology branch at the National Institute on Deafness and Other Communication Disorders (NIDCD) at NIH, and Jeffrey R. Holt, Ph.D., an associate professor in the department of otolaryngology at Children's Hospital Boston and Harvard Medical School, co-led the team that published the findings.

"For many years, the NIDCD has funded research using genetic approaches to discover and analyze genes underlying hereditary deafness," said James F. Battey, Jr., M.D., Ph.D., director of the NIDCD. "We believed these studies would also help us identify genes and proteins that are critical for normal hearing. Now our efforts appear to be paying off, in this discovery of integral components in the mechanotransduction complex."

Like those of other sensory cells, the hair cell's transduction channel is presumed to be an ion channel — a tiny opening or pore in the cell that lets electrically charged molecules (ions) pass in and out — which acts as a molecular mechanism for turning sound vibrations into electrical signals in the cochlea, the snail-shaped organ of the inner ear. Mechanotransduction in sensory hair cells also underlies the sense of balance in the vestibular organs of the inner ear. Researchers have theorized that the channel must be located in the tips of hair cell stereocilia, which are linked by a system of horizontal filaments (called tip links) that connect the shorter stereocilia to their taller neighbors so that the whole bundle moves as one unit when it is stimulated by sound or head movements.

Drs. Griffith and Holt and their team focused on TMC1, a gene named for its trans-membrane-channel-like amino acid sequence. Dr. Griffith and another team of NIDCD-funded collaborators had previously discovered TMC1 as a gene in which mutations cause hereditary deafness in humans and mice. Multiple regions of the protein that TMC1 encodes looked as though they would be able to span the plasma membrane (the outer membrane of a cell that controls cellular traffic) and act as a receptor or a channel. The researchers also zeroed in on TMC2, a gene that has a structure much like TMC1's and has similar membrane-spanning domains in its code.

The scientists genetically engineered mice with knocked-out versions of the two genes and then bred the mice so that some had no functional copies of TMC1 or TMC2, and some had one gene knocked out but the other present. This was to help the scientists identify redundancy in gene function, that is, the ability of related genes to fill in for one another when one of them is deleted or mutated.

The team observed that TMC2 knockout mice had normal hearing and no balance issues (balance issues would indicate problems with the hair cells in the vestibular system), but that mice with no functional copies of TMC1 or TMC2 had the classic behaviors of dizzy mice — head bobbing, neck arching, unstable gait, and circling movements — and they were deaf. The TMC1 knockout mice were also deaf, but they had no balance issues. Looking at tissue slices of the mouse inner ears taken at intervals from birth, the researchers could see expression of TMC1 and TMC2 in hair cells of both the vestibular organs and the cochlea at birth. A week later, however, TMC2 appeared to be turned off in the cochlea while it continued to be expressed in the vestibular organs. Since only TMC1 continues to be expressed in mature cochlear hair cells, the researchers propose that TMC1 is essential for hearing but TMC2 is not; in the vestibular system, however, TMC2 expression can substitute for TMC1 to maintain vestibular function.
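
The pattern of deficits reduces to a simple genotype-to-phenotype rule. The sketch below is a hypothetical restatement of the phenotypes reported above, not analysis code from the study: hearing requires TMC1 because only TMC1 remains expressed in the mature cochlea, while balance survives as long as either gene is functional.

```python
# Hypothetical lookup restating the knockout phenotypes reported above.

def phenotype(tmc1_functional: bool, tmc2_functional: bool) -> dict:
    return {
        # Only TMC1 continues to be expressed in mature cochlear hair cells.
        "hearing": tmc1_functional,
        # Vestibular hair cells express both genes, so either one suffices.
        "balance": tmc1_functional or tmc2_functional,
    }

# The four genotypes described in the study:
assert phenotype(True, True)   == {"hearing": True,  "balance": True}   # wild type
assert phenotype(True, False)  == {"hearing": True,  "balance": True}   # TMC2 knockout
assert phenotype(False, True)  == {"hearing": False, "balance": True}   # TMC1 knockout: deaf
assert phenotype(False, False) == {"hearing": False, "balance": False}  # double knockout: deaf and dizzy
```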

To further home in on the properties of TMC1, the scientists measured the electrical activity in hair cells from the double mutant mice. Mice that had no functional TMC1 or TMC2 had no mechanotransduction currents in their cells. All other ion channels in the double mutant mice appeared to be functioning normally. The TMC1 deficit appeared to be specific to mechanotransduction — not just a symptom of a problem that affects the whole cell.

Under a scanning electron microscope, the structure of the bundles of double mutant hair cells looked completely normal, which ruled out structural anomalies that could be interrupting transduction. Other tests probed for the presence of mechanotransduction channels by using a fluorescent dye and gentamicin (a drug that causes hearing loss by damaging hair cells), both of which are known to be freely taken up into stereocilia. The double mutant mice did not take up either substance, while the normal mice did.

Another novel technique, adapted in labs at NIDCD for studying inner ear hair cells, used a gene gun to fire fluorescently tagged TMC1 and TMC2 genes at normal tissue to see where the genes expressed their proteins. The proteins clustered at the tips of the stereocilia, where one would expect to see them if they played a prominent role in mechanotransduction.

To further support their findings, the researchers found that by using a gene therapy technique that adds the proteins back into the cell, they could restore transduction to vestibular and cochlear hair cells of the mice missing TMC1 and TMC2. This suggests that it might be possible to reverse genetic deficits at the cellular level.

"What we see in the hair cells of these double mutant knockout mice," says Dr. Griffith, "is a unique combination of properties that one would expect to see in a hair cell that has a defective transduction channel or some defect in getting that channel to where it needs to be and functioning."

To discover exactly how the channel machinery operates, the team will continue to explore how TMC1 and TMC2 interact with each other as well as how they interact with other proteins at the stereocilia tip that are essential to transduction. These include the tip link cadherins and protocadherins, which were also identified and characterized in NIDCD-funded laboratories. If these genes encode the transduction channel, they will be useful tools to screen for drugs or molecules that bind to the channel and could be used to prevent damage to hair cells.


Journal Reference:

  1. Yoshiyuki Kawashima, Gwenaëlle S.G. Géléoc, Kiyoto Kurima, Valentina Labay, Andrea Lelli, Yukako Asai, Tomoko Makishima, Doris K. Wu, Charles C. Della Santina, Jeffrey R. Holt, Andrew J. Griffith. Mechanotransduction in mouse inner ear hair cells requires transmembrane channel–like genes. Journal of Clinical Investigation, 2011; DOI: 10.1172/JCI60405

Multiple surgeries with anesthesia before age 2 linked to learning disabilities

Every year millions of babies and toddlers receive general anesthesia for procedures ranging from hernia repair to ear surgery. Now, researchers at Mayo Clinic in Rochester have found a link between multiple surgeries requiring general anesthesia before age 2 and learning disabilities later in childhood.

The study, which will be published in the November 2011 issue of Pediatrics (published online Oct. 3), was conducted with existing data on 5,357 children from the Rochester Epidemiology Project and examined the medical and educational records of 1,050 children born between 1976 and 1982 in a single school district in Rochester.

"After removing factors related to existing health issues, we found that children exposed more than once to anesthesia and surgery prior to age 2 were approximately three times as likely to develop problems related to speech and language when compared to children who never underwent surgeries at that young age," says David Warner, M.D., Mayo Clinic anesthesiologist and co-author of the study.

Among the 5,357 children in the cohort, 350 underwent surgeries with general anesthesia before their second birthday and were matched with 700 children who did not undergo a procedure with anesthesia. Of those exposed to anesthesia, 286 had only one surgery and 64 had more than one. Among the children who had multiple surgeries before age 2, 36.6 percent developed a learning disability later in life; of those with just one surgery, 23.6 percent did, compared with 21.2 percent of the children who never had surgery or anesthesia before age 2. However, researchers saw no increase in behavior disorders among children with multiple surgeries.

"Our advice to parents considering surgery for a child under age 2 is to speak with your child's physician," says Randall Flick, M.D., Mayo Clinic pediatric anesthesiologist and lead author of the study. "In general, this study should not alter decision-making related to surgery in young children. We do not yet have sufficient information to prompt a change in practice and want to avoid problems that may occur as a result of delaying needed procedures. For example, delaying ear surgery for children with repeated ear infections might cause hearing problems that could create learning difficulties later in school."

This study, funded by the U.S. Food and Drug Administration, examines the same population data used in a 2009 study by Mayo Clinic researchers, which reviewed records for children under age 4 and was published in the medical journal Anesthesiology.

The 2009 Mayo Clinic study was the first complete study in humans to suggest that exposure of children to anesthesia might affect development of the brain. Several previous studies suggested that anesthetic drugs might cause abnormalities in the brains of young animals. The study released today is significant because it examines children experiencing anesthesia and surgeries under age 2 and removes factors associated with existing health issues.

Additional co-authors include Slavica Katusic, M.D.; Robert Colligan, Ph.D.; Robert Wilder, M.D., Ph.D.; Michael Olson; Juraj Sprung, M.D., Ph.D.; Amy Weaver; and Darrell Schroeder, all of Mayo Clinic; and Robert Voigt, M.D., of Texas Children's Hospital.


Journal Reference:

  1. Randall P. Flick, Slavica K. Katusic, Robert C. Colligan, Robert T. Wilder, Robert G. Voigt, Michael D. Olson, Juraj Sprung, Amy L. Weaver, Darrell R. Schroeder, David O. Warner. Cognitive and Behavioral Outcomes After Early Exposure to Anesthesia and Surgery. Pediatrics, October 3, 2011; DOI: 10.1542/peds.2011-0351

Scientists discover an organizing principle for our sense of smell based on pleasantness

The fact that certain smells cause us pleasure or disgust would seem to be a matter of personal taste. But new research at the Weizmann Institute shows that odors can be rated on a scale of pleasantness, and this turns out to be an organizing principle for the way we experience smell. The findings, which appeared September 26 in Nature Neuroscience, reveal a correlation between the response of certain nerves to particular scents and the pleasantness of those scents. Based on this correlation, the researchers could tell by measuring the nerve responses whether a subject found a smell pleasant or unpleasant.

Our various sensory organs have evolved patterns of organization that reflect the type of input they receive. Thus the receptors in the retina, at the back of the eye, are arranged spatially for efficiently mapping out visual coordinates. The structure of the inner ear, on the other hand, is set up according to a tonal scale. But the organizational principle for our sense of smell has remained a mystery: Scientists have not even been sure whether there is a scale that determines the organization of our smell organ, much less how the arrangement of smell receptors on the membranes in our nasal passages might reflect such a scale.

A team headed by Prof. Noam Sobel of the Weizmann Institute's Neurobiology Department set out to search for the principle of organization for smell. Hints that the answer could be tied to pleasantness had been seen in research labs around the world, including that of Sobel, who had previously found a connection between the chemical structure of an odor molecule and its place on a pleasantness scale. Sobel and his team thought that smell receptors in the nose — of which there are some 400 subtypes — could be arranged on the nasal membrane according to this scale. This hypothesis goes against the conventional view, which claims that the various smell receptors are mixed — distributed evenly, but randomly, around the membrane.

In the experiment, the researchers inserted electrodes into the nasal passages of volunteers and measured the nerves' responses to different smells in various sites. Each measurement actually captured the response of thousands of smell receptors, as these are densely packed on the membrane. The scientists found that the strength of the nerve signal varies from place to place on the membrane. It appeared that the receptors are not evenly distributed, but rather, that they are grouped into distinct sites, each engaging most strongly with a particular type of scent. Further investigation showed that the intensity of a reaction was linked to the odor's place on the pleasantness scale. A site where the nerves reacted strongly to a certain agreeable scent also showed strong reactions to other pleasing smells and vice versa: The nerves in an area with a high response to an unpleasant odor reacted similarly to other disagreeable smells. The implication is that a pleasantness scale is, indeed, an organizing principle for our smell organ.

But does our sense of smell really work according to this simple principle? Natural odors are composed of a large number of molecules — roses, for instance, release 172 different odor molecules. Nonetheless, says Sobel, the most dominant of those determine which sites on the membrane will react the most strongly, while the other substances make secondary contributions to the scent.

"We uncovered a clear correlation between the pattern of nerve reaction to various smells and the pleasantness of those smells. As in sight and hearing, the receptors for our sense of smell are spatially organized in a way that reflects the nature of the sensory experience," says Sobel. In addition, the findings confirm the idea that our experience of smells as nice or nasty is hardwired into our physiology, and not purely the result of individual preference. Sobel doesn't discount the idea that individuals may experience smells differently. He theorizes that cultural context and personal experience may cause a certain amount of reorganization in smell perception over a person's lifetime.


Journal Reference:

  1. Hadas Lapid, Sagit Shushan, Anton Plotkin, Hillary Voet, Yehudah Roth, Thomas Hummel, Elad Schneidman, Noam Sobel. Neural activity at the human olfactory epithelium reflects olfactory perception. Nature Neuroscience, 2011; DOI: 10.1038/nn.2926

Older musicians experience less age-related decline in hearing abilities than non-musicians

A study led by Canadian researchers has found the first evidence that lifelong musicians experience fewer age-related hearing problems than non-musicians.

While hearing studies have already shown that trained musicians have highly developed auditory abilities compared to non-musicians, this is the first study to examine hearing abilities in musicians and non-musicians across the age spectrum — from 18 to 91 years of age.

The study was led by Baycrest's Rotman Research Institute in Toronto and is published online September 13 in the journal Psychology and Aging, ahead of print publication.

Investigators wanted to determine if lifelong musicianship protects against normal hearing decline in later years, specifically for central auditory processing associated with understanding speech. Hearing problems are prevalent in the elderly, who often report having difficulty understanding speech in the presence of background noise. Scientists describe this as the "cocktail party problem." Part of this difficulty is due to an age-related decrease in the ability to detect and discriminate acoustic information from the environment.

"What we found was that being a musician may contribute to better hearing in old age by delaying some of the age-related changes in central auditory processing. This advantage widened considerably for musicians as they got older when compared to similar-aged non-musicians," said lead investigator Benjamin Rich Zendel at Baycrest's Rotman Research Institute. Zendel is completing his Ph.D. in Psychology at the University of Toronto and conducted the study with senior cognitive scientist and assistant director of the Rotman Research Institute, Dr. Claude Alain.

In the study, 74 musicians (ages 19-91) and 89 non-musicians (ages 18-86) participated in a series of auditory assessments. A musician was defined as someone who started musical training by the age of 16, continued practicing music until the day of testing, and had an equivalent of at least six years of formal music lessons. Non-musicians in the study did not play any musical instrument.

Wearing insert earphones, participants sat in a soundproof room and completed four auditory tasks that assessed pure tone thresholds (ability to detect sounds that grew increasingly quieter); gap detection (ability to detect a short silent gap in an otherwise continuous sound, which is important for perceiving common speech sounds such as those in words that contain "aga" or "ata"); mistuned harmonic detection (ability to detect the relationship between different sound frequencies, which is important for separating sounds that occur simultaneously in a noisy environment); and speech-in-noise (ability to hear a spoken sentence in the presence of background noise).

Scientists found that being a musician did not offer any advantage on the pure-tone thresholds test across the age span. However, in the three other auditory tasks — mistuned harmonic detection, gap detection and speech-in-noise — musicians showed a clear advantage over non-musicians, and this advantage widened as both groups got older. By age 70, the average musician was able to understand speech in a noisy environment as well as an average 50-year-old non-musician, suggesting that lifelong musicianship can delay this age-related decline by 20 years.

Most importantly, the three assessments where musicians demonstrated an advantage all rely on auditory processing in the brain, while pure-tone thresholds do not. This suggests that lifelong musicianship mitigates age-related changes in the brains of musicians, which is probably due to musicians using their auditory systems at a high level on a regular basis. In other words, "use it or lose it."

The study was funded by the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada.