Silence May Lead To Phantom Noises Misinterpreted As Tinnitus

NewsPsychology (Jan. 4, 2008) — Phantom noises that mimic the ringing in the ears associated with tinnitus can be experienced by people with normal hearing in quiet situations, according to new research.

The Brazilian study, which included 66 people with normal hearing and no tinnitus, found that when subjects were placed in a quiet environment and asked to focus on their sense of hearing, 68 percent experienced phantom ringing noises similar to those of tinnitus.

By comparison, 45.5 percent of participants heard phantom ringing when asked to focus on visual stimuli rather than on their hearing, and 19.7 percent did so when asked to focus on a task in a quiet environment.
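For readers who prefer headcounts to percentages, the figures above can be converted back to approximate numbers of participants. The short Python sketch below does that arithmetic, under the assumption (not stated in the article) that each percentage refers to the full sample of 66 people.

```python
# Back-of-the-envelope arithmetic: convert the reported percentages into
# approximate headcounts. Assumption: each percentage refers to the full
# sample of 66 participants; the article does not spell out the study design.
n_participants = 66
reported_percentages = {
    "focusing on hearing in silence": 68.0,
    "focusing on visual stimuli": 45.5,
    "focusing on a task in silence": 19.7,
}

for condition, pct in reported_percentages.items():
    count = round(n_participants * pct / 100)
    print(f"{condition}: about {count} of {n_participants} participants ({pct}%)")
```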

The authors believe these findings show that both attention to symptoms and silence play a large role in how tinnitus is experienced and how severe it feels.

Tinnitus, an auditory perception that cannot be attributed to an external source, affects at least 36 million Americans on some level, with at least seven million experiencing it so severely that it interferes with daily activities.

The disorder is most often caused by damage to the microscopic endings of the hearing nerve in the inner ear, although it can also be attributed to allergies, high or low blood pressure (blood circulation problems), a tumor, diabetes, thyroid problems, injury to the head or neck, and use of medications such as anti-inflammatories, antibiotics, sedatives, antidepressants, and aspirin.

Full details of the study are published in the January 2008 edition of Otolaryngology — Head and Neck Surgery.



Story Source:

The above story is reprinted (with editorial adaptations by NewsPsychology staff) from materials provided by the American Academy of Otolaryngology – Head and Neck Surgery, via EurekAlert!, a service of AAAS.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of NewsPsychology or its staff.

Neuroscience Discovery May Hold Key To Hearing Loss Remedy

A Rutgers University team led by neuroscientist Robin Davis is opening new doors to improved hearing for the congenitally or profoundly deaf. Their findings could lead to a new generation of cochlear implants.

Cochlear implants today operate with varying degrees of success in different patients. Some may be able to hear sounds like the rush of traffic or the crash of thunder. Others can do even better, detecting voice and understanding speech while still being unable to appreciate music. With the latest research, across-the-board improvement may be within reach.

Davis' work is important for engineers and surgeons in designing new cochlear implants. "The significance of our work lies in the fact that we can change an element in a very peripheral part of the sensory system that can have an impact all the way into the brain," Davis said.

Cochlear implants, also known as "bionic ears," are surgically inserted into the snail-shell-shaped structure — the cochlea — within the inner ear. Ordinarily, hair cells line the cochlea and convert acoustic signals into electrical signals that nerves then carry to the brain. Where some hair cells exist, sounds can be amplified with a hearing aid. Where the hair cells are missing or damaged — a condition generally associated with severe hearing impairment — an implant may be used to replace their function.

Davis, a professor in the Department of Cell Biology and Neuroscience of Rutgers' School of Arts and Sciences, works with mouse cochlear tissue cultured in the laboratory. The spiraled cochlea is unwound and laid out in a line. Davis described the hair cells as analogous to the keys of a piano, with the nerves to which they attach — the spiral ganglion neurons that connect to the brain — as the piano's strings.

"Our studies have revealed that spiral ganglion auditory neurons possess a rich complexity that is only now beginning to be understood," said Davis.

The researchers found that two neurotrophin proteins in the cochlea — brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3) — figure prominently in the relay of sound messages to the brain. Research by Davis and her team, begun more than six years ago, is now producing insights into precisely how these multidimensional proteins operate in the cochlea. These most recent findings appear in the Dec. 19 issue of The Journal of Neuroscience.

While neurotrophins have historically been prized for the survival value they impart to nerve cells, the researchers found that in the cochlea they do a great deal more. Their relative proportions transform the spiral ganglion neurons into either fast-firing transmitters that carry high-pitched sound messages to the brain or slow-firing carriers that transmit lower-pitched signals. The neurotrophins accomplish this at the molecular level by tightly regulating a newly defined and complex series of signaling proteins.

Davis explained that one end of the cochlea is home to the slower-firing neurons, characterized by a preponderance of NT-3, while the other end is rich in BDNF, making those neurons faster-firing. Both neurotrophins are present in gradients along the entire length, but at any specific locale their amounts vary relative to each other — lots of BDNF and a little NT-3 in the high-frequency transmitters, for example, and the reverse toward the other end.
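To make the gradient picture concrete, the sketch below treats BDNF and NT-3 as opposing gradients along an unrolled cochlea and labels each position by the dominant neurotrophin. The linear shape, the 0-to-1 position scale, and the numbers are assumptions made purely for illustration; they are not measurements from Davis' study.

```python
# Illustration only (assumed linear gradients, not data from the study):
# opposing BDNF and NT-3 levels along an unrolled cochlea, with position 0.0
# at the low-frequency end and 1.0 at the high-frequency end.

def neurotrophin_levels(position: float) -> tuple[float, float]:
    """Return assumed relative (BDNF, NT-3) levels at a normalized position."""
    bdnf = position        # assumed to increase toward the high-frequency end
    nt3 = 1.0 - position   # assumed to increase toward the low-frequency end
    return bdnf, nt3

for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    bdnf, nt3 = neurotrophin_levels(pos)
    if bdnf > nt3:
        character = "faster-firing (BDNF-dominated)"
    elif nt3 > bdnf:
        character = "slower-firing (NT-3-dominated)"
    else:
        character = "intermediate"
    print(f"position {pos:.2f}: BDNF={bdnf:.2f}, NT-3={nt3:.2f} -> {character}")
```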

In one possible remedial approach, Davis described how the neurotrophins could potentially be pumped into a newly-designed cochlear implant and released through graduated ports along its length.

Left Brain Helps Hear Through The Noise

Our brain is very good at picking up speech even in a noisy room, an adaptation essential for holding a conversation at a cocktail party, and we are now beginning to understand the neural interactions that underlie this ability. An international research team reports today, in the online open-access journal BMC Biology, how neuroimaging investigations have revealed that the brain's left hemisphere helps discern the signal from the noise.

In our daily lives, we are exposed to many different sounds from multiple sources at the same time, from traffic noise to background chatter. These noisy signals interact and compete with each other as the brain processes them, a phenomenon called simultaneous masking. The brain's response to masking stimuli gives rise to the 'cocktail-party effect', whereby we are able to hear a particular sound even in the presence of a competing sound or background noise.

Hidehiko Okamoto of the Institute for Biomagnetism and Biosignal Analysis in Muenster, Germany, together with colleagues in Japan and Canada, used a neuroimaging technique known as magnetoencephalography (MEG) to follow the neural mechanisms and hemispheric differences underlying simultaneous masking as volunteers listened to different combinations of test and background sounds. Test sounds were played either to the left or to the right ear, while the competing noise was presented either to the same or to the opposite ear.

By monitoring the brain's response to these different sound combinations, the team observed that the left hemisphere was the site of most neural activity associated with processing sounds in a noisy environment.

Journal article: Left hemispheric dominance during auditory processing in noisy environment. Hidehiko Okamoto, Henning Stracke, Bernhard Ross, Ryusuke Kakigi and Christo Pantev. BMC Biology.

Ears Ringing? Cells In Developing Ear May Explain Tinnitus

Brain scientists at Johns Hopkins have discovered how cells in the developing ear make their own noise, long before the ear is able to detect sound around them. The finding, reported in Nature, helps to explain how the developing auditory system generates brain activity in the absence of sound. It also may explain why people sometimes experience tinnitus and hear sounds that seem to come from nowhere.

The research team made their discovery while studying the properties of non-nerve cells in the ears of young rats. These so-called support cells were thought to be silent bystanders not directly involved in nerve communication. However, to the researchers' surprise, these cells showed robust electrical activity, similar to nerve cells. Further, this activity occurred spontaneously, without sound or any external stimulus.

"It's long been thought that nerve cells that connect auditory organs to the brain need to experience sound or other nerve activity to find their way to the part of the brain responsible for processing sound," says the study's lead author, Dwight Bergles, Ph.D., an associate professor of neuroscience at Hopkins. "So when we saw that these supporting cells could generate their own electrical activity, we suspected they might somehow be involved in triggering the activity required for proper nerve wiring."

To figure out how these cells were generating electrical pulses, Bergles' team suspected that a chemical might be involved; so they applied a number of different candidate drugs and chemicals to the developing cochlea (the small, hollow and liquid-filled chamber in the inner ear that converts sound waves to electrical signals), hoping to block the mystery trigger. The few drugs that altered the electrical output all disabled ATP (adenosine triphosphate), a chemical most often used as a cell's energy currency but also, as in this case, as a signal to communicate with other cells.

According to Bergles, a breakthrough came when it was discovered that ATP also caused the supporting cells to change their shape. By simply videotaping the developing cochlea, the team was able to monitor where and when ATP was released. After studying these movies, they found that ATP was being released near hair cells, the cells that are responsible for transferring sound information to auditory nerves. It was known that hair cells have receptors for ATP, so they might also be affected by the ATP released from the supporting cells. Indeed, the team found that hair cells also showed spontaneous electrical activity, which occurred at the same time as the responses in neighboring support cells and was blocked by drugs that block ATP receptors.

In a domino-like effect, ATP then signals the hair cells to release another chemical, glutamate, which then activates the nerve cells that project into the brain. "It is as if ATP substitutes for sound when the ear is still immature and physically incapable of detecting sound," says Bergles, adding that "the cells we have been studying seem to be warming up the machinery that will later be used to transmit sound signals to the brain."
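The sequence Bergles describes (support cells release ATP, nearby hair cells respond and release glutamate, and the attached auditory nerve fibers fire) can be summarized as a short signaling chain. The toy sketch below is only a schematic of that sequence; the cell counts, neighborhood size, and function names are invented for illustration.

```python
import random

# Toy schematic of the signaling chain described above. All numbers and names
# are invented for illustration: a few support cells release ATP, the ATP
# depolarizes a handful of neighboring hair cells, and those hair cells
# release glutamate onto the auditory nerve fibers attached to them.

N_HAIR_CELLS = 100
RELEASING_SUPPORT_CELLS = 2   # assumed: only a few cells release ATP at a time
NEIGHBORHOOD = 3              # assumed: ATP reaches only a few nearby hair cells

def spontaneous_activity_step(rng: random.Random) -> list[int]:
    """One moment of sound-free activity in the immature cochlea (schematic)."""
    release_sites = rng.sample(range(N_HAIR_CELLS), k=RELEASING_SUPPORT_CELLS)
    firing_nerve_fibers = []
    for site in release_sites:
        for offset in range(NEIGHBORHOOD):
            hair_cell = (site + offset) % N_HAIR_CELLS
            # Glutamate from this hair cell drives its auditory nerve fiber.
            firing_nerve_fibers.append(hair_cell)
    return sorted(firing_nerve_fibers)

rng = random.Random(0)
print("nerve fibers firing with no sound present:", spontaneous_activity_step(rng))
```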

"We think that only a few cells release ATP at one time," says Bergles. "And that small amount of free-floating ATP then activates only a few nearby hair cells." This may help associated nerve cells, far away in the depths of the brain, figure out who and where their neighbors are.

Bergles acknowledges that his experiments raise the question of why a human or any animal would need to "hear" before birth. He speculates that the ability to hear subtle differences, like the inflection in one's voice, "requires a lot of fine-tuning based on where in the brain the nerves connect. It could be that brief bursts of electrical activity in just a few nerve cells at a time help do that fine-tuning so the system works well."

While this activity likely is essential for the auditory system's proper development, it could be bad in the adult, mature nervous system as it would trigger electrical signals in the absence of sound. However, as the ear matures during the first two weeks of a rat's life, most of the cells that release ATP disappear so that by the time the rat can hear sound, all the spontaneous electrical activity in its ears has stopped.

Although there is no ATP floating around at that point, the hair cells continue to be able to respond to it, and exposure to loud sounds can trigger ATP release in the ear. Bergles suspects that "if ATP were released by the remaining support cells, it may cause the sensation of sound when there is none," a condition known as tinnitus or ringing in the ears. Alternatively, he notes that bursts of activity might trigger changes in the connectivity of neurons in the brain, just like it does during development, eventually leading to abnormal activity that is perceived as sound.

The research was funded by the National Institutes of Health.

Authors on the paper are Nicolas Tritsch, Eunyoung Yi, Elisabeth Glowatzki and Bergles, all of Hopkins, and Jonathan Gale of University College London.

New Hearing Mechanism Discovered

MIT researchers have discovered a hearing mechanism that fundamentally changes the current understanding of inner ear function. This new mechanism could help explain the ear's remarkable ability to sense and discriminate sounds. Its discovery could eventually lead to improved systems for restoring hearing.

MIT Professor Dennis M. Freeman, working with graduate student Roozbeh Ghaffari and research scientist Alexander J. Aranyosi, found that the tectorial membrane, a gelatinous structure inside the cochlea of the ear, is much more important to hearing than previously thought. It can selectively pick up and transmit energy to different parts of the cochlea via a kind of wave that is different from that commonly associated with hearing.

Ghaffari, the lead author of the paper, is in the Harvard-MIT Division of Health Sciences and Technology, as is Freeman. All three researchers are in MIT's Research Laboratory of Electronics. Freeman is also in MIT's Department of Electrical Engineering and Computer Science and the Massachusetts Eye and Ear Infirmary.

It has been known for over half a century that inside the cochlea sound waves are translated into up-and-down waves that travel along a structure called the basilar membrane. But the team has now found that a different kind of wave, a traveling wave that moves from side to side, can also carry sound energy. This wave moves along the tectorial membrane, which is situated directly above the sensory hair cells that transmit sounds to the brain. This second wave mechanism is poised to play a crucial role in delivering sound signals to these hair cells.

In short, the ear can mechanically translate sounds into two different kinds of wave motion at once. These waves can interact to excite the hair cells and enhance their sensitivity, "which may help explain how we hear sounds as quiet as whispers," says Aranyosi. The interactions between these two wave mechanisms may be a key part of how we are able to hear with such fidelity – for example, knowing when a single instrument in an orchestra is out of tune.
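One simple way to picture the two motions is as two traveling waves along the cochlear length, one transverse (the familiar up-and-down basilar membrane wave) and one radial (the newly described side-to-side tectorial membrane wave). The sketch below uses arbitrary amplitudes, wavelengths, and speeds purely for illustration; it is not a model taken from the paper.

```python
import math

# Illustration only (arbitrary amplitudes, wavelengths, and speeds): two
# traveling waves along the cochlear length x at time t, one transverse
# (basilar membrane, up-and-down) and one radial (tectorial membrane,
# side-to-side). Hair cells sit between the two structures, so the drive
# they receive depends on how the two motions line up at each place.

def basilar_wave(x: float, t: float) -> float:
    """Assumed up-and-down displacement, arbitrary units."""
    return 1.0 * math.sin(2 * math.pi * (x / 4.0 - 2.0 * t))

def tectorial_wave(x: float, t: float) -> float:
    """Assumed side-to-side displacement, arbitrary units."""
    return 0.5 * math.sin(2 * math.pi * (x / 3.0 - 2.0 * t))

t = 0.1
for x in (0.0, 1.0, 2.0, 3.0):
    print(f"x={x:.1f}: up-down={basilar_wave(x, t):+.2f}, "
          f"side-to-side={tectorial_wave(x, t):+.2f}")
```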

"We know the ear is enormously sensitive" in its ability to discriminate between different kinds of sound, Freeman says. "We don't know the mechanism that lets it do that." The new work has revealed "a whole new mechanism that nobody had thought of. It's really a very different way of looking at things."

The tectorial membrane is difficult to study because it is small (the entire length could fit inside a one-inch piece of human hair), fragile (it is 97 percent water, with a consistency similar to that of a jellyfish), and nearly transparent. In addition, sound vibrations cause nanometer-scale displacements of cochlear structures at audio frequencies. "We had to develop an entirely new class of measurement tools for the nano-scale regime," Ghaffari says.

The team learned about the new wave mechanism by suspending an isolated piece of tectorial membrane between two supports, one fixed and one moveable. They launched waves at audio frequencies along the membrane and watched how it responded by using a stroboscopic imaging system developed in Freeman's lab. That system can measure nanometer-scale displacements at frequencies up to a million cycles per second.

The team's discovery has implications for how we model cochlear mechanisms. "In the long run, this could affect the design of hearing aids and cochlear implants," says Ghaffari. The research also has implications for inherited forms of hearing loss that affect the tectorial membrane. Previous measurements of cochlear function in mouse models of these diseases "are consistent with disruptions of this second wave," Aranyosi adds.

Because the tectorial membrane is so tiny and so fragile, people "tend to think of it as something that's wimpy and not important," Freeman says. "Well, it's not wimpy at all." The new discovery "that it can transport energy throughout the cochlea is very significant, and it's not something that's intuitive."

The research is described in the advance online issue of the Proceedings of the National Academy of Sciences the week of October 8.

This research was funded by the National Institutes of Health.

Searching For The Brain Center Responsible For Tinnitus

For the more than 50 million Americans who experience the phantom sounds of tinnitus — ringing in the ears that can range from annoying to debilitating — certain well-trained rats may be their best hope for finding relief.

Researchers at the University at Buffalo have studied the condition for more than 10 years and have developed animal models that can "tell" the researchers whether the animals are experiencing tinnitus.

These scientists now have received a $2.9 million five-year grant from the National Institutes of Health to study the brain signals responsible for creating the phantom sounds, using the animal models, and to test potential therapies to quiet the noise.

The research will take place at the Center for Hearing and Deafness, part of the Department of Communicative Disorders and Sciences in the university's College of Arts and Sciences. Richard Salvi, Ph.D., director of the center, is principal investigator. Scientists from UB's Department of Nuclear Medicine and from Roswell Park Cancer Institute in Buffalo are major collaborators on portions of the project.

Tinnitus can be caused by continued exposure to loud noise, by normal aging and, to a much lesser extent, by certain anti-cancer drugs. It is a major concern in the military: 30 percent of Iraq and Afghanistan combat veterans suffer from the condition.

"For many years it was thought that the buzzing or ringing sounds heard by people with tinnitus originated in the ear," Salvi said. "But by using positron emission tomography [known as PET scanning] to view the brain activity of people with tinnitus at UB, we've been able to show that these phantom auditory sensations originated somewhere in brain, not in the ear. That changed the whole research approach."

Salvi and colleagues discovered that when the brain's auditory cortex begins receiving diminished neural signals from the cochlea, the hearing organ, due to injury or age, the auditory cortex "turns up the volume," amplifying the weak neural signals that remain. This amplification may be experienced as the buzzing, ringing, or hissing characteristic of tinnitus. Currently there is no drug or treatment that can abolish these phantom sounds.
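One way to read this "turning up the volume" account is as a gain-compensation loop: when the average signal from the cochlea drops, central gain rises, and the spontaneous background activity that is always present gets amplified along with any real sound. The sketch below is a generic illustration of that idea with assumed numbers; it is not the UB group's actual model.

```python
# Generic illustration of a central-gain account of tinnitus. The numbers are
# assumed for illustration, and this is not the UB group's actual model: the
# cortex raises its gain as the average cochlear signal weakens, so spontaneous
# background activity is amplified and may be heard as phantom sound.

TARGET_LEVEL = 1.0
SPONTANEOUS_NOISE = 0.1   # assumed baseline activity present even in silence
MAX_GAIN = 10.0           # cap to keep the toy example stable

def central_gain(average_cochlear_signal: float) -> float:
    """Gain the cortex applies to keep its average input near the target."""
    return min(TARGET_LEVEL / max(average_cochlear_signal, 0.05), MAX_GAIN)

for label, avg_signal in (("healthy cochlea", 1.0), ("damaged cochlea", 0.2)):
    gain = central_gain(avg_signal)
    phantom = gain * SPONTANEOUS_NOISE   # what is "heard" in a silent room
    print(f"{label}: gain={gain:.1f}, activity perceived in silence={phantom:.2f}")
```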

Over the past decade, Salvi's team has developed the animal models, allowing the researchers to explore the neurophysiological and biological mechanisms associated with tinnitus, the major focus of this new study. Ed Lobarinas, Ph.D., and Wei Sun, Ph.D., in the Department of Communicative Disorders and Sciences, developed the models.

One of the major goals of the project is to try to identify the neural signature of tinnitus — what aberrant pattern of neural activity in the auditory cortex is associated with the onset of tinnitus. In another study phase, the researchers will assess neural activity throughout the entire brain using a radioactive tracer, fluorodeoxyglucose (FDG), which is taken up preferentially into regions of the brain that are highly active metabolically.    

The third phase of the study involves the use of potential therapeutic drugs to suppress salicylate- or noise-induced tinnitus. In early studies, the researchers were able to modulate certain ion channels with one unique compound and, at the highest doses, to completely eliminate aspirin-induced tinnitus. This phase involves collaboration with scientists at NeuroSearch Pharmaceuticals in Denmark.

Combined TMS Shows Potential In Tinnitus Treatment

It is estimated that more than 50 million Americans suffer from tinnitus, a condition in which the patient experiences ringing or other head noises that are not produced by an external source. The disorder can occur in one or both ears, range in pitch from a low roar to a high squeal, and may be continuous or sporadic.

This often debilitating condition has been linked to ear injuries, circulatory system problems, noise-induced hearing loss, wax build-up in the ear canal, medications harmful to the ear, ear or sinus infections, misaligned jaw joints, head and neck trauma, Ménière’s disease, and an abnormal growth of bone of the middle ear.

A new study presented at the 2007 AAO-HNSF Annual Meeting & OTO EXPO shows promise for a tinnitus treatment using combined transcranial magnetic stimulation (TMS), a noninvasive method to excite neurons in the brain. The study included 32 patients who received either low-frequency temporal TMS or a combination of high-frequency prefrontal and low-frequency temporal TMS.

Treatment effects were assessed by using a standardized tinnitus questionnaire directly after the therapy and three months later. Evaluation after three months revealed remarkable advantages for the group of patients who received the combination TMS treatment.

The results of the study support recent data that suggest that auditory and non-auditory areas of the brain are involved in the pathophysiology of tinnitus, and that this information can guide future treatment strategies.

Title: Combined Temporal and Prefrontal TMS for Tinnitus Treatment

Authors: Tobias Kleinjung, MD; Peter Eichhammer, MD; Michael Landgrebe, MD; Philipp Sand, MD; Goeran Hajak, MD; Juergen Strutz, MD, PhD; Berthold Langguth, MD

New Cell Culturing Method Pumps Up The Volume

In a breakthrough that will likely accelerate research aimed at cures for hearing loss, tinnitus, and balance problems, scientists have perfected a laboratory culturing technique that provides a reliable new source of cells critical to understanding certain inner-ear disorders.

The cells, known as hair cells, are the essential sound and balance detectors in the inner ear. Damage to these cells is a key factor in hearing and balance loss, and while birds, fishes, and amphibians can quickly regrow damaged hair cells, humans cannot. Until now, scientists seeking clues to this problem have been hampered by difficult procedures required to gather these cells for their research.

MBL Whitman Investigators Zhengqing Hu and Jeffrey Corwin, both of the University of Virginia School of Medicine, developed a new technique for isolating cells from the inner ears of chicken embryos and growing them in their laboratory. The scientists achieved these results by inducing avian cells to differentiate into hair cells via a process known as mesenchymal-to-epithelial transition.

Hu and Corwin were able to freeze and thaw the cultured cells, then grow new cells from the thawed cultures – a discovery that will make hair cells accessible to more researchers.

The study of hair cells is crucial to understanding hearing loss because hair cells are a precious commodity in humans. We are born with a limited number of these sound detectors in each ear, which can be easily damaged by age, certain illnesses, loud noises, and adverse reactions to medications. Once damaged, the cells do not grow back, causing hearing and balance problems.

"Until now, scientists working to understand many inner ear disorders had to resort to difficult microdissections to gather even small numbers of these cells, which limited the types of research that could be pursued and slowed the pace of discoveries," says Corwin.

The availability of vials of frozen cells that can be induced to form hair cells should remove a significant barrier to progress toward the development of treatments for the more than 20 million Americans who suffer from hearing loss and balance problems.

The research is published in the September 24-28 early edition of the Proceedings of the National Academy of Sciences.

Dr. Corwin, a professor of neuroscience at the University of Virginia School of Medicine, is a co-director of the MBL's Biology of the Inner Ear course. 

The research was supported by two grants from the National Institutes of Health and by the Grass Foundation.

Mild Hearing Loss Leaves Lasting Impact On Neurological Processes

Mild to moderate forms of hearing loss can have a lasting impact on the auditory cortex, according to findings by researchers at New York University's Center for Neural Science. The study, which is the first to show central effects of mild hearing loss, appears in a recent issue of the Journal of Neuroscience.

Previously, researchers had been unable to conclusively determine the neurological impact of mild forms of hearing loss, which occur when the pathway by which sound reaches the cochlea is disrupted, as can happen with middle ear infections during childhood. The NYU study sought to address this question in an animal model by measuring the impact of conductive hearing loss without injury to the cochlea.

The researchers induced hearing loss in the subjects during early development, then measured the functionality of neural connections within the subjects' auditory cortex, which processes all acoustic cues.

The results showed that the projection to auditory cortex had changed following a brief period of hearing loss. Specifically, the researchers found that the synaptic response of the auditory neurons adapted more rapidly and to a greater extent. They also found that auditory cortex neurons became more sensitive to stimulation.

These findings indicate that auditory cortex function is susceptible to relatively modest hearing loss during development and suggest that perceptual deficits may be linked to alterations in the central nervous system.

The study was authored by NYU scientists Han Xu, Vibhakar Kotak, and Dan Sanes, working in NYU's Center for Neural Science.

Alternative Treatment Brings Hearing To Both Ears

Thomas Lynch, age 2, is now able to hear on both sides of his head with a device and surgical procedure pioneered by a surgeon-led team at Loyola University Medical Center.

Born with no ear canal on his left side, Tom had significant hearing impairment and went to Loyola University Medical Center, where Dr. Sam Marzo surgically implanted a bone-anchored cochlear stimulator that delivers sound to the inner ear by bone conduction. Marzo activated Tom’s device at Loyola’s Oakbrook Terrace Medical Center.

“It harnesses the ability of the skull bone to conduct sound vibrations,” said Marzo, associate professor of otolaryngology, Loyola University Chicago Stritch School of Medicine, Maywood, Ill. “It will enable Tom to perceive sounds on both sides of his head, which is critical for his speech development.”

Bone conduction is an alternative way to stimulate the cochlea if the regular sound route—via the ear canal—is interrupted or not available. The cochlea is the snail-shaped part of the inner ear that is responsible for hearing.

The device may be an alternative for people whose deafness cannot be helped by traditional hearing aids or cochlear implants.

The treatment is applicable for single-sided deafness, which affects some 60,000 people each year. The device can be snapped on and off; it is removed for showering and sleeping.

“People unable to hear as a result of chronic ear inflammation or drainage can benefit from this new therapy,” said Marzo, who also serves as program director of the Hearing and Balance Center at Loyola’s Oakbrook Terrace Medical Center, One South Summit Ave, Oakbrook Terrace, Ill. “The device will work for people who do not have a functioning ear canal.”

The treatment has also been used successfully for sudden hearing loss, as well as for hearing loss secondary to acoustic neuroma (a tumor) and Ménière's disease (excessive fluid in the inner ear).

Marzo noted that patients must have one working cochlea for the treatment to be effective.

To provide the therapy, a small titanium post is surgically implanted in the skull bone, one-half inch behind the ear. It takes three months for the implant to be integrated into the bone. A 1.5-inch x 1-inch sound processor, which snaps onto the post, transmits sound via bone conduction directly to the cochlea. The result is the sensation of hearing from both ears.

Hearing is an important safety issue, Marzo said. For example, walkers, joggers and bicyclists need to hear oncoming traffic. “Without being able to hear on both sides, it is difficult to perceive direction,” he said.

Marzo has a noninvasive test to determine whether the bone-anchored hearing aid will be effective for a patient. To begin, the patient puts on what appears to be a set of headphones. One of the earpieces is placed on the mastoid bone behind the ear. This earpiece is a bone oscillator, about the size of a U.S. quarter, that sends sound waves to the inner ear via bone conduction. Then, for the test, the patient blocks out any ear-canal sound by putting a finger in each ear, and the device is turned on. "If they are able to hear at this point, the procedure will work," said Marzo.