Healthy ears hear the first sound, ignoring the echoes, barn owl study finds

Voices carry, reflect off objects and create echoes. Most people rarely hear the echoes; they process only the first sound to arrive. For the hard of hearing, though, an acoustically challenging room can be a problem: for them, the echoes carry. Ever listen to a lecture recorded in a large room?

That most people process only the first-arriving sound is not new. Physicist Joseph Henry, the first secretary of the Smithsonian Institution, noted the phenomenon in 1849; it has since come to be known as the precedence effect. Classrooms, lecture halls and other public gathering places have long been designed to reduce reverberating sounds, and scientists have been trying to identify a precise neural mechanism that shuts down trailing echoes.

In a new paper published in the Aug. 26 issue of the journal Neuron, University of Oregon scientists Brian S. Nelson, a postdoctoral researcher, and Terry T. Takahashi, professor of biology and member of the UO Institute of Neuroscience, suggest that the filtering process is really simple.

When a sound reaching the ear is loud enough, auditory neurons simply accept that sound and ignore subsequent reverberations, Takahashi said. "If someone were to call out your name from behind you, that caller's voice would reach your ears directly from his or her mouth, but those sound waves will also bounce off your computer monitor and arrive at your ears a little later and get mixed in with the direct sound. You aren't even aware of the echo."

Takahashi studies hearing in barn owls with the goal of understanding the fundamentals of sound processing so that future hearing aids, for example, might be developed. In studying how his owls hear, he has usually relied on brief clicks presented one at a time.

For the new study, funded by the National Institutes of Deafness and Communication Disorders, Nelson said: "We studied longer sounds, comparable in duration to many of the consonant sounds in human speech. As in previous studies, we showed that the sound that arrives first — the direct sound — evokes a neural and behavioral response that is similar to a single source. What makes our new study interesting is that the neural response to the reflection was not decreased in comparison to when two different sounds were presented."

The owls were presented with two distinct sounds, direct and reflected, with the first-arriving sound driving the neurons to fire. "The owls' auditory neurons are very responsive to the leading edge of the peaks," said Takahashi, "and those leading edges in the echo are masked by the peak in the direct waveform that preceded it. The auditory cells therefore can't respond to the echo."

When the leading sound's amplitude modulation is shallower and more time passes between the two sounds, this simple filtering process disappears and the owls respond to the sounds as coming from different locations, the researchers noted.
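
The masking account lends itself to a simple numerical illustration. The sketch below is not the authors' model, and the stimulus parameters (sample rate, modulation rate, echo delay and attenuation) are invented: it builds an amplitude-modulated noise burst, adds a delayed, attenuated copy as a stand-in for a reflection, and checks that the envelope onsets of the mixture closely track those of the direct sound alone, which is the sense in which onset-sensitive neurons would be dominated by the first-arriving sound.

```python
import numpy as np

# Toy stimulus: an amplitude-modulated noise burst plus a delayed,
# attenuated copy standing in for a room reflection.
rng = np.random.default_rng(0)
fs = 20_000                                     # sample rate, Hz
t = np.arange(0, 0.2, 1 / fs)                   # 200 ms stimulus

mod = 0.5 * (1 + np.sin(2 * np.pi * 50 * t))    # 50 Hz, deeply modulated envelope
direct = mod * rng.standard_normal(t.size)      # "direct" sound

delay_ms, attenuation = 3.0, 0.6                # echo: 3 ms later, 60% amplitude
d = int(fs * delay_ms / 1000)
echo = np.zeros_like(direct)
echo[d:] = attenuation * direct[:-d]
mixture = direct + echo

def onset_strength(x, win=40):
    """Rectified rise of the smoothed envelope: a crude stand-in for
    onset-sensitive auditory neurons."""
    env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    return np.maximum(np.diff(env, prepend=env[0]), 0)

# The echo's rising edges arrive while the direct envelope is still high,
# so the mixture's onsets line up with the direct sound's onsets.
r = np.corrcoef(onset_strength(mixture), onset_strength(direct))[0, 1]
print(f"onset correlation, mixture vs. direct sound alone: {r:.2f}")
```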

The significance, Takahashi said, is that for more than 60 years researchers have sought a physiological mechanism that actively suppresses echoes. "Our results suggest that you might not need such a sophisticated system."


Journal Reference:

  1. Brian S. Nelson, Terry T. Takahashi. Spatial Hearing in Echoic Environments: The Role of the Envelope in Owls. Neuron, 2010; 67 (4): 643 DOI: 10.1016/j.neuron.2010.07.014

Sign language speakers' hands, mouths operate separately

When people communicate in sign languages, they also move their mouths. But scientists have debated whether mouth movements resembling spoken language are part of the sign itself or are connected directly to English. In a new study of British Sign Language, signers made different mistakes with the sign and with the mouthing, suggesting that the hand and lip movements are separate in the signer's brain, not part of the same sign.

David P. Vinson, of University College London, and his colleagues Robin L. Thompson, Robert Skinner, Neil Fox, and Gabriella Vigliocco planned to do basic research on how signers process language. They recruited both deaf and hearing signers, all of whom grew up signing with deaf parents. Each person sat in front of a monitor with a video camera pointed at them. They were shown sets of pictures—for example, one set contained various fruits, another set contained modes of transportation—and were asked to sign the name of each item. In another session, they were shown those words in English and asked to translate them into British Sign Language. The idea is to show the pictures or words quickly enough that people tend to make mistakes; those errors help reveal how language is processed.

The researchers only planned to look at the signs, but the videos also captured the signers’ mouths. “We noticed that there were quite a few cases where the hands and the mouth seemed to be doing something different,” says Vinson. When people were looking at pictures, the hands and mouth would usually make the same mistakes—signing and mouthing “banana” when the picture was an apple, for example. But when they were translating English words, the hands made the same kind of mistakes, but the lips didn’t. This suggests that the lip movement isn’t part of the sign. “In essence, they’re doing the same thing as reading an English word aloud without pronouncing it,” says Vinson. “So they seem to be processing two languages at the same time.” This study appears in Psychological Science, a journal of the Association for Psychological Science.

British Sign Language is a separate language from both English and American Sign Language; it developed naturally, and is mentioned in historical records as far back as 1576. Most British signers are bilingual in English. Vinson speculates that mouthing English words may help deaf people develop literacy in English.


Journal Reference:

  1. David P. Vinson, Robin L. Thompson, Robert Skinner, Neil Fox, and Gabriella Vigliocco. The Hands and Mouth Do Not Always Slip Together in British Sign Language: Dissociating Articulatory Channels in the Lexicon. Psychological Science, (in press)

Deaf, hard-of-hearing students perform first test of sign language by cell phone

University of Washington engineers are developing the first device able to transmit American Sign Language over U.S. cellular networks. The tool is just completing its initial field test, conducted by participants in a UW summer program for deaf and hard-of-hearing students.

"This is the first study of how deaf people in the United States use mobile video phones," said project leader Eve Riskin, a UW professor of electrical engineering.

The MobileASL team has been working to optimize compressed video signals for sign language. By increasing image quality around the face and hands, researchers have brought the data rate down to 30 kilobytes per second while still delivering intelligible sign language. MobileASL also uses motion detection to identify whether a person is signing or not, in order to extend the phones' battery life during video use.
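
The release doesn't say how the signing detector works; a minimal sketch of the general idea is a frame-difference test: if few pixels change between consecutive frames, the user probably isn't signing and the phone can drop to a trickle frame rate. Everything below, including the function names, thresholds and frame rates, is illustrative and not MobileASL's actual implementation.

```python
import numpy as np

def signing_activity(prev_frame: np.ndarray, frame: np.ndarray,
                     pixel_thresh: int = 20, active_fraction: float = 0.02) -> bool:
    """Return True when enough pixels changed between consecutive grayscale
    frames to suggest the user is signing. Thresholds are made up."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > pixel_thresh).mean() > active_fraction

def choose_frame_rate(prev_frame: np.ndarray, frame: np.ndarray,
                      full_fps: int = 12, idle_fps: int = 1) -> int:
    """Encode and transmit at full rate only while signing is detected;
    otherwise drop to a low rate to save battery and bandwidth."""
    return full_fps if signing_activity(prev_frame, frame) else idle_fps

# Quick check with synthetic 240x320 grayscale frames.
rng = np.random.default_rng(0)
still = rng.integers(0, 255, (240, 320), dtype=np.uint8)
moving = still.copy()
moving[80:160, 100:220] += 40             # a patch of motion (hands in frame)
print(choose_frame_rate(still, still))    # 1  (idle)
print(choose_frame_rate(still, moving))   # 12 (signing)
```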

Transmitting sign language as efficiently as possible increases affordability, improves reliability on slower networks and extends battery life, even on devices that might have the capacity to deliver higher quality video.

This summer's field test is allowing the team to see how people use the tool in their daily lives and what obstacles they encounter. Eleven participants are testing the phones for three weeks. They meet with the research team for interviews and occasionally receive pop-up survey questions about call quality after completing a call.

The field test began July 28 and concludes August 18. In the first two and a half weeks of the study, some 200 calls were made with an average call duration of a minute and a half, researchers said. A larger field study will begin this winter.

"We know these phones work in a lab setting, but conditions are different in people's everyday lives," Riskin said. "The field study is an important step toward putting this technology into practice." Participants in the current field test are students in the UW Summer Academy for Advancing Deaf and Hard of Hearing in Computing. The academy accepts academically gifted deaf and hard-of-hearing students interested in pursuing computing careers. Students spend nine weeks at the UW taking computer programming and animation classes, meeting with deaf and hard-of-hearing role models who already work in computing fields, UW graduate students and visiting local computer software and hardware companies.

Most study participants say texting or e-mail is currently their preferred method for distance communication. Their experiences with the MobileASL phone are, in general, positive.

"It is good for fast communication," said Tong Song, a Chinese national who is studying at Gallaudet University in Washington, D.C. "Texting sometimes is very slow, because you send the message and you're not sure that the person is going to get it right away. If you're using this kind of phone then you're either able to get in touch with the person or not right away, and you can save a lot of time."

Josiah Cheslik, a UW undergraduate and past participant in the summer academy who is now a teaching assistant, agreed.

"Texting is for short things, like 'I'm here,' or, 'What do you need at the grocery store?'" he said. "This is like making a real phone call."

Text-based communication can also lead to mix-ups.

"Sometimes with texting people will be confused about what it really means," Song said. "With the MobileASL phone people can see each other eye to eye, face to face, and really have better understanding."

Some students also use video chat on a laptop, home computer or video phone terminal, but none of these existing technologies for transmitting sign language fits in your pocket.

Cheslik recounts that during the study one participant got lost riding a Seattle city bus, and the two were able to communicate using MobileASL. The student on the bus described what he was seeing, and Cheslik helped him navigate to where he wanted to go.

Newly released high-end phones, such as the iPhone 4 and the HTC Evo, offer video conferencing. But users are already running into hitches: carriers have blocked the bandwidth-hogging video conferencing from their cellular networks and are rolling out tiered pricing plans that would charge heavy data users more.

The UW team estimates that iPhone's FaceTime video conferencing service uses nearly 10 times the bandwidth of MobileASL. Even after the anticipated release of an iPhone app to transmit sign language, people would need to own an iPhone 4 and be in an area with very fast network speeds in order to use the service. The MobileASL system could be integrated with the iPhone 4, the HTC Evo, or any device that has a video camera on the same side as the screen.

"We want to deliver affordable, reliable ASL on as many devices as possible," Riskin said. "It's a question of equal access to mobile communication technology."

Jessica Tran, a doctoral student in electrical engineering who is running the field study, is experimenting with different compression systems to extend the life of the battery under heavy video use. Electrical engineering doctoral student Jaehong Chon made MobileASL compatible with H.264, an industry standard for video compression. Tressa Johnson, a master's student in library and information science and a certified ASL interpreter, is studying the phones' impact on the deaf community.

The MobileASL research is primarily funded by the National Science Foundation, with additional gifts from Sprint Nextel Corp., Sorenson Communications and Microsoft Corp. Collaborators at the UW are Richard Ladner, professor of computer science and engineering, and Jacob Wobbrock, assistant professor in the Information School.

The Summer Academy for Advancing Deaf and Hard of Hearing in Computing is applying for a third round of funding from the National Science Foundation. Additional support for this year's program came from the Johnson Family Foundation, the Bill and Melinda Gates Foundation, Cray Inc., Oracle Corp., Google Inc. and SignOn Inc.

Humans imitate aspects of speech we see

Humans are incessant imitators. We unintentionally imitate subtle aspects of each other's mannerisms, postures and facial expressions. We also imitate each other's speech patterns, including inflections, talking speed and speaking style. Sometimes, we even take on the foreign accent of the person to whom we're talking, leading to embarrassing consequences.

New research by the University of California, Riverside, published in the August issue of the journal Attention, Perception, & Psychophysics, shows that unintentional speech imitation can even make us sound like people whose voices we never hear. The journal is published by The Psychonomic Society, which promotes scientific research in psychology and allied sciences.

UCR psychology professor Lawrence D. Rosenblum and graduate students Rachel M. Miller and Kauyumari Sanchez found that when people lipread from a talker and say aloud what they've lipread, their speech sounds like that of the talker.

The researchers asked hearing individuals with no formal lipreading experience to watch a silent face articulate 80 simple words, such as tennis and cabbage. Those individuals were asked to identify the words by saying them out loud clearly and quickly. To make the lipreading task easier, the test subjects were given a choice of two words (e.g., tennis or table). They were never asked to imitate or repeat the talker.

Even so, the researchers found that words spoken by the test subjects sounded more like the words of the talker they lipread than did words they spoke when simply reading from a list. That finding is evidence that unintentional speech imitation extends to lipreading, even for normal hearing individuals with no formal lipreading experience, they wrote in a paper titled "Alignment to Visual Speech Information."

"Whether we are hearing or lipreading speech articulations, a talker's speaking style has subtle influences on our own manner of speaking," Rosenblum says. "This unintentional imitation could serve as a social glue, helping us to affiliate and empathize with each other. But it also might reflect deep aspects of the language function. Specifically, it adds to evidence that the speech brain is sensitive to — and primed by — speech articulation, whether heard or seen. It also adds to the evidence that a familiar talker's speaking style can help us recognize words."

The research project was funded by a grant from the National Institutes of Health's National Institute on Deafness and Other Communication Disorders.


Journal Reference:

  1. Miller et al. Alignment to visual speech information. Attention, Perception, & Psychophysics, 2010; 72 (6): 1614 DOI: 10.3758/APP.72.6.1614

Socioeconomic status not associated with access to cochlear implants, study finds

Poor children with hearing loss appear to have equal access to cochlear implantation, but have more complications and worse compliance with follow-up regimens than children with higher socioeconomic status, according to a report in the July issue of Archives of Otolaryngology-Head & Neck Surgery, one of the JAMA/Archives journals.

"Cochlear implantation is a powerful tool for helping children with severe to profound sensorineural hearing loss gain the ability to hear, achieve age-appropriate reading skills and develop communication skills equal to those of their hearing counterparts," the authors write as background information in the article. "Owing to cochlear implant's well established societal cost-effectiveness, the U.S. Department of Health and Human Services included cochlear implantation as a point of emphasis of Healthy People 2010." However, recent studies estimate that only 55 percent of all candidates for cochlear implants age 1 to 6 receive them.

Because it is based on federal poverty levels, Medicaid status has been used as a proxy for socioeconomic status. David T. Chang, Ph.D., of Case Western Reserve University School of Medicine, University Hospitals Case Medical Center, Cleveland, and colleagues studied 133 pediatric patients who were referred for cochlear implants between 1996 and 2008, including 64 who were Medicaid-insured and 69 who were privately insured. Some have suggested that inadequate Medicaid reimbursement, and the resulting financial pressure on hospitals, has been a factor in limiting access to cochlear implants; however, because Medicaid coverage in Ohio is available to all eligible children and includes full cochlear implant benefits, the authors were able to study the effects of socioeconomic status alone on cochlear implant access and outcomes.

There was no difference between the two groups in the odds of receiving an initial implantation, age at referral to the cochlear implant program or age at implantation. However, the odds of complications following implantation were almost five-fold greater in Medicaid-insured children than in privately insured children (10 complications in 51 Medicaid-insured patients, or 19.6 percent, vs. three complications in 61 privately insured patients, or 4.9 percent). Major complications were also more common in the Medicaid population (six, or 11.8 percent, vs. two, or 3.3 percent). In addition, patients on Medicaid missed substantially more follow-up appointments (35 percent vs. 23 percent) and more consecutive visits (1.9 vs. 1.1) than did those on private insurance.
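
For readers wondering where "almost five-fold" comes from, it can be reproduced from the counts quoted above; the short calculation below is just that arithmetic, not an analysis from the paper.

```python
# Odds ratio for postimplantation complications, using the counts reported
# above: 10 of 51 Medicaid-insured vs. 3 of 61 privately insured children.
medicaid_events, medicaid_n = 10, 51
private_events, private_n = 3, 61

odds_medicaid = medicaid_events / (medicaid_n - medicaid_events)   # 10/41
odds_private = private_events / (private_n - private_events)       # 3/58

print(round(odds_medicaid / odds_private, 1))   # 4.7, i.e. "almost five-fold"
```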

"Given the excellent Medicaid coverage in Ohio, our results suggest that eliminating the definite financial obstacle that currently exists in other states across the nation for children from lower-income households would allow all eligible children, regardless of socioeconomic background, access to this powerful technology," the authors write. "However, despite equal access among Medicaid-insured and privately insured patients, there seem to be important differences between the groups postimplantation that influence outcome, namely, decreased follow-up compliance, increased incidence of minor and major complications and decreased rates of sequential bilateral implantation," or the implantation of a second device in the other ear.

"Taken together, these results indicate that centers should further investigate opportunities to minimize these downstream disparities," they conclude.


Journal Reference:

  1. David T. Chang; Alvin B. Ko; Gail S. Murray; James E. Arnold; Cliff A. Megerian. Lack of Financial Barriers to Pediatric Cochlear Implantation: Impact of Socioeconomic Status on Access and Outcomes. Arch Otolaryngol Head Neck Surg, 2010; 136 (7): 648-657

How technology may improve treatment for children with brain cancer

A study presented at the 52nd Annual Meeting of the American Association of Physicists in Medicine (AAPM) shows that children with brain tumors who undergo radiation therapy (the application of X-rays to kill cancerous cells and shrink tumors) may benefit from a technique known as "intensity modulated arc therapy" or IMAT.

This technique relies on new features of the latest generation of X-ray therapy equipment that allow the X-ray source to be rotated continuously around a patient during treatment, potentially increasing the number of directions from which the beams can be delivered.

The study, which was conducted by medical physicists at St. Jude Children's Research Hospital in Memphis, TN, compared different treatment strategies, including IMAT, for nine children treated with radiation therapy for brain tumors. It showed that IMAT could irradiate these tumors effectively while reducing overall exposure to the surrounding tissue.

"Anything we can do to reduce that dose is obviously better," says St. Jude's Chris Beltran, who is presenting the study in Philadelphia.

Treating cancer through radiation therapy can be complicated for certain types of tumors that are surrounded by sensitive tissue. Many brain tumors, for instance, are deep inside the skull and may require the X-rays to pass through critical structures — the eyes, the ears, and parts of the brain itself.

The X-rays have the potential to damage these structures, which can lead to lasting side-effects from the treatment. Sending X-rays through the ear may damage the cochlea and lead to permanent hearing loss. Likewise, exposing the brain's temporal lobes to ionizing X-ray radiation can cause loss of mental acuity.

Because modern equipment for radiation therapy allows the source of X-rays to continuously move around the patient, says Beltran, "It gives you the freedom to choose where the beams come from."

In his study he showed that a treatment plan incorporating IMAT would help spare the sensitive surrounding tissues. Using common measures that relate radiation dose to tissue damage, he predicts that the IMAT plan would cause less hearing loss and less damage to the temporal lobes than other treatment plans.
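
The article doesn't name the dose metrics used; one standard way to compare competing plans is the dose-volume histogram (DVH), which reports what fraction of a structure (the cochlea, say) receives at least each dose level. The sketch below shows that calculation with made-up dose values, not the St. Jude data, and the two "plans" are purely hypothetical.

```python
import numpy as np

def dose_volume_histogram(doses_gy: np.ndarray, bin_width: float = 1.0):
    """Cumulative DVH: fraction of a structure's voxels receiving at least
    each dose level. Input is a flat array of per-voxel doses in gray."""
    levels = np.arange(0, doses_gy.max() + bin_width, bin_width)
    volume_fraction = np.array([(doses_gy >= d).mean() for d in levels])
    return levels, volume_fraction

# Made-up per-voxel cochlea doses for two hypothetical plans, for illustration.
rng = np.random.default_rng(1)
plan_a = rng.normal(35, 5, 10_000).clip(0)   # e.g. a fixed-beam plan
plan_b = rng.normal(28, 5, 10_000).clip(0)   # e.g. an arc plan sparing the cochlea
for name, doses in [("plan A", plan_a), ("plan B", plan_b)]:
    levels, vf = dose_volume_histogram(doses)
    # with 1-Gy bins, levels[30] == 30 Gy
    print(name, "fraction of cochlea receiving >= 30 Gy:", round(vf[30], 2))
```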

Gene mutation that causes rare form of deafness identified

University of Michigan Medical School researchers have identified a gene mutation that causes a rare form of hearing loss known as auditory neuropathy.

In the study, published online in the Proceedings of the National Academy of Sciences, U-M's Marci Lesperance, M.D., and Margit Burmeister, Ph.D., led a team of researchers who examined the DNA of individuals from the same large family afflicted with the disorder.

The researchers identified a mutation in the DIAPH3 gene that causes overproduction of a protein known as diaphanous. In previous studies, hearing loss has been linked to a related gene that also encodes a diaphanous protein.

Currently, diagnosing auditory neuropathy requires specific testing, and the condition may go unrecognized if that testing is not performed early in life.

"Since we previously knew of only two genes associated with auditory neuropathy, finding this gene mutation is significant," says Lesperance, professor in U-M's Department of Otolaryngology and chief of the Division of Pediatric Otolaryngology.

"This discovery will be helpful in developing genetic tests in the future, which will be useful not only for this family, but for all patients with auditory neuropathy," Lesperance says.

To investigate the role of these proteins in auditory function, the authors engineered a line of fruit flies that expressed an overactive diaphanous protein in the insects' auditory organ. Using sound to induce measurable voltage changes, Frances Hannan of New York Medical College determined that the flies' hearing was significantly degraded compared with that of normal flies.

Burmeister says finding the genes that cause such rare disorders is very difficult because researchers cannot look at many different families and instead have to rely on a single family that is often not large enough. In this study, however, the researchers used a multi-pronged approach: rather than relying purely on genetic inheritance information, they combined it with functional information about gene activity.

"The approach we used here of combining genetic inheritance with functional information can be applied to identify the culprit genes in many other rare genetic diseases that have so far been impossible to nail down," says Burmeister, professor of Psychiatry and Human Genetics.

"We can now say we have a tool by combining several genomic approaches to find these genes."

Burmeister, Lesperance and colleagues are actively recruiting research subjects for studies to identify genes involved in genetic hearing loss and also for inherited neurological disorders. Those interested can sign up at www.umengage.org after searching on keywords neurological, deafness or hearing loss.

Additional authors include: Michael Hortsch, associate professor of Cell and Developmental Biology at the University of Michigan; Marc C. Thorne, assistant professor of Otolaryngology; Cynthia J. Schoen, Elzbieta Sliwerska, Jameson Arnett and Sarah B. Emery, all of U-M; and Frances Hannan and Hima R. Ammana of New York Medical College.

Funding: National Institutes of Health, Children's Hearing Foundation of New York.


Journal Reference:

  1. Cynthia J. Schoen, Sarah B. Emery, Marc C. Thorne, Hima R. Ammana, Elżbieta Śliwerska, Jameson Arnett, Michael Hortsch, Frances Hannan, Margit Burmeister, and Marci M. Lesperance. Increased activity of Diaphanous homolog 3 (DIAPH3)/diaphanous causes hearing defects in humans with auditory neuropathy and in Drosophila. Proceedings of the National Academy of Sciences, 2010; DOI: 10.1073/pnas.1003027107

Making the invisible visible: Verbal cues enhance visual detection

Cognitive psychologists at the University of Pennsylvania and the University of California, Merced, have shown that an image displayed too briefly to be consciously seen can be detected if the participant first hears the name of the object.

Through a series of experiments published in the journal PLoS ONE, researchers found that hearing the name of an object improved participants' ability to see it, even when the object was flashed onscreen at speeds (50 milliseconds) and under conditions that would otherwise render it invisible. Surprisingly, the effect appeared to be specific to language: a visual preview did not make the invisible target visible. Getting a good look at the object before the experiment did nothing to help participants detect it when it was flashed.

The study demonstrated that language can change what we see and can enhance perceptual sensitivity. Verbal cues influence even the most elementary stages of visual processing, a finding that informs our understanding of how language affects perception.

Researchers led by psychologist Gary Lupyan, assistant professor in the Department of Psychology at Penn, had participants complete an object detection task in which they made an object-present or object-absent decision about briefly presented capital letters.
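
In a presence/absence task like this, "perceptual sensitivity" is conventionally quantified with the signal-detection index d-prime, computed from hit and false-alarm rates. The snippet below shows that standard calculation with illustrative numbers only; the study's actual rates are not given in this article.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity for a yes/no detection task: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative numbers (not the study's data): a valid verbal cue that raises
# the hit rate without raising false alarms increases sensitivity.
print(round(d_prime(0.75, 0.20), 2))   # cued   -> 1.52
print(round(d_prime(0.60, 0.20), 2))   # uncued -> 1.09
```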

Other experiments within the study further defined the relationship between auditory cues and the identification of visual images. For example, the researchers reasoned that if auditory cues help with object detection by encouraging participants to mentally picture the image, then the cuing effect might disappear when the target moved around the screen. Instead, verbal cues still clued participants in: no matter where on the screen the target appeared, the effect of the auditory cue was undiminished, an advantage over visual cues.

Researchers also found that the magnitude of the cuing effect correlated with each participant's own estimate of the vividness of their mental imagery. Using a common questionnaire, the researchers found that those who rated their mental imagery as particularly vivid benefited more from the auditory cue.

The team went on to determine that the auditory cue improved detection only when the cue was valid; that is, the target image and the verbal cue had to match. According to the researchers, hearing the label evokes a mental image of the object, strengthening its visual representation and thus making it visible.

"This research speaks to the idea that perception is shaped moment-by-moment by language," said Lupyan. "Although only English speakers were tested, the results suggest that because words in different languages pick out different things in the environment, learning different languages can shape perception in subtle, but pervasive ways."

The study is part of a broader effort by Lupyan and other Penn psychologists to understand how high-level cognitive expectations, in this case set up by verbal cues, can influence low-level sensory processing. For years, cognitive psychologists have known that directing participants' attention to a general location improves reaction times to target objects appearing in that location. More recently, experimental evidence has shown that semantic information can influence what one sees in surprising ways. For instance, hearing words associated with directions of motion, such as a falling "bomb," can interfere with an observer's ability to quickly recognize the next movement they see. Moreover, hearing a word that labels a target improves the speed and efficiency of the search: when searching for the number 2 among 5's, participants are faster to find the target when they actually hear "find the two" immediately prior to the search, even when 2 has been the target all along.

The study was conducted by Lupyan of Penn's Department of Psychology and Michael Spivey of the University of California, Merced.

Research was conducted with funding from the National Science Foundation.


Journal Reference:

  1. Lupyan et al. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection. PLoS ONE, 2010; 5 (7): e11452 DOI: 10.1371/journal.pone.0011452