Language may play an important role in learning the meanings of numbers

New research conducted with deaf people in Nicaragua shows that language may play an important role in learning the meanings of numbers.

Field studies by University of Chicago psychologist Susan Goldin-Meadow and a team of researchers found that deaf people in Nicaragua who had not learned a formal sign language do not have a complete understanding of numbers greater than three.

Researchers surmised that the deaf Nicaraguans lacked comprehension of large numbers because they had not been taught numbers or number words. Instead, they learned to communicate using self-developed gestures called "homesigns," a language that develops in the absence of formal education and exposure to conventional sign language.

"The research doesn't determine which aspects of language are doing the work, but it does suggest that language is an important player in number acquisition," said Betty Tuller, a program director in the National Science Foundation's (NSF) Division of Behavioral and Cognitive Sciences, which funded the research.

"The finding may help narrow down the range of experiences that play a role in learning number concepts," she said.

Research results are reported in the current issue of the Proceedings of the National Academy of Sciences (PNAS) in a paper titled, "Number Without a Language Model."

While the homesigners do have gestures for number, those gestures are used accurately only for small numbers–numbers less than three–and not for large ones.

By contrast, deaf people who acquire conventional sign languages learn the values of large numbers because they learn a counting routine early in childhood, just as children who acquire spoken languages do.

"It's not just the vocabulary words that matter, but understanding the relationships that underlie the words–the fact that 'eight' is one more than 'seven' and one less than 'nine,'" said Goldin-Meadow. "Without having a set of number words to guide them, the deaf homesigners in the study failed to understand that numbers build on each other in value."

"What's most striking is that the homesigners can see that seven fingers are more than six fingers and less than eight fingers, but they are unable to order six, seven and eight fingers," added Tuller. "In other words, they don't seem to understand the successor function that underlies number."

The difficulty homesigners face in learning seemingly simple concepts such as "seven" may help researchers learn more about the important role language plays in how all children acquire early mathematical concepts, especially children who struggle with number concepts in their preschool years.

Scholars previously found that in isolated cultures where the local language does not have large number words, people do not learn the value of large numbers. Two groups of people studied in the Amazon, for instance, do not have words for numbers greater than five. But their culture does not require the use of exact large numbers, which could explain the Amazonians' difficulty with these numbers.

In Nicaraguan society, however, exact numbers are an important part of everyday life, as Nicaraguans use money for their transactions. Although the deaf homesigners in the University of Chicago study understand the relative value of their money, their understanding is incomplete because they have never been taught number words, said Elizabet Spaepen, the study's lead author.

For the study, the scholars gave the homesigners a series of tasks to determine how well they could recognize money. They were shown a 10-unit bill and a 20-unit bill and asked which had more value. They were also asked if nine 10-unit coins had more or less value than a 100-unit bill. Each of the homesigners was able to determine the relative value of the money.

"The coins and bills used in Nicaraguan currency vary in size and color according to value, which gives clues to their value even if the user has no knowledge of numbers," Spaepen said. The deaf homesigners could be learning rote information about the currency from its color and shape without fully understanding numerical value.

"The findings show that simply living in a numerate culture isn't enough to develop an understanding of large number," said Tuller. "This conclusion comes from the observation that the homesigners are surrounded by hearing individuals who deal with large numbers all of the time.

"The findings point toward language since that's what the homesigners lack," she said. "In all other respects they are fully functioning members of their community. But that doesn't mean that there might not be other, nonlinguistic ways of teaching them, or others, the idea of an exact large number."

The research team is currently developing a training procedure to do exactly that–teach deaf homesigners the meaning of number using nonlinguistic means.

Other authors on the paper are Marie Coppola, Assistant Professor in Psychology at the University of Connecticut; Elizabeth Spelke, the Marshall L. Berkman Professor of Psychology at Harvard; and Susan Carey, Professor of Psychology at Harvard.

NSF supports all fields of fundamental science and engineering, except for medical sciences, by funding the research of scientists, engineers and educators directly through their own home institutions, typically universities and colleges.


Journal Reference:

  1. E. Spaepen, M. Coppola, E. S. Spelke, S. E. Carey, S. Goldin-Meadow. Number without a language model. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1015975108

Workplace noise-related hearing loss affects sleep quality

Sustained exposure to loud workplace noise may affect quality of sleep in workers with occupational-related hearing loss, according to a new study by Ben-Gurion University of the Negev researchers.

Published in the journal Sleep, the study compared the sleep quality of individuals at the same workplace, some with workplace noise-related hearing loss and some without.

Workers with hearing loss had a higher average age and longer duration of exposure than those without hearing impairments. Also, 51 percent of those with hearing loss reported tinnitus (continual ringing in the ears) as opposed to 14 percent of those without hearing impairments.

Although tinnitus was reported as the main sleep disrupting factor, hearing impairment among workers exposed to harmful noise contributed to sleep impairment, especially to insomnia, regardless of age and years of exposure.

"The homogeneous study population exposed to identical harmful noise at the same workplace allowed us to compare sleep quality between similar groups differing only by hearing status," explains Tsafnat Test, a medical student who carried out this study as her B.Sc. thesis in the BGU Faculty of Health Sciences, supervised by Dr. Sheiner, Dr. Eyal, Dr. Canfi and Prof. Shoham-Vardi.

Two hundred ninety-eight male volunteers with occupational exposure to harmful noise were given a hearing test prior to the start of the study. Ninety-nine of the participants were judged to have a hearing impairment and 199 had normal hearing.

The researchers explored various elements of sleep including difficulty falling asleep; waking too early or during the night; excessive daytime sleepiness or falling asleep during daytime; snoring; and excessive sleep movement.


Journal Reference:

  1. Test Tsafnat, Ayala Canfi, Arnona Eyal, Ilana Shoam-Vardi, Einat K. Sheiner. The Influence of Hearing Impairment on Sleep Quality among Workers Exposed to Harmful Noise. Sleep, (in press)

Function of novel molecule that underlies human deafness revealed

New research from the University of Sheffield has revealed the molecular mechanism by which a mutation of a specific microRNA, called miR-96, causes deafness. The discovery could provide the basis for treating progressive hearing loss and deafness.

The research team, led by Dr Walter Marcotti, Royal Society University Research Fellow from the University's Department of Biomedical Science, in collaboration with Professor Karen Steel at the Sanger Institute in Cambridge, discovered that the mutation in miR-96 prevents development of the auditory sensory hair cells. These cells are located in the inner ear and are essential for encoding sound as electrical signals that are then sent to the brain.

The research has been published in the Proceedings of the National Academy of Sciences and was based on studies of mice, which do not normally hear until about 12 days after birth. Prior to this age their immature hair cells must execute a precise genetic program that regulates the development of distinct types of sensory hair cell, namely inner and outer hair cells.

The research teams found that in a strain of mice called diminuendo — which carry a single base mutation in the miR-96 gene — hair cell development is arrested around birth.

The study shows that miR-96 normally regulates hair cell development by influencing the expression of many different genes associated with a wide range of developmental processes at a specific stage. The researchers discovered that the mutation hinders the development not only of the mechanically sensitive hair bundle on the cell apex but also the synaptic structures at the base that govern transfer of electrical information to the sensory nerves. These new findings suggest that miR-96 is a master regulator responsible for coordinating the development of the sensory cells that are vital to hearing.

Since the mutation in miR-96 is known to cause human deafness and microRNA molecules can be targeted by drugs, the work also raises new opportunities for developing therapies for hearing loss.

Dr Walter Marcotti said: "Progressive hearing loss affects a large proportion of the human population, including newborns and young children. Despite the relevance of this problem, very little is currently known regarding the genetic basis of progressive hearing loss. Our research has provided new and exciting results that further our understanding of auditory development as well as possible molecular targets for the development of future therapies."

The work was supported by the Royal National Institute for Deaf People (RNID), The Wellcome Trust and the University of Sheffield.


Journal Reference:

  1. S. Kuhn, S. L. Johnson, D. N. Furness, J. Chen, N. Ingham, J. M. Hilton, G. Steffes, M. A. Lewis, V. Zampini, C. M. Hackney, S. Masetto, M. C. Holley, K. P. Steel, W. Marcotti. miR-96 regulates the progression of differentiation in mammalian cochlear inner and outer hair cells. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1016646108

Awake despite anesthesia

Out of every 1000 patients, two at most wake up during their operation. Unintended awareness in the patient is thus classified as an occasional complication of anesthesia — but being aware of things happening during the operation, and being able to recall them later, can leave a patient with long-term psychological trauma.

How to avoid such awareness events, and what treatment is available for a patient who does experience awareness, is the subject of a report by Petra Bischoff of the Ruhr University in Bochum and Ingrid Rundshagen of the Charité Berlin in the current issue of Deutsches Ärzteblatt International.

The usual culprit in cases of unintended awareness during an operation is an inadequate depth of anesthesia. In addition, several risk factors exist that promote awareness events. For example, children have eight to ten times the risk of being aware under anesthesia. Long-term use of painkillers or misuse of medication can also make patients more liable to this kind of experience. The nature of the operation and the surrounding circumstances can also play a part: cesarean sections and emergency operations carry a higher risk of awareness than other kinds of surgery, and operations at night a higher risk than those carried out during the day.

For prevention of awareness during anesthesia, the authors recommend taking into account the risk factors that have been mentioned and raising the level of vigilance among medical personnel for awareness phenomena by regular training sessions. Premedication with benzodiazepines and not using muscle relaxants are also worthwhile measures. Additionally, it is important to measure the anesthetic gas concentrations regularly and monitor brain electrical activity by EEG. If possible, the patient should be given hearing protection. If a post-traumatic stress disorder does occur, the prognosis is good if professional treatment is started without delay.


Journal Reference:

  1. Bischoff P, Rundshagen I. Awareness during general anesthesia. Dtsch Arztebl Int, 2011; 108(1-2): 1-7. DOI: 10.3238/arztebl.2011.0001

Boxing is risky business for the brain

Up to 20% of professional boxers develop neuropsychiatric sequelae. But which acute complications and which late sequelae can boxers expect throughout the course of their career? These are the questions studied by Hans Förstl from the Technical University Munich and his co-authors in the current issue of Deutsches Ärzteblatt International.

Their evaluation of the biggest studies on the subject of boxers' health in the past 10 years yielded the following results: The most relevant acute consequence is the knock-out, which conforms to the rules of the sport and which, in neuropsychiatric terms, corresponds to cerebral concussion.

In addition, boxers are at substantial risk for acute injuries to the head, heart, and skeleton. Subacute consequences after being knocked out include persistent symptoms such as headaches, impaired hearing, nausea, unstable gait, and forgetfulness. Cognitive deficits after blunt craniocerebral trauma measurably outlast the individual's subjective perception of symptoms. Some 10-20% of boxers develop persistent neuropsychiatric impairments. The repeated cerebral trauma of a long boxing career may result in boxer's dementia (dementia pugilistica), which is neurobiologically similar to Alzheimer's disease.

With regard to the health risks, a clear difference exists between professional boxing and amateur boxing. Amateur boxers are examined annually and in advance of boxing matches, whereas professionals subject themselves to their fights without such protective measures. In view of the risk for injuries that may result in impaired cerebral performance in the short or long term, similar measures would be advisable in the professional setting too.


Journal Reference:

  1. Förstl H, Haass C, Hemmer B, Meyer B, Halle M. Boxing: acute complications and late sequelae, from concussion to dementia. Deutsches Ärzteblatt International, 2010; 107(47): 835-9. DOI: 10.3238/arztebl.2010.0835

Growth-factor gel shows promise as hearing-loss treatment

A new treatment has been developed for sudden sensorineural hearing loss (SSHL), a condition that causes deafness in 40,000 Americans each year, usually in early middle-age. Researchers writing in the open access journal BMC Medicine describe the positive results of a preliminary trial of insulin-like growth factor 1 (IGF1), applied as a topical gel.

Takayuki Nakagawa, from Kyoto University, Japan, worked with a team of researchers to test the gel in 25 patients whose SSHL had not responded to the normal treatment of systemic glucocorticoids. He said, "The results indicated that the topical IGF1 application using gelatin hydrogels was safe, and had equivalent or superior efficiency to the hyperbaric oxygen therapy that was used as a historical control; this suggests that the efficacy of topical IGF1 application should be further evaluated using randomized clinical trials."

At 12 weeks after the test treatment, 48% of patients showed hearing improvement, and the proportion increased to 56% at 24 weeks. No serious adverse events were observed. This is the first time that growth factors have been tested as a hearing remedy. According to Nakagawa, "Although systemic glucocorticoid application results in hearing recovery in some patients with SSHL, approximately 20% show no recovery. Topical IGF1 application using gelatin hydrogels is well tolerated and may be efficacious for these patients."


Journal Reference:

  1. Takayuki Nakagawa, Tatsunori Sakamoto, Harukazu Hiraumi, Yayoi S Kikkawa, Norio Yamamoto, Kiyomi Hamaguchi, Kazuya Ono, Masaya Yamamoto, Yasuhiko Tabata, Satoshi Teramukai, Shiro Tanaka, Harue Tada, Rie Onodera, Atsushi Yonezawa, Ken-ichi Inui and Juichi Ito. Topical insulin-like growth factor 1 treatment using gelatin hydrogels for glucocorticoid-resistant sudden sensorineural hearing loss: a prospective clinical trial. BMC Medicine, (in press)

Banking on predictability, the mind increases efficiency

Just as musical compression saves space on your mp3 player, the human brain has ways of recoding sounds to save precious processing power.

To whittle a recording of your favorite song down to a manageable pile of megabytes, computers take advantage of reliable qualities of sounds to reduce the amount of information needed. Collections of neurons have their own ways to efficiently encode sound properties that are predictable.
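The compression principle described above, exploiting predictable structure so that less information needs to be stored, can be sketched with simple delta encoding. This is only a toy illustration of the idea, not the brain's mechanism or the actual mp3 algorithm: when successive samples of a signal are predictable from their neighbors, storing small prediction errors is cheaper than storing full values.

```python
# Toy illustration of compression via predictability: a slowly varying
# (hence predictable) signal is stored as small differences rather than
# full sample values. Lossless: the original is exactly recoverable.

def delta_encode(samples):
    """Keep the first sample; replace each later sample with its
    difference from the previous one."""
    deltas = [samples[0]]
    for i in range(1, len(samples)):
        deltas.append(samples[i] - samples[i - 1])
    return deltas

def delta_decode(deltas):
    """Reconstruct the original samples by cumulative summation."""
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples

signal = [100, 102, 104, 105, 107, 108]   # smooth, predictable
encoded = delta_encode(signal)            # mostly small values: cheap to store
assert delta_decode(encoded) == signal    # lossless round trip
```

Real codecs use far more sophisticated predictors, but the gain comes from the same source: the less surprising the next sample is, the fewer bits it costs.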

"In perception, whether visual or auditory, sensory input has a lot of structure to it," said Keith Kluender, a psychology professor at the University of Wisconsin-Madison. "Your brain takes advantage of the fact that the world is predictable, and pays less attention to parts it can predict."

Along with graduate student Christian Stilp and assistant professor Timothy Rogers, Kluender co-authored a study published in the Nov. 22 early online edition of the Proceedings of the National Academy of Sciences showing listeners can become effectively deaf to sounds that do not conform to their brains' expectations.

The researchers crafted an orderly set of novel sounds that combined elements of a tenor saxophone and a French horn. The sounds also varied systematically in onset — from abrupt, like the pluck of a violin string, to gradual, like a bowed string. These sounds were played in the background while test subjects played with Etch-a-Sketches.

After a little more than seven minutes, listeners completed trials in which they were asked to identify the one sound in a set of three that was unlike the other two.

Distinguishing sounds that varied in instrument and onset in the same way they had just heard was a simple matter. But sounds that didn't fit — with, say, more pluck and not enough saxophone — were completely lost to the listeners. They could not correctly identify one of the non-conforming sounds as the odd one among three examples.

"They're so good at perceiving the correlations between the orderly sounds, that's all they hear," says Kluender, whose work is funded by the National Institute on Deafness and Other Communication Disorders. "Perceptually, they've discarded the physical attributes of the sounds."

The results jibe well with theoretical descriptions of an efficient brain, and the researchers were able to accurately predict listener performance using a computational model simulating brain connections.

"The world around us isn't random," Stilp says. "If you have an efficient system, you should take advantage of that in the way you perceive the world around you. That's never been demonstrated this clearly with people."

To avoid having to carefully take in and remember every last bit of visual or audible stimulus it encounters, the mind quickly acquaints itself with the world's predictability and redundancy.

"That's part of why people can understand speech even in really terrible conditions," Kluender says. "You can press your ear to the wall in a cheap apartment and make out a conversation going on next door even though the wall removes two-thirds of the acoustic information. From just small pieces of sounds, your brain can predict the rest."


Journal Reference:

  1. Christian E. Stilp, Timothy T. Rogers and Keith R. Kluender. Rapid efficient coding of correlated complex acoustic properties. PNAS, November 22, 2010 DOI: 10.1073/pnas.1009020107

Brain region responsible for speech illusion identified; Study explains how visual cues disrupt speech perception

Watching lips move is key to accurately hearing what someone says. The McGurk Effect, an auditory phenomenon in which viewing lips moving out of sync with words creates other words, has been known since the 1970s; now researchers have pinpointed the brain region responsible for it.

The findings were presented at Neuroscience 2010, the annual meeting of the Society for Neuroscience, held in San Diego.

Scientists at the University of Texas Medical School found that the superior temporal sulcus, known to play a role in language and eye gaze processing, is the hub of the sensory overlap. In the study, researchers first had volunteers experience the McGurk Effect while undergoing functional magnetic resonance imaging (fMRI). The fMRI showed the authors which part of the brain was active during the effect.

The activity in that region was then disrupted using transcranial magnetic stimulation while participants reported what they heard during the speech and vision tests. The researchers discovered that the McGurk Effect disappeared when they targeted the superior temporal sulcus. Just as important, the participants perceived other sounds and sights normally.

"These results demonstrate that the superior temporal sulcus plays a critical role in the McGurk Effect and auditory-visual integration of speech," said Michael Beauchamp, PhD, who led the study.

Research was supported by the National Science Foundation and the National Institute of Neurological Disorders and Stroke.

Why you can listen at cocktail parties: Songbirds' individual brain cells are tuned to particular sounds

Nerve cells in the brains of songbirds are sensitive to specific sounds, and only respond when those sounds occur during communication, a recent study shows. The finding helps explain people's ability to listen to a conversation while in a noisy environment — the "cocktail party effect."

The research was presented at Neuroscience 2010, the annual meeting of the Society for Neuroscience, held in San Diego.

"While the cocktail party effect has been well-documented, it is not clear exactly how our brains are able to separate different voices so well," said senior author Frederic Theunissen, PhD, of University of California, Berkeley. "In fact, background noise is a constant challenge for engineers who design hearing aids and voice-recognition systems. Knowledge about how our ears and brains solve this task could lead to substantial improvements in hearing aid performance."

To explore how people filter out different sounds, the researchers focused on the hearing processes of songbirds. The ways that humans learn to speak and birds learn to sing are strikingly similar, and there are also similarities in their brains' auditory structures.

The authors played sound recordings for zebra finches and noted the responses of individual auditory nerve cells. The neurons were exposed to bird songs, non-communicative noises, and combinations of the two. Results showed that certain cells responded almost identically to a song note played in quiet and to the same note played over noise. The study helps identify how these neurons extract sounds in a challenging environment. "Our group has demonstrated that individual nerve cells can be very good at picking vocalization out of background noise," Theunissen said.

Research was supported by the National Institute on Deafness and Other Communication Disorders.

First implanted device to treat balance disorder developed

A University of Washington Medical Center patient on Thursday, Oct. 21, became the world's first recipient of a device that aims to quell the disabling vertigo associated with Meniere's disease.

The UW Medicine clinicians who developed the implantable device hope that success in a 10-person surgical trial of Meniere's patients will lead to exploration of its usefulness against other common balance disorders that torment millions of people worldwide.

The device being tested — a cochlear implant and processor with re-engineered software and electrode arrays — represents four-plus years of work by Drs. Jay Rubinstein and James Phillips of UW's Department of Otolaryngology-Head and Neck Surgery. They worked with Drs. Steven Bierer, Albert Fuchs, Chris Kaneko, Leo Ling and Kaibao Nie, UW specialists in signal processing, brainstem physiology and vestibular neural coding.

"What we're proposing here is a potentially safer and more effective therapy than exists now," said Rubinstein, an ear surgeon and auditory scientist who has earned a doctoral degree in bioengineering and who holds multiple U.S. patents.

In the United States, Meniere's affects less than one percent of the population. The disease occurs mostly in people between ages 30 and 50, but can strike anyone. Patients more often experience the condition in one ear; about 30 percent of cases are bilateral.

The disease affects hearing and balance with varying intensity and frequency but can be extremely debilitating. Its episodic attacks are thought to stem from the rupture of an inner-ear membrane. Endolymphatic fluid leaks out of the vestibular system, causing havoc to the brain's perception of balance.

To stave off nausea, afflicted people must lie still, typically for several hours and sometimes up to half a day while the membrane self-repairs and equilibrium is restored, said Phillips, a UW research associate professor and director of the UW Dizziness and Balance Center. Because the attacks come with scant warning, a Meniere's diagnosis can cause people to change careers and curb their lifestyles.

Many patients respond to first-line treatments of medication and changes to diet and activity. When those therapies fail to reduce the rate of attacks, surgery is often an effective option but it typically is ablative (destructive) in nature. In essence, the patient sacrifices function in the affected ear to halt the vertigo — akin to a pilot who shuts down an erratic engine during flight. Forever after, the person's balance and, often, hearing are based on one ear's function.

With their device, Phillips and Rubinstein aim to restore the patient's balance during attacks while leaving natural hearing and residual balance function intact.

A patient wears a processor behind the affected ear and activates it as an attack starts. The processor wirelessly signals the device, which is implanted almost directly underneath in a small well created in the temporal bone. The device in turn transmits electrical impulses through three electrodes inserted into the canals of the inner ear's bony labyrinth.

"It's an override," Phillips said. "It doesn't change what's happening in the ear, but it eliminates the symptoms while replacing the function of that ear until it recovers."

The specific placement of the electrodes in the bony labyrinth is determined by neuronal signal testing at the time of implant. The superior semicircular canal, lateral semicircular canal and posterior semicircular canal each receive one electrode array.

A National Institutes of Health grant funded the development of the device and its initial testing at the Washington National Primate Research Center. The promising results from those tests led the U.S. Food and Drug Administration, in June, to approve the device and the proposed surgical implantation procedure. Shortly thereafter, the limited surgical trial in humans won approval from the Western Institutional Review Board, an independent body charged with protecting the safety of research subjects.

By basing their invention on cochlear implants whose design and surgical implantation were already FDA-approved, Phillips and Rubinstein leapfrogged scientists at other institutions who had begun years earlier but chosen to develop novel prototypes.

"If you started from scratch, in a circumstance like this where no one has ever treated a vestibular disorder with a device, it probably would take 10 years to develop such a device," Rubinstein said.

The device epitomizes the translational advancements pursued at UW's academic medical centers, he said. He credited the team's skills and its access to the primate center, whose labs facilitated the quick turnaround of results that helped win the FDA's support.

A successful human trial could lead the implant to become the first-choice surgical intervention for Meniere's patients, Phillips said, and spark collaboration with other researchers who are studying more widespread balance disorders.

The first patient is a 56-year-old man from Yakima, Wash. He has unilateral Meniere's disease and has been a patient of Rubinstein's for about two years.

See a related video at UW Medicine's YouTube site. Drs. Rubinstein and Phillips discuss the device: http://www.youtube.com/watch?v=iu047vTckvA

Cochlear Ltd. of Lane Cove, Australia, will manufacture the device. Cochlear is a medical equipment company and longtime maker of devices for hearing-impaired people.