The Bionic Ear
12/16/15 | 51m 11s | Rating: TV-G
Ruth Litovsky, Professor of Surgery/Otolaryngology at the UW School of Medicine and Public Health, shares ongoing research into the electrical stimulation of the ear in people with hearing loss. Litovsky discusses the progress and challenges of cochlear implants in patients ranging in age from toddlers to the elderly.
Welcome everyone to Wednesday Nite @ The Lab. I'm Tom Zinnen, I work here at the UW-Madison Biotechnology Center. I also work for UW-Extension, Cooperative Extension, and, on behalf of those folks and our other co-organizers, Wisconsin Public Television, the Wisconsin Alumni Association, and the UW-Madison Science Alliance, thanks again for coming to Wednesday Nite @ the Lab. We do this every Wednesday night, 50 times a year. Tonight, it's my pleasure to get to introduce to you Professor Ruth Litovsky. She's a professor in the Department of Communication Sciences and Disorders and also in Surgery. She was born in Israel, grew up in Chicago, got her undergraduate degree at Washington University in St. Louis, then came here for a PhD, went to Boston University and MIT for postdocs, came back here to be on the faculty. Just last year, she was a Fulbright fellow in Melbourne, Australia. She's gonna be talking with us about one of the more remarkable technologies I've heard about, that's the cochlear implant, otherwise known as the bionic ear. Please join me in welcoming Professor Litovsky to Wednesday Nite @ the Lab. (audience applauds)

Thank you so much, Tom, for the invitation. Good evening, everybody. It's my pleasure to be here tonight to speak to you about research that we've been doing in my lab here at the Waisman Center for the past 14, 15 years or so. We call it the bionic ear because one might wonder whether, with electrical stimulation, you might hear better than you would with acoustic hearing. And it's true for people who have severe or profound hearing loss. So I'd like to start by talking about the human ear. And the human ear is remarkable. I'm gonna show you this in a sort of 3-D rendition, which my students love to see because you're looking through the external ear into the middle ear. And then, if I freeze this for a second, you see this lovely, wide, blue band, which is your eighth nerve, cochlear nerve, or auditory nerve. This is the nerve that we stimulate, if you have hearing loss in the cochlea, if the hair cells have died. I'll show you pictures of what damage looks like in the cochlea, but the idea is that we position an array of electrodes into this swirl, the cochlea. We stimulate the auditory nerve, and then the brain has to take care of a lot of what goes on next with interpreting the sound. So this is a picture of the cochlea and its bony labyrinth. It's a beautiful snail-shaped structure only 35 millimeters in length in an adult. It is mature at birth, though, which is quite remarkable, which is why we can implant infants when they're quite young. The issue in infants is whether they are hefty enough, in terms of blood loss, to sustain the surgical procedure. But otherwise, the cochlea can take an implant at a very, very young age. The cochlea is also remarkable because we have a map of frequency. We call it tonotopic organization. So from the base of the cochlea, as we move up to what we call the apex, as we move through 2 1/2 turns in the human ear, we go from very high frequencies, most of us don't really hear above 20,000 hertz, but we move on a logarithmic scale as we go through to the apex. And this wavy bit here is what we call the traveling wave, which is that stimulation travels from the base toward the apex, and the cochlea is a frequency analyzer. So every place along the cochlea resonates best with a given frequency.
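For readers who want to put numbers on that logarithmic place-frequency map, it is often approximated with the Greenwood function. Below is a minimal Python sketch; the constants are the commonly cited fit for the human cochlea and are included purely for illustration.

```python
import numpy as np

def greenwood_human(x):
    """Approximate characteristic frequency (Hz) at a point along the human
    basilar membrane, where x is the fraction of the distance from the apex
    (0.0) to the base (1.0). Constants are the commonly cited human fit."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * x) - k)

# Characteristic frequencies at a few places along the ~35 mm cochlea
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{frac:4.2f} of the way from apex to base -> ~{greenwood_human(frac):7.0f} Hz")
# The output runs from roughly 20 Hz near the apex to about 20,000 Hz at the
# base, spaced logarithmically in frequency: the tonotopic map in numbers.
```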
And so as the sounds travel through the cochlea, they find their sweet spot, or the place at which that frequency best stimulates the basilar membrane, the membrane in the cochlea that responds to sound. So we have maybe one place along the cochlea that responds best to one frequency, another place to another frequency, et cetera. This is what a little video of the basilar membrane looks like. This is the membrane in which the hair cells sit. And so as the traveling wave comes through the cochlea, this membrane will vibrate in an oscillatory motion up and down, in response to sound. When there is no sound, it doesn't vibrate. When the sound finishes, it stops vibrating. But this vibration occurs at all places along the cochlea. And this is a very important part of acoustic hearing. We bypass all of this in the bionic ear. So it's quite remarkable that we can lose what we call cochlear mechanics and still stimulate the auditory nerve and have the brain interpret sound. Speaking of cochlear mechanics, this is a little video of what a hair cell looks like. And hair cells have to move; these are outer hair cells, and they have to move in order for the basilar membrane to amplify sound. So if you're somebody with hearing loss of about 50 decibels, moderate to severe hearing loss, this is because the outer hair cells have ceased to function, or they have been damaged, and they cannot move. And the loss of mechanical movement leads to loss of amplification or loss of hearing and other limitations that I can address as well later on. But let me play for you this really cute video. It's courtesy of Professor Ashmore. And you'll recognize the song, and you'll see the hair cell vibrating at the percussion of the song. One, two, 3 o'clock, 4 o'clock rock Five, six, 7 o'clock, 8 o'clock rock Nine, ten, 11 o'clock, 12 o'clock rock We're gonna rock around the clock tonight Put your glad rags on and join me hon We'll have some fun when... Okay, you get the idea. (laughs with audience) But it's a really nice way to show cochlear mechanics, which is that there's physical vibratory movement of these hair cells, and loss of cochlear mechanics is hugely responsible for loss of hearing. So, seen here in a cross-section of the cochlea and the hair cells, we have outer and inner hair cells. It is these guys, the inner hair cells, that actually transmit information up to the brain. So we have the cochlear mechanics, but then we have these inner hair cells. Their loss or their death is what primarily causes what we call severe to profound hearing loss, and the lack of ability to really benefit from hearing aids and acoustic hearing. So we have these beautiful rows of hair cells, and I'll talk a little bit about what they look like when they're damaged, but the mechanics, also if you look down here at the bottom, involve the physical movement of these teeny little hair cells to open ion channels that lead to transduction of ions into the cell and hearing as well. I love to talk about this because the cascade of events is beautiful, it's complicated, and at every place along the way, the loss of any of these pieces can lead to hearing loss. So, information about sound is transmitted to our brain through networks of neurons. I've talked about the periphery up until now, but let's not forget about the brain.
And so here, we're looking at the cochlea and the tonotopic organization, the map of frequency that I described to you, but all of this is transmitted up to higher levels in the brain, and at each step along the way, we have a new level of analysis of the signal, of the information, which, again, is what's really important to lead to hearing. So we have the frequency map. That is maintained as we move up all the way to the auditory cortex. Lower down in the system, we have the ability to resolve spectral information, or resolve the components of speech cues. As we move a little bit higher up, we have the ability to combine information across the two ears so that we can localize sound, and so we can separate speech in noisy environments. And then, of course, let's not forget that hearing involves listening, so we've got attention, memory, emotion, and I'll talk a little bit later, toward the end, about cognitive load or listening effort and the difficulty that people experience in noisy environments and how we're trying to study that. So there's a very important concept here, which is top-down processing. The signal travels up the pathway, but the brain has to act on the signal in order for us to interpret it. I would like to give you an example of what top-down processing and online recognition of signals might sound like. So I'm going to play for you a degraded form of speech. We call it sine-wave speech. And the first thing I'll do is play a sound that I don't think you'll be able to understand. (voice speaks distorted, machine-like speech) Anybody understand that? Okay, good. (voice speaks distorted, machine-like speech) Now here's the clear speech version of that. -
Voiceover
She cuts with her knife. -
Ruth
Now listen again to the degraded speech. (voice speaks distorted, machine-like speech) (audience murmurs) Aah. So you have a filling in of the missing information because you know what you're listening for. I'll play one more just 'cos they're fun. This is my colleague Quentin Sommerfield from the UK. (voice speaks distorted, machine-like speech) Again. (voice speaks distorted, machine-like speech) -
Voiceover
They're buying some bread. -
Ruth
Did you get it? Oh, very nice. So you're learning to use the sine-wave speech. (voice speaks distorted, machine-like speech) So what you're getting is this pop-out effect or insight into the signal. This is a little bit like what people who use cochlear implants have to do. They have a really degraded signal. I'll play some more examples for you. They're having to fill in the information based on top-down processing, based on learning and training and knowing how to interpret the signal. So what we're dealing with, in terms of the bionic ear situation, is sensorineural hearing loss, hearing loss in the periphery. We're dealing with permanent types of hearing loss. They're not restorable today with stem cells or hair-cell regeneration or some of the other beautiful technologies that we would love to be able to have in this population. We're dealing with a comparison of normal hair cells, and this is what damaged hair cells look like. So they can't move, they can't have ion channels open. They're basically not functional. And there are a lot of issues that I can talk about a little bit later if we have time, which are the education and social issues in particular for children with hearing loss. Do you mainstream them? How do you mainstream them in educational situations if they have hearing loss, if they're struggling to listen? What about school for the deaf? That's been a very interesting issue in Deaf culture, with a capital D. So I won't really talk about those as much as the science behind the bionic ear, but I'm very sensitive to these issues. So, we can have progressive hearing loss, which is that many adults, and also some children who are born with normal hearing, can experience hearing loss as it progresses over a period of months or years, until eventually, they become profoundly deaf. There are many, many causes. I'm only listing a few here. Meningitis is an example of sudden hearing loss. An accident is an example of sudden hearing loss. Ototoxicity: you can accidentally end up being given a cocktail of drugs to deal with other medical issues, but they are toxic to the ear, and they can cause deafness. But noise exposure and heredity are examples of loss of hearing that are often progressive. They can take months and years to get to where they are, and often, there are, of course, unknown reasons. And again, on the right, you see examples of damaged, dead, lost hair cells, compared on the left with healthy hair cells. So, when we think about what cochlear implants help us with, we're thinking about the importance of communication and learning in natural, everyday listening environments. This is where we want people to be able to participate, to succeed, to learn, to grow, and to be members of society. Now, a lot of the issues that we deal with are whether you can know where sounds are coming from so that you can direct your attention to them. And a little bit later, I'll talk about the fact that this depends a lot on whether you can have access to sound in both ears. So, cochlear implants were designed to give input to one ear. Later on, we started to see a progression and a push in the field to give people two cochlear implants. And one of the things that having two cochlear implants helps you with is improvement in localizing sounds, in knowing where to direct your attention.
We also expect children to acquire language in complex environments like classrooms and to be able to listen to what we call streamed speech and to predict what is about to come next, and to follow instructions. And this is very complex. And being able to also segregate the important speech sounds from noise in the environment is part of what they have to do to engage in the real world. So they have to hear a target in a noisy environment. They have to segregate speech from noise or from other distracting sounds. Cochlear implants in one ear don't do a terribly good job of that. So again, having input in both ears makes a very big difference. We want children and infants to be able to do everything that I have just talked about without being overwhelmed. And I like to point this out because giving somebody a cochlear implant surgically is one step. It's the very first step. There are many steps along the way in which we have to program the device and make sure that they can use it well. They have to revisit the audiologist, and the device has to work well. And if it doesn't work well, if it's set to be too loud, they can be overwhelmed, and they might choose to simply take it off their ears. So you really have to work very closely with the parents, the school audiologist, and the many, many other professionals that are involved in treating these children. We don't want them to get confused about what they're hearing. Part of the confusion comes from not having enough speech therapy. So habilitation, rehabilitation, and speech therapy are all part of training children and adults to learn their devices. And of course, we want them to enjoy their auditory input. And one thing that I want to point out right now is, music is very tricky. And I'll explain a little bit why. But the speech signal and the music signal that come in through the implants are somewhat degraded. And so there's a little bit of a tricky issue relating to music, and it's one of the areas that the field is working on. So what does the implant do? You're looking at a simulation of electrodes that are sitting in the cochlea. They're stimulating the auditory nerve. The goal is to bypass the damaged hair cells, stimulate the nerve. Today children receive these devices definitely by 12 months of age, but I speak to surgeons all over the world, and I watched a surgery the other day in Germany where they implanted an infant simultaneously in both ears at eight months of age. So the standard of care has really shifted over the years. It depends a lot on issues like what the parents want, what the surgeon recommends, if a child is old enough to say what they want, that gets taken into account, and money. It depends on whether the health insurance company will cover the cost. So adults who become deaf are also eligible, but not in every country. And so that's a very interesting part of the issue. But today, we estimate there are over 350,000 people with implants, and the numbers are growing. These are pictures of some of the kids that we have studied in the lab, and one of the things I want to mention is, you can see here that they really also have to work with the ergonomics of where they're gonna put these devices so the child won't rip them off the head. But here, you see microphones that are sitting on the shoulders rather than behind the ear. That's not ideal. And so part of what I try to do when they come to my lab for my research is talk about training both the parents and the audiologists to use the devices in the best way possible.
And this picture of this little girl is exciting to show because she has two pigtails in this picture. But when she came to my lab, she had received two cochlear implants at a year of age. She was now three, but she had both of them on a ponytail on top of her head rather than behind each ear. So she wasn't getting the spatial cues that we get by being able to hear sounds on both sides of the head. But after a Cochlear Implants 101 talk with the parents (laughs) about sound localization, they were right there with me. They redid her hairdo, and now she's wearing two devices with two pigtails. Little things like that, which is sort of bench to clinic, taking the information that we have in the lab and really working with the population to get the information out there. So we have sound through electrical stimulation. What we're seeing on this slide is the fact that we're bypassing anything that the ear does, the ear canal, the bones, which are really important, and we're going straight into this swirly shape, the cochlea, stimulating the auditory nerve. In here, you see there are only about 22 electrodes, and this is very different from the tens of thousands of hair cells and auditory nerve fibers that are responding separately and beautifully to each frequency. So we have, as I'll talk about in a minute, a degradation of the speech signal. At the bottom right, you see the actual implant that goes into the bone. It's implanted into the temporal bone, and the information is transmitted through a coil, via an RF signal, across the scalp. This also means that when a child or an adult takes the outside part of the implant off, they are deaf again. So they can hear when they're being stimulated. They take it off to swim, to have a shower, mostly to go to sleep at night, and then they are deaf. And so this is a very interesting part of coping with hearing most of the time but not always hearing. And if the battery dies, they become deaf. And so they have to cope with those situations as well. So what we are doing is we are trying to replace rich, beautiful acoustic stimulation with electrical stimulation, and we are taking advantage of that tonotopic organization, the map of frequency in the cochlea. So, recall that we have a map that goes from the base, 20,000 hertz, to the apex, where we have very low frequencies. Here you see an electrode array going into the cochlea, but it stops over here after about one turn. We can't get them in too deep because of the anatomy of the cochlea and the actual size of the electrode array. So we're remapping the brain to use frequencies at places that the brain isn't used to getting them. There's a lot of adaptability and plasticity that has to happen. Because, for example, we send all the frequencies to all of these electrodes. This place in the cochlea likes about 1,500 hertz, but it's going to receive the lowest frequency possible, maybe 200 hertz. So, there's a shifting of frequency and adaptability that the brain has to go through, which is why, when you first turn on a cochlear implant, it might sound bionic, or fuzzy, or a little bit strange, and it takes time to get to know how to use the information; the short sketch below makes this frequency mismatch concrete. A little bit of history here, very interesting history: the early invention was around 1979. Then implants began to be provided to children around 1985.
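To put rough numbers on that frequency shift, here is a minimal sketch of a log-spaced band allocation across the electrodes. The electrode count, analysis range, and insertion depth below are illustrative assumptions, not any manufacturer's actual frequency map.

```python
import numpy as np

def greenwood_human(x):
    """Approximate characteristic frequency (Hz) at fraction x of the
    distance from apex (0.0) to base (1.0); commonly cited human fit."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * x) - k)

n_electrodes = 22             # roughly the count mentioned in the talk
lo_hz, hi_hz = 200.0, 8000.0  # analysis range; illustrative values only
edges = np.geomspace(lo_hz, hi_hz, n_electrodes + 1)  # log-spaced band edges

# The most apical electrode is assigned the lowest band (~200 Hz), but if the
# array reaches only about halfway along the basilar membrane, that electrode
# sits at a place whose natural tuning is near 1,500 Hz.
apical_place = 0.48           # fraction of the way from the apex; an assumption
print(f"Lowest analysis band: {edges[0]:.0f}-{edges[1]:.0f} Hz")
print(f"Natural tuning at that electrode's place: ~{greenwood_human(apical_place):.0f} Hz")
# The gap between the delivered band and the place's natural tuning is the
# frequency shift the brain has to adapt to after activation.
```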
In the 1990s is when the discussion happened about whether people should get implants in both ears or not, and that's when clinical trials began in the United States. What's really interesting is to talk to some of the engineers who were involved in developing these devices in the '70s and '80s. I happened to meet one of them when I was in Australia last year, Peter Seligman, and he says, "You know, when I started doing that work, we never believed it would ever work, but we thought we'd have a lot of fun trying." (audience laughs) And it's really amazing how far it has come, and in fact, now, he and others are working on retinal implants, and he has the same attitude. "I don't think it'll work, maybe it'll work, maybe not, but we're having fun trying." And so that's a true engineer, that's really the true attitude of an engineer. But it takes engineers, otology physicians, audiologists, speech pathologists, and researchers to really put all of this together. So around 2005 is when two cochlear implants became more the standard of care. That's when health insurers really started to cover the cost of the devices. The other thing that happened is that implants started to be provided also in other places, beyond the auditory nerve. So there are disorders in which the auditory nerve doesn't work. So what do you do then? Well, maybe you can bypass the auditory nerve and stimulate the brain stem, the cochlear nucleus, or the midbrain. So smaller numbers of patients, but definitely moving the field in that direction. It also became clear that not everybody has profound loss of hearing all the way along the cochlea. Many people have what we call residual, or leftover, hearing in the low frequencies. So we don't want to put the cochlear implant all the way in and destroy whatever hair cells are still surviving there. And so there are now what we call short electrode arrays that try to preserve whatever hearing people still have left. We call that electric-acoustic hearing, which is that you might have a cochlear implant in your ear, it's only halfway in, and you can use a hearing aid combined with a cochlear implant and have hybrid hearing, or electric-acoustic hearing. You might have one implant in one ear and a hearing aid in the other ear and be able to use those together as well. So there are lots of advances in the field, in what we call the preservation of hearing. We don't want to ruin what you still have left, so we want to let you use what you have, preserve hearing, and use the technology as well. The other very interesting thing that started really recently, a few years ago, is what we call cochlear implantation in people with single-sided deafness. So somebody with perfectly good hearing in one ear, deafness in the other ear, often accompanied by tinnitus. So, people who have severe tinnitus in one ear, accompanied by deafness, are starting to receive an implant in that ear. Well, it turns out that for many of those patients, the electrical stimulation suppresses the tinnitus and leads to an enormous improvement in quality of life. And it started out that way. Once those floodgates began to open, other people with deafness in one ear said, "I would like a cochlear implant, so that I can hear in both ears," and some of those have paid out of pocket, about $60,000, but some of them have had insurance companies pay for them as well. So kind of a wave into the future that's very interesting.
So, for the engineers out there, I'm gonna take less than a minute of your time to just mention what we mean when we talk about signal processing. We have a speech waveform here on the left. We take that waveform, and we break it down. We call it bandpass filtering. So we filter that sound into those 20-some channels. We then extract the envelope of that signal. We have to do what's called compression in that signal. We then have pulsatile stimulation that is sent to all of the electrodes, and now you see the auditory nerve firing. But that yellow shape is showing you that, every time I stimulate any place along the cochlea, I'm not very good at containing it to just that place. We have what's called a spread, which means that the signal is not that clear. And so we're not able to present stimulation to only one place and keep it just there. There's a spread of excitation. And so that's part of why the signal sounds a little bit degraded. So we have some limitations, some distortion in the level cues, the two ears are not fitted in a way that they're synchronized, so that's also a problem. And we have this limited spectral resolution. But we have a lot of happy results, or a lot of people who are very happy with their results. I really like showing this picture in particular on the right because this family has two parents who are both deaf from birth. They sign and they speak. They then had two children who were deaf. And within a year after being born, each of those children had two cochlear implants because the parents wanted the kids to sign but also to speak. And, I have to tell you, I've met these kids many times. They come to my lab every year for research, and they speak beautifully. And they have many, many hours a week of speech therapy, but they also sign with their parents. They're bilingual. The parents then each got a cochlear implant because they wanted to know what it's like for the children to have to use them. So the parents don't have clear speech or very clear understanding of the signal, but they understand that sounds are happening in the world. They use the sound to augment their speech understanding and their lip-reading. This guy over here on the left is a graduate of UW-Madison. He became deaf as a child, and by age 12 received an implant, and he really loves having just one in one ear. He functions beautifully. I can call him from the other side of the room, and he hears me perfectly well. He was a cheerleader here at UW, he was an athlete, functions really, really well. And so, you see, a lot of variability in what people choose and also in the outcomes, partly depending on the history and partly depending on unknown factors. I'd like to play some sounds for you of what we think simulations of cochlear implants sound like. I'll play the sound initially with what we call an 8-channel simulation of a girl who is talking. And this is what it might sound like when the activation of the implant first happens. See if you can understand the speech. (voice speaks distorted, metallic speech) Okay, let's make it a little bit clearer with more channels. (voice speaks distorted, metallic speech) -
Ruth
More channels. -
Voiceover
Wanna come to my birthday party in August? -
Ruth
And then actual speech. -
Voiceover
Wanna come to my birthday party in August? (voice speaks distorted, metallic speech) -
Ruth
Now that you know what you're listening for, it's much easier to interpret the signal, and cochlear implant users say that their improvement happened very quickly over months, and by a year, they really stabilized quite well. Another example that I have is of music. So let's listen to the 8-channel music. Uh oh. Sorry. I'm not sure what's happening there. I think my music demo's not working. So, sorry about that. I can put it online. So here are some important considerations. The age at implantation, which I have mentioned. Now we're going down to a very, very young age, but that's not available in every country. The quality of the signal that is reaching the brain. The training. Training the users to interpret the signal so that speech and language can develop. The brain has to interpret and define the signal, using memory and cognition to fill in what is going on. So I really can't emphasize enough the importance of habilitation or rehabilitation. So how well do children with cochlear implants acquire language? These are data from a study that was published a number of years ago. On the left, we see comprehension, which is the ability to understand language, as a function of the test age in years, and their scores. What we see here are the normal-hearing children. As they get older, they ramp up very quickly. And then we see three lines. The yellow are children who received their implants before 18 months of age. The blue are children who received them between 18 months and three years of age, and the green are children who waited until more than three years of age. And you can see that the age of implantation has a huge effect on the ability of children to ramp up their language comprehension on standardized measures of language. On the right, we see similar findings for expression, so testing the ability of a child to express themselves on a standardized measure of language. But you see that we don't know when they catch up. The testing was completed in the study at 8 1/2 years of age. The question is, what happens when they get to be 12, 15, 20? The man whose picture I showed you earlier, I think, ramped up very quickly and was near where normal-hearing, age-matched peers are, but there's a lot of variability. Here are data from a study that we published last year, in which we now also looked at the IQ of the children. So the children that come to my lab travel from out of town, typically, with a parent. The mothers typically have a very high level of education. These kids come from a high socioeconomic background with really intense habilitation because to travel to Madison, Wisconsin, from anywhere in the country, for the parent to take time off work, they really are gonna fall into that group. So notice the mean IQ score is down here in this black line. Most of the kids in this sample fall above the mean in IQ. So they're really smart kids, high ability to function on these tasks. But when we look at their language scores, they look like the kids in the previous slide, which is that they're very smart, but their language is delayed. So a lot of the kids are one standard deviation or two standard deviations below the mean for their age. So there's a lot of progress still to be made even in this really high-functioning group of kids who are getting sort of the best that they're going to get. We also looked at something really fun to study, which is real-time speech processing. So, children are very good at processing language in their native language.
And it takes time, though, if I'm going to show them a picture of an object and then ask them to find the picture that matches a word they hear. How long will it take them to visually identify the auditory information and match the visual and the auditory targets? So I'm going to show you now a little video of a 2 1/2-year-old boy who is wearing cochlear implants, and he is looking at a screen that has the two pictures that you see, a baby and a doggie. We call it doggie. And he's listening to the following sound. So watch his eyes as the target word comes on. I'll play it for you twice. -
Voiceover
Look at the doggie? Where is it? -
Ruth
So you saw the video. Let's watch it one more time, and watch what his eyes are doing as she's saying, "Where is the," which is not information, and then after she says the target word, doggie, what happens to his eyes. -
Voiceover
Look at the doggie. Where is it? -
Ruth
So he finds the target. His eyes land there. And now this is what it sounds like with background noise. (men's voices speaking simultaneously) -
Voiceover
Look at the doggie. Where is it? (cross-talk in background) -
Ruth
So we call this +10 dB signal-to-noise ratio, which means that the target is 10 decibels above that background babble noise. Now we paid undergraduate students to analyze these data by judging where the child was looking in every frame, at 30 frames per second, and I'm gonna show you now the results. This was done in collaboration with Jenny Saffran, my colleague from the Waisman Center. And now, what you see is the proportion of looking at the target. So 50% would mean chance; they're not looking at the target any more than at the non-target. And then after the sound is presented, how fast are they, how good are they? Now, these are normal-hearing children. The gray data are when the children-- oops, sorry-- are when the children are listening in quiet. So notice that they get there fast, and they get to nearly 100%. They're not 100% because they're not always paying attention when they're 2 1/2 years old. These are the same normal-hearing children, now in noise. So even if you're a normal-hearing 2 1/2-year-old, the minute I introduce background babble, something happens, and you're slower, and you are less accurate. Now, what happens to the children who have cochlear implants? I have grayed out the data from the normal-hearing children. So the black triangles are the cochlear-implant children listening in quiet. They look like the normal-hearing children in noise. And people say that listening to an implant is like listening to a noisy system. And that is one piece of evidence here, which is that, even in quiet, at 2 1/2 years of age, having had at least one implant since 12 months of age, they're still struggling. And in noise, they're really having a hard time. And those are the red triangles here. So, the red diamonds. So, yes, they can acquire language. Yes, they can understand these. The glass is half full. But there's really room, also, for improvement and for progress. And these are the average data for the normal-hearing group in quiet or with the competing sound, and for the cochlear-implant group, which, when listening in quiet, looks like the normal-hearing group in noise. So we've had successes, and we've had limitations. With cochlear implants, they perform better than children with profound hearing loss who have hearing aids, but the variability is high, and many of them are worse than their normal-hearing peers. So the glass is both half full and half empty. Well, what else might we need to worry about? So, some of the hot terms today in the field are executive function, which is, what kind of resources do you have in order to solve problems? What is your working memory like? What can you remember? How fast can you solve problems? What is your cognitive load? How hard are you trying to solve problems? These are very timely for people with hearing loss because they come to the clinic often and report that they're exhausted from working so hard to listen all day. A good example is a girl with two cochlear implants whose mother was telling me that she used to go to school with one implant, came home exhausted, couldn't do after-school activities, and slept. When she got the second one, she said, "Wow, there's a big difference," because she was working less hard. And this is something that people with hearing loss, with hearing aids, will often report to their audiologist. "I'm hearing the speech. I'm getting there. But, oh boy, I'm working so hard to get there." So, we are trying to understand how speech understanding and cognitive load are related.
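The frame-by-frame coding described above reduces to a simple proportion-of-looking measure. Here is a toy Python sketch; the frame labels, time window, and function name are made up for illustration and are not the lab's actual analysis code.

```python
# Toy version of the frame-by-frame gaze coding: one hand-coded label per
# video frame at 30 frames per second. All labels here are invented.
FPS = 30
frames = ["away"] * 10 + ["distractor"] * 8 + ["target"] * 42   # 2 s of looking

def proportion_on_target(frame_labels, window_s):
    """Proportion of frames coded as 'target' within a (start, end) window,
    in seconds after the onset of the target word."""
    start, end = (int(t * FPS) for t in window_s)
    chunk = frame_labels[start:end]
    return sum(1 for label in chunk if label == "target") / len(chunk)

# Accuracy over an illustrative analysis window; 0.5 would be chance if the
# child split their looks evenly between the two pictures.
print(round(proportion_on_target(frames, (0.3, 2.0)), 2))
```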
And what we are doing to get there is using a machine called an eye tracker. What does an eye tracker do? You see my research assistant here, sitting in this contraption; it's really not uncomfortable. We're just holding her head still because the cameras are closing in on her eyeballs, and we are measuring where her eyes are looking and the size of the pupils. So, pupil dilation has been, over the years, suggested to be related to arousal, attention to the environment, fight or flight. It's controlled by norepinephrine and other things. There's a big growing literature on this. The idea is that, if your pupil is small, as is shown here, you're exerting less effort. But when you're exerting more effort, the pupil size is larger. So, we're bringing people into the eye-tracking facility. The procedure is, they listen to sentences and they repeat them while they're fixating on this monitor. And we measure real-time processing, not unlike what we did with the babies, although the work that we did with the babies was about eight years ago, before we had the eye tracker. Now remember earlier, I played some speech sounds for you, the 4, 8, 16 channels? This was the... (voice speaks distorted, metallic speech) and then the natural. -
Voiceover natural voice
Wanna come to my birthday party in August? -
Ruth
Right, so now what we're looking at are similar sentences that were degraded to all of these levels, but now as a function of real time, showing the change in pupil size in millimeters. So we present the sentence. After the sentence ends, the pupil size begins to grow. And it reaches some kind of a maximum, at the window in time when we think people are rehearsing, processing, getting ready to tell us what they heard, to give us the response. And then it comes down. The maximum pupil dilation is when the sound is most degraded. And the less degraded the sound is, the less effort people are exerting. So it's not just the percent correct. It's not just that speech understanding changes. It's that the pupil dilation, the listening effort, the cognitive load is also changing, and we can measure it in real time. This is the summary of those data as a function of time for the 4 channels, 8, all the way to the normal situation. And again, these are the mean data for normal speech, getting higher and higher and higher as we get to the more degraded speech. Okay, so the other area that I mentioned several times is really this question of whether children, adults, should receive one or two cochlear implants. And I did mention that fact, and you can see it here, that, when we give a child two cochlear implants, we're fitting them with two independent ears. And it's really a wonder because they cost about $60,000, so you'd think that by now, we should have developed the engineering skills to synchronize the two devices. We can do that in my lab with a research processor and in other people's labs, but actually, in the field, it's harder than you think. So one of the things we're working on is to reverse engineer the way that the brain uses information across the ears so we can have these devices talk to one another or be synchronized to improve performance. But there is the question of one or both ears. And remember, one of the things I mentioned earlier was why binaural hearing, or being able to hear with both ears, is useful. So, one of those areas is sound localization. Another one is the ability to separate speech from noise. And the third one is what we call de-reverberation. So we have localization, the cocktail party effect, as we fondly like to call it, and the idea that, if you're sitting in a reverberant environment, the binaural system can squelch or suppress echoes, so that we can focus and give greater perceptual attention or weight to the original sound. And hearing in reverberant environments is something that's very challenging for people with hearing loss, and it's very challenging with only one ear. So how do we study this? I'll tell you just a little bit about some of these studies, mostly by showing you these pictures because the methods, I think, speak a thousand words, or pictures speak a thousand words. So if you have a little child who is about two years old, we have to motivate them to participate in the task. We've tried to do things like head turning and eye gaze and really boring tasks, and sometimes after you test them about 10 or 20 times, they literally get up and say, "All done." (laughs with audience) Now, as the experimenter, you're not all done, but they're done, and they've walked out the door.
(laughs) So we've designed a really fun task for them, in which they're learning that we hide a toy or sticker or food behind a curtain, and we present the sound from one of those locations, and then they have to reach into the hole, so it's called reaching for sound, and they find the object of interest. It's really like doing animal behavior studies (laughs), where you're training the child, giving them reinforcement, and it works really beautifully. And so they will sit there for an hour and a half, and they do the task really beautifully. But it took us a very long time to develop that and to understand what 2 1/2-year-olds want and how to test them. When they get a little bit older, we bring them into the sound booth, and now this child is facing an array of loudspeakers, and he is telling us, based on a computer game on a screen, where he hears the sounds. So he can have a spatial map, and he can tell us, using this computerized game, where the sound is coming from. They can do this by about four or five years of age. And then, when they're adults, we bring them into other sort of more complex situations where sounds can come from many, many locations. They don't know where the sounds are, and we use much more complex ways of testing them. So, what we have found over the years is that bilateral, or dual, cochlear implants significantly improve the ability of children and adults to know where sounds are coming from and to understand speech in noise, and we also have found that there is a reduction in cognitive load. And so, since I have a few more minutes left, I wanted to show you how we measure the reduction in cognitive load when we turn on either one ear or the other ear or both ears. Now most adults who have hearing loss will tell us that they have a better ear and a poorer ear. They will say, "Oh, my left ear, that's definitely much worse," or, "Yeah, my right ear is much, much better." So the question is, what happens if I turn on just the poorer ear or just the better ear or both? So here, what you see is the pupil dilation change from baseline when we only turn on the poorer ear. So you can see that, relative to stimulus offset, they're listening to the stimulus, the pupil is growing, growing, reaches a maximum, they're rehearsing and repeating, and then it comes down. When we only turn on the better ear, it's a little bit better. So there's a release from effort when they're using their better ear. They told us they like using that ear better, and we can measure it with the pupil dilation. Now, the question is, what happens when I turn on both ears? Will the worse ear win, bringing the effort back up, or will it be as good as just the better ear, or will there be some kind of an improvement? And we were delighted to find that, in fact, there's some kind of an integration that goes on in the brain, where having both ears, even if one is worse than the other, leads to a better ability to hear a speech signal. So the cognitive load is reduced. We knew from a lot of our studies that they can get more words correct, better percent correct. But they're reporting to us that they don't want to go home without both implants. They don't want to leave the house without both of their devices. People are giving us those reports, subjective impressions of what life is like, and, in fact, it turns out that we're measuring it with this reduction in cognitive load.
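For the curious, the pupillometry measure described here comes down to a peak change from a pre-stimulus baseline. Below is a minimal Python sketch with made-up traces; the sampling rate, trace shapes, and numbers are assumptions for illustration, not the lab's actual data or analysis pipeline.

```python
import numpy as np

def peak_dilation_change(pupil_mm, baseline_window_s, fs):
    """Peak change in pupil diameter (mm) relative to a pre-stimulus baseline.
    'pupil_mm' is a 1-D trace sampled at fs Hz; the baseline window is
    (start, end) in seconds. A toy sketch, not the lab's analysis pipeline."""
    b0, b1 = (int(t * fs) for t in baseline_window_s)
    baseline = np.mean(pupil_mm[b0:b1])
    return float(np.max(pupil_mm[b1:] - baseline))

# Made-up traces for three listening conditions; every number is illustrative.
fs = 60                                  # assumed eye-tracker sampling rate (Hz)
t = np.arange(0, 6, 1 / fs)              # a 6-second trial

def fake_trace(peak_mm):
    # Baseline near 3 mm with a bump peaking ~3 s in, after the sentence ends.
    return 3.0 + peak_mm * np.exp(-((t - 3.0) ** 2) / 0.8)

for label, peak in [("poorer ear only", 0.35), ("better ear only", 0.25), ("both ears", 0.15)]:
    change = peak_dilation_change(fake_trace(peak), (0.0, 1.0), fs)
    print(f"{label}: peak dilation change ~{change:.2f} mm")
# Smaller peaks mean less listening effort: both ears easiest, the poorer
# ear alone hardest, mirroring the pattern described above.
```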
Before I finish, I want to mention that a number of years ago, we had some very interesting things happen on campus, when I started to work on some of these issues with my prior mentor, Tom Yin, who conducted studies on sound localization in cats for many years. I know that many of you followed the stories of what happened with Tom's lab and People for the Ethical Treatment of Animals, and some of the controversy around this. What we tried to do is to build a lab in which we could conduct studies in animals, provide them with cochlear implants, understand what the brain is doing with those signals, and then be able to improve the signals, so that we can improve the engineering. As many of you know, that didn't end up happening. There were some political and controversial issues, but I thought I would mention that Tom suggested that I do that, and I'm happy to answer any questions about how animal studies play a role in a lot of what we have learned about the bionic ear and the ability to provide sound to people who are deaf through this very complex technological advancement. I wanted to finish off by saying that the bilateral listening mode produces reduced listening effort. Patient subjective reports of which ear is better are consistent with the amount of listening effort. And bionic hearing, as we've followed it from the 1970s and 1980s, has really evolved and has been improving. People with profound hearing loss can hear and function in everyday hearing environments. We're working on improvement of the signal. We think the importance of rehabilitation and training really should continue to be mentioned. We think it's important to reduce the cost. And I really didn't discuss much here about hearing culture versus Deaf culture, but I'd be happy to do that. I wanted to thank everybody in my lab and colleagues around the world and past students who've been involved in this research. Also, the work was funded primarily by the National Institutes of Health, the National Institute on Deafness and Other Communication Disorders, and also by the Waisman Center, and various manufacturers who produce implants have, over the years, helped us with some of the work. Thank you for your time and attention. (audience applauds)