Wednesday, November 5, 2008

Figuring Out Why Sounds Sound That Way.

In the world of cochlear implants, and in many cases of new-technology hearing aids, the science of it all is baffling to me.

Although a cochlear implant and a "hearing aid" are as different as night and day, the way the sound is processed is similar. At one time hearing aids were analog; now new technology presents the sound in a digital format. I have never worn hearing aids. My world of hearing went from the ears and working cochlea that God gave me to a surgically implanted array of electrodes and a tiny computer the size of my fingernail embedded millimeters inside my head. The show-and-tell of the system is the mic and the BTE (behind-the-ear) gizmo, that attaches to a magnet, that sticks to the tiny computer, that is implanted in my head, that takes the sound, that sends it to the man-made electrodes, that fires it at my brain...
and this is the sound that Jack built!!!
Whew!

Digital aids (both hearing aids and cochlear implants) work in a different way than the everyday world of analog sound that most of the world hears by, in our Horton Hears a Who ears.
Digital devices take the signal from the microphone and convert it into "bits" of data ("0"s and "1"s), numbers that can be manipulated by a tiny computer in the processing part of the system. This makes it possible to tailor and process sounds very precisely, in ways that are impossible with analog aids. The bits representing the sound are analyzed and manipulated by algorithms (sets of instructions) to perform precise, complex actions, and are then converted back into electricity: in my case, pulses that get fired at my auditory nerve, or in the case of a digital hearing aid, sound that goes into the ear.

This process happens very rapidly: there are several million calculations occurring in the processor every second. The numbers can be manipulated in almost any way imaginable, and this is what gives the digital hearing processor its big advantage. The processor can run numerous complex calculations on those binary numbers to create, at least in theory, very precise sound.
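For anyone who likes to see the nuts and bolts, here is a rough sketch in Python of the kind of loop I am describing. Every number and the toy "algorithm" here (a simple boost of the quieter frequency bands) is made up purely for illustration; a real sound processor uses far more sophisticated, and proprietary, strategies.

```python
import numpy as np

SAMPLE_RATE = 16_000   # samples per second (illustrative; real devices vary)
NUM_BANDS = 22         # loosely mirrors a 22-electrode array

def microphone_to_bits(analog_signal):
    """Step 1: the analog-to-digital converter turns sound pressure into numbers."""
    # Each sample becomes a 16-bit integer: literally a string of 0s and 1s.
    return np.round(analog_signal * 32767).astype(np.int16)

def run_algorithm(samples):
    """Step 2: the 'algorithm' manipulates the numbers.
    Toy version: split the sound into frequency bands and boost the quiet ones."""
    spectrum = np.fft.rfft(samples.astype(float))
    bands = np.array_split(spectrum, NUM_BANDS)
    boosted = [b * 2.0 if np.abs(b).mean() < 1e5 else b for b in bands]
    return np.fft.irfft(np.concatenate(boosted), n=len(samples))

def back_to_output(processed):
    """Step 3: convert the numbers back into electricity -- pulse levels for an
    implant's electrodes, or amplified sound for a digital hearing aid."""
    return np.clip(processed / 32767.0, -1.0, 1.0)

# One second of a stand-in "Hello David": a 440 Hz tone playing the part of a voice.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
analog = 0.3 * np.sin(2 * np.pi * 440 * t)

bits = microphone_to_bits(analog)
output = back_to_output(run_algorithm(bits))
print(f"{len(bits)} samples, each a 16-bit number: "
      f"{len(bits) * 16} zeros and ones in one second of sound")
```

Even this toy version touches all 16,000 samples in every second of sound, spread across 22 bands, so the millions of calculations per second are easy to believe.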
This is the process as I understand it, really a compilation of many discussions with audiologists, medical people and some users.
The best way to illustrate is to walk through my world:

You say "Hello David"
and my sound processor picks up speech and environmental sounds. It then codes the information and sends it to the implanted part in my head, through the use of radio waves and a magnet. The implanted part of the system transmits signals to the auditory nerve, which carries them to the brain.

A cochlear implant does not correct hearing loss. In fact, it bypasses the normal hearing pathway, in which sounds travel through the outer, middle, and inner ear to reach the auditory nerve. (You see, my inner stuff got broken in a "medical firestorm" 14 months ago, so we need to bypass all the broken parts.) A cochlear implant stimulates the auditory nerve directly. The brain then learns to take this electrical code and "interpret" it as speech. All of this happens as fast as your gums flap.
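To put the whole chain in one place, here is a playful sketch of the two pathways. Every function name is invented just to label a stage; it is not how any real implant software is organized.

```python
# Each stage simply labels what happens to the signal as it passes through;
# all of the names are made up for illustration.
def outer_ear(s):       return s + " -> collected by the outer ear"
def middle_ear(s):      return s + " -> amplified by the middle-ear bones"
def inner_ear(s):       return s + " -> turned into nerve signals by the cochlea"
def auditory_nerve(s):  return s + " -> carried up the auditory nerve"
def brain(s):           return s + " -> interpreted by the brain as speech"

def sound_processor(s): return s + " -> coded into digital pulses by the BTE processor"
def radio_link(s):      return s + " -> sent through the skin by radio waves and a magnet"
def electrodes(s):      return s + " -> fired at the electrode array"

def normal_hearing(sound):
    """The pathway the implant bypasses: outer, middle, inner ear, then the nerve."""
    return brain(auditory_nerve(inner_ear(middle_ear(outer_ear(sound)))))

def cochlear_implant(sound):
    """The broken parts are skipped; the processor reaches the auditory nerve directly."""
    return brain(auditory_nerve(electrodes(radio_link(sound_processor(sound)))))

print(cochlear_implant('"Hello David"'))
```

The point is simply that everything between the microphone and the nerve is the man-made part, and that is exactly where a software writer's interpretation could sneak in.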

So what I "hear" is really a mile-long string of zeros and ones, or digital code.

Someone had to write that "code" or turn the "sound" into a string of zeros and ones, so I could interpret it.

What got me pondering this way-too-complicated issue was listening to the new President-elect's speech last night. I have never heard Obama's voice in my hearing days, so as I listened to him talk, my version of his voice came through a long string of digital code. "Binary speak," as I call it. I have no idea if it is how you hear him. I have not a single memory of his voice to draw on, since I was "introduced" to him long after I lost my hearing. No memory to use. So it is purely mechanical algorithms.

My question, or where I really am going with this post, is this: Is what I "hear" (as all this processing happens) just an interpretation of a software writer's interpretation of the sound?

Perhaps what also got me thinking of this is a more general thought on music and sound.

Is what we hear, and how we feel about what we hear in life, our own interpretation of it? Or is it how everyone hears it? Do all sounds sound the same to everyone? Does Bob Dylan sound the same to me as he does to you?

If not, then that would explain likes and dislikes. I liked a certain sound or feel in my old music-listening days. Let's say that I liked jazz. Is that because of my upbringing, my hard wiring, a product of my genetic code, or because of how I "heard" it, how my brain interpreted it?

I like a certain actor's voice. Some people don't. Is it how I, or we, interpret it, or is there more to it?

If all sound is our own personal interpretation of it, then what I hear now is really how those sounds sound to a software writer. The person who wrote the code that fires the string of zeros and ones at my auditory nerve writes them as he hears them. Calls them how he sees them, really, eh? What if this person and I don't hear ear to ear?

The phone rings and I now hear "braaaaaaaaaaaacccccccccccccccckkk". It never sounded like that before I got a CI. It used to go "brrrrrrinnnnnnnggg". Is it because my writer heard it as a "brack" sound in his world, so he wrote the code based on that?

Many things sound different. Speeder's bark is quite different. I often wonder if it is because the writer never knew his sound, or his personality; how could he even be close? I know my brain plays a huge part in this digital system. My memory is accessed in nanoseconds when I get sound. This is huge in voice recognition. If I had no memory, everyone would sound the same.
Memory access is key, as it supplies the nuances of that person's speech that my memory serves up. The bad side to that is, when my son talks, I hear him in his "old" voice, before it changed. So this 14-year-old boy sounds like a little kid. His voice changed during the 9 months when I had no hearing.

I don't know. It just got me wondering about the person who wrote my software, and whether he or she was a jazz fan, or heard a doorbell differently than I did in my old hearing life.

Probably more important things to do than think about this, but it does indeed provoke thought.

What if the writer was a Celine Dion fan?

Yikes!



Warmly,



David

11 comments:

Unknown said...

Ha, the Celine Dion comment made me laugh out loud!!! Can't stand her.

This was a fascinating post.

Kay Dennison said...

Your thoughts here fascinate me. It's opened my eyes to what one goes through with hearing loss. I appreciate your sharing it here.

And yeah, the Celine Dion reference made me giggle.

Thanks!!!

Anonymous said...

Fascinating! I found myself leaning in closer and closer to the monitor, trying hard to grasp this concept.
You really caught my attention with the Bob Dylan example. I learned to like his music when I fell in love with my husband, so I associate Dylan with my dh. Also, my 2 oldest sons now have lower voices than their father. It's a shock for far-away family to call us on the telephone and have either of the teenagers answer the phone. I know I still think of old friends as having not changed. There is a lot of mind-over-matter in our lives, eh?
I suppose that is why it is important to remain open-minded; otherwise, we are unable to accept change.

You always make me think in new ways, Dave!

Bear Naked said...

David
This must be about physics because I could not understand anything once you typed analog and digital.
Sorry.

And here I thought I was the only person who cannot tolerate Celine Dion.--Who knew there are other non-fans out there?

Bear((( )))

themom said...

For the record...I'm not a Celine Dion fan either. But back to your analysis, now you have me wondering. We know that people hear different decibel levels (thus partial deafness), and then I tried to apply this to hearing my own voice on an answering machine or other tape recording...and I would swear it is not my voice. Very interesting.

abc said...

I wonder if it makes a difference that you have bilateral implants, instead of a single implant? I've often heard people ask questions like, "I wonder if we all see colors the same?" Like is the color RED the same experience for me as it is for you? Do you actually see what I call RED, but the color you actually see is what I have learned to call BLUE? Really no way to know -- our brains are incredibly complex and resilient computers... It's amazing that they can interpret sounds such that they become meaningful to people... Dylan is a master poet, storyteller and songwriter, but as a singer, my brain tells me that Art Garfunkel would do a much better job! LOL

ronnie said...

An absolutely AMAZING post that touches on many things I've wondered about myself.

For example, people that I've met since the implant have very distinct voices (distinct from each other, and appropriate to them), which sound "normal" to me. But am I really hearing their voice as a hearing person hears it? And, frankly, how much of this am I making up in my head?? Am I assigning my interpretation of an "appropriate" voice to this 40-year-old tall white woman, or 18-year-old black man, or 12-year-old giggly Asian girl? I guess at this point there's no way I'll ever know. But I do wonder.

Pseudo said...

Very interesting. Is it similar at all to reader response theory? Bringing meaning to what we read from our past experiences and our own likes and dislikes?

BTW I have something for you over at my blog.

bobbie said...

Poor Celine. No one likes her. And yet there she is, in the public eye - or ear.

This is quite a post! It presents us with a chain of thought that will keep me going for quite a while. These implants are such a long way from the days when my mother "listened" to the old Atwater Kent radio through bone conduction by holding an ear phone under her chin.

And now abc has me thinking about how we see colors. Oh, this will have me thinking a long time!

Xtreme English said...

i've experienced three kinds of sound in my life: perfectly natural, highly acute sound from birth to age 26; progressively deteriorating sound assisted (aided? gack) by three kinds of hearing aids (at first one, then two) from age 30 to about 18 months ago when i was something like 70; and hearing with one c.i. from april 2007 to the present.

they're all different, but the hearing aids, even the fancy (expensive) digital ones, were the worst. hearing aid sound was harsh and awful. but that's because hearing aids just make a bad sound louder. they are simply more convenient than hauling around an ear horn, which works on the same principle. my first hearing aids sounded like a taxi squawk box. but as my hearing deteriorated, no amount of fancy amplification worked on the sound quality. at the end, it was all bad, most of the time.

c.i.s are something else altogether. what i hear now involves not only technology but the regrowth of neurons--the regrowth being stimulated by the implanted electrodes. this last is why people could hear with the very first c.i.s, which had only one or two electrodes. mine has 22 but it's programmed to act like 24.

the more you hear with these things, the more you can hear.

it's not perfect, but it's not harsh and distorted as with hearing aids. it's as close to what a hearing person hears as i'll experience in my lifetime. i really don't care about the differences. yes, the traffic sounds i hear outside my condo in the a.m. sound just like the ones outside the hotel in italy this past summer. and the chatting during intermission at the opera t'other night sounded just like the freakin music blasting away in the restaurant. i can deal with that just fine as long as i can talk with my friends in person and on the phone or go to plays and the blessed opera in DC which has supertitles over the stage. i've always gone to movies whether they've had subtitles or i could hear anything or not. it's just a different experience than watching something from netflix at home.

now i find music is best when i listen to it via the patch cord plugged into a really good disk player (and thanks to ronnie for that hint). or live. live music is better than anything over the radio or TV.

it's complicated and fascinating and a blessing of the highest order.

Sarah C said...

This is something that I think about every day, as my job provokes me to think about what children are hearing through their CIs and compare this to what I'm hearing through mine. Granted, with me it's still very early days, but I can compare between my HA and my CI. Sounds that I heard through my HA that I can hear through my CI now sound similar, but now I have the added bonus of hearing the higher-pitched part of that sound, so claps don't sound the same as before. But as I give my CI more patience and time, I notice that people's voices sound the same, though not fully, because it's still a little computerised.

But is what a child with absolutely no auditory memory implanted with a CI at a young age and what I hear with auditory memory with a CI the same? I don't know but I don't think so. The effort that my brain has put into deciphering these beeps and squeaks and adjusting to the more "normal" sound I remember is astounding. And this was only possible because I can remember natural sound.. But I also "see" sound when sometimes there is no sound there. My brain is giving me an auditory experience despite not actually receiving information from the auditory nerve! This is how I filled in the blanks when I had my HA's and I couldn't hear the higher pitched parts of speech sounds. No two CI users will hear the same thing as everybody's different - like the colour thing - I say turquoise, another person says green or blue. It depends on our outlook and experiences. What a fascinating topic!