Although a cochlear implant and a "hearing aid" are as different as night and day, the way the sound is processed is similar. At one time hearing aids were analog. Now new technology presents the sound in a digital format. I have never worn hearing aids; my world of hearing went from the ears and working cochlea that God gave me to a surgically implanted array of electrodes and a tiny computer the size of my fingernail embedded millimeters inside my head. The show and tell of the system is the mic and the BTE (behind the ear) gizmo, that attaches to a magnet, that sticks to the tiny computer, that is implanted in my head, that takes the sound, that sends it to the man-made electrodes, that fires it at my brain...
and this is the sound that Jack built!!!
Whew!
Digital aids (both hearing aids and cochlear implants) work differently from the everyday world of analog sound that most of the world hears with their Horton Hears a Who ears.
Digital devices take the signal from the microphone and convert it into "bits" of data (0s and 1s), numbers that can be manipulated by a tiny computer in the processing part of the system. This makes it possible to tailor and process sounds very precisely, in ways that are impossible with analog aids. The bits representing the sound are analyzed and manipulated by algorithms (a set of instructions) to perform precise, complex actions, and are then converted back into electricity, which is finally changed back into sound that gets fired at my auditory nerve, or in the case of a digital hearing aid, goes into the ear.
This process happens very rapidly: several million calculations occur in the processor every second. The numbers can be manipulated in almost any way imaginable, and this is what gives the digital hearing processor its big advantage. The processor can run numerous complex calculations on those binary numbers to create, in theory, very precise sound.
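For the curious, here is a rough sketch in Python of the idea described above: an analog signal gets sampled into numbers, an algorithm manipulates those numbers (here just a simple volume boost, which is my own made-up stand-in, not any manufacturer's actual processing), and the result is converted back. The real processors do far more than this, and far faster.

```python
# A toy sketch of the "analog in, bits, algorithm, analog out" idea.
# This is NOT any hearing aid or implant maker's real code; the gain
# algorithm below is a made-up stand-in for the real, far more complex ones.
import math

SAMPLE_RATE = 16000   # samples per second (an assumption for this sketch)
BIT_DEPTH = 16        # each sample becomes a 16-bit number

def analog_to_bits(seconds=0.01, freq=440.0):
    """Pretend microphone: sample a 440 Hz tone into integers ("bits")."""
    max_int = 2 ** (BIT_DEPTH - 1) - 1
    samples = []
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        analog = 0.25 * math.sin(2 * math.pi * freq * t)  # quiet analog wave
        samples.append(int(analog * max_int))             # quantize to bits
    return samples

def algorithm(samples, gain=2.0):
    """A stand-in 'algorithm': boost the volume, clipping if it gets too loud."""
    max_int = 2 ** (BIT_DEPTH - 1) - 1
    return [max(-max_int, min(max_int, int(s * gain))) for s in samples]

def bits_to_analog(samples):
    """Convert the manipulated numbers back toward an analog-style signal."""
    max_int = 2 ** (BIT_DEPTH - 1) - 1
    return [s / max_int for s in samples]

if __name__ == "__main__":
    bits = analog_to_bits()
    processed = algorithm(bits)
    out = bits_to_analog(processed)
    print(f"{len(bits)} samples processed; first few values: {out[:5]}")
```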
This is the process as I understand it, really a compilation of many discussions with audiologists, medical people, and some users.
The best way to illustrate it is to walk through my world:
You say "Hello David" and my sound processor picks up speech and environmental sounds It then codes the information and send it to implanted part in my head, through the use of radio waves and a magnet. The implanted part of the system transmits signals to the auditory nerve, which carries them to the brain.
A cochlear implant does not correct hearing loss. In fact, it bypasses the normal hearing pathway, in which sounds travel through the outer, middle, and inner ear to reach the auditory nerve. (You see, my inner stuff got broken in a "medical firestorm" 14 months ago, so we need to bypass all the broken parts.) A cochlear implant stimulates the auditory nerve directly. The brain then learns to take this electrical code and "interpret" it as speech. All of this happens as fast as your gums flap.
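Here is a very rough sketch of that coding step, as I understand it from my audiologist chats: the processor splits sound into frequency bands, measures how loud each band is, and maps each band to one of the electrodes sitting along my cochlea. The electrode count, band edges, and loudness mapping below are simplified numbers I made up for illustration; every manufacturer's actual coding strategy is its own secret sauce.

```python
# A simplified picture of "sound -> electrode stimulation levels".
# The band layout and loudness mapping here are illustrative guesses,
# not any real implant manufacturer's coding strategy.
import math

NUM_ELECTRODES = 16          # assumed electrode count for the sketch
LOW_HZ, HIGH_HZ = 200, 8000  # assumed frequency range covered

def electrode_bands():
    """Split the frequency range into one band per electrode (log-spaced)."""
    ratio = (HIGH_HZ / LOW_HZ) ** (1 / NUM_ELECTRODES)
    edges = [LOW_HZ * ratio ** i for i in range(NUM_ELECTRODES + 1)]
    return list(zip(edges[:-1], edges[1:]))

def band_energies(frequencies_present):
    """Toy 'filterbank': how much sound fell into each electrode's band?"""
    energies = [0.0] * NUM_ELECTRODES
    for i, (lo, hi) in enumerate(electrode_bands()):
        for freq, level in frequencies_present:
            if lo <= freq < hi:
                energies[i] += level
    return energies

def stimulation_levels(energies):
    """Map band loudness to an electrode current level from 0 to 255."""
    return [min(255, int(255 * (1 - math.exp(-3 * e)))) for e in energies]

if __name__ == "__main__":
    # Pretend "Hello David" momentarily contains these frequencies/levels.
    snapshot = [(300, 0.8), (1200, 0.5), (2500, 0.3)]
    levels = stimulation_levels(band_energies(snapshot))
    for electrode, level in enumerate(levels, start=1):
        if level:
            print(f"electrode {electrode}: fire at level {level}")
```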
So what I "hear" is a really a mile long string of zero's and one's or digital code.
Someone had to write that "code" or turn the "sound" into a string of zeros and ones, so I could interpret it.
What got me pondering this way-too-complicated issue was listening to the new President-elect's speech last night. I have never heard Obama's voice in my hearing days, so as I listened to him talk, my version of his voice came through a long string of digital code. "Binary speak," as I call it. I have no idea if it is how you hear him. I have not a single memory of his voice to draw on, since I was "introduced" to him long after I lost my hearing. No memory to use. So it is purely mechanical algorithms.
My question, or where I really am going with this post, is this: Is what I "hear" (as all this processing happens) just an interpretation of a software writer's interpretation of the sound?
Perhaps what also got me thinking about this is a more general thought on music and sound.
Is what we hear, and how we feel about what we hear in life, our own interpretation of it? Or is it how everyone hears it? Do all sounds sound the same to everyone? Does Bob Dylan sound the same to me as he does to you?
If not, then that would explain likes and dislikes. I liked a certain sound or feel in my old music-listening days. Let's say that I liked jazz. Is it because of my upbringing, my hard wiring, a product of my genetic code, or because of how I "heard" it, how my brain interpreted it?
I like a certain actor's voice. Some people don't. Is it how I or we interpret it, or is there more to it?
If all sound is our own personal interpretation of it, then what I hear now is really how those sounds sound to a software writer. The person who wrote the code that fires the string of zeros and ones at my auditory nerves writes them as he hears them. Calls them how he sees them, really, eh? What if this person and I don't hear ear to ear?
The phone rings and I now hear "braaaaaaaaaaaacccccccccccccccckkk." It never sounded like that before I got a CI. It used to go "brrrrrrinnnnnnnggg." Is it because my writer interpreted it as a "brack" sound in his world, so he wrote the code based on that?
Many things sound different. Speeder's bark is quite different. I often wonder if it is because the writer never knew his sound, or his personality, so how could he even be close? I know my brain plays a huge part in this digital system. My memory is accessed in nanoseconds when I get sound. This is huge in voice recognition. If I had no memory, everyone would sound the same.
Memory access is key, as it supplies the nuances of speech that my memory holds for that person. The bad side to that is, when my son talks, I hear him in his "old" voice, before it changed. So this 14-year-old boy sounds like a little kid. His voice changed during the nine months I had no hearing.
I don't know; it just got me wondering about the person who wrote my software, and whether he or she was a jazz fan, or heard a doorbell differently than I did in my old hearing life.
There are probably more important things to do than think about this, but it does indeed provoke thought.
What if the writer was a Celine Dion fan?
Yikes!
Warmly,
David