Discover, October 1, 1993
When one sense is impaired, another may do the job. Researchers are building devices that let the blind hear images and the deaf touch sounds.
For decades many researchers have dreamed of giving sight to the blind and hearing to the deaf with surgically implanted devices. Yet the blind and deaf themselves have developed a completely different strategy: training another sense to do the job. People who read braille, for instance, can process written information as quickly through their skin as others do with their eyes. Sign language, although purely visual, is as rich and complex as any spoken language and is processed in the same regions of the brain.
Impressed with this perceptual pinch-hitting, a few researchers have for 30 years now been bucking the trend toward surgical implants. Instead of trying to fix the sense that’s broken, these investigators have been working on electronic devices that would help the sense-impaired switch senses more effectively: devices that would let a deaf person perceive speech with his skin, for example, or a blind person perceive his visual surroundings with his ears. After many years of frustration, sense-switching research seems at last on the verge of paying off, thanks both to the ongoing revolution in microelectronics and to a more realistic set of ambitions. The sense-switching devices that are beginning to emerge from laboratories offer the possibility of real help to the blind and deaf, if only in limited circumstances.
One of the first people to explore sense substitution seriously, and one whose ambitions were the grandest, was Paul Bach-y-Rita, a specialist in rehabilitation medicine at the University of Wisconsin. In the early 1960s he and other researchers had begun to reveal just how porous the boundaries between the senses are. Experiments showed, for example, that when a cat’s paw was pricked, nerve cells in the visual center of its brain responded, albeit weakly. Neurologists were also beginning to discover how dramatically the adult brain can reorganize itself to recover partially from paralysis or a stroke. Both lines of research led Bach-y-Rita to speculate that the brain of a blind person might learn to process touch signals as if they were visual ones–seeing things in his mind’s eye, as in a dream, without using his real eyes.
Bach-y-Rita built a prototype system in which a blind person wore a pair of glasses equipped with a miniature television camera. The video signal traveled to a small computer the person wore on a vest, along with batteries. Lining the inside of the vest was a rectangular mesh of more than 1,000 electrodes, each of which could deliver eight different levels of stimulation to the person’s abdomen. The computer used the electrodes to deliver a pattern of stimulation that corresponded to the shades of gray in each gridded picture element, or pixel, of the video frames (high stimulation for white pixels, low for black).
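In modern terms, the mapping Bach-y-Rita describes is a downsample-and-quantize step: average the camera pixels that fall on each electrode’s patch of the grid, then round the average to one of the eight stimulation levels. Here is a minimal sketch of that idea in Python; the 32-by-32 grid, the frame size, and the function name are illustrative assumptions, since the article specifies only the rough electrode count and the eight levels.

```python
import numpy as np

def frame_to_stimulation(frame, rows=32, cols=32, levels=8):
    """Map a grayscale video frame onto an electrode-grid pattern.

    frame      : 2-D array of brightness values in [0, 255]
    rows, cols : electrode grid dimensions (32 x 32 = 1,024 electrodes,
                 roughly the article's "more than 1,000"; an assumption)
    levels     : number of distinct stimulation intensities (8, per the article)
    """
    h, w = frame.shape
    # Trim so the frame divides evenly, then average each electrode's patch.
    patch = frame[: h - h % rows, : w - w % cols]
    patch = patch.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    # Quantize: bright (white) pixels -> high stimulation, dark -> low.
    return np.floor(patch / 256.0 * levels).astype(int)

# Example: a dark 240x320 frame with a bright square in the middle
# becomes a 32x32 grid of stimulation levels from 0 to 7.
frame = np.zeros((240, 320))
frame[80:160, 120:200] = 255.0
pattern = frame_to_stimulation(frame)
```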
Early research was promising: a few subjects became so adept at using the system that they could even recognize individual faces. But Bach-y-Rita wasn’t able to turn his device into something feasible outside the lab. “Everything we did was in a controlled environment: minimal clutter, maximal contrast, no problems with shading,” he says. “It’s fine if a person takes a minute to recognize a face that way in a lab, because it proves the brain can do it. But it’s not fine in a real environment, where recognition has to be instantaneous.”
In the late 1970s Bach-y-Rita left sense substitution to work on the rehabilitation of brain-damage victims. Recently, though, he’s come back, this time starting with the narrower goal of helping blind people in the workplace. “A lot of blind people were channeled into the computer field,” he points out, “and that was fine when everything was word output, which the computers could convert into artificial speech.” But now that computer graphics are ubiquitous, Bach-y-Rita says, it’s really a desperate situation for blind computer users: they need a way to perceive and manipulate computer graphics.
Bach-y-Rita and his colleagues are now building a desktop tablet, consisting of an array of 384 electrodes, that will transform computer graphics into patterns of tactile stimulation. A blind person will be able to skim his fingers over the electrodes to sense the image. If some part of the image is confusing, he will order the computer to zoom in on the detail. Even so, it’s not clear yet whether the resolution of the tablet will be good enough for computer graphics work; the touch image may be too fuzzy. But in principle a computer screen is exactly the kind of limited arena in which Bach-y-Rita’s ideas would work best.
While Bach-y-Rita continues trying to substitute touch for vision, other researchers have been inspired by his work to try different sense substitutions. Peter Meijer, a computer engineer at Philips Research Laboratories in the Netherlands, has recently built a device designed to allow the blind to see with their ears. It does so by converting pictures taken once a second by a miniature video camera into complex, one-second-long sounds.
As the device’s computer scans the video image, it represents pixels near the top of the image by high-frequency tones, and pixels near the bottom by low-frequency ones. The brighter the pixel, the louder its frequency is played. Scanning from left to right over the course of a second, the device plays all the tones representing a single column of pixels simultaneously, one column after another. If the image shows, say, a white line rising diagonally from left to right on a black background, Meijer’s device emits a single rising tone, because in each column of the image only one pixel is white. If the camera is pointed at Meijer’s own face, on the other hand, the result is a one-second cacophony that is recognizable, after some training, as a face, although not necessarily as Meijer’s.
At the end of each one-second scan, the device emits a click to let its user know that it will begin converting the next picture. When something in the image frame moves, the sound changes accordingly. Blind people have told Meijer his device might help them perceive an obstacle from a distance as they walk, long before they would be able to touch it with a cane. At the very least the device could help them under restricted conditions, perhaps enabling them to grasp geometric figures when studying math.
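Meijer’s scan is easy to state precisely: each image row gets its own tone (top rows high-pitched, bottom rows low), a pixel’s brightness sets that tone’s loudness, and the columns are played in turn, left to right, over one second. The following Python sketch renders that mapping; the frequency range, sample rate, image size, and function name are assumptions for illustration, not Meijer’s actual parameters.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_low=500.0, f_high=5000.0):
    """Render a grayscale image as a one-second sound, column by column.

    image : 2-D array with values in [0, 1]; image[0] is the TOP row.
    Rows map to tone frequencies (top row = highest pitch), and a
    pixel's brightness sets how loudly its row's tone is played.
    """
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_high, f_low, n_rows)      # one tone per row
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    chunks = []
    for c in range(n_cols):                         # left-to-right scan
        col = image[:, c]                           # this column's brightnesses
        # All of the column's tones sound simultaneously.
        tones = col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0) / n_rows)   # scale to avoid clipping
    return np.concatenate(chunks)

# The article's example: a white diagonal rising from left to right
# produces a single ascending tone.
diagonal = np.eye(16)[::-1]     # ones running from bottom-left to top-right
wave = image_to_sound(diagonal)
```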
Meanwhile researchers at Oxford University are trying to help deaf people by turning sounds back into images. One of the biggest challenges even to the partially deaf is learning to speak. A hearing child can compare his own voice with those of others, but the deaf must resort to other kinds of feedback. Looking at the shape of a teacher’s mouth helps, but it doesn’t tell you what subtle movement the tongue is making inside.
Lionel Tarassenko and Jake Reynolds of Oxford have figured out a way to provide fast visual feedback for speech training. They’ve designed a program, called the Visual Ear, in which each point on the screen corresponds to a certain combination of frequencies found in speech, so that a spoken word can be represented by a curve. To use the program, a deaf student picks a word to practice. The curve corresponding to the correct pronunciation appears on the screen. As the student pronounces the word into a microphone, the curve he produces is superimposed on the correct one. By trying repeatedly to reduce the distance between the two curves, the student learns which mouth and tongue movements yield the correct pronunciation.
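The article doesn’t say how the Visual Ear computes its curves, but the feedback loop it describes can be sketched with any spectrum-like summary of a spoken word. In the illustration below, each curve is simply the energy in a handful of frequency bands, and the student’s goal is to shrink the distance between his curve and the reference; all names and parameters here are invented, not taken from Tarassenko and Reynolds’s program.

```python
import numpy as np

def speech_curve(samples, n_bands=16):
    """Reduce a recorded word to a curve: the energy in each frequency band.

    A stand-in for the Visual Ear's screen curve, where each point
    corresponds to a combination of frequencies found in speech.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

def pronunciation_distance(student_samples, reference_samples):
    """The number the student tries to drive toward zero by repeating the word."""
    s = speech_curve(student_samples)
    r = speech_curve(reference_samples)
    # Normalize so overall loudness doesn't dominate the comparison.
    s, r = s / s.sum(), r / r.sum()
    return np.abs(s - r).sum()
```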
The Visual Ear is merely a teaching aid. Many researchers, though, are now working on a type of sense-switching device called a vocoder that would help the deaf in a more general way, by allowing them to feel sounds. There are various versions of the vocoder, but most have the same basic form. Their central element is a line of vibrators strapped to the abdomen, forehead, or arm. Sound entering a microphone gets split into bands of frequencies, and each band is channeled to a particular vibrator; the intensity of a vibrator’s buzz depends on how loud the sound is in its band of frequencies.
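That splitting step is essentially a filter bank. One plausible reading, sketched below in Python, is to cut the incoming audio into short frames, measure each frame’s energy in every frequency band, and let each band’s loudness set the intensity of its vibrator; the frame length, band count, and function name are assumptions, since the article describes only the general form.

```python
import numpy as np

def vocoder_drive(samples, sample_rate=8000, n_vibrators=16, frame_ms=20):
    """Compute vibrator intensities for each short frame of sound.

    Audio is cut into frames; each frame's energy is split into
    n_vibrators frequency bands, and each band drives one vibrator
    in the line strapped to the skin. Returns an array of shape
    (n_frames, n_vibrators): louder band -> stronger buzz.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    drive = np.zeros((n_frames, n_vibrators))
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        for v, band in enumerate(np.array_split(spectrum, n_vibrators)):
            drive[i, v] = band.sum()
    return drive
```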
The goal of vocoder research is not just to let deaf people perceive, say, the sound of an oncoming truck. It is something much more ambitious: to allow deaf people to understand speech. The newest models show a lot of promise; they are so sensitive that their wearers can discriminate between sounds as similar as sh and s. Deaf children can combine lipreading with vocoders to improve their understanding of speech as well as their own pronunciation. With some training they do nearly as well as children with surgically implanted auditory-nerve stimulators, but without surgery and at a fraction of the cost.
Right now children usually get to use their vocoders only a few hours a day, typically in a classroom, because the electronic equipment is so bulky. That may soon change: Özcan Özdamar, a biomedical engineer at the University of Miami, has designed the first fully digital vocoder, in which the sound-translating circuitry is confined to a microchip. The chip can be programmed to filter sounds in such a way as to highlight the ones that are most important for understanding speech. Özdamar hopes soon to build a digital vocoder that children can wear all day.
In Bach-y-Rita’s view, children, and more specifically infants, are the key to realizing the full potential of sense substitution. Their brains have not yet organized themselves for a life without sight or sound, he says, so they may be better able to adapt to a device that requires them to process tactile stimuli, say, as visual or auditory ones. Bach-y-Rita plans to test this idea soon with colleagues in France, who will have blind infants use a modernized version of his original video-camera-and-electrode-vest system for an hour or two each day. “I expect a lot more from the children than from adults,” he says. He hopes, for instance, that the infants will learn to recognize the sight of their mothers leaning over their cribs.
Clearly Bach-y-Rita has not given up on the dreams he had in the 1960s, when he helped launch the field of sense substitution: the dreams of blind people walking down the street without canes or guide dogs, and of deaf people participating fully and without lipreading in spoken conversations. Some experts think those dreams will never be realized. Others share Bach-y-Rita’s belief that sense substitution in the fullest sense will eventually work, even if it takes several more decades of research. “I’m less dramatic now than I was at the beginning,” says Bach-y-Rita. “But basically, I still think the brain can learn to handle it.”
Copyright 1993 Discover Magazine. Reprinted with permission.