VR has recently made remarkable headway in medical applications like cardiac surgery http://wapo.st/2uNioCJ and psychological therapy http://nyti.ms/2uNHukW, and audiology isn't far behind. Here are a few examples of how audiologists and other investigators are advancing VR in hearing health care.
VR ASL TRANSLATOR
Can VR replace a human American Sign Language (ASL) interpreter? Meet Paula http://asl.cs.depaul.edu/index.html, the VR avatar developed at DePaul University that can translate English to ASL.
“There are many places and situations where an ASL avatar would be useful, such as interactions involving hotel clerks, bank tellers, airport security personnel, and receptionists at medical facilities—in other words, any time or place where the conversation is so short that a certified ASL/English interpreter will not be available,” said Rosalee Wolfe, PhD, a professor of computer science and one of Paula’s principal creators.
But why use a virtual ASL interpreter when a smartphone could just as easily provide speech-to-text translation for people with hearing loss? “For people who use ASL as their first or preferred language, the usefulness of speech-to-text technology is limited because English is their second language,” Wolfe explained. “A good analogy would be offering speech-to-Spanish captioning to people whose first language is English but took Spanish in high school. Much of the information in the spoken message would remain inaccessible. Paula can make information more accessible to people whose first language is ASL.”
Paula is an ongoing project. She’s now learning French Sign Language, Swiss-German Sign Language, and even Libras—the sign language of Brazil—through collaborations with other research groups. Her creators anticipate that she will eventually be able to “read” ASL and translate it into English.
“Paula has also been lending a hand to help people who want to learn ASL. She is now part of learning tools that teach ASL vocabulary and fingerspelling,” Wolfe said. “Ultimately, our goal for Paula is for her to sign as naturally, as gracefully, and as eloquently as a human signer.”
Paula resides on a desktop computer for now, but she will be on smartphones in the not-too-distant future, Wolfe predicted. The creators have a prototype of a fully functional fingerspelling smartphone app and are looking to expand Paula's smartphone capabilities further.
“We are always open to collaborating with like-minded people whose goal is to bridge the gap between the deaf and hearing communities,” Wolfe added.
HIGHLY ANIMATED ASL AVATARS
ASL doesn't translate word-for-word into English. A signer's facial expressions and body movements also convey inflections and shades of meaning that the hand signs alone do not. So in a VR setting, the challenge is to create visualizations of sign language that flow naturally and are easily understood by people who sign.
To that end, researchers at the Rochester Institute of Technology (RIT) are using motion capture of actual signers, among other techniques, to create better animations of virtual ASL avatars http://latlab.ist.rit.edu/projects.html. “Our lab is focused on creating tools that would allow someone to write a script for an ASL sentence, press a button, then our software would automatically produce an animation of a virtual human character performing the sentence,” explained Matt Huenerfauth, PhD, MSE, a professor in RIT’s Golisano College of Computer & Information Sciences and the director of the Linguistic and Assistive Technologies Laboratory.
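The workflow Huenerfauth describes, writing a script for an ASL sentence and having software expand it into an animation, can be pictured with a minimal sketch. This is not RIT's actual software; the gloss names, pose identifiers, and durations below are invented for illustration, and a real system would also handle facial expressions and transitions between poses.

```python
# Toy sketch of a script-to-animation pipeline: a gloss script is
# expanded into a timed sequence of keyframes for an avatar to perform.
# The pose library here is entirely hypothetical.

# Hypothetical pose dictionary: gloss -> (pose id, duration in seconds)
POSE_LIBRARY = {
    "STORE": ("pose_store", 0.6),
    "I": ("pose_point_self", 0.4),
    "GO": ("pose_go", 0.5),
}

def script_to_keyframes(script):
    """Expand a gloss script into (start_time, pose_id) keyframes."""
    keyframes, t = [], 0.0
    for gloss in script.split():
        pose_id, duration = POSE_LIBRARY[gloss]
        keyframes.append((round(t, 2), pose_id))
        t += duration
    return keyframes

# ASL often uses topic-comment order, e.g. "STORE I GO"
print(script_to_keyframes("STORE I GO"))
# -> [(0.0, 'pose_store'), (0.6, 'pose_point_self'), (1.0, 'pose_go')]
```

The point of the sketch is the separation of concerns: the author works only with the gloss script, while the software owns the timing and pose data, which is what lets a single button press produce the animation.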
Among their goals, Huenerfauth’s team is creating animations of ASL avatars with linguistically accurate facial expressions. For people who sign, these animations will be easier to understand and more effective at conveying information. One potential application would allow people whose primary language is ASL to easily access text-heavy websites and online information via these animations.
As a byproduct of their research, the RIT team has assembled videos and 3D motion data of thousands of ASL signs and sentences that are freely available to other researchers for non-commercial use.
VR IN THE HEARING LAB
A hearing research lab can be an austere and bewildering place, especially for a child. But what if you created a VR environment to make your lab more familiar and user-friendly? That's what researchers at Boys Town National Research Hospital sought to find out.
Lewis, the director of the hospital's Listening and Learning Laboratory http://bit.ly/2w0E2CO, has focused her work on the impact of hearing loss on children and their ability to understand speech in complex environments, such as a classroom. "But if we tried to go into real classrooms, we wouldn't be able to have the kind of experimental control we need," Lewis explained. So for many years, her lab has approximated a noisy classroom environment by surrounding the listener with localized acoustics and video monitors. But the place still looked like a hearing research lab, not a classroom.
When the Oculus Rift VR headset came out, Lewis jumped on board. The team soon created a virtual reality classroom with the aid of Boys Town’s systems analyst, Tim Vallier. “With the advent of these head-mounted displays for virtual reality, you can really get a high-quality immersive environment,” she said. The hearing lab is still the same. The difference is that the listener now wears the headset and sees only a classroom, a gym, or some other noisy and complex environment.
But Lewis was faced with a new question: Was the VR environment too immersive? That is, were test subjects more distracted by the eye-popping VR classroom—and thus more likely to test poorly—than they were with the boring, old hearing lab? Like any good scientist, Lewis performed a study. The outcome: Listeners were no more distracted by the VR environment than they were by the lab.
“In fact, we had people tell us that it was less distracting because, in the test room, you can see that there’s a tester who’s recording your responses,” she explained. “But when you get into this virtual space, those distractions are gone and you forget that someone is sitting there recording whatever you’re saying.”
VR FOR VESTIBULAR RETRAINING
Can VR be used to retrain the vestibular system of a patient with chronic dizziness? As the director of the Virtual Environment and Postural Orientation Laboratory https://sites.google.com/a/temple.edu/vepo/home at Temple University, Emily Keshner, PT, EdD, uses virtual reality to take her lab’s environmental control to a whole new level—to diagnose and retrain a dizzy patient’s vestibular system.
Her lab has a Cave Automatic Virtual Environment (CAVE), a physically immersive VR setting where the virtual environment is projected onto the walls of a small, room-sized cube. A dynamic platform is built into the floor and used for both testing and retraining for people with balance and vestibular disorders, particularly those of the inner ear. “What we found is that you can actually influence the movement of people in these environments,” Keshner said.
The visual and vestibular systems are tightly intertwined, she explained. But by manipulating the visual and physical environments inside the CAVE—either visually with the VR projection system or mechanically with the platform—Keshner can begin to distinguish between, and therefore diagnose, problems with physical reactions and problems with visual-vestibular reactions.
Once a patient is diagnosed, treatment can be done in the CAVE at the lab or offsite using a VR headset. Treatment largely involves retraining the vestibular system through repeated motions in diverse environments. “We put them in our environments, present them with complex choices they have to make about sensory information, and force their body to respond to it through repetition,” Keshner explained.
So far, this immersive rehabilitation has been successful—as well as exciting—for patients and clinicians alike. “We found that there are various ways that we can stimulate their system to help them adapt and reduce their dizziness,” Keshner said.
VR WITH WEARABLE ELECTRONICS
Give these guys a hand: A pair of college students at the University of Washington (UW) won the 2016 Lemelson-MIT Student Prize http://bit.ly/2uNn2Ri for their invention, SignAloud, a pair of sensor-equipped gloves that can translate ASL into speech or text.
How SignAloud works: Sensors embedded in the gloves identify the user’s hand movements and positions. The gloves send this data wirelessly to a computer, which analyzes the data through sequential statistical regressions, similar to a neural network. When the computer matches the data to a stored library of ASL signs, a voice processor says the associated word or phrase through a speaker.
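The matching step above can be illustrated with a toy classifier. SignAloud's actual model is not public, and the article describes it only as "sequential statistical regressions," so the sketch below substitutes the simplest possible stand-in: a nearest-neighbor match of a glove reading against a stored library of sign templates. The sign names and sensor values are invented for illustration.

```python
# Toy stand-in for SignAloud's matching step: classify one glove
# "frame" (five flex-sensor readings, one per finger) against a
# library of sign templates by nearest Euclidean distance.
import math

# Hypothetical template library: sign name -> five flex-sensor values
SIGN_LIBRARY = {
    "hello": [0.1, 0.1, 0.1, 0.1, 0.1],    # open hand
    "yes": [0.9, 0.9, 0.9, 0.9, 0.9],      # closed fist
    "thanks": [0.1, 0.2, 0.2, 0.2, 0.9],
}

def classify_frame(frame):
    """Return the library sign whose template is closest to this frame."""
    return min(SIGN_LIBRARY, key=lambda name: math.dist(frame, SIGN_LIBRARY[name]))

# A noisy glove reading close to the "yes" template
print(classify_frame([0.85, 0.92, 0.88, 0.95, 0.9]))  # -> yes
```

In the real gloves this classification would run continuously on streaming sensor data, and the matched word or phrase would then be handed to the voice processor for playback.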
“Our gloves are lightweight and compact but ergonomic enough to use as an everyday accessory, similar to hearing aids or contact lenses,” said co-inventor Thomas Pryor, an undergraduate researcher in UW’s Composite Structures Laboratory.
Other researchers, including engineers at the University of California, San Diego http://bit.ly/2uNtCaE, are also developing their own wireless gloves capable of translating ASL.
Many of the technologies described above are still prototypes or confined to research labs, but their early success demonstrates the potential of VR to improve, or even reinvent, audiology.
“The time is now for using VR in hearing research and audiology,” stressed Lewis. “The possibilities are just endless.”