
The history of sign language in North America goes back to around 1541, when European explorers reported that the Plains Indians, who inhabited what is now the United States and Canada, used sign language to communicate with other tribes that spoke different languages. In modern times, of course, sign language is most commonly used to communicate with the deaf and hard of hearing, a practice that in North America dates back to the 1817 founding of the American School for the Deaf (ASD) in Hartford, Connecticut, the first school of its kind on the continent.

Most signers in the United States today use American Sign Language (ASL), though many variations exist. The number of sign language users in the country is still unknown, since the U.S. Census has never counted them; estimates range from as low as 100,000 to as high as 15 million. While sign language remains essential for communicating with the deaf, wearable technology now seeks to literally give signers a voice. Researchers at Texas A&M University have been developing such a device, which, according to Reuters, uses a system of sensors that recognize hand gestures and their motion, alongside electromyography (EMG) signals (recordings of the electrical activity produced by skeletal muscles) from the wrist, to translate sign language into words.

“We decode the muscle activities we are capturing from the wrist,” said Roozbeh Jafari, associate professor of biomedical engineering at Texas A&M, who also notes that the placement of the fingers is indirectly analyzed based on the composition of one’s fist. While the device could help break down communication barriers between the deaf and those who do not know sign language, significant technical challenges remain. For instance, processing and translating the EMG signals accurately in real time requires sophisticated algorithms, and since no two people sign alike, the system also has to be designed to learn from its user.
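To make the idea concrete, here is a minimal sketch of how sensor readings might be mapped to sign labels. The feature names (mean EMG amplitude, zero-crossing rate, gyroscope reading), the nearest-centroid approach, and all numbers are illustrative assumptions, not details of the Texas A&M system:

```python
import numpy as np

class NearestCentroidSignClassifier:
    """Assigns a gesture label to a wrist-sensor feature vector
    by finding the closest per-gesture average (centroid)."""

    def __init__(self):
        self.centroids = {}  # label -> mean feature vector

    def fit(self, features, labels):
        # Average the training samples for each gesture label.
        labels = np.array(labels)
        for label in set(labels):
            self.centroids[label] = features[labels == label].mean(axis=0)

    def predict(self, x):
        # Return the label whose centroid is nearest to x.
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(x - self.centroids[lbl]))

# Toy training data: [mean EMG amplitude, zero-crossing rate, gyro reading]
# for two invented gestures.
X = np.array([[0.9, 12.0, 0.1], [1.0, 11.0, 0.2],   # "hello"
              [0.2, 30.0, 1.5], [0.3, 28.0, 1.4]])  # "thanks"
y = ["hello", "hello", "thanks", "thanks"]

clf = NearestCentroidSignClassifier()
clf.fit(X, y)
print(clf.predict(np.array([0.95, 11.5, 0.15])))  # prints "hello"
```

A production system would of course need far richer features and models, but the core task is the same: turn a stream of muscle and motion measurements into discrete words.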

“When you wear the system for the first time, the system operates with some level of accuracy. But as you start using the system more often, the system learns from your behavior and it will adapt its own learning models to fit you,” Jafari said. The current concept for the device involves sending translated sign language to one’s computer or smartphone using Bluetooth, but Jafari’s team hopes to reduce the size of the device so it can be worn on the wrist and decipher complete sentences instead of just words. With an estimated 70 million deaf people in the world who use sign language as a first language, according to the World Federation of the Deaf, this sort of wearable technology could prove groundbreaking in abolishing language barriers and granting the hearing impaired an actual voice.
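Jafari's description of a system that "learns from your behavior" can be sketched as an online update: whenever the wearer confirms or corrects a translation, the stored template for that sign drifts toward that user's samples. The incremental-mean scheme and all names here are illustrative assumptions, not the actual adaptation algorithm:

```python
import numpy as np

class AdaptiveSignModel:
    """Starts from generic gesture templates and personalizes them
    from user-confirmed samples via an incremental mean update."""

    def __init__(self, base_centroids):
        # Copy the generic templates so the originals stay untouched.
        self.centroids = {k: np.array(v, dtype=float)
                          for k, v in base_centroids.items()}
        self.counts = {k: 1 for k in base_centroids}

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(x - self.centroids[lbl]))

    def confirm(self, x, true_label):
        # Fold the confirmed sample into a running per-user average,
        # so the template gradually fits this wearer's signing style.
        n = self.counts[true_label] + 1
        c = self.centroids[true_label]
        self.centroids[true_label] = c + (x - c) / n
        self.counts[true_label] = n

model = AdaptiveSignModel({"hello": [1.0, 10.0], "thanks": [0.0, 30.0]})
sample = np.array([1.4, 12.0])     # this user's "hello" reads slightly high
model.confirm(sample, "hello")     # template shifts toward the user
print(model.predict(np.array([1.4, 12.0])))  # prints "hello"
```

The same pattern explains why accuracy starts at "some level" and improves with use: each confirmed sample nudges the model closer to the individual wearer.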
