Deaf programmer Alexei Prikhodko has developed a system that automatically translates sign language into Russian. The technology also makes it possible to control a computer without using a mouse. He finished the prototype at the beginning of August.
The program's camera captures the image and detects the gestures; the image is then converted into a model and processed by the system. The system compares the image with data in a neural network, after which the computer displays the translation corresponding to the gesture on the monitor, according to the website of Novosibirsk State Technical University.
According to Alexei Prikhodko, many companies currently announce their own sign language translators, and some even sell their solutions. But because of the specifics of sign language, none of them is of sufficient quality for practical use, says the programmer, who has been deaf since childhood.
"Savvy Motion, the Kinect Sign Language Translator at Microsoft Research, and other large companies have not yet been able to fully solve the task of translating from sign language to spoken language, so the quality of such applications leaves much to be desired," he says.
The problem lies in the grammar of sign language. No program can replace a live interpreter, because the translation depends not only on the configuration and orientation of the hands, but also on their movement, their location, and what linguists call the non-manual component of gestures (facial expression, lip movement and other articulation cues), according to the website of the university.
"It is not difficult to translate from written language to sign language. What is technically difficult is recognizing and translating gestures. It also depends on which camera and which sensors are used. There are two ways to recognize gestures: marker-based and markerless. A marker system is when a person wears special gloves, wrist devices, bracelets and other devices that track the movement of muscles and points on the body.
I have taken a difficult path: creating a program that, unlike a marker system, does not require special equipment. My program uses a markerless method to recognize a person and their gestures with the help of cameras," said Alexei Prikhodko.
The markerless system created by the programmer uses special cameras to superimpose a virtual "grid" on the image. On it, software algorithms find reference points by which gestures are identified. The system then processes the data and performs the corresponding action: translation or a control command.
"If the model determines, for example, that the fingers are spread open, that is the letter B; if the fingers are gathered together, it is O. Whether the elbow is bent also matters. Based on this, a mathematical model is built from the skeleton model. Each point in that model is assigned coordinates, and on the screen we see which gesture it is," said the programmer.
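The classification step described above can be sketched in code. This is a minimal illustration only, not Prikhodko's actual model: it assumes hand landmarks in normalized (x, y) coordinates, such as those produced by common pose-estimation libraries, and all landmark names and thresholds are hypothetical.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify_letter(landmarks):
    """Toy classifier over a dict of landmark name -> (x, y).

    If the fingertips are bunched around the thumb tip, the hand is
    "gathered" -> O; if they are far apart, "open" -> B.
    The 0.08 threshold is an illustrative assumption.
    """
    thumb = landmarks["thumb_tip"]
    tips = [landmarks[k] for k in
            ("index_tip", "middle_tip", "ring_tip", "pinky_tip")]
    avg_to_thumb = sum(dist(thumb, t) for t in tips) / len(tips)
    return "O" if avg_to_thumb < 0.08 else "B"

# Example: fingertips clustered around the thumb are read as "O".
gathered = {
    "thumb_tip":  (0.50, 0.50),
    "index_tip":  (0.52, 0.51),
    "middle_tip": (0.51, 0.53),
    "ring_tip":   (0.49, 0.52),
    "pinky_tip":  (0.48, 0.51),
}
print(classify_letter(gathered))  # -> O
```

A real system, as the article notes, would also have to account for hand orientation, movement over time and non-manual components, which is why the actual model operates on a full skeleton rather than fingertip distances alone.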
At the moment, the prototype translates at the level of the fingerspelling alphabet. Alexei Prikhodko intends to train the system on the other components of sign language grammar to create a finished product for mass adoption by people with hearing loss. The programmer is looking for investors to refine the prototype.
Alexei Prikhodko is the world's only programmer and expert developing a sign language translation system, according to the website of Novosibirsk State Technical University. He has previously created technical solutions for people with hearing loss: at the beginning of July 2019, he and his team launched "Gesture Interface", a project for teaching the deaf, and in 2017, the project "Mathematics in Silence".
P.S. Previously, scientists at the University of California, San Diego, developed a smart glove that automatically translates American Sign Language (ASL) into text on a digital device screen. The cost of the gadget did not exceed $100.