Talking Gloves for speech-impaired patients work on AI and ML algorithms
By using the principles of Artificial Intelligence (AI) and Machine Learning (ML), doctors and engineers from the All India Institute of Medical Sciences (AIIMS) Jodhpur and the Indian Institute of Technology (IIT) Jodhpur have jointly designed low-cost ‘Talking Gloves’ for persons with speech disability. The device costs less than Rs 5,000.
This patented device, called ‘Talking Gloves’, is language-independent: it uses AI and ML algorithms to automatically generate speech, enabling easy communication between mute people and others.
With the help of this device, a speech-impaired person can convert their hand gestures into text or pre-recorded voices, enabling them to communicate their messages independently.
Additionally, the device can be customized to produce a voice similar to the patient's original voice, which makes it sound more natural in use.
"The
language-independent speech generation device will bring people back to the
mainstream in today's global era without any language barrier. Users of the
device only need to learn it once, and they will be able to communicate verbally in
any language with their knowledge," said Sumit Kalra, Assistant Professor,
Department of Computer Science and Engineering, IIT Jodhpur.
Sumit Kalra led the design and development of the Talking Gloves, assisted by fellow innovators Dr Arpit Khandelwal from IIT Jodhpur, and Dr Nithin Prakash Nair (Senior Resident, Otorhinolaryngology or ENT), Dr Amit Goyal (Professor and Head, Department of Otorhinolaryngology) and Dr Abhinav Dixit (Professor, Department of Physiology) from AIIMS Jodhpur.
How do the Talking Gloves work?
In the device, electrical signals are generated by a first set of sensors worn on a combination of the thumb, fingers and wrist of one of the user's hands. These electrical signals are produced by the combined movements of the fingers, thumb, hand and wrist. Similarly, electrical signals are generated by a second set of sensors on the other hand.
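The two-glove signal capture described above can be sketched as follows. This is a minimal illustration only: the press release does not specify the sensor hardware or channel count, so the six-channel layout, the ADC range and the `read_glove` helper are all assumptions.

```python
# Illustrative sketch of per-glove signal capture. The sensor layout
# (thumb + four fingers + wrist = 6 channels per hand) and the 10-bit
# ADC range are hypothetical, not taken from the actual device.
from dataclasses import dataclass
from typing import List

@dataclass
class GloveReading:
    """Normalized signal magnitudes from one glove's sensors."""
    magnitudes: List[float]

def read_glove(raw_adc_counts: List[int], adc_max: int = 1023) -> GloveReading:
    """Normalize raw ADC counts from the glove's sensors to [0, 1]."""
    return GloveReading([c / adc_max for c in raw_adc_counts])

# First set of sensors (one hand) and second set (other hand)
left = read_glove([512, 900, 100, 0, 1023, 256])
right = read_glove([300, 300, 300, 300, 300, 300])

# Both hands together form the feature vector the signal processing
# unit would compare against stored magnitude combinations.
combined = left.magnitudes + right.magnitudes  # 12-channel vector
```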
"These
electrical signals are received at a signal processing unit. The magnitude of
the received electrical signals is compared with a plurality of pre-defined
combinations of magnitudes stored in a memory by using the signal processing
unit. By using AI and ML algorithms, these combinations of signals are
translated into phonetics corresponding to at least one consonant and a vowel.
In an example
implementation, the consonant and the vowel can be from Hindi language
phonetics. A phonetic is assigned to the received electrical signals based on the comparison, and an audio signal is generated by an audio transmitter corresponding to the assigned phonetic, based on trained data associated with vocal characteristics stored in a machine learning unit. The generation of
audio signals according to the phonetics having combination of vowels and
consonants leads to the generation of speech and enables the mute people to
audibly communicate with others. The speech synthesis technique of the present
subject matter uses phonetics, and therefore the speech generation is
independent of any language," said Prof Sumit Kalra in a press release issued by IIT Jodhpur earlier this week.
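The pipeline Prof Kalra describes — compare received magnitudes against pre-defined combinations stored in memory, assign a phonetic, then string phonetics into speech — can be sketched roughly as below. The gesture table, the phonetic labels and the nearest-match rule are invented for illustration; the actual device learns this mapping with AI/ML algorithms rather than a fixed lookup table.

```python
# Hedged sketch of the described magnitude-comparison step. The stored
# combinations and Hindi-like consonant+vowel labels are hypothetical.
import math

# Pre-defined magnitude combinations -> phonetics (consonant + vowel)
GESTURE_TABLE = {
    "ka": [1.0, 0.0, 0.0, 0.0],
    "ma": [0.0, 1.0, 0.0, 0.0],
    "pa": [0.0, 0.0, 1.0, 1.0],
}

def assign_phonetic(signal: list) -> str:
    """Assign the phonetic whose stored magnitude combination is
    closest (Euclidean distance) to the received signal."""
    return min(GESTURE_TABLE, key=lambda p: math.dist(signal, GESTURE_TABLE[p]))

def synthesize(signals: list) -> str:
    """Concatenate phonetics for a sequence of gestures; the real device
    would feed these to an audio transmitter trained on the user's own
    vocal characteristics."""
    return "-".join(assign_phonetic(s) for s in signals)

print(synthesize([[0.9, 0.1, 0.0, 0.0], [0.1, 0.1, 0.9, 0.8]]))  # prints "ka-pa"
```

Because the output is a stream of phonetics rather than words in a particular lexicon, the same mapping works regardless of the listener's language, which is the language-independence the quote refers to.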
Representational Image: “Existing methods to generate speech are either symbolic or language-dependent.” (Source: def.org.in)
"The team is
further working to enhance features such as durability, weight, responsiveness,
and ease-of-use of the device. The developed product will be commercialized
through a startup incubated by IIT Jodhpur. Potential users and customers can expect the first version of these Talking Gloves to launch in the market by the end of 2022," he said.