London, Oct 15 (IANS): A team of researchers has developed a system that uses ultrasound to display tongue movements in real time, a development that may help patients recovering from tongue surgery improve their speech.
In the system, these movements, captured by an ultrasound probe placed under the jaw, are processed by a machine learning algorithm that controls an "articulatory talking head".
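To make the pipeline concrete, here is a minimal sketch, in Python, of the kind of real-time chain the article describes: an ultrasound frame is reduced to features, which a trained model maps to articulatory parameters that drive the talking head. The function names, the feature extraction and the linear mapping are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained mapping from ultrasound image features to
# articulatory parameters (e.g. tongue shape, lip opening).
N_FEATURES, N_ARTIC_PARAMS = 64, 12
W = rng.normal(size=(N_ARTIC_PARAMS, N_FEATURES))  # hypothetical learned weights
b = np.zeros(N_ARTIC_PARAMS)

def extract_features(ultrasound_frame: np.ndarray) -> np.ndarray:
    """Reduce a raw ultrasound frame to a compact feature vector
    (simple column means here, as a placeholder for real image features)."""
    cols = np.array_split(ultrasound_frame.mean(axis=0), N_FEATURES)
    return np.array([c.mean() for c in cols])

def predict_articulation(features: np.ndarray) -> np.ndarray:
    """Map features to articulatory parameters with the placeholder model."""
    return W @ features + b

# Simulated real-time loop: one 128x128 ultrasound frame per time step,
# each producing the parameters that would animate the talking head.
for t in range(3):
    frame = rng.random((128, 128))
    params = predict_articulation(extract_features(frame))
    print(f"frame {t}: first articulatory params {np.round(params[:3], 2)}")
```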
For a person with an articulation disorder, speech therapy relies partly on repetition exercises: the practitioner qualitatively analyses the patient's pronunciation and explains orally, or with drawings, how to place the articulators, particularly the tongue, something patients are generally unaware of.
In the new system, however, the algorithm lets users see the tongue, palate and teeth, which are usually hidden inside the vocal tract, as well as the face and lips.
This "visual biofeedback" system, would produce better correction of pronunciation, and could be used for speech therapy and for learning foreign languages, said Thomas Hueber from the University of Grenoble in France.
The work, published in the journal Speech Communication, describes a system that lets patients see their articulatory movements in real time, in particular how their tongues move, so that they become aware of these movements and can correct pronunciation problems faster.
Further, the machine learning algorithm relies on a probabilistic model built from a large articulatory database acquired from an "expert" speaker capable of pronouncing all of the sounds in one or more languages.
This model is automatically adapted to the morphology of each new user during a short system calibration phase, in which the patient pronounces a few phrases.
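The following is a minimal sketch of the calibration idea described above, under the assumption that adaptation amounts to re-mapping the new user's feature statistics onto those of the expert speaker; the actual system uses a probabilistic model, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature statistics of the expert speaker, assumed to have been estimated
# from the large articulatory database (placeholder values).
expert_mean = np.zeros(64)
expert_std = np.ones(64)

def calibrate(user_calibration_features: np.ndarray):
    """Estimate the new user's feature statistics from a few calibration phrases."""
    return (user_calibration_features.mean(axis=0),
            user_calibration_features.std(axis=0) + 1e-8)

def adapt(features: np.ndarray, user_mean, user_std) -> np.ndarray:
    """Map the user's features into the expert speaker's feature space,
    so the pretrained model can be applied to the new user."""
    return (features - user_mean) / user_std * expert_std + expert_mean

# A few calibration phrases, each frame summarised as a 64-dim feature vector.
calibration_frames = rng.normal(loc=0.5, scale=2.0, size=(200, 64))
user_mean, user_std = calibrate(calibration_frames)

# A new incoming frame from the same user, normalised before prediction.
frame_features = rng.normal(loc=0.5, scale=2.0, size=64)
print(np.round(adapt(frame_features, user_mean, user_std)[:5], 2))
```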