Computer system that reads mouth movements developed as a communication aid

Professor Yamamoto (left) checks the system (right)
Source: http://www.asahi.com/

April 27, 2012

Professor Yamamoto Fujio of Kanagawa Institute of Technology in Atsugi-shi, Kanagawa Prefecture, and his collaborators are jointly developing a system in which a computer reads mouth movements in video, converts them into written Japanese, and posts the result as a tweet.

They aim to put the system to practical use as IT-based support for deaf and hard-of-hearing people.

When Japanese is spoken, the mouth takes a limited set of shapes: one for each of the five vowels "a, i, u, e, o" and one closed-mouth shape. The system classifies each shape and assigns it a code, as sketched below.
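As a rough illustration of this coding idea, the sketch below maps a romaji reading onto a six-symbol code (A, I, U, E, O for the vowels, X for the closed mouth). The article does not publish the group's actual codes or classification rules, so the code letters, the function name encode_romaji, and the simplification that m/b/p close the lips while other consonants are skipped are all assumptions.

```python
# Illustrative only: the six mouth-shape classes described in the
# article, mapped to single-letter codes (assumed, not the real codes).
VOWEL_CODES = {"a": "A", "i": "I", "u": "U", "e": "E", "o": "O"}
CLOSED_CODE = "X"  # closed-mouth shape

def encode_romaji(word: str) -> str:
    """Turn a romaji reading into a mouth-shape code string.

    Assumption: vowels map to their own shape; m/b/p close the lips
    and map to the closed-mouth code; other consonants only shape the
    following vowel, so they produce no code of their own.
    """
    codes = []
    for ch in word.lower():
        if ch in VOWEL_CODES:
            codes.append(VOWEL_CODES[ch])
        elif ch in "mbp":
            codes.append(CLOSED_CODE)
    return "".join(codes)

print(encode_romaji("konnichiwa"))  # OIIA
print(encode_romaji("tabemono"))    # AXEXOO
```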

The computer reads each mouth movement in the video, estimates what is being said, and translates it into a code sequence. It then searches a dictionary database for that code and converts it into the corresponding word. Sent out as a tweet, the resulting text can help communication between deaf or hard-of-hearing people and their families.
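The lookup step might work along the lines of the sketch below, which reuses the illustrative encoder from above. The article does not describe the actual database format, so the sample dictionary entries and the choice to return a candidate list (several words can share one mouth-shape code, since the code discards consonant information) are assumptions.

```python
from collections import defaultdict

def encode_romaji(word: str) -> str:
    # Same illustrative encoder as in the sketch above.
    return "".join("X" if c in "mbp" else c.upper()
                   for c in word.lower() if c in "aiueombp")

# Hypothetical dictionary entries: romaji readings paired with words.
WORDS = {"konnichiwa": "こんにちは", "arigatou": "ありがとう",
         "ohayou": "おはよう", "tabemono": "たべもの"}

# Index every word by its mouth-shape code so a recognized code
# sequence can be resolved back to words.
code_index = defaultdict(list)
for reading, word in WORDS.items():
    code_index[encode_romaji(reading)].append(word)

def lookup(code: str) -> list[str]:
    """Return all dictionary words whose code matches. Many words can
    share one code, so the result is a candidate list, not one word."""
    return code_index.get(code, [])

print(lookup("OIIA"))  # ['こんにちは']
```

Because the code keeps only vowel and closed-mouth information, a real dictionary of this kind would need a way to rank or disambiguate the candidates that share a code.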

Professor Yamamoto says, "If intentions can be conveyed by mouth movements alone, the system could also be used for conversations in noisy places, or for conversations you do not want other people to overhear."

Remaining challenges include reading mouth movements correctly despite individual differences, and developing the dictionary function.


Japanese original article:
http://www.asahi.com/national/update/0427/TKY201204270120.html
