Friday, November 16, 2018

Synthesising visual speech using dynamic visemes and deep learning architectures

Publication date: Available online 16 November 2018

Source: Computer Speech & Language

Author(s): Ausdang Thangthai, Ben Milner, Sarah Taylor

Abstract

This paper proposes and compares a range of methods to improve the naturalness of visual speech synthesis. A feedforward deep neural network (DNN) and many-to-one and many-to-many recurrent neural networks (RNNs) using long short-term memory (LSTM) are considered. Rather than using acoustically derived units of speech, such as phonemes, viseme representations are considered, and we propose using dynamic visemes together with a deep learning framework. The input feature representation to the models is also investigated, and we determine that including wide phoneme and viseme contexts is crucial for predicting realistic lip motions that are sufficiently smooth but not under-articulated. A detailed objective evaluation across a range of system configurations shows that a combined dynamic viseme-phoneme speech unit, together with a many-to-many encoder-decoder architecture, models visual co-articulation effectively. Subjective preference tests reveal no significant difference between animations produced using this system and animations using ground-truth facial motion taken from the original video. Furthermore, the dynamic viseme system also significantly outperforms conventional phoneme-driven speech animation systems.
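To make the many-to-many encoder-decoder idea concrete, below is a minimal sketch of how such a model could map per-frame linguistic context features (e.g. stacked phoneme/viseme identities over a wide window) to a sequence of lip-motion parameters. This is not the authors' code; the use of PyTorch, the feature dimensions, and the layer sizes are illustrative assumptions only.

```python
# Minimal sketch (assumed PyTorch, not the paper's implementation): a
# many-to-many LSTM encoder-decoder mapping per-frame phoneme/viseme context
# features to lip-motion parameter trajectories. All dimensions are placeholders.
import torch
import torch.nn as nn

class VisualSpeechEncoderDecoder(nn.Module):
    def __init__(self, in_dim=250, hid_dim=256, out_dim=30):
        super().__init__()
        # Encoder reads the whole sequence of wide-context linguistic features.
        self.encoder = nn.LSTM(in_dim, hid_dim, batch_first=True, bidirectional=True)
        # Decoder emits one visual parameter vector per input frame (many-to-many).
        self.decoder = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True)
        self.proj = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        # x: (batch, frames, in_dim) stacked phoneme/viseme context features
        enc_out, _ = self.encoder(x)
        dec_out, _ = self.decoder(enc_out)
        # (batch, frames, out_dim) predicted lip-motion parameters
        return self.proj(dec_out)

# Example: 100 frames of 250-dim context features -> 30-dim visual parameters.
model = VisualSpeechEncoderDecoder()
dummy = torch.randn(2, 100, 250)
print(model(dummy).shape)  # torch.Size([2, 100, 30])
```

The bidirectional encoder lets each output frame draw on both past and future linguistic context, which is one plausible way to capture the co-articulation effects the abstract describes.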



from Speech via a.sfakia on Inoreader https://ift.tt/2OM73K0
