Computer Science ›› 2014, Vol. 41 ›› Issue (1): 100-104.


Bi-level Codebook Based Speech-driven Visual-speech Synthesis System

JIA Xi-bin,YIN Bao-cai and SUN Yan-fen   

• Online: 2018-11-14  Published: 2018-11-14

Abstract: This paper proposes a bi-level codebook based speech-driven visual-speech synthesis system. The system applies the vector-quantization principle to establish a coarse-coupling mapping from the speech feature space to the visual-speech feature space. To strengthen the correspondence between speech and visual speech, the system performs unsupervised clustering on the sample data according to the similarity of both the acoustic speech and the visual speech, and constructs a bi-level mapping codebook that reflects similarity in both domains. At the preprocessing stage, the paper proposes a joint feature model that captures the geometric character of the mouth and the visibility of the teeth. It also proposes a genetic-algorithm-based approach for extracting the visual-speech-correlated components from the LPCC and MFCC speech features. Comparison of the synthesized image sequences with the originals shows that the synthesis closely approximates the original sequences. In future work, constraints between visual-speech contexts should be incorporated to improve the smoothness of the synthesized results.
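The mapping described above can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: level 1 partitions the acoustic space coarsely, and level 2 re-clusters each coarse cell on joint audio-visual vectors so that codewords reflect similarity in both spaces. All function names, the plain k-means routine, and the cluster counts are illustrative assumptions.

```python
import numpy as np

def build_bilevel_codebook(audio_feats, visual_feats, n_coarse=4, n_fine=2,
                           n_iter=20, seed=0):
    """Build a two-level codebook from paired audio/visual feature samples.

    Level 1 clusters on audio features alone (coarse coupling); level 2
    re-clusters each coarse cell on the joint audio-visual vectors.
    """
    rng = np.random.default_rng(seed)

    def kmeans(X, k):
        # Plain k-means: random initial centers, fixed number of iterations.
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels

    # Level 1: coarse partition of the acoustic feature space.
    coarse_centers, coarse_labels = kmeans(audio_feats, n_coarse)

    # Level 2: within each coarse cell, cluster joint audio-visual vectors.
    d_a = audio_feats.shape[1]
    joint = np.hstack([audio_feats, visual_feats])
    codebook = []
    for j in range(n_coarse):
        cell = joint[coarse_labels == j]
        centers, _ = kmeans(cell, min(n_fine, len(cell)))
        # Store each fine codeword as an (audio part, visual part) pair.
        codebook.append((coarse_centers[j],
                         [(c[:d_a], c[d_a:]) for c in centers]))
    return codebook

def synthesize_visual(codebook, audio_vec):
    """Map one speech feature vector to a visual-speech vector via the codebook."""
    # Coarse lookup: nearest level-1 acoustic codeword.
    coarse = min(codebook, key=lambda e: np.sum((e[0] - audio_vec) ** 2))
    # Fine lookup: nearest level-2 entry by its audio part; emit its visual part.
    _, visual = min(coarse[1], key=lambda e: np.sum((e[0] - audio_vec) ** 2))
    return visual
```

At synthesis time, each incoming speech feature vector is quantized through both levels and the visual part of the winning codeword drives the face; smoothing across frames (the paper's noted future work) is not modeled here.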

Key words: Bi-level codebook, Visual speech synthesis, Visual speech feature, Speech feature

