Results

A basic implementation of a talking face has been realized for the DIVA project. Each requirement has been fulfilled in the following manner:

i) A 3-D face model which can be stretched in real time:
The StartAS class (see Implementation, step 1) starts Artisynth's KuraFace model and waits for a socket connection over which to receive data.

ii) A set of tools enabling the user to create phoneme-to-vizeme mappings:
VizemeBuilder enables the creation and storage of vizemes, and CreateVizemeMap allows the user to select from these to construct the desired mapping.

iii) An object to process phoneme data and drive the face:
The ArtisynthManager launches StartAS, connects to it, loads a mapping from a map file and the specified vizeme files, and uses this information to convert phoneme data to vizeme values, which it sends through the socket connection to drive the face.
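
The interfaces involved are not documented on this page, so the following Java sketch is only an illustration of the data flow just described. The port number, the line-based message format, and the method names (addMapping, sendPhoneme) are assumptions rather than the actual DIVA/Artisynth API; only the idea of converting phonemes to vizeme PC vectors and streaming them over a socket to the face model comes from the description above.

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the ArtisynthManager data flow described above.
// The port, message format, and method names are assumptions, not the
// actual DIVA/Artisynth API.
public class ArtisynthManagerSketch {

    private final Socket socket;      // connection to the launched face model
    private final PrintWriter out;
    private final Map<String, double[]> vizemeMap = new HashMap<>();

    public ArtisynthManagerSketch(String host, int port) throws IOException {
        // StartAS is assumed to be listening on this port once Artisynth
        // has launched the KuraFace model.
        socket = new Socket(host, port);
        out = new PrintWriter(socket.getOutputStream(), true);
    }

    // Each vizeme is stored as the PC vector saved by VizemeBuilder
    // (its length is assumed here, for illustration, to be eight).
    public void addMapping(String phoneme, double[] pcVector) {
        vizemeMap.put(phoneme, pcVector);
    }

    // Convert an incoming phoneme to its vizeme values and send them
    // to the face over the socket.
    public void sendPhoneme(String phoneme) {
        double[] pc = vizemeMap.get(phoneme);
        if (pc == null) {
            return; // no vizeme defined for this phoneme
        }
        StringBuilder msg = new StringBuilder(phoneme);
        for (double v : pc) {
            msg.append(' ').append(v);
        }
        out.println(msg.toString()); // hypothetical line-based protocol
    }

    public void close() throws IOException {
        out.close();
        socket.close();
    }
}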

How to Create a Mapping and Use It in Performance Mode

To use the new features, the user proceeds as follows:

i) Open the DIVA main window

ii) Load a profile, or create a new profile

iii) Select VizemeBuilder from the Vizeme Tools menu in the top right-hand corner

iv) Create and store a facial expression for each phoneme:
.....a) select a phoneme from the dictionary on the left-hand side
.....b) adjust the eight knobs to stretch the face to the desired shape
.....c) click SaveAs, and enter a filename under which to save the vizeme file

v) Close VizemeBuilder, and select CreateVizemeMap from the Vizeme Tools menu

vi) Create and store a phoneme -> vizeme mapping:
.....a) select dictionary.dict from the top menu on the left-hand side and click "Choose Dictionary"
.....b) click on "new map" to load entries to the table on the right-hand side
.....c) for each phoneme:
..........1) select the phoneme from the dictionary on the left-hand side. This loads the relevant vizeme files into the table on the left-hand side
..........2) select the desired vizeme file, then select a destination row in the right-hand table, making sure that the current phoneme matches that of the selected row
..........3) click on the "add/replace vizeme" button to place the vizeme file in the destination row
.....this action associates the row's phoneme and expression with the corresponding vizeme file's PC vector in the phoneme -> vizeme mapping (illustrated in the sketch at the end of this page)
.....d) click "SaveAs" and enter a filename under which to save the map file

vii) Close CreateVizemeMap, and open the perform window by clicking on the "Perform" icon in the main window

viii) Load an accent, and a preset if desired, by selecting from the menus and clicking Load.

ix) Select dictionary.dict from the Vizeme Dictionary menu

x) Select the desired map file from the Vizeme Map menu

xi) Click on Launch Artisynth to launch the KuraFace in the Artisynth window, load the selected map file, and connect to the face model.

At this point, the system is ready for performance, and a talking face will accompany the audio output.
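
For a concrete picture of what the loaded mapping contains, the short Java sketch below builds an in-memory phoneme -> vizeme map. The phoneme labels, the numeric values, and the assumption that each PC vector holds the eight knob settings from VizemeBuilder are invented for illustration; the actual file formats of the map and vizeme files are not described on this page.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: each phoneme from dictionary.dict is paired
// with the PC vector stored in the vizeme file chosen for it. All labels
// and numbers below are made up.
public class VizemeMapSketch {
    public static void main(String[] args) {
        Map<String, double[]> phonemeToVizeme = new LinkedHashMap<>();
        phonemeToVizeme.put("AA", new double[] {0.8, 0.1, 0.0, 0.3, 0.0, 0.2, 0.0, 0.1});
        phonemeToVizeme.put("M",  new double[] {0.0, 0.9, 0.4, 0.0, 0.1, 0.0, 0.0, 0.0});

        // In performance mode (steps ix-xi above), each phoneme produced by
        // the audio analysis is looked up here and its PC vector is sent over
        // the socket connection to drive the KuraFace model.
        double[] pc = phonemeToVizeme.get("AA");
        System.out.println("Vizeme PC vector for AA has " + pc.length + " values");
    }
}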