Running the Talking Face in DIVA Performance Mode

The final step of implementation is largely a linking process, and consists of making the following adjustments:

i) Modify the sendToFace object from Step 3 as follows:
.....a) add functionality to start a process which runs StartAS, so that the KuraFace can be launched from MAX code
.....b) modify the MXJ code to parse map files in the format created by the vizeme tools from Step 5
.....c) place the object in DIVA performance mode, and route phoneme data to its inlets (a minimal sketch of this parsing and routing is given after the list)
ii) Add to the Perform window:
.....a) menus that let the user select a vizeme dictionary and its corresponding vizeme map file
.....b) a button to launch the KuraFace and load the selected vizeme mapping, so that the system is ready to convert performance data to face parameters
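
As an illustration of points i) b) and c), the sketch below shows how an MXJ object can read a vizeme map into a lookup table and route incoming phoneme symbols to an outlet as lists of face parameters. It is not the actual ArtisynthManager source: the map-file format produced by the Step 5 vizeme tools is not reproduced on this page, so a simple "phoneme: value value ..." text format is assumed here, and the class name and the loadmap message are likewise only illustrative.

    import com.cycling74.max.Atom;
    import com.cycling74.max.DataTypes;
    import com.cycling74.max.MaxObject;

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: not the project's actual ArtisynthManager code.
    public class ArtisynthManagerSketch extends MaxObject {

        // phoneme symbol -> face parameter values (map-file format assumed, see above)
        private final Map<String, float[]> vizemeMap = new HashMap<String, float[]>();

        public ArtisynthManagerSketch() {
            declareInlets(new int[]{ DataTypes.ALL });   // phoneme messages in
            declareOutlets(new int[]{ DataTypes.LIST }); // face parameters out
        }

        // Assumed message: "loadmap <path>", sent when the user picks a map file.
        public void loadmap(String path) {
            vizemeMap.clear();
            try {
                BufferedReader in = new BufferedReader(new FileReader(path));
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.length() == 0 || line.startsWith("#")) continue;
                    String[] halves = line.split(":");          // "phoneme: v1 v2 ..."
                    if (halves.length < 2) continue;
                    String[] vals = halves[1].trim().split("\\s+");
                    float[] params = new float[vals.length];
                    for (int i = 0; i < vals.length; i++) {
                        params[i] = Float.parseFloat(vals[i]);
                    }
                    vizemeMap.put(halves[0].trim(), params);
                }
                in.close();
                post("loaded " + vizemeMap.size() + " vizeme entries from " + path);
            } catch (IOException e) {
                error("could not read map file: " + path);
            }
        }

        // Phoneme data arriving from DIVA performance mode as a symbol.
        public void anything(String phoneme, Atom[] args) {
            float[] params = vizemeMap.get(phoneme);
            if (params == null) return;                 // phoneme not in the map
            Atom[] out = new Atom[params.length];
            for (int i = 0; i < params.length; i++) {
                out[i] = Atom.newAtom(params[i]);
            }
            outlet(0, out);                             // face parameters toward the KuraFace
        }
    }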

At this point, the user should be able to start the system in performance mode, choose a map file, launch Artisynth, and see a talking face that moves in sync with the speech output as they perform.

In this context the MXJ object is more appropriately named ArtisynthManager, since it is responsible for all Artisynth-related actions in the DIVA project. Its use in the patcher "voiceMapping" is shown below:

The object "Artisynth Manager" is shown at bottom right. Note that as in step 4, an x-y space is provided for the user, simulating the performance vowel space.

The updated Perform window is shown below, with the additional "Vizeme Dictionary" and "Vizeme map" menus, and buttons for launching and stopping Artisynth.
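
As a rough illustration of the launch and stop buttons, the helper class below keeps a handle to the process spawned for StartAS so that it can later be terminated. It is a sketch only: the script path, the class name, and the assumption that an MXJ object such as ArtisynthManager delegates to it are not taken from the project code.

    import java.io.IOException;

    // Illustrative helper for the launch/stop buttons; the StartAS path is assumed.
    public class ArtisynthProcess {

        private Process process;   // handle to the running Artisynth/KuraFace

        // Launch StartAS in a separate process so the KuraFace comes up outside MAX.
        public synchronized void launch(String startAsPath) throws IOException {
            if (process != null) return;               // already running
            ProcessBuilder pb = new ProcessBuilder(startAsPath);
            pb.redirectErrorStream(true);              // merge stderr into stdout
            process = pb.start();
        }

        // Terminate the face when the user presses the stop button.
        public synchronized void stop() {
            if (process == null) return;
            process.destroy();
            process = null;
        }
    }

In this sketch, the launch button's message handler would call launch() with the (assumed) path to StartAS, and the stop button's handler would call stop().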