JUST-TALK (Hubal, 2003) utilizes an architecture called AVATALK to allow conversations between people and virtual humans. The human user sees and hears responses from the virtual human subject.
The system is broken into three main components: 1) Language Processor – this module breaks down human speech into a semantic representation suitable for interpretation. Once interpretation is performed, a response is fed back from the Behavior Engine, and the Language Processor works in reverse, formulating speech and facial or hand gestures.
2) Behavior Engine – dynamically loads the context and the knowledge needed by the Language Processor. 3) Visualization Engine – takes gesture, movement, and speech output and uses a 3-D virtual human to perform the requested actions.
The mouth is lip-synced to the words by morphing the 3-D model and playing selected animation frames created through motion capture.
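Lip sync by morphing of this kind is commonly implemented as blend-shape interpolation: each viseme (the mouth shape for a class of phonemes) is stored as a morph target, and per-frame weights blend those targets onto the neutral face. The sketch below illustrates only that blending step, under that assumption; it is a generic illustration, not AVATALK's actual code, and the function and viseme names are hypothetical.

    import numpy as np

    def blend_mouth(neutral, viseme_targets, weights):
        """Linearly blend viseme morph targets onto the neutral mouth mesh.

        neutral:        (N, 3) array of base vertex positions
        viseme_targets: dict mapping viseme name -> (N, 3) target positions
        weights:        dict mapping viseme name -> blend weight in [0, 1]
        """
        vertices = neutral.copy()
        for viseme, weight in weights.items():
            # Each target contributes its offset from neutral, scaled by weight.
            vertices += weight * (viseme_targets[viseme] - neutral)
        return vertices

    # Toy 4-vertex "mouth": blend 70% toward an "AA" viseme, 20% toward "OO".
    neutral = np.zeros((4, 3))
    targets = {"AA": np.full((4, 3), 1.0), "OO": np.full((4, 3), -0.5)}
    frame = blend_mouth(neutral, targets, {"AA": 0.7, "OO": 0.2})
    print(frame)

In a full system, the per-frame weights would be driven by the phoneme timing of the synthesized speech, with the selected motion-capture frames layered on top.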
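Putting the three components together, a single conversational turn can be pictured as the pipeline below. This is a minimal sketch assuming a simple request-response loop; all class and method names are hypothetical placeholders, since Hubal (2003) does not describe AVATALK at the code level.

    # Minimal sketch of an AVATALK-style pipeline; names are hypothetical.

    class BehaviorEngine:
        """Holds the dynamically loaded context/knowledge and picks a response."""
        def __init__(self, knowledge):
            self.knowledge = knowledge  # context loaded for the current scenario

        def respond(self, semantics):
            # Look up a response for the interpreted utterance (stub: dict lookup).
            return self.knowledge.get(
                semantics, {"speech": "I don't understand.", "gesture": "shrug"})

    class LanguageProcessor:
        """Maps speech to semantics inbound, and semantics to output outbound."""
        def interpret(self, utterance):
            # Reduce the user's words to a semantic key (stub: lowercasing).
            return utterance.strip().lower()

        def realize(self, response):
            # Work "in reverse": turn the response into speech text and a gesture.
            return response["speech"], response["gesture"]

    class VisualizationEngine:
        """Drives the 3-D virtual human (stubbed here as print statements)."""
        def perform(self, speech, gesture):
            print(f"[virtual human says] {speech}")
            print(f"[virtual human gestures] {gesture}")

    # One turn of conversation through the three components.
    knowledge = {"hello": {"speech": "Hello, officer.", "gesture": "nod"}}
    lp, be, ve = LanguageProcessor(), BehaviorEngine(knowledge), VisualizationEngine()

    semantics = lp.interpret("Hello")       # speech -> semantic representation
    response = be.respond(semantics)        # Behavior Engine chooses a response
    speech, gesture = lp.realize(response)  # Language Processor works in reverse
    ve.perform(speech, gesture)             # 3-D virtual human performs the actions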