Back in the year 2000 I designed, planned, and led the interactive avatar prototype project for the Autostadt Wolfsburg. I led a small team of 3D artists and programmers.
We photographed a Volkswagen worker from multiple angles, then created an animatable 3D model from that source data.
Multiple gestures and facial expressions were created to use as poses in our animation editor.
Using speech recognition and voice synthesis, visitors could interact with the avatar and hold a conversation following a non-linear storyline.
Animations were triggered by a custom-made editor, allowing quick changes to content and animation.
Sensors detected approaching visitors. The avatar then started communicating with the visitor, offering a set of topics to talk about.
The system and the editor worked in real time.
We developed a set of storylines about the history of Volkswagen.
I designed a flexible animation editor: content can be changed at any time and results are shown in real time.
Text can be enriched by adding facial expressions, gestures, images, videos, sound effects and music.
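The original editor's data model is not documented here, but the idea of attaching cues to spoken text can be sketched roughly as follows. All names (`Cue`, `ScriptLine`, the cue kinds) are hypothetical, chosen only to illustrate the concept:

```python
from dataclasses import dataclass, field

@dataclass
class Cue:
    kind: str     # e.g. "gesture", "expression", "image", "sound" (illustrative kinds)
    name: str     # which asset or pose to trigger
    at_word: int  # word index in the spoken text where the cue fires

@dataclass
class ScriptLine:
    text: str                           # text sent to the speech synthesizer
    cues: list = field(default_factory=list)

    def add_cue(self, kind: str, name: str, at_word: int) -> None:
        self.cues.append(Cue(kind, name, at_word))

# A script line enriched with a gesture and a facial expression
line = ScriptLine("Welcome to the Autostadt in Wolfsburg.")
line.add_cue("gesture", "wave", 0)
line.add_cue("expression", "smile", 3)
```

Anchoring cues to word positions is one simple way a real-time player could fire them in sync with the synthesized speech.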
The customer creates stories in the custom story editor. The wording for the speech synthesis can be adjusted so that the pronunciation is more accurate.
Adding links connects different storylines.
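Linked storylines naturally form a graph of story segments. A minimal sketch of that idea, with entirely hypothetical segment names and structure:

```python
# Hypothetical storyline graph: each node is a story segment,
# and links connect segments a conversation may continue into.
storylines = {
    "beetle_history": {"text": "The story of the Beetle...", "links": []},
    "factory_tour":   {"text": "Inside the plant...",        "links": []},
}

def link(stories: dict, src: str, dst: str) -> None:
    """Connect two storylines so one can lead into the other."""
    stories[src]["links"].append(dst)

link(storylines, "beetle_history", "factory_tour")
link(storylines, "factory_tour", "beetle_history")
```

Because segments only reference each other by name, new links can be added or removed at runtime without touching the segments themselves.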
For the conversation, I designed a keyword system that allows multiple trigger words per topic and fallback questions, similar to real human interaction.
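A keyword system of this kind can be sketched as a lookup from trigger words to topics, with a clarifying fallback when nothing matches. The topics, trigger words, and fallback wording below are invented for illustration, not taken from the original project:

```python
# Hypothetical keyword table: each topic has several trigger words.
TOPICS = {
    "beetle":  {"beetle", "bug", "kaefer"},
    "factory": {"factory", "plant", "production"},
}

# Fallback question asked when no trigger word is recognized.
FALLBACK = ("Sorry, I didn't catch that. Would you like to hear "
            "about the Beetle or the factory?")

def match_topic(utterance: str):
    """Return the first topic whose trigger words overlap the utterance."""
    words = set(utterance.lower().split())
    for topic, triggers in TOPICS.items():
        if words & triggers:
            return topic
    return None

def respond(utterance: str) -> str:
    topic = match_topic(utterance)
    return topic if topic is not None else FALLBACK
```

Allowing several trigger words per topic makes the recognizer tolerant of varied phrasing, while the fallback question keeps the dialogue moving instead of ending in silence.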