The project I am working on right now could not have been a better testbed for Moviesandbox. It's an 80-minute live theatre performance mixing two stage actors with stereoscopic 3D projection (as you might have figured out by now).
The piece is run live to give the actors some room with their timing, which means that all cues for animations, lip sync, sound, and camera movement have to be triggered manually.
This is a screenshot of what one scene in this play looks like:
The left screen is the node setup with all the key-press events and UDP (network) input events in place.
In order to stay in sync with the sound and keep the overall piece from becoming a complete trigger mess, we decided to make the sound the primary cue source: the sound computer sends start and end commands over a local network using UDP. It also sends the amplitude of the character voices, so their mouths move accordingly.
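The cueing idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Moviesandbox code: the port number and the plain-text message format ("start/end" commands plus amplitude values) are assumptions for the sake of the example.

```python
import socket

CUE_PORT = 9000  # assumed port; not from the actual production

def send_cue(sock, addr, message):
    """Fire-and-forget cue: UDP keeps latency low but guarantees no delivery."""
    sock.sendto(message.encode("utf-8"), addr)

# Demo over the loopback interface: one socket plays the "sound computer",
# the other plays the renderer listening for cues.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", CUE_PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
addr = ("127.0.0.1", CUE_PORT)
send_cue(sender, addr, "start scene_03")     # hypothetical scene cue
send_cue(sender, addr, "amplitude 0.42")     # voice level driving the mouth

messages = [receiver.recvfrom(1024)[0].decode("utf-8") for _ in range(2)]
print(messages)

sender.close()
receiver.close()
```

UDP is a reasonable fit here: a dropped amplitude packet only costs one frame of mouth movement, and avoiding TCP's retransmission delays matters more in a live show.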
To get a better idea of the setup, feel free to have a look at this flickr set which documents the production and the people involved:
www.flickr.com
The amazing thing about this project is that it was completed from start to finish, including script development and technical R&D, in under 4 months with a software team of 2 people and 3 additional illustrators.