Flipside Studio is built around the concept of recording and playback of a number of elements simultaneously. Together, these can be used to tell stories, craft shows, and capture content.
Recording in Flipside Studio can refer to either motion capture or video.
For motion capture, Flipside Studio combines motion capture data with several other elements, including lip sync and facial expression data, into its own custom file format.
Flipside Studio records motion capture data for the tracking points it has available, and animates any missing points on a character using real-time inverse kinematics.
Flipside Studio can animate a wide variety of characters with widely varying proportions using as few as 3 data points (head + hands) captured via the VR hardware's positional tracking, or up to 6 data points using HTC Vive Trackers.
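To illustrate how inverse kinematics can fill in joints that aren't tracked, the sketch below solves the classic two-bone case (for example, shoulder-elbow-wrist) with the law of cosines. This is a minimal stand-in, not Flipside Studio's actual solver; the `two_bone_ik` function and its parameters are hypothetical.

```python
import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Return the elbow bend angle (radians) that places the wrist
    at target_dist from the shoulder, for a two-bone limb.

    Uses the law of cosines; the target distance is clamped to the
    limb's reachable range so the solver always returns an angle.
    """
    # Clamp so the triangle inequality always holds.
    d = max(abs(upper_len - lower_len), min(upper_len + lower_len, target_dist))
    # Interior angle at the elbow between the two bones.
    cos_elbow = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))

# A reachable far target extends the arm; a close target bends the elbow.
print(math.degrees(two_bone_ik(0.3, 0.3, 0.6)))  # fully extended (straight elbow)
print(math.degrees(two_bone_ik(0.3, 0.3, 0.3)))  # sharply bent elbow
```

A full-body solver layers many such constraints, but the principle is the same: known endpoints (head, hands, trackers) pin the chain, and the solver computes plausible angles for everything in between.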
A range of tracker configurations between these two extremes is supported.
Additionally, Flipside Studio records lip sync data that it derives from analyzing the microphone input for each actor, as well as basic facial expressions. This adds a new level of detail on top of what is traditionally recorded in a motion capture session.
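Flipside Studio doesn't document exactly how it derives lip sync from microphone input, but a minimal amplitude-based approach can be sketched as follows. The `mouth_open_envelope` function, its frame size, and its noise floor are illustrative assumptions; real lip sync systems typically classify visemes rather than mapping raw loudness.

```python
import math

def mouth_open_envelope(samples, frame_size=160, floor=0.02):
    """Map raw microphone samples (floats in [-1, 1]) to a per-frame
    mouth-open value in [0, 1] using a simple RMS loudness envelope.

    Illustrative only: frame_size=160 is 10 ms of audio at 16 kHz,
    and floor gates out low-level background noise.
    """
    values = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        # Gate out background noise, then normalize into [0, 1].
        values.append(min(1.0, max(0.0, (rms - floor) / (1.0 - floor))))
    return values

# Silence yields a closed mouth; a loud tone burst opens it.
silence = [0.0] * 160
loud = [math.sin(2 * math.pi * 220 * t / 16000) for t in range(160)]
print(mouth_open_envelope(silence + loud))
```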
Flipside Studio can record scenes with up to 5 users simultaneously, and additional parts can be recorded afterward by overdubbing roles onto existing motion capture recordings.
The real benefit of Flipside Studio comes from recording motion capture while the actors are immersed in the virtual sets. This lets actors see themselves and each other at the correct scale of their characters.
The result is better alignment between the actor's movement and the character's movement, which requires substantially less cleanup. For example, an actor playing a much shorter character will automatically take more steps to walk between two points, and will naturally look up to meet the gaze of characters who are taller in the scene but may not be any taller in real life.
To gain this benefit, the actors do need to be immersed in the virtual scene. Adding face tracking on its own doesn't solve the problem of character scale differences: actors still need to know where to look, and the steps they take need to match the steps their character would take.
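The step-count effect follows from simple proportionality: if stride length scales with character height, halving the height doubles the number of steps over the same distance. A small illustrative calculation, where `stride_ratio` is an assumed constant rather than a Flipside Studio parameter:

```python
def steps_to_cross(distance_m, character_height_m, stride_ratio=0.4):
    """Estimate the steps needed to walk a distance, assuming stride
    length scales linearly with character height (an assumption here)."""
    stride = character_height_m * stride_ratio
    return distance_m / stride

# The same 4 m walk, performed by characters of different heights:
print(steps_to_cross(4.0, 1.8))  # taller character, fewer steps
print(steps_to_cross(4.0, 0.9))  # half the height, twice the steps
```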
Flipside Studio does all of this and more for you automatically.
Flipside Studio sends periodic ping messages between users in a multiplayer session, which lets it track the average network latency between them. It uses that information to keep recorded parts in sync and to minimize the perceived latency between them.
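A ping-based latency estimate of this kind can be sketched as follows. The `LatencyTracker` class is a hypothetical illustration of averaging half-round-trip times over a sliding window, not Flipside Studio's implementation.

```python
from collections import deque

class LatencyTracker:
    """Track average network latency to a peer from round-trip ping times.

    One-way latency is approximated as half the round trip; a sliding
    window smooths out jitter. (Illustrative sketch only.)
    """
    def __init__(self, window=20):
        self.samples = deque(maxlen=window)

    def record_ping(self, sent_at, received_at):
        # Half the round trip approximates the one-way delay.
        self.samples.append((received_at - sent_at) / 2.0)

    def average_latency(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

# A remote part can then be shifted by the average one-way latency so
# local and remote recordings line up on a shared timeline.
tracker = LatencyTracker()
for rtt in (0.080, 0.120, 0.100):
    tracker.record_ping(sent_at=0.0, received_at=rtt)
print(tracker.average_latency())  # roughly 0.05 seconds
```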
We plan to introduce additional data exports in the near future. Contact us if you're interested in these capabilities.