- by John Luxford
Our first big update to the Flipside Alpha is out! Alpha users will see the option to update on the Flipside Steam page and in the Oculus Home app (or if you have auto-updates enabled, you should automatically have the latest version).
Since this is a big update with lots of improvements, let's start with the highlights:
Based on your feedback, we made several changes to the user interface:
This felt too video game-like and also had accessibility issues for single-handed use.
As a result, the following user interface elements have been added:
This falls in line with many other VR apps and games, and should make it easier for users to pick up.
We created a utility belt (more like floating holsters) by your waist where you can always grab these common tools.
The palettes have also been unified to act like a single menu system for the whole app.
This helps distinguish the Flipside user interface from show elements like props and sets.
To stop recording, you can press the usual A/X on Oculus Rift or Application Menu on HTC Vive, but you'll also see a wristwatch appear on your left wrist while recording is underway that has a stop button on it. We decided to keep that bit of skeuomorphism :)
The first public release of the Flipside Creator Tools is out, enabling you to import your own custom characters from 3D models, complete with full-body movement, hand animations, facial expressions, lip syncing, and natural eye movement. Now you can be anything you can imagine in Flipside!
We've added a calibration button found on the underside of the Characters palette. This measures and stores your height, shoulder height, and arm span and uses these to provide more accurate motion capture.
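As a hypothetical illustration of how measurements like these can feed motion capture, here's a minimal sketch of mapping a user's proportions onto an avatar of a different size. The function names and formulas are our own assumptions for illustration, not Flipside's actual implementation:

```python
# Sketch: using calibration measurements to fit user motion onto an
# avatar. All names and formulas are illustrative assumptions.

def avatar_scale(user_height_m, avatar_height_m):
    """Uniform scale factor to map the user's motion onto the avatar."""
    return avatar_height_m / user_height_m

def arm_stretch(user_arm_span_m, avatar_arm_span_m, scale):
    """Extra per-arm stretch applied after uniform scaling, so the
    user's full reach still maps to the avatar's full arm span."""
    return avatar_arm_span_m / (user_arm_span_m * scale)

# A 1.75m-tall user driving a shorter, wider cartoon avatar:
scale = avatar_scale(1.75, 1.40)
stretch = arm_stretch(1.75, 1.60, scale)
print(round(scale, 3), round(stretch, 3))
```

Without a per-user calibration like this, the system has to assume average proportions, which is exactly what makes feet float or hands fall short of where they should be.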
The first alpha release had a bug where your character's feet would lift too easily off the floor. Calibration, along with some other character changes, has greatly improved how the feet feel and connect with the floor.
Looking up, down and all around doesn't cause your character's body to move nearly as much, which helps convey body language that much better.
Flipside now supports both two- and three-Vive Tracker configurations, tracking either both feet, or both feet and your waist. This opens up whole new possibilities for physical acting in Flipside.
The handheld camera had a bug where it wasn't projecting to the 2D output, which made it impossible to capture handheld shots. Now whenever you grab the handheld camera from your side, it will become the active camera.
The handheld camera now defaults to facing outward. A bug had it defaulting to selfie mode, and facing outward makes the camera far more useful for quickly capturing the right shots for your shows.
We've improved the handheld camera's steadicam smoothing effect, helping you capture more stable footage of your shows.
We can't wait to hear your feedback on these changes and see what you guys make in Flipside!
- by John Luxford
This year started with the decision to focus exclusively on Flipside as a company. That was a hard decision because up until that point we were a bootstrapped company relying on service-based income to survive. It was even harder because it meant deciding to cancel our then-imminent plans to release Lost Cities on the Oculus Rift. But far and away it was the right decision.
We were just starting the art production on our first Flipside show, Super Secret Science Island, in collaboration with the super creative comedy duo Bucko. Super Secret Science Island is an improv comedy set on a deserted island which really stretched our multiplayer and avatar capabilities. It also taught us reams and reams about what actors need to perform well in a virtual environment (see our 3-part acting in VR series).
We had also just been accepted into Tribe 9 of Boost VC, a startup accelerator focused on frontier tech companies like us working in areas like VR, AR, AI, and Blockchain. Boost VC believing in our vision was all the proof we needed that we made the right move. Throughout the program, we made some amazing relationships and connections, learned a metric ton, and moved our product forward by leaps and bounds.
While at Boost VC, we were connected with San Francisco comedian Jordan Cerminara, who became the writer and actor in our second Flipside show, Earth From Up Here. This show made extensive use of our camera system, our teleprompter, and our slideshow for delivering SNL Weekend Update-style news. In the show, Jordan plays Zeblo Gonzor, an alien newscaster who makes jokes about how crazy Earthly news must seem from the outside.
Having produced a complete season of two shows, we went back to the drawing board and determined what it would take to provide the same experience for creators who could work on their own, without our help troubleshooting issues on the fly. We talked to lots of creators, and really honed our vision for what Flipside 1.0 ought to be.
We also demoed Flipside at a ton of events, from All Access to Vidcon, grew into our very own office space from our previous co-working spaces, and also grew to a team of seven. Compared to the year before, having a whole team working on a single shared vision has been amazing, to say the least.
And now, just in time for the holidays, we're sending our first Flipside Early Access release out into the wild to our first group of beta testers, warts and all.
They say if you're not at least a little embarrassed about showing your app to the world, you've waited too long. I wouldn't say that we're embarrassed, because we're all immensely proud of the work and creativity that's gone into this release, but there's a list a mile long of things we can't wait to fix or add.
2017 has been the craziest year yet for us, and ending it by getting Flipside into the hands of its first users feels like exactly how it should end. I expect 2018 to be even crazier, with more beta updates, a budding new creator community to grow, a wider public release and more content in the works.
We'd like to end with a huge THANK YOU to everyone who has been along for this ride and supported us in any way this past year. Flipside is the most ambitious and creative thing I think any of us have attempted to make, and it wouldn't be where it is today without you.
And another huge THANK YOU to our creator community who have waited patiently for us to get Flipside into your hands. The desire to help people share their stories has been at the heart of our company from before the beginning, and we can't wait to see what all of you come up with!
The Flipside Team
PS. Have a safe and warm holiday, and an inspired new year!
By Rachael Hosein (CCO / Co-founder) & John Luxford (CTO / Co-founder)
The first step was to see what being a stick figure avatar felt like in VR, which it turns out is ridiculous fun! From there, we wanted to let people make their own comic panels with speech bubbles and props and save them as images that matched the xkcd style.
We did have to deviate from the style in some places, like adding outlines to our speech bubbles because without outlines they were harder to grab and place. But overall, we're pretty proud of how well we were able to match the look and feel.
Here are the features we managed to finish over the course of the jam:
This was a super fun project that we'd love to incorporate pieces of into Flipside proper. Imagine making your own animated shows as the xkcd characters in a web comic world. How cool is that?
You can download xkcd vr for Oculus Rift and HTC Vive here. We hope you enjoy it as much as we enjoyed making it, and feel free to reach out - we'd love to hear what you think!
“The changes are dynamic and take place in real time. The show reconfigures itself dynamically depending upon what happens moment to moment… It’s a smart play.”
– Neal Stephenson, The Diamond Age
By John Luxford, CTO & Cofounder - Flipside
We started Flipside out of a shared passion for using technology to empower people to be creative and to tell their stories. We love building creation tools, and our mission is to enable a new generation of creativity using VR and AR, or as people have been dubbing the collective Virtual/Augmented/Mixed Reality combo, just XR.
When we started Flipside, we reflected a lot about where we saw these technologies going. We wondered: what will kids growing up in a time when AR headsets are commonplace expect from their apps?
If we can understand this, we can avoid building things that seem novel today but won’t have staying power. We call this target user The XR Generation.
Some key insights emerged from these reflections:
Just like our kids don't share our passion for the movies and shows we loved growing up, the next generations will prefer content made for their generation.
We're just starting to see the potential of 3D content today, but this content consumption pattern led us to the realization that as display resolutions improve and miniaturization lets hardware reach a consumer-friendly level of style, 3D content will overtake 2D content at some point in the future. At that point, kids will all want to wear XR glasses, because everyone else will be doing it and they won't want to miss out on sharing that experience.
2D content will still be a part of that experience, but it will simply become a virtual flat surface in a larger 3D context. And the reason users will prefer real-time rendering over pre-rendered content like stereoscopic 360 video is because real-time content can be interacted with. A 2D screen or even a 360 video just can't compete on that front. And as interactions become increasingly physical at the same time, the level of engagement will be profoundly different than today's touchscreen world.
Video games are today's 3D content rendered to a 2D screen, and the immersion is lacking. The content never feels like it's part of your world, and you never feel truly transported to another world, because there's a flat piece of glass always in the way. High resolution VR and AR will enable true immersion and physical interaction with the games and entertainment people consume in the future, and today's video games will seem antiquated by comparison, just like our favourite movies growing up seem to kids today.
The XR Generation will expect that they’ll be able to invite their friends to join them in any experience worth their time, and that it will be a rich and expressive shared experience.
The video game industry is already bigger than the movie industry, and games like Night in the Woods keep showing us how interactive storytelling can tell deeply personal and human stories while giving players the agency to explore the game's world, go where they choose, and feel a stronger affinity with the characters they become while playing.
Like Neal Stephenson’s vision of “ractives” in The Diamond Age, we see the line between performer and viewer blurring until they’re almost indistinguishable. Not every piece of content will necessarily be fully interactive, but new forms of interactivity will emerge that haven't even been imagined yet.
Users already expect some degree of agency in their virtual worlds, and they will feel a need to participate in the creation of the impossible things that they are going to experience. Humans are born curious and unafraid to express their creativity, and that creativity is the key to the process of discovery and learning about the world.
If you can think it, why can't it become virtually real? And with advancements in haptic feedback, it may feel just as good as the real thing, too.
The idea of a single metaverse that everyone visits sounds great in sci-fi novels, but doesn't quite add up in practice.
There will be countless virtual worlds, not just one. But we can only interact with so many people at one time, and that intimacy of interaction is what gives it much of its inherent value.
We anticipate that there will be hugely popular metaverse-like apps, but no one app will be able to satisfy everyone's creative and entertainment needs.
Individual games will run outside of that metaverse, even if you launch them from inside of it and end up back there later, like Steam or Oculus Home today. With AR, there won't be a need for a metaverse at all, just for pieces of a metaverse like a unified avatar system.
For these pieces that make up a metaverse, standards will likely emerge just like we've seen on the web: your avatar will go with you from world to world or place to place; there will be an operating system which acts as a way of organizing and launching your XR apps just like we have now, and glues the various standards together to make a whole, but it won't be where users spend most of their time.
The key to realizing this long-term vision is to build the features necessary to bridge the gap between what is possible today, and what will become possible as the technology matures. This means providing real value to creators and viewers now, and building the future out in careful steps, which leads us to the following axioms.
360 video is going to get much better, but it still has inherent limitations, like the viewer being stuck at a fixed point in the scene, and will never be fully interactive in the way a real-time rendered scene can be. Lightfield technology may get us closer to interactivity, but it has its own limitations too and is still years away from hitting the market.
Today's real-time rendering is also limited in its ability to render truly lifelike scenes, but this is improving rapidly and won't be a problem in the future we're talking about. Only real-time rendering can provide users with an experience that is both immersive and interactive, allowing them to affect its outcomes.
We're focusing on real-time now, because it aligns perfectly with the way we see the XR content of the future being consumed.
We are not building the metaverse. We are intentionally building a specific show production and viewing app for actors and viewers, centered on their needs.
This means that actors need to have the tools to help them act, and viewers need to have the tools to engage with the content. It's our job to take care of the rest, which should largely remain unseen to both sides.
Live shows and single-take recordings make production faster today, and allow for real-time engagement.
The easier we can make producing content, the more content can be produced. That provides content creators with a tangible benefit today, and helps us get to a stage where the audience can jump inside of, and become part of, the show itself.
We started Flipside as a multiplayer experience from day one. This helps us achieve axiom 3, because multiple users can act together in real-time, without the need to jump between characters over a series of takes.
It also helps us push ourselves when it comes to the expressiveness we want in our avatars. It's one thing to act a part and watch it back, but it's another to see someone in front of you, and assess in real-time whether their expressions are being sufficiently conveyed.
Since VR and AR are new mediums, we know that the best way to accelerate the pace of discovery is to bring as wide of a variety of content creators into them as possible. Because Flipside is a social show production tool, users can craft simple shareables or more elaborate live and recorded productions like comedies, dramas, and game shows.
An engaged audience comes back.
Our social interactivity framework allows each show to have a game element or custom activity that creates unique levels of engagement on a per-show basis.
While these are simple interactions in the beginning, they will become increasingly varied and rich over time as more creators use Flipside to create new worlds, stories, and experiences. This axiom comes more from live performance and theatre than from television or movies, which we explore in further detail in Les' post Virtual Reality Will Disrupt the Stage.
From the beginning, we understood the value of onboarding teams and scaling production.
Together, we are building the future of entertainment, and we’ve only just scratched the surface of what’s possible. We envision Flipside as nothing less than the technology that powers interactive entertainment of the future, something that empowers millions of creators to reach billions of viewers, not that those distinctions even make sense to the XR Generation to come.
If you want to join us on this creative journey, make sure to sign up for early access to Flipside.
By John Luxford, CTO & Cofounder - Flipside
I had my first visit to Austin, Texas for Unity’s Unite conference which ran from October 3-5, and wanted to share some highlights from the amazing week I had there.
Neill Blomkamp, maker of District 9 and Chappie, has been collaborating with Unity to produce the next two short films that are part of Unity’s Adam series. The series is meant to demonstrate Unity’s ability to render near lifelike animated content in real-time, and it is just beautiful.
The first of the two new short films is called Adam: The Mirror, which was shown during the Unite keynote just before Neill Blomkamp was invited on stage to talk about the differences moving from pre-rendered CG to real-time, and how his team felt like it was almost cheating because you no longer have to wait for each frame to render before watching the results. It was super cool to hear him speak, having been a big fan ever since I first saw District 9.
It’s always a pleasure to hear Devin and Alex from Owlchemy Labs talk about VR, not just because they’re super entertaining, but because of the careful consideration they bring to each aspect of design and development.
This talk deconstructed their Rick and Morty: Virtual Rick-ality game, showing how they solved issues like designing for multiple playspace sizes, 360° and 180° setups, making teleportation work in the context of the game, and designing interactions for a single trigger button control scheme.
They also showed a spreadsheet of the possible permutations that the Combinator machine lets players create, and it reminded me a lot of their talk about Dyscourse having around 180 separate branching narratives all woven together. Sometimes solving the hard problem the right way takes an awful lot of hard work.
I arrived in Austin just in time to head over to the Unite Pre-Mixer put on by the local Unity User Group. As I was walking around the room going between the drink station, the various VR and AR demo stations, and chatting with the occasional person, I hear a “Hey you, you look like you’re just wandering around. Come talk to us!” from someone who then introduced himself as Byron. We all had a great chat, and it was a nice welcome into the Unite community.
Fast-forward a day and I head back to my Airbnb around 1am after cruising 6th Street with some friends I ran into / met that day, and sitting on the couch is a guy who introduces himself as Mike who just came back from Unite as well. Awesome! So we get to talking, and about 15 minutes later Byron walks into the house, looks at me, and says “I didn’t know we were roommates!”
We all laugh about it, find out we’re all Canadian too, and stay up until around 4am laughing and telling stories. Couldn’t have been a better living situation to find yourself in for your first Unite :)
I got invited to participate in a roundtable with a handful of other developers to share our thoughts on how Unity can better support AR development. It was an honour and super fun too! We talked about so many aspects of both AR and VR, challenges with input limitations, tracking limitations, trying to create useful apps instead of short-term gimmicks, and lots more.
This highlighted for me how receptive Unity is to learning from its community of developers and artists. Coming from the open source world, you can really see which projects listen to their user base, and which ones assume the user must have done something wrong - the bug couldn't possibly be our fault.
It’s very encouraging to see Unity fall squarely on the right side of that cultural divide, and is something I felt echoed in each conversation I had with Unity’s developers over the course of the week.
Now I can’t wait for Unite 2018. I look forward to learning lots of new things again, and seeing many familiar faces. Thanks to everyone who made my first Unite such an awesome one!
By John Luxford, CTO & Co-founder - Flipside
This is part 3 of our blog post series about acting in VR and working with actors in virtual environments. Here are the two previous posts:
Now that we’ve explored some general lessons learned as well as lessons by actors for actors wanting to act in VR, here are some of the more technical discoveries we've made that can have a big impact on the quality of your final output.
Actors need to respond quickly to verbal and physical cues from the other actors present, as well as to changes in the environment. This is not a problem in the real world, because there is no latency between actors present in the same physical space, but actors over multiplayer are always seeing each other's actions from the past. This is the latency between them.
In a virtual space, latency is impossible to avoid. Even the time it takes for an action taken by the actor to be shown in their own VR headset can be upwards of 20 milliseconds. Remote actors will see each other's actions with latencies of 100 milliseconds or greater, even over short distance peer-to-peer connections.
Depending on the distance and connection quality, that can be as much as half a second or more, in which case reaction times are simply too slow. Past the 100 millisecond mark, actor-to-actor response times can degrade quickly, making the reaction to a joke fall flat, or creating awkward pauses similar to those you see on a slow Skype connection. For this reason, a virtual studio needs to be designed to keep latency to an absolute minimum.
Fortunately, a virtual studio doesn't have a lot of the same requirements that make peer-to-peer connections disadvantageous for video games. For example, the number of peers is going to be relatively low, and you don't need to protect against cheating, or wait for the slowest player to catch up before achieving consensus on the next action in the game. So for VR over shorter distances, peer-to-peer is a better option than using a server in the middle (although a server can often decrease latency over greater distances because of the faster connections between data centers).
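To put rough numbers on this, here's a back-of-the-envelope sketch of how an actor-to-actor latency budget adds up. All of the component values are illustrative assumptions, not measured Flipside figures:

```python
# Back-of-the-envelope actor-to-actor latency budget.
# All numbers are illustrative assumptions, not measured values.

def latency_budget(network_rtt_ms, jitter_buffer_ms=20,
                   capture_ms=10, render_ms=20):
    """Estimate one-way latency (ms) from one actor's motion to the
    other actor seeing it: local capture + one-way network transit
    + receive-side jitter buffering + rendering to the headset."""
    one_way_network = network_rtt_ms / 2
    return capture_ms + one_way_network + jitter_buffer_ms + render_ms

# Same-city peer-to-peer connection (~30ms round trip):
local = latency_budget(network_rtt_ms=30)

# Cross-continent connection (~150ms round trip):
remote = latency_budget(network_rtt_ms=150)

print(local, remote)  # 65.0 125.0
```

Even with generous assumptions, the cross-continent case lands above the 100 millisecond mark where reaction timing starts to degrade, which is why co-locating actors matters so much.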
Buffering needs to be minimized as much as possible too. Minimal buffers also mean the system can't smooth over network hiccups as easily, so a stable and fast network connection is needed at both ends.
A great way to keep latency to a minimum is to make sure the actors are physically located close together, preferably connected via Ethernet to the same network.
If you're recording a show with multiple actors in the same physical space, soundproofing between them becomes critical, because the microphones in each VR headset can pick up the other actors' voices, causing issues with lip syncing where the lips move when an actor isn't speaking, or even one actor faintly coming out of the other actor's mouth.
Even hearing feet on the ground, or the clicks from the older Oculus Touch engineering samples, can be picked up and become audible, or cause the character's lips to twitch. Wearing socks and using the consumer edition of the Oculus Touch controllers can make a big difference.
In-ear earphones are also key to ensuring voices don't bleed through from the earphones into the wrong actor's microphone.
On the simplest level, this means adjusting the VR headset microphone levels in the system settings so that the voices at their loudest aren't clipping (e.g., causing audio distortion). It also means getting the audio mix right between the actors, the music, and other sound effects.
Clipping in a digital audio signal.
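As an illustration of what clipping looks like in code, here's a minimal sketch of a naive clipping detector for a block of samples normalized to [-1.0, 1.0]. The thresholds and names are our own assumptions; real level meters typically look for runs of consecutive full-scale samples rather than isolated peaks:

```python
# Naive clipping detector for audio samples normalized to [-1.0, 1.0].
# Thresholds are illustrative; real meters are more sophisticated.

def count_clipped(samples, threshold=0.999):
    """Count samples at or near full scale (likely distorted)."""
    return sum(1 for s in samples if abs(s) >= threshold)

def is_clipping(samples, max_clipped=3):
    """Flag a block as clipped once too many samples hit full scale."""
    return count_clipped(samples) > max_clipped

clean = [0.2, -0.5, 0.8, -0.3]
hot = [1.0, 1.0, -1.0, 1.0, 0.6]

print(is_clipping(clean))  # False
print(is_clipping(hot))    # True
```

The fix is upstream, though: lower the microphone input gain in the system settings until the loudest lines stay comfortably below full scale.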
For traditional 2D output, a spatialized audio mix is not ideal either, since the mix would be relative to the position and direction of the local actor's head in the scene. For this reason, a stereo mix is important if you're recording for 2D viewers, so in Flipside we built a way to output a stereo mix while recording, while still replaying the spatialized version in VR.
Another challenge is that VoIP-quality voice recording is substandard for recorded shows. A digital signal can only represent frequencies below half its sample rate (the Nyquist limit), so a 16kHz sample rate can't capture the higher frequencies of an actor's voice, losing detail and leaving them sounding muffled.
In practice, the ceiling where a 16kHz stream stops capturing voice properly is around 7.2kHz, but to capture the full frequency range of a voice you want everything up to 12kHz, or even higher. This is a trade-off between quality and the size of the audio data being sent between actors: if the data is too large, it can slow things down, adding to the latency problem.
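These ceilings follow from the Nyquist limit: a signal sampled at f Hz can only represent frequencies below f/2, and codec filtering typically eats a bit more headroom below that theoretical limit. A quick sketch of the arithmetic:

```python
# Nyquist arithmetic: a sample rate of f Hz can only represent
# frequencies below f / 2. Codec filtering usually trims the usable
# ceiling a little further below the theoretical limit.

def nyquist_limit(sample_rate_hz):
    """Highest frequency representable at a given sample rate."""
    return sample_rate_hz / 2

def min_sample_rate(highest_freq_hz):
    """Smallest sample rate able to represent highest_freq_hz."""
    return 2 * highest_freq_hz

print(nyquist_limit(16_000))    # 8000.0 - wideband VoIP ceiling
print(min_sample_rate(12_000))  # 24000 - needed to capture up to 12kHz
print(nyquist_limit(48_000))    # 24000.0 - at the studio-standard rate
```

So capturing the full richness of a voice means roughly doubling the sample rate typical of VoIP, which in turn doubles the raw audio data moving between actors.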
There are pros and cons to both platforms, and while both are amazing platforms in their own right, which one is anyone's favourite usually comes down to personal preference.
That said, the Oculus Rift with Touch Controllers has certain advantages and the HTC Vive has other advantages too, for the purposes of acting.
On the Oculus Rift, the microphone generally sounds better, and the Oculus Touch controllers offer more expressiveness in the hands, as well as additional buttons which allow for control of things like a teleprompter or slideshow in the scene. We've found the Oculus Touch's joystick easier for precision control of facial expressions than the thumb pad on the HTC Vive controllers, and the middle finger button easier to grab with than the Vive's grip buttons.
On the other hand, the HTC Vive's larger tracking volume is much better suited to actors looking to move around, although a 3-sensor setup can easily achieve a sufficient tracking volume for Oculus Rift users. The Vive also wins on cord length from the PC to the headset, and the Vive trackers are awesome for doing full body motion capture!
Working with professional actors in Flipside Studio over the past few months really opened our eyes to the subtle balance needed to provide an environment they feel not just comfortable acting in, but inspired to be in too.
We're glad we could share what we've learned with you, and as we continue seeking out new actors for our Flipside Studio early access program, we hope these lessons will inform creators and help you create better content, faster.
- by John Luxford
Earth From Up Here is a news-style comedy show produced in Flipside that features an alien newscaster named Zēblō Gonzor, who sits at a UFO-shaped desk, with a screen behind him for showing photos and video clips. On his desk are a number of props - a coffee cup, a paper and pencil - and some buttons that trigger the show's intro, outro, and alien laugh tracks. The concept is similar to Saturday Night Live's Weekend Update and Last Week Tonight with John Oliver.
Earth From Up Here was designed for a two-person team made up of a producer/director and a writer/performer. Here is how we make an episode of the show.
The producer/director searches for funny news stories and videos online and adds them to a preparation document.
The writer/performer takes this document and writes a script based on the materials provided.
Based on the script, the writer and producer also compile the needed photos and video clips. We use Photoshop and Adobe Premiere for this step.
We then upload the images and video files to the web. We use Backblaze B2 to store these files, but you could also use Amazon S3 or any website or file hosting service that provides public links to the files.
Now we're ready to launch Flipside and enter the studio. Once inside, we select the Earth From Up Here set.
To add our slideshow links, we right-click the Slideshow button under the Activities tab and edit the slideshow settings. We paste the links in one-per-line in the order they need to appear for the show.
Next, we do the same thing for the Teleprompter and paste the script into its settings, so the talent can read from the script while they're performing (Figure 5).
Now that the show is ready to be recorded, we also launch OBS Studio to record the video output. Flipside controls OBS Studio transparently using the obs-websocket plugin. This allows Flipside to focus on recording the VR version of the show, while OBS Studio simultaneously handles recording the video as well as the ability to stream a video feed live to Twitch, YouTube, Facebook Live, and other streaming services.
To ensure OBS Studio is ready, we click the Connect button and wait for the OBS Status to say Connected.
When we click Record, the show starts and the actor performs while the director cuts camera angles. Either the actor or the director can control the teleprompter and slideshow, depending on what works best for the team members.
When a take is good, the director or the actor can click the star icon on that take to keep track of the good ones for later.
So that's the process of producing an episode of Earth From Up Here in Flipside. As you can see, Flipside streamlines much of the time it takes to produce an animated show, and enables the talent to focus on what they do best.
By Lauren Cochrane, Improviser & Writer - Bucko
Welcome to part 2 of our blog post series on working with live actors in VR (click here to read part 1). This part is about working as an actor in VR, and is a guest post courtesy of our friends at Bucko (aka Genefur and 2B from Super Secret Science Island).
Improv is and always will be about listening. Connecting to your scene partner requires communication both verbally and physically. In VR improv, the same is true, but you will need to listen in a whole new way.
You won’t have the same physical connection as you would in a live theatre setting, so it’s imperative that you take your time and speak and move with intention. It’s actually quite a gift for an improvisor to have the opportunity to play in a format that slows you down and makes you come at a scene from a completely new perspective.
Yes, you will absolutely react to and communicate with the characters' movements (both your own, and your scene partner's), but the movements themselves are different. For example: if I am in a regular live scene and I put my hand to my ear to listen to something in the distance, that's a pretty simple gesture I can do to communicate an offer, and I know my scene partner will follow along. In VR, my character's body and limitations are different than my own.
Same scenario as above: Genefur’s ear is about half a foot away from my head because the character’s head is wider than mine. So I, as the actor, have to think about how to move my body and puppet Genefur to put her hand to her ear, to communicate the same offer of listening to something in the distance.
This adds a few extra seconds to my offer. Both scene partners have to be aware of this added time. If you are just talking and flailing around, you will miss things and the audience won't be able to immerse themselves in the world. So it takes a little bit to get used to the timing and new reaction process. Practice and play.
It’s important to be aware of your virtual surroundings and space, so you don’t bump into things you can’t feel. Using the monitors inside the VR scene is really important. You will want to get familiar with the environments and props to know where they end and you begin.
Yes, it can be funny to use the world and hide inside it (like when Genefur hid inside the fridge to scare 2B), but it can take away the reality if you walk through the table or each other when you're not supposed to be able to. It's just like in live improv: if I set up the scene and indicated that "this is where the grand piano is, it's big and beautiful and right there" and then I walk through it, it dispels the reality I worked so hard to create. Same goes for VR. It will take the viewer out of the magic, and then the "oops" becomes the focus rather than the scene.
Trust is still conveyed between virtual avatars, even though you’re acting against the other person’s character and not their real self. If you are playing with someone who you are used to playing with, it will make the VR scene process MUCH easier. It’s a little funny to get used to, but knowing that the person you are playing with is right there with you in the scene, while also being in a room down the hall at the same time, is kind of a new sensation.
Your brain will honestly believe you can reach out and physically touch your partner. Aaron and I usually high five to connect and ground ourselves together in live scenes. And without talking about it, we naturally did that between takes in VR scenes.
When you come out of VR and realize this person your brain swore was beside you isn’t, it’s a very cool/weird/eerie/amazing feeling. It’s a feeling that is probably easiest to process if you are super comfortable with your scene partner. It’s a new level of trust. As an improvisor, that is quite the experience!
Lack of an audience can take a little bit of adjustment. Live improv is pretty clear with its audience communication. You, as the improvisor, will know if the audience is digging what you are putting out by their laughter, clapping - even just the vibe of the room. You can feel and see if the audience is enjoying themselves and connecting.
In VR, you won’t have that, which can take a little bit of getting used to. You have to trust the same instincts and timing as you would in a live theatre, but be okay without the instant payoff of a room laughing and giggling with you. You’re a seasoned improvisor. You got this. Trust it’s working, and you can go back and try again if it’s not.
In live improv, you will constantly be told “Cheat to the audience!” They need to be able to see what you are doing, what you are holding (mimed or real) and what you are trying to do with it. It’s the absolute same in VR improv.
The cool thing about props in the VR scenes is that they are used to add even more to the world than you could ever have in live scenes. (Ahem: Super Science Pencil-YES!) So if you use or make something, be sure to check the monitors to see that its purpose is being communicated.
So if you drew a sword and are now using it, make sure the cameras and your scene partner can see it clearly. Otherwise, you will have a break in communication. If that happens, scroll up and re-read this again. It’s all there my friends. :)
By John Luxford, CTO & Co-founder - Flipside
With Flipside’s first two show productions in full swing, we've now been through a number of production days with live actors working inside the platform. We learned a ton as a result, and wanted to share what we learned with you.
This first post explores some of the more general lessons we learned that have helped streamline our productions and helped us empower our actors to do the best job they can.
Processes need to be honed, but they also need to be documented. These are living documents that will evolve rapidly, but you won’t be able to iterate on them as fast if you don’t have them written down to begin with.
We currently have documentation covering:
These act as checklists to make sure we don’t miss a step that could cost us time or, worse, a usable end result.
Because Flipside is still in beta, our “known issues and workarounds” document becomes critical. The purpose of this is to provide quick actionables we can use when an issue arises, without having to worry about finding a creative solution on the spot. Not having a workaround ready can quickly eat into your production time.
At first, there were little things that we would have to reset between takes, and early on some of these even required restarting the app, which doesn’t help the actors stay in the right frame of mind. Context switching hurts creativity, and our goal isn’t just to be a virtual studio, but to use that opportunity to eliminate as much of the friction that goes into show production as possible.
So we iterated on ways of reducing the time between takes as much as possible. We now have a process that is impressively automated, with one person manning the camera switcher and director tools, while the actors are free to concentrate their attention on what they do best.
Actors need to learn and get into their characters, how they move, how they talk. They also need to get comfortable acting with a VR headset on. One request we got was a simple mirror scene, so the actors could practice their parts while seeing themselves from the front, side profile, and back all at the same time. Actors can now hop in there and see exactly how their movements translate to their virtual counterparts.
The actors need to know when something is about to change, or where they should be standing and facing to be in frame. For example, we added virtual marks for each camera position, which update prior to the next camera change so the actors can know to move into place or turn if needed for the next shot.
This can also be as simple as counting down to “Action!” in VR when the director clicks record instead of starting recording immediately after pressing the button. These little things add up to make a more intuitive experience for everyone.
This means we put a lot of work into making eye contact feel right, and blinking feel natural, because we don’t have eye tracking available in consumer VR headsets just yet.
We also devised our own system for more natural neck and spine movement, as well as arms that emulate more traditional animation techniques, emphasizing style over accuracy. Today’s full-body inverse kinematics options still don’t feel quite natural, and the closer you get to the character feeling alive, the more you risk falling into the uncanny valley.
The more you play into the strengths of the medium, the more the quality of the content can shine. Counter-intuitively, the better things get, the more noticeable the remaining issues become.
We quickly realized that even with lip syncing and natural eye movements, the avatar faces felt dead. To solve this, we created an expression system that the actors can control with the joystick on their motion controllers, which allows them to express four distinct emotions but also blend between them (smoothly transitioning from happy to upset, while blending naturally with the lip syncing).
With a little practice, these expressions can become reflexive actions for the actors, giving them a new level of expressive control as they embody their characters.
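The joystick-to-expression mapping described above can be sketched in a few lines. This is an illustrative guess at how such a system might work, not Flipside's actual implementation, and the expression names are hypothetical:

```python
def expression_weights(x, y):
    """Map a joystick position (x, y each in [-1, 1]) to blend weights
    for four expressions placed at the stick's cardinal directions.
    Pushing partway between two directions blends the two expressions."""
    weights = {
        "happy":     max(0.0, y),    # push forward
        "upset":     max(0.0, -y),   # pull back
        "surprised": max(0.0, x),    # tilt right
        "angry":     max(0.0, -x),   # tilt left
    }
    # Normalize so the combined expression weight never exceeds 1.0,
    # leaving headroom for lip-sync shapes layered on top.
    total = sum(weights.values())
    if total > 1.0:
        weights = {k: v / total for k, v in weights.items()}
    return weights
```

With a scheme like this, a diagonal stick push naturally yields a 50/50 blend of two neighboring emotions, which is one plausible way to get the smooth transitions the post describes.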
There are lots of unsolved problems in VR, probably most notably locomotion without causing motion sickness. But there are other subtler causes of motion sickness too, which can include anything that creates even slight disorientation.
Image source: techradar.com
One of the strangest examples we encountered in Flipside was in our preview monitors (which are just flat virtual screens for the actors to see the 2D output). We found that there was a perceived parallax in the preview monitors which caused a tiny amount of motion sickness over time. Nothing crazy, but present nonetheless. The solution we came up with was to flip the video on the preview screens horizontally. This had the effect of making any on-screen text appear backwards, but eliminated the perceived parallax which slowly caused discomfort for the actors.
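Mechanically, the fix is tiny. As a rough illustration (not Flipside's actual rendering code), mirroring a frame horizontally amounts to reversing each row of pixels:

```python
def mirror_horizontal(frame):
    """Mirror a frame left-to-right. `frame` is a row-major grid of
    pixel values; reversing each row flips the image horizontally.
    Text reads backwards, but the actors' mirrored movements now match
    what they intuitively expect from a real mirror, which is what
    removed the perceived parallax described above."""
    return [list(reversed(row)) for row in frame]
```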
The reason this is so critical is that actors are likely spending prolonged periods of time in the virtual sets, doing several takes before they get it just right, or doing batches of episodes in a single shoot. Anything that causes discomfort can potentially cut your shoot short in an unpleasant way.
These are some of the more general lessons we took away from working hands-on with live actors in a virtual world. They’ve helped us hone our vision, and Flipside is already way better because of it.
Stay tuned for the next post in this 3-part series on live acting in virtual reality. We have lots more to share! And if you're a content creator, make sure to sign up for early access to Flipside Studio!
By John Luxford, CTO & Co-founder - Flipside
Storytelling in virtual reality is about directing attention and engaging via interaction with the user. In film, there is the frame to direct attention, but in VR the user can look away when you didn't intend for them to and miss a key plot point or instruction.
Here, I'd like to explore how the secrets Teller says magicians use to manipulate audiences can be applied to build stronger narratives in VR.
Note: This post uses examples from common VR experiences, so it may contain spoilers if you haven't played them. I'll do my best to keep the examples simpler and relatively spoiler-free.
Repetition builds expectation, which will draw the user's attention back to the repeating action. For example, in Accounting, there's the repeated use of putting on a virtual headset to move to the next part of the narrative, or the miniature lawyers that keep popping out of the briefcase in the courtroom scene.
If we look to create common metaphors and repetition around choices the user is given, it helps build familiarity and understanding of their surroundings so they can get lost in the details of the experience itself as they progress through it.
By making something more elaborate, we can string the user along through multiple steps, or wow them with the perceived complexity of the task, even if it involves just two or three elements. At the extreme, a Rube Goldberg machine comes to mind, whose complexity can easily wow audiences both in and out of VR. Many simpler examples exist in VR due to the perceived physical nature of the medium.
In terms of immersion, this can be as simple as confirming a single interaction via multiple senses (visually via depth cues and believable movement, accurate spatialized audio, and haptic feedback to touch), which tricks the mind into believing the interaction was with a real object. This can lend believability to any interaction, from pressing a button to mimicking the feel of shooting a bow and arrow.
Probably the simplest but best example of this is a technique that originated in Job Simulator, which is that as soon as you pick up an object your virtual hand disappears and is replaced by the object itself.
Owlchemy Labs had discovered that when you're busy pouring virtual hot sauce on things and laughing at the hot sauce coming out of the virtual bottle, you don't even notice your hand isn't wrapped around the bottle anymore. This has the benefit of avoiding having to make the virtual hand look right when wrapped around a wide variety of objects with varying shapes, which when done wrong can highlight the limitations of the experience and break the sense of immersion as a result.
As an interesting aside, we opted not to use this technique in Flipside after much testing, because the perception of the viewer is more important than the perception of the actor in a show, and viewers found it odd that the actors' hands kept disappearing.
The frame in both magic on stage and story in VR is where the user's attention has been drawn. Fortunately in VR, the trickery is often in the form of making something appear or disappear, which can happen in a variety of ways.
You can make something appear right in front of the user to create an element of surprise, or, worse, you can make a monster appear beside them when they're not looking to create a jump scare.
To make a character disappear when they're no longer needed, they might walk out of a door or behind an obstruction and can be turned off when safely out of view. Meanwhile the user's attention is directed towards the next part of the experience.
This also relates to the example of combining multiple senses I used earlier, since much of VR's ability to immerse comes from the reinforcement of information across multiple sensory inputs. The eye can't be wrong if the ear heard it too.
But in terms of narrative and progression, combining two tasks or actions can be a great way to reinforce the direction of a user's attention, helping ensure they're seeing the key points along the way.
For example in Rick and Morty: Virtual Rick-ality, the need to use the microverse machine to complete the task of recharging the batteries adds a second layer of interaction to the original goal, which immerses the user in a deeper way.
Bringing physicality into the experience by requiring the user to carry out a task itself is a way of combining the story element with participation. When the user is busy doing, we can be busy setting up the next steps for them.
To use something from Flipside in an example, we have a VR episode of Super Secret Science Island in our not-yet-released Flipside viewer app where 2B throws a magic pencil out to the audience, breaking the fourth wall. If the viewer catches the magic pencil, the magic pencil grows to match the size of the viewer's hand. Most users we've tested don't even notice it happen, which means they've suspended their disbelief.
In fact, the magical action of throwing something out of the pre-recorded show and into your hand, akin to a character throwing something out of the television and into your living room, doesn't even seem that far fetched as it happens. The pencil appears in your hand as expected, and most users simply begin to draw with it.
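One plausible way to make a resize like that go unnoticed is to spread it over a handful of frames right as the object lands in the hand, while the viewer's attention is on the catch itself. The following is a speculative sketch, not Flipside's documented behavior:

```python
def resize_on_catch(current_scale, target_scale, frames=12):
    """Yield one scale value per frame, easing a caught object from its
    in-show size to a size matched to the viewer's hand. Run during the
    catch, the change is masked by the action itself."""
    for i in range(1, frames + 1):
        t = i / frames
        # Smoothstep easing: the change starts and ends gently, avoiding
        # a sudden pop that would draw the eye.
        t = t * t * (3 - 2 * t)
        yield current_scale + (target_scale - current_scale) * t
```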
Whenever the user can inspect an object for themselves, they'll believe what their senses tell them.
The simplest example here isn't exclusive to VR: any sandbox narrative-style game offers the user a limited set of choices, and as you make them you begin to feel that the game's outcomes are tailored to your individual path. In reality, there are only a handful of final outcomes in the game's sandboxed world.
As you can see, there are many parallels between VR narratives and traditional forms of live entertainment like theatre, improv, and magic. Magicians especially have had millennia to hone the craft of manipulating perception, which makes for a rich body of work to learn from as we craft our narratives in this new medium. Once again, what is old becomes new again, which might be the oldest trick in the book.