This is a (rather technical) postmortem about our VR experience “Sherlock Holmes - The Wagner Ritual”, which was made during the Gear VR Jam 2015 and is now out on the Gear VR store. The released experience has the same content as the VR Jam version; the only changes were bug fixes, performance improvements, and the addition of Gear VR-specific features. Keeping that in mind, the whole experience was made in roughly four weeks.
I am often asked how we achieved such high quality panoramic photos and how we set up the actual photo shoot. Instead of answering everyone individually, here’s how we did it:
As the photos would be the centerpiece of the experience, it was crucial to shoot them in the best quality we could achieve. Our own GoPro rigs are capable of producing the desired photosphere, but the quality is lacking. On top of that, the high overlap between pictures produces a lot of stitching errors and requires a large amount of post-processing, for which we simply didn’t have time during the game jam.
Our solution (and salvation) was the ‘roundshot VR Drive’ camera arm, which takes the required photos for a spherical, panoramic picture automatically. The ability to use almost any high quality camera (the list of compatible cameras is very long) to take pictures enabled us to produce panoramic shots with a resolution of 9500x4750, which really showed in the end product.
The ‘roundshot’ automatically takes a set number of pictures at precise angles, which eliminated the need for a large overlap and immensely decreased the time it takes to stitch the spherical picture. The ability to control the arm remotely via a smartphone app and to pause the recording at any moment was a huge help during the actual shoot.
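The shot pattern behind such a motorized head can be sketched with a little arithmetic: given the lens’s horizontal field of view and a desired minimal overlap, the yaw step between shots in a row follows directly. A rough Python sketch (the FOV and overlap numbers are illustrative assumptions, not the roundshot’s actual settings):

```python
import math

def shots_per_row(h_fov_deg, overlap_frac=0.2):
    """Number of evenly spaced shots needed to cover 360 degrees of yaw,
    given a horizontal field of view and a fractional overlap between shots."""
    step = h_fov_deg * (1.0 - overlap_frac)  # effective new coverage per shot
    return math.ceil(360.0 / step)

def yaw_angles(n):
    """Evenly spaced yaw angles (degrees) for n shots around a full circle."""
    return [i * 360.0 / n for i in range(n)]

# Example: a lens with ~60 degrees horizontal FOV and 20% overlap
n = shots_per_row(60, 0.2)
print(n)              # -> 8 shots per row
print(yaw_angles(n))  # -> [0.0, 45.0, 90.0, ..., 315.0]
```

Because the head repeats exactly these angles on every pass, shots from different sequences line up pixel-for-pixel, which is what makes the swap trick described below possible.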
We always took at least three versions of the same picture sequence without moving the camera. Due to the precise angles of the ‘roundshot’, this enabled us to assemble the final photosphere from the best parts. So if one actor moved a bit during the exposure time, we didn’t have to throw away the whole picture - we could just take a good angle of them from another sequence. In this way, we could focus our effort on producing great pictures instead of fiddling with the stitching.
For most indoor shots with non-moving elements, the standard exposure of the camera was fine. After some experiments, we set the exposure time to 0.5 seconds for most outdoor shots with moving elements in the background. That way our motionless actors came out crisp, while everything else, which was unimportant, was slightly blurred. This created a sense of motion and doubled as protection against privacy concerns, as all faces and number plates were automatically blurred.
Here’s a full resolution panorama to experiment with:
The Photo Shoot
During the actual photo shoot, we had a very small crew of just one photographer, the actors and a coordinator. (Though we quickly attracted other interested parties, so in the end the group was quite large.) If you are recording a scene in 360°, you can’t have any of the crew present in the pictures, which is a challenge if the whole picture is recorded at once. But because the camera arm takes its pictures sequentially, everyone not involved can simply stand behind the camera while a picture is taken.
This enabled us to take great pictures, even in relatively dark places, like the cellar in chapter 4. Someone standing behind the camera can always hold a light and make sure that the photo is lit correctly and in tune with the other photos. This ensures good exposure and even pictures, which lessens the need for extensive post-production even further. In the end, the whole process looked a bit like a very slow merry-go-round, with the lighting person and the photographer moving in time with the camera arm’s rotation.
It was also important to set up actors in a way that they would only ever be in one picture, to avoid any nasty stitching errors due to slight movement between the pictures.
Moving in a circle and lighting everything inside this narrow cellar space was a challenge, but great fun!
Following the script, the game was set up into a number of scenes, each represented by an individual photo. As this was the first time we attempted to shoot photospheres in this context, you can see a progression in quality of the content from the first photos we took (chapter 3) to the last ones (chapter 4).
Because the background would always be unmoving, we needed to set up all elements in a way that the player would want to look around and explore them, compensating for the static look. There are a lot of additional elements to the picture in the final experience to achieve this, and some work better than others, but here I want to focus solely on the picture composition.
Every scene conveys more content through the dialogue than can be shown in one picture. The actors are positioned so that they represent the main action of the scene, but can also be viewed in sequence. We tried to direct the player’s gaze through the picture by making the characters look at each other and at parts of the environment. When the player first arrives in a scene, he will almost always see one character who is occupied with something else in the room. Following the gaze of the actor, the player simultaneously discovers the room and follows the story. (Similar to the composition in classical paintings, which creates movement and lines for the eyes to follow.) In the end, you can never be sure where the player will actually look; you can only give him as many visual clues as possible.
The Unity Integration
To achieve high picture quality with a stable frame rate, we opted to display a single photosphere instead of a stereoscopic pair. The in-app picture resolution was 4096x2048, which stayed within the texture memory limits without crashing, even when loading additional textures for interface elements.
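For a sense of scale, the memory footprint of such a texture is easy to estimate. Assuming an uncompressed 4-bytes-per-pixel format (an assumption for illustration; Unity’s compressed mobile formats use considerably less):

```python
# Back-of-the-envelope texture memory estimate for one photosphere.
# Assumes uncompressed RGBA at 4 bytes per pixel (illustrative only);
# compressed mobile texture formats shrink this substantially.
width, height, bytes_per_pixel = 4096, 2048, 4
size_mb = width * height * bytes_per_pixel / (1024 * 1024)
print(size_mb)  # -> 32.0 MB per photosphere before compression
```

At roughly 32 MB per uncompressed sphere, it becomes clear why we kept only one photosphere and a handful of interface textures in memory at a time.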
The picture is mapped to the inside of a sphere, with the VR camera in the middle, which is a pretty standard setup. This part of the process was the easiest.
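Under the hood, this standard setup relies on the equirectangular projection: every view direction from the sphere’s center corresponds to one (u, v) coordinate in the panorama. Unity handles this through the sphere’s UVs, but the mapping itself can be sketched independently (a minimal illustration, not our actual app code):

```python
import math

def direction_to_uv(x, y, z):
    """Map a unit direction vector to equirectangular (u, v) in [0, 1].
    u follows the yaw (longitude), v the pitch (latitude)."""
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = 0.5 + math.asin(y) / math.pi  # expects y in [-1, 1]
    return u, v

# Looking straight ahead (+z) hits the center of the panorama,
# looking right (+x) lands three quarters of the way across.
print(direction_to_uv(0.0, 0.0, 1.0))  # -> (0.5, 0.5)
print(direction_to_uv(1.0, 0.0, 0.0))  # -> (0.75, 0.5)
```

The only Unity-specific wrinkle is that the texture must render on the sphere’s inner faces, typically by flipping the mesh normals or using a front-face-culling material.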
The app is made completely in Unity 5 with the help of NGUI (for rendering performance in the interface) and the latest Oculus Mobile SDK.
“Sherlock Holmes - The Wagner Ritual” is our first step into making narrative content for Virtual Reality. Due to time constraints, the experience turned out rather passive, as all energy went into producing the content and finishing up the app in only four weeks. We decided not to change anything about the content for the release, to reflect the achievement of producing a story of this length in such a short amount of time.
We like to look at “The Wagner Ritual” as a technical prototype and a good way to collect experience - while having a lot of fun making it. With a bit of luck, we can begin working on the next instalment of “Sherlock VR” soon. :)