This summer I had the chance to be part of a student-led production team from the Guildhall at Southern Methodist University. The project was sponsored by the psychology department at SMU with the goal of updating a 10-year-old training simulation that helps prevent dating violence. Although a decade old, the first project received national acclaim in the US and was even covered by Newsweek.
Here are some links concerning the project:
Seeing that the original simulation was 10 years old and made with the Source Engine, the team didn’t think there would be much trouble in replicating the features of the old project while sprucing up the visuals to increase participant immersion. To understand what we were aiming to reproduce, let me describe how the original system worked. The old simulator placed participants into a virtual environment using a VR device, where they would interact with an avatar puppeteered by a trained actor/facilitator. The actor’s speech was captured in real time and replicated by the avatar in the virtual environment. Keyboard hotkeys were linked to the avatar’s movements and facial expressions, resulting in a surprisingly wide range of actions and emotions for the actor to draw on.
All said, the scope of the project and its requirements were well defined, since we had the previous version as our guide. So it’s no surprise that most of us thought the project would be straightforward: after a whole lot of rowing, we would row the production boat to its destination without much trouble. That’s not what happened, but here’s the postmortem for what really did.
What Went Well
No one foresaw how difficult it would be to find the tech to reproduce the original. The Source Engine was truly ahead of its time in some ways. The last project was built entirely around the Source Engine’s lip-sync capabilities, and finding a modern equivalent was challenging. While all this sounds like something that should be in the “what went wrong” section, the team’s ability to do research and development was one of its strong suits. Within a couple of weeks we were able to narrow down the tech we needed, and within a couple more, to work out possible solutions to our seemingly unique requirements. I say unique because lip-sync with runtime playback is not available out of the box in any current commercial engine. There is simply not a market for it. So in order to pull this off and stay within budget, we had to get creative.
We originally wanted to develop the project in Unreal Engine 4 because the quality of the visuals we could obtain with it would greatly increase immersion. However, due to the lack of any built-in lip-sync support, we had to base our engine decision on the tools that would let us pull off the lip-sync. We went with Unity 5 because of a simple lip-flap solution offered on the Asset Store called Salsa. This move allowed us to mimic the lip-sync functionality adequately while letting us iterate further at a later date using more expensive facial animation tech from Dynamicxyz if additional funding became available. Overall, we learned a lot in a short amount of time and were able to make considerable progress by combining R&D with rapid prototyping.
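For readers unfamiliar with the distinction: lip-*flap* solutions like Salsa don’t analyze phonemes at all; they simply drive the mouth open and shut from the loudness of the live audio. Here is a minimal, engine-agnostic sketch of that idea in Python (the function names and thresholds are my own illustration, not Salsa’s actual API): compute the RMS amplitude of each audio frame, gate out background noise, and map the remainder to a 0–1 mouth-open value that would feed a blend shape or jaw bone.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one audio frame."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_openness(samples, floor=0.02, ceiling=0.5):
    """Map frame amplitude to a 0..1 mouth-open value.

    Below `floor` the mouth stays closed (a simple noise gate);
    at `ceiling` and above it is fully open. Both thresholds are
    illustrative and would be tuned per microphone setup.
    """
    level = rms(samples)
    if level <= floor:
        return 0.0
    return min(1.0, (level - floor) / (ceiling - floor))

# A loud sine-wave frame opens the mouth; a near-silent one keeps it shut.
loud = [0.4 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(256)]
quiet = [0.005 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(256)]
print(mouth_openness(loud))   # noticeably open
print(mouth_openness(quiet))  # 0.0 (gated as silence)
```

Crude as it is, running this per audio frame at 30–60 Hz produces mouth motion that tracks a live actor’s speech closely enough to sell the illusion in VR, which is why it fit our real-time puppeteering requirement where phoneme-based lip-sync tools (built for pre-recorded clips) did not.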
What Went Wrong
Because this project was student led, we didn’t have the luxury of delaying it until the necessary assets became available. As I stated before, we originally planned to develop the project in UE4, and might still have if we could have found a programmer familiar enough with UE4 to deliver the functionality we needed. Although UE4 is fairly new, I was surprised by the lack of available programmers willing to work on the project, especially since it was a chance to develop for the Oculus Rift. However, as soon as we made the switch to Unity 5, we had a programmer within a day.
We also ran into the perfect storm of communication issues. The team consisted of a mixture of students and contractors. The students worked together during specified times, while the contractors largely worked from separate locations and at different times. This undermined the effectiveness of Agile with Scrum, as the daily scrum meeting became a weekly one, often without all members in attendance. It was supplemented by a flurry of emails at all hours of the day. Without a doubt this led to confusion, delayed work, and decreased productivity.
What We Learned
I am definitely walking away better for this project. Very few people get to be part of a true research and development project, even if that is not how the project was initially envisioned. It also opened my eyes to the huge gap in commercially available tech and tools for developers of serious games. With educational institutions putting more emphasis on this area of learning, there may be a market for simulation-type tools in the near future.
Secondly, a virtual organization needs a virtual workspace. Our team was unable to use any communication tools except email, Skype, and phone, which significantly hurt communication. Having a virtual workspace like Flowdock, Yammer, Jira, or Trello would have let us stay in communication more easily and even enabled us to hold Scrum meetings as a team virtually. In addition, having some overlap between the team’s working hours would have let us pass information along more effectively without delaying the workflow.
In conclusion, I’ll endeavor never to underestimate the work that goes into serious games again. Serious research and development and truly creative solutions are critical to bringing these projects to life, and while they might not be as flashy as AAA titles, they can certainly make a meaningful impact in the lives of others.
Many Thanks to the Fantastic Team
Prof. Elizabeth Stringer – Executive Producer
Kevin L. Morris – Assistant Producer
Marcelo Raimbault – Assistant Producer
Matt Miller – Level Designer
Lucas Vasconcelos – Programmer
Prof. Joel Farrell – Artist
Prof. Boris Fisher – Artist
Mat Toellner – Artist
Taylor Wright – Animator