By now, you should have a fairly good understanding of predictive modeling and how it relates to games -- if not, see part 1 and part 2 of this series. We’ve set a good baseline, and this final installment does a little predicting on prediction, and looks to the future of the burgeoning industry.
After all is said and done, big data prediction is still new. We’re just in the beginning stages of discovering what this technology is capable of. So if you’re thinking about utilizing prediction in your gaming company now, that’s great news -- you’re getting in on the ground floor and you’re ahead of a lot of your competitors.
The bad news is that this is so new, there aren’t a lot of qualified analysts out there to interpret the data you’re churning out -- and there isn’t a lot of data to begin with. To create these models, you need power users. In social predictive analytics (discussed in the last post), these are the social whales: the big-name players generating lots of value, who teach the models a lot by proving or disproving their predictions. Prediction gets better by learning from player actions (the repeated patterns), and power users are at the core of these models. Without them, we can’t improve the models.
For a lot of game developers, the real gating item is not a lack of data, but a lack of qualified analysts. The academy (and I’m part of that problem) just doesn’t train people for these roles well yet. We either train good technicians who can handle data, or good interpreters, but rarely those who can do both. Here’s someone else’s deeper thinking on the issue. From my vantage point, the biggest shortage is in people who understand the right questions. Technical skills are easier to find, but taking business processes and fitting data to them is tougher. That’s where social scientists can shine.
The problem for everyone, though, is a lack of frictionless, lay-friendly tools. These models aren’t the “Up and running in 30 minutes!” analytics programs that you can install in an afternoon. They’re big, bulky, and take time to implement. Right now, this isn’t an easy thing for game developers to roll out: you have to be committed to knowing your players, and take the time to gather good data before you start seeing a return. You need to deal with an integration or an SDK, and once it’s up, the tools need to be understandable and accessible to all levels of management. Easier said than done, but possible.
Sounds pessimistic, right? Like I’ve said before, prediction isn’t magic -- gaming prediction, in particular, is lots of science and hard work. It’s not all bleak, though: there are good tools coming online and best practices are starting to take shape. There are researchers in the trenches right now, interpreting data and building better models to bring this industry up to speed. And once it gets going, it won’t stop.
Eventually, we’ll get to a point where prediction is commodified and a standard part of every dashboard. It’s my hope that a Social Value score will be right up there with a K-Score or LTV in terms of measurement.
But if prediction is so valuable, you might ask, why aren’t analytics companies all making the transition to prediction?
Well, there’s a continuing struggle between power and understandability. To put it simply (as you might have picked up from the series so far), analytics are becoming increasingly complex. As this happens, we get more powerful and accurate results -- but those results often come at the expense of comprehension.
The new computer science models we’ve been discussing for prediction beat regular social science and business school approaches. Hands down. The proof is in the pudding: we’re getting over 90% accuracy with some models -- levels you would never reach with “traditional” approaches (e.g. regression, logit models).
There is a flip side, though. These models aren’t traditional, so we aren’t getting understandable results: the output comes in tables, if-then statements, rule sets and other long, unfathomable formats. No human being can simply intuit them. I’ve spent the better part of my career figuring out data and the human nature of gamers, and after two solid pages of if-then statements, I still have no idea what they mean.
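To give a flavor of why those rule sets are so hard to read, here’s a toy sketch. The feature names and thresholds are hypothetical, not output from any real model -- the point is just that three binary splits already yield eight distinct if-then rules, and real models branch on hundreds of features.

```python
import itertools

# Hypothetical player features and split thresholds -- purely illustrative,
# not taken from any real game or model.
features = [("sessions_7d", 3), ("friends_invited", 5), ("gifts_sent", 10)]

# Every combination of "went left" / "went right" at each split
# becomes its own if-then rule.
rules = []
for ops in itertools.product(("<=", ">"), repeat=len(features)):
    conditions = " AND ".join(
        f"{name} {op} {threshold}"
        for (name, threshold), op in zip(features, ops)
    )
    rules.append(f"IF {conditions} THEN predict_spend = ...")

for rule in rules:
    print(rule)
print(f"{len(rules)} rules from only {len(features)} binary splits")
```

Double the feature count and the rule count squares; that’s how two pages of if-then statements happen.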
So how can we place our faith in models that we don’t understand? Because, thanks to all of our research and testing, we know that it just “is.” However the sausage is made, it tastes good. It’s accurate and stands up to testing and repeated trials. It’s a black box that works.
To put it directly in the context of gaming, imagine you want to know if Player A is going to spend money next month. You have two models: one that will tell you if she will with an 85% accuracy rate, but you can’t know why, and another with a 40% accuracy rate and a full explanation. Which box do you want?
From a practical, actionable point of view, it’s actually an easy question to answer. I would choose the model with a high accuracy rate, every time. If you’re thoughtful and going to run interventions and test their effectiveness anyway, you’re going to develop a theory and get to the “why” eventually.
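To make that concrete, here’s a back-of-the-envelope sketch. All the numbers are made up for illustration: suppose you send a $0.50 promotion to 1,000 players a model flags as likely spenders, and each correctly flagged spender is worth $5.

```python
def expected_profit(accuracy, n_flagged=1000, promo_cost=0.50, spender_value=5.00):
    """Naive expected profit: treat accuracy as the fraction of flagged
    players who really do spend. (A rough simplification for illustration.)"""
    true_positives = accuracy * n_flagged
    return true_positives * spender_value - n_flagged * promo_cost

# 85%-accurate black box vs. 40%-accurate explainable model
print(f"black box:   ${expected_profit(0.85):,.2f}")   # black box:   $3,750.00
print(f"explainable: ${expected_profit(0.40):,.2f}")   # explainable: $1,500.00
```

The gap is the price of the explanation -- and it compounds with every campaign you run.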
Don’t get me wrong: I’m a long-time modeler who likes to know the “why”. But if I can get 80%+ confidence levels without it, the price is often worth paying. I would prefer to have other parts of my dashboard focus on “why” issues, and I’ll focus my energies on making those parts accessible to the design and community teams. They understand the game context best, so they need tools they can grok. It’s the smarter, more actionable entrance to the rabbit hole.
As I said when I started the series, no one can truly predict the future. But by taking observations of patterns and predictions, and adding a heavy dose of data science, we’re getting pretty darn close.