I won't lie. GDC has been a little slow this year. I think that's why everyone has been talking about the recently launched service OnLive.
OnLive allows you to run games on their servers, which are streamed through a set-top box to your TV (or computer). A controller hooks up to the box just like a console. There has been a lot of talk this week about digital distribution for games, so this is right in line with one of the largest themes at the conference. But there are a few things about it that I just don't understand.
First off, a bit of background. I studied high-end rendering in grad school. I had my own 16 machine cluster which I would interact with via my desktop. I wrote the code that compressed the simulation, streamed it over the network, and processed the user's interaction to be used by the simulation. You can even check out the movie. I also wrote up a white paper about graphics as a remote service which would have been my Ph.D. topic had I continued with it. That was a couple of years ago, but hopefully it will convince you that I'm not a complete idiot when it comes to this stuff.
Because the paper is incredibly long and boring, I'll sum up my thoughts here. I think that service-based graphics are great in a few different environments:
- The dataset is too large to ship. The national laboratories (LLNL, ORNL, etc.) used our technology because the fastest way for them to share data was via FedEx.
- Minority Report-style displays. The server provides centralized computation for thousands of small tasks where the cost of dedicated hardware at each display is prohibitive.
OnLive falls into the second category. They are "shifting the economics of the industry" by providing a microconsole so that the game simulation can be moved to the server. The problem is in the nature of the task. Games are inherently compute intensive. There's a reason that you need a behemoth of a machine to run Crysis.
For a service of this kind to make any money, you need to be able to support tens of thousands of users at the same time. Halo 3, for example, has 80k users online as I write this. Granted, this is across the entire world, but the hardware needed to run the simulation, rendering, and video compression for each of those sessions would be staggering.
So this is all just speculation and gut instinct at this point. Maybe OnLive has figured out something I haven't. And I do honestly want to think that they have - it was my research after all. Anyway, let's look at some numbers.
OnLive states that they can stream content from its servers to your house in less than 80 milliseconds. This means that any event (e.g. the user tries to shoot someone) generated on the client needs to be sent to the server, processed, and the results sent back. In an ideal situation, given OnLive's reported numbers, you get the next frame 80ms later. This is quite a while in real-time games. Effectively, you are capable of responding to actions and seeing the visual results at a maximum rate of about 12 events per second.
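To make that arithmetic concrete, here's the back-of-the-envelope version. The 80ms figure is OnLive's reported number; everything else just follows from it:

```python
# Back-of-the-envelope: how often can a player act and see the result,
# given OnLive's reported 80 ms end-to-end latency?
ROUND_TRIP_MS = 80  # OnLive's stated client -> server -> client time

events_per_second = 1000 / ROUND_TRIP_MS
print(f"Max interaction rate: {events_per_second:.1f} events/sec")  # 12.5
```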
The reason that games need to run fast isn't only because the graphics need to be smooth. It's because the user needs to be able to respond to their own actions. Streaming video doesn't have this requirement, which is why on-demand movies work so well. You still need to interact with the video when you pause or rewind but that interaction does not need to be in real-time; 80ms would be fine.
To provide another argument slightly more founded in theory, we can talk about the speed of light. OnLive says that they will launch with servers on both the West and East coast. The US is about 2600 miles across. So let's say I'm smack in the middle, 1300 miles from a server. The speed of light is 186,282.397 miles per second. So it takes 7ms to travel 1300 miles at the speed of light.
At that speed, I can respond to something, see the results, and respond again at a maximum rate of 70 times a second. This assumes a line-of-sight fiber-based connection between my computer and the server. If you also assume that games usually run at a rate of 60 frames per second, then it takes over 16ms to render a frame. Add 14ms for the round-trip of the data itself and you end up with a maximum frequency of interaction and response of just over 30 times a second.
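Here's the full speed-of-light budget worked out in one place. The distance and the 60fps frame rate are the assumptions from above, and this is a physical best case, not a prediction about real networks:

```python
# Latency budget for a player 1300 miles from the nearest server,
# assuming an ideal line-of-sight fiber link and a 60 fps game.
SPEED_OF_LIGHT_MPS = 186_282.397  # miles per second, in vacuum
DISTANCE_MILES = 1300             # halfway across the ~2600-mile US

one_way_ms = DISTANCE_MILES / SPEED_OF_LIGHT_MPS * 1000  # ~7 ms
round_trip_ms = 2 * one_way_ms                           # ~14 ms
frame_ms = 1000 / 60                                     # ~16.7 ms per frame

total_ms = round_trip_ms + frame_ms
print(f"round trip:       {round_trip_ms:.1f} ms")
print(f"interaction rate: {1000 / total_ms:.1f} Hz")  # just over 30
```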
I've actually played games at their booth here at GDC. Honestly, it's impressive. Apparently the server is 50 miles away, and the lag is pretty decent. The basis for comparison isn't quite there, though; I have the exact same experience playing games at home, so there isn't much new. The big question, then, is that stated 80ms of streaming latency, and specifically whether or not it matters. They do explicitly state that their technology can stream the data in "less than" 80ms. So maybe the real story is that they've come up with the most efficient video streaming ever.
Part of the discussion that I've completely left out so far is that of human reaction time. Generally, it takes about 190ms for a person to respond to a visual stimulus. Relative to the compression time, this is quite a while. So maybe it's possible that the user doesn't perceive a lag. My feeling is that, here at GDC, OnLive seems to be just below the point at which we begin to feel it. But the reality of the rest of the United States is that you have an outdated telecommunications infrastructure as well as increasing distances to metropolitan areas. This comes back to my first point: the cost to support these types of data centers all over the country seems like a pretty large undertaking.
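Putting the stated streaming latency next to that 190ms reaction-time figure (both numbers come from the discussion above; the comparison is mine):

```python
# How does the stated latency stack up against human reaction time?
REACTION_MS = 190  # rough time to respond to a visual stimulus
STREAM_MS = 80     # OnLive's stated streaming latency

# The streaming latency is a sizable fraction of reaction time, but
# still under it -- which may be why the booth demo feels playable.
fraction = STREAM_MS / REACTION_MS
print(f"latency is {fraction:.0%} of reaction time")  # 42%
```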
I certainly can't prove that any of this is impossible. However, the genuine feeling of skepticism coming from everyone here at GDC is undeniable. I hope that they can prove us all wrong because it'd be pretty cool if this works.