Taken from OnLive blog
http://blog.onlive.com/. Posted May 12, 2009.
To the user, OnLive is exceptionally easy to use: The latest high-end titles are available on PC and Mac (via a small browser plug-in) or on TV (via the OnLive MicroConsole). Games start up instantly. There is no physical media, no downloads, no patches, no updates, and no high-end hardware is needed to play the games. Pretty much any XP/Vista PC or Intel-based Mac will work. And, you never need to upgrade your PC, Mac or MicroConsole™: you’ll continue to be able to play increasingly higher performance games on your existing PC, Mac or MicroConsole.
But to achieve this level of simplicity, “behind the scenes” OnLive is an immensely complex computing system. It’s a type of computing system called a “cloud computing” system, because computing occurs in a data center within the Internet (aka “the cloud”). But OnLive is quite different from typical cloud computing systems. In this post, we’ll explore one of those differences: how in a typical OnLive session you use many different computers (called “servers” because they are in data centers), and how you seamlessly transition from one server to another.
When you are using OnLive, while it seems like you are using just one immensely powerful server that is constantly providing a non-stop video experience, nothing could be further from the truth. Actually, from the moment you start up OnLive you are using many servers working together in a myriad of different ways, sometimes with a server dedicated to your use, sometimes sharing a server with other users, sometimes using several servers at once, and sometimes a combination of some or all of the above.
For example, let’s consider a typical user who tried out OnLive at the Game Developer Conference and played for five minutes (say, navigating the user interface, playing a few games, snapping and watching Brag Clips™, and spectating other users playing). Given that range of activities, the user easily used more than a dozen servers at different times. Just to identify a few: some OnLive servers ran (i.e. “hosted”) particular games, other OnLive servers hosted the user interface, and others handled the distribution of spectating video streams and Brag Clips.
As the user transitioned from one experience to another (e.g. clicking in the user interface to start a game), OnLive would “hand off” the user from one server to another, transferring the “user state” (e.g. all the data unique to that user, including the live characteristics of the Internet connection) from the user interface server to the game server, while switching the live compressed HDTV video/audio from the user interface server to the game server. And, all of this occurred on a video frame boundary. So, from the point of view of the user, it seemed like the video was just continuing onward from the user interface video to the game video as if it was running on the same server. In actuality, it was seamlessly handed off from one server to the next.
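The handoff described above can be sketched as a toy model (a minimal Python sketch; the class names, the fields in the user state, and the frame-boundary logic are all illustrative assumptions, not OnLive's actual code):

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """Data unique to one user, including live connection characteristics."""
    user_id: str
    bandwidth_kbps: int
    latency_ms: int

class Server:
    """A hypothetical data-center server hosting user sessions."""
    def __init__(self, name: str):
        self.name = name
        self.sessions: dict[str, UserState] = {}

    def accept(self, state: UserState) -> None:
        self.sessions[state.user_id] = state

    def release(self, user_id: str) -> UserState:
        return self.sessions.pop(user_id)

def hand_off(source: Server, target: Server, user_id: str,
             current_frame: int) -> int:
    """Move the user state from source to target, timing the video
    switch so it lands exactly between two frames."""
    boundary = current_frame + 1      # first frame the target will deliver
    state = source.release(user_id)   # source stops serving this user...
    target.accept(state)              # ...and the target takes over
    return boundary
```

Because the switch happens on the frame boundary, the viewer sees one continuous video stream even though a different machine is producing every frame after the handoff.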
And, when using OnLive, you are using even more servers than just the ones you transition to. For example, massive spectating (when you watch lots of thumbnail video windows of live games being played) is tapping into “IP Broadcasts” (i.e. data broadcasts over OnLive’s internal networks) of the live video generated by many different servers hosting many other users. And, when you play back a Brag Clip, yet another server is handling that for you. So, one question you might ask is, why does OnLive go to all of this trouble to transition users around from server-to-server? There are four main reasons: 1) many things (e.g. massive spectating) simply can’t be done with one server, 2) it allows us to always provide users with state-of-the-art performance, 3) it dramatically lowers our cost of operations, and 4) it dramatically reduces power consumption.
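The massive-spectating idea, where one viewer taps broadcast streams from many game servers at once, can be modeled as a simple publish/subscribe fan-out (a hypothetical sketch; `BroadcastBus`, the channel names, and the in-memory queues are stand-ins for OnLive's internal IP-broadcast network):

```python
from collections import defaultdict

class BroadcastBus:
    """Toy stand-in for a data broadcast over an internal network:
    every frame published on a channel reaches every subscriber."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel: str) -> list:
        queue = []                            # one queue per thumbnail window
        self.subscribers[channel].append(queue)
        return queue

    def publish(self, channel: str, frame: str) -> None:
        for queue in self.subscribers[channel]:
            queue.append(frame)

# One spectator tapping live video from two different game servers:
bus = BroadcastBus()
thumbnails = {game: bus.subscribe(game) for game in ["game-a", "game-b"]}
bus.publish("game-a", "frame-0")   # the server hosting game-a broadcasts
```

The point of the broadcast model is that the game servers do no extra work per spectator; the network fans the same stream out to however many thumbnail windows are watching.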
For example, if a user decides to play a very high-performance game, the user will be transitioned to a very high-performance server that can handle the game. If the user is running a lower-performance game, then the user will be sent to a lower-performance server (OnLive has many classes of servers), and, in the case of many games, we can have more than one user share a single server without any impact on gameplay (“real-time virtualization”).
Every six months or so, we install new servers with the latest GPU and CPU technology, able to run the latest most advanced games. But the older servers are still fine for running lower-performance games (or, say, the OnLive user interface), and users never know what server(s) or shared servers are hosting their games. Needless to say, this not only gives gamers access to the very latest gaming hardware, but it also dramatically reduces OnLive’s costs of operation since, at any given time, many users are playing games (or in the OnLive user interface) and require less than state-of-the-art performance. And, from a user’s point of view, the experience is always fast and high quality because each game plays on a server providing the level of performance required. But of course, behind the scenes, OnLive transitions the user seamlessly from server-to-server, leaving the user with the perception of simply having one incredibly high performance and flexible computer.
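The placement policy described in the last two paragraphs, matching each game to the least powerful server class that can run it well, can be sketched like this (an illustrative Python sketch; the class names, performance units, and sharing factors are invented for the example, not OnLive's real figures):

```python
# Hypothetical server classes, ordered from oldest/cheapest to newest:
# (class name, relative performance, how many users can share one server)
SERVER_CLASSES = [
    ("legacy", 1, 4),   # older hardware: fine for the UI and light games
    ("mid",    2, 2),
    ("latest", 4, 1),   # newest GPUs/CPUs: one user per server
]

def place_user(required_perf: int) -> tuple[str, int]:
    """Pick the least powerful class that still meets the game's needs,
    so state-of-the-art servers are reserved for games that require them."""
    for name, perf, share in SERVER_CLASSES:
        if perf >= required_perf:
            return name, share
    raise ValueError("no server class can run this game")
```

Under a policy like this, a user in the interface or a light game lands on older, shared hardware, while a demanding game gets a dedicated state-of-the-art server; the user never sees which class is serving them.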
Finally, OnLive consumes far less energy by only providing each user with as much computing power as is needed for the particular task the user is doing. Not only is this good for the environment (particularly if the user is using an OnLive MicroConsole in the home, which only consumes about a few watts), but it also further reduces OnLive’s costs of operation. Good deal all around.
So, OnLive not only provides you with far more computing power than any single computer or console when you do need it, OnLive provides you with far less computing power when you don’t. Gameplay is always state-of-the-art, but cost of operations and energy consumption are minimized.
I hope you found this OnLive tech overview interesting. As you can imagine, designing and building this technology was really fun. It’s rare to have a chance to design a mass-market system based on a completely different view of computing, yet one that provides an experience to the user where all of the complexity and tricky engineering is invisible.
More cool tech postings to come…