

Latent potential: How souped-up physics engines could transform gaming

At the "sweet-spot" between the cost of hardware and the demands on programmers, the platforms we use to run today's software have the latent capacity for direct modern physics simulations, while modern physicists use hardware that's just as inexpensive.

Daniel Strano, Blogger

March 4, 2013


In software development, great overall efficiency is gained through a compromise: abstraction reduces the demands on programmers at the economically small cost of added computational overhead. The software market naturally seeks this point of maximum efficiency, and the value of the compromise is proven in profit.

The effectiveness of balancing abstraction against hardware capacity has been apparent for decades, but an effect that may be less well recognized is the huge reserve of latent "absolute" hardware capacity that typically accrues at that economically optimal balance point. A typical household computer now exceeds the capacity of supercomputers from only two or three decades ago.

Programmers know this, and we know that optimizing to "squeeze cycles" beyond a certain point is often exactly the opposite of optimizing for economic gain. Today, however, unlike even a decade ago, the latent hardware power accessible through low-level design on household computers is sufficient for tasks as computationally expensive as animating relativistic and quantum physics simulations in real time.

In game development, developers' imaginations can always outrun the maximum capacity of available hardware. High-powered computation is by no means the same thing as a good game, but by the same token, building on relativity and quantum mechanics certainly doesn't ensure a bad one.

 

A number of developers try to incorporate such effects by imitating their qualitative feel, often creating fantastic games in the process, like Braid. Direct simulation of modern physics in games isn't necessarily out of reach, though.

 

As a physics graduate student, I saw the sorts of programs that were typically run on my university's cluster at our physics department's request. For the most part, specialization in physics limits the time researchers can invest in developing software, and physical simulations are often written in very high-level scripting languages.

 

Even when ostensibly less abstract languages like Fortran are used, scientists often rely on the high-level features of recent language standards. As in the commercial software market, this is generally the point of compromise between demands on the programmer and available hardware, particularly when a university cluster is shared by many departments within a school as well as by research groups not affiliated with the institution.

 

The capacity needs to be available for the biggest projects, but periods of low computational load still arise; efficiency is recovered through project scheduling. When a scientific computation will exceed allotted cluster resources, significant optimization is considered, but periodic excess cluster capacity reduces the pressure to optimize many research applications. As in the commercial software market, programmer time is the greater cost when unused cluster capacity is merely a missed opportunity.

 

Many computationally expensive math and physics simulations, like iterative integrations and numerical differential equation solvers, can be efficiently and effectively implemented in fewer than 20 lines of code, yet they can still run for minutes or hours even as austere C programs. Such programs often require less than 1KB of memory, and it's not unusual for them to produce output only when the iterative process finally terminates.
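To make that concrete, here is a minimal sketch of the kind of program in question (not code from any particular research group): a fixed-step fourth-order Runge-Kutta integrator for a single ordinary differential equation. The right-hand side and step count are arbitrary choices for illustration, but the entire working state fits in a handful of doubles, and output appears only after the loop completes.

```c
#include <stdio.h>
#include <math.h>

/* Example right-hand side dy/dt = f(t, y); the choice of f is arbitrary here. */
static double f(double t, double y) { return -y + sin(t); }

int main(void) {
    double t = 0.0, y = 1.0;          /* initial condition */
    const double h = 1e-8;            /* tiny step size -> long run time */
    const long steps = 1000000000L;   /* a billion iterations */

    for (long i = 0; i < steps; ++i) {
        /* Classic fourth-order Runge-Kutta update. */
        double k1 = f(t, y);
        double k2 = f(t + 0.5 * h, y + 0.5 * h * k1);
        double k3 = f(t + 0.5 * h, y + 0.5 * h * k2);
        double k4 = f(t + h, y + h * k3);
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
        t += h;
    }

    /* Output only appears once the whole iteration has finished. */
    printf("y(%g) = %.12f\n", t, y);
    return 0;
}
```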

 

Even before considering gains from parallelism, the run times of such programs could be cut to a tiny fraction simply by transferring the program from a typical working set in main memory to one mostly localized in cache, as can easily be accomplished with OpenCL. A simple, low-level optimization like moving from "host-side" to "device-side" in this way is also not locked to a particular platform, because of the relative universality of OpenCL.
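As a rough sketch of what moving "device-side" can look like, assuming an OpenCL 1.x runtime is available, the iterative loop can be expressed as a kernel so that the working state stays in on-device registers and cache instead of traversing host RAM on every step. The kernel, its parameters, and the trivial Euler update below are illustrative only, and error checking is omitted for brevity.

```c
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

/* Kernel source: each work-item iterates its own ODE entirely on the device. */
static const char *src =
"__kernel void integrate(__global float *y, const float h, const int steps) {\n"
"    int gid = get_global_id(0);\n"
"    float v = y[gid];\n"
"    for (int i = 0; i < steps; ++i) {\n"
"        v += h * (-v);   /* Euler step for dy/dt = -y */\n"
"    }\n"
"    y[gid] = v;\n"
"}\n";

int main(void) {
    enum { N = 1024 };
    float y[N];
    for (int i = 0; i < N; ++i) y[i] = 1.0f;

    /* Minimal host setup: one platform, one device, one queue. */
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(y), y, NULL);
    float h = 1e-6f; int steps = 1000000;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(h), &h);
    clSetKernelArg(k, 2, sizeof(steps), &steps);

    /* The million-step loop runs device-side; the host only reads the result. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(y), y, 0, NULL, NULL);

    printf("y[0] after %d steps: %f\n", steps, y[0]);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```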

 

We don't need the capacity of a cluster to implement these simulations; it's simply that the people with the most obvious incentive to run these calculations are often researchers with some access to clusters. Speaking from my own experience, a finite-difference, two-dimensional relativistic quantum mechanics simulation on a 256x256 spatial grid can easily be animated on a 2.53GHz Intel Core i3 with a trivial amount of general RAM. The bulk of the visualization, including interpolation between grid points, can be handled on the GPU while the direct physics simulation runs on the CPU.
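The original simulation's exact equations aren't reproduced here, but as a sketch of the finite-difference approach, assuming a free Klein-Gordon field (a simple relativistic wave equation) as a stand-in, a leapfrog update on a 256x256 grid looks roughly like this, with illustrative grid spacing and time step chosen to satisfy the stability condition:

```c
#include <stdio.h>

#define N 256          /* 256x256 spatial grid, as in the post */

/* Field at the previous, current, and next time step. */
static double prev[N][N], curr[N][N], next[N][N];

int main(void) {
    const double dx = 0.1, dt = 0.05;   /* illustrative spacing and time step */
    const double m  = 1.0;              /* mass in natural units (c = hbar = 1) */

    /* Initial condition: a small bump at the middle of the grid. */
    curr[N/2][N/2] = 1.0;
    prev[N/2][N/2] = 1.0;

    for (int step = 0; step < 1000; ++step) {
        for (int i = 1; i < N - 1; ++i) {
            for (int j = 1; j < N - 1; ++j) {
                /* Five-point Laplacian of the field at (i, j). */
                double lap = (curr[i+1][j] + curr[i-1][j] +
                              curr[i][j+1] + curr[i][j-1] -
                              4.0 * curr[i][j]) / (dx * dx);
                /* Leapfrog update of the Klein-Gordon equation:
                   d2phi/dt2 = laplacian(phi) - m^2 * phi */
                next[i][j] = 2.0 * curr[i][j] - prev[i][j]
                           + dt * dt * (lap - m * m * curr[i][j]);
            }
        }
        /* Rotate the time levels; boundaries stay fixed at zero. */
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                prev[i][j] = curr[i][j];
                curr[i][j] = next[i][j];
            }
    }

    printf("field at center after 1000 steps: %g\n", curr[N/2][N/2]);
    return 0;
}
```

In a game, each time step of a loop like this would feed the GPU-side visualization described above, with the CPU advancing the field between rendered frames.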

 

This is more than sufficient for inclusion in a smooth-running, smooth-looking game, and it isn't close to a ceiling of optimization. We already have the capacity if we want to draw on it; we don't have to play at the feel of relativity and quantum mechanics in games today, if we would rather implement the real thing. The practicality of including these effects in 2013, of course, still hinges on the cost of carrying out the optimization necessary to tap the latent potential of the hardware we already have.

 

In gaming, we seem to have an incentive to push the frontiers of modern computational physics on home computers and consoles in order to produce totally new types of game physics engines. Surprisingly, I think game developers have more incentive to make these optimizations than researchers do. Many physics researchers are trained in the use of existing particle dynamics simulation software rather than in writing such software directly. The by-products of modern physics game engine software could potentially find application in research as well.

 

The industry's focus on high-level design seems to be an inadvertent barrier to making games with modern physics engines. The vast majority of applications do not particularly benefit from minimally abstracted code, but perhaps system designers should place renewed emphasis on support for low-level programming, given just how much computational power we really could squeeze out of even today's cell phones.

 

Those of us who've wanted to see true time dilation, length contraction, particle superposition, and entanglement in video games should no longer ask, "When will we have the hardware for this?" We have more latent potential for computation in our hands today than entire countries had just decades ago.
