
Sponsored Feature: An Interview with Intel's Mike Burrows


July 30, 2009


By Gamasutra

[In this Intel-sponsored feature, part of the Visual Computing website (http://www.gamasutra.com/visualcomputing), the company's Mike Burrows, Senior Graphics Software Architect Manager, talks about the upcoming Larrabee chipset and its relevance for video games.]

Developing new computing hardware requires both an understanding of current industry trends as well as a longer-term view of where technology is going. Mike Burrows, Senior Graphics Software Architect Manager for Intel, has his sights on the far-reaching strategic view, determining how upcoming technologies like Larrabee can help take developers into the future.

Prior to his role at Intel, Mike spent a decade at Microsoft, where he worked with game developers to help them fully exploit DirectX and participated in the earliest stages of the project that eventually became the Xbox. He also co-founded Microsoft's graphics advisory board, which includes top-tier studios like Blizzard, id, and Epic.

Today Mike continues working with those and other notable creators for Intel, where he serves as a liaison between third-party game developers and Intel's own visual computing group. The editors of Gamasutra spoke with him about how Larrabee will introduce new development possibilities, how the relationship of game design and graphics could change with upcoming visual computing trends, and about his decision to make a career change.

Tell us about your overall role at Intel.

Mike: Here, we're thinking strategically about how to move the industry forward. Really, that's what enticed me more than anything else to come to Intel -- looking across the field at the most exciting opportunities. The scope of Larrabee isn't just to help improve the core areas of functionality and make graphics cards more programmable; it's really about turning the system upside down and showing the kind of revolution that's possible when developers can fully exploit the hardware.

What specifically enticed you to leave Microsoft after a decade there? Did it have to do with Intel's longer-term projects in that space?

Mike: Exactly. I'm someone who thinks about things in the mid- to longer-term. How do we actually enable people, tactically, to get to those endpoints? By focusing on the strategic time frame and implementing things on the tactical road map to make that happen. That's a passion of mine and something I take great pride in.

Part of the reason for the move was the realization that the technologies behind Larrabee are an inflection point in the development of graphics cards. Around DirectX 8, Microsoft started exposing programmable shader languages -- as we call them now -- and allowing developers to write mini-programs. That's continued on a linear ramp.

But with Larrabee, that field has expanded. Yes, you can still program them like a traditional GPU, but Larrabee places you in a new realm where you have a huge amount of flexibility and freedom of development choice. That's a revolutionary change, instead of the usual stair-step innovation of simply adding more programmability.

It comes almost full circle in terms of flexibility from my own background -- I'm an old-school developer, a bedroom programmer from England who typed in code from magazines and sent programs off to be published.

Before I joined Microsoft, I headed up the R&D department of a company called Digital Image Design. We were great at creating software rasterization technologies, but we were restricted by the advent of 3D consumer graphics accelerators. We found we had to constrain our ideas to fit the nuances of hardware. I see Larrabee as the first chance to come full circle back to that amount of programmable flexibility and developer freedom.

That's another major reason that enticed me to join -- the promise of that freedom. And also, whenever the question is, "Do you want to go work with a bunch of really smart people on a truly revolutionary piece of technology?" then the answer is always going to be, "Well, yeah."

How do you see Larrabee fulfilling those promises of flexibility and freedom?

Mike: Just in terms of raw computing power and development flexibility, it blows the socks off anything else I'm aware of. I don't think I can say any more than that without tripping over NDA issues.

As someone who looks toward the long term, what are some of the trends you see coming down the pipeline in visual computing?

Mike: One of the things I noticed a few years ago was people blurring the lines between the graphics computing power, which is traditionally a vector stream compute unit, and the more traditional CPUs as we know them today.

Maybe people went a little bit too far in terms of trying to emulate this flexible computing power using their constrained vector computing unit, the GPU, but what was interesting to me was the way they were rethinking the algorithms.

It's questions like, "How do you map an algorithm to a more constrained set of computing resources, while also being significantly more flexible in terms of the width of the data?" That's SIMD: Single Instruction, Multiple Data -- maximizing the effective compute power per instruction. That's something that's been on the CPU side for a reasonable amount of time already.
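[To illustrate the SIMD idea Burrows describes, here is a minimal C++ sketch, not taken from the interview: the same instruction is applied to four floats at once using SSE intrinsics, compared against the plain scalar loop. The function names and the choice of SSE are purely illustrative.]

    #include <immintrin.h>  // SSE intrinsics

    // Scalar version: one multiply-add per loop iteration.
    void scale_add_scalar(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] * 2.0f + b[i];
    }

    // SIMD version: each SSE instruction operates on four floats at a time,
    // so each iteration does four elements' worth of work.
    void scale_add_sse(const float* a, const float* b, float* out, int n) {
        const __m128 two = _mm_set1_ps(2.0f);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, two), vb));
        }
        for (; i < n; ++i)  // handle any leftover elements
            out[i] = a[i] * 2.0f + b[i];
    }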

We have a lot of vector computes on the CPU, but the CPU, particularly from the game side, is busy trying to deal with a lot of the complexities of game-specific problems.

Longer term, yes, we probably want to be able to load-balance those two sets of discrete compute units on the graphics card and the CPU, but having more flexible systems long-term is just as important. The buzzword in the industry today is heterogeneous computing. It's a matter of finding the most appropriate set of computing technology to run a given algorithm. That is very exciting to me at the moment.
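[To make the heterogeneous computing idea a little more concrete, here is a small, purely illustrative C++ sketch: a dispatcher routes a workload to whichever of two hypothetical backends suits it better. The backend names and thresholds are assumptions for illustration, not anything described in the interview.]

    #include <cstddef>
    #include <functional>

    // Hypothetical backends; in a real engine these would wrap the CPU's
    // vector units and an accelerator command queue respectively.
    enum class Backend { Cpu, Accelerator };

    struct Workload {
        std::size_t element_count;  // how much data-parallel work there is
        bool        branch_heavy;   // lots of divergent control flow?
    };

    // Pick the unit whose strengths match the algorithm: wide, regular,
    // data-parallel work goes to the accelerator; small or branchy work
    // stays on the CPU. The cutoff here is an arbitrary placeholder.
    Backend choose_backend(const Workload& w) {
        if (w.branch_heavy || w.element_count < 10000)
            return Backend::Cpu;
        return Backend::Accelerator;
    }

    void run(const Workload& w,
             const std::function<void()>& cpu_path,
             const std::function<void()>& accel_path) {
        if (choose_backend(w) == Backend::Cpu)
            cpu_path();
        else
            accel_path();
    }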

Do you think there will be challenges for developers in adopting new attitudes towards development in that vein?

Mike: It differs depending on the type of developer you're talking about, but game developers in general are very creative individuals who look at problems as new and interesting challenges to overcome. Just look at the consoles and some of the really complex systems required to tap their potential.

That just proves how flexible these guys are and how much they really crave more and more compute power. That's really meaningful to me -- on the power side, games are just limited by the amount of compute resources available.

There are some definite areas to explore: How does the rest of the system fit together? Do characters act like real people? Do they have an emotional state? Is the AI acting intelligent, or is it a scripted system?

There is so much headroom for game developers at the moment. I look forward to supplying significantly more computing power to see how they realize some of that, as well as how they start applying it to tools and production capability.

For games, if you increase the complexity of the systems, you also have to increase the quality of the content that goes into those systems. What I've seen in this industry is that a lot of the technologies behind that trend bleed into broader, non-game applications as well.

Some developers have suggested we'll eventually hit a technical plateau where the progression of graphical technology will start to level out and significant gains will slow. I take it you don't subscribe to that theory?

Mike: I definitely do not subscribe to that theory, but it depends on your definition of graphical fidelity. If graphical fidelity means rendering a chair or table at ultimate resolution, I could see where that could plateau. But if you talk about something organic or something with any kind of physical state that changes over time, usually with some kind of intelligence behind it, there are a lot of things happening there.

I've seen great presentations on modeling animals, in the way that animals actually move and walk, whereas today we have great animators who work to ultimate extremes, trying to get more lifelike animations. Assassin's Creed was an awesome representation of that. From their own presentations, the amount of time that they spent with animators trying to get to that level of fidelity was huge.

But what I look for is applying computer technology to actually model more of the physical characteristics behind those organic behaviors: How does the human move? How do the muscle groups move and interact with each other? What are the constraints applied to the system?

We may be at the earlier stages of that happening or at least at the stage where we're realizing some of the potential, but can't actually do it in a full game because of the other game computing requirements. It's one thing to show a technology sample with one set of animation. It's a different thing to have that running at 60 frames per second and a full game world going on. Still, I don't believe in the plateau theory.

Do you work directly with developers in your role at Intel?

Mike: Yes. The game industry is a wonderful place to be. A lot of very smart people stay in contact with one another. There are a lot of fascinating conversations that occur between great people with lots of smart ideas.

We work with a lot of triple-A game developers, helping them take a more holistic approach to problems, rather than just by way of a vertical slice of technology. It's a matter of talking with the high-end developers on how to take best advantage of the new technologies coming.

Do you have a sense of some of the upcoming developments they're interested in?

Mike: Well, a lot of these developers feel they're constrained and are trying to find contrived solutions to their problems. But when you put Larrabee in front of them and talk under a confidentiality agreement about what exactly it holds, their eyes start to bulge out of their heads with the range of possibilities.

You might say something like, "Sure, you can do this wonderful shadow algorithm you were told could never happen on hardware," and you just watch them salivate. It's a great place to be, and I look forward to when we can deliver that promise. I'm kind of biased that way, because I wouldn't be here if I didn't believe in this effort with my heart, mind, and soul.

Everyone generally talks about higher screen resolutions and more scene depth, but I look at it in terms of compelling visual differentiation that you can't do on today's traditional GPUs. And order-independent translucency -- that's something game developers have continually struggled with over many years.

When you're doing transparencies in games -- transparent objects, transparently textured objects -- there's generally a CPU sort you have to do, because when those objects are in the scene, you have to be able to see what's behind them. Graphics cards as they exist today have a Z-buffer. The problem is, if you draw a transparent pixel that sits in front of something else first, the Z-buffer will reject whatever is drawn behind it afterward, so you won't see it, even though that front pixel is transparent. So you have to sort the scene. At the moment, that takes CPU cycles. You have to cheat and hack, basically.
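[As a rough illustration of the CPU-side sort Burrows is describing -- not code from any particular engine -- the C++ sketch below sorts transparent objects back to front by distance from the camera before their draw calls are issued, since the Z-buffer alone cannot produce correct blending order. The types and the draw-call hook are hypothetical.]

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct TransparentObject {
        Vec3 position;
        // ... mesh, material, etc.
    };

    static float distance_sq(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Sort transparent objects back to front relative to the camera, then draw.
    // This sort runs on the CPU every frame before the transparent pass.
    void draw_transparent_pass(std::vector<TransparentObject>& objects,
                               const Vec3& camera_pos) {
        std::sort(objects.begin(), objects.end(),
                  [&](const TransparentObject& a, const TransparentObject& b) {
                      return distance_sq(a.position, camera_pos) >
                             distance_sq(b.position, camera_pos);  // farthest first
                  });
        for (const TransparentObject& obj : objects) {
            (void)obj;  // placeholder for the actual draw call, e.g. issue_draw_call(obj)
        }
    }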

Well, I'd much prefer using the CPU for something else. That's one thing where game developers say, "Aha, we've been waiting for that for eight years." And we tell them we can do that pretty easily. That's when the drooling begins.

Often there's a bit of a push and pull in development between what's desired from a design standpoint and what's feasible from a graphical standpoint. Does that sound like the kind of area that might break down some of those walls?

Mike: Yeah. I'm definitely not a marketing person, but the thing that really resonates in my mind is unleashing creative potential. You know, "Free my pixels!"

I'll speculate a little bit. I've been a game developer, but I've never been a game designer. Designers are a great breed of people. They're trying to realize a vision in their minds of how a game works, both graphically and interactively. They are presented with constraints to realizing that vision. There's a lot of tension between them and the technology groups who try to realize all the graphics and other subsystems in the games.

The one thing I look forward to is no longer having that case of, "No, you can't do that, because it's just impossible." Rather, it will be a case of there being a certain known cost to those decisions. You can go back to the designer and have less of a fight between those two sides, and more of, "Well, how much do you want? Here is the cost that we believe will apply to each area."

So it's no longer a fixed constraint, and more a case of, "Yes, we can do that, but we may have to trade something off to make it happen." That is a much better conversation to have, and I think long-term it's going to really enable some revolutionary things we haven't seen before or haven't even thought of yet.
