Anyway, to the point:
Games are built on abstractions, both from the design standpoint and the player standpoint. There’s no point in “real life” where pressing X makes a person perform a jump, or a kick, or whatever. It’s an abstraction. Game designers provide a world where players connect the relationship between pressing X and jumping, and players accept that in order to make their avatar jump, they have to press X.
However, button input is inherently disconnected and artificial – it’s abstracted from what “really” performing that action would take. An action which might have many steps in the real world (brewing a potion, hijacking a car, performing a spinning slash, commanding an army to move, etc.) is abstracted to a single step for two reasons: adding much more development time to decrease the abstraction generally isn’t worth it for the player’s enjoyment, and performing multiple in-game actions takes a while for players without adding any benefit.
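To make the idea concrete, here is a minimal sketch (all names hypothetical, not from any real game) of what this kind of abstraction looks like in code: one button press stands in for an entire real-world sequence of sub-steps.

```python
# Hypothetical sketch: a single abstract input triggers a whole
# multi-step real-world action on the player's behalf.

BREW_POTION_STEPS = ["gather herbs", "boil water", "stir", "bottle"]

def on_button_press(button):
    """Map one button press to the multi-step action it abstracts."""
    if button == "X":
        # The player presses one button; the game performs every sub-step.
        return [f"performed: {step}" for step in BREW_POTION_STEPS]
    return []

print(on_button_press("X"))
```

The design choice is exactly the trade-off described above: the player gets the outcome of four steps for the cost of one input, at the price of the action feeling nothing like actually brewing a potion.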
To draw out some of the meat of this distinction, let’s talk about Heavy Rain, because Heavy Rain is a game that attempts to give players less abstraction. Players must move the controller in 3D space, simulate actions with repetitive motion (if the “real-life” version of that action requires something similar), and perform multiple steps on the route to one action. The game even supports the PlayStation Move controller, which allows for less button pressing and more movement. Even so, the control scheme is still VERY abstract. Shaking a controller to dry hair is more like drying hair than pressing a button, but still a long way from the actual experience. Despite strides forward by the Heavy Rain team, there is still a huge level of abstraction in the controls.
The first few hours of Heavy Rain, which you can argue are necessary from an “immersive experience” standpoint, are absolutely awful from a game standpoint. They are filled with tedious and mind-numbing actions. This isn’t, as you’d think, only because the character is performing actions which are boring even in real life: showering, setting the table, etc. That’s certainly part of it, but the more important part is this: most of the reason these actions are tedious is that many of them require more skill and effort in the game than they do in real life.
Abstraction provides us with a way to make players more skilled, faster, stronger, and better than they are in real life. A person cannot lift a car in real life, but a player can press X to lift a car in a game.
Pressing a button and shaking a controller to collect a towel and dry your hair isn’t necessarily any more interesting, from a player’s perspective, than simply pressing one button to perform the whole sequence. Pressing a button and shaking a controller is still so unlike the actual actions (despite being more like them than just pressing a button) that the additional layer of “realism” adds nothing. The control scheme in Heavy Rain is too complex considering the amount of abstraction the interface is burdened with. I can set a table in real life faster than I can set one in the game, simply because the control scheme is still abstracted enough that it cannot be the simulation it intends to be. Thus, the extra actions add more abstraction without adding the sensation of more interaction.
This isn’t the fault of Heavy Rain, but a limitation of plastic controllers and buttons.
However, there is still a lesson to draw from this, too. Adding more actions does, at least in some fashion, decrease abstraction, even if the remaining level of abstraction is still high. Indeed, in order to make more “immersive” and “realistic” games, we must add more representative actions to everything the player must do. The further we move from abstraction, the closer we move to direct simulation rather than abstracted simulation. However, there are several very large problems with adding more steps to in-game actions in order to decrease abstraction:
The first and most pressing issue is the sheer number of actions a player must be taught in order to replicate the number of steps needed to perform almost any task with limited abstraction. Think about it. In real life, doing anything as simple as taking a shower comes with hundreds of tiny actions. Not only would it be infeasible to teach players an abstracted control scheme for each one of these actions (press X to grab the shampoo, press Y to open the cap, press X+Y to flip the bottle, press X and the down arrow repeatedly to shake the bottle), but the standard interfaces we use today to play games do not support it. Heavy Rain gets around this issue by using a “quick time event” system and context-sensitive commands, but this still makes each short action sequence a linear enterprise, even if the game provides large “set piece” choices. A true immersive experience would allow players to choose the order of all the tiny actions, which means knowing the controls for every single one of those tiny actions. This just isn’t feasible from a learning curve standpoint.
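A context-sensitive scheme of the kind described above can be sketched very simply: the same button resolves to a different action depending on the player’s situation, which keeps the number of buttons to learn small at the cost of making each sequence linear. The contexts and actions below are hypothetical examples, not taken from Heavy Rain itself.

```python
# Hypothetical sketch of context-sensitive input: one button, many
# meanings. The lookup keeps the learned control surface tiny, but the
# designer, not the player, decides what each context offers.

CONTEXT_ACTIONS = {
    "bathroom": {"X": "grab towel", "Y": "turn on shower"},
    "kitchen":  {"X": "set table",  "Y": "open fridge"},
}

def resolve(context, button):
    """Return the action this button performs in the current context."""
    return CONTEXT_ACTIONS.get(context, {}).get(button)

print(resolve("bathroom", "X"))  # the same X means something else in the kitchen
```

Note the limitation this sketch makes visible: the player can only ever pick from the handful of actions the current context exposes, which is exactly why the result feels linear rather than freely ordered.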
But there’s an even larger hurdle: the control scheme would need to be complex enough to differentiate between all those actions, and at that level of complexity there is almost no chance of a player performing those actions in the correct order in anything even close to real time. If you must teach the player even fifty commands, the player will take a very long time to gain fluency with those commands. That’s why we use abstraction in games in the first place.
What does this mean if we’d like to add more actions and continue to remove abstraction? It means that a different control interface is needed: one that supports many more action combinations than today’s interfaces, but is also intuitive and natural enough that players can acquire skill with it quickly and in almost real time. There is only one control interface I know of that allows these things: the human body.
Our current level of technology is such that the only way for developers and players to perform actions in the game is through a high level of abstraction, since button and even touch control schemes are so distant from actual experience. Also, due to the learning limitations of our current control schemes (the player must learn that X makes his avatar jump and O makes his avatar shoot), there is a limit to the number of actions a player can learn and utilize in any amount of time.
In order to lessen the amount of abstraction, we need to increase the number of actions a player can perform in a small amount of time. We also need that performance to mimic the “realistic” performance of that action as closely as possible. Finally, we need a control scheme that allows players to perform actions intuitively even though they may have never played the game before. Without that intuitive performance there is no feasible way of players learning and utilizing the actions they must perform in a reasonable length of time.
This is why “body” control systems like the Kinect, the Move, and the Wii are the future, even for the hardcore gamer. To most hardcore gamers these control schemes are currently jokes, but only because the technology is lacking. Currently, even the most sensitive “body” controllers on the consumer market are laggy, insensitive, inaccurate, and lack the fine recognition to differentiate between nuanced movements. For now, traditional controllers allow for more accurate actions per second, even though pressing buttons is a higher level of abstraction than using one’s body. However, the more accurate body controllers get, and the more developers learn to use them to create new experiences, the more irresistible a less abstracted game experience will become, even for hardcore gamers.
Until we have full-body controls and full-body tactile feedback for our games, games will always rely on some system of abstraction. However, until we have skin suits, Kinect Gen23, brainjacks, or anything else that allows us to minimize abstraction, we can still minimize it with good game design choices:
Eliminate Any Loss of Player Control – Anything that takes control from the player adds more abstraction to the game and should be eliminated. When in real life do you watch cut scenes? When do you stop between rooms to load the next area? Never, that’s when. Game worlds should be seamless and never restrain the player with mechanics. Any time the player has to wait or can only watch the game is unacceptable.
Make Players Do Things – If you want your player to perform an action in the game, make the player do it with several steps. Cooking Mama, Magicka, flight sims – these games all make players perform steps to create an outcome, rather than relying on single-button abstractions.
Create Open, Realistic Worlds – This one is pretty self-explanatory. There’s a reason that Western RPGs are beating JRPGs, and it’s because a linear hallway simulator (FF13, ugh!) adds much more abstraction to a game than an open world.
Use Intuitive Controls – Apple touch devices are lauded because they are intuitive. Or, to put it the way a really smart guy put it: The interface fundamentally determines the behavior. (Link to that smart guy) If you want players to quickly learn controls, make the controls mimic what they would intuitively do as much as possible.
Keep Pushing Movement Controllers Forward – The Move, the Kinect, and the Wii aren’t good enough. Make better, more accurate, and more immersive control schemes.
Develop More Tactile Feedback and Immersion – Rumble is the only mainstream tactile feedback added since the creation of the mouse, the button, the controller, and the switch. Well, Rock Band added some feedback with things like the drums and that DJ device, I suppose, but those are specific to a genre of games built on the abstraction of an activity that is already an abstraction: playing music. Also, sure, there’s the Novint Falcon, but that’s hardly mainstream. We need more tactile feedback to remove abstraction. Rumble, sensation, variant surfaces, haptic stuff, and whatever else.
Aside from that – there needs to be more immersion in audio and video. 3D is one step that’s happening, but where is the mainstream version of the wrap-around-your-head video device? Why are we making larger and larger televisions when we should be making personal screens that fully immerse players? Screens that leave users with outside peripheral vision are woefully outdated. Can’t we make flexible OLED screens now? Didn’t I read that somewhere? What’s the holdup?
So – that’s what I’ve got so far. Other ideas to minimize abstraction?
This post is cross posted from mispeled.net.