Cop A Feel...With Haptic Peripherals

Force-feedback input devices are now hitting store shelves. Learn how the internals of these joysticks and steering wheels work, and see how you can implement force effects using DirectInput.

December 19, 1997


By Chuck Walters

hap.tic (adj.)(1890)
{(haptesthai) to touch}
1: relating to or based on the sense of touch
2: characterized by a predilection for the sense of touch (a haptic person)
3: to be out of haptic with reality

Seven peripheral manufacturers are prototyping and/or shipping more than nine force-feedback game devices for home use, at or under $200. Three companies are supplying the force-feedback technology: CyberNet, Immersion, and Microsoft. With force feedback ramping up and three implementations available, DirectX has stepped in to standardize the API used to control these devices. This article will give you a general overview of the tools and techniques used to implement force feedback in your games. It will focus on generic design issues and DirectX 5 implementation. Before diving into the nitty-gritty code work, however, we will review the hardware.

Two force-feedback joysticks and three force-feedback wheels are expected to be in retail outlets by Christmas 1997. At least six more force-feedback devices are planned for next year. CyberNet, Immersion, and Exos (acquired by Microsoft) have been offering products for high-end simulations and research for years, but CH Products was the first to bring a consumer device to the retail market. The companies in Table 1 have at least a working prototype slated for home PC use.


Company             Product Name
Act Labs            Racing System
CH Products         Force FX
CH Products         Racing Wheel FX
Interactive I/O     Virtual Vehicle Tdi
Logitech Inc.       WingMan Force
SC&T International  Ultimate PER4MER
ThrustMaster        MotorSports GT
ThrustMaster        Real Feel Wheel
ThrustMaster        Real Feel Yoke
Microsoft           SideWinder FFB Pro

Legend: SP=Serial Port GP=Game Port USB=Universal Serial Bus

Table 1. Availability of force feedback devices

Some of the companies listed in Table 1 also distribute force-feedback devices for game consoles and arcade machines. Companies such as Happ Controls (the largest manufacturer of arcade force-feedback devices) use technologies similar to those being discussed, but they won't be the focus of this article. For a more complete view of the market, take some time to browse the research sites listed at the end of this article. The Intel site has the specifications for the Open Arcade Architecture Coin-op Reference Platform, also known as the Arcade PC standard (which includes force feedback).


Not all force feedback devices are created equal. The more expensive devices are able to produce high-frequency sensations at a cost of a few thousand dollars, compared to a few hundred for the home versions. A hefty portion of the cost difference is spent reducing the compliance (play) between the actuators (motors) and the stick. The following sections discuss the interface between the actuators and the stick.

COMPONENTS. The play and durability of a force-feedback device depends, in part, on the material and tolerance of the machining. Cheaper materials, such as plastic, are pliable, do not hold up to high-tolerance machining, and wear faster. Here, the material limits the performance. More expensive alternatives, such as composites, aluminum, and alloys, can hold higher-tolerance machining. Still, just because quality material is present doesn't mean that it has undergone the costly high-tolerance machining. Quality material has the added benefit of holding up better to general wear and tear. Let's open a force-feedback input device and examine the inner workings.

GIMBAL. The gimbal connects the stick to the transmission. Force-feedback joysticks generally have more play than steering wheels because of their dual-axis motion. The biggest contributor to gimbal play is the slots of a slotted bale. Double-slotted bales (Figure 1) are the loosest. Immersion has patented a slotless gimbal, shown in Figure 2. Keep an eye out for this design in retail sticks over the next year. Currently, it's only available in research-quality devices.


Figure 1: Double-Slotted Bale




Figure 2: Five-Bar Linkage

TRANSMISSION. Actuators are connected to the gimbal by the transmission. Geared transmissions must have play or the gears will bind. Tight gears require precise machining, so cost depends on both the material and the machining. A cable/belt drive is a higher-quality method of transmission. Since there are no gears that can bind, this method has the potential for a very tight force response and much less noise (gears are quite noisy). The counter-rotational requirements of force-feedback transmissions are demanding, so either the cable/belt will need to be extremely strong or devices will need easily accessible adjustment screws for keeping the cable/belt tight.

ACTUATORS. Force-feedback joysticks use two motors, which are similar to those found in fax machines and printers. They exert about 1 lb. of sustainable force per motor, peaking at around 1.5 lbs. Force-feedback wheels have one motor that can sustain 3-4 lbs., peaking at around 5 lbs. Cheaper motors normally exhibit higher friction, so they dampen out subtle forces, causing poor or unperceivable response.

Designing Force Effects

The force-feedback element of a game requires aspects of physics and collision detection that should already be implemented and used by the audio engine. When you get down to the code level, force response is very similar to 3D audio. In my experience, piggybacking the audio code helps determine what effects to add, where to add them, and saves both CPU and developers' time by reusing computations. Exceptions to this are effects such as "wheel stiffness," that do not have an associated sound effect, but remain active throughout the simulation.

While Microsoft's DirectX 5 documentation doesn't go into the artistic part of force feedback, the three force-feedback engine manufacturers have a lot to say in their development documentation. I won't cover the design elements in the same manner, so I recommend examining the commercial web sites listed at the end of this article and the books in the "Books On Force-Feedback Technology" sidebar.

The initialization of a force-feedback device is faster with DirectX 5 (if the device is powered), because the device must exist in the game controller's property sheet. Proprietary APIs that don't use this information will search for a force-feedback device on each port and wait for either a confirmation or a timeout before moving on to the next. This can take up to five seconds, which makes automatic detection of a force-feedback device at game start-up less appealing.

Once the force-feedback device is initialized, the user should be able to customize the force settings for the device. At a minimum, users should be able to set the gain, since force-feedback devices tend to vary in the amount of force they can exert. (There is a proposal to have this gain setting incorporated into the device's property sheet in Windows, but you'll need to implement this yourself in your game, at least until DirectX 6.) Various types of devices exert different amounts of force (for example, wheels typically exert more force than sticks), but even the forces applied by different brands of joysticks can vary. Gain adjustment may seem like a simple task at first, and it can be, depending on how far you take this setting. Gain can easily be applied to Springs and Jolts, for example, but Vibrations and custom forces are commonly very temperamental, and adjusting their gain can ruin the desired effect. So, if some effects can be ruined by differences in gain, and force-feedback devices have different force ratings, there's a problem. Basically, each force-feedback device requires some special tuning consideration to get the desired effects just right.
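To make the tuning problem concrete, here is a minimal sketch (not from the article's sample code; all names are hypothetical) of a gain helper that scales an effect's designed magnitude by the user's gain setting and a per-device tuning factor, while leaving temperamental vibration-style effects alone:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical helper: scales a designed magnitude (0-10,000, DirectInput's
// range) by the user's gain percentage and a per-device tuning factor,
// clamping the result. Vibration-style effects pass through untouched
// because gain changes can ruin their feel.
enum class EffectClass { Jolt, Spring, Vibration };

int ScaledMagnitude(int baseMagnitude, int userGainPercent,
                    double deviceTuning, EffectClass cls)
{
    if (cls == EffectClass::Vibration)
        return baseMagnitude;                    // leave temperamental effects alone

    double m = baseMagnitude * (userGainPercent / 100.0) * deviceTuning;
    return std::min(10000, std::max(0, static_cast<int>(m)));
}
```

The per-device tuning factor would come from hand-testing each supported stick and wheel, as described above.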

Carefully consider how you allow players to map (or customize) their input controls. When input mappings are altered by the game player, the new mapping should also change the implementation of force-feedback effects. If you don't give adequate thought to how your input device will behave once the player modifies the standard configuration, the player could be in for an unpleasant experience. For instance, imagine that a player is using a joystick in a car racing game. The default mappings have the y axis act as a combination gas/brake pedal, increasing the vehicle's speed when the stick is moved forward and braking the car when the stick is moved back toward the player. If the player remaps the y axis so that pushing forward on the stick up-shifts the car's gears and pulling back down-shifts the car, then a y-axis jolt can cause a problem. A collision to the car from behind would cause a forward jolt effect on the stick, and inadvertently cause the car to shift up a gear.
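One hedged way to guard against this remapping problem is to filter force output through the player's current control mapping. The sketch below (hypothetical names, not the article's sample code) zeroes a jolt on any axis the player has rebound to a digital function such as shifting:

```cpp
#include <cassert>

// Hypothetical sketch: zero a jolt on any stick axis that the player has
// remapped to a digital function (gear shift, fire), so a force effect can
// never push the stick far enough to trigger an unintended input.
enum class AxisRole { Analog, Digital };

struct Jolt { int x, y; };   // force components, -10,000..10,000

Jolt FilterJoltForMapping(Jolt j, AxisRole xRole, AxisRole yRole)
{
    if (xRole == AxisRole::Digital) j.x = 0;
    if (yRole == AxisRole::Digital) j.y = 0;
    return j;
}
```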

Allowing the player to turn on or off certain force effects is a nice feature (for example, engine vibration transmitted to the joystick may be annoying in a long race). You could even go so far as to let players customize certain effects. While this may be going overboard, remember that game extensibility is valued by most players. If effects are created with a sensation editor and saved to a file loaded by the game, players may be able to customize these effects with the same editor, and share their customized effects with other players on the Internet.

Switching between various frames of reference in your game is risky. A frame of reference (FOR) is the "who" or "what" to which the force response attaches. You generally only need to consider two FORs, the player's body and their machine (one inside the other, but not rigidly attached). Switching between FORs may confuse the player, making forces appear buggy. For example, forces on a car don't have the same vectors as those on the driver, and switching between the two hurts the realism of the game. Choosing the object that directly interacts with the simulated environment usually provides the best experience. Switching FORs or choosing a bad FOR also complicates development by making it difficult for the engineer to identify and correct undesired feedback loops.

Be careful of feedback loops in your force effects. Feedback loops occur when a force is attractive. A simple example of this is when a car hits a barrier and the frame of reference is attached to the driver of the car, but the collision detection/correction is attached to the car itself (Figure 3).


Figure 3. The forces involved in a car crash


The force response is in the general direction of the barrier, which makes the player hit the barrier again - hence the feedback loop. A feedback loop was not appropriate for the example barrier collision (caused by the poor FOR choice), but in some cases it works out great. Gravity wells, rubber-banding, and tractor beams could all benefit from a feedback loop, which, by the way, can be pulled out of with a little effort on the gamer's part.
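A quick development-time check for an unintended loop, sketched here purely as an illustration (not part of the article's sample code), is to test whether the response force has a positive component toward the object just hit:

```cpp
#include <cassert>

// Illustrative check: a response force is "attractive" - and risks a
// feedback loop - when it has a positive component along the direction
// from the player toward the barrier (positive dot product).
struct Vec2 { double x, y; };

bool IsAttractive(Vec2 force, Vec2 towardBarrier)
{
    return force.x * towardBarrier.x + force.y * towardBarrier.y > 0.0;
}
```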

Types of Feedback Effects

The terms "texture" and "effect" are often used interchangeably to define the force-response data sent to the device. In haptics, "textures" are literally textures (such as sandpaper, wood grain, and so on), so I will refrain from confusing the distinction. Effects can be two-dimensional on joysticks and are one-dimensional on wheels. Many effects are additive, meaning they can be played with other effects simultaneously. Effect addition is nice to a point, but too much can take away from the experience (and wear out the user). The following sections discuss ways of thinking about effects.

STATIC AND DYNAMIC EFFECTS. Once created, an effect can remain unchanged or undergo continuous modification. Static effects (also known as "canned" effects) don't require run-time modification, which makes them simple and well-suited for trigger effects. A gun shot is a good example of a static effect because it always produces the same kick in the same direction. Static effects can be downloaded and remain on the device (as long as there is enough device RAM), which gives them very low response latency. Static effects are also easier to design because testing doesn't need to be done on numerous variations of the effect, as is the case with dynamic effects. They can also be easily created with the sensation editors (which I will discuss later). Some devices have ROM effects that don't use any RAM (beyond the space allocated for parameter modification); such effects can be used as either static or dynamic effects at a very reduced space cost.

Most effects can be modified during playback, producing a dynamic effect. A joystick vibration that varies with engine RPM is an example of a dynamic effect. Dynamic effects are where force-feedback technology really shines, as they let the player experience the best motion within the game world. Unfortunately, good dynamic effects can also be time consuming to create. One pitfall is that some devices don't work well at the extent of their range; sometimes simply incrementing a variable can drastically change a force effect. For example, some force-feedback devices have too much play and/or motor friction to adequately represent the full spectrum of frequencies for which the engine vibration effect may have been coded. To address this problem, you should critique individual devices to find a general minimum and maximum across the range of force feedback hardware.
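As an example of taming a dynamic effect, the following sketch clamps a requested engine-vibration frequency into a band the target hardware can render. The limits shown are placeholders; real values come from testing each device:

```cpp
#include <algorithm>
#include <cassert>

// Sketch with placeholder limits: clamp a requested vibration frequency
// into the band the weakest target device can actually render.
double ClampEngineFrequency(double requestedHz,
                            double deviceMinHz = 2.0,    // assumed floor
                            double deviceMaxHz = 60.0)   // assumed ceiling
{
    return std::min(deviceMaxHz, std::max(deviceMinHz, requestedHz));
}
```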

OPEN-ENDED AND ONE-SHOT EFFECTS. There are two ways to manage effect playback, and both will probably show up in any given game. One-shot effects are those with a finite duration. They are simple to manage because you play the effect and forget about it. A single jolt to a joystick as a result of pulling the joystick trigger to shoot a gun is an example of a one-shot effect. The effect is executed and stops on its own.

On the other hand, open-ended effects require the game to monitor state information in order to stop the effect at the proper time. Consider, for example, a car (a fast, exotic, red one) driving over a wood-plank bridge. There is no finite duration for the car's time on the bridge, so the effect will need to be monitored by the game to regulate the vibration in accordance with speed, and stopped whenever the car stops or leaves the bridge.
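The bridge example might be monitored each frame with something like the following sketch (hypothetical names; a return value of zero tells the game to stop the effect):

```cpp
#include <cassert>

// Hypothetical per-frame monitor for the bridge rumble. Zero means "stop
// the open-ended effect"; otherwise the value is the vibration magnitude
// (0-10,000) to push to the device.
struct CarState { bool onBridge; double speed; };

int BridgeRumbleMagnitude(const CarState& car)
{
    if (!car.onBridge || car.speed <= 0.0)
        return 0;                          // car stopped or left the bridge
    double m = car.speed * 100.0;          // scale speed into force units
    return m > 10000.0 ? 10000 : static_cast<int>(m);
}
```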

INTERACTIVE AND TIME-BASED EFFECTS. Effects can be divided into interactive and time-based events. Time-based effects are played regardless of where the force-feedback device is positioned. Jolts and vibrations are time-based effects. Interactive effects are based on the state of the stick (position, velocity, and/or acceleration). Springs and friction are some examples of interactive effects. DirectInput categorizes interactive effects as "conditions." All other effects are time-based and fall under the categories of "periodic," "ramp," or "constant" (Table 2).

Category    Effects
Condition   Spring, Damper, Inertia, Friction
Periodic    Square, Sine, Triangle, Sawtooth Up, Sawtooth Down
Ramp        Ramp Force
Constant    Constant Force

Table 2. Types of force-feedback effects


Force-Feedback APIs

The DirectInput API in DirectX 5 is definitely the force-feedback API of choice in the long run. All three force-feedback engine manufacturers (Immersion, Microsoft, and Cybernet) currently or will soon support DirectInput. That is reason enough to use it, but there are other redeeming qualities. The API is designed to be very flexible and is easily extensible. The problem with DirectInput is that it can be a bit convoluted at first glance.

DirectInput wrappers (whether you make your own or use someone else's) are necessary to rein in the generality and redundancy of the API, which result from DirectInput's basis on Microsoft's Component Object Model (COM). Effect management should also be addressed by a wrapper because DirectInput doesn't do this for you. DirectInput doesn't hide the RAM limitations of the devices by providing software mixing, which creates a download/offload dilemma that you, the developer, have to address.

Other issues that DirectInput wrappers can simplify are the notation and numeric ranges of the DIEFFECT structure variables. For example, the dwGain element in the DIEFFECT structure has a range of 0-10,000. Gain is analogous to a volume setting for the device, which is easier to manage as a percentage (0-100) of effect strength. A wrapper could also simplify DirectInput's handling of periods. DirectInput uses microsecond notation, which a wrapper could convert to the conceptually easier frequency notation. Finally, get this: direction can be expressed in polar, spherical, or Cartesian coordinates. Decent error handling and recovery can get out of control unless this coordinate handling is simplified by a wrapper.
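The conversions such a wrapper might centralize look like this; the ranges (gain 0-10,000, periods in microseconds) are DirectInput's own conventions, while the helper names are illustrative:

```cpp
#include <cassert>

// The 0-10,000 gain range and microsecond periods are DirectInput's own
// conventions; the helper names are ours.
unsigned long PercentToDIGain(int percent)       // 0-100 -> dwGain 0-10,000
{
    if (percent < 0)   percent = 0;
    if (percent > 100) percent = 100;
    return static_cast<unsigned long>(percent) * 100;
}

unsigned long HzToPeriodMicroseconds(double hz)  // frequency -> DIEFFECT period
{
    return static_cast<unsigned long>(1000000.0 / hz + 0.5);
}
```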

Although I recommend using DirectInput as your force-feedback API, it's not your only option. Here's a short description of proprietary APIs and DirectInput wrapper APIs:

I-FORCE 1. This is Immersion's proprietary stand-alone API. I used this simple, easily implemented API in THE NEED FOR SPEED - SPECIAL EDITION. I-FORCE 1 only works with the Immersion engines, but it works in both Windows and DOS. Future effort will likely lean on DirectInput device driver support, rather than improving I-FORCE 1.

CYBERNET 2. The latest Cybernet API supports DirectInput and I-FORCE compatible drivers. It's a stand-alone API that falls between I-FORCE 1 and DirectInput in terms of complexity and features.

WRAPPERS. Using a wrapper will make initial implementation easier because it will hide many of the gotchas. But before you simply jump into using one of these premade wrappers, realize that they are focused around their creators' devices and will exhibit some problems running on their competitors' devices. Immersion and Microsoft both provide source code of their wrappers, and it's an excellent way to determine the differences between the two companies' interpretations of the DirectInput force-feedback specification. It's also a good starting point if you want to create your own universal wrapper. If I decide to use a prefabricated wrapper, I expect it will be for effect file loading only.

I-FORCE 2. This is a misleading name because I-FORCE 2 is not a stand-alone API upgrade to I-FORCE 1. I-FORCE 2 only loads effect files created by Immersion's sensation editor (I-FORCE Studio), and wraps the DirectInput effect create and release interfaces. The I-FORCE 2 documentation has sample code for a more complete wrapper, but this code isn't present in I-FORCE 2.

SIDEWINDER FORCE. This is Microsoft's fully functional DirectInput wrapper. SideWinder Force has functions to work with Microsoft's sensation editor (Visual Force Factory) and wraps effect modification and playback. SideWinder Force also wraps device setup tasks. The source code to the SideWinder Force library is provided with Microsoft's SDK.


Sensation editors provide a means to visually edit forces. The visual environment helps clarify the purpose of all the variables in a given effect and lets you modify those variables on a picture or graph. After creating an effect, you can play it back, allowing for a fast edit/play development cycle. Once the effect(s) are finalized, they can be saved to a file for the game to load at run time. A force-feedback emulator, on the other hand, creates a virtual device that visually represents force-feedback effects on a monitor.

SENSATION EDITORS. Currently, two sensation editors are available: Microsoft's Visual Force Factory (Figure 4) and Immersion's I-FORCE Studio (Figure 5). Both of these editors output DirectInput-compatible files that can be statically linked to the game or loaded at run time. The benefit of loading the effects at run time is that consumers can modify the effect in the sensation editor themselves. Both editors are capable of single and compound (concatenated) effect creation. They will both support playback within the editor on all DirectInput-compatible force-feedback devices, but the first revisions may not work flawlessly with competing devices due to the infancy of DirectInput force-feedback device drivers. I recommend playing with the editors even if you don't plan on using them extensively. This will help you understand the parameters for various effects.


Figure 4: Microsoft's Visual Force Factory



Figure 5: Immersion's I-FORCE Studio


FORCE-FEEDBACK EMULATOR. Immersion has a serial-based emulator that runs in a DOS window on a second machine. This expedites troubleshooting if you don't have a force-feedback device handy. In some instances, the emulator is better than a device because you can watch the state of a virtual device on your monitor; however, it only works with I-FORCE 1 drivers. This emulator was created when prototype devices were scarce. However, since force-feedback devices are now readily available, both Microsoft and Immersion believe that emulators are non-essential for DirectInput.

Sample DirectInput Code

The sample code on the Game Developer web site is straight DirectInput code. I refrained from using wrapper APIs in order to remain flexible. The mouse and keyboard are referenced in the sample code because they were closely tied to the device setup in my test suite. I removed most of this code to minimize the distraction from the joystick and force-feedback code relevant to this discussion.

DEVICE SETUP. The following is an ordered list of essentials to get up and running:

  1. Retrieve a pointer to the DirectInput interface (LPDIRECTINPUT).

  2. Using the LPDIRECTINPUT, enumerate the joysticks.

  3. Your enumeration callback function can either save the LPDIDEVICEINSTANCE passed to it for each joystick for later device creation, or create the device within the callback function (as the sample code does).

  4. Using the LPDIDEVICEINSTANCE for a given device, get a pointer to the LPDIRECTINPUTDEVICE interface for that device.

  5. Using the LPDIRECTINPUTDEVICE, set the data format (the sample code uses the default format explained in the DirectX 5 documentation).

  6. Again using the LPDIRECTINPUTDEVICE (DID), call QueryInterface() to upgrade to an LPDIRECTINPUTDEVICE2 (DID2). The DID2 inherits all the functionality of a DID, so you can even use DID2 for game controls without force-feedback support. The standard DID doesn't support force feedback.

  7. Using the DID2, set the co-op level. It must be DISCL_EXCLUSIVE and either DISCL_FOREGROUND or DISCL_BACKGROUND.

  8. Release the DID because you now have a DID2 that takes its place. This is the last thing I do in my callback function because I have the DID2 that is used to acquire the device and query its capabilities.

  9. Repeat steps 4-8 for each device you want to use.

  10. Using the DID2, acquire access to the device. This call is made initially and again whenever you lose and regain window focus.

  11. Using the DID2, get the device's capabilities. This will tell you if the device supports force feedback, as well as how many axes, buttons, and so on, are on the device.

EFFECT SETUP.  Ideally, you want the effects that you create and the effects played back in your game to have a one-to-one correspondence. Even though DirectInput makes no attempt to hide the RAM limitations of the force-feedback device, and even though some devices can only have one type of resident effect, it's still possible to get accurate effect playback. You just need to prioritize the playback of effects of the same type.

The basic steps of effect creation are:

  1. Determine if the effect is supported by the device.

  2. Create the effect using the DID2 of the device for which you want the effect created (the same effect must be created on each device separately) and passing in a filled in DIEFFECT structure.

  3. Unload the effect to make room for the next effect.

Effects are automatically downloaded to the device when created. If the device is full, however, the CreateEffect() interface will produce an error. To prevent this error from occurring, you can set a flag to prevent the automatic download of effects during the effect creation stage, and then turn it back on so that the effects automatically download when you play them.

PLAYBACK. To play a trigger effect, you must download the effect (make sure the trigger button variable is set appropriately in the DIEFFECT structure). Unloading the effect will stop the effect. When the effect is unloaded, pressing the trigger no longer plays the effect. Unfortunately, Immersion and Microsoft have interpreted the DirectX 5 specification for this process slightly differently, and as of this writing they haven't arrived at a common solution.

In Immersion's implementation, starting a trigger effect won't play it immediately. Rather, it will activate the effect to be played when you press the assigned trigger. The benefit of this approach is that multiple effects can be mapped to a trigger and remain resident on the device. This means switching trigger effects is a low-latency process.

In Microsoft's implementation, the trigger effect is ready to use once it is downloaded. If you start the effect, it will play back immediately, even if it's mapped to a trigger and the trigger hasn't been pressed. The benefit to this implementation is that the effect can be used for both trigger and non-trigger effects, but in order to unlink the effect from the trigger, the effect must be modified (requiring download) to detach it from the trigger.

PROCESS LISTS. A process list allows you to concatenate effects so that one plays after the previous is completed. Process lists are being proposed for inclusion in DirectX 6. This feature is currently available through use of the DICUSTOMFORCE effect, which can be created manually (with a fair amount of effort) or with the sensation editors. There is no standard way to concatenate effects, however, so these emulated process lists don't port well between the different force-feedback engines.

MODIFYING EFFECTS. Many effects can be modified during playback, depending on which device you are targeting. While tuning dynamic effects, be aware of their impact on latency and processor load - you'll run into trouble if you're not making a conscious effort to avoid both. There are several ways to modify effects. You could release the old effect, create a new one, and play it - probably the worst method. The best way is to modify an existing effect with the SetParameters() interface. With this function, you can specify changes in single parameters, and only those parameters will be downloaded, thereby reducing latency. Another method is to modify an effect and then play or update the effect in the same call to SetParameters(), which reduces the amount of communication required with the device, since the play command piggybacks on the modify command.
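One plausible way to keep modification cheap, sketched here with a stand-in for the SetParameters() call, is to send an update only when the magnitude has changed by more than a threshold:

```cpp
#include <cassert>
#include <cstdlib>

// Sketch: only push a new magnitude when it differs noticeably from the
// last one sent. The SetParameters() call site is stubbed out, and the
// threshold is an assumed tuning value.
class EffectUpdater {
public:
    explicit EffectUpdater(int threshold) : threshold_(threshold) {}

    // Returns true if an update was actually sent to the device.
    bool Update(int newMagnitude) {
        if (std::abs(newMagnitude - lastSent_) < threshold_)
            return false;              // change too small to be worth a download
        lastSent_ = newMagnitude;      // a real wrapper would call SetParameters() here
        ++downloads_;
        return true;
    }
    int downloads() const { return downloads_; }

private:
    int threshold_;
    int lastSent_ = -1000000;          // guarantees the first update goes through
    int downloads_ = 0;
};
```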

MIXING EFFECTS. An effect must be on the device in order to play it. Yet most force-feedback devices limit the number of effects that reside in the device's RAM at any given time, and software mixing isn't supported in DirectInput. The result is that you may be limited to playing only two DICONSTANTFORCE effects at once. Until effect management is improved, you'll probably experience some difficulties with mixing configurations on low-RAM devices.
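Until DirectInput manages effects for you, a wrapper can prioritize downloads itself. The following sketch (hypothetical, not taken from any vendor's wrapper) evicts the lowest-priority resident effect when the device's slots are full:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical slot manager for low-RAM devices: at most `slots` effects
// are resident at once, so a new request evicts the lowest-priority
// resident effect - or is refused if it can't beat any of them. A real
// wrapper would call the DirectInput download/unload interfaces where noted.
struct Effect { int id; int priority; };

class SlotManager {
public:
    explicit SlotManager(std::size_t slots) : slots_(slots) {}

    bool RequestPlay(Effect e) {
        if (resident_.size() < slots_) {
            resident_.push_back(e);    // free slot: download and play
            return true;
        }
        auto lowest = std::min_element(resident_.begin(), resident_.end(),
            [](const Effect& a, const Effect& b) { return a.priority < b.priority; });
        if (lowest->priority >= e.priority)
            return false;              // nothing worth evicting
        *lowest = e;                   // unload the loser, download the winner
        return true;
    }

    bool IsResident(int id) const {
        for (const Effect& e : resident_)
            if (e.id == id) return true;
        return false;
    }

private:
    std::size_t slots_;
    std::vector<Effect> resident_;
};
```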

GAIN ADJUSTMENT. The gain can be adjusted in one of three ways. Most effects have a magnitude parameter for each axis - some go as far as one parameter for each direction of each axis. The second method is to set the gain on the entire effect, which will be applied to all sub-elements, including envelopes. The third method works on a device level. All effects are attenuated by the setting passed to SetProperty() (not to be confused with SetParameters(), which works at an effect level).
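Assuming the three levels attenuate multiplicatively - which is our reading of the DirectInput model, not a quote from the documentation - the final magnitude composes like this:

```cpp
#include <cassert>

// Assumed multiplicative stacking of the three gain levels, each in
// DirectInput's 0-10,000 range: a per-axis magnitude, the effect-wide
// dwGain, and the device-wide gain set through SetProperty().
long FinalMagnitude(long axisMagnitude, long effectGain, long deviceGain)
{
    // each gain attenuates the result by gain/10,000
    return axisMagnitude * effectGain / 10000 * deviceGain / 10000;
}
```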

Shake It Up

Force-feedback devices are now readily available to consumers looking for good force response in their games. DirectInput force feedback is a little detailed, but programming with it will increase the chances that all devices will work with your game. Remember that DirectInput force feedback is a "version 1" API, as are the accompanying device drivers. Effect management/emulation is bound to improve in the next revision. Inserting force feedback into the game near the accompanying sound effects will make implementation easier. Competition between the contenders in the force-feedback market is bound to bring out improved force-feedback technologies in the coming years. If all this stops being fun, take a vacation.

Chuck Walters is a software engineer at Electronic Arts in Seattle, Wash., and received his BSCS degree from the University of Washington, Seattle. He can be reached at [email protected].
