Massively Multiplayer Game Development 2: Architecture and Techniques for an MMORTS
This excerpt from Massively Multiplayer Game Development 2 describes the algorithmic basis needed for implementing an MMORTS game capable of sustaining hundreds of units for each player.
June 13, 2005
Author: Gideon Amir
Introduction
In a massively multiplayer online real-time strategy game (MMORTS), thousands of players share the same contiguous virtual space. Each player controls dozens of units (in contrast with MMORPGs) and can battle with or against any number of players simultaneously. Players need to receive information about the immediate surroundings of all their units. Because these units can be very far apart and can see many other players with thousands of units, the amount of information that must be transferred to the client machines to keep their world state consistent is far greater than that required in the MMORPG genre.
Dealing with this tremendous flow of information poses one of the greatest challenges of the MMORTS genre [Jaecheol01]. Naïvely updating all the information (in excess of 50 KB/sec) around each of the players' units all the time is impractical over today's low-bandwidth modems. Even if every end user had equipment capable of reaching this kind of throughput, the cost of such a communication load would be tremendous for the game provider.
The new genre also poses new demands in other areas, such as content creation and data presentation to the user. For example, Artificial Intelligence (AI) requires a new level of computational resources, both for the players and for the computer adversaries that must act against them, as a typical virtual world might include millions of AI units. Moreover, the large maps needed to house thousands of players are problematic for standard path-finding techniques.
In this article, we describe the algorithmic basis needed for implementing an MMORTS game capable of sustaining hundreds of units for each player, all of which can affect their surroundings. We focus mainly on the task of keeping data consistent across all servers and all connected clients with minimal bandwidth, while maintaining acceptable perceived latency. Stated differently, we need to send only the information relevant to each client, and in a way that fits nicely into 1 to 2 KB/sec.
The final sections describe how the algorithm devised can also benefit other parts of the system (clustering, graphics, etc.) and deal briefly with other problems of MMORTS games.
Basic Techniques
We will start with the most naïve approach to the problem of sending all relevant data to each client. The idea is to generalize algorithms such as [TNL01], used in MMOGs where each player has a single avatar, so that the relevant-data loop runs over all the units of each player. This is best understood with the following pseudocode:
for (each connected player P) do {
    for (each unit U of player P) do
        for (each element E within LOS of unit U)
            Create message updating element E's state
}
for (each connected player P) do {
    Compress all messages to player P
    Send messages to player P
}
The compression algorithm can take into account the fact that many consecutive elements near each other have similar characteristics, such as similar unique identifiers (IDs), close positions, and the same action (all attacking, all moving, etc.), and thus achieve a fairly good compression ratio.
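As a concrete illustration, here is a minimal C++ sketch (ours, not the book's; the UnitUpdate layout and field names are assumptions) that delta-encodes a run of consecutive unit updates against the previous one, so that similar IDs, close positions, and a shared action code shrink to small values that a general-purpose compressor packs very tightly:

#include <cstdint>
#include <vector>

// Hypothetical per-unit update record.
struct UnitUpdate {
    uint32_t id;      // unique identifier
    int32_t  x, y;    // position on the map
    uint8_t  action;  // action code (move, attack, ...)
};

// Delta-encode consecutive updates: units sorted near each other tend to have
// similar IDs, close positions, and the same action, so the deltas are tiny.
std::vector<int32_t> DeltaEncode(const std::vector<UnitUpdate>& updates) {
    std::vector<int32_t> out;
    UnitUpdate prev{0, 0, 0, 0};
    for (const UnitUpdate& u : updates) {
        out.push_back(static_cast<int32_t>(u.id - prev.id));          // small for similar IDs
        out.push_back(u.x - prev.x);                                  // small for close positions
        out.push_back(u.y - prev.y);
        out.push_back(static_cast<int32_t>(u.action) - prev.action);  // zero when the action repeats
        prev = u;
    }
    return out;  // feed this stream into any general-purpose compressor
}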
The first step in optimizing the algorithm comes from noting that a lot of information is covered by more than one element of the connected player (in an RTS, players usually move their units in fairly large groups, and all of them see more or less the same things). Although it would be possible to simply flag each element once it has been sent in order to avoid retransmission, we will take a slightly longer route, which will prove more useful in the end.
World Segmentation
We divide the world into small square regions (see Figure 2.8.1), which must be bigger than the largest line-of-sight (LOS) radius, so that a unit in a square can only see into the adjacent squares. The region should not exceed the LOS by much, so that the adjacent squares do not include excess information that cannot be seen. Note that this bears a vague similarity to grid-section solutions (see [Ferguson01]), but serves a different purpose.
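For concreteness, a minimal sketch of the segmentation itself (the names and the sample constant are ours): the region edge is chosen at least as large as the maximum LOS radius, so any unit sees at most its own square and the eight adjacent ones.

#include <cmath>

// Region edge length in world units; must be at least the largest LOS radius,
// and should not exceed it by much (see text). The value is illustrative only.
const float kRegionSize = 64.0f;

struct RegionCoord { int rx, ry; };

// Map a world position to the square region containing it.
RegionCoord RegionOf(float worldX, float worldY) {
    return { static_cast<int>(std::floor(worldX / kRegionSize)),
             static_cast<int>(std::floor(worldY / kRegionSize)) };
}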
For each region, we keep track of all the elements in the region and of all the players who should receive information about that region or part of it (the players viewing the region). This makes updating the state of the world relatively trivial, as illustrated by the following pseudocode:
for (each region of map R) do {
    for (each connected player P viewing region R) do
        for (each element E in region R)
            Create message updating element E's state
}
for (each connected player P) do {
    Compress all messages to player P
    Send messages to player P
}
Keeping track of the elements in each region is pretty straightforward, but keeping track of the list of players who view the region is somewhat trickier, as it also depends on their elements in adjacent regions. To do so, we assign a bitfield to each viewing-player entry in the region, designating from which adjacent squares (or the square itself) the player views it. When the first unit of a player enters a region, all the squares adjacent to it are marked as "viewed" by that player (the player is added if not yet present and the proper bit is set). When the last unit of that player leaves the square, all bitfields in adjacent squares are updated, and players who no longer see a square are deleted (see Figure 2.8.1).
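One possible shape of this bookkeeping in C++ (the data layout and names are our assumptions, not the book's code): each region keeps, per viewing player, a 9-bit field in which bit i records that the player has units in the i-th square of the 3x3 neighborhood centered on that region.

#include <cstdint>
#include <map>
#include <utility>

struct Region {
    // playerId -> bitfield of which of the 9 neighborhood squares
    // (including this one) hold that player's units.
    std::map<int, uint16_t> viewers;
};

std::map<std::pair<int, int>, Region> g_regions;   // sparse world grid (assumed layout)
Region& RegionAt(int rx, int ry) { return g_regions[{rx, ry}]; }

// Bit index 0..8 for an offset (dx, dy), each in {-1, 0, 1}; (0, 0) is bit 4.
inline int NeighborBit(int dx, int dy) { return (dy + 1) * 3 + (dx + 1); }

// Called when the FIRST unit of 'player' enters region (rx, ry): mark this
// square and all adjacent squares as viewed by that player.
void OnFirstUnitEntered(int player, int rx, int ry) {
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            RegionAt(rx + dx, ry + dy).viewers[player] |=
                uint16_t(1) << NeighborBit(-dx, -dy);   // which square the units are in, relative to the viewed one
}

// Called when the LAST unit of 'player' leaves region (rx, ry): clear the
// matching bits and drop players whose bitfield becomes empty.
void OnLastUnitLeft(int player, int rx, int ry) {
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            Region& r = RegionAt(rx + dx, ry + dy);
            auto it = r.viewers.find(player);
            if (it == r.viewers.end()) continue;
            it->second &= ~(uint16_t(1) << NeighborBit(-dx, -dy));
            if (it->second == 0) r.viewers.erase(it);   // the player no longer sees this square
        }
}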
This solution still falls short of the mark, as can easily be seen even in the conservative scenario given in the introduction: a player and four allies attack an enemy base guarded by five players, each of whom also has 100 units, for a total of 1000 units. This scenario already requires each client to be sent updates for about 1000 units roughly twice per second. Even at 6 bytes per unit after impressive compression (a single second of battle can contain updates to several attributes: HP, mana, XP, carried items, etc., and each update must also contain the unit's unique identifier), that is on the order of 1000 × 6 × 2 = 12 KB/sec, which exceeds the capacity of standard modems and would cost the game provider a fortune. Moreover, since MMORTS games contain units that persist over many weeks, additional data (more configurable options and abilities) is needed to support them, which adds to the amount of data that must be transferred.
Advanced Methods for Bandwidth Reduction
The next step in reducing the communication bandwidth is to note that many changes in the state of an element do not require update messages to be sent, because the changes are predictable. For example, if a moving unit continues moving in the same direction, there is no need to send updates for that movement; updates are sent only when the unit changes direction or stops. Many network games use prediction of user actions mainly to cut response time rather than bandwidth: if the user chooses the trivially predicted action, the client starts executing the move at once.
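As an illustration only (the structures are assumed, not taken from the article), a minimal dead-reckoning check on the server: a correction is queued for a unit only when its actual state diverges from what the shared prediction rule would produce.

#include <cmath>

struct UnitState {
    float x, y;    // position
    float vx, vy;  // current velocity (direction and speed)
};

// Shared prediction rule: the unit keeps moving in the same direction.
UnitState Predict(const UnitState& s, float dt) {
    return { s.x + s.vx * dt, s.y + s.vy * dt, s.vx, s.vy };
}

// Server-side check: send an update only when the prediction drifts too far,
// typically because the unit changed direction or stopped.
bool NeedsUpdate(const UnitState& actual, const UnitState& lastSent,
                 float elapsed, float tolerance = 0.5f) {
    UnitState predicted = Predict(lastSent, elapsed);
    float dx = actual.x - predicted.x, dy = actual.y - predicted.y;
    return std::sqrt(dx * dx + dy * dy) > tolerance;
}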
It is possible to make a slightly more elaborate prediction scheme that can also take into account more "active" decisions (assuming one choice is favored over the others). The server runs the prediction algorithm against the actual state, and if the prediction fails for any reason, it sends corrections to the clients about it.
RTS games offer the opportunity for a far more elaborate prediction scheme. Since units are usually given "complex" commands (build here, attack there, etc.) and the AI is responsible for decomposing them into step-by-step actions, many of the units' actions are entirely predictable by that same algorithm. Therefore, if we send these complex commands at a low rate and run the same low-level-AI routines on the server and on all viewing clients, we can achieve very low bandwidth usage while keeping the state consistent.
The output of such a prediction algorithm might depend on the unit itself and its surroundings. For example, the movement of a unit A following another unit B depends on the whereabouts of unit B, as well as on the obstacles lying in its immediate path; target selection might depend on all of the local enemy units. Typically, the low-level-AI dependencies are restricted to a certain radius around the unit, creating a dependency bubble. For the output of the algorithms to be consistent between the server and all the clients, they must be given the same commands, and the relevant information inside the dependency bubble must be identical on all of them.
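A sketch of the idea (hypothetical names; the real routines depend on the game's AI): the same low-level "follow" step is compiled into both the server and the client, and it reads only state inside the follower's dependency bubble, so identical commands and identical bubble contents yield identical outputs everywhere.

#include <cmath>

struct Unit { int id; float x, y; float speed; };

// Low-level AI step shared verbatim by the server and every viewing client.
// It depends only on the follower and the leader (and, in a full game, on the
// obstacles inside the dependency bubble), with no randomness and no
// frame-rate dependence, so identical inputs give identical results.
void FollowStep(Unit& follower, const Unit& leader, float dt) {
    float dx = leader.x - follower.x, dy = leader.y - follower.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist < 1e-3f) return;          // already on top of the leader
    float step = follower.speed * dt;
    if (step > dist) step = dist;      // do not overshoot
    follower.x += dx / dist * step;
    follower.y += dy / dist * step;
}

In practice, identical results across machines also require consistent floating-point behavior (or fixed-point arithmetic), since even tiny numeric differences would make the clients drift from the server.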
Developers of MMORTS games should also consider the following issues:
The computation power available on the server restricts the possible prediction power. Naïvely, one could send only very high-level actions, such as the user's own commands, at large time intervals, and let the clients predict everything from them. However, such actions usually require vast amounts of computation. Using the movement example, the entire path of a unit can be reconstructed from its true destination, but even with a fully optimized A* there is no way a server can deal with more than a few thousand units (when millions are needed). Therefore, in practice we should restrict ourselves to actions that are computationally simple. Examples include retaliating against attacks by nearby creatures, or simple steering (e.g., walking a few tiles, following another unit, moving with an offset).
Since the actions of units depend on other units in the dependency bubble, the order of processing is a significant factor. Suppose that B changes its direction in the preceding example. If A is processed first, it will not be aware of the changed direction and will continue on its previous heading; if B is processed first, A will change its heading to match. One way to achieve the same processing order on the server and on all clients is to process all units according to their unique identifier in ascending order. To avoid giving any unit a permanent ordering advantage or disadvantage, one can alternate between ascending and descending order on interleaved turns (see the sketch following these issues).
Another related issue is the consistency of the data in the dependency bubble. For the prediction to be accurate, all data in the dependency bubble must be available and current. On a client machine, part of the dependency bubble of units positioned near the edge of the viewable region falls outside that region, so prediction cannot be used there while remaining consistent with the server. As mentioned previously, once prediction fails, the server must send corrections to the client, resulting in the need for continuous updates for units near the edge of the viewable region.
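As mentioned in the processing-order issue above, here is a minimal sketch (ours) of processing units by ascending ID on even turns and descending ID on odd turns, so that the server and all clients resolve dependencies identically and no unit gains a permanent ordering advantage.

#include <algorithm>
#include <functional>
#include <vector>

struct Unit { int id; /* ... per-unit game state ... */ };

// Process all units in the same order on the server and on every client:
// sort by unique ID, and alternate ascending/descending on interleaved turns.
void ProcessTurn(std::vector<Unit*>& units, long turnNumber,
                 const std::function<void(Unit&)>& runLowLevelAI) {
    std::sort(units.begin(), units.end(),
              [](const Unit* a, const Unit* b) { return a->id < b->id; });
    if (turnNumber % 2 == 1)
        std::reverse(units.begin(), units.end());
    for (Unit* u : units)
        runLowLevelAI(*u);   // shared, deterministic per-unit step
}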
The practical question is how to implement a simple system on the server that knows which units require continuous updates and which units can rely on prediction. One possible solution is to generalize the region segmentation by dividing the regions on the server into two types for each player: the CUR (Continuously Updated Region) and the PUR (Prediction Used Region). The PURs of a certain player are those squares that hold his elements, and the CURs are all the remaining viewed squares. Figure 2.8.2 illustrates a scenario similar to that in Figure 2.8.1.
Intuitively, having only those squares that hold the player's elements as his PURs might seem to provide only mild savings, as most of the viewed area still needs to be continuously updated. It is also possible to define the eight adjacent squares as PURs, and only the squares around them as CURs, giving a better PUR/CUR ratio. In practice, however, the gain from such a choice is small, as the distribution of elements is far from uniform. Units of different players get close to one another only to interact (allies move together and trade, enemies fight, etc.), which usually drives them into the same square. In fact, a careful choice of square size can ensure that the CURs are usually nearly empty even in very large battles with thousands of units and many players. In addition, increasing the number of PURs and CURs widens the area on which the client must be updated, ultimately resulting in larger bandwidth.
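Building on the viewer bitfields sketched earlier (again with assumed names and layout), classifying a viewed region for a given player then reduces to a single test: if the player's own elements are in the square it is a PUR, otherwise it is a CUR.

#include <cstdint>
#include <map>
#include <utility>

enum class RegionType { NotViewed, PUR, CUR };

// Minimal stand-in for the region bookkeeping: per player, a 9-bit field of
// which neighborhood squares hold his units; bit 4 is the square itself.
struct Region { std::map<int, uint16_t> viewers; };
std::map<std::pair<int, int>, Region> g_regions;

RegionType ClassifyForPlayer(int player, int rx, int ry) {
    auto rit = g_regions.find({rx, ry});
    if (rit == g_regions.end()) return RegionType::NotViewed;
    auto vit = rit->second.viewers.find(player);
    if (vit == rit->second.viewers.end()) return RegionType::NotViewed;
    // PUR: the player's own elements are in this square, so their dependency
    // bubbles are covered and prediction can be used.
    if (vit->second & (uint16_t(1) << 4)) return RegionType::PUR;
    // CUR: viewed only from adjacent squares; keep it continuously updated.
    return RegionType::CUR;
}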
To complete the discussion, the data-updating algorithm now looks something like:
for (each region of map R) do {
    for (each connected player P viewing region R) do {
        if (region R is a Border (CUR) region for P) {
            for (each element E in region R) do
                Create message updating E's state
        }
        else {    // region R is an Inner (PUR) region for P
            for (each element E in region R) do {
                if (prediction fails due to a user change)
                    Create message updating E's state
            }
        }
    }
}
for (each connected player P) do {
    Compress all messages to player P
    Send messages to player P
}
It should be noted that the algorithm also produces data that lends itself well to compression. As mentioned earlier, even without any world segmentation a compression algorithm is quite effective, because many consecutive elements are near each other, have similar unique identifiers (IDs), and probably engage in the same action. The advanced world segmentation allows a far better compression ratio:
• The world segmentation essentially ensures that consecutive units lie close to each other.
• At the complex-action level, more units are engaged in the same action than in a step-by-step update (since in an RTS, units move in formations that share the same complex action).
• The game elements are processed in ascending/descending order, so the IDs can be packed even more tightly.
Reusing the World Segmentation for Other Purposes
The world segmentation algorithms suggested in the previous section could also be used on the server and client machines to facilitate/accelerate other common RTS tasks.
Clustering: One of the more compelling ways to divide the workload of maintaining an MMORTS server cluster between individual machines is by spatial world position: each server deals with all the game elements and players in a certain area. The world segmentation algorithm can be extended to provide a seamless world map across servers and to allow shrinking and growing of server areas in real time. Each server is responsible for a certain region, but a small patch of squares around it is still updated by the adjacent servers while remaining viewed by this server; having the data in this patch continuously updated by the other servers allows for fast and easy passage of units across the seam.
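A rough sketch of the ownership test (purely illustrative; real cluster layouts vary): each server owns a rectangle of squares but also mirrors a one-square border patch that the adjacent servers keep updated, so units crossing the seam already have current data on both sides.

struct ServerArea {
    int minRx, minRy, maxRx, maxRy;  // squares this server is authoritative for
};

// The server simulates the squares it owns ...
bool Owns(const ServerArea& a, int rx, int ry) {
    return rx >= a.minRx && rx <= a.maxRx && ry >= a.minRy && ry <= a.maxRy;
}

// ... and additionally mirrors a one-square patch around its area, which is
// continuously updated by the adjacent servers; crossing the seam then becomes
// a local hand-off instead of a full state transfer.
bool Mirrors(const ServerArea& a, int rx, int ry) {
    return !Owns(a, rx, ry) &&
           rx >= a.minRx - 1 && rx <= a.maxRx + 1 &&
           ry >= a.minRy - 1 && ry <= a.maxRy + 1;
}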
3D graphics rendering: It is impractical to keep a mesh of the entire world in an MMORTS. Rather, the mesh is constructed in real time on an as-needed basis, and a cache of recently viewed areas is kept. It is generally very convenient to construct the meshes for areas that match the segmentation, and to keep regions marked as viewed by the player in the cache longer (as they are inherently viewed more often). With a slight effort, even fog-of-war calculations can take advantage of the viewed/unviewed division.
Spatial queries: The data structure can easily be used to answer queries like, "What elements are positioned at a certain location?" and "Which elements are near/around a location?" This greatly accelerates choosing targets for attacking units as well as other actions.
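For example, a nearby-elements query only has to visit the target square and its eight neighbors instead of scanning all units; a small sketch with assumed types:

#include <map>
#include <utility>
#include <vector>

struct Element { int id; float x, y; };

// Per-square element lists, keyed by region coordinate (assumed layout).
std::map<std::pair<int, int>, std::vector<Element*>> g_elementsByRegion;

// "Which elements are near this square?" -- look only at the 3x3 neighborhood.
std::vector<Element*> ElementsNear(int rx, int ry) {
    std::vector<Element*> result;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            auto it = g_elementsByRegion.find({rx + dx, ry + dy});
            if (it != g_elementsByRegion.end())
                result.insert(result.end(), it->second.begin(), it->second.end());
        }
    return result;
}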
AI: The proper segmentation lends itself easily to a tiered approach. First, one builds a scaled-down version of the map by defining each square region as a single tile on a scaled map. Then, groups of units of the same player that are geographically close together are aggregated on the scaled map. This can be repeated iteratively.
Distributing AI to Clients
Because a great deal of AI is needed in an RTS, simply adding a few more computers to the server farm is not sufficient to meet the performance requirements. Since the connected users have a lot of CPU power at their disposal, it is logical to harness that power by distributing AI processes to the clients.
We will focus on the technical issues of distributing processes, and will not dwell on the legal and semi-legal ones, most of which can be covered by adding one more clause to the terms of use that players accept as part of the installation. Still, it is worth mentioning that when one plays a regular, offline RTS, the client already uses part of the CPU for running the "opponent" AI. We suggest that distributed AI in MMORTS games does essentially the same thing.
Returning to technical issues, distributed AI still has to conform to the low-bandwidth restriction, which has a great impact on the types of distributed algorithms that are possible. We provide only guidelines here; fine-tuning depends greatly on the architecture of the MMORTS, on the AI, and on the exact algorithms used, and a more detailed discussion of specific solutions lies outside the scope of this article.
The game should distribute CPU-intensive algorithms that require little input data and produce short answers. Examples include terrain-analysis routines (e.g., finding choke points), optimization algorithms over spaces with many variables (e.g., resource allocation and management), and searches of large data spaces that are built on the fly.
Favor distributing strategic, long-range planning, which needs only minor changes afterward, over tactical short-range decisions, which change rapidly and require constant updating.
If distributing tactical unit decisions is important, transfer control of entire NPC armies to a single client rather than dividing them across clients. There is significant interdependency between units inside an army but little interdependency between units of different armies, so playing a few NPCs of each army on each client would increase the bandwidth tremendously.
All MMOGs share a common problem: content creation. Filling an enormous virtual world is a time-consuming and expensive task for game designers, who therefore seek automated solutions. Since the AI in RTS games includes many routines for "building," it is tempting to apply them to automatic content creation as well. The local flavor of these tasks makes them ideal candidates for distribution.
After deciding what to distribute, we are faced with the practical question of how to build a task-distributing manager. We start by collecting information about the bandwidth and processing power available on each client, in order to decide which clients can handle additional tasks.
Next, each time there is a need to send a location-based query (e.g., terrain analysis) or transfer control over an NPC army to the AI on a client machine, send it to the most appropriate player(s). The algorithm should try to pick players already viewing the regions relevant to the query in order to reduce traffic. For playing NPC armies, this achieves even better bandwidth reduction because all the squares in which these NPCs reside are turned into PURs of that client. For reasons of security and robustness, send the queries to more than one client and keep only the majority result, which avoids dependency on any one client.
The pseudocode for such a system is shown here:
for (each pending distributed job J) do {
    get list L of players viewing regions of J
    sort list L according to size of viewed region of J
    for (each player P in list L) do {
        if (player P hasn't got enough CPU or bandwidth)
            erase P from list L
    }
    if (not enough eligible players remain)
        add players not viewing the region to the list
    send query to top 3 players
}
Note that the list of players viewing regions of J can be formed very quickly by using the advanced world segmentation explained previously.
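To close the loop on the security point made earlier, a minimal sketch (ours; the result type and its comparison are assumptions) of keeping only the majority result returned by the clients a query was sent to:

#include <map>
#include <optional>
#include <string>
#include <vector>

// Keep a distributed result only if a strict majority of the queried clients
// returned the same answer; otherwise discard it (and, for example, re-issue
// the query or fall back to computing it on the server).
std::optional<std::string> MajorityResult(const std::vector<std::string>& answers) {
    std::map<std::string, int> votes;
    for (const std::string& a : answers)
        ++votes[a];
    for (const auto& [answer, count] : votes)
        if (count * 2 > static_cast<int>(answers.size()))
            return answer;
    return std::nullopt;   // no majority: do not trust any single client
}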
Conclusion
The MMORTS genre presents great challenges to both designers and programmers. This article covered mainly the architecture and the algorithms needed to achieve a low per-client bandwidth (1 to 2 KB/sec) for transferring updates on large armies composed of thousands of units. The basic idea common to many network games, sending only the information relevant to each client, was supplemented by world segmentation and complex actions. The suggested scheme also helps other areas of an MMORTS, such as clustering and AI.
References
[Ferguson01] Ferguson, Mitch, and Michael Ballbach, "Product Review: Massively Multiplayer Online Game Middleware," January 2003.
[Jaecheol01] Jaecheol, Kim; Eunsil Hong; Yanghee Choi, "Measurement and Analysis of a Massively Multiplayer Online Role Playing Game Traffic," http://www.apan.net/2003_busan/34.doc
[TNL01] Torque Network Library project documentation, "Torque Network Library Design Fundamentals," http://opentnl.sourceforge.net/doxydocs/fundamentals.html#fundamentals
--
This article is excerpted from Charles River Media's Massively Multiplayer Game Development 2 (ISBN 1-58450-390-4), edited by Thor Alexander.