
Type Safe Flags in C++

Using bits and shifted enums is a standard way of storing and reading flags, but the lack of type safety and clarity makes them prone to error. Type-safe flags in C++ can provide the same functionality while reducing the chance of human error.

Lee Winder, Blogger

October 3, 2010

19 Min Read

Using bits and shifted enums is a pretty standard way of storing and reading flags. It's quick, it's compact and the syntax isn't too difficult to pick up for people who haven't seen it before. But since a bit is only a bit and an enum is only an enum, it's far too easy to make mistakes. There is no type checking when you accidentally use the wrong enum, and passing an unsigned int around loses all relevance to what these 'flags' actually are.

So I wanted to look for a solution that not only stopped us accidentally using the wrong flag with the wrong variable (and allowed the compiler to let us know if it happens), but also removed the loss of relevance when the flags were passed between different systems.

The Problem
Anyone who’s taken Programming 101 will know what I’m talking about.

enum CharacterBattleProperties
{
  HasTarget = (1<<0),
  CarryingWeapon = (1<<1),
};

enum CharacterState
{
  IsStunned = (1<<0),
  IsPoisoned = (1<<1),
};

unsigned int battleFlags;
unsigned int stateFlags;

// Set or remove some flags based on what's happening
// Character has just lost sight of the player
battleFlags &= ~HasTarget;

// We've just picked up a weapon
battleFlags |= CarryingWeapon;

// ...

// Some more code, some more function calls and
// now we have a player who is stunned
battleFlags |= IsStunned;

// Whoops, I just set the wrong flag on the wrong variable...

The First Solution
While investigating a couple of solutions I came across a post on Game Architect titled Evolution of a Flag Set Class, which examined a lot of the questions I was asking myself. It took the idea pretty far, coming up with a set of solutions that more-or-less did what I was looking for, but I wanted something that would be easier to scale and specialise based on the number of flags we had, rather than something that used an integer as the smallest possible storage unit.

So, in trying to keep it as simple as possible, the flag set initially ended up with a very simple interface.

// Stripping it down to its most basic behaviour
template < typename TEnum, typename TBaseType = unsigned char >
class flag_set
{
public:
    void set(const TEnum flag)
    {
      m_flags |= (1 << flag);
    }

    void remove(const TEnum flag)
    {
      m_flags &= ~(1 << flag);
    }

private:
    TBaseType m_flags;
};

You'll notice a couple of things with this implementation.

  • The first is that the default value of the second template parameter is an unsigned char, meaning that in the majority of cases the variable can be declared with just the enum type, but the storage is easy to increase as needed (though it obviously has limits).

  • The second is that the object itself deals with the shifting of the enum value (which also means the enum entries are now plain sequential values rather than pre-shifted bits). This was a conscious decision to reinforce that when setting a flag you're not 'setting a bit', you're setting a property - something I wanted the interface to promote.

And this works pretty well, but contradicts something quite important.

I wanted the flag_set to move away from 'setting bits', so why do we have to specify a base type that makes us think about how many bits we need? And since we won't know how many enum entries are in use until we start to set them at runtime, we could end up with a flag_set that doesn't have enough space to store all our flags, and the compiler won't be able to tell us about it (in real life it asserts when you try to use a flag outside the range of the storage type, but that's not good enough).
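Still, even this first version delivers the type checking I was after. Here's a minimal sketch of that in action (my own illustration, assuming the earlier enums are re-wrapped as nested enums with plain sequential values, since flag_set now does the shifting itself):

struct BattleProperties { enum Enum { HasTarget, CarryingWeapon }; };
struct CharacterState   { enum Enum { IsStunned, IsPoisoned }; };

flag_set< BattleProperties::Enum > battleFlags;
flag_set< CharacterState::Enum >   stateFlags;

battleFlags.set(BattleProperties::CarryingWeapon);  // Fine
battleFlags.set(CharacterState::IsStunned);         // Compile error - wrong enum type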

A More Appropriate Solution
So I wanted to work with the number of flags instead, while keeping the footprint as small as possible. Taking a step back, I started out with the interface I wanted.

template < typename TEnum, int TMaxFlags = 8 >
class flag_set

Where people could define them as follows

// This will assume there are 8 or fewer flags
flag_set< MyFlags::Enum > myDefaultSizeFlagSet;

// This will create enough space for MyFlags::Noof flags
flag_set< MyFlags::Enum, MyFlags::Noof > myDeclaredFlagSet;

So we need a way to create enough storage space for the number of flags we have at compile time, and since the base type of a bitset in the FTL is a 32-bit unsigned int, that's not a good enough solution. There is no point storing 3 flags in an integer when they could drop into a char. If multiple flag sets are packed together, the smaller the storage space the better.

Creating enough storage space is pretty simple: we store an array of unsigned chars sized by how many flags we have.

// Generate the array length based on an 8-bit type
unsigned char m_flags[ ((( TMaxFlags + 7 ) & ~7 ) >> 3) ];

// Figure out which bit to set/remove when needed
// I'm not worrying about endian code in this example...
const int index = ((( (entry+1) + 7 ) & ~7 ) >> 3)-1;
const int offset = entry - (8*index);

// Set the bit
m_flags[index] |= 1 << offset;
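To sanity-check the arithmetic, here's a quick standalone example (my own addition, not from the original post) showing where flag 10 ends up - the mask-and-shift form is just entry / 8 and entry % 8 written out by hand:

#include <cassert>

int main()
{
  const int entry  = 10;                                    // the eleventh flag
  const int index  = ((( (entry+1) + 7 ) & ~7 ) >> 3) - 1;  // ((18 & ~7) >> 3) - 1 = 1
  const int offset = entry - (8 * index);                   // 10 - 8 = 2

  // Flag 10 lives in bit 2 of m_flags[1], matching the divide/modulo form
  assert(index  == entry / 8);
  assert(offset == entry % 8);
  return 0;
}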

Optimising For Certain Types
So we now have a flag set which allows us to have as many flags as we could possibly want (and yes, I have seen code with more than 128 'flags' in the past - don't ask me to justify that, I have no idea...) but with a slight problem. In the original version there were no arrays, no calculations when figuring out which bit to set and no loops to check its state.

All of this over-complicates what could be just 'one bit being set on one variable'.

We can easily get around this by specialising the flag set for sets of 8, 16 or 32 flags. But I didn't want to specialise the whole flag_set, especially when we might continue this specialisation up to 64 and 128 bit flags - specialising a whole class just becomes a nightmare to maintain.

So we extract the storage type into its own structure and specialise that based on the number of flags. This allows the general flag_set behaviour to be defined within the one class, and the length-based behaviour to be extracted into its own implementation.

// Just a note that the following code is simplified to make it
// easier to read on a blog post, but the general gist is the same.

template < int TBitCount >
struct bit_set
{
  enum
  {
    ArrayLength = ((( TBitCount + 7 ) & ~7 ) >> 3),
  };

  unsigned char m_bitArray[ArrayLength];
};

template <>
struct bit_set<8>
{
  enum
  {
    ArrayLength = 1,
  };

  unsigned char m_bitArray;
};

Then we simply replace our array member in the flag set with an instance of the bit_set (where we calculate the bit_set size as a multiple of 8 based on the number of flags we specify in the flag_set).

Since the flag_set doesn't want to worry about what kind of data it's using, we can replace the in-class implementation with a number of utility functions. This not only reduces code duplication but, when specialised for the same reasons as the bit_set, lets us keep the same interface and the same class, with only a small number of functions specialised as needed.

// Basic, non-specialised function for setting
// a flag as called from flag_set::set
template < typename ContainerT >
void set_bit_entry(ContainerT& flags, const unsigned int entry)
{
  // Get the entry point to the bit list
  const int index = ((( (entry+1) + 7 ) & ~7 ) >> 3)-1;
  const int offset = entry - (8*index);

  // Set the bit
  flags.m_bitArray[index] |= 1 << offset;
}

// Specialised for a bit count of 8 - no arrays, no additional calculations
template <>
void set_bit_entry< bit_set< 8 > >(bit_set< 8 >& flags, const unsigned int entry)
{
  flags.m_bitArray |= 1 << entry;
}

// So our basic non-specialised flag set looks something like this
// (code simplified to make it a bit more presentable)
template < typename TEnum, int TMaxFlags = 8 >
class flag_set
{
public:
    void set(const TEnum flag)
    {
      set_bit_entry(m_flags, flag);
    }

    void remove(const TEnum flag)
    {
      // remove_bit_entry (not shown) mirrors set_bit_entry,
      // clearing the bit instead of setting it
      remove_bit_entry(m_flags, flag);
    }

private:
    static const int BitSetSize = ((( TMaxFlags + 7 ) & ~7 ) >> 3)*8;
    bit_set< BitSetSize > m_flags;
};

This method makes it very easy to specialise the flag set for flags that can be stored in a built-in type, falling back to an array when the size doesn't fit into one. If we want to specialise for 16 or 32 bit flags, we simply create specialised versions of our utility functions, keeping the flag_set implementation the same no matter how much additional work is being done behind the scenes.
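As a rough sketch (my own, following the 8-bit pattern above rather than taken from the original post), a 16-bit specialisation might look like this:

template <>
struct bit_set<16>
{
  enum
  {
    ArrayLength = 1,
  };

  unsigned short m_bitArray;  // a single built-in type, no array
};

template <>
void set_bit_entry< bit_set< 16 > >(bit_set< 16 >& flags, const unsigned int entry)
{
  // The whole set fits in one value, so no index/offset maths needed
  flags.m_bitArray |= 1 << entry;
}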

Final Outcome
So we now have a flag set that meets the issues I wanted to resolve head on. Passing a structure with strong type behaviour stops programmers accidentally using the wrong flag with the wrong set (and the compiler's the one that stops them), and passing a named structure to a function keeps the meaning and reason behind the flags intact.

The code obviously increases in complexity over a simple bitwise operation, but from a client's point of view the code becomes more readable. Calls to .set, .remove and .clear make the intention of the code very clear, and make it easier to avoid simple keyboard mistakes.
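As an illustration (my own re-working, assuming the enums from the start of the post are re-wrapped without the manual shifts), the original battle code might now read:

struct BattleProperties { enum Enum { HasTarget, CarryingWeapon, Noof }; };
struct CharacterState   { enum Enum { IsStunned, IsPoisoned, Noof }; };

flag_set< BattleProperties::Enum, BattleProperties::Noof > battleFlags;
flag_set< CharacterState::Enum, CharacterState::Noof >     stateFlags;

// Character has just lost sight of the player
battleFlags.remove(BattleProperties::HasTarget);

// We've just picked up a weapon
battleFlags.set(BattleProperties::CarryingWeapon);

// The stunned state now has to go on the state flags -
// battleFlags.set(CharacterState::IsStunned) wouldn't compile
stateFlags.set(CharacterState::IsStunned);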

It also makes it immeasurably easier to add debugging features to the flag set, making it easier to see what is set and when. But this post has been long enough, so I'll look at that in my next post.

 

This post was originally published on Engineering Game Development on the 26th September 2010.

 
