Introduction: Using Enumerated Types As Bitflags
Often flags are specified in hexadecimal notation using powers of two (so that each flag occupies a single bit location) and defined with names that give meaning to a particular bit location in an unsigned integer. Despite its popularity and wide use, this approach has several problems. The first is that once you've defined your flag values, it's nearly impossible to insert new flags except at the end; otherwise you have to readjust and shift every trailing value behind the newly inserted one. The second, and possibly most important, problem is that it isn't type-safe. That is to say, the way bit values are typically defined (using the preprocessor macro #define) doesn't allow for any strict type checking at compile time, leaving the check for runtime, which can create and propagate some very subtle errors in your code.
This short instructable will show you a new method of creating flags using the bit values of an integer that not only allows you to insert new flags as they are required without any renumbering, but also provides strong type checking.
Step 1: Background
For instance, we could declare an 8-bit integer as a flag for whether or not to display some instructions:
unsigned char DISPLAY_INSTRUCTIONS = 1;
Using an 8-bit value would allow us 2^8, or 256, possible values. Each flag would take up a byte of resources, either on disk or in RAM. We could also use other standard types for the flag like uint16_t, a 16-bit value storing up to 65,536 possible values, probably way more than we could need or ever use. But what if we only had, say, six flags we needed to track? In the above scheme it would require six bytes, one byte for each flag, but using bit flags we could store all six flags (and then some) inside a single byte.
If you're unfamiliar with binary numbers, you might take a moment and read my instructable on number bases, which includes binary, or find one of the many tutorials online. I'll assume you already have a passing understanding of how binary numbers are put together. So, you may be asking: how does one use the individual bits inside an integer as flags in code/software? Typically, this is done by assigning a bit to "1" or "0"; based on its placement inside the integer, it can be used as an affirmation or negation of the specified value. For example, take an 8-bit number, say, zero, and look at it:
0000 0000
Ok, nothing exciting here. Now in the 1's location, make it 1:
0000 0001
We can do the same for the two's location:
0000 0010
or the four's location:
0000 0100
or maybe the 32's and 8's locations:
0010 1000
What numbers do these bit flips make? We don't care. We're only concerned about the individual bits inside the number in this case. To easily assign flag values to bits, it's common to use the following idiom:
#define FLAG1 0x01 // 0000 0001
#define FLAG2 0x02 // 0000 0010
#define FLAG3 0x04 // 0000 0100
#define FLAG4 0x08 // 0000 1000
#define FLAG5 0x10 // 0001 0000
#define FLAG6 0x20 // 0010 0000
#define FLAG7 0x40 // 0100 0000
#define FLAG8 0x80 // 1000 0000
There we've defined our flags. Take note of the pattern in both the hexadecimal number and its binary representation. So, if FLAG5 is set, then the integer flag would have bit 5 set (using a 1-based index, contrary to the more common 0-based, but that's not important right now). Creating the flag variable and setting FLAG5 looks like this:
unsigned char myFlags = 0x00;
myFlags |= FLAG5;
We OR the flags so that any flags already set remain preserved. ANDing, on the other hand, is useful when you want to clear a flag:
myFlags &= ~FLAG3;
The above sets FLAG3's bit to zero. Notice that you are ANDing with the complement of the flag, which clears that one bit while leaving every other bit untouched.
But what happens if you have:
#define FLAG28 0x08000000
and you try to set it with our 8-bit integer? The constant doesn't fit, and you most likely won't get an error at compile time: the compiler will usually promote or otherwise implicitly convert your constant flag into the type that's being operated on, silently truncating the bits that don't fit, so the flag never actually gets set. This isn't always the case, but it often is. This can create hard-to-catch bugs, and the worst kind: runtime bugs. They're bad because they can be harder to track down, harder to replicate, and shipped to your users, which you never want to do if at all possible.
Further, what if you needed a new flag, say, FLAG9 with the value 0x0100? You would have to extend your flag type to accommodate the larger size. And if you instead needed the new flag inserted in the middle of the list, you would have to renumber all the flags behind it. That's painful at best and definitely not maintainable.
What do you do about these problems? How can we guarantee type safety and extensibility to our flags? Turn the page and find out!
Step 2: Using Enumerated Types
First, here's the traditional #define approach gathered in one place:
#define FLAG_1 0x01
#define FLAG_2 0x02
#define FLAG_3 0x04
#define FLAG_4 0x08
...
uint32_t _flags;
// set a flag
_flags |= FLAG_2;
// clear a flag
_flags &= ~FLAG_3;
// test bit flag
if (_flags & FLAG_4)
do_something();
It can sometimes be a great pain to manage the bit masks, especially when you go above the 32-bit flag space. Any small error in a define can completely throw off your flag checking. There is an easier way: build the bit mask dynamically from the bit location, and I've written a class that does just this.
template <typename T>
class EnumFlag {
public:
    void set(T f) {
        // Shift a 64-bit one, not a plain int, so high flags don't overflow.
        _flags |= (uint64_t)1 << (uint64_t)f;
    }
    void unset(T f) {
        _flags &= ~((uint64_t)1 << (uint64_t)f);
    }
    void toggle(T f) {
        _flags ^= (uint64_t)1 << (uint64_t)f;
    }
    void zero() { _flags = (uint64_t)0; }
    bool has(T f) const {
        // Test the bit at the flag's location, not the raw enum value.
        return (_flags & ((uint64_t)1 << (uint64_t)f)) != 0;
    }
private:
    uint64_t _flags;
};
You could change uint64_t to uint32_t if you'll never need more than about 32 flags (uint64_t works fine on 32-bit systems, too). You use an EnumFlag with 1-indexed enumerators, like this:
enum class MyFlags : uint64_t {
FLAG1 = 1,
FLAG2,
FLAG3,
FLAG4
};
EnumFlag<MyFlags> myFlag;
// Initialize the flags
myFlag.zero();
// Set flag2
myFlag.set(MyFlags::FLAG2);
// Check for flag3
if (myFlag.has(MyFlags::FLAG3)) do_something();
Notice the flags are just items in an enumerated data type. This means no hard-coding the flag mask to the flag name, and it gives you an easy way to insert new flags anywhere in the enum; the next time you compile, the enumerator values (and therefore the bit masks) are renumbered for you.
How neat is that?!
Step 3: Wrapping Up
As always, I hope you enjoyed this instructable, and I'm always open to hearing your comments and suggestions about this or any of my other instructables.
Cheers!
Gian