Using the individual bits of an integer is a common and very economical way to add flags to a program. Instead of storing each Boolean or integer flag in its own variable, which may occupy anywhere from one byte (8 bits) to four bytes (32 bits), you can "flip" bits inside a single integer used as an atomic storage location. By atomic, I mean that each individual bit making up the integer serves as its own unique, usable, indivisible unit.

Flags are often specified in hexadecimal notation using powers of two (so each flag occupies a single bit position) and given names that describe the meaning of that bit within an unsigned integer. Despite its popularity and wide use, this approach has several problems. The first is that once you've defined your flag values, it's nearly impossible to insert a new flag anywhere but the end without renumbering every value that follows the insertion point. Perhaps the most important problem, though, is that it isn't type-safe. That is to say, the way bit values are typically defined (with the preprocessor directive #define) allows no strict type checking at compile time, deferring any checks to runtime, which can create and propagate very subtle errors in your code.

This short instructable will show you a method of creating flags from the bit values of an integer that not only lets you insert new flags as they are required without any renumbering, but also provides strong type checking.

Step 1: Background

The use of bit flags goes back to the early days of C programming. The idea is actually very simple: instead of using an entire integer as a Boolean value (true or false, one or zero), you use each of the bits inside the integer as a flag in its own right. This only works if your flags are naturally Boolean values, but in programming they often are.

For instance, we could declare an 8-bit integer as a flag for whether or not to display some instructions:

unsigned char DISPLAY_INSTRUCTIONS = 1;

Using an 8-bit value gives us 2^8, or 256, possible values, even though a Boolean flag needs only two of them. Each such flag costs a full byte of resources, whether on disk or in RAM. We could also use a larger standard type like uint16_t, a 16-bit value with 65,536 possible values, far more than a single yes/no flag could ever use. But what if we only had, say, six flags we needed to track? In the above scheme that would require six bytes, one byte for each flag, but using bit flags we could store all six flags (and then some) inside a single byte.

If you're unfamiliar with binary numbers, take a moment to read my instructable on number bases, which covers binary, or find one of the many tutorials online. I'll assume you already have a passing understanding of how binary numbers are put together. So how does one use the individual bits inside an integer as flags in code? Typically, a bit is assigned "1" or "0", and based on its placement inside the integer it serves as an affirmation or negation of the value it stands for. For example, take an 8-bit number, say, zero, and look at it:

0000 0000

Ok, nothing exciting here. Now set the bit in the 1's location:

0000 0001

We can do the same for the two's location:

0000 0010

or the four's location:

0000 0100

or maybe the 32's and 8's locations:

0010 1000

What numbers do these bit flips make? We don't care. We're only concerned about the individual bits inside the number in this case. To easily assign flag values to bits, it's common to use the following idiom:

#define FLAG1 0x01 // 0000 0001
#define FLAG2 0x02 // 0000 0010
#define FLAG3 0x04 // 0000 0100
#define FLAG4 0x08 // 0000 1000
#define FLAG5 0x10 // 0001 0000
#define FLAG6 0x20 // 0010 0000
#define FLAG7 0x40 // 0100 0000
#define FLAG8 0x80 // 1000 0000

There we've defined our flags. Take note of the pattern in both the hexadecimal numbers and their binary representations. So, if FLAG5 is set, the flag integer has bit 5 set (using a 1-based index rather than the more common 0-based one, but the distinction doesn't matter here). Creating the flag variable and setting FLAG5 looks like this:

unsigned char myFlags = 0x00;
myFlags |= FLAG5;

We OR the flags so that any flags already set are preserved. ANDing, by contrast, clears every bit that is zero in the mask while leaving the rest alone, which makes it useful when you want to clear a flag:

myFlags &= ~FLAG3;

The above sets FLAG3 to zero. Notice that you are ANDing with the complement (~) of the flag: a mask with every bit set except FLAG3's.
But what happens if you have:

#define FLAG28 0x00400000

and you try to set it in our 8-bit integer? The constant simply doesn't fit. The compiler will usually promote or implicitly convert the operands, then silently truncate the result back into the 8-bit type, often with no more than a warning at compile time, if that. The mistake surfaces only at runtime, and runtime bugs are the worst kind: harder to track down, harder to replicate, and liable to reach your users, which you never want if at all possible.

Further, what if you needed a new flag, say a FLAG9 that for whatever reason had to take the value 0x0080? You would have to 1) extend your flag type to accommodate the values pushed past its top bit and 2) insert the flag into your declarations and renumber all the flags after it. That's tedious, error-prone, and certainly not maintainable.

What do you do about these problems? How can we guarantee type safety and extensibility for our flags? Turn the page and find out!

Bio: Gian is a computational biologist and the Managing Director at Open Design Strategies, LLC. He holds a BA in Molecular/Cellular Biology and an ...