Flags are often specified in hexadecimal notation as powers of two, so that each flag occupies a single bit position in an unsigned integer. Despite its popularity and wide use, this approach has several problems. The first is that once you've defined your flag values, it's awkward to insert a new flag anywhere but at the end: every trailing value must be renumbered and shifted to make room for the newly inserted one. The second, and possibly most important, problem is that it isn't type-safe. Because bit values are typically defined with the preprocessor macro #define, the compiler performs no strict type checking, deferring any check to runtime, which can create and propagate very subtle errors in your code.
This short instructable will show you a new method of creating flags from the bit values of an integer that not only lets you insert new flags as they are required without any renumbering, but also provides strong type checking.