#define'd bitflags and enums - peaceful coexistence in "C"

Updated: 2023-02-15 20:45:07


I have just discovered the joy of bitflags. I have several questions related to "best-practices" regarding the use of bitflags in C. I learned everything from various examples I found on the web but still have questions.

In order to save space, I am using a single 32-bit integer field in a struct (A->flag) to represent several different sets of boolean properties. In all, 20 different bits are #defined. Some of these are truly presence/absence flags (STORAGE-INTERNAL vs. STORAGE-EXTERNAL). Others have more than two values (e.g. a mutually exclusive set of formats: FORMAT-A, FORMAT-B, FORMAT-C). I have defined macros for setting specific bits (and simultaneously turning off mutually exclusive bits). I have also defined macros for testing whether a specific combination of bits is set in the flag.

However, what is lost in the above approach is the specific grouping of flags that is best captured by enums. For writing functions, I would like to use enums (e.g., STORAGE-TYPE and FORMAT-TYPE), so that function definitions look nice. I expect to use enums only for passing parameters and #defined macros for setting and testing flags.
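To make the setup concrete, here is a minimal sketch of the arrangement described above. All names and bit positions (STORAGE_INTERNAL, FORMAT_A, the SET_FORMAT/TEST_FLAG macros) are illustrative assumptions, not the original code:

```c
#include <stdint.h>

/* a few of the 20 defined bits, for illustration */
#define STORAGE_INTERNAL 0x0001u
#define STORAGE_EXTERNAL 0x0002u
#define FORMAT_A         0x0010u
#define FORMAT_B         0x0020u
#define FORMAT_C         0x0040u
#define FORMAT_MASK      (FORMAT_A | FORMAT_B | FORMAT_C)

struct A {
    uint32_t flag;   /* packed boolean properties */
    /* ... other members ... */
};

/* set one format bit, first clearing the mutually exclusive ones */
#define SET_FORMAT(f, v)  ((f) = ((f) & ~FORMAT_MASK) | (v))
/* test whether all bits in (v) are set in (f) */
#define TEST_FLAG(f, v)   (((f) & (v)) == (v))
```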

  1. (a) How do I define flag (A->flag) as a 32-bit integer in a portable fashion (across 32-bit/64-bit platforms)?

  2. (b) Should I worry about potential size differences in how A->flag vs. #defined constants vs. enums are stored?

  3. (c) Am I making things unnecessarily complicated, meaning should I just stick to using #defined constants for passing parameters as ordinary ints? What else should I worry about in all this?

I apologize for the poorly articulated question. It reflects my ignorance about potential issues.

As others have said, your problem (a) is resolvable by using <stdint.h> and either uint32_t or uint_least32_t (if you want to worry about Burroughs mainframes which have 36-bit words). Note that MSVC does not support C99, but @DigitalRoss shows where you can obtain a suitable header to use with MSVC.
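A sketch of the two portable declarations (the struct and member names are assumed from the question):

```c
#include <stdint.h>

struct A {
    uint32_t flag;   /* exactly 32 bits; may be absent on exotic platforms */
};

struct A_portable {
    uint_least32_t flag;   /* at least 32 bits; always provided by <stdint.h> */
};
```

On any mainstream 32-bit or 64-bit platform the two are the same type; uint_least32_t only matters on hardware without an exact 32-bit integer.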

Your problem (b) is not an issue; C will type convert safely for you if it is necessary, but it probably isn't even necessary.

The area of most concern is (c) and in particular the format sub-field. There, 3 values are valid. You can handle this by allocating 3 bits and requiring that the 3-bit field is one of the values 1, 2, or 4 (any other value is invalid because of too many or too few bits set). Or you could allocate a 2-bit number, and specify that either 0 or 3 (or, if you really want to, one of 1 or 2) is invalid. The first approach uses one more bit (not currently a problem since you're only using 20 of 32 bits) but is a pure bitflag approach.
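Both encodings can be validated cheaply. Here is a sketch of the two validity checks, with assumed bit positions (bits 4-6 for the one-hot scheme, bits 4-5 for the numeric scheme):

```c
#include <stdint.h>

/* Scheme 1: one-hot, 3 bits -- exactly one of bits 4..6 must be set */
#define FMT_ONEHOT_MASK  0x0070u
static int fmt_onehot_valid(uint32_t flags)
{
    uint32_t f = flags & FMT_ONEHOT_MASK;
    return f != 0 && (f & (f - 1)) == 0;   /* power-of-two test: exactly one bit */
}

/* Scheme 2: 2-bit number at bits 4..5 -- values 1, 2, 3 valid; 0 reserved */
#define FMT_NUM_SHIFT 4
#define FMT_NUM_MASK  (0x3u << FMT_NUM_SHIFT)
static int fmt_num_valid(uint32_t flags)
{
    uint32_t v = (flags & FMT_NUM_MASK) >> FMT_NUM_SHIFT;
    return v != 0;
}
```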

When writing function calls, there is no particular problem writing:

some_function(FORMAT_A | STORAGE_INTERNAL, ...);

This will work whether FORMAT_A is a #define or an enum (as long as you specify the enum value correctly). The called code should check whether the caller had a lapse in concentration and wrote:

some_function(FORMAT_A | FORMAT_B, ...);

But that is an internal check for the module to worry about, not a check for users of the module to worry about.
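A sketch of that internal check inside the called code; the function body and error convention are assumptions:

```c
#include <stdint.h>

#define FORMAT_A    0x0010u
#define FORMAT_B    0x0020u
#define FORMAT_C    0x0040u
#define FORMAT_MASK (FORMAT_A | FORMAT_B | FORMAT_C)

static int some_function(uint32_t flags /*, ... */)
{
    uint32_t fmt = flags & FORMAT_MASK;
    /* reject zero or multiple format bits set */
    if (fmt == 0 || (fmt & (fmt - 1)) != 0)
        return -1;   /* or set errno, assert, etc. */
    /* ... normal processing ... */
    return 0;
}
```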

If people are going to be switching bits in the flags member around a lot, the macros for setting and unsetting the format field might be beneficial. Some might argue that any pure boolean fields barely need it, though (and I'd sympathize). It might be best to treat the flags member as opaque and provide 'functions' (or macros) to get or set all the fields. The less people can get wrong, the less will go wrong.

Consider whether using bit-fields works for you. My experience is that they lead to big code and not necessarily very efficient code; YMMV.
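For comparison, the bit-field alternative might look like this (field names and widths are illustrative):

```c
/* the same 32-bit budget expressed as bit-fields; the layout is
   implementation-defined, so this is unsuitable for on-disk or
   wire formats */
struct flags {
    unsigned storage_internal : 1;
    unsigned compressed       : 1;
    unsigned format           : 2;   /* 0 invalid; 1 = A, 2 = B, 3 = C */
    unsigned                  : 28;  /* reserved */
};
```

The compiler generates the masking and shifting for you on each access, which is where the "bigger, not necessarily faster code" concern comes from.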

Hmmm...nothing very definitive here, so far.

  • I would use enums for everything because those are guaranteed to be visible in a debugger where #define values are not.
  • I would probably not provide macros to get or set bits, but I'm a cruel person at times.
  • I would provide guidance on how to set the format part of the flags field, and might provide a macro to do that.

Like this, perhaps:

enum { ..., FORMAT_A = 0x0010, FORMAT_B = 0x0020, FORMAT_C = 0x0040, ... };
enum { FORMAT_MASK = FORMAT_A | FORMAT_B | FORMAT_C };

#define SET_FORMAT(flag, newval)    (((flag) & ~FORMAT_MASK) | (newval))
#define GET_FORMAT(flag)            ((flag) & FORMAT_MASK)

SET_FORMAT is safe if used accurately but horrid if abused. One advantage of the macros is that you could replace them with a function that validated things thoroughly if necessary; this works well if people use the macros consistently.
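A sketch of that replacement: the call sites stay the same, but a function can validate its argument (the assertion strategy shown is one design choice among several):

```c
#include <assert.h>
#include <stdint.h>

enum { FORMAT_A = 0x0010, FORMAT_B = 0x0020, FORMAT_C = 0x0040 };
enum { FORMAT_MASK = FORMAT_A | FORMAT_B | FORMAT_C };

/* drop-in for the SET_FORMAT macro; rejects invalid format values */
static uint32_t SET_FORMAT(uint32_t flag, uint32_t newval)
{
    assert(newval == FORMAT_A || newval == FORMAT_B || newval == FORMAT_C);
    return (flag & ~(uint32_t)FORMAT_MASK) | newval;
}

#define GET_FORMAT(flag)  ((flag) & FORMAT_MASK)
```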