On Wed, Mar 05, 2025 at 11:48:10PM +0900, Vincent Mailhol wrote:
> On 05/03/2025 at 23:33, Andy Shevchenko wrote:
> > On Wed, Mar 05, 2025 at 10:00:16PM +0900, Vincent Mailhol via B4 Relay wrote:

...

> >> +#define BIT_U8(b)  (BIT_INPUT_CHECK(u8, b) + (unsigned int)BIT(b))
> >> +#define BIT_U16(b) (BIT_INPUT_CHECK(u16, b) + (unsigned int)BIT(b))
> >
> > Why not u8 and u16? This inconsistency needs to be well justified.
>
> Because of the C integer promotion rules, if cast to u8 or u16, the
> expression will immediately become a signed integer as soon as it gets
> used. For example, if cast to u8,
>
>   BIT_U8(0) + BIT_U8(1)
>
> would be a signed integer. And that may surprise people.

Yes, but wouldn't it be better to put it more explicitly, like

#define BIT_U8(b) (BIT_INPUT_CHECK(u8, b) + (u8)BIT(b) + 0 + UL(0)) // + ULL(0)

?

Also, the BIT_Uxx() macros give different types in the end; shouldn't they
all be promoted to unsigned long long in the end?

It probably won't work in real assembly. Can you add test cases written in
assembly? (Yes, I understand that it will be architecture dependent, but
still.)

> David also pointed this out in v3:
>
>   https://lore.kernel.org/intel-xe/d42dc197a15649e69d459362849a37f2@xxxxxxxxxxxxxxxx/
>
> and I agree with his comment.
>
> I explained this in the changelog below the --- cutter, but it is
> probably better to make the explanation more visible. I will add a
> comment in the code to explain this.
>
> >> +#define BIT_U32(b) (BIT_INPUT_CHECK(u32, b) + (u32)BIT(b))
> >> +#define BIT_U64(b) (BIT_INPUT_CHECK(u64, b) + (u64)BIT_ULL(b))

-- 
With Best Regards,
Andy Shevchenko
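
[Editor's illustration, not part of the patch under review: a minimal,
self-contained sketch of the integer promotion behaviour discussed above.
The u8 typedef and the simplified BIT_U8_NARROW()/BIT_U8_WIDE() macros are
assumptions made for this example only; the kernel's real macros also
include BIT_INPUT_CHECK() and BIT().]

/*
 * Illustrative only: shows why a result cast to u8 is promoted to a
 * signed int as soon as it is used in arithmetic, whereas a result kept
 * as unsigned int stays unsigned. Requires a C11 compiler for _Generic.
 */
#include <stdio.h>

typedef unsigned char u8;

/* Hypothetical variant that casts to u8 (what the review argues against). */
#define BIT_U8_NARROW(b)	((u8)(1U << (b)))

/* Variant kept as unsigned int, as in the patch. */
#define BIT_U8_WIDE(b)		((unsigned int)(1U << (b)))

int main(void)
{
	/* u8 + u8: both operands are promoted to int before the addition. */
	printf("narrow sum type: %s\n",
	       _Generic(BIT_U8_NARROW(0) + BIT_U8_NARROW(1),
			int: "signed int (promoted)",
			unsigned int: "unsigned int",
			default: "other"));

	/* unsigned int + unsigned int: result stays unsigned int. */
	printf("wide sum type: %s\n",
	       _Generic(BIT_U8_WIDE(0) + BIT_U8_WIDE(1),
			int: "signed int (promoted)",
			unsigned int: "unsigned int",
			default: "other"));

	return 0;
}

The first printf reports a signed int: the u8 operands of + undergo the
usual integer promotions, which is exactly the surprise the unsigned int
cast in the patch avoids.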