On Thu, Mar 06, 2025 at 02:17:18AM +0900, Vincent Mailhol wrote:
> On 06/03/2025 at 00:48, Andy Shevchenko wrote:
> > On Wed, Mar 05, 2025 at 11:48:10PM +0900, Vincent Mailhol wrote:
> >> On 05/03/2025 at 23:33, Andy Shevchenko wrote:
> >>> On Wed, Mar 05, 2025 at 10:00:16PM +0900, Vincent Mailhol via B4 Relay wrote:

...

> >>>> +#define BIT_U8(b) (BIT_INPUT_CHECK(u8, b) + (unsigned int)BIT(b))
> >>>> +#define BIT_U16(b) (BIT_INPUT_CHECK(u16, b) + (unsigned int)BIT(b))
> >>>
> >>> Why not u8 and u16? This inconsistency needs to be well justified.
> >>
> >> Because of the C integer promotion rules, if cast to u8 or u16, the
> >> expression will immediately become a signed integer as soon as it is
> >> used. For example, if cast to u8,
> >>
> >>   BIT_U8(0) + BIT_U8(1)
> >>
> >> would be a signed integer. And that may surprise people.
> >
> > Yes, but wouldn't it be better to put it more explicitly, like
> >
> > #define BIT_U8(b) (BIT_INPUT_CHECK(u8, b) + (u8)BIT(b) + 0 + UL(0)) // + ULL(0) ?
>
> OK, the final result would be unsigned. But I do not follow how this is
> more explicit.
>
> Also, why do:
>
>   (u8)BIT(b) + 0 + UL(0)
>
> and not just:
>
>   (u8)BIT(b) + UL(0)
>
> ?
>
> What is that intermediary '+ 0' for?
>
> I am sorry, but I am having a hard time understanding how casting to u8
> and then doing an addition with an unsigned long is more explicit than
> directly doing a cast to the desired type.

Reading this again, I think we don't need it at all. u8, aka unsigned char,
will be promoted to int, but it will be an int with a value < 256; it can't
become negative, if I understand this correctly. (A small demonstration of
the promotion behaviour is at the end of this mail.)

> As I mentioned in my answer to Yury, I have a slight preference for the
> unsigned int cast, but I am OK to go back to the u8/u16 cast as it was
> in v3.

Which means that the simple uXX castings should suffice. In any case we
need test cases for that (a rough sketch is at the end of this mail).

> However, I really do not see how that '+ 0 + UL(0)' would be an
> improvement.
>
> > Also, BIT_Uxx() gives a different type at the end, shouldn't they all
> > be promoted to unsigned long long at the end? Probably it won't work
> > in real assembly. Can you add test cases which are written in
> > assembly? (Yes, I understand that it will be architecture dependent,
> > but still.)
>
> No. I purposely guarded the definition of the BIT_Uxx() by a
>
>   #if !defined(__ASSEMBLY__)
>
> so that these are never visible in assembly. I actually put a comment to
> explain why the GENMASK_U*() are not available in assembly. I can
> copy-paste the same comment to explain why BIT_U*() are not made
> available either:
>
> /*
>  * Missing asm support
>  *
>  * BIT_U*() depends on BITS_PER_TYPE() which would not work in the asm
>  * code as BITS_PER_TYPE() relies on sizeof(), something not available
>  * in asm. Nevertheless, the concept of fixed width integers is a C
>  * thing which does not apply to assembly code.
>  */
>
> I really believe that it would be a mistake to make the GENMASK_U*() or
> the BIT_U*() available to assembly.

Ah, okay then!

> >> David also pointed this out in v3:
> >>
> >> https://lore.kernel.org/intel-xe/d42dc197a15649e69d459362849a37f2@xxxxxxxxxxxxxxxx/
> >>
> >> and I agree with his comment.

Why won't unsigned char work?

> >> I explained this in the changelog below the --- cutter, but it is
> >> probably better to make the explanation more visible. I will add a
> >> comment in the code to explain this.
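FWIW, here is a standalone userspace sketch of the promotion behaviour
(only illustrative: u8 is re-typedef'ed locally, and 1U << b stands in
for BIT(b)):

  #include <stdio.h>

  typedef unsigned char u8;	/* stand-in for the kernel typedef */

  /* Resolve the type of an expression at compile time */
  #define TYPE_NAME(x) _Generic((x),		\
	int:          "int",			\
	unsigned int: "unsigned int",		\
	default:      "other")

  int main(void)
  {
	/* A lone (u8) value keeps its own type... */
	printf("%s\n", TYPE_NAME((u8)(1U << 0)));	/* other */

	/* ...but arithmetic promotes the operands to (signed) int... */
	printf("%s\n", TYPE_NAME((u8)(1U << 0) + (u8)(1U << 1)));	/* int */

	/* ...whereas an unsigned int cast keeps the result unsigned. */
	printf("%s\n", TYPE_NAME((unsigned int)(1U << 0) +
				 (unsigned int)(1U << 1)));	/* unsigned int */

	return 0;
  }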
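And a rough idea of the kind of KUnit case I have in mind (the file and
function names are just placeholders, e.g. somewhere in lib/test_bits.c;
it assumes the BIT_U*() macros from this patch):

  static void bit_uxx_test(struct kunit *test)
  {
	/* Value checks */
	KUNIT_EXPECT_EQ(test, BIT_U8(0), 1u);
	KUNIT_EXPECT_EQ(test, BIT_U8(7), 128u);
	KUNIT_EXPECT_EQ(test, BIT_U16(15), 32768u);

	/* If BIT_U8() decayed to a signed type, 1 - 2 would be negative;
	 * with an unsigned result it wraps around instead, cf. the
	 * promotion discussion above.
	 */
	KUNIT_EXPECT_TRUE(test, BIT_U8(0) - BIT_U8(1) > 0);
  }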
> >>
> >>>> +#define BIT_U32(b) (BIT_INPUT_CHECK(u32, b) + (u32)BIT(b))
> >>>> +#define BIT_U64(b) (BIT_INPUT_CHECK(u64, b) + (u64)BIT_ULL(b))

-- 
With Best Regards,
Andy Shevchenko