Paul Clements wrote:
> Michael Tokarev wrote:
>> Neil Brown wrote:
>
>>> ffs is closer, but takes an 'int' and we have a 'unsigned long'.
>>> So use ffz(~X) to convert a chunksize into a chunkshift.
>>
>> So we don't use ffs(int) for an unsigned value because of int vs
>> unsigned int, but we use ffz() with negated UNSIGNED.  Looks even
>> more broken to me, even if it happens to work correctly... ;)
>
> No, it doesn't matter about the signedness, these are just bit
> operations.  The problem is the size (int vs. long), even though in
> practice it's very unlikely you'd ever have a bitmap chunk size that
> exceeded 32 bits.  But it's better to be correct and not have to
> worry about it.

I understand the point, in the first place (I didn't mention long vs
int above, however).  The thing is: when reading the code, it looks
just plain wrong.  Especially since the function prototypes aren't
here; for ffs(), ffz() and friends they're hidden somewhere in
include/asm/* (as they're architecture-dependent), and it's not at
all obvious which is signed and which is unsigned, which takes a
long and which an int, and so on.

At the very least, return -ENOCOMMENT :)

/mjt
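
P.S.  For illustration only -- this is a userspace sketch, not the md
bitmap code, and the helper name is made up.  It just mimics what
ffz(~chunksize) computes for a power-of-two chunk size: the index of
the lowest set bit, i.e. the chunk shift.

	#include <stdio.h>

	/* Userspace stand-in for the kernel's ffz(~chunksize): count the
	 * trailing zero bits of a power-of-two chunksize to get the shift.
	 * In the kernel, ffz() takes an unsigned long and returns a 0-based
	 * bit index, while ffs() takes an int and returns a 1-based one. */
	static unsigned long chunksize_to_chunkshift(unsigned long chunksize)
	{
		unsigned long shift = 0;

		while (!(chunksize & 1UL)) {
			chunksize >>= 1;
			shift++;
		}
		return shift;
	}

	int main(void)
	{
		unsigned long chunksize = 64UL * 1024;	/* 64KiB bitmap chunk */

		/* prints 16, since 1UL << 16 == 64KiB */
		printf("chunkshift = %lu\n", chunksize_to_chunkshift(chunksize));
		return 0;
	}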