Hi Peter,
On 11/04/2014 09:29 AM, Peter Zijlstra wrote:
On Mon, Nov 03, 2014 at 06:39:58PM +0100, Maxime COQUELIN wrote:
On some 32-bit architectures, including x86, GENMASK(31, 0) returns 0
instead of the expected ~0UL.
The same happens on some 64-bit architectures with GENMASK_ULL(63, 0).
This is due to an overflow in the shift operand: 1 << 32 for GENMASK,
1 << 64 for GENMASK_ULL.
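For illustration only (not part of the patch), a minimal userspace sketch
that reproduces the 32-bit failure; UINT32_C stands in for the kernel's
U32_C, and a run-time shift count is used so the hardware behaviour is
visible rather than folded away by the compiler:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/*
	 * GENMASK(31, 0) needs (1 << 32) - 1, but shifting a 32-bit
	 * value by 32 is undefined behaviour.  On x86 the hardware
	 * masks the shift count to 5 bits, so 1 << 32 acts as
	 * 1 << 0 == 1, and the old macro yields 1 - 1 == 0.
	 */
	volatile unsigned int width = 32;
	uint32_t mask = ((UINT32_C(1) << width) - 1) << 0;

	printf("old GENMASK(31, 0) -> 0x%08x\n", mask); /* 0, not 0xffffffff */
	return 0;
}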
Fixes: 10ef6b0dffe404bcc54e94cb2ca1a5b18445a66b
Cc: <stable@xxxxxxxxxxxxxxx> #v3.13+
Reported-by: Eric Paire <eric.paire@xxxxxx>
Signed-off-by: Maxime Coquelin <maxime.coquelin@xxxxxx>
---
include/linux/bitops.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index be5fd38..81f9725 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -18,8 +18,12 @@
* position @h. For example
* GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
*/
-#define GENMASK(h, l) (((U32_C(1) << ((h) - (l) + 1)) - 1) << (l))
-#define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
+#define GENMASK(h, l) \
+ ((~0UL >> ((BITS_PER_LONG - 1) - (h))) & ~((1UL << (l)) - 1))
+
+#define GENMASK_ULL(h, l) \
+ ((~0ULL >> ((BITS_PER_LONG_LONG - 1) - (h))) & ~((1ULL << (l)) - 1))
+
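As a quick userspace check (again not part of the patch), the new
definitions can be exercised at the previously broken boundary;
BITS_PER_LONG and BITS_PER_LONG_LONG are defined locally here as
stand-ins for the kernel's asm/bitsperlong.h values:

#include <stdio.h>

#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define BITS_PER_LONG_LONG	64

#define GENMASK(h, l) \
	((~0UL >> ((BITS_PER_LONG - 1) - (h))) & ~((1UL << (l)) - 1))
#define GENMASK_ULL(h, l) \
	((~0ULL >> ((BITS_PER_LONG_LONG - 1) - (h))) & ~((1ULL << (l)) - 1))

int main(void)
{
	/* Full-width masks: no shift by the type width any more. */
	printf("GENMASK(BITS_PER_LONG - 1, 0) = 0x%lx\n",
	       GENMASK(BITS_PER_LONG - 1, 0));		/* ~0UL */
	printf("GENMASK_ULL(63, 0)            = 0x%llx\n",
	       GENMASK_ULL(63, 0));			/* ~0ULL */
	/* The example from the header comment still holds. */
	printf("GENMASK_ULL(39, 21)           = 0x%016llx\n",
	       GENMASK_ULL(39, 21));			/* 0x000000ffffe00000 */
	return 0;
}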
I was not expecting the mask there, but instead something like:
((~0UL >> (BITS_PER_LONG - (h-l+1))) << l)
which shifts the bits to the desired length and then back to the desired
place. Would that not be more readable?
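For reference, a quick userspace sanity check of that form (with the same
local BITS_PER_LONG stand-in as above): it builds an (h - l + 1)-bit run
of ones and shifts it up to bit l, so the full-range case no longer
shifts by the type width:

#include <assert.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Suggested form: an (h - l + 1)-bit run of ones, shifted up to bit l. */
#define GENMASK_ALT(h, l) \
	((~0UL >> (BITS_PER_LONG - ((h) - (l) + 1))) << (l))

int main(void)
{
	/* Full range: the right-shift count is 0, not the type width. */
	assert(GENMASK_ALT(BITS_PER_LONG - 1, 0) == ~0UL);
	/* An interior range for comparison. */
	assert(GENMASK_ALT(21, 10) == 0x3ffc00UL);
	return 0;
}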
Yes, this is indeed more readable.
I will send a v2 with your implementation.
Thanks,
Maxime