Re: better patch for linux/bitops.h

On 05/04/2016 02:42 PM, I wrote:

> I find it very odd that the other seven functions were not
> upgraded. I suggest the attached fix-others.diff would make
> things more consistent.

Here's a replacement patch.
Same idea, less brain damage.
Sorry for the confusion.

commit ba83b16d8430ee6104aa1feeed4ff7a82b02747a
Author: John Denker <jsd@xxxxxxxx>
Date:   Wed May 4 13:55:51 2016 -0700

    Make ror64, rol64, ror32, ror16, rol16, ror8, and rol8
    consistent with rol32 in their handling of shifting by a zero amount.
    
    Same overall rationale as in d7e35dfa, just more consistently applied.
    
    Beware that shifting by an amount >= the number of bits in the
    word remains Undefined Behavior.  This should be either documented
    or fixed.  It could be fixed easily enough.

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index defeaac..90f389b 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -87,7 +87,7 @@ static __always_inline unsigned long hweight_long(unsigned long w)
  */
 static inline __u64 rol64(__u64 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (64 - shift));
+	return (word << shift) | (word >> ((-shift) & 63));
 }
 
 /**
@@ -97,7 +97,7 @@ static inline __u64 rol64(__u64 word, unsigned int shift)
  */
 static inline __u64 ror64(__u64 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (64 - shift));
+	return (word >> shift) | (word << ((-shift) & 63));
 }
 
 /**
@@ -117,7 +117,7 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
  */
 static inline __u32 ror32(__u32 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (32 - shift));
+	return (word >> shift) | (word << ((-shift) & 31));
 }
 
 /**
@@ -127,7 +127,7 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
  */
 static inline __u16 rol16(__u16 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (16 - shift));
+	return (word << shift) | (word >> ((-shift) & 15));
 }
 
 /**
@@ -137,7 +137,7 @@ static inline __u16 rol16(__u16 word, unsigned int shift)
  */
 static inline __u16 ror16(__u16 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (16 - shift));
+	return (word >> shift) | (word << ((-shift) & 15));
 }
 
 /**
@@ -147,7 +147,7 @@ static inline __u16 ror16(__u16 word, unsigned int shift)
  */
 static inline __u8 rol8(__u8 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (8 - shift));
+	return (word << shift) | (word >> ((-shift) & 7));
 }
 
 /**
@@ -157,7 +157,7 @@ static inline __u8 rol8(__u8 word, unsigned int shift)
  */
 static inline __u8 ror8(__u8 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (8 - shift));
+	return (word >> shift) | (word << ((-shift) & 7));
 }
 
 /**

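For anyone who wants to try the difference outside the kernel tree, below is a
minimal standalone sketch. It is not part of the patch: it uses the C99
<stdint.h>/<inttypes.h> types instead of the kernel's __u32, and a throwaway
main() as a harness. With shift == 0, the old expression evaluates
word >> 32, which is undefined behavior on a 32-bit operand; the masked count
reduces to 0 instead. As noted above, a count >= 32 still invokes undefined
behavior in the left shift.

/* Standalone demonstration -- not kernel code. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/*
 * Same masking trick as the patch: for 0 <= shift < 32, (-shift) & 31
 * equals (32 - shift) % 32, so a zero rotate count never turns into a
 * shift by 32.  Unary minus on an unsigned operand is well-defined
 * (modular), so -shift itself is not UB.
 */
static inline uint32_t rol32_masked(uint32_t word, unsigned int shift)
{
	return (word << shift) | (word >> ((-shift) & 31));
}

int main(void)
{
	uint32_t word = 0x80000001u;

	/* shift == 0: well-defined with the masked form, UB with (32 - shift) */
	printf("rol32(0x%08" PRIx32 ", 0) = 0x%08" PRIx32 "\n",
	       word, rol32_masked(word, 0));
	printf("rol32(0x%08" PRIx32 ", 1) = 0x%08" PRIx32 "\n",
	       word, rol32_masked(word, 1));
	return 0;
}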