<davem@xxxxxxxxxxxxx>, Chris Metcalf <cmetcalf@xxxxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Chris Zankel <chris@xxxxxxxxxx>, Max Filippov <jcmvbkbc@xxxxxxxxx>, Arnd Bergmann <arnd@xxxxxxxx>, x86@xxxxxxxxxx, linux-alpha@xxxxxxxxxxxxxxx, linux-snps-arc@xxxxxxxxxxxxxxxxxxx, linux-arm-kernel@xxxxxxxxxxxxxxxxxxx, linux-hexagon@xxxxxxxxxxxxxxx, linux-ia64@xxxxxxxxxxxxxxx, linux-mips@xxxxxxxxxxxxxx, openrisc@xxxxxxxxxxxxxxxxxxxx, linux-parisc@xxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-s390@xxxxxxxxxxxxxxx, linux-sh@xxxxxxxxxxxxxxx, sparclinux@xxxxxxxxxxxxxxx, linux-xtensa@xxxxxxxxxxxxxxxx, linux-arch@xxxxxxxxxxxxxxx
From: hpa@xxxxxxxxx
Message-ID: <CF18535E-39E7-44D3-88D0-80B9961E6681@xxxxxxxxx>

On March 4, 2017 1:38:05 PM PST, Stafford Horne <shorne@xxxxxxxxx> wrote:
>On Sat, Mar 04, 2017 at 11:15:17AM -0800, H. Peter Anvin wrote:
>> On 03/04/17 05:05, Russell King - ARM Linux wrote:
>> >>
>> >> +static int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
>> >> +{
>> >> +	int op = (encoded_op >> 28) & 7;
>> >> +	int cmp = (encoded_op >> 24) & 15;
>> >> +	int oparg = (encoded_op << 8) >> 20;
>> >> +	int cmparg = (encoded_op << 20) >> 20;
>> >
>> > Hmm.  oparg and cmparg look like they're doing these shifts to get sign
>> > extension of the 12-bit values by assuming that "int" is 32-bit -
>> > probably worth a comment, or for safety, they should be "s32" so it's
>> > not dependent on the bit-width of "int".
>> >
>>
>> For readability, perhaps we should make sign- and zero-extension an
>> explicit facility?
>
>There is some of this in here already, with 32 and 64 bit versions:
>
>  include/linux/bitops.h
>
>Do we really need zero extension? It seems the same.
>
>Example implementation from bitops.h:
>
>static inline __s32 sign_extend32(__u32 value, int index)
>{
>	__u8 shift = 31 - index;
>	return (__s32)(value << shift) >> shift;
>}
>
>> /*
>>  * Truncate an integer x to n bits, using sign- or
>>  * zero-extension, respectively.
>>  */
>> static inline __const_func__ s32 sex32(s32 x, int n)
>> {
>> 	return (x << (32-n)) >> (32-n);
>> }
>>
>> static inline __const_func__ s64 sex64(s64 x, int n)
>> {
>> 	return (x << (64-n)) >> (64-n);
>> }
>>
>> #define sex(x,y) \
>> 	((__typeof__(x)) \
>> 	 (((__builtin_constant_p(y) && ((y) <= 32)) || \
>> 	   (sizeof(x) <= sizeof(s32))) \
>> 	  ? sex32((x),(y)) : sex64((x),(y))))
>>
>> static inline __const_func__ u32 zex32(u32 x, int n)
>> {
>> 	return (x << (32-n)) >> (32-n);
>> }
>>
>> static inline __const_func__ u64 zex64(u64 x, int n)
>> {
>> 	return (x << (64-n)) >> (64-n);
>> }
>>
>> #define zex(x,y) \
>> 	((__typeof__(x)) \
>> 	 (((__builtin_constant_p(y) && ((y) <= 32)) || \
>> 	   (sizeof(x) <= sizeof(u32))) \
>> 	  ? zex32((x),(y)) : zex64((x),(y))))
>>

Also, I strongly believe that making it syntactically cumbersome encourages people to open-code it, which is bad...
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
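
For concreteness, a minimal sketch of how the decode quoted above could use the existing sign_extend32() helper from include/linux/bitops.h. This is not code from the thread and is untested; the wrapper futex_decode_op() is made up for illustration, and the bit layout is taken from the quoted futex_atomic_op_inuser():

#include <linux/bitops.h>

/*
 * Illustrative only: decode the fields of encoded_op the same way as
 * the quoted code, but with explicit sign extension instead of the
 * (encoded_op << 8) >> 20 shift trick that assumes a 32-bit "int".
 */
static void futex_decode_op(int encoded_op,
			    int *op, int *cmp, int *oparg, int *cmparg)
{
	*op     = (encoded_op >> 28) & 7;
	*cmp    = (encoded_op >> 24) & 15;
	/* bits 23..12: 12-bit signed operand, sign bit at index 11 */
	*oparg  = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
	/* bits 11..0: 12-bit signed compare argument */
	*cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
}

sign_extend32(v, 11) shifts the 12-bit field up so its sign bit lands in bit 31 of an explicit 32-bit type and then shifts it back down arithmetically, which gives the same result as the open-coded shifts without depending on the width of "int".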