Re: [PATCH] x86: bitops: fix build regression

On May 5, 2020 10:44:22 AM PDT, Nick Desaulniers <ndesaulniers@xxxxxxxxxx> wrote:
>From: Sedat Dilek <sedat.dilek@xxxxxxxxx>
>
>It turns out that if your config tickles __builtin_constant_p via
>differences in choices to inline or not, this now produces invalid
>assembly:
>
>$ cat foo.c
>long a(long b, long c) {
>  asm("orb\t%1, %0" : "+q"(c): "r"(b));
>  return c;
>}
>$ gcc foo.c
>foo.c: Assembler messages:
>foo.c:2: Error: `%rax' not allowed with `orb'
>
>The "q" constraint only has meanting on -m32 otherwise is treated as
>"r".
>
>This is easily reproducible via Clang+CONFIG_STAGING=y+CONFIG_VT6656=m,
>or Clang+allyesconfig.
>
>Keep the masking operation to appease sparse (`make C=1`), and add back
>the cast in order to select the proper 8b register alias.
>
> [Nick: reworded]
>
>Cc: stable@xxxxxxxxxxxxxxx
>Cc: Jesse Brandeburg <jesse.brandeburg@xxxxxxxxx>
>Link: https://github.com/ClangBuiltLinux/linux/issues/961
>Link: https://lore.kernel.org/lkml/20200504193524.GA221287@xxxxxxxxxx/
>Fixes: 1651e700664b4 ("x86: Fix bitops.h warning with a moved cast")
>Reported-by: Sedat Dilek <sedat.dilek@xxxxxxxxx>
>Reported-by: kernelci.org bot <bot@xxxxxxxxxxxx>
>Suggested-by: Andy Shevchenko <andriy.shevchenko@xxxxxxxxx>
>Suggested-by: Ilie Halip <ilie.halip@xxxxxxxxx>
>Tested-by: Sedat Dilek <sedat.dilek@xxxxxxxxx>
>Signed-off-by: Sedat Dilek <sedat.dilek@xxxxxxxxx>
>Signed-off-by: Nick Desaulniers <ndesaulniers@xxxxxxxxxx>
>---
> arch/x86/include/asm/bitops.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/arch/x86/include/asm/bitops.h
>b/arch/x86/include/asm/bitops.h
>index b392571c1f1d..139122e5b25b 100644
>--- a/arch/x86/include/asm/bitops.h
>+++ b/arch/x86/include/asm/bitops.h
>@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> 	if (__builtin_constant_p(nr)) {
> 		asm volatile(LOCK_PREFIX "orb %1,%0"
> 			: CONST_MASK_ADDR(nr, addr)
>-			: "iq" (CONST_MASK(nr) & 0xff)
>+			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
> 			: "memory");
> 	} else {
> 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
>@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> 	if (__builtin_constant_p(nr)) {
> 		asm volatile(LOCK_PREFIX "andb %1,%0"
> 			: CONST_MASK_ADDR(nr, addr)
>-			: "iq" (CONST_MASK(nr) ^ 0xff));
>+			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
> 	} else {
> 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");

Drop & 0xff and change ^ 0xff to ~.

The redundancy is confusing.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



