[merged] arm64-mark-__cpus_have_const_cap-as-__always_inline.patch removed from -mm tree

The patch titled
     Subject: arm64: mark (__)cpus_have_const_cap as __always_inline
has been removed from the -mm tree.  Its filename was
     arm64-mark-__cpus_have_const_cap-as-__always_inline.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Masahiro Yamada <yamada.masahiro@xxxxxxxxxxxxx>
Subject: arm64: mark (__)cpus_have_const_cap as __always_inline

This patch prepares for moving CONFIG_OPTIMIZE_INLINING from x86 to a
common place.  We need to eliminate potential issues beforehand.

If CONFIG_OPTIMIZE_INLINING is enabled for arm64, the following errors
are reported:

In file included from ././include/linux/compiler_types.h:68,
                 from <command-line>:
./arch/arm64/include/asm/jump_label.h: In function 'cpus_have_const_cap':
./include/linux/compiler-gcc.h:120:38: warning: asm operand 0 probably doesn't match constraints
 #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
                                      ^~~
./arch/arm64/include/asm/jump_label.h:32:2: note: in expansion of macro 'asm_volatile_goto'
  asm_volatile_goto(
  ^~~~~~~~~~~~~~~~~
./include/linux/compiler-gcc.h:120:38: error: impossible constraint in 'asm'
 #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
                                      ^~~
./arch/arm64/include/asm/jump_label.h:32:2: note: in expansion of macro 'asm_volatile_goto'
  asm_volatile_goto(
  ^~~~~~~~~~~~~~~~~
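
The constraint failure comes from the jump-label asm taking the capability
number through an "i" (immediate) operand, which only works while the
caller chain is fully inlined so the number stays a compile-time constant.
A minimal stand-alone sketch of this (not kernel code; names are made up
for illustration, and a GCC with 'asm goto' support is assumed):

  #include <stdbool.h>

  /* Mirror of the kernel's __always_inline annotation (sketch only). */
  #define my_always_inline inline __attribute__((__always_inline__))

  /*
   * The "i" constraint requires 'num' to be a compile-time constant when
   * the asm is expanded.  With plain 'static inline' the compiler is free
   * to emit the function out of line; 'num' then becomes a runtime
   * argument and GCC reports "impossible constraint in 'asm'", as above.
   */
  static my_always_inline bool cap_is_set(int num)
  {
  	/* Real jump-label code records %c0 in a special section; an empty
  	 * template with the same "i" operand is enough to show the point. */
  	asm goto("" : : "i" (num) : : l_yes);
  	return false;
  l_yes:
  	return true;
  }

  bool check(void)
  {
  	/* The constant 7 folds into the "i" operand only because
  	 * cap_is_set() is guaranteed to be inlined here. */
  	return cap_is_set(7);
  }

Marking both helpers __always_inline, as the hunks below do, removes the
dependence on the compiler's inlining heuristics.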

Link: http://lkml.kernel.org/r/20190423034959.13525-3-yamada.masahiro@xxxxxxxxxxxxx
Signed-off-by: Masahiro Yamada <yamada.masahiro@xxxxxxxxxxxxx>
Tested-by: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Boris Brezillon <bbrezillon@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxx>
Cc: Brian Norris <computersforpeace@xxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxx>
Cc: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Marek Vasut <marek.vasut@xxxxxxxxx>
Cc: Mathieu Malaterre <malat@xxxxxxxxxx>
Cc: Miquel Raynal <miquel.raynal@xxxxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: Richard Weinberger <richard@xxxxxx>
Cc: Russell King <rmk+kernel@xxxxxxxxxxxxxxxx>
Cc: Stefan Agner <stefan@xxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/include/asm/cpufeature.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h~arm64-mark-__cpus_have_const_cap-as-__always_inline
+++ a/arch/arm64/include/asm/cpufeature.h
@@ -401,7 +401,7 @@ unsigned long cpu_get_elf_hwcap2(void);
 #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
 /* System capability check for constant caps */
-static inline bool __cpus_have_const_cap(int num)
+static __always_inline bool __cpus_have_const_cap(int num)
 {
 	if (num >= ARM64_NCAPS)
 		return false;
@@ -415,7 +415,7 @@ static inline bool cpus_have_cap(unsigne
 	return test_bit(num, cpu_hwcaps);
 }
 
-static inline bool cpus_have_const_cap(int num)
+static __always_inline bool cpus_have_const_cap(int num)
 {
 	if (static_branch_likely(&arm64_const_caps_ready))
 		return __cpus_have_const_cap(num);
_

Patches currently in -mm which might be from yamada.masahiro@xxxxxxxxxxxxx are




