This prepares to move CONFIG_OPTIMIZE_INLINING from x86 to a common place.
We need to eliminate potential issues beforehand.

If it is enabled for mips, the following errors are reported:

arch/mips/mm/sc-mips.o: In function `mips_sc_prefetch_enable.part.2':
sc-mips.c:(.text+0x98): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0x9c): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xbc): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xc8): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xdc): undefined reference to `mips_gcr_base'
arch/mips/mm/sc-mips.o:sc-mips.c:(.text.unlikely+0x44): more undefined references to `mips_gcr_base'

Signed-off-by: Masahiro Yamada <yamada.masahiro@xxxxxxxxxxxxx>
---

Changes in v3:
  - forcibly inline __ffs() too

Changes in v2:
  - new patch

 arch/mips/include/asm/bitops.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/bitops.h b/arch/mips/include/asm/bitops.h
index 830c93a010c3..9a466dde9b96 100644
--- a/arch/mips/include/asm/bitops.h
+++ b/arch/mips/include/asm/bitops.h
@@ -482,7 +482,7 @@ static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *
  * Return the bit position (0..63) of the most significant 1 bit in a word
  * Returns -1 if no 1 bit exists
  */
-static inline unsigned long __fls(unsigned long word)
+static __always_inline unsigned long __fls(unsigned long word)
 {
 	int num;
 
@@ -548,7 +548,7 @@ static inline unsigned long __fls(unsigned long word)
  * Returns 0..SZLONG-1
  * Undefined if no bit exists, so code should check against 0 first.
  */
-static inline unsigned long __ffs(unsigned long word)
+static __always_inline unsigned long __ffs(unsigned long word)
 {
 	return __fls(word & -word);
 }
-- 
2.17.1
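
As a rough illustration of the failure mode (a made-up, self-contained sketch,
not the actual MIPS code): optional_base below stands in for a symbol such as
mips_gcr_base that is only defined in some configurations, and lowest_bit()
stands in for __ffs()/__fls(); both names are invented. If the helper is not
forcibly inlined, the comparison against a constant mask is no longer folded
at compile time, the dead branch survives, and the object file keeps an
undefined reference.

/*
 * Minimal standalone sketch, build with e.g.:  gcc -O2 -c sketch.c
 * The names lowest_bit() and optional_base are hypothetical stand-ins.
 */

#ifndef __always_inline
#define __always_inline inline __attribute__((__always_inline__))
#endif

/* Defined in another object file only when the optional feature is built. */
extern volatile unsigned long optional_base[];

/*
 * With a plain "inline" under CONFIG_OPTIMIZE_INLINING, the compiler may keep
 * this helper out of line.  Then lowest_bit(0x100) in the caller is no longer
 * folded to a constant, the dead branch below is kept, and the reference to
 * optional_base leaks into the object file, failing at link time.
 */
static __always_inline unsigned long lowest_bit(unsigned long word)
{
	return __builtin_ctzl(word);	/* position of the lowest set bit */
}

void configure_feature(void)
{
	/* Always false for this constant mask; should be eliminated entirely. */
	if (lowest_bit(0x100) >= 32)
		optional_base[0] = 1;
}

With __always_inline the helper's body is folded into the caller, the branch
is proven dead, and the stray reference disappears, which is the effect the
patch relies on when marking __fls() and __ffs().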