3.16.81-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dmitry Vyukov <dvyukov@xxxxxxxxxx>

commit 31b35f6b4d5285a311e10753f4eb17304326b211 upstream.

It is completely unused and implemented only on x86. Remove it.

Suggested-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20170526172900.91058-1-dvyukov@xxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
[bwh: Backported to 3.16 because this function is broken after
 "x86/atomic: Fix smp_mb__{before,after}_atomic()":
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
 arch/tile/lib/atomic_asm_32.S |  3 +--
 arch/x86/include/asm/atomic.h | 13 -------------
 2 files changed, 1 insertion(+), 15 deletions(-)

--- a/arch/tile/lib/atomic_asm_32.S
+++ b/arch/tile/lib/atomic_asm_32.S
@@ -24,8 +24,7 @@
  * has an opportunity to return -EFAULT to the user if needed.
  * The 64-bit routines just return a "long long" with the value,
  * since they are only used from kernel space and don't expect to fault.
- * Support for 16-bit ops is included in the framework but we don't provide
- * any (x86_64 has an atomic_inc_short(), so we might want to some day).
+ * Support for 16-bit ops is included in the framework but we don't provide any.
  *
  * Note that the caller is advised to issue a suitable L1 or L2
  * prefetch on the address being manipulated to avoid extra stalls.
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -205,19 +205,6 @@ static inline int __atomic_add_unless(at
 	return c;
 }
 
-/**
- * atomic_inc_short - increment of a short integer
- * @v: pointer to type int
- *
- * Atomically adds 1 to @v
- * Returns the new value of @u
- */
-static inline short int atomic_inc_short(short int *v)
-{
-	asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
-	return *v;
-}
-
 /* These are x86-specific, used by some header files */
 #define atomic_clear_mask(mask, addr)				\
 	asm volatile(LOCK_PREFIX "andl %0,%1"			\
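
For reviewers unfamiliar with the helper being dropped: it performed a LOCK-prefixed
16-bit add of 1 and then re-read the value. Below is a minimal userspace sketch of
that same operation, for illustration only; it assumes GCC-style inline asm on
x86/x86-64, and atomic_inc_short_demo and the surrounding test program are
hypothetical, not kernel code.

#include <stdio.h>

/*
 * Illustrative userspace equivalent of the removed atomic_inc_short():
 * a LOCK-prefixed 16-bit increment of *v. Note that the return value is
 * a separate re-read of *v, which is not atomic with the add itself.
 */
static inline short atomic_inc_short_demo(short *v)
{
	__asm__ __volatile__("lock; addw $1, %0" : "+m" (*v) : : "memory");
	return *v;
}

int main(void)
{
	short counter = 41;

	printf("after increment: %d\n", atomic_inc_short_demo(&counter));
	return 0;
}

The non-atomic re-read after the locked add is one reason the helper never made a
good building block; in-tree users of atomic increments use atomic_t with
atomic_inc()/atomic_inc_return() instead, and no caller of atomic_inc_short()
remains, which is why removing it is safe.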