Re: [tip:locking/core] atomics/riscv: Define atomic64_fetch_add_unless()

On Thu, 21 Jun 2018 06:18:59 PDT (-0700), tipbot@xxxxxxxxx wrote:
Commit-ID:  2b523f170e399b0e1c8eec2c4b5889735b0d2b9b
Gitweb:     https://git.kernel.org/tip/2b523f170e399b0e1c8eec2c4b5889735b0d2b9b
Author:     Mark Rutland <mark.rutland@xxxxxxx>
AuthorDate: Thu, 21 Jun 2018 13:13:16 +0100
Committer:  Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Thu, 21 Jun 2018 14:25:24 +0200

atomics/riscv: Define atomic64_fetch_add_unless()

As a step towards unifying the atomic/atomic64/atomic_long APIs, this
patch converts the arch/riscv implementation of atomic64_add_unless()
into an implementation of atomic64_fetch_add_unless().

A wrapper in <linux/atomic.h> will build atomic64_add_unless() atop this,
provided it is given a preprocessor definition.
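The preprocessor definition in question is the "#define
atomic64_fetch_add_unless atomic64_fetch_add_unless" line added by the
patch below; the generic header tests for that macro to decide whether the
architecture supplies its own implementation. A minimal sketch of such a
wrapper (illustrative only, not a verbatim copy of <linux/atomic.h>):

	#ifdef atomic64_fetch_add_unless
	static inline bool atomic64_add_unless(atomic64_t *v, long long a,
					       long long u)
	{
		/* The add was performed iff the old value was not u. */
		return atomic64_fetch_add_unless(v, a, u) != u;
	}
	#endif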

No functional change is intended as a result of this patch.

Acked-by: Palmer Dabbelt <palmer@xxxxxxxxxx>
Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
Reviewed-by: Will Deacon <will.deacon@xxxxxxx>
Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Albert Ou <albert@xxxxxxxxxx>
Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/lkml/20180621121321.4761-14-mark.rutland@xxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 arch/riscv/include/asm/atomic.h | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 5f161daefcd2..d959bbaaad41 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -352,7 +352,7 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #define atomic_fetch_add_unless atomic_fetch_add_unless

 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u)
+static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
 {
        long prev, rc;

@@ -369,11 +369,7 @@ static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u)
 		: "memory");
 	return prev;
 }
-
-static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
-{
-	return __atomic64_add_unless(v, a, u) != u;
-}
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
 #endif

 /*

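For readers without the file at hand, the asm body elided by the diff
context is an LR/SC retry loop. A rough C-level equivalent of the patched
function's logic (a sketch only; the real implementation is RISC-V inline
asm, and fetch_add_unless_sketch is a hypothetical name):

	static long fetch_add_unless_sketch(atomic64_t *v, long a, long u)
	{
		long prev = atomic64_read(v);

		while (prev != u) {
			long old = atomic64_cmpxchg(v, prev, prev + a);

			if (old == prev)
				break;		/* store succeeded */
			prev = old;		/* lost a race; retry */
		}
		/* Return the old value; callers compare it against u. */
		return prev;
	}

Note that the fetch_ variant returns the old value rather than a boolean,
which is what lets the generic wrapper derive atomic64_add_unless() from it.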
Reviewed-by: Palmer Dabbelt <palmer@xxxxxxxxxx>

Thanks!
