Hi Trond,

On Thu, Nov 01, 2018 at 12:17:31AM +0000, Trond Myklebust wrote:
> On Wed, 2018-10-31 at 23:32 +0000, Paul Burton wrote:
> > In this particular case I have no idea why
> > net/sunrpc/auth_gss/gss_krb5_seal.c is using cmpxchg64() at all.
> > It's essentially reinventing atomic64_fetch_inc(), which is already
> > provided everywhere via CONFIG_GENERIC_ATOMIC64 & the spinlock
> > approach. At least for the atomic64_* functions, the assumption
> > that all access will be performed using those same functions seems
> > somewhat reasonable.
> >
> > So how does the below look? Trond?
>
> My one question (and the reason why I went with cmpxchg() in the
> first place) would be about the overflow behaviour for
> atomic_fetch_inc() and friends. I believe those functions should be
> OK on x86, so that when we overflow the counter, it behaves like an
> unsigned value and wraps back around. Is that the case for all
> architectures?
>
> i.e. are atomic_t/atomic64_t always guaranteed to behave like
> u32/u64 on increment?
>
> I could not find any documentation that explicitly stated that they
> should.

Based on the other replies it seems this is at least implicitly assumed
by other code, even if it isn't explicitly documented.

From a MIPS perspective, where atomics are implemented using
load-linked & store-conditional instructions, the actual addition will
be performed using the same addu instruction that a plain integer
addition would generate (regardless of signedness). So there'll be
absolutely no difference in arithmetic between your
gss_seq_send64_fetch_and_inc() function and atomic64_fetch_inc(). I'd
expect the same to be true for other architectures with load-linked &
store-conditional style atomics.

In any case, for the benefit of anyone interested who I didn't copy on
the patch submission, here it is:

  https://lore.kernel.org/lkml/20181101175109.8621-1-paul.burton@xxxxxxxx/

Thanks,
    Paul
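
P.S. For anyone following along, here's a minimal sketch of the
pattern in question. The function names & the standalone u64 counter
are made up for illustration (the real code operates on a field of
struct krb5_ctx; see the lore link above for the actual patch), but it
shows why the open-coded cmpxchg64() loop and atomic64_fetch_inc() end
up doing the same job:

	/* What the cmpxchg64()-based code is effectively doing: retry
	 * until no other CPU races with our increment, then return the
	 * pre-increment value. */
	static u64 seq_fetch_and_inc_cmpxchg(u64 *seq)
	{
		u64 old;

		do {
			old = READ_ONCE(*seq);
		} while (cmpxchg64(seq, old, old + 1) != old);

		return old;
	}

	/* The equivalent using the atomic64_* API, which is available
	 * everywhere (via CONFIG_GENERIC_ATOMIC64 when the architecture
	 * has nothing better): */
	static u64 seq_fetch_and_inc_atomic(atomic64_t *seq)
	{
		return (u64)atomic64_fetch_inc(seq);
	}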
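
On the wraparound question specifically, the behaviour I'd expect (and
what the LL/SC argument above implies, though again this is the
implicit contract rather than anything documented) is a plain unsigned
wrap at the top of the range:

	atomic64_t seq = ATOMIC64_INIT(-1);	/* i.e. U64_MAX as a u64 */
	u64 old = (u64)atomic64_fetch_inc(&seq);

	/* old == U64_MAX and atomic64_read(&seq) == 0 - the counter
	 * wraps around exactly as a plain u64 would, with no trap or
	 * saturation. */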
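
And for the curious, here's roughly the shape of the LL/SC loop behind
atomic64_fetch_inc() on MIPS64 - heavily simplified, with the barriers,
ISA directives & exact asm constraints of the real implementation in
arch/mips/include/asm/atomic.h omitted:

	static inline s64 sketch_atomic64_fetch_inc(atomic64_t *v)
	{
		s64 old, tmp;

		do {
			__asm__ __volatile__(
			"	lld	%1, %2		\n" /* load-linked */
			"	daddiu	%0, %1, 1	\n" /* the add */
			"	scd	%0, %2		\n" /* store-conditional */
			: "=&r" (tmp), "=&r" (old), "+m" (v->counter));
		} while (!tmp);

		return old;
	}

The add in the middle (daddiu here, being the 64-bit immediate form) is
the same instruction the compiler emits for a plain 64-bit addition,
signed or unsigned, which is why the arithmetic can't differ between
the two versions.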