On 9/7/22 12:21 PM, Uros Bizjak wrote:
> Use atomic_long_try_cmpxchg instead of
> atomic_long_cmpxchg (*ptr, old, new) == old in __sbitmap_queue_get_batch
> and atomic_try_cmpxchg instead of atomic_cmpxchg (*ptr, old, new) == old
> in __sbq_wake_up. The x86 CMPXCHG instruction returns success in the ZF
> flag, so this change saves a compare after cmpxchg (and the related move
> instruction in front of cmpxchg).
>
> Also, atomic_long_try_cmpxchg implicitly assigns the old *ptr value to
> "old" when cmpxchg fails, enabling further code simplifications, e.g.
> an extra memory read can be avoided in the loop.
>
> No functional change intended.

It doesn't apply to the current tree; can you please resend one against
for-6.1/block?

--
Jens Axboe
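
[Editor's note: for readers unfamiliar with the pattern described in the
quoted changelog, below is a minimal userspace C11 sketch. It is
illustrative only; the function and variable names are made up and it is
not the sbitmap code. C11's atomic_compare_exchange_weak() has the same
shape as the kernel's atomic_long_try_cmpxchg(): it reports success
directly and writes the current value back into "old" on a failed
compare, which is what lets the loop drop the explicit "== old" compare
and the extra re-read.]

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative only: decrement *v if it is positive, retrying on
 * contention. */
static bool dec_if_positive(atomic_long *v)
{
        long old = atomic_load(v);

        do {
                if (old <= 0)
                        return false;
                /* A failed CAS refreshes "old" in place, so no separate
                 * compare against a returned value and no extra load of
                 * *v are needed before retrying. */
        } while (!atomic_compare_exchange_weak(v, &old, old - 1));

        return true;
}

With a cmpxchg-style primitive, which returns the previous value rather
than a success flag, the same loop would need an explicit "== old"
comparison after each attempt and a fresh read of the value before
retrying -- exactly the compare and memory read the patch removes.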