On Thu, 9 Nov 2023 at 21:57, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> So something like this should fix lockref. ENTIRELY UNTESTED, except
> now the code generation of lockref_put_return() looks much better,
> without a pointless flush to the stack, and now it has no pointless
> stack frame as a result.

Heh. And because I was looking at Al's tree, I didn't notice that
commit c6f4a9002252 ("asm-generic: ticket-lock: Optimize
arch_spin_value_unlocked()") had solved the ticket spinlock part of
this in this merge window in the meantime.

The qspinlock implementation - which is what x86 uses - is still
broken in mainline, though. So that part of my patch still stands.

Now attached just the small one-liner part.

Adding Ingo and Guo Ren, who did the ticket lock part (and look to
have done it very similarly to my suggested patch). Ingo?

          Linus
 include/asm-generic/qspinlock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 995513fa2690..0655aa5b57b2 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -70,7 +70,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
  */
 static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
 {
-	return !atomic_read(&lock.val);
+	return !lock.val.counter;
 }
 
 /**
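[For anyone reading along: a minimal userspace sketch of why the
one-liner matters for code generation. The types below are simplified
stand-ins for the kernel's atomic_t and struct qspinlock, not the real
headers; compile both variants with something like "gcc -O2 -S" and
compare the output.]

/*
 * Simplified stand-in for the kernel's atomic_t: a volatile load
 * through a pointer, roughly what atomic_read()/READ_ONCE() boil
 * down to.
 */
typedef struct { volatile int counter; } atomic_t;

struct qspinlock { atomic_t val; };

static inline int atomic_read(const atomic_t *v)
{
	return v->counter;
}

/*
 * Old form: taking the address of the by-value argument forces the
 * compiler to spill "lock" to the stack so the volatile load has an
 * address to read from - the "pointless flush to the stack" above,
 * even though this is only a local copy.
 */
int value_unlocked_old(struct qspinlock lock)
{
	return !atomic_read(&lock.val);
}

/*
 * New form: a plain member read of the local copy needs no address,
 * so the value can stay in a register and no stack frame is needed.
 */
int value_unlocked_new(struct qspinlock lock)
{
	return !lock.val.counter;
}

[Dropping the volatile access is fine here precisely because "lock" is
a by-value snapshot: nobody else can see or modify the copy, so there
is nothing for the volatile semantics to protect.]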