On Tue, Aug 06, 2024 at 12:38:08PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 06, 2024 at 11:44:13AM +0200, Peter Zijlstra wrote:
> > On Mon, Aug 05, 2024 at 07:35:22AM -0700, Darrick J. Wong wrote:
> > > On Wed, Jul 31, 2024 at 12:55:57PM +0200, Peter Zijlstra wrote:
> > > > On Tue, Jul 30, 2024 at 10:33:41PM -0700, Darrick J. Wong wrote:
> > > > >
> > > > > Sooooo... it turns out that somehow your patch got mismerged on the
> > > > > first go-round, and that worked.  The second time, there was no
> > > > > mismerge, which meant that the wrong atomic_cmpxchg() callsite was
> > > > > tested.
> > > > >
> > > > > Looking back at the mismerge, it actually changed
> > > > > __static_key_slow_dec_cpuslocked, which had in 6.10:
> > > > >
> > > > >         if (atomic_dec_and_test(&key->enabled))
> > > > >                 jump_label_update(key);
> > > > >
> > > > > Decrement, then return true if the value was set to zero.  With the 6.11
> > > > > code, it looks like we want to exchange a 1 with a 0, and act only if
> > > > > the previous value had been 1.
> > > > >
> > > > > So perhaps we really want this change?  I'll send it out to the fleet
> > > > > and we'll see what it reports tomorrow morning.
> > > >
> > > > Bah yes, I missed we had it twice. Definitely both sites want this.
> > > >
> > > > I'll tentatively merge the below patch in tip/locking/urgent. I can
> > > > rebase if there is need.
> > >
> > > Hi Peter,
> > >
> > > This morning, I noticed the splat below with -rc2.
> > >
> > > WARNING: CPU: 0 PID: 8578 at kernel/jump_label.c:295 __static_key_slow_dec_cpuslocked.part.0+0x50/0x60
> > >
> > > Line 295 is the else branch of this code:
> > >
> > >         if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
> > >                 jump_label_update(key);
> > >         else
> > >                 WARN_ON_ONCE(!static_key_slow_try_dec(key));
> > >
> > > Apparently static_key_slow_try_dec returned false?  Looking at that
> > > function, I suppose the atomic_read of key->enabled returned 0, since it
> > > didn't trigger the "WARN_ON_ONCE(v < 0)" code.  Does that mean the value
> > > must have dropped from positive N to 0 without anyone ever taking the
> > > jump_label_mutex?
> >
> > One possible scenario I see:
> >
> >   slow_dec
> >     if (try_dec)                    // dec_not_one-ish, false
> >     // enabled == 1
> >                                     slow_inc
> >                                       if (inc_not_disabled) // inc_not_zero-ish
> >                                       // enabled == 2
> >                                       return
> >
> >     guard(mutex)(&jump_label_mutex);
> >     if (atomic_cmpxchg(1,0)==1)     // false, we're 2
> >
> >                                     slow_dec
> >                                       if (try_dec)          // dec_not_one, true
> >                                       // enabled == 1
> >                                       return
> >     else
> >       try_dec()                     // dec_not_one, false
> >       WARN
> >
> >
> > Let me go play to see how best to cure this.
> I've ended up with this, not exactly pretty :/
>
> Thomas?

It seems to survive a short test, will send it out for overnight testing
on the full fleet, thanks.

--D

> ---
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index 6dc76b590703..5fa2c9f094b1 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -168,8 +168,8 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
>                  jump_label_update(key);
>                  /*
>                   * Ensure that when static_key_fast_inc_not_disabled() or
> -                 * static_key_slow_try_dec() observe the positive value,
> -                 * they must also observe all the text changes.
> +                 * static_key_dec() observe the positive value, they must also
> +                 * observe all the text changes.
>                   */
>                  atomic_set_release(&key->enabled, 1);
>          } else {
> @@ -250,7 +250,7 @@ void static_key_disable(struct static_key *key)
>  }
>  EXPORT_SYMBOL_GPL(static_key_disable);
>
> -static bool static_key_slow_try_dec(struct static_key *key)
> +static bool static_key_dec(struct static_key *key, bool fast)
>  {
>          int v;
>
> @@ -268,31 +268,45 @@ static bool static_key_slow_try_dec(struct static_key *key)
>          v = atomic_read(&key->enabled);
>          do {
>                  /*
> -                 * Warn about the '-1' case though; since that means a
> -                 * decrement is concurrent with a first (0->1) increment. IOW
> -                 * people are trying to disable something that wasn't yet fully
> -                 * enabled. This suggests an ordering problem on the user side.
> +                 * Warn about the '-1' case; since that means a decrement is
> +                 * concurrent with a first (0->1) increment. IOW people are
> +                 * trying to disable something that wasn't yet fully enabled.
> +                 * This suggests an ordering problem on the user side.
> +                 *
> +                 * Warn about the '0' case; simple underflow.
> +                 *
> +                 * Neither case should succeed and change things.
> +                 */
> +                if (WARN_ON_ONCE(v <= 0))
> +                        return false;
> +
> +                /*
> +                 * Lockless fast-path, dec-not-one like behaviour.
>                   */
> -                WARN_ON_ONCE(v < 0);
> -                if (v <= 1)
> +                if (fast && v <= 1)
>                          return false;
>          } while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
>
> -        return true;
> +        if (fast)
> +                return true;
> +
> +        /*
> +         * Locked slow path, dec-and-test like behaviour.
> +         */
> +        lockdep_assert_held(&jump_label_mutex);
> +        return v == 1;
>  }
>
>  static void __static_key_slow_dec_cpuslocked(struct static_key *key)
>  {
>          lockdep_assert_cpus_held();
>
> -        if (static_key_slow_try_dec(key))
> +        if (static_key_dec(key, true)) // dec-not-one
>                  return;
>
>          guard(mutex)(&jump_label_mutex);
> -        if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
> +        if (static_key_dec(key, false)) // dec-and-test
>                  jump_label_update(key);
> -        else
> -                WARN_ON_ONCE(!static_key_slow_try_dec(key));
>  }
>
>  static void __static_key_slow_dec(struct static_key *key)
> @@ -329,7 +343,7 @@ void __static_key_slow_dec_deferred(struct static_key *key,
>  {
>          STATIC_KEY_CHECK_USE(key);
>
> -        if (static_key_slow_try_dec(key))
> +        if (static_key_dec(key, true)) // dec-not-one
>                  return;
>
>          schedule_delayed_work(work, timeout);
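As a sanity check on the semantics only, here's a rough userspace model of
the dec-not-one vs dec-and-test split the unified helper above is meant to
provide.  It's a sketch with C11 atomics and made-up names (key_dec,
enabled), not the kernel's atomic_t API:

        /* Hypothetical model for illustration; not the kernel code. */
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_int enabled = 2;          /* stands in for key->enabled */

        /* fast=true: lockless dec-not-one; fast=false: locked dec-and-test */
        static bool key_dec(atomic_int *v, bool fast)
        {
                int old = atomic_load(v);

                do {
                        if (old <= 0)           /* underflow or racing 0->1: refuse */
                                return false;
                        if (fast && old <= 1)   /* fast path must not do 1 -> 0 */
                                return false;
                } while (!atomic_compare_exchange_weak(v, &old, old - 1));

                if (fast)
                        return true;            /* decremented, caller is done */

                return old == 1;                /* slow path: true only on 1 -> 0 */
        }

        int main(void)
        {
                printf("%d\n", key_dec(&enabled, true));  /* 2 -> 1, prints 1 */
                printf("%d\n", key_dec(&enabled, true));  /* at 1, refuses, prints 0 */
                printf("%d\n", key_dec(&enabled, false)); /* 1 -> 0, prints 1 */
                return 0;
        }

That final slow-path true result is what gates jump_label_update() in the
patch, much like the old atomic_dec_and_test() did in 6.10.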