On Fri, Nov 01, 2024 at 04:35:28PM -0700, Paul E. McKenney wrote:
> On Fri, Nov 01, 2024 at 12:54:38PM -0700, Boqun Feng wrote:
> > Paul reported an invalid wait context issue in scftorture caught by
> > lockdep, and the cause of the issue is that scf_handler() may call
> > kfree() to free the struct scf_check:
> > 
> > 	static void scf_handler(void *scfc_in)
> > 	{
> > 		[...]
> > 		} else {
> > 			kfree(scfcp);
> > 		}
> > 	}
> > 
> > (call chain analysis from Marco Elver)
> > 
> > This is problematic because smp_call_function() uses non-threaded
> > interrupts, and kfree() may acquire a local_lock, which is a sleepable
> > lock on RT.
> > 
> > The general rule is: do not allocate or free memory in non-threaded
> > interrupt contexts.
> > 
> > A quick fix is to use a workqueue to defer the kfree(). However, this
> > is OK only because scftorture is test code. In general, users of
> > interrupts should avoid giving interrupt handlers ownership of
> > objects; that is, users should manage the lifetime of objects outside
> > the handlers, and interrupt handlers should only hold references to
> > objects.
> > 
> > Reported-by: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> > Link: https://lore.kernel.org/lkml/41619255-cdc2-4573-a360-7794fc3614f7@paulmck-laptop/
> > Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> 
> Thank you!
> 
> I was worried that putting each kfree() into a separate workqueue
> handler would result in freeing not keeping up with allocation for
> asynchronous testing (for example, scftorture.weight_single=1), but it
> seems to be doing fine in early testing.
> 

I shared the same worry, which is why I added the comment before
queue_work() saying it is OK only because this is test code; it is
certainly not something recommended for general use. But glad it has
turned out OK so far for scftorture ;-)

Regards,
Boqun

> So I have queued this in my -rcu tree for review and further testing.
> 
> 							Thanx, Paul
> 
> > ---
> >  kernel/scftorture.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/scftorture.c b/kernel/scftorture.c
> > index 44e83a646264..ab6dcc7c0116 100644
> > --- a/kernel/scftorture.c
> > +++ b/kernel/scftorture.c
> > @@ -127,6 +127,7 @@ static unsigned long scf_sel_totweight;
> >  
> >  // Communicate between caller and handler.
> >  struct scf_check {
> > +	struct work_struct work;
> >  	bool scfc_in;
> >  	bool scfc_out;
> >  	int scfc_cpu; // -1 for not _single().
> > @@ -252,6 +253,13 @@ static struct scf_selector *scf_sel_rand(struct torture_random_state *trsp)
> >  	return &scf_sel_array[0];
> >  }
> >  
> > +static void kfree_scf_check_work(struct work_struct *w)
> > +{
> > +	struct scf_check *scfcp = container_of(w, struct scf_check, work);
> > +
> > +	kfree(scfcp);
> > +}
> > +
> >  // Update statistics and occasionally burn up mass quantities of CPU time,
> >  // if told to do so via scftorture.longwait. Otherwise, occasionally burn
> >  // a little bit.
> > @@ -296,7 +304,10 @@ static void scf_handler(void *scfc_in)
> >  		if (scfcp->scfc_rpc)
> >  			complete(&scfcp->scfc_completion);
> >  	} else {
> > -		kfree(scfcp);
> > +		// Cannot call kfree() directly, pass it to workqueue. It's OK
> > +		// only because this is test code, avoid this in real world
> > +		// usage.
> > +		queue_work(system_wq, &scfcp->work);
> >  	}
> >  }
> >  
> > @@ -335,6 +346,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
> >  			scfcp->scfc_wait = scfsp->scfs_wait;
> >  			scfcp->scfc_out = false;
> >  			scfcp->scfc_rpc = false;
> > +			INIT_WORK(&scfcp->work, kfree_scf_check_work);
> >  		}
> >  	}
> >  	switch (scfsp->scfs_prim) {
> > -- 
> > 2.45.2
> > 