On Fri, Sep 29, 2023 at 3:35 AM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
>
> Hello,
> Firstly, kudos for the detailed report and analysis. Rare failures are
> hard, and your use of crash/kdump to dig deeper into the issue is
> awesome.
>
> On Thu, Sep 28, 2023 at 3:59 AM zhuangel570 <zhuangel570@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > We encountered an SRCU hang issue in stable tree 5.4.203 while running a
> > concurrent VM create/destroy test; the issue appeared after several weeks.
> > We currently have no way to reproduce it, as it happens randomly; this is
> > the second time we have found it this year.
> >
> >
> > CASE1: an entry of CPU 136 belonging to GP 288 was accelerated to GP 292.
> > - [CPU 136] [GP 280-284] finished, but srcu_invoke_callbacks has not run yet.
> > - [CPU 136] [GP 284-288] starting: new synchronize_srcu request, queues an
> >   entry to the SDP.
> > - [CPU 041] [GP 284-288] starting: new synchronize_srcu request; its workload
> >   runs faster than CPU 136, starts the GP, and calls rcu_seq_start.
> > - [CPU 136] [GP 284-288] starting: calls srcu_funnel_gp_start, finds no need
> >   to start a GP.
> > - [CPU 136] [GP 280-284] finished: srcu_invoke_callbacks finally runs and
> >   "accelerates" the seq of the newly added entry to 292 (it should be 288).
>
> But srcu_gp_seq is at 304 right now. How does it matter that the CB is
> marked for 292? It should be ready to execute anyway even at 292. Note
> the meaning of "acceleration": the idea is to start conservatively and
> move the callbacks forward as more accurate information becomes
> available. Considering this, 292 initially should be OK IMHO (that's
> just more conservative than 288).

Maybe it matters: for a CPU, the callbacks will only be scheduled in
srcu_gp_end() for the GPs for which it has updated ->srcu_data_have_cbs[idx]
and ->srcu_have_cbs[idx].

> > /*
> >  * CASE2
> >  * - entry of CPU 136 belonging to GP 288 was accelerated to GP 296.
> >  * - GP0: 280-284, GP1: 284-288, GP2: 288-292.
> >  */
> >
> > /* [GP0][CPU-136] */
> > process_srcu {
> >     srcu_gp_end
> > }
> >
> > /* [GP1][CPU-136] */
> > synchronize_srcu {
> >     __call_srcu {
> >         rcu_segcblist_enqueue
> >         /* [GP1][CPU-041] */
> >         synchronize_srcu {
> >             __call_srcu {
> >                 srcu_funnel_gp_start
> >                 srcu_gp_start
> >             }
> >         }
> >         process_srcu {
> >             srcu_gp_end
> >             rcu_seq_end
> >         }
> >         /* [GP1][CPU-136] */
> >         srcu_funnel_gp_start
> >     }
> > }
> > /* [GP0][CPU-136] */
> > srcu_invoke_callbacks {

> If srcu_invoke_callbacks() was really called for the rdp, I would have
> expected rcu_segcblist_advance() to advance all those pending
> callbacks to 304.

If the callback is in the NEXT tail and has not been assigned a GP number,
rcu_segcblist_advance() won't move it, and the next accelerate in
srcu_invoke_callbacks() will assign it the next GP sequence number.

Thanks
Neeraj

> I posit that probably srcu_invoke_callbacks() is not even being called
> in the first place, otherwise why is the DONE segment count still 0?
> DONE should not be 0, otherwise callbacks will not run. The segcblist
> of the rdp seems out of date, perhaps due to srcu_invoke_callbacks()
> not being called because of some other bug (or a workqueue/timer issue
> causing srcu_invoke_callbacks() to not run?).
>
> I wonder if you would consider trying to reproduce it with the
> rcutorture module's SRCU testing, on the chance that it occurs with it
> on your setup.
>
> thanks,
>
> - Joel
> >
> >     /* [GP2][CPU-017] */
> >     synchronize_srcu {
> >         __call_srcu {
> >             srcu_funnel_gp_start
> >             srcu_gp_start
> >             rcu_seq_start
> >         }
> >     }
> >     /* [GP0][CPU-136] */
> >     rcu_segcblist_accelerate
> > }
> >
> >
> > Potential fix:
> > We shouldn't accelerate the segcblist in srcu_invoke_callbacks based on a
> > snapshot of the current GP seq of the srcu_struct; how about accelerating
> > the sda based on sdp->srcu_gp_seq_needed, maybe like this:
> >
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 21115ffb6c44..ff8579abc924 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -1197,7 +1197,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
> >         spin_lock_irq_rcu_node(sdp);
> >         rcu_segcblist_insert_count(&sdp->srcu_cblist, &ready_cbs);
> >         (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
> > -                                      rcu_seq_snap(&ssp->srcu_gp_seq));
> > +                                      rcu_seq_snap(&sdp->srcu_gp_seq_needed));
> >         sdp->srcu_cblist_invoking = false;
> >         more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
> >         spin_unlock_irq_rcu_node(sdp);
>
> I think "rcu_seq_snap(&ssp->srcu_gp_seq)" is correct IMHO;
> rcu_seq_snap() gives the gp_seq at which a full GP will have passed
> from the moment rcu_seq_snap() is called. That's precisely when it is
> safe to invoke all those non-accelerated callbacks.
>
> Basically, if any of the bits are set in the state portion of the
> gp_seq number, then it returns (COUNTER_PORTION + 2) << RCU_SEQ_CTR_SHIFT.
> Otherwise, it returns (COUNTER_PORTION + 1) << RCU_SEQ_CTR_SHIFT.
>
> >
> > Thanks for reading!
> >
> >
> > Reference detail log of crash dump information:
> > https://raw.githubusercontent.com/zhuangel/misc/main/debug/srcu_hung/bt.f.252684.log
> > https://raw.githubusercontent.com/zhuangel/misc/main/debug/srcu_hung/irq_srcu.partial.log
> > https://raw.githubusercontent.com/zhuangel/misc/main/debug/srcu_hung/irq_srcu.raw.log
> > https://raw.githubusercontent.com/zhuangel/misc/main/debug/srcu_hung/irq_srcu.sda.lock.log
> > https://raw.githubusercontent.com/zhuangel/misc/main/debug/srcu_hung/irq_srcu.sda.raw.log