Re: [BUG] Random intermittent boost failures (Was Re: [BUG] TREE04..)

On Thu, Sep 14, 2023 at 06:56:27PM +0000, Joel Fernandes wrote:
> On Thu, Sep 14, 2023 at 08:23:38AM -0700, Paul E. McKenney wrote:
> > On Thu, Sep 14, 2023 at 01:13:51PM +0000, Joel Fernandes wrote:
> > > On Thu, Sep 14, 2023 at 04:11:26AM -0700, Paul E. McKenney wrote:
> > > > On Wed, Sep 13, 2023 at 04:30:20PM -0400, Joel Fernandes wrote:
> > > > > On Mon, Sep 11, 2023 at 4:16 AM Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> > > > > [..]
> > > > > > > I am digging deeper to see why the rcu_preempt thread cannot be pushed out,
> > > > > > > and then I'll also look at why it is being pushed out in the first place.
> > > > > > >
> > > > > > > At least I have a strong repro now running 5 instances of TREE03 in parallel
> > > > > > > for several hours.
> > > > > >
> > > > > > Very good!  Then why not boot with rcutorture.onoff_interval=0 and see if
> > > > > > the problem still occurs?  If yes, then there is definitely some reason
> > > > > > other than CPU hotplug that makes this happen.
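(For reference, the way I have been re-running the repro with hotplug disabled
is roughly the following; kvm.sh flags from memory, so treat this as a sketch:

	tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus \
		--configs "5*TREE03" --duration 240 \
		--bootargs "rcutorture.onoff_interval=0"

That keeps the 5-instance TREE03 setup but turns off the hotplug torture.)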
> > > > > 
> > > > > Hi Paul,
> > > > > So far it looks like onoff_interval=0 makes the issue disappear, so it is
> > > > > likely hotplug-related. I am OK with taking cpus_read_lock() during
> > > > > boost testing and seeing if that fixes it. If it does, I can move on
> > > > > to the next thing in my backlog.
> > > > > 
> > > > > What do you think? Or should I spend more time root-causing it? It is
> > > > > most likely runaway RT threads combined with the CPU hotplug threads
> > > > > preventing the rcu_preempt thread from being scheduled. But I can't
> > > > > say for sure without more/better tracing. (Speaking of better tracing,
> > > > > I am adding core-dump support to rcutorture, but it is not there yet.)
> > > > 
> > > > This would not be the first time rcutorture has had trouble with those
> > > > threads, so I am for adding the cpus_read_lock().
> > > > 
> > > > Additional root-causing might be helpful, but then again, you might
> > > > have higher priority things to worry about.  ;-)
> > > 
> > > No worries. Unfortunately, putting cpus_read_lock() around the boost test
> > > causes hangs. I tried something like the following [1]. If you have a diff, I can
> > > quickly try it to see whether the issue goes away as well.
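(For reference, the shape of what I tried was roughly the following; this is a
hand-written sketch rather than the actual diff from [1], and the exact placement
inside rcu_torture_boost() in kernel/rcu/rcutorture.c is from memory:

	/* Sketch: keep CPU hotplug out of the way while one boost interval runs. */
	cpus_read_lock();
	/* ... existing boost interval: spin at RT priority, post the
	 *     rcu_torture_boost callbacks, check for boost failures ... */
	cpus_read_unlock();

Holding cpus_read_lock() across the whole interval is presumably what deadlocks
against the torture_onoff() path, which wants to take a CPU offline while the
booster holds the lock for the entire interval.)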
> > 
> > The other approaches that occur to me are:
> > 
> > 1.	Synchronize with the torture.c CPU-hotplug code.  This is a bit
> > 	tricky as well.
> > 
> > 2.	Rearrange the testing to convert one of the TREE0* scenarios that
> > 	is not in CFLIST (TREE06 or TREE08) to a real-time configuration,
> > 	with boosting but without CPU hotplug.	Then remove boosting
> > 	from TREE04.
> > 
> > Of these, #2 seems most productive.  But is there a better way?
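For concreteness, I read #2 as giving the converted scenario a boot line along
the lines of the following (boost testing always on, hotplug torture off;
parameter names from memory, so only a sketch):

	rcutorture.onoff_interval=0 rcutorture.test_boost=2 rcutree.kthread_prio=2 threadirqs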
> 
> We could run the GP thread at a higher priority for TREE03. What I see
> consistently is that the GP thread gets migrated from CPU M to CPU N, only to
> be immediately sent back. Dumping the state showed that CPU N is running
> ksoftirqd, which is also at RT priority 2.  Making rcu_preempt priority 3 and
> ksoftirqd priority 2 might give rcu_preempt less of a run-around, maybe enough
> to keep the grace period from stalling. I am not sure whether this will fix it,
> but I am running a test to see how it goes and will let you know.
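(For reference, the straightforward knob for that is the rcutree.kthread_prio=
boot parameter, which sets the SCHED_FIFO priority of the RCU kthreads, so
something like rcutree.kthread_prio=3 appended via --bootargs is the kind of
change I mean here, with ksoftirqd left at priority 2.)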

That led to a lot of fireworks. :-) I am wondering, though: do we really need
to run a boost kthread on every CPU? I think that might be the root cause,
because the boost threads run on all CPUs except perhaps the one going offline.

We could run them on just the odd (or even) CPUs and still get sufficient
boost testing. This may be especially important without RT throttling. I'll
go ahead and queue a test like that.
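
The sketch I have in mind is something like the following in
kernel/rcu/rcutorture.c; the function name is from memory, this is untested,
and the hotplug cleanup side would need the matching check:

	/* Sketch: create a booster only on even-numbered CPUs, so that the GP
	 * kthread always has an odd-numbered CPU to run on, even while some
	 * other CPU is going offline. */
	static int rcutorture_booster_init(unsigned int cpu)
	{
		if (cpu & 0x1)
			return 0;	/* No boost kthread on odd-numbered CPUs. */
		/* ... existing per-CPU boost-kthread creation ... */
		return 0;
	}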

Thoughts?

thanks,

 - Joel



