Re: [PATCH V4 2/2] rcu: Update jiffies in rcu_cpu_stall_reset()

On Sun, Aug 27, 2023 at 06:11:40PM -0400, Joel Fernandes wrote:
> On Sun, Aug 27, 2023 at 1:51 AM Huacai Chen <chenhuacai@xxxxxxxxxx> wrote:
> [..]
> > > > > > The only way I know of to avoid these sorts of false positives is for
> > > > > > the user to manually suppress all timeouts (perhaps using a kernel-boot
> > > > > > parameter for your early-boot case), do the gdb work, and then unsuppress
> > > > > > all stalls.  Even that won't work for networking, because the other
> > > > > > system's clock will be running throughout.
> > > > > >
> > > > > > In other words, from what I know now, there is no perfect solution.
> > > > > > Therefore, there are sharp limits to the complexity of any solution that
> > > > > > I will be willing to accept.
> > > > > I think the simplest solution is (I hope Joel will not be angry):
> > > >
> > > > Not angry at all, just want to help. ;-) The problem is that the 300*HZ
> > > > solution will also affect the VM workloads, which do a similar reset.  Allow
> > > > me a few days to see if I can take a shot at fixing it slightly differently.
> > > > I am trying Paul's idea of setting jiffies at a later time. I think it is
> > > > doable. I think the advantage of doing this is that it will make stall
> > > > detection more robust in the face of these gaps in the jiffies update. And
> > > > that solution does not even need us to rely on ktime (and all the issues
> > > > that come with that).
> > > >
> > >
> > > I wrote a patch similar to Paul's idea and sent it out for review, the
> > > advantage being that it is purely based on jiffies. Could you try it out
> > > and let me know?
> > If you can cc my gmail <chenhuacai@xxxxxxxxx>, that would be better.
> 
> Sure, will do.
> 
> > I have read your patch. Maybe the counter (nr_fqs_jiffies_stall)
> > should be an atomic_t, and we should use an atomic operation to
> > decrement its value, because rcu_gp_fqs() can run concurrently and
> > we may miss the (nr_fqs == 1) condition.
> 
> I don't think so. There is only one place where the RMW operation
> happens, and rcu_gp_fqs() is called only from the GP kthread, so a
> concurrent RMW (and hence a lost update) is not possible.
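
For concreteness, a minimal sketch of the single-writer pattern Joel is
describing, reconstructed from this thread rather than taken from the
actual patch, so treat the details as assumptions: nr_fqs_jiffies_stall
follows the name used above, while jiffies_stall and
rcu_jiffies_till_stall_check() are the existing stall-check pieces.

	/*
	 * Sketch only: rcu_gp_fqs() is the sole writer because it runs
	 * only in the GP kthread, so the read-modify-write needs no
	 * atomics.  READ_ONCE()/WRITE_ONCE() guard against torn accesses
	 * from readers such as rcu_cpu_stall_reset().
	 */
	static void rcu_gp_fqs(bool first_time)
	{
		unsigned long nr_fqs = READ_ONCE(rcu_state.nr_fqs_jiffies_stall);

		if (nr_fqs) {
			if (nr_fqs == 1)
				WRITE_ONCE(rcu_state.jiffies_stall,
					   jiffies + rcu_jiffies_till_stall_check());
			WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, --nr_fqs);
		}
		/* ... remainder of force-quiescent-state processing ... */
	}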

Huacai, is your concern that the gdb user might have created a script
(for example, printing a variable or two, then automatically continuing),
so that breakpoints could happen in quick succession, such that the
second breakpoint might run concurrently with rcu_gp_fqs()?

If this can really happen, the point that Joel makes is a good one, namely
that rcu_gp_fqs() is single-threaded and (absent rcutorture) runs only
once every few jiffies.  And gdb breakpoints, even with scripting, should
also be rather rare.  So if this is an issue, a global lock should do the
trick, perhaps even one of the existing locks in the rcu_state structure.
The result should then be just as performant/scalable and a lot simpler
than use of atomics.
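
If that locking really does turn out to be needed, it might look
something like the sketch below.  This is illustration only, not a
tested patch: rcu_get_root() and the *_rcu_node locking helpers are the
existing kernel ones, but NR_FQS_RESET is a hypothetical placeholder
for whatever reset value the patch chooses, and rcu_gp_fqs() would take
the same lock around its decrement.

	/*
	 * Sketch only: serialize the two rare paths that touch
	 * nr_fqs_jiffies_stall with the root rcu_node structure's lock.
	 */
	void rcu_cpu_stall_reset(void)
	{
		struct rcu_node *rnp = rcu_get_root();
		unsigned long flags;

		raw_spin_lock_irqsave_rcu_node(rnp, flags);
		WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, NR_FQS_RESET);
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}

Given how rare both gdb breakpoints and FQS scans are, the lock would be
essentially uncontended, which is why it should cost nothing measurable.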

> Could you test the patch for the issue you are seeing and provide your
> Tested-by tag? Thanks,

Either way, testing would of course be very good!  ;-)

							Thanx, Paul


