Re: [PATCH V4 2/2] rcu: Update jiffies in rcu_cpu_stall_reset()

On Mon, Aug 28, 2023 at 01:33:48PM +0000, Joel Fernandes wrote:
> On Mon, Aug 28, 2023 at 03:47:12AM -0700, Paul E. McKenney wrote:
> > On Sun, Aug 27, 2023 at 06:11:40PM -0400, Joel Fernandes wrote:
> > > On Sun, Aug 27, 2023 at 1:51 AM Huacai Chen <chenhuacai@xxxxxxxxxx> wrote:
> > > [..]
> > > > > > > > The only way I know of to avoid these sorts of false positives is for
> > > > > > > > the user to manually suppress all timeouts (perhaps using a kernel-boot
> > > > > > > > parameter for your early-boot case), do the gdb work, and then unsuppress
> > > > > > > > all stalls.  Even that won't work for networking, because the other
> > > > > > > > system's clock will be running throughout.
> > > > > > > >
> > > > > > > > In other words, from what I know now, there is no perfect solution.
> > > > > > > > Therefore, there are sharp limits to the complexity of any solution that
> > > > > > > > I will be willing to accept.
> > > > > > > I think the simplest solution is (I hope Joel will not be angry):
> > > > > >
> > > > > > Not angry at all, just want to help. ;-). The problem is the 300*HZ solution
> > > > > > will also affect the VM workloads, which do a similar reset.  Allow me a few
> > > > > > days to see if I can take a shot at fixing it slightly differently. I am
> > > > > > trying Paul's idea of setting jiffies at a later time. I think it is doable.
> > > > > > I think the advantage of doing this is that it will make stall detection more
> > > > > > robust in the face of these gaps in jiffies updates. And that solution does
> > > > > > not even need us to rely on ktime (and all the issues that come with that).
> > > > > >
> > > > >
> > > > > I wrote a patch similar to Paul's idea and sent it out for review, the
> > > > > advantage being that it is based purely on jiffies. Could you try it out
> > > > > and let me know?
> > > > If you can cc my gmail <chenhuacai@xxxxxxxxx>, that would be better.
> > > 
> > > Sure, will do.
> > > 
> > > > I have read your patch, maybe the counter (nr_fqs_jiffies_stall)
> > > > should be an atomic_t and we should use an atomic operation to
> > > > decrement its value, because rcu_gp_fqs() can run concurrently and
> > > > we may miss the (nr_fqs == 1) condition.
> > > 
> > > I don't think so. There is only one place where the RMW operation
> > > happens, and rcu_gp_fqs() is called only from the GP kthread, so a
> > > concurrent RMW (and hence a lost update) is not possible.
> > 
> > Huacai, is your concern that the gdb user might have created a script
> > (for example, printing a variable or two, then automatically continuing),
> > so that breakpoints could happen in quick succession, such that the
> > second breakpoint might run concurrently with rcu_gp_fqs()?
> > 
> > If this can really happen, the point that Joel makes is a good one, namely
> > that rcu_gp_fqs() is single-threaded and (absent rcutorture) runs only
> > once every few jiffies.  And gdb breakpoints, even with scripting, should
> > also be rather rare.  So if this is an issue, a global lock should do the
> > trick, perhaps even one of the existing locks in the rcu_state structure.
> > The result should then be just as performant/scalable and a lot simpler
> > than use of atomics.
> 
> Thanks Paul and Huacai. I was also thinking that, in the event of such a
> concurrent breakpoint stalling jiffies updates while the GP kthread /
> rcu_gp_fqs() chugs along, we could make the patch more robust for such a
> situation as follows (diff on top of previous patch [1]). Thoughts?
> 
> Also, if someone sets a breakpoint right after the "nr_fqs == 1" check, then
> they are kind of asking for it anyway, since the GP kthread getting
> stalled is an actual reason for RCU stalls (in fact, rcutorture even has a
> test mode for it :P), and as such the false positive may not be that false. ;-)

That would indeed be asking for it.  But then again, they might have set
a breakpoint elsewhere that had the unintended side-effect of catching
the RCU grace-period kthread right at that point.

If that isn't something we are worried about, your original is fine.
If it is something we are worried about, I recommend learning from my
RCU CPU stall warning experiences and just using a lock.  ;-)
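
For concreteness, a minimal sketch of that lock-based approach (illustrative
and untested; the dedicated lock and the helper's name here are made up for
the sketch, and one of the existing locks in the rcu_state structure could
serve instead):

	/* Hypothetical lock guarding the nr_fqs_jiffies_stall countdown. */
	static DEFINE_RAW_SPINLOCK(rcu_nr_fqs_lock);

	/*
	 * Decrement the countdown and re-arm the stall timeout on the last
	 * tick, all under the lock, so that a concurrent reset (for example,
	 * from a breakpoint path) cannot interleave with the update.
	 */
	static void rcu_fqs_stall_countdown(void)
	{
		int nr_fqs;

		raw_spin_lock(&rcu_nr_fqs_lock);
		nr_fqs = rcu_state.nr_fqs_jiffies_stall;
		if (nr_fqs) {
			if (nr_fqs == 1)
				WRITE_ONCE(rcu_state.jiffies_stall,
					   jiffies + rcu_jiffies_till_stall_check());
			WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, nr_fqs - 1);
		}
		raw_spin_unlock(&rcu_nr_fqs_lock);
	}

with rcu_cpu_stall_reset() taking the same lock around its write of
nr_fqs_jiffies_stall.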

							Thanx, Paul

> Btw, apologies for forgetting to CC Thomas on [1], since he is involved in
> the timekeeping discussions. I relied on "git send-email" to populate the Cc
> list but did not add a Cc: tag to the patch.
> 
> [1] https://lore.kernel.org/all/20230827025349.4161262-1-joel@xxxxxxxxxxxxxxxxx/
> 
> ---8<-----------------------
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 9273f2318ea1..ffb165a2ef41 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1559,13 +1559,15 @@ static void rcu_gp_fqs(bool first_time)
>  	WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
>  
>  	WARN_ON_ONCE(nr_fqs > 3);
> -	if (nr_fqs) {
> +	/* Only count down nr_fqs for stall purposes if jiffies moves. */
> +	if (nr_fqs && jiffies != READ_ONCE(rcu_state.jiffies_last_fqs)) {
>  		if (nr_fqs == 1) {
>  			WRITE_ONCE(rcu_state.jiffies_stall,
>  				   jiffies + rcu_jiffies_till_stall_check());
>  		}
>  		WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, --nr_fqs);
>  	}
> +	WRITE_ONCE(rcu_state.jiffies_last_fqs, jiffies);
>  
>  	if (first_time) {
>  		/* Collect dyntick-idle snapshots. */
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index e9821a8422db..72128e348fa1 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -386,6 +386,8 @@ struct rcu_state {
>  						/*  in jiffies. */
>  	unsigned long jiffies_stall;		/* Time at which to check */
>  						/*  for CPU stalls. */
> +	unsigned long jiffies_last_fqs;		/* jiffies value at last FQS,
> +						   to confirm jiffies moves. */
>  	int nr_fqs_jiffies_stall;		/* Number of fqs loops after
>  						 * which read jiffies and set
>  						 * jiffies_stall. Stall
> diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> index a2fa6b22e248..0ddd22afbc3a 100644
> --- a/kernel/rcu/tree_stall.h
> +++ b/kernel/rcu/tree_stall.h
> @@ -160,6 +160,7 @@ void rcu_cpu_stall_reset(void)
>  {
>  	WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 3);
>  	WRITE_ONCE(rcu_state.jiffies_stall, ULONG_MAX);
> +	WRITE_ONCE(rcu_state.jiffies_last_fqs, 0);
>  }
>  
>  //////////////////////////////////////////////////////////////////////////////
> @@ -177,6 +178,7 @@ static void record_gp_stall_check_time(void)
>  	smp_mb(); // ->gp_start before ->jiffies_stall and caller's ->gp_seq.
>  	WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 0);
>  	WRITE_ONCE(rcu_state.jiffies_stall, j + j1);
> +	WRITE_ONCE(rcu_state.jiffies_last_fqs, 0);
>  	rcu_state.jiffies_resched = j + j1 / 2;
>  	rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
>  }
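
The net effect of the above: rcu_cpu_stall_reset() now arms a three-pass
countdown (nr_fqs_jiffies_stall = 3) rather than writing a far-future
jiffies_stall directly, rcu_gp_fqs() decrements that countdown only on
passes where jiffies has actually advanced since the previous pass (as
tracked by jiffies_last_fqs), and once the countdown reaches one,
jiffies_stall is re-armed from the now-moving jiffies value.  A breakpoint
that freezes jiffies therefore cannot cause the stall deadline to be
computed from a stale clock.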


