Re: [RFC 06/10] rcu/hotplug: Make rcutree_dead_cpu() parallel

On 8/24/2022 12:20 PM, Paul E. McKenney wrote:
> On Wed, Aug 24, 2022 at 09:53:11PM +0800, Pingfan Liu wrote:
>> On Tue, Aug 23, 2022 at 11:01 AM Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
>>>
>>> On Tue, Aug 23, 2022 at 09:50:56AM +0800, Pingfan Liu wrote:
>>>> On Sun, Aug 21, 2022 at 07:45:28PM -0700, Paul E. McKenney wrote:
>>>>> On Mon, Aug 22, 2022 at 10:15:16AM +0800, Pingfan Liu wrote:
>>>>>> In order to support parallel CPU offlining, the decrement of
>>>>>> rcu_state.n_online_cpus should be done with atomic_dec()
>>>>>>
>>>>>> Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
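
(For reference, as I read rcutree_dead_cpu(), the change under discussion
amounts to the following one-liner, sketched here on the assumption that
n_online_cpus is converted from int to atomic_t:)

	-	WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
	+	/* Atomic decrement tolerates concurrent rcutree_dead_cpu() calls. */
	+	atomic_dec(&rcu_state.n_online_cpus);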
>>>>>
>>>>> I have to ask...  What testing have you subjected this patch to?
>>>>>
>>>>
>>>> This patch belongs to [1]. The series aims to enable kexec-reboot in
>>>> parallel on all CPUs. As a result, the involved RCU code is expected to
>>>> support parallel CPU offlining.
>>>
>>> I understand (and even sympathize with) the expectation.  But results
>>> sometimes diverge from expectations.  There have been implicit assumptions
>>> in RCU about only one CPU going offline at a time, and I am not sure
>>> that all of them have been addressed.  Concurrent CPU onlining has
>>> been looked at recently here:
>>>
>>> https://docs.google.com/document/d/1jymsaCPQ1PUDcfjIKm0UIbVdrJAaGX-6cXrmcfm0PRU/edit?usp=sharing
>>>
>>> You did use atomic_dec() to make the decrement of rcu_state.n_online_cpus
>>> atomic, which is good.  Did you look through the rest of RCU's CPU-offline
>>> code paths and related code paths?
>>
>> I went through that code at a shallow level, especially each
>> cpuhp_step hook in the RCU system.
> 
> And that is fine, at least as a first step.
> 
>> But as you pointed out, there are implicit assumptions about only one
>> CPU going offline at a time, so I will digest the Google doc that you
>> shared.  Then I can come to a final conclusion.
> 
> Boqun Feng, Neeraj Upadhyay, Uladzislau Rezki, and I took a quick look,
> and rcu_boost_kthread_setaffinity() seems to need some help.  As it
> stands, it appears that concurrent invocations of this function from the
> CPU-offline path will cause all but the last outgoing CPU's bit to be
> (incorrectly) set in the cpumask_var_t passed to set_cpus_allowed_ptr().
> 
> This should not be difficult to fix, for example, by maintaining a
> separate per-leaf-rcu_node-structure bitmask of the concurrently outgoing
> CPUs for that rcu_node structure.  (Similar in structure to the
> ->qsmask field.)
> 
> There are probably more where that one came from.  ;-)
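
To make sure I follow the suggestion, here is a rough sketch of what such a
fix might look like. The ->offline_mask field name and the use of the
rcu_node lock here are my assumptions, not actual code; the point is only
that each invocation excludes every concurrently outgoing CPU rather than
just its own:

static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
{
	struct task_struct *t = rnp->boost_kthread_task;
	unsigned long mask;
	cpumask_var_t cm;
	int cpu;

	if (!t)
		return;
	if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
		return;

	raw_spin_lock_irq_rcu_node(rnp);
	if (outgoingcpu >= 0) {
		/* Record this outgoing CPU; cleared when it comes back online. */
		rnp->offline_mask |= leaf_node_cpu_bit(rnp, outgoingcpu);
	}
	/* Exclude all concurrently outgoing CPUs, not just this one. */
	mask = rcu_rnp_online_cpus(rnp) & ~rnp->offline_mask;
	raw_spin_unlock_irq_rcu_node(rnp);

	for_each_leaf_node_possible_cpu(rnp, cpu)
		if (mask & leaf_node_cpu_bit(rnp, cpu))
			cpumask_set_cpu(cpu, cm);
	if (cpumask_empty(cm))
		cpumask_setall(cm);
	set_cpus_allowed_ptr(t, cm);
	free_cpumask_var(cm);
}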

On a related note, should rcutree_dying_cpu()'s access to rnp->qsmask have a
READ_ONCE()? I was thinking of grace-period initialization or QS-reporting
paths racing with it. It's just tracing, but still :)
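
Concretely, I mean something like the following (sketched from my reading of
rcutree_dying_cpu(); treat the exact context line as illustrative):

	-	blkd = !!(rnp->qsmask & rdp->grpmask);
	+	blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);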

Thanks,

- Joel


