Re: [PATCH 1/3] rcu: Use static initializer for krc.lock

On Sat, Apr 18, 2020 at 02:37:48PM +0200, Uladzislau Rezki wrote:
> On Fri, Apr 17, 2020 at 11:54:49AM -0700, Paul E. McKenney wrote:
> > On Fri, Apr 17, 2020 at 02:26:41PM -0400, Joel Fernandes wrote:
> > > On Fri, Apr 17, 2020 at 05:04:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > > On 2020-04-16 23:05:15 [-0400], Joel Fernandes wrote:
> > > > > On Thu, Apr 16, 2020 at 11:34:44PM +0200, Sebastian Andrzej Siewior wrote:
> > > > > > On 2020-04-16 14:00:57 [-0700], Paul E. McKenney wrote:
> > > > > > > 
> > > > > > > We might need different calling-context restrictions for the two variants
> > > > > > > of kfree_rcu().  And we might need to come up with some sort of lockdep
> > > > > > > check for "safe to use normal spinlock in -rt".
> > > > > > 
> > > > > > Oh. We do have this already, it is called CONFIG_PROVE_RAW_LOCK_NESTING.
> > > > > > This one will scream if you do
> > > > > > 	raw_spin_lock();
> > > > > > 	spin_lock();
> > > > > > 
> > > > > > Sadly, as of today, there is code triggering this which needs to be
> > > > > > addressed first (but it is on the list of things to do).
> > > > > > 
> > > > > > Given the thread so far, is it okay if I repost the series with
> > > > > > migrate_disable() instead of accepting a possible migration before
> > > > > > grabbing the lock? I would prefer to avoid the extra RT case (avoiding
> > > > > > memory allocations in a possible atomic context) until we get there.
> > > > > 
> > > > > I prefer something like the following to make it possible to invoke
> > > > > kfree_rcu() from atomic context considering call_rcu() is already callable
> > > > > from such contexts. Thoughts?
> > > > 
> > > > So it looks like it would work. However, could we please delay this
> > > > until we have an actual case on RT? I just added
> > > > 	WARN_ON(!preemptible());
> > > 
> > > I am not sure if waiting for it to break in the future is a good idea. I'd
> > > rather design it in a forward-thinking way. There could be folks replacing
> > > "call_rcu() + kfree in a callback" with kfree_rcu() for example. If they were
> > > in !preemptible(), we'd break on page allocation.
> > > 
> > > Also as a sidenote, the additional pre-allocation of pages that Vlad is
> > > planning on adding would further reduce the need for pages from the page
> > > allocator.
> > > 
> > > Paul, what is your opinion on this?
> > 
> > My experience with call_rcu(), of which kfree_rcu() is a specialization,
> > is that it gets invoked with preemption disabled, with interrupts
> > disabled, and during early boot, as in even before rcu_init() has been
> > invoked.  This experience does make me lean towards raw spinlocks.
> > 
> > But to Sebastian's point, if we are going to use raw spinlocks, we need
> > to keep the code paths holding those spinlocks as short as possible.
> > I suppose that the inability to allocate memory with raw spinlocks held
> > helps, but it is worth checking.
> >
> How about reducing the lock contention even further?

Can we do even better by moving the work-scheduling out from under the
spinlock?  This of course means that it is necessary to handle the
occasional spurious call to the work handler, but that should be rare
and should be in the noise compared to the reduction in contention.

Thoughts?

							Thanx, Paul

> <snip>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f288477ee1c2..fb916e065784 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3053,7 +3053,8 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
> 
>         // Previous RCU batch still in progress, try again later.
>         krcp->monitor_todo = true;
> -       schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> +       schedule_delayed_work_on(raw_smp_processor_id(),
> +               &krcp->monitor_work, KFREE_DRAIN_JIFFIES);
>         spin_unlock_irqrestore(&krcp->lock, flags);
>  }
> 
> @@ -3168,7 +3169,8 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
>         if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
>             !krcp->monitor_todo) {
>                 krcp->monitor_todo = true;
> -               schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> +               schedule_delayed_work_on(raw_smp_processor_id(),
> +                       &krcp->monitor_work, KFREE_DRAIN_JIFFIES);
>         }
> 
>  unlock_return:
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 891ccad5f271..49fcc50469f4 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1723,7 +1723,9 @@ static void rcu_work_rcufn(struct rcu_head *rcu)
> 
>         /* read the comment in __queue_work() */
>         local_irq_disable();
> -       __queue_work(WORK_CPU_UNBOUND, rwork->wq, &rwork->work);
> +
> +       /* Just for illustration. Can have queue_rcu_work_on(). */
> +       __queue_work(raw_smp_processor_id(), rwork->wq, &rwork->work);
>         local_irq_enable();
>  }
> <snip>
> 
> Thoughts?
> 
> --
> Vlad Rezki


