Re: [PATCH 1/3] rcu: Use static initializer for krc.lock

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Sun, Apr 19, 2020 at 09:17:49PM -0400, Joel Fernandes wrote:
> On Sun, Apr 19, 2020 at 08:27:13PM -0400, Joel Fernandes wrote:
> > On Sun, Apr 19, 2020 at 07:58:36AM -0700, Paul E. McKenney wrote:
> > > On Sat, Apr 18, 2020 at 02:37:48PM +0200, Uladzislau Rezki wrote:
> > > > On Fri, Apr 17, 2020 at 11:54:49AM -0700, Paul E. McKenney wrote:
> > > > > On Fri, Apr 17, 2020 at 02:26:41PM -0400, Joel Fernandes wrote:
> > > > > > On Fri, Apr 17, 2020 at 05:04:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > > > > > On 2020-04-16 23:05:15 [-0400], Joel Fernandes wrote:
> > > > > > > > On Thu, Apr 16, 2020 at 11:34:44PM +0200, Sebastian Andrzej Siewior wrote:
> > > > > > > > > On 2020-04-16 14:00:57 [-0700], Paul E. McKenney wrote:
> > > > > > > > > > 
> > > > > > > > > > We might need different calling-context restrictions for the two variants
> > > > > > > > > > of kfree_rcu().  And we might need to come up with some sort of lockdep
> > > > > > > > > > check for "safe to use normal spinlock in -rt".
> > > > > > > > > 
> > > > > > > > > Oh. We do have this already, it is called CONFIG_PROVE_RAW_LOCK_NESTING.
> > > > > > > > > This one will scream if you do
> > > > > > > > > 	raw_spin_lock();
> > > > > > > > > 	spin_lock();
> > > > > > > > > 
> > > > > > > > > Sadly, as of today, there is code triggering this which needs to be
> > > > > > > > > addressed first (but it is one list of things to do).
> > > > > > > > > 
> > > > > > > > > Given the thread so far, is it okay if I repost the series with
> > > > > > > > > migrate_disable() instead of accepting a possible migration before
> > > > > > > > > grabbing the lock? I would prefer to avoid the extra RT case (avoiding
> > > > > > > > > memory allocations in a possible atomic context) until we get there.
> > > > > > > > 
> > > > > > > > I prefer something like the following to make it possible to invoke
> > > > > > > > kfree_rcu() from atomic context considering call_rcu() is already callable
> > > > > > > > from such contexts. Thoughts?
> > > > > > > 
> > > > > > > So it looks like it would work. However, could we please delay this
> > > > > > > until we have an actual case on RT? I just added
> > > > > > > 	WARN_ON(!preemptible());
> > > > > > 
> > > > > > I am not sure if waiting for it to break in the future is a good idea. I'd
> > > > > > rather design it in a forward thinking way. There could be folks replacing
> > > > > > "call_rcu() + kfree in a callback" with kfree_rcu() for example. If they were
> > > > > > in !preemptible(), we'd break on page allocation.
> > > > > > 
> > > > > > Also as a sidenote, the additional pre-allocation of pages that Vlad is
> > > > > > planning on adding would further reduce the need for pages from the page
> > > > > > allocator.
> > > > > > 
> > > > > > Paul, what is your opinion on this?
> > > > > 
> > > > > My experience with call_rcu(), of which kfree_rcu() is a specialization,
> > > > > is that it gets invoked with preemption disabled, with interrupts
> > > > > disabled, and during early boot, as in even before rcu_init() has been
> > > > > invoked.  This experience does make me lean towards raw spinlocks.
> > > > > 
> > > > > But to Sebastian's point, if we are going to use raw spinlocks, we need
> > > > > to keep the code paths holding those spinlocks as short as possible.
> > > > > I suppose that the inability to allocate memory with raw spinlocks held
> > > > > helps, but it is worth checking.
> > > > >
> > > > How about reducing the lock contention even further?
> > > 
> > > Can we do even better by moving the work-scheduling out from under the
> > > spinlock?  This of course means that it is necessary to handle the
> > > occasional spurious call to the work handler, but that should be rare
> > > and should be in the noise compared to the reduction in contention.
> > 
> > Yes I think that will be required since -rt will sleep on workqueue locks as
> > well :-(. I'm looking into it right now.
> > 
> >         /*
> >          * If @work was previously on a different pool, it might still be
> >          * running there, in which case the work needs to be queued on that
> >          * pool to guarantee non-reentrancy.
> >          */
> >         last_pool = get_work_pool(work);
> >         if (last_pool && last_pool != pwq->pool) {
> >                 struct worker *worker;
> > 
> >                 spin_lock(&last_pool->lock);
> 
> Hmm, I think moving schedule_delayed_work() outside lock will work. Just took
> a good look and that's not an issue. However calling schedule_delayed_work()
> itself is an issue if the caller of kfree_rcu() is !preemptible() on
> PREEMPT_RT. Because the schedule_delayed_work() calls spin_lock on pool->lock
> which can sleep on PREEMPT_RT :-(. Which means we have to do either of:
> 
> 1. Implement a new mechanism for scheduling delayed work that does not
>    acquire sleeping locks.
> 
> 2. Allow kfree_rcu() only from preemptible context (That is Sebastian's
>    initial patch to replace local_irq_save() + spin_lock() with
>    spin_lock_irqsave()).
> 
> 3. Queue the work through irq_work or another bottom-half mechanism.

I use irq_work elsewhere in RCU, but the queue_delayed_work() might
go well with a timer.  This can of course be done conditionally.

> Any other thoughts?

I did forget to ask you guys your opinions about the downsides (if any)
of moving from unbound to per-CPU workqueues.  Thoughts?

							Thanx, Paul

> thanks,
> 
>  - Joel
> 
> 
> > 
> > Thanks!
> > 
> >  - Joel
> > 
> > 
> > > 
> > > Thoughts?
> > > 
> > > 							Thanx, Paul
> > > 
> > > > <snip>
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > index f288477ee1c2..fb916e065784 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -3053,7 +3053,8 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
> > > > 
> > > >         // Previous RCU batch still in progress, try again later.
> > > >         krcp->monitor_todo = true;
> > > > -       schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> > > > +       schedule_delayed_work_on(raw_smp_processor_id(),
> > > > +               &krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> > > >         spin_unlock_irqrestore(&krcp->lock, flags);
> > > >  }
> > > > 
> > > > @@ -3168,7 +3169,8 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
> > > >         if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
> > > >             !krcp->monitor_todo) {
> > > >                 krcp->monitor_todo = true;
> > > > -               schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> > > > +               schedule_delayed_work_on(raw_smp_processor_id(),
> > > > +                       &krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> > > >         }
> > > > 
> > > >  unlock_return:
> > > > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > > > index 891ccad5f271..49fcc50469f4 100644
> > > > --- a/kernel/workqueue.c
> > > > +++ b/kernel/workqueue.c
> > > > @@ -1723,7 +1723,9 @@ static void rcu_work_rcufn(struct rcu_head *rcu)
> > > > 
> > > >         /* read the comment in __queue_work() */
> > > >         local_irq_disable();
> > > > -       __queue_work(WORK_CPU_UNBOUND, rwork->wq, &rwork->work);
> > > > +
> > > > +       /* Just for illustration. Can have queue_rcu_work_on(). */
> > > > +       __queue_work(raw_smp_processor_id(), rwork->wq, &rwork->work);
> > > >         local_irq_enable();
> > > >  }
> > > > <snip>
> > > > 
> > > > Thoughts?
> > > > 
> > > > --
> > > > Vlad Rezki



[Index of Archives]     [Linux Samsung SoC]     [Linux Rockchip SoC]     [Linux Actions SoC]     [Linux for Synopsys ARC Processors]     [Linux NFS]     [Linux NILFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]


  Powered by Linux