Re: [PATCH 1/3] rcu: Use static initializer for krc.lock

On April 20, 2020 8:13:16 AM EDT, Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
>On Sun, Apr 19, 2020 at 06:44:50PM -0700, Paul E. McKenney wrote:
>> On Sun, Apr 19, 2020 at 09:17:49PM -0400, Joel Fernandes wrote:
>> > On Sun, Apr 19, 2020 at 08:27:13PM -0400, Joel Fernandes wrote:
>> > > On Sun, Apr 19, 2020 at 07:58:36AM -0700, Paul E. McKenney wrote:
>> > > > > On Sat, Apr 18, 2020 at 02:37:48PM +0200, Uladzislau Rezki wrote:
>> > > > > > On Fri, Apr 17, 2020 at 11:54:49AM -0700, Paul E. McKenney wrote:
>> > > > > > > On Fri, Apr 17, 2020 at 02:26:41PM -0400, Joel Fernandes wrote:
>> > > > > > > > On Fri, Apr 17, 2020 at 05:04:42PM +0200, Sebastian Andrzej Siewior wrote:
>> > > > > > > > On 2020-04-16 23:05:15 [-0400], Joel Fernandes wrote:
>> > > > > > > > > On Thu, Apr 16, 2020 at 11:34:44PM +0200, Sebastian Andrzej Siewior wrote:
>> > > > > > > > > > On 2020-04-16 14:00:57 [-0700], Paul E. McKenney wrote:
>> > > > > > > > > > > 
>> > > > > > > > > > > We might need different calling-context restrictions
>> > > > > > > > > > > for the two variants of kfree_rcu().  And we might need
>> > > > > > > > > > > to come up with some sort of lockdep check for "safe
>> > > > > > > > > > > to use normal spinlock in -rt".
>> > > > > > > > > > 
>> > > > > > > > > > Oh. We do have this already, it is called
>> > > > > > > > > > CONFIG_PROVE_RAW_LOCK_NESTING.
>> > > > > > > > > > This one will scream if you do
>> > > > > > > > > > 	raw_spin_lock();
>> > > > > > > > > > 	spin_lock();
>> > > > > > > > > > 
>> > > > > > > > > > Sadly, as of today, there is code triggering this which
>> > > > > > > > > > needs to be addressed first (but it is on the list of
>> > > > > > > > > > things to do).
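
[ For readers following along: the invalid nesting looks roughly like the
  below. The lock names are made up for illustration, this is not from
  the kernel tree:

	static DEFINE_RAW_SPINLOCK(outer_raw_lock);
	static DEFINE_SPINLOCK(inner_lock);

	static void bad_nesting(void)
	{
		raw_spin_lock(&outer_raw_lock);
		/*
		 * Invalid: on PREEMPT_RT a spinlock_t is a sleeping
		 * lock, so taking it here would sleep inside a raw
		 * (always non-preemptible) critical section.  With
		 * CONFIG_PROVE_RAW_LOCK_NESTING, lockdep reports this
		 * nesting even on non-RT kernels.
		 */
		spin_lock(&inner_lock);
		spin_unlock(&inner_lock);
		raw_spin_unlock(&outer_raw_lock);
	}
]
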
>> > > > > > > > > > 
>> > > > > > > > > > Given the thread so far, is it okay if I repost the
>> > > > > > > > > > series with migrate_disable() instead of accepting a
>> > > > > > > > > > possible migration before grabbing the lock? I would
>> > > > > > > > > > prefer to avoid the extra RT case (avoiding memory
>> > > > > > > > > > allocations in a possible atomic context) until we get
>> > > > > > > > > > there.
>> > > > > > > > > 
>> > > > > > > > > I prefer something like the following to make it possible
>> > > > > > > > > to invoke kfree_rcu() from atomic context considering
>> > > > > > > > > call_rcu() is already callable from such contexts. Thoughts?
>> > > > > > > > 
>> > > > > > > > So it looks like it would work. However, could we please delay this
>> > > > > > > > until we have an actual case on RT? I just added
>> > > > > > > > 	WARN_ON(!preemptible());
>> > > > > > > 
>> > > > > > > I am not sure if waiting for it to break in the future is a
>> > > > > > > good idea. I'd rather design it in a forward-thinking way.
>> > > > > > > There could be folks replacing "call_rcu() + kfree in a
>> > > > > > > callback" with kfree_rcu(), for example. If they were in
>> > > > > > > !preemptible(), we'd break on page allocation.
>> > > > > > > 
>> > > > > > > Also as a side note, the additional pre-allocation of pages
>> > > > > > > that Vlad is planning on adding would further reduce the need
>> > > > > > > for pages from the page allocator.
>> > > > > > > 
>> > > > > > > Paul, what is your opinion on this?
>> > > > > > 
>> > > > > > My experience with call_rcu(), of which kfree_rcu() is a
>> > > > > > specialization, is that it gets invoked with preemption disabled,
>> > > > > > with interrupts disabled, and during early boot, as in even
>> > > > > > before rcu_init() has been invoked.  This experience does make
>> > > > > > me lean towards raw spinlocks.
>> > > > > > 
>> > > > > > But to Sebastian's point, if we are going to use raw spinlocks,
>> > > > > > we need to keep the code paths holding those spinlocks as short
>> > > > > > as possible.  I suppose that the inability to allocate memory
>> > > > > > with raw spinlocks held helps, but it is worth checking.
>> > > > > >
>> > > > > How about reducing the lock contention even further?
>> > > > 
>> > > > Can we do even better by moving the work-scheduling out from under
>> > > > the spinlock?  This of course means that it is necessary to handle
>> > > > the occasional spurious call to the work handler, but that should
>> > > > be rare and should be in the noise compared to the reduction in
>> > > > contention.
>> > > 
>> > > Yes, I think that will be required since -rt will sleep on workqueue
>> > > locks as well :-(. I'm looking into it right now.
>> > > 
>> > >         /*
>> > >          * If @work was previously on a different pool, it might still be
>> > >          * running there, in which case the work needs to be queued on that
>> > >          * pool to guarantee non-reentrancy.
>> > >          */
>> > >         last_pool = get_work_pool(work);
>> > >         if (last_pool && last_pool != pwq->pool) {
>> > >                 struct worker *worker;
>> > > 
>> > >                 spin_lock(&last_pool->lock);
>> > 
>> > Hmm, I think moving schedule_delayed_work() outside the lock will work.
>> > Just took a good look and that's not an issue. However, calling
>> > schedule_delayed_work() itself is an issue if the caller of kfree_rcu()
>> > is !preemptible() on PREEMPT_RT, because schedule_delayed_work() takes
>> > pool->lock, a spinlock that can sleep on PREEMPT_RT :-(. Which means we
>> > have to do one of:
>> > 
>> > 1. Implement a new mechanism for scheduling delayed work that does not
>> >    acquire sleeping locks.
>> > 
>> > 2. Allow kfree_rcu() only from preemptible context (That is Sebastian's
>> >    initial patch to replace local_irq_save() + spin_lock() with
>> >    spin_lock_irqsave()).
>> > 
>> > 3. Queue the work through irq_work or another bottom-half mechanism.
>> 
>> I use irq_work elsewhere in RCU, but the queue_delayed_work() might
>> go well with a timer.  This can of course be done conditionally.
>> 
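
For option 3 above, here is a rough, completely untested sketch of what I
had in mind. The names (krc_irq_work, krc_monitor_func, the HZ / 20 delay)
are made up for illustration:

	#include <linux/irq_work.h>
	#include <linux/workqueue.h>

	static void krc_monitor_func(struct work_struct *work)
	{
		/* Drain the batched kfree_rcu() objects here. */
	}
	static DECLARE_DELAYED_WORK(krc_monitor_work, krc_monitor_func);

	static struct irq_work krc_irq_work;

	/*
	 * Runs from irq_work context, i.e. outside the raw krc lock, so
	 * the workqueue pool->lock is no longer taken inside a raw
	 * spinlock section.
	 */
	static void krc_irqwork_fn(struct irq_work *unused)
	{
		schedule_delayed_work(&krc_monitor_work, HZ / 20);
	}

	static void krc_defer_init(void)
	{
		init_irq_work(&krc_irq_work, krc_irqwork_fn);
	}

	/* Called from the kfree_rcu() path, possibly under the raw lock;
	 * irq_work_queue() is safe there. */
	static void krc_kick_monitor(void)
	{
		irq_work_queue(&krc_irq_work);
	}
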
>We can call schedule_delayed_work() either inside or outside of the
>spinlock, i.e. it is not an issue for the RT kernel because, as was noted
>in the last message, the workqueue system uses raw spinlocks internally.
>I checked the latest linux-5.6.y-rt also. If we do it inside, we will
>place the work on the current CPU, at least as I see it, even if it is
>"unbound".
>

Thanks for confirming!!

>If we do it outside, we will shorten the critical section; on the other
>hand, we may introduce a delay in placing the context onto a CPU's
>run-queue. As a result we could end up on another CPU, thus placing the
>work on a new CPU, plus the memory footprint might be higher. It would
>be good to test and have a look at it, actually.
>
>But it can be negligible :)

Since the wq locking is a raw spinlock on RT, as Mike and you mentioned, if the wq holds the lock for too long that will itself spawn a lengthy non-preemptible critical section, so from that standpoint doing it under our lock should be OK, I think.
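
In other words, something roughly like the below, simplified from the
batching path in kernel/rcu/tree.c (field names approximate, untested):

	raw_spin_lock_irqsave(&krcp->lock, flags);
	/* ... queue the freed object onto krcp's bulk list ... */
	if (!krcp->monitor_todo) {
		krcp->monitor_todo = true;
		/*
		 * Fine even on RT: the workqueue pool->lock is a raw
		 * spinlock there, so no sleeping lock is acquired
		 * inside this raw critical section.
		 */
		schedule_delayed_work(&krcp->monitor_work,
				      KFREE_DRAIN_JIFFIES);
	}
	raw_spin_unlock_irqrestore(&krcp->lock, flags);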

>
>> > Any other thoughts?
>> 
>> I did forget to ask you guys your opinions about the downsides (if any)
>> of moving from unbound to per-CPU workqueues.  Thoughts?
>> 
>If we do it outside of the spinlock, there is at least one drawback that
>I see; I described it above. We can use schedule_delayed_work_on(), but
>we as the caller have to guarantee that the CPU we are about to place
>the work on is alive :)

FWIW, some time back I did a simple manual test calling queue_work_on() on an offline CPU to see what happens, and it appears to work fine. On a 4-CPU system, I offlined CPU 3 and queued the work on it, and the work ended up executing on CPU 0 instead.
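
It was along these lines; a reconstruction, not the exact module I ran:

	#include <linux/module.h>
	#include <linux/smp.h>
	#include <linux/workqueue.h>

	static void test_func(struct work_struct *work)
	{
		/* raw_smp_processor_id(): the handler runs preemptible,
		 * so the checked smp_processor_id() would splat here. */
		pr_info("test work ran on CPU %d\n", raw_smp_processor_id());
	}
	static DECLARE_WORK(test_work, test_func);

	static int __init qwo_test_init(void)
	{
		/* CPU 3 was offlined first:
		 *   echo 0 > /sys/devices/system/cpu/cpu3/online */
		queue_work_on(3, system_wq, &test_work);
		return 0;
	}

	static void __exit qwo_test_exit(void)
	{
		flush_work(&test_work);
	}

	module_init(qwo_test_init);
	module_exit(qwo_test_exit);
	MODULE_LICENSE("GPL");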

Thanks,

- Joel

>
>--
>Vlad Rezki

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



