Re: [PATCH bpf-next 1/2] bpf: Allow bpf_local_storage to be used by sleepable programs

On Tue, Nov 23, 2021 at 10:22:04AM -0800, Paul E. McKenney wrote:
> On Tue, Nov 23, 2021 at 06:11:14PM +0100, KP Singh wrote:
> > On Thu, Sep 2, 2021 at 6:45 AM Martin KaFai Lau <kafai@xxxxxx> wrote:
> > > I think the global lock will be an issue for the current non-sleepable
> > > netdev bpf-prog, which could be triggered by external traffic, so a flag
> > > is needed here to provide a fast path.  I suspect other non-prealloc maps
> > > may need it in the future, so probably
> > > s/BPF_F_SLEEPABLE_STORAGE/BPF_F_SLEEPABLE/ instead.
> > 
> > I was re-working the patches and had a couple of questions.
> > 
> > There are two data structures that get freed under RCU here:
> > 
> > struct bpf_local_storage
> > struct bpf_local_storage_elem
> > 
> > We can choose to free the bpf_local_storage_elem under
> > call_rcu_tasks_trace based on
> > whether the map it belongs to is sleepable with something like:
> > 
> > if (selem->sdata.smap->map.map_flags & BPF_F_SLEEPABLE_STORAGE)
Paul's current work (mentioned in his previous email) will improve the
performance of call_rcu_tasks_trace(), so we can probably avoid the
new BPF_F_SLEEPABLE flag altogether and make this easier to use.

> >     call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
> > else
> >     kfree_rcu(selem, rcu);
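A minimal sketch of what the bpf_selem_free_rcu() callback used above could
look like (not necessarily how the patch spells it; essentially a
container_of() followed by kfree()):

static void bpf_selem_free_rcu(struct rcu_head *rcu)
{
	struct bpf_local_storage_elem *selem;

	selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
	kfree(selem);
}
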
> > 
> > Questions:
> > 
> > * Can we free bpf_local_storage under kfree_rcu by ensuring it's
> >   always accessed in a classical RCU critical section?
> >   Or maybe I am missing something and this also needs to be freed
> >   under trace RCU if any of the selems are from a sleepable map.
In the inode_storage_lookup() of this patch:

+#define bpf_local_storage_rcu_lock_held()                      \
+       (rcu_read_lock_held() || rcu_read_lock_trace_held() ||  \
+        rcu_read_lock_bh_held())

@@ -44,7 +45,8 @@ static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode,
	if (!bsb)
		return NULL;

-	inode_storage = rcu_dereference(bsb->storage);
+	inode_storage = rcu_dereference_protected(bsb->storage,
+						  bpf_local_storage_rcu_lock_held());

Thus, it is not always in a classical RCU read-side critical section.
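
To make that concrete, a rough illustration (not part of the patch) of the
read-side a sleepable program runs under; run_sleepable_lsm_prog() is a
made-up stand-in for the actual program invocation:

#include <linux/rcupdate_trace.h>

void run_sleepable_lsm_prog(void);	/* stand-in: may end up in inode_storage_lookup() */

/* The trampoline enters a sleepable BPF program under
 * rcu_read_lock_trace() rather than rcu_read_lock(), so a storage
 * lookup from such a program is protected only by tasks-trace RCU.
 */
static void sleepable_prog_window(void)
{
	rcu_read_lock_trace();		/* as in __bpf_prog_enter_sleepable() */
	run_sleepable_lsm_prog();
	rcu_read_unlock_trace();	/* as in __bpf_prog_exit_sleepable() */
}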

> > 
> > * There is an issue with nested raw spinlocks, e.g. in
> > bpf_inode_storage.c:bpf_inode_storage_free
> > 
> >   hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> >   /* Always unlink from map before unlinking from
> >   * local_storage.
> >   */
> >   bpf_selem_unlink_map(selem);
> >   free_inode_storage = bpf_selem_unlink_storage_nolock(
> >                  local_storage, selem, false);
> >   }
> >   raw_spin_unlock_bh(&local_storage->lock);
> > 
> > in bpf_selem_unlink_storage_nolock() (if we add the above logic with the
> > flag in place of kfree_rcu), call_rcu_tasks_trace() grabs a spinlock,
> > and a non-raw spinlock cannot be nested inside a raw spinlock.
> > 
> > I am moving the freeing code out of the spinlock, saving the selems on
> > a local list and then doing the free RCU (trace or normal) callbacks
> > at the end. WDYT?
There could be more than one selem to save.
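
An untested sketch of what I think you have in mind, deferring the frees to
a local list until the raw spinlock is dropped (the function name and the
free_node field are made up for illustration, not existing code):

static void bpf_inode_storage_free_deferred(struct bpf_local_storage *local_storage)
{
	struct bpf_local_storage_elem *selem;
	struct hlist_node *n;
	HLIST_HEAD(free_list);
	bool free_inode_storage = false;

	raw_spin_lock_bh(&local_storage->lock);
	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
		/* Always unlink from map before unlinking from
		 * local_storage.
		 */
		bpf_selem_unlink_map(selem);
		free_inode_storage = bpf_selem_unlink_storage_nolock(
				local_storage, selem, false);
		/* Park the selem instead of freeing it here:
		 * call_rcu_tasks_trace() must not be called under the
		 * raw spinlock.  (free_node is a made-up field.)
		 */
		hlist_add_head(&selem->free_node, &free_list);
	}
	raw_spin_unlock_bh(&local_storage->lock);

	/* Queue the RCU callbacks now that the raw spinlock is dropped. */
	hlist_for_each_entry_safe(selem, n, &free_list, free_node) {
		if (selem->sdata.smap->map.map_flags & BPF_F_SLEEPABLE_STORAGE)
			call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
		else
			kfree_rcu(selem, rcu);
	}

	/* Whether this also needs call_rcu_tasks_trace() is your earlier
	 * question about bpf_local_storage itself.
	 */
	if (free_inode_storage)
		kfree_rcu(local_storage, rcu);
}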

I think the splat is from CONFIG_PROVE_RAW_LOCK_NESTING=y.

I just happened to bump into Paul briefly offline; his work can probably
also avoid the spin_lock in call_rcu_tasks_trace().

I would ignore this splat for now; it should go away once this is merged
together with Paul's work in the 5.17 merge cycle.

> Depending on the urgency, another approach is to rely on my ongoing work
> removing the call_rcu_tasks_trace() bottleneck.  This commit on branch
> "dev" in the -rcu tree allows boot-time setting of per-CPU callback
> queues for call_rcu_tasks_trace(), along with the other RCU-tasks flavors:
> 
> 0b886cc4b10f ("rcu-tasks: Add rcupdate.rcu_task_enqueue_lim to set initial queueing")
> 
> Preceding commits actually set up the queues.  With these commits, you
> could boot with rcupdate.rcu_task_enqueue_lim=N, where N is greater than
> or equal to the number of CPUs on your system, to get per-CPU queuing.
> These commits probably still have a bug or three, but on the other hand,
> they have survived a couple of weeks' worth of rcutorture runs.
> 
> This week's work will allow automatic transition between single-queue
> and per-CPU-queue operation based on lock contention and the number of
> callbacks queued.
> 
> My current plan is to get this into the next merge window (v5.17).
That would be great.


