Re: [PATCH bpf-next 1/2] bpf: Allow bpf_local_storage to be used by sleepable programs

On Wed, Nov 24, 2021 at 11:20:40PM +0100, KP Singh wrote:
> On Tue, Nov 23, 2021 at 11:30 PM Martin KaFai Lau <kafai@xxxxxx> wrote:
> >
> > On Tue, Nov 23, 2021 at 10:22:04AM -0800, Paul E. McKenney wrote:
> > > On Tue, Nov 23, 2021 at 06:11:14PM +0100, KP Singh wrote:
> > > > On Thu, Sep 2, 2021 at 6:45 AM Martin KaFai Lau <kafai@xxxxxx> wrote:
> > > > > I think the global lock will be an issue for the current non-sleepable
> > > > > netdev bpf-prog which could be triggered by external traffic, so a flag
> > > > > is needed here to provide a fast path.  I suspect other non-prealloc maps
> > > > > may need it in the future, so probably
> > > > > s/BPF_F_SLEEPABLE_STORAGE/BPF_F_SLEEPABLE/ instead.
> > > >
> > > > I was re-working the patches and had a couple of questions.
> > > >
> > > > There are two data structures that get freed under RCU here:
> > > >
> > > > struct bpf_local_storage
> > > > struct bpf_local_storage_selem
> > > >
> > > > We can choose to free the bpf_local_storage_selem under
> > > > call_rcu_tasks_trace based on
> > > > whether the map it belongs to is sleepable with something like:
> > > >
> > > > if (selem->sdata.smap->map.map_flags & BPF_F_SLEEPABLE_STORAGE)
> > Paul's current work (mentioned in his previous email) will improve the
> > performance of call_rcu_tasks_trace, so we can probably avoid the new
> > BPF_F_SLEEPABLE flag and make this easier to use.
> >
> > > >     call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
> > > > else
> > > >     kfree_rcu(selem, rcu);
> > > >
> > > > Questions:
> > > >
> > > > * Can we free bpf_local_storage under kfree_rcu by ensuring it's
> > > >   always accessed in a classical RCU critical section?
> > > >   Or maybe I am missing something and this also needs to be freed
> > > >   under trace RCU if any of the selems are from a sleepable map.
> > In the inode_storage_lookup() of this patch:
> >
> > +#define bpf_local_storage_rcu_lock_held()                      \
> > +       (rcu_read_lock_held() || rcu_read_lock_trace_held() ||  \
> > +        rcu_read_lock_bh_held())
> >
> > @@ -44,7 +45,8 @@ static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode,
> >         if (!bsb)
> >                 return NULL;
> >
> > -       inode_storage = rcu_dereference(bsb->storage);
> > +       inode_storage = rcu_dereference_protected(bsb->storage,
> > +                                                 bpf_local_storage_rcu_lock_held());
> >
> > Thus, it is not always in a classical RCU critical section.
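
To spell out the implication: when a sleepable map is involved, the
lookup may be protected only by rcu_read_lock_trace(), so the
struct bpf_local_storage itself would need the same treatment as the
selem.  A rough sketch (the condition and callback names here are only
illustrative, not a final interface):

	/* Sketch only: free the local_storage the same way as the selem.
	 * A sleepable prog may still be reading it under
	 * rcu_read_lock_trace(), so a normal RCU grace period alone is
	 * not enough in that case.
	 */
	if (sleepable_map_involved)
		call_rcu_tasks_trace(&local_storage->rcu,
				     bpf_local_storage_free_rcu);
	else
		kfree_rcu(local_storage, rcu);
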
> >
> > > >
> > > > * There is an issue with nested raw spinlocks, e.g. in
> > > > bpf_inode_storage.c:bpf_inode_storage_free
> > > >
> > > >   hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> > > >           /* Always unlink from map before unlinking from
> > > >            * local_storage.
> > > >            */
> > > >           bpf_selem_unlink_map(selem);
> > > >           free_inode_storage = bpf_selem_unlink_storage_nolock(
> > > >                           local_storage, selem, false);
> > > >   }
> > > >   raw_spin_unlock_bh(&local_storage->lock);
> > > >
> > > > If we add the above flag-based logic in place of kfree_rcu in
> > > > bpf_selem_unlink_storage_nolock, call_rcu_tasks_trace() grabs a
> > > > spinlock, and a spinlock cannot nest inside a raw spinlock.
> > > >
> > > > I am moving the freeing code out of the spinlock, saving the selems on
> > > > a local list and then doing the free RCU (trace or normal) callbacks
> > > > at the end. WDYT?
> > There could be more than one selem to save.
> 
> Yes, that's why I was saving them on a local list and then calling
> kfree_rcu or call_rcu_tasks_trace after unlocking the raw spinlock:
> 
> INIT_HLIST_HEAD(&free_list);
> raw_spin_lock_irqsave(&local_storage->lock, flags);
> hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
>     bpf_selem_unlink_map(selem);
>     free_local_storage = bpf_selem_unlink_storage_nolock(
>                     local_storage, selem, false);
>     hlist_add_head(&selem->snode, &free_list);
> }
> raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> 
> /* The elements need to be freed outside the raw spinlock because
> * spinlocks cannot nest inside raw spinlocks and call_rcu_tasks_trace
> * grabs a spinlock when the RCU code calls into the scheduler.
> *
> * free_local_storage should always be true as long as
> * local_storage->list was non-empty.
> */
> hlist_for_each_entry_safe(selem, n, &free_list, snode) {
>     if (selem->sdata.smap->map.map_flags & BPF_F_SLEEPABLE_STORAGE)
>         call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
>     else
>         kfree_rcu(selem, rcu);
> }
> 
> But... we won't need this anymore.
Yep, Paul's work (thanks!) will make this piece simpler. 
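
If call_rcu_tasks_trace() also becomes safe to call with the raw
spinlock held, the BPF_F_SLEEPABLE check and the separate free_list
can probably both go away, i.e. roughly (sketch only, not the final
code):

	raw_spin_lock_irqsave(&local_storage->lock, flags);
	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
		bpf_selem_unlink_map(selem);
		free_local_storage = bpf_selem_unlink_storage_nolock(
				local_storage, selem, false);
		/* Sleepable progs read under rcu_read_lock_trace(), so
		 * always wait for a tasks-trace grace period before the
		 * element is freed; no per-map flag to check.
		 */
		call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
	}
	raw_spin_unlock_irqrestore(&local_storage->lock, flags);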

KP, this set functionally does not depend on Paul's changes.
Do you want to spin a new version so that it can be reviewed in parallel?
When the rcu-tasks changes land in -next, they can probably be merged
into bpf-next first, before the sleepable bpf storage work lands.


