On Mon, Sep 21, 2020 at 10:29 AM Andrii Nakryiko
<andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Thu, Sep 17, 2020 at 12:46 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > The in_atomic() macro cannot always detect atomic context. In particular,
> > it cannot know about held spinlocks in non-preemptible kernels. Although
> > no user currently calls bpf_link_put() while holding a spinlock, to be
> > on the safe side we can avoid this in the future.
> >
> > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > ---
>
> This change seems unnecessary (or at least premature): if we ever
> get a use case that does bpf_link_put() from under a held spinlock, we
> should see a warning about that (and in that case I bet the code can be
> rewritten to not hold the spinlock during bpf_link_put()). But on the
> other hand it makes bpf_link_put() follow the pattern of
> bpf_map_put(), which always defers the work, so I'm ok with this. As
> Song mentioned, this is not called from a performance-critical hot
> path, so it doesn't matter all that much.
>
> Acked-by: Andrii Nakryiko <andriin@xxxxxx>
>
> btw, you probably need to resubmit this patch as a non-RFC one for it
> to be applied?..
>
> >  kernel/bpf/syscall.c | 8 ++------
> >  1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index 178c147350f5..6347be0a5c82 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -2345,12 +2345,8 @@ void bpf_link_put(struct bpf_link *link)
> >         if (!atomic64_dec_and_test(&link->refcnt))
> >                 return;
> >
> > -       if (in_atomic()) {
> > -               INIT_WORK(&link->work, bpf_link_put_deferred);
> > -               schedule_work(&link->work);
> > -       } else {
> > -               bpf_link_free(link);
> > -       }
> > +       INIT_WORK(&link->work, bpf_link_put_deferred);
> > +       schedule_work(&link->work);
> >  }
> >
> >  static int bpf_link_release(struct inode *inode, struct file *filp)
> > --
> > 2.20.1
> >
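
For context, the always-deferred put pattern the hunk relies on is roughly
the following. bpf_link_put_deferred() itself is not shown in the diff, so
this is a sketch of how the worker pairs with the now-unconditional
schedule_work() call, not a verbatim copy of the tree:

/* Runs from a workqueue, i.e. always in process context, so freeing the
 * link is safe regardless of the context the last bpf_link_put() ran in.
 */
static void bpf_link_put_deferred(struct work_struct *work)
{
	struct bpf_link *link = container_of(work, struct bpf_link, work);

	bpf_link_free(link);
}

void bpf_link_put(struct bpf_link *link)
{
	if (!atomic64_dec_and_test(&link->refcnt))
		return;

	/* Unconditionally defer instead of checking in_atomic(), matching
	 * the bpf_map_put() pattern mentioned above.
	 */
	INIT_WORK(&link->work, bpf_link_put_deferred);
	schedule_work(&link->work);
}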