Re: [PATCH v2 bpf-next 9/9] selftests/bpf: Add refcounted_kptr tests

On Sat, Apr 22, 2023 at 12:17:47AM +0200, Kumar Kartikeya Dwivedi wrote:
> On Sat, Apr 15, 2023 at 10:18:11PM CEST, Dave Marchevsky wrote:
> > Test refcounted local kptr functionality added in previous patches in
> > the series.
> >
> > Usecases which pass verification:
> >
> > * Add refcounted local kptr to both tree and list. Then, read and -
> >   possibly, depending on test variant - delete from tree, then list.
> >   * Also test doing read-and-maybe-delete in opposite order
> > * Stash a refcounted local kptr in a map_value, then add it to a
> >   rbtree. Read from both, possibly deleting after tree read.
> > * Add refcounted local kptr to both tree and list. Then, try reading and
> >   deleting twice from one of the collections.
> > * bpf_refcount_acquire of just-added non-owning ref should work, as
> >   should bpf_refcount_acquire of owning ref just out of bpf_obj_new
> >
> > Usecases which fail verification:
> >
> > * The simple successful bpf_refcount_acquire cases from above should
> >   both fail to verify if the newly-acquired owning ref is not dropped
> >
> > Signed-off-by: Dave Marchevsky <davemarchevsky@xxxxxx>
> > ---
> > [...]
> > +SEC("?tc")
> > +__failure __msg("Unreleased reference id=3 alloc_insn=21")
> > +long rbtree_refcounted_node_ref_escapes(void *ctx)
> > +{
> > +	struct node_acquire *n, *m;
> > +
> > +	n = bpf_obj_new(typeof(*n));
> > +	if (!n)
> > +		return 1;
> > +
> > +	bpf_spin_lock(&glock);
> > +	bpf_rbtree_add(&groot, &n->node, less);
> > +	/* m becomes an owning ref but is never drop'd or added to a tree */
> > +	m = bpf_refcount_acquire(n);
> 
> I am analyzing the set (and I'll reply in detail to the cover letter), but this
> stood out.
> 
> Isn't this going to be problematic if n has refcount == 1 and is dropped
> internally by bpf_rbtree_add? Are we sure this can never occur? It took me some
> time, but the following schedule seems problematic.
> 
> CPU 0					CPU 1
> n = bpf_obj_new
> lock(lock1)
> bpf_rbtree_add(rbtree1, n)
> m = bpf_refcount_acquire(n)
> unlock(lock1)
> 
> kptr_xchg(map, m) // move to map
> // at this point, refcount = 2
> 					m = kptr_xchg(map, NULL)
> 					lock(lock2)
> lock(lock1)				bpf_rbtree_add(rbtree2, m)
> p = bpf_rbtree_first(rbtree1)			if (!RB_EMPTY_NODE) bpf_obj_drop_impl(m) // A
> bpf_rbtree_remove(rbtree1, p)
> unlock(lock1)
> bpf_obj_drop(p) // B

You probably meant:
p2 = bpf_rbtree_remove(rbtree1, p)
unlock(lock1)
if (p2)
  bpf_obj_drop(p2)

> 					bpf_refcount_acquire(m) // use-after-free
> 					...
> 
> B will decrement refcount from 1 to 0, after which bpf_refcount_acquire is
> basically performing a use-after-free (when fortunate, one will get a
> WARN_ON_ONCE splat for 0 to 1, otherwise, a silent refcount raise for some
> different object).

As discussed earlier we'll be switching all bpf_obj_new to use BPF_MA_REUSE_AFTER_RCU_GP.

And to address the 0->1 transition, it does look like we need two flavors of bpf_refcount_acquire:
one for owned refs and another for non-owning refs.
The owned bpf_refcount_acquire() can stay KF_ACQUIRE with refcount_inc,
while bpf_refcount_acquire() for non-owning refs will use KF_ACQUIRE | KF_RET_NULL and refcount_inc_not_zero.
The bpf prog can use bpf_refcount_acquire() everywhere and the verifier will treat it
differently on the spot depending on the argument.
So the code:
n = bpf_obj_new();
if (!n) ...;
m = bpf_refcount_acquire(n);
doesn't need to check if (!m).


