Re: [PATCH bpf-next v2 1/4] bpf: verify scalar ids mapping in regsafe() using check_ids()

On Fri, 2023-06-02 at 13:17 -0700, Andrii Nakryiko wrote:
> On Fri, Jun 2, 2023 at 12:13 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
> > 
> > On Fri, 2023-06-02 at 11:50 -0700, Andrii Nakryiko wrote:
> > [...]
> > > > > The thread is long. Could you please describe it again in pseudo code?
> > > > 
> > > > - Add a function mark_precise_scalar_ids(struct bpf_verifier_env *env,
> > > >                                         struct bpf_verifier_state *st)
> > > >   such that it:
> > > >   - collects PRECISE_IDS: the set of IDs of all registers marked in env->bt;
> > > >   - visits all registers with IDs from PRECISE_IDS and makes sure
> > > >     that these registers are marked in env->bt.
> > > > - Call mark_precise_scalar_ids() from __mark_chain_precision()
> > > >   for each state 'st' visited by states chain processing loop,
> > > >   so that:
> > > >   - mark_precise_scalar_ids() is called for current state when
> > > >     __mark_chain_precision() is entered, reusing id assignments in
> > > >     current state;
> > > >   - mark_precise_scalar_ids() is called for each parent state, reusing
> > > >     id assignments valid at 'last_idx' instruction of that state.
> > > > 
> > > > The idea is that in situations like below:
> > > > 
> > > >    4: if (r6 > r7) goto +1
> > > >    5: r7 = r6
> > > >    --- checkpoint #1 ---
> > > >    6: <something>
> > > >    7: if (r7 > X) goto ...
> > > >    8: r7 = 0
> > > >    9: r9 += r6
> > > > 
> > > > The mark_precise_scalar_ids() would be called at:
> > > > - (9), where the current id assignments would be used;
> > > > - (6), where the id assignments saved in checkpoint #1 would be used.
> > > > 
> > > > If <something> is code that modifies r6/r7, the link would be
> > > > broken and we would overestimate the set of precise registers.
> > > > 
> > > 
> > > To avoid this we need to recalculate these IDs on each new parent
> > > state, based on requested precision marks. If we keep a simple and
> > > small array of IDs and do a quick linear search over them for each
> > > SCALAR register, I suspect it should be very fast. I don't think in
> > > practice we'll have more than 1-2 IDs in that array, right?
> > 
> > I'm not sure I understand, could you please describe how it should
> > work for, e.g., the following?
> > 
> >     3: r6 &= 0xf            // assume safe bound
> >     4: if (r6 > r7) goto +1
> >     5: r7 = r6
> >     --- checkpoint #1 ---
> >     6: r7 = 0
> >     7: if (r7 > 10) goto exit;
> >     8: r7 = 0
> >     9: r9 += r6
> > 
> > __mark_chain_precision() would get to checkpoint #1 with only r6
> > marked as precise; what should happen next?
> 
> It should mark in env->bt all SCALARs that have r6's ID, and then
> proceed with precision propagation until the next parent state? This
> is where you'll mark r7, because in the parent state (checkpoint #1)
> r6.id == r7.id.

That's what I do now.
Sorry, I thought you had a suggestion on how to avoid the precise set
overestimation (e.g. how to detect that "6: r7 = 0" breaks the link).

> It might be easier to just discuss latest code you have, there are
> lots of intricacies, and code wins over words :)

Here is what I have now:
https://github.com/kernel-patches/bpf/compare/bpf-next_base...eddyz87:bpf:verify-ids-for-scalars-in-regsafe-v3
The interesting part is mark_precise_scalar_ids().

But a few tests are not passing because the expected log messages have
to be adjusted, and a lot of new tests still have to be added.
We can delay the discussion until I submit v3 (worst case tomorrow).
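
In the meantime, here is the rough shape of the function, to make the
discussion concrete. This is a hand-condensed sketch of the idea, not
the code from the branch: multiple frames, stack slots and error
handling are elided, and the fixed-size id array is a simplification.

  /* For state 'st', collect the ids of all scalar registers already
   * marked precise in env->bt (the PRECISE_IDS set), then mark every
   * scalar register sharing one of those ids. Small array plus linear
   * search, as suggested above.
   */
  static int mark_precise_scalar_ids(struct bpf_verifier_env *env,
                                     struct bpf_verifier_state *st)
  {
          struct bpf_reg_state *regs = st->frame[st->curframe]->regs;
          u32 precise_ids[BPF_REG_FP];
          int cnt = 0, i, j;

          /* pass 1: collect PRECISE_IDS */
          for (i = 0; i < BPF_REG_FP; i++)
                  if (regs[i].type == SCALAR_VALUE && regs[i].id &&
                      bt_is_reg_set(&env->bt, i))
                          precise_ids[cnt++] = regs[i].id;

          /* pass 2: mark every scalar whose id is in PRECISE_IDS */
          for (i = 0; i < BPF_REG_FP; i++) {
                  if (regs[i].type != SCALAR_VALUE || !regs[i].id)
                          continue;
                  for (j = 0; j < cnt; j++)
                          if (regs[i].id == precise_ids[j])
                                  bt_set_reg(&env->bt, i);
          }
          return 0;
  }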

> > As a side note: I added several optimizations:
> > - avoid allocation of scalar ids for constants;
> > - remove sole scalar ids from cached states;
> 
> so that's what I was proposing earlier,

Yes, it turned out to be beneficial when I inspected the logs for bpf_xdp.o.

> but not just from cached
> states, but from any state. As soon as we get SCALAR with some ID that
> is not shared by any other SCALAR, we should try to drop that ID. The
> question is only in how to implement this efficiently.

No, we don't want to do it for non-cached states, at least not until
we generate scalar ids on stack spills and fills. Otherwise we would
break find_equal_scalars() for the following case:

  r1 = r2         // r1 gains ID
  fp[-8] = r1
  r2 = 0
  r1 = 0          // fp[-8] now holds the only copy of the ID
  --- checkpoint ---
  r1 = fp[-8]
  r2 = fp[-8]
  if r1 > 10 goto exit; // range propagates to r2 now,
                        // but won't propagate if fp[-8]'s ID
                        // is cleared at the checkpoint

(A bit contrived, but conveys the idea)
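
For reference, find_equal_scalars() propagates a newly learned range
only to scalars that share the same non-zero id; in bpf-next it is
essentially the following (quoted from memory, modulo details):

  static void find_equal_scalars(struct bpf_verifier_state *vstate,
                                 struct bpf_reg_state *known_reg)
  {
          struct bpf_func_state *state;
          struct bpf_reg_state *reg;

          bpf_for_each_reg_in_vstate(vstate, state, reg, ({
                  if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
                          copy_register_state(reg, known_reg);
          }));
  }

If fp[-8]'s id were cleared at the checkpoint, the r1/r2 copies filled
from it would carry id 0, the id comparison above would never match,
and the range learned at 'if r1 > 10' would not reach r2.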

And we don't really need to bother about unique IDs in a non-cached
state when the rold->id check discussed in a sibling thread is used:

		if (rold->precise && rold->id && !check_ids(rold->id, rcur->id, idmap))
			return false;

Here, if rcur->id is unique there are two cases:
- rold->id == 0: then rcur->id is just ignored;
- rold->id != 0: then the rold->id/rcur->id pair would be added to
                 idmap; there is some other precise old register with
                 the same id as rold->id, and its current counterpart
                 cannot carry the unique rcur->id, so eventually
                 check_ids() would make regsafe() return false.
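
For reference, check_ids() in bpf-next looks roughly like this (quoted
from memory, modulo details), which is what makes the second case fail:

  static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
  {
          unsigned int i;

          for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
                  if (!idmap[i].old) {
                          /* Empty slot: haven't seen this old id
                           * before, record the mapping.
                           */
                          idmap[i].old = old_id;
                          idmap[i].cur = cur_id;
                          return true;
                  }
                  if (idmap[i].old == old_id)
                          return idmap[i].cur == cur_id;
          }
          /* Ran out of slots, should be impossible */
          WARN_ON_ONCE(1);
          return false;
  }

The second precise old register with the same rold->id hits the
'idmap[i].old == old_id' branch, and since no current register other
than rcur can have the unique rcur->id, the comparison fails.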

> > - do a check as follows:
> >   if (rold->precise && rold->id && !check_ids(rold->id, rcur->id, idmap))
> >     return false;
> 
> Hm.. do we need an extra special case here? With the precision
> changes we are discussing, and the removal of singular SCALAR IDs you
> are proposing, just extending the existing logic to:
> 
>                 if (regs_exact(rold, rcur, idmap))
>                         return true;
>                 if (env->explore_alu_limits)
>                         return false;
>                 if (!rold->precise)
>                         return true;
>                 /* new val must satisfy old val knowledge */
>                 return range_within(rold, rcur) &&
>                        check_ids(rold->id, rcur->id, idmap) &&
>                        check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
>                        tnum_in(rold->var_off, rcur->var_off);
> 
> wouldn't that be enough?

Yes, it could be shortened as below:

                 return range_within(rold, rcur) &&
                        (rold->id == 0 || check_ids(rold->id, rcur->id, idmap)) &&
                        check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
                        tnum_in(rold->var_off, rcur->var_off);

but I wanted a separate place to put a long comment.

> > 
> > And I'm seeing almost zero performance overhead now.
> > So, maybe what we figured so far is good enough.
> > Need to add more tests, though.
