Re: [PATCH bpf-next 5/7] bpf: Mark potential spilled loop index variable as precise

On Fri, Mar 31, 2023 at 04:39:29PM -0700, Yonghong Song wrote:
> 
> 
> On 3/31/23 2:54 PM, Eduard Zingerman wrote:
> > On Wed, 2023-03-29 at 22:56 -0700, Yonghong Song wrote:
> > > For a loop, if the loop index variable is spilled and, between loop
> > > iterations, the only reg/spill state difference is the spilled loop
> > > index variable, the verifier may assume an infinite loop, which
> > > causes verification failure. In such cases, we should mark the
> > > spilled loop index variable as precise to differentiate states
> > > between loop iterations.
> > > 
> > > Since the verifier is not able to accurately identify the loop index
> > > variable, add a heuristic such that if both the old reg state and
> > > the new reg state are consts, mark the old reg state as precise,
> > > which will trigger a constant value comparison later.
> > > 
> > > Signed-off-by: Yonghong Song <yhs@xxxxxx>
> > > ---
> > >   kernel/bpf/verifier.c | 20 ++++++++++++++++++--
> > >   1 file changed, 18 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index d070943a8ba1..d1aa2c7ae7c0 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -14850,6 +14850,23 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > >   		/* Both old and cur are having same slot_type */
> > >   		switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) {
> > >   		case STACK_SPILL:
> > > +			/* Sometimes the loop index variable is spilled and the spill
> > > +			 * is not marked as precise. If the only state difference
> > > +			 * between two iterations is the spilled loop index, the
> > > +			 * "infinite loop detected at insn" error will be hit.
> > > +			 * Mark the spilled constant as precise so it goes through
> > > +			 * value comparison.
> > > +			 */
> > > +			old_reg = &old->stack[spi].spilled_ptr;
> > > +			cur_reg = &cur->stack[spi].spilled_ptr;
> > > +			if (!old_reg->precise) {
> > > +				if (old_reg->type == SCALAR_VALUE &&
> > > +				    cur_reg->type == SCALAR_VALUE &&
> > > +				    tnum_is_const(old_reg->var_off) &&
> > > +				    tnum_is_const(cur_reg->var_off))
> > > +					old_reg->precise = true;
> > > +			}
> > > +
> > >   			/* when explored and current stack slot are both storing
> > >   			 * spilled registers, check that stored pointers types
> > >   			 * are the same as well.
> > > @@ -14860,8 +14877,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > >   			 * such verifier states are not equivalent.
> > >   			 * return false to continue verification of this path
> > >   			 */
> > > -			if (!regsafe(env, &old->stack[spi].spilled_ptr,
> > > -				     &cur->stack[spi].spilled_ptr, idmap))
> > > +			if (!regsafe(env, old_reg, cur_reg, idmap))
> > >   				return false;
> > >   			break;
> > >   		case STACK_DYNPTR:
> > 
> > Hi Yonghong,
> > 
> > If you are going for v2 of this patch-set, could you please consider
> > adding a parameter to regsafe() instead of modifying old state?
> > Maybe it's just me, but having old state immutable seems simpler to understand.
> > E.g., as in the patch in the end of this email (it's a patch on top of your series).
> > 
> > Interestingly, the version without old state modification also performs
> > better in veristat, although I did not analyze the reasons for this.
> 
> Thanks for the suggestion. I agree that my change may cause other side
> effects since I explicitly marked 'old_reg' as precise. Not marking
> 'old_reg' as precise should minimize the impact.
> Will make the change in the next revision.
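For readers following along, Eduard's suggested shape (passing a flag into regsafe() instead of mutating the cached old state) could look roughly like the self-contained sketch below. The struct layouts and the need_const_cmp() helper are simplified stand-ins invented for illustration; the real tnum and register-state definitions live in include/linux/tnum.h and kernel/bpf/verifier.c.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's tnum: a value/mask pair where a
 * register holds a known constant iff mask == 0. Illustration only. */
struct tnum {
	uint64_t value;
	uint64_t mask;
};

static bool tnum_is_const(struct tnum a)
{
	return a.mask == 0;
}

/* Only the fields the heuristic looks at. */
struct reg_state {
	bool precise;
	struct tnum var_off;
};

/* Sketch of the flag-based variant: rather than setting
 * old_reg->precise (mutating cached verifier state), regsafe() would
 * take force_precise_const and treat an imprecise old register as
 * precise when both sides are known constants, so the constant value
 * comparison runs without the cached state being modified. */
static bool need_const_cmp(const struct reg_state *rold,
			   const struct reg_state *rcur,
			   bool force_precise_const)
{
	if (rold->precise)
		return true;
	return force_precise_const &&
	       tnum_is_const(rold->var_off) &&
	       tnum_is_const(rcur->var_off);
}
```

The point of the flag is that the decision stays local to one regsafe() call; the old state remains immutable and cannot influence later pruning decisions.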

Could you also post the veristat before/after difference after patches 1, 3 and 5.
I suspect there should be minimal delta for 1 and 3, but 5 can have both a positive
and a negative effect.

> > +		if (!rold->precise && !(force_precise_const &&
> > +					tnum_is_const(rold->var_off) &&
> > +					tnum_is_const(rcur->var_off)))

... and if there are negative consequences for patch 5 we might tighten this heuristic.
For example, check that rcur->var_off.value - rold->var_off.value == 1 or -1, or is bounded
by some small number. If it's truly an index var it shouldn't have an enormous delta.
But if patch 5 doesn't cause a negative effect it would be better to keep it as-is.
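A tightened delta check along those lines might look like the following sketch. MAX_INDEX_DELTA and the helper name are invented for illustration and are not kernel code; the real check would operate on the var_off values of the two spilled registers.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical tightening of the patch-5 heuristic: only treat two
 * spilled constants as a candidate loop index when they differ by a
 * small step between iterations. The bound is an invented example. */
#define MAX_INDEX_DELTA 16

static bool looks_like_index_step(uint64_t old_val, uint64_t cur_val)
{
	uint64_t delta = cur_val - old_val;

	if (delta == 0)
		return false;
	/* Unsigned wraparound makes 0 - delta the magnitude of a
	 * negative step, so this accepts steps of +/-1 through
	 * +/-MAX_INDEX_DELTA in either direction. */
	return delta <= MAX_INDEX_DELTA ||
	       (uint64_t)(0 - delta) <= MAX_INDEX_DELTA;
}
```

With such a filter, a spilled constant whose value jumps by an enormous amount between iterations would not force precision, keeping the heuristic focused on plausible index variables.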


