Re: [PATCH bpf-next v7 3/5] bpf: Inline calls to bpf_loop when callback is known

On Sun, Jun 19, 2022 at 11:09:36PM +0300, Eduard Zingerman wrote:
> Hi Daniel, Alexei, 
> 
> > On Fri, 2022-06-17 at 01:12 +0200, Daniel Borkmann wrote:
> > On Thu, 2022-06-16 at 19:14 -0700, Alexei Starovoitov wrote:
> > On Mon, Jun 13, 2022 at 1:50 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
> > > +
> > > +static bool loop_flag_is_zero(struct bpf_verifier_env *env)
> [...]
> > 
> > Great catch here by Daniel.
> > It needs mark_chain_precision().
> 
> Thanks for the catch regarding precision tracking. Unfortunately, I
> am struggling to create a test case that demonstrates the issue
> without the call to `mark_chain_precision`. As far as I understand,
> the test case should look as follows:
> 
> 
> 	... do something in such a way that:
> 	  - there is a branch where
> 	    BPF_REG_4 is 0, SCALAR_VALUE, !precise
> 	    and this branch is explored first
> 	  - there is a branch where
> 	    BPF_REG_4 is 1, SCALAR_VALUE, !precise
> 
> 	/* create branching point */
> 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 0),
> 	/* load callback address to r2 */
> 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 5),
> 	BPF_RAW_INSN(0, 0, 0, 0, 0),
> 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
> 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
> 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
> 	BPF_EXIT_INSN(),
> 	/* callback */
> 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
> 	BPF_EXIT_INSN(),
> 
> The "do something" part would then rely on the state pruning logic to
> skip the verification for the second branch. Namely, the following
> part of the `regsafe` function should consider registers identical:
> 
> /* Returns true if (rold safe implies rcur safe) */
> static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> 			struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
> {
> 	...
> 	switch (base_type(rold->type)) {
> 	case SCALAR_VALUE:
> 		...
> 		if (rcur->type == SCALAR_VALUE) {
> here ->			if (!rold->precise && !rcur->precise)
> 				return true;
> 			...
> 		} else {
> 			...
> 		}
> 		...	
> 	}
> 	...	
> }
> 
> However, I don't understand which instructions could mark the register
> as a scalar with a particular value but without a `precise` mark. I
> tried MOV, JEQ, JNE, MUL, and a sequence of BPF_ALU64_IMM(MOV, ...) -
> BPF_STX_MEM - BPF_LDX_MEM, to no avail.

> The following observations might be relevant:
> - `__mark_reg_known` does not change the state of the `precise` mark;

yes.
Just BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0) in one branch and
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 1) in the other should do it.
The 'mov' won't make the register precise.

So something like below:

r0 = random_u32
r6 = random_u32
if (r0)
   goto L;

r4 = 0

pruning_point:
if (r6) goto next;
next:
/* load callback address to r2 */
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 5),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),

L:
r4 = 1
goto pruning_point

The fallthrough path will proceed with r4=0, and pruning will trigger
is_state_visited() with r4=1, which regsafe() will incorrectly
recognize as equivalent to the r4=0 state.



