On Wed, Jan 18, 2023 at 6:14 PM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> Currently, while reads are disallowed for dynptr stack slots, writes are
> not. Reads don't work from both direct access and helpers, while writes
> do work in both cases, but have the effect of overwriting the slot_type.
>
> While this is fine, handling for a few edge cases is missing. Firstly,
> a user can overwrite the stack slots of dynptr partially.
>
> Consider the following layout:
> spi: [d][d][?]
>       2  1  0
>
> First slot is at spi 2, second at spi 1.
> Now, do a write of 1 to 8 bytes for spi 1.
>
> This will essentially either write STACK_MISC for all slot_types or
> STACK_MISC and STACK_ZERO (in case of size < BPF_REG_SIZE partial write
> of zeroes). The end result is that slot is scrubbed.
>
> Now, the layout is:
> spi: [d][m][?]
>       2  1  0
>
> Suppose if user initializes spi = 1 as dynptr.
> We get:
> spi: [d][d][d]
>       2  1  0
>
> But this time, both spi 2 and spi 1 have first_slot = true.
>
> Now, when passing spi 2 to dynptr helper, it will consider it as
> initialized as it does not check whether second slot has first_slot ==
> false. And spi 1 should already work as normal.
>
> This effectively replaced size + offset of first dynptr, hence allowing
> invalid OOB reads and writes.
>
> Make a few changes to protect against this:
> When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> STACK_DYNPTR type, mark both first and second slot (regardless of which
> slot we touch) as STACK_INVALID. Reads are already prevented.
>
> Second, prevent writing to stack memory from helpers if the range may
> contain any STACK_DYNPTR slots. Reads are already prevented.
>
> For helpers, we cannot allow it to destroy dynptrs from the writes as
> depending on arguments, helper may take uninit_mem and dynptr both at
> the same time. This would mean that helper may write to uninit_mem
> before it reads the dynptr, which would be bad.
>
> PTR_TO_MEM: [?????dd]
>
> Depending on the code inside the helper, it may end up overwriting the
> dynptr contents first and then read those as the dynptr argument.
>
> Verifier would only simulate destruction when it does byte by byte
> access simulation in check_helper_call for meta.access_size, and
> fail to catch this case, as it happens after argument checks.
>
> The same would need to be done for any other non-trivial objects created
> on the stack in the future, such as bpf_list_head on stack, or
> bpf_rb_root on stack.
>
> A common misunderstanding in the current code is that MEM_UNINIT means
> writes, but note that writes may also be performed even without
> MEM_UNINIT in case of helpers, in that case the code after handling meta
> && meta->raw_mode will complain when it sees STACK_DYNPTR. So that
> invalid read case also covers writes to potential STACK_DYNPTR slots.
> The only loophole was in case of meta->raw_mode which simulated writes
> through instructions which could overwrite them.
>
> A future series sequenced after this will focus on the clean up of
> helper access checks and bugs around that.
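
Not a comment on the fix itself, but since the layout diagrams above took
me a moment to map back to a program: the scenario is roughly the below
(untested sketch in the style of dynptr_fail.c, usual includes and map
boilerplate assumed, name and pointer arithmetic purely illustrative),
where the direct store plays the role of the partial write to spi 1 and
the second bpf_dynptr_from_mem() re-initializes spi 1 as a new first slot:

SEC("?raw_tp")
int overlapping_dynptrs_sketch(void *ctx)
{
	struct bpf_dynptr ptr;
	char buf[64], read_data[64];

	/* ptr spans two stack slots: spi 2 (first_slot) and spi 1 */
	bpf_dynptr_from_mem(buf, sizeof(buf), 0, &ptr);

	/* partial overwrite of spi 1: [d][d][?] -> [d][m][?] */
	*((__u64 *)&ptr + 1) = 0;

	/* initialize a new dynptr starting at spi 1: [d][m][?] -> [d][d][d],
	 * and now both spi 2 and spi 1 claim first_slot == true
	 */
	bpf_dynptr_from_mem(buf, sizeof(buf), 0,
			    (struct bpf_dynptr *)((__u64 *)&ptr + 1));

	/* the stale dynptr at spi 2 is still treated as initialized, but its
	 * size/offset bytes were replaced, so this read can go out of bounds
	 */
	bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0, 0);
	return 0;
}
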
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> ---
>  kernel/bpf/verifier.c                         | 102 ++++++++++++++++++
>  .../testing/selftests/bpf/progs/dynptr_fail.c |   6 +-
>  2 files changed, 105 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eeb6f1b2bd60..09c09d9bfd89 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
>  	__mark_dynptr_reg(reg, type, true);
>  }
>
> +static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> +					 struct bpf_func_state *state, int spi);
>
>  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
>  				   enum bpf_arg_type arg_type, int insn_idx)
> @@ -863,6 +865,69 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>  	return 0;
>  }
>
> +static void __mark_reg_unknown(const struct bpf_verifier_env *env,
> +			       struct bpf_reg_state *reg);
> +
> +static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> +					 struct bpf_func_state *state, int spi)
> +{
> +	int i;
> +
> +	/* We always ensure that STACK_DYNPTR is never set partially,
> +	 * hence just checking for slot_type[0] is enough. This is
> +	 * different for STACK_SPILL, where it may be only set for
> +	 * 1 byte, so code has to use is_spilled_reg.
> +	 */
> +	if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> +		return 0;
> +
> +	/* Reposition spi to first slot */
> +	if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> +		spi = spi + 1;
> +
> +	if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> +		verbose(env, "cannot overwrite referenced dynptr\n");
> +		return -EINVAL;
> +	}
> +
> +	mark_stack_slot_scratched(env, spi);
> +	mark_stack_slot_scratched(env, spi - 1);
> +
> +	/* Writing partially to one dynptr stack slot destroys both. */
> +	for (i = 0; i < BPF_REG_SIZE; i++) {
> +		state->stack[spi].slot_type[i] = STACK_INVALID;
> +		state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> +	}
> +
> +	/* Invalidate any slices associated with this dynptr */
> +	if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {

We'll never get here because referenced dynptrs can't be overwritten (we
check this above and return -EINVAL if dynptr_type_refcounted() is true).
I think we should invalidate any slices associated with non-referenced
dynptrs as well.

> +		int ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> +		struct bpf_func_state *fstate;
> +		struct bpf_reg_state *reg;
> +
> +		bpf_for_each_reg_in_vstate(env->cur_state, fstate, reg, ({
> +			if (reg->ref_obj_id == ref_obj_id) {
> +				if (!env->allow_ptr_leaks)
> +					__mark_reg_not_init(env, reg);
> +				else
> +					__mark_reg_unknown(env, reg);
> +			}
> +		}));
> +	}
> +
> +	/* Do not release reference state, we are destroying dynptr on stack,
> +	 * not using some helper to release it. Just reset register.
> +	 */
> +	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> +	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +	/* Same reason as unmark_stack_slots_dynptr above */
> +	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
> +	return 0;
> +}
> +
>  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>  	struct bpf_func_state *state = func(env, reg);
> @@ -3391,6 +3456,10 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>  		env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
>  	}
>
> +	err = destroy_if_dynptr_stack_slot(env, state, spi);
> +	if (err)
> +		return err;
> +
>  	mark_stack_slot_scratched(env, spi);
>  	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>  	    !register_is_null(reg) && env->bpf_capable) {
> @@ -3504,6 +3573,14 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
>  	if (err)
>  		return err;
>
> +	for (i = min_off; i < max_off; i++) {
> +		int spi;
> +
> +		spi = __get_spi(i);
> +		err = destroy_if_dynptr_stack_slot(env, state, spi);
> +		if (err)
> +			return err;
> +	}
>
>  	/* Variable offset writes destroy any spilled pointers in range. */
>  	for (i = min_off; i < max_off; i++) {
> @@ -5531,6 +5608,31 @@ static int check_stack_range_initialized(
>  	}
>
>  	if (meta && meta->raw_mode) {
> +		/* Ensure we won't be overwriting dynptrs when simulating byte
> +		 * by byte access in check_helper_call using meta.access_size.
> +		 * This would be a problem if we have a helper in the future
> +		 * which takes:
> +		 *
> +		 * helper(uninit_mem, len, dynptr)
> +		 *
> +		 * Now, uninint_mem may overlap with dynptr pointer. Hence, it
> +		 * may end up writing to dynptr itself when touching memory from
> +		 * arg 1. This can be relaxed on a case by case basis for known
> +		 * safe cases, but reject due to the possibilitiy of aliasing by
> +		 * default.
> +		 */
> +		for (i = min_off; i < max_off + access_size; i++) {
> +			int stack_off = -i - 1;
> +
> +			spi = __get_spi(i);
> +			/* raw_mode may write past allocated_stack */
> +			if (state->allocated_stack <= stack_off)
> +				continue;
> +			if (state->stack[spi].slot_type[stack_off % BPF_REG_SIZE] == STACK_DYNPTR) {
> +				verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
> +				return -EACCES;
> +			}
> +		}
>  		meta->access_size = access_size;
>  		meta->regno = regno;
>  		return 0;
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> index 02d57b95cf6e..9dc3f23a8270 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> @@ -420,7 +420,7 @@ int invalid_write1(void *ctx)
>   * offset
>   */
>  SEC("?raw_tp")
> -__failure __msg("Expected an initialized dynptr as arg #3")
> +__failure __msg("cannot overwrite referenced dynptr")
>  int invalid_write2(void *ctx)
>  {
>  	struct bpf_dynptr ptr;
> @@ -444,7 +444,7 @@ int invalid_write2(void *ctx)
>   * non-const offset
>   */
>  SEC("?raw_tp")
> -__failure __msg("Expected an initialized dynptr as arg #1")
> +__failure __msg("cannot overwrite referenced dynptr")
>  int invalid_write3(void *ctx)
>  {
>  	struct bpf_dynptr ptr;
> @@ -476,7 +476,7 @@ static int invalid_write4_callback(__u32 index, void *data)
>   * be invalidated as a dynptr
>   */
>  SEC("?raw_tp")
> -__failure __msg("arg 1 is an unacquired reference")
> +__failure __msg("cannot overwrite referenced dynptr")
>  int invalid_write4(void *ctx)
>  {
>  	struct bpf_dynptr ptr;
> --
> 2.39.1
>
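
Coming back to the slice invalidation comment above: something like this
(completely untested) is the shape I have in mind, i.e. the invalidation
loop hoisted out of the dynptr_type_refcounted() check in
destroy_if_dynptr_stack_slot(). ref_obj_id is 0 for non-referenced
dynptrs, so slices would have to be matched by some per-dynptr identifier
instead; the reg->dynptr_id and the spilled_ptr id it is matched against
below are made-up stand-ins for that, not existing fields:

	/* Run for both referenced and non-referenced dynptrs. The fstate/reg
	 * declarations move to the top of the function.
	 */
	bpf_for_each_reg_in_vstate(env->cur_state, fstate, reg, ({
		if (reg->dynptr_id != state->stack[spi].spilled_ptr.id)
			continue;
		if (!env->allow_ptr_leaks)
			__mark_reg_not_init(env, reg);
		else
			__mark_reg_unknown(env, reg);
	}));

And once that is in place, a selftest along these lines would be nice
(again an untested sketch in dynptr_fail.c style, boilerplate assumed;
the test name is made up and the expected __msg is a guess):

SEC("?raw_tp")
__failure __msg("invalid mem access")
int overwrite_invalidates_slice(void *ctx)
{
	struct bpf_dynptr ptr;
	char buf[64];
	__u8 *data;

	bpf_dynptr_from_mem(buf, sizeof(buf), 0, &ptr);

	data = bpf_dynptr_data(&ptr, 0, 8);
	if (!data)
		return 0;

	/* Direct store over the (non-referenced) dynptr destroys it; the
	 * slice obtained above should no longer be usable after this.
	 */
	*(__u64 *)&ptr = 0;

	return *data;
}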