Re: [PATCH bpf 06/12] bpf: verify callbacks as if they are called unknown number of times

On Wed, Nov 15, 2023 at 9:18 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> Prior to this patch, callbacks were handled as regular function calls,
> and execution of the callback body was modeled exactly once.
> This patch updates the callback handling logic as follows:
> - introduces a function push_callback_call() that schedules callback
>   body verification on the env->head stack;
> - updates prepare_func_exit() to reschedule callback body verification
>   upon BPF_EXIT;
> - like calls to bpf_*_iter_next(), calls to callback-invoking functions
>   are marked as checkpoints;
> - is_state_visited() is updated to stop callback-based iteration when
>   an identical parent state is found.
>
> Paths where the callback function is invoked zero times are now
> verified first, which made it necessary to modify some selftests:
> - the following negative tests required adding release/unlock/drop
>   calls to avoid previously masked unrelated error reports:
>   - cb_refs.c:underflow_prog
>   - exceptions_fail.c:reject_rbtree_add_throw
>   - exceptions_fail.c:reject_with_cp_reference
> - the following precision tracking selftests needed changes in the
>   expected log trace:
>   - verifier_subprog_precision.c:callback_result_precise
>     (note: r0 precision is no longer propagated inside the callback,
>            and I think this is correct behavior)
>   - verifier_subprog_precision.c:parent_callee_saved_reg_precise_with_callback
>   - verifier_subprog_precision.c:parent_stack_slot_precise_with_callback
>
> Reported-by: Andrew Werner <awerner32@xxxxxxxxx>
> Closes: https://lore.kernel.org/bpf/CA+vRuzPChFNXmouzGG+wsy=6eMcfr1mFG0F3g7rbg-sedGKW3w@xxxxxxxxxxxxxx/
> Signed-off-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> ---
>  include/linux/bpf_verifier.h                  |   5 +
>  kernel/bpf/verifier.c                         | 257 +++++++++++-------
>  .../selftests/bpf/prog_tests/cb_refs.c        |   4 +-
>  tools/testing/selftests/bpf/progs/cb_refs.c   |   1 +
>  .../selftests/bpf/progs/exceptions_fail.c     |   2 +
>  .../bpf/progs/verifier_subprog_precision.c    |  22 +-
>  6 files changed, 183 insertions(+), 108 deletions(-)
>

Overall this makes sense, but I left a few questions below to try to
understand everything better. Thanks!
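
To check my understanding, the new flow is roughly like this (very
rough pseudocode of how I read the patch; simplified names, not the
actual verifier code):

  /* at a callback-calling helper/kfunc call site */
  check_call:
      /* schedule one pass over the callback body on the env->head
       * stack; the current state continues past the call first, so
       * the "callback runs zero times" path is verified first
       */
      push_callback_call(env, insn, ...);

  /* at BPF_EXIT inside the callback body */
  prepare_func_exit:
      /* model "one more iteration" by pushing another
       * callback-body state
       */

  /* the call insn is a checkpoint, so is_state_visited() terminates
   * the loop once an identical parent state is found, same as for
   * bpf_*_iter_next() based loops
   */

Is that about right?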

[...]

> +static bool is_callback_iter_next(struct bpf_verifier_env *env, int insn_idx);
> +
>  /* For given verifier state backtrack_insn() is called from the last insn to
>   * the first insn. Its purpose is to compute a bitmask of registers and
>   * stack slots that needs precision in the parent verifier state.
> @@ -4030,10 +4044,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
>                                         return -EFAULT;
>                                 return 0;
>                         }
> -               } else if ((bpf_helper_call(insn) &&
> -                           is_callback_calling_function(insn->imm) &&
> -                           !is_async_callback_calling_function(insn->imm)) ||
> -                          (bpf_pseudo_kfunc_call(insn) && is_callback_calling_kfunc(insn->imm))) {
> +               } else if (is_sync_callback_calling_insn(insn) && idx != subseq_idx - 1) {

can you leave a comment explaining why the idx != subseq_idx - 1 check
is needed?
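
Maybe something along these lines, if I read the logic right (my
wording, feel free to correct):

  /* idx == subseq_idx - 1 means we are backtracking over the call in
   * straight-line code, i.e. the insn following the call in the jump
   * history is the one right after it; otherwise the following insn
   * belongs to the callback body, so we are crossing the
   * caller/callback boundary here
   */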

>                         /* callback-calling helper or kfunc call, which means
>                          * we are exiting from subprog, but unlike the subprog
>                          * call handling above, we shouldn't propagate

[...]

> @@ -12176,6 +12216,21 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                 return -EACCES;
>         }
>
> +       /* Check the arguments */
> +       err = check_kfunc_args(env, &meta, insn_idx);
> +       if (err < 0)
> +               return err;
> +
> +       if (meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {

can't we use is_sync_callback_calling_kfunc() here?
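
i.e. something like this (untested; assuming the helper takes the
kfunc BTF id the same way is_callback_calling_kfunc() does):

  if (is_sync_callback_calling_kfunc(meta.func_id)) {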

> +               err = push_callback_call(env, insn, insn_idx, meta.subprogno,
> +                                        set_rbtree_add_callback_state);
> +               if (err) {
> +                       verbose(env, "kfunc %s#%d failed callback verification\n",
> +                               func_name, meta.func_id);
> +                       return err;
> +               }
> +       }
> +

[...]

> diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
> index 3bff680de16c..b5aa168889c1 100644
> --- a/tools/testing/selftests/bpf/prog_tests/cb_refs.c
> +++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
> @@ -21,12 +21,14 @@ void test_cb_refs(void)
>  {
>         LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf,
>                                                 .kernel_log_size = sizeof(log_buf),
> -                                               .kernel_log_level = 1);
> +                                               .kernel_log_level = 1 | 2 | 4);

nit: 1 is redundant if 2 is specified, so just `2 | 4`?
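
i.e. just:

  .kernel_log_level = 2 | 4);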

>         struct bpf_program *prog;
>         struct cb_refs *skel;
>         int i;
>
>         for (i = 0; i < ARRAY_SIZE(cb_refs_tests); i++) {
> +               if (!test__start_subtest(cb_refs_tests[i].prog_name))
> +                       continue;
>                 LIBBPF_OPTS(bpf_test_run_opts, run_opts,
>                         .data_in = &pkt_v4,
>                         .data_size_in = sizeof(pkt_v4),

[...]

> diff --git a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
> index db6b3143338b..ead358679fe2 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
> @@ -120,14 +120,12 @@ __naked int global_subprog_result_precise(void)
>  SEC("?raw_tp")
>  __success __log_level(2)
>  __msg("14: (0f) r1 += r6")
> -__msg("mark_precise: frame0: last_idx 14 first_idx 10")
> +__msg("mark_precise: frame0: last_idx 14 first_idx 9")
>  __msg("mark_precise: frame0: regs=r6 stack= before 13: (bf) r1 = r7")
>  __msg("mark_precise: frame0: regs=r6 stack= before 12: (27) r6 *= 4")
>  __msg("mark_precise: frame0: regs=r6 stack= before 11: (25) if r6 > 0x3 goto pc+4")
>  __msg("mark_precise: frame0: regs=r6 stack= before 10: (bf) r6 = r0")
> -__msg("mark_precise: frame0: parent state regs=r0 stack=:")
> -__msg("mark_precise: frame0: last_idx 18 first_idx 0")
> -__msg("mark_precise: frame0: regs=r0 stack= before 18: (95) exit")
> +__msg("mark_precise: frame0: regs=r0 stack= before 9: (85) call bpf_loop")

you are right that the r0 returned from bpf_loop is not the r0
returned from bpf_loop's callback, but we still have to go through the
callback instructions, right? So did you remove a few __msg() entries
from the subprog instruction history because the history got too long,
or for some other reason? I'd actually keep those, but update them to
reflect that in the subprog we don't need r0 to be precise; that would
make this test even clearer.

>  __naked int callback_result_precise(void)
>  {
>         asm volatile (

[...]