Re: [PATCH bpf-next 3/7] bpf: Introduce BPF_MODIFY_RETURN

On Tue, Mar 3, 2020 at 6:12 AM KP Singh <kpsingh@xxxxxxxxxxxx> wrote:
>
> From: KP Singh <kpsingh@xxxxxxxxxx>
>
> When multiple programs are attached, each program receives the return
> value from the previous program on the stack and the last program
> provides the return value to the attached function.
>
> The fmod_ret bpf programs are run after the fentry programs and before
> the fexit programs. The original function is only called if all the
> fmod_ret programs return 0 to avoid any unintended side-effects. The
> success value, i.e. 0, is not currently configurable, but could be made
> so by letting user-space specify it at load time.
>
> For example:
>
> int func_to_be_attached(int a, int b)
> {  <--- do_fentry
>
> do_fmod_ret:
>    <update ret by calling fmod_ret>
>    if (ret != 0)
>         goto do_fexit;
>
> original_function:
>
>     <side_effects_happen_here>
>
> }  <--- do_fexit
>
> The fmod_ret program attached to this function can be defined as:
>
> SEC("fmod_ret/func_to_be_attached")
> BPF_PROG(func_name, int a, int b, int ret)

same as on the cover letter, the return type is missing here (a corrected
sketch follows right after the quoted example)

> {
>         // This will skip the original function logic.
>         return 1;
> }
>
> The first fmod_ret program is passed 0 in its return argument.
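For reference, a complete version of the example with the return type added
could look roughly like this (a sketch only; the include paths are what
current libbpf provides and are my assumption, not part of the patch):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>	/* BPF_PROG lives here in current libbpf */

SEC("fmod_ret/func_to_be_attached")
int BPF_PROG(func_name, int a, int b, int ret)
{
	/* Returning non-zero skips the original function body. */
	return 1;
}

char _license[] SEC("license") = "GPL";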
>
> Signed-off-by: KP Singh <kpsingh@xxxxxxxxxx>
> ---
>  arch/x86/net/bpf_jit_comp.c    | 96 ++++++++++++++++++++++++++++++++--
>  include/linux/bpf.h            |  1 +
>  include/uapi/linux/bpf.h       |  1 +
>  kernel/bpf/btf.c               |  3 +-
>  kernel/bpf/syscall.c           |  1 +
>  kernel/bpf/trampoline.c        |  5 +-
>  kernel/bpf/verifier.c          |  1 +
>  tools/include/uapi/linux/bpf.h |  1 +
>  8 files changed, 103 insertions(+), 6 deletions(-)
>

[...]

>
> +       if (fmod_ret->nr_progs) {
> +               branches = kcalloc(fmod_ret->nr_progs, sizeof(u8 *),
> +                                  GFP_KERNEL);
> +               if (!branches)
> +                       return -ENOMEM;
> +               if (invoke_bpf_mod_ret(m, &prog, fmod_ret, stack_size,
> +                                      branches))

branches leaks here when invoke_bpf_mod_ret() fails (see the cleanup sketch
further down)

> +                       return -EINVAL;
> +       }
> +
>         if (flags & BPF_TRAMP_F_CALL_ORIG) {
> -               if (fentry->nr_progs)
> +               if (fentry->nr_progs || fmod_ret->nr_progs)
>                         restore_regs(m, &prog, nr_args, stack_size);
>
>                 /* call original function */
> @@ -1573,6 +1649,14 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,

There is an early return one line above here (the error check right above the
emit_stx() line, not shown in this hunk); branches needs to be freed there as
well to avoid leaking memory.

So I guess it's better to switch to a goto-cleanup approach at this point? A
rough sketch follows after the quoted hunk below.

>                 emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
>         }
>
> +       if (fmod_ret->nr_progs) {
> +               align16_branch_target(&prog);
> +               for (i = 0; i < fmod_ret->nr_progs; i++)
> +                       emit_cond_near_jump(&branches[i], prog, branches[i],
> +                                           X86_JNE);
> +               kfree(branches);
> +       }
> +
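Something like this, i.e. funnel every error path through one label so
branches is always freed. This is only a sketch of the shape, stitched
together from the hunks quoted above plus the surrounding emit_call()/
emit_stx() context; it is not meant as an exact replacement diff:

	u8 **branches = NULL;	/* NULL so the unconditional kfree() below is safe */
	int ret;

	/* ...prologue, fentry progs, etc. unchanged... */

	if (fmod_ret->nr_progs) {
		branches = kcalloc(fmod_ret->nr_progs, sizeof(u8 *),
				   GFP_KERNEL);
		if (!branches)
			return -ENOMEM;

		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, stack_size,
				       branches)) {
			ret = -EINVAL;
			goto cleanup;
		}
	}

	if (flags & BPF_TRAMP_F_CALL_ORIG) {
		if (fentry->nr_progs || fmod_ret->nr_progs)
			restore_regs(m, &prog, nr_args, stack_size);

		/* call original function; this error path now frees branches too */
		if (emit_call(&prog, orig_call, prog)) {
			ret = -EINVAL;
			goto cleanup;
		}
		/* remember return value in a stack for bpf prog to access */
		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
	}

	if (fmod_ret->nr_progs) {
		align16_branch_target(&prog);
		for (i = 0; i < fmod_ret->nr_progs; i++)
			emit_cond_near_jump(&branches[i], prog, branches[i],
					    X86_JNE);
	}

	/* ...fexit progs, epilogue, etc. unchanged... */

	ret = prog - (u8 *)image;	/* success: size of the generated code */

cleanup:
	kfree(branches);
	return ret;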

[...]
