Re: [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers

On Tue, May 11, 2021 at 1:12 AM Florent Revest <revest@xxxxxxxxxxxx> wrote:
>
> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
> per-cpu buffer that they use to store temporary data (arguments to
> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
> back at the end of their scope with bpf_bprintf_cleanup.
>
> If one of these helpers is called from within the scope of another, the
> second "get" fails. For example: a first bpf program calls
> bpf_trace_printk, which calls raw_spin_lock_irqsave, which is traced by
> a second bpf program that calls bpf_snprintf. In short, these helpers
> are not re-entrant: the nested call returns -EBUSY and prints a warning
> message once.
>
> This patch triples the number of bprintf buffers to allow three levels
> of nesting. This is very similar to what was done for tracepoints in
> commit 9594dc3c7e7 ("bpf: fix nested bpf tracepoints with per-cpu
> data").
>
> Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
> Reported-by: syzbot+63122d0bc347f18c1884@xxxxxxxxxxxxxxxxxxxxxxxxx
> Signed-off-by: Florent Revest <revest@xxxxxxxxxxxx>
> ---
>  kernel/bpf/helpers.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 544773970dbc..ef658a9ea5c9 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
>   */
>  #define MAX_PRINTF_BUF_LEN     512
>
> -struct bpf_printf_buf {
> -       char tmp_buf[MAX_PRINTF_BUF_LEN];
> +/* Support executing three nested bprintf helper calls on a given CPU */
> +struct bpf_bprintf_buffers {
> +       char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
>  };
> -static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
> -static DEFINE_PER_CPU(int, bpf_printf_buf_used);
> +static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
> +static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
>
>  static int try_get_fmt_tmp_buf(char **tmp_buf)
>  {
> -       struct bpf_printf_buf *bufs;
> -       int used;
> +       struct bpf_bprintf_buffers *bufs;
> +       int nest_level;
>
>         preempt_disable();
> -       used = this_cpu_inc_return(bpf_printf_buf_used);
> -       if (WARN_ON_ONCE(used > 1)) {
> -               this_cpu_dec(bpf_printf_buf_used);
> +       nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
> +       if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
> +               this_cpu_dec(bpf_bprintf_nest_level);
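
The quoted diff is trimmed here, before the hunks that hand out a buffer
slot on success and release it on the "put" side. A minimal sketch of how
those two halves fit together, reusing the names from the diff above (the
exact code in the applied patch may differ):

  static int try_get_fmt_tmp_buf(char **tmp_buf)
  {
          struct bpf_bprintf_buffers *bufs;
          int nest_level;

          preempt_disable();
          nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
          if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
                  this_cpu_dec(bpf_bprintf_nest_level);
                  preempt_enable();
                  return -EBUSY;
          }
          bufs = this_cpu_ptr(&bpf_bprintf_bufs);
          /* nest_level is 1-based, so nesting level N uses slot N - 1 */
          *tmp_buf = bufs->tmp_bufs[nest_level - 1];

          return 0;
  }

  static void bpf_bprintf_cleanup(void)
  {
          if (this_cpu_read(bpf_bprintf_nest_level)) {
                  this_cpu_dec(bpf_bprintf_nest_level);
                  preempt_enable();
          }
  }

Note that preemption stays disabled for the whole lifetime of the buffer,
so the nest level and the buffer slot remain owned by the current CPU;
the "put" side is what re-enables it.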

Applied to bpf tree.
I think in the end the fix is simple enough, and much better than an
on-stack buffer.
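
For the general shape of the technique, here is a small, self-contained
userspace sketch (hypothetical names; single-threaded, so the per-CPU
machinery and preemption control are reduced to plain globals). It shows
three nested "gets" succeeding and a fourth being rejected:

  #include <stdio.h>

  #define MAX_PRINTF_BUF_LEN 512
  #define MAX_NEST 3

  /* Stand-ins for the per-CPU data: one counter, three buffers. */
  static char tmp_bufs[MAX_NEST][MAX_PRINTF_BUF_LEN];
  static int nest_level;

  static int try_get_buf(char **buf)
  {
          if (++nest_level > MAX_NEST) {
                  nest_level--;
                  return -1; /* -EBUSY in the kernel */
          }
          *buf = tmp_bufs[nest_level - 1];
          return 0;
  }

  static void put_buf(void)
  {
          if (nest_level)
                  nest_level--;
  }

  int main(void)
  {
          char *bufs[4];
          int i;

          /* Three nested "helpers" each get a distinct buffer... */
          for (i = 0; i < 3; i++) {
                  if (try_get_buf(&bufs[i]))
                          return 1;
                  printf("nest level %d got slot %d\n", i + 1, i);
          }
          /* ...but a fourth nested call is rejected. */
          if (try_get_buf(&bufs[3]))
                  printf("nest level 4 rejected\n");

          for (i = 0; i < 3; i++)
                  put_buf();
          return 0;
  }

In the kernel, preemption is disabled between get and put, so the only
way to re-enter on the same CPU is genuine nesting (an interrupt, or
tracing a function the helper itself calls), which is exactly what the
counter bounds.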