Re: [PATCH v4 bpf-next 15/15] bpf: Introduce sysctl kernel.bpf_force_dyn_alloc.

On Mon, Aug 29, 2022 at 3:02 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 8/26/22 4:44 AM, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Introduce sysctl kernel.bpf_force_dyn_alloc to force dynamic allocation in bpf
> > hash map. All selftests/bpf should pass with bpf_force_dyn_alloc 0 or 1 and all
> > bpf programs (both sleepable and not) should not see any functional difference.
> > The sysctl's observable behavior should only be improved memory usage.
> >
> > Acked-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > ---
> >   include/linux/filter.h | 2 ++
> >   kernel/bpf/core.c      | 2 ++
> >   kernel/bpf/hashtab.c   | 5 +++++
> >   kernel/bpf/syscall.c   | 9 +++++++++
> >   4 files changed, 18 insertions(+)
> >
> > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > index a5f21dc3c432..eb4d4a0c0bde 100644
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -1009,6 +1009,8 @@ bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
> >   }
> >   #endif
> >
> > +extern int bpf_force_dyn_alloc;
> > +
> >   #ifdef CONFIG_BPF_JIT
> >   extern int bpf_jit_enable;
> >   extern int bpf_jit_harden;
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 639437f36928..a13e78ea4b90 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -533,6 +533,8 @@ void bpf_prog_kallsyms_del_all(struct bpf_prog *fp)
> >       bpf_prog_kallsyms_del(fp);
> >   }
> >
> > +int bpf_force_dyn_alloc __read_mostly;
> > +
> >   #ifdef CONFIG_BPF_JIT
> >   /* All BPF JIT sysctl knobs here. */
> >   int bpf_jit_enable   __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index 89f26cbddef5..f68a3400939e 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -505,6 +505,11 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> >
> >       bpf_map_init_from_attr(&htab->map, attr);
> >
> > +     if (!lru && bpf_force_dyn_alloc) {
> > +             prealloc = false;
> > +             htab->map.map_flags |= BPF_F_NO_PREALLOC;
> > +     }
> > +
>
> The rationale is essentially for testing, right? Would be nice to avoid
> making this patch uapi. It will just confuse users with implementation
> details, imho, and then it's hard to remove it again.

Not for testing, but for production.
The plan is to roll this sysctl out gradually across the fleet and
hopefully observe memory savings without negative side effects,
but map usage patterns are wild. It will take a long time to gain
confidence that the prealloc code can be completely removed from htab.
At-scale usage might surface all kinds of unforeseen issues, and
new allocation heuristics would probably need to be developed.
If 'git rm kernel/bpf/percpu_freelist.*' ever happens
(would be great, but who knows) then this sysctl will become a nop.
This patch is trivial enough that we could keep it internal,
but everybody else with a large fleet of servers would probably
be applying the same patch and repeating the same steps.
bpf usage across hyperscalers varies a lot.
Before 'git rm freelist' we would probably flip the default for this
sysctl to get even broader coverage.
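
For reference, a rough per-host rollout sketch, assuming the knob lands
as kernel.bpf_force_dyn_alloc exactly as posted in this series (the
bpftool step is just one way to spot-check the effect):

```shell
# Flip the knob on a canary host, then compare hash-map memory usage
# before widening the experiment.
sysctl -w kernel.bpf_force_dyn_alloc=1

# Hash maps created after this point (non-LRU ones, per the patch)
# get BPF_F_NO_PREALLOC forced on; their flags can be inspected with:
bpftool map show
```

Per-map opt-in via BPF_F_NO_PREALLOC already exists today; the sysctl
only forces the same behavior fleet-wide without rebuilding loaders.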
