Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs

On Tue, Nov 08, 2022 at 10:41:53AM -0800, Song Liu wrote:
> On Tue, Nov 8, 2022 at 3:27 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> >
> > Hi Song,
> >
> > On Mon, Nov 07, 2022 at 02:39:16PM -0800, Song Liu wrote:
> > > This patchset tries to address the following issues:
> > >
> > > 1. Direct map fragmentation
> > >
> > > On x86, STRICT_*_RWX requires the direct map of any RO+X memory to also
> > > be RO+X. These set_memory_* calls cause 1GB page table entries to be
> > > split into 2MB and 4kB ones. This fragmentation of the direct map
> > > results in bigger and slower page tables and adds pressure on both the
> > > instruction and data TLBs.
> > >
> > > Our previous work on bpf_prog_pack tries to address this issue from the
> > > BPF program side. Based on the experiments by Aaron Lu [4], bpf_prog_pack
> > > has greatly reduced direct map fragmentation caused by BPF programs.
> >
> > Usage of the set_memory_* APIs on memory allocated from the vmalloc/modules
> > virtual range does not change the direct map; it only updates the
> > permissions in the vmalloc range. The direct map splits occur in
> > vm_remove_mappings() when the memory is *freed*.
> >
> > That said, both bpf_prog_pack and these patches do reduce the
> > fragmentation, but that happens because the memory is freed back to the
> > system in 2M chunks, so the 2M pages are never split. Besides, since the
> > same 2M page is used for many BPF programs, there should be far fewer
> > vfree() calls.
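
To make that concrete, here is a rough sketch of the classic
(pre-bpf_prog_pack) JIT image lifecycle. The names are made up, this is not
the actual bpf code:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/moduleloader.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/* Rough sketch only; names are made up, not the exact bpf code. */
static void *jit_image_alloc(unsigned long size)
{
	unsigned long npages = DIV_ROUND_UP(size, PAGE_SIZE);
	void *image = module_alloc(size);	/* 4k pages in vmalloc space */

	if (!image)
		return NULL;

	/* tell vfree() to reset the direct map permissions on free */
	set_vm_flush_reset_perms(image);

	/* these only change the vmalloc alias, not the direct map */
	set_memory_ro((unsigned long)image, npages);
	set_memory_x((unsigned long)image, npages);

	return image;
}

static void jit_image_free(void *image)
{
	/*
	 * module_memfree() -> vfree() -> vm_remove_mappings() restores
	 * the direct map permissions of the underlying pages; this is
	 * where the 1G/2M direct map entries actually get split.
	 */
	module_memfree(image);
}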
> >
> > > 2. iTLB pressure from BPF programs
> > >
> > > Dynamic kernel text such as modules and BPF programs (even with the
> > > current bpf_prog_pack) uses 4kB pages on x86. When the total size of
> > > modules and BPF programs is large, we can see a visible performance
> > > drop caused by a high iTLB miss rate.
> >
> > Like Luis mentioned several times already, it would be nice to see numbers.
> >
> > > 3. TLB shootdown for short-lived BPF programs
> > >
> > > Before bpf_prog_pack, loading and unloading BPF programs required a
> > > global TLB shootdown. This patchset (and bpf_prog_pack) replaces that
> > > with a local TLB flush.
> > >
> > > 4. Reduce memory usage by BPF programs (in some cases)
> > >
> > > Most BPF programs and various trampolines are small, yet each of them
> > > often occupies a whole page. On a random server in our fleet, 50% of the
> > > loaded BPF programs are less than 500 bytes in size, and 75% of them are
> > > less than 2kB. Allowing these BPF programs to share 2MB pages would
> > > yield some memory savings for systems with many BPF programs. For
> > > systems with only a small number of BPF programs, this patch may waste
> > > a little memory by allocating one 2MB page but using only part of it.
> >
> > I'm not convinced there are memory savings here. Unless you have hundreds
> > of BPF programs, most of the 2M page will be wasted, won't it?
> > So for systems with moderate use of BPF, most of the 2M page will be
> > unused, right?
> 
> There will be some memory waste in such cases. But it will get better with:
> 1) with patches 4/5 and 5/5, BPF programs will share this 2MB page with the
> kernel .text section (_stext to _etext);
> 2) modules, ftrace, and kprobes will also share this 2MB page;

Unless I'm missing something, what will be shared is the virtual space; the
actual physical pages will still be allocated the same way as for any
vmalloc() allocation.
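
I.e. something like this toy sketch (toy_exec_alloc() is made up, it is not
the code in this series): even if all the executable allocations end up in
one shared 2M-aligned virtual region, every allocation still grabs its own
physical pages, just like vmalloc() does:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Toy sketch, not execmem_alloc(): the virtual range may be shared,
 * but the backing pages are still allocated per call. */
static void *toy_exec_alloc(unsigned int nr_pages)
{
	struct page **pages;
	void *va = NULL;
	unsigned int i;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < nr_pages; i++) {
		/* physical memory is consumed here, per allocation */
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto out;
	}

	/* only the virtual mapping is a candidate for sharing; the
	 * permissions would still be flipped with set_memory_*() later */
	va = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
out:
	if (!va)
		while (i--)
			__free_page(pages[i]);
	kfree(pages);
	return va;
}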

> 3) There are bigger BPF programs in many use cases.
 
With the statistics you provided above, one would need hundreds if not
thousands of BPF programs to fill a 2M page. I didn't do the exact math, but
it seems that to see any memory savings there would have to be several
hundred BPF programs.
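
Back of the envelope, assuming each of those small programs would otherwise
sit in its own 4kB page:

	break-even:        2MB / 4kB   = 512 programs
	filling the page:  2MB / ~2kB ~= 1000 programs

so below roughly 500 loaded programs a dedicated 2M page costs more memory
than it saves.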

> Thanks,
> Song

-- 
Sincerely yours,
Mike.



