On Tue, Nov 8, 2022 at 11:43 AM Christophe Leroy
<christophe.leroy@xxxxxxxxxx> wrote:
>
>
> On 08/11/2022 at 19:41, Song Liu wrote:
> > On Tue, Nov 8, 2022 at 3:27 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> >>
> >> Hi Song,
> >>
> >> On Mon, Nov 07, 2022 at 02:39:16PM -0800, Song Liu wrote:
> >>> This patchset tries to address the following issues:
> >>>
> >>> 1. Direct map fragmentation
> >>>
> >>> On x86, STRICT_*_RWX requires the direct map of any RO+X memory to
> >>> also be RO+X. These set_memory_* calls cause 1GB page table entries
> >>> to be split into 2MB and 4kB ones. This fragmentation of the direct
> >>> map results in bigger and slower page tables and puts pressure on
> >>> both the instruction and data TLBs.
> >>>
> >>> Our previous work on bpf_prog_pack tries to address this issue from
> >>> the BPF program side. Based on the experiments by Aaron Lu [4],
> >>> bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> >>> programs.
> >>
> >> Usage of the set_memory_* APIs with memory allocated from the
> >> vmalloc/modules virtual range does not change the direct map; it
> >> only updates the permissions in the vmalloc range. The direct map
> >> splits occur in vm_remove_mappings() when the memory is *freed*.
> >>
> >> That said, both bpf_prog_pack and these patches do reduce the
> >> fragmentation, but this happens because the memory is freed to the
> >> system in 2M chunks and there are no splits of 2M pages. Besides,
> >> since the same 2M page is used for many BPF programs, there should
> >> be far fewer vfree() calls.
> >>
> >>> 2. iTLB pressure from BPF programs
> >>>
> >>> Dynamic kernel text such as modules and BPF programs (even with the
> >>> current bpf_prog_pack) uses 4kB pages on x86. When the total size
> >>> of modules and BPF programs is large, we can see a visible
> >>> performance drop caused by a high iTLB miss rate.
> >>
> >> Like Luis mentioned several times already, it would be nice to see
> >> numbers.
> >>
> >>> 3. TLB shootdown for short-lived BPF programs
> >>>
> >>> Before bpf_prog_pack, loading and unloading BPF programs required a
> >>> global TLB shootdown. This patchset (like bpf_prog_pack before it)
> >>> replaces that with a local TLB flush.
> >>>
> >>> 4. Reduce memory usage by BPF programs (in some cases)
> >>>
> >>> Most BPF programs and various trampolines are small, yet each often
> >>> occupies a whole page. From a random server in our fleet, 50% of
> >>> the loaded BPF programs are less than 500 bytes in size, and 75% of
> >>> them are less than 2kB. Allowing these BPF programs to share 2MB
> >>> pages would yield some memory savings for systems with many BPF
> >>> programs. For systems with only a small number of BPF programs,
> >>> this patch may waste a little memory by allocating one 2MB page but
> >>> using only part of it.
> >>
> >> I'm not convinced there are memory savings here. Unless you have
> >> hundreds of BPF programs, most of the 2M page will be wasted, won't
> >> it? So for systems with moderate use of BPF, most of the 2M page
> >> will be unused, right?
> >
> > There will be some memory waste in such cases, but it will get
> > better because:
> > 1) with patches 4/5 and 5/5, BPF programs will share this 2MB page
> >    with the kernel .text section (_stext to _etext);
> > 2) modules, ftrace, and kprobes will also share this 2MB page;
> > 3) there are bigger BPF programs in many use cases.
>
> And what I love about this series (for powerpc/32) is that we will
> likely now be able to have bpf, ftrace, and kprobes without the
> performance cost of CONFIG_MODULES.
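To make the set_memory_* discussion above concrete, here is a minimal
sketch (not code from this patchset) of the pre-bpf_prog_pack
per-program flow, using the mainline module_alloc()/set_memory_*()
APIs as of v6.1:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/moduleloader.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/*
 * One vmalloc area per JITed program.  Flipping it to RO+X here does
 * not touch the direct map; the splits happen later, when the area is
 * vfreed and vm_remove_mappings() resets the direct map permissions.
 */
static void *alloc_jit_image(unsigned long size)
{
	unsigned long npages = DIV_ROUND_UP(size, PAGE_SIZE);
	void *image = module_alloc(size);

	if (!image)
		return NULL;

	set_vm_flush_reset_perms(image);	/* reset perms on vfree() */
	set_memory_ro((unsigned long)image, npages);
	set_memory_x((unsigned long)image, npages);
	return image;
}

Since each program gets its own area here, every load/unload costs a
vfree() sooner or later; packing many programs into shared 2M chunks,
as bpf_prog_pack and this series do, is what avoids the splits and
cuts the number of vfree() calls.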
Yeah, I remember reading emails about using tracing tools without
CONFIG_MODULES. We still need more work (beyond this set) to make it
happen for powerpc/32. For example, the current powerpc bpf_jit doesn't
support JITing into ROX memory (a rough sketch of what that involves
follows at the end of this mail).

Song

>
> Today, CONFIG_MODULES means page mapping, which means handling kernel
> pages in the ITLB miss handler.
>
> By using some of the space between the end of rodata and the start of
> inittext, we are able to use ROX linear memory, which is mapped by
> blocks. That means there is no need to handle kernel text in the ITLB
> handler (you can look at
> https://elixir.bootlin.com/linux/v6.1-rc3/source/arch/powerpc/kernel/head_8xx.S#L191
> to better understand what I'm talking about).
>
> Thanks
> Christophe
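As for the general shape of "JITing into ROX memory", a minimal sketch
with assumed names (rox_alloc() and rox_copy() are hypothetical
stand-ins for the series' allocator and for an x86
text_poke_copy()-style write primitive), not the actual powerpc
implementation:

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * The JIT emits into an ordinary RW scratch buffer; the finished image
 * is then copied into the shared RO+X region through a dedicated write
 * primitive, so the ROX mapping itself is never made writable.
 */
static void *jit_into_rox(const u8 *insns, size_t len)
{
	void *rox;
	u8 *tmp = kvmalloc(len, GFP_KERNEL);	/* RW scratch buffer */

	if (!tmp)
		return NULL;

	memcpy(tmp, insns, len);	/* emit/relocate into scratch */

	rox = rox_alloc(len);		/* hypothetical: carve space out
					 * of the shared ROX pool */
	if (rox)
		rox_copy(rox, tmp, len);	/* hypothetical: text_poke_copy()-
						 * style write into ROX memory */
	kvfree(tmp);
	return rox;
}

The point of this shape is that every write after the initial
allocation goes through the copy primitive, which is what lets a 2MB
region stay ROX (and block-mapped) for its whole lifetime.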