Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs

On Sun, Nov 13, 2022 at 2:43 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
>
> On Wed, Nov 09, 2022 at 09:43:50AM -0800, Song Liu wrote:
> > On Wed, Nov 9, 2022 at 3:18 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> > >
> > [...]
> >
> > > > >
> > > > > The proposed execmem_alloc() looks to me very much tailored for x86
> > > > > to be
> > > > > used as a replacement for module_alloc(). Some architectures have
> > > > > module_alloc() that is quite different from the default or x86
> > > > > version, so
> > > > > I'd expect at least some explanation how modules etc can use execmem_
> > > > > APIs
> > > > > without breaking !x86 architectures.
> > > >
> > > > I think this is fair, but I think we should ask ask ourselves - how
> > > > much should we do in one step?
> > >
> > > I think that at least we need an evidence that execmem_alloc() etc can be
> > > actually used by modules/ftrace/kprobes. Luis said that RFC v2 didn't work
> > > for him at all, so having a core MM API for code allocation that only works
> > > with BPF on x86 seems not right to me.
> >
> > While using execmem_alloc() et al. for module support is difficult, folks are
> > making progress on it. For example, the prototype would have been more
> > difficult before CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
> > (introduced by Christophe).
> >
> > We also have other users that we can onboard soon: BPF trampoline on
> > x86_64, BPF jit and trampoline on arm64, and maybe also on powerpc and
> > s390.
>
> Caching of large pages won't make any difference on arm64 and powerpc
> because they do not support splitting of the direct map, so the only
> potential benefit there is a centralized handling of text loading and I'm
> not convinced execmem_alloc() will get us there.

Sharing large pages helps reduce iTLB pressure, which is the second
motivation here (after reducing direct map fragmentation).

>
> > > With execmem_alloc() as the first step I'm failing to see the large
> > > picture. If we want to use it for modules, how will we allocate RO data?
> > > with similar rodata_alloc() that uses yet another tree in vmalloc?
> > > How the caching of large pages in vmalloc can be made useful for use cases
> > > like secretmem and PKS?
> >
> > If RO data causes problems with direct map fragmentation, we can use
> > similar logic. I think we will need another tree in vmalloc for this case.
> > Since the logic will be mostly identical, I personally don't think adding
> > another tree is a big overhead.
>
> Actually, it would be interesting to quantify memory savings/waste as the
> result of using execmem_alloc()

From a random system in our fleet, execmem_alloc() saves:

139 iTLB entries (1x 2MB entry vs. 140x 4kB entries), which is more than
100% of the L1 iTLB and about 10% of the L2 TLB.

It wastes 1.5MB of memory, which is 0.0023% of system memory (64GB).

I believe this is clearly a good trade-off.
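For reference, the arithmetic behind these figures can be sanity-checked with a short script. The ~560 kB of total jitted text is inferred from the 140 small-page count quoted above, not stated directly in the thread:

```python
# Sanity check of the iTLB / memory trade-off figures quoted above.
# Assumption: the BPF text on that machine spans 140 4kB pages (~560 kB).

PAGE_4K = 4 * 1024
PAGE_2M = 2 * 1024 * 1024

text_bytes = 140 * PAGE_4K          # code previously spread over 140 small pages

# With a single shared 2MB page backing the same text:
entries_before = text_bytes // PAGE_4K   # 140 iTLB entries
entries_after = 1                        # one 2MB iTLB entry
print(entries_before - entries_after)    # 139 entries saved

# Memory wasted by rounding the text up to a full 2MB page:
waste = PAGE_2M - text_bytes
print(waste / (1024 * 1024))             # ~1.45 MB, close to the quoted 1.5MB

# As a fraction of a 64GB machine:
system_bytes = 64 * 1024**3
print(f"{waste / system_bytes * 100:.4f}%")   # ~0.0022%, matching the ~0.0023% quoted
```

The small discrepancies against the quoted 1.5MB / 0.0023% come from rounding in the email, not from the calculation.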

Thanks,
Song


