Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs

On Tue, Nov 15, 2022 at 02:48:05PM -0800, Song Liu wrote:
> On Tue, Nov 15, 2022 at 1:09 PM Luis Chamberlain <mcgrof@xxxxxxxxxx> wrote:
> >
> > On Mon, Nov 14, 2022 at 05:30:39PM -0800, Song Liu wrote:
> > > On Mon, Nov 7, 2022 at 2:41 PM Song Liu <song@xxxxxxxxxx> wrote:
> > > >
> > >
> > > [...]
> > >
> > > >
> > > >
> > > > This set enables bpf programs and bpf dispatchers to share huge pages with
> > > > a new API:
> > > >   execmem_alloc()
> > > >   execmem_free()
> > > >   execmem_fill()
> > > >
> > > > The idea is similar to Peter's suggestion in [1].
> > > >
> > > > execmem_alloc() manages a set of PMD_SIZE RO+X memory, and allocates this
> > > > memory to its users. execmem_free() is used to free memory allocated by
> > > > execmem_alloc(). execmem_fill() is used to update memory allocated by
> > > > execmem_alloc().
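
For illustration only, a minimal sketch of how a JIT might consume the
API described above; the prototypes, the alignment value, and the
execmem_fill() return handling are assumptions based on the cover
letter text, not copied from the patches:

    /*
     * Assumed prototypes (hypothetical, for this sketch only):
     *   void *execmem_alloc(unsigned long size, unsigned long align);
     *   void *execmem_fill(void *dst, void *src, size_t len);
     *   void  execmem_free(void *addr);
     */
    static void *jit_publish_image(void *staged, size_t size)
    {
            void *ro_image;

            /* Carve a chunk out of a shared PMD_SIZE RO+X region. */
            ro_image = execmem_alloc(size, 64 /* hypothetical alignment */);
            if (!ro_image)
                    return NULL;

            /*
             * The mapping is never made writable here; the staged
             * instructions are copied in with a text_poke-style helper.
             * Error handling depends on execmem_fill()'s final return
             * convention, so it is elided in this sketch.
             */
            execmem_fill(ro_image, staged, size);

            return ro_image;
    }

    /* When the program is unloaded: execmem_free(ro_image); */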
> > >
> > > Sigh, I just realized this thread made through linux-mm@xxxxxxxxx, but got
> > > dropped by bpf@xxxxxxxxxxxxxxx, so I guess I will have to resend v3.
> >
> > I don't know what is going on with the bpf list but whatever it is, is silly.
> > You should Cc the right folks to ensure proper review if the bpf list is
> > the issue.
> >
> > > Currently, I have got the following action items for v3:
> > > 1. Add the unified API to allocate text memory to the motivation;
> > > 2. Update Documentation/x86/x86_64/mm.rst;
> > > 3. Allow non-PMD_SIZE allocations for powerpc.
> >
> > - I am really exhausted of asking again for real performance tests;
> >   you keep saying you can't and I keep saying you can, you are not
> >   trying hard enough. Stop thinking about your internal benchmark which
> >   you cannot publish. There should be enough crap out there which you
> >   can use.
> >
> > - A new selftest or set of selftests which demonstrates gain in
> >   performance
> 
> I didn't mean to avoid showing results with something publicly available.
> I just thought the actual benchmark was better (and we do use it to
> demonstrate the benefit of a lot of kernel improvements).
> 
> For something publicly available, how about the following:
> 
> Run 100 instances of the following benchmark from bpf selftests:
>   tools/testing/selftests/bpf/bench -w2 -d100 -a trig-kprobe
> which loads 7 BPF programs, and triggers one of them.
> 
> Then use perf to monitor TLB related counters:
>    perf stat -e iTLB-load-misses,itlb_misses.walk_completed_4k, \
>         itlb_misses.walk_completed_2m_4m -a
> 
> The following results are from a qemu VM with 32 cores.
> 
> Before bpf_prog_pack:
>   iTLB-load-misses: 350k/s
>   itlb_misses.walk_completed_4k: 90k/s
>   itlb_misses.walk_completed_2m_4m: 0.1/s
> 
> With bpf_prog_pack (current upstream):
>   iTLB-load-misses: 220k/s
>   itlb_misses.walk_completed_4k: 68k/s
>   itlb_misses.walk_completed_2m_4m: 0.2/s
> 
> With execmem_alloc (with this set):
>   iTLB-load-misses: 185k/s
>   itlb_misses.walk_completed_4k: 58k/s
>   itlb_misses.walk_completed_2m_4m: 1/s
> 
> Do these results address your questions?
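
For scale, the quoted iTLB-load-miss rates work out to roughly:

    (350k - 220k) / 350k ~= 37% reduction with bpf_prog_pack
    (350k - 185k) / 350k ~= 47% reduction with execmem_alloc

and itlb_misses.walk_completed_4k follows the same pattern (about 24%
and 36% lower, respectively).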

More in line with what I was hoping for. Can something just do
the parallelization for you in one shot? Can bench alone do it for you?
Is there no interest in having something which generically showcases
multithreading / hammering a system with tons of eBPF JITs? It may
prove useful.

And also, it begs the question: what if you had another generic iTLB
benchmark or general memory pressure workload running *as* you run the
above? I ask, as it was my understanding that one of the issues was the
long term slowdown caused by direct map fragmentation without
bpf_prog_pack, and so such an application should crawl to its knees
over time, and there should be numbers you could show to prove that
too, before and after.

  Luis


