Re: [PATCH bpf-next v1 RESEND 1/5] vmalloc: introduce vmalloc_exec, vfree_exec, and vcopy_exec

Hi Aaron,

On Sun, Nov 6, 2022 at 10:40 PM Aaron Lu <aaron.lu@xxxxxxxxx> wrote:
>
[...]
> > and that I think is the real golden nugget here.
>
> I'm interested in how this patchset (further) improves direct map
> fragmentation, so I would like to evaluate it to see if my previous work
> to merge small mappings back in the architecture layer[1] is still
> necessary.
>
> I tried to apply this patchset on v6.1-rc3/2/1 and v6.0 but all failed,
> so I took a step back and evaluated the existing bpf_prog_pack. I'm
> aware that this patchset would make things even better by using order-9
> pages to back the vmalloc'ed range.

The patchset was based on bpf-next tree:

https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/

>
> I used the sample bpf prog samples/bpf/sockex1 because it looks easy to
> run; feel free to let me know a better way to evaluate this.
>
> - In kernels before bpf_prog_pack (v5.17 and earlier), this prog would
> cause 3 pages to change protection from RW+NX to RO+X; and if the three
> pages are far apart, each would cause a level-3 split and then a level-2
> split. In reality, allocated pages tend to stay close physically, so the
> actual result will not be this bad.
[...]
>
> - v6.1-rc3
> There is no difference, because I can't trigger another pack alloc
> before the system is OOMed.
>
> Conclusion: I think bpf_prog_pack is very good at reducing direct map
> fragmentation, and this patchset can further improve the situation on
> large machines (with huge amounts of memory) or with more large bpf
> progs loaded, etc.

Thanks a lot for these experiments! I will include the data in the next
version of the set.
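
For reference, one low-effort way to watch this kind of fragmentation
(my suggestion, not part of the patchset; x86-specific) is the
DirectMap counters in /proc/meminfo, which report how much of the
kernel direct map is backed by 4K, 2M, and 1G mappings:

```shell
# Sketch of an evaluation procedure, assuming an x86 kernel that
# exposes DirectMap* counters in /proc/meminfo. A growing DirectMap4k
# value after loading a BPF program indicates new mapping splits.
grep '^DirectMap' /proc/meminfo || echo 'DirectMap counters unavailable'
# ... load the program under test, e.g. samples/bpf/sockex1 ...
grep '^DirectMap' /proc/meminfo || echo 'DirectMap counters unavailable'
# Diff the two snapshots: 4K growing at the expense of 2M/1G means
# large mappings were split.
```

This avoids instrumenting the kernel at all, though it only shows
aggregate counts, not which ranges were split.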

>
> Some imperfect things I can think of (not related to this patchset):
> 1 Once a split happens, the mapping is never merged back. This may not
> be a big deal now with bpf_prog_pack and this patchset, because the
> need to allocate a new order-9 page, and thus cause a potential split,
> should happen much less often;

I think we will need to regroup the direct map for some scenarios, but I
am not aware of such workloads yet.

> 2 When a new order-9 page has to be allocated, there is no way to tell
> the allocator to allocate it from an already-split PUD range, to avoid
> splitting yet another PUD mapping;

This would be a good improvement.

> 3 As Mike and others have mentioned, there are other users that can
> also cause direct map splits.

Thanks,
Song
