On 19/03/2025 20:47, Barry Song wrote:
> On Thu, Mar 20, 2025 at 4:38 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>>
>> Hi All,
>>
>> I know this is very last minute, but I was hoping that it might be
>> possible to squeeze in a session to discuss the following?
>>
>> Summary/Background:
>>
>> On arm64, physically contiguous and naturally aligned regions can take
>> advantage of contpte mappings (e.g. 64 KB) to reduce iTLB pressure.
>> However, for file regions containing text, current readahead behaviour
>> often yields small, misaligned folios, preventing this optimization.
>> This proposal introduces a special-case path for executable mappings,
>> performing synchronous reads of an architecture-chosen size into large
>> folios (64 KB on arm64). Early performance tests on real-world workloads
>> (e.g. nginx, redis, kernel compilation) show ~2-9% gains.
>>
>> I've previously posted attempts to enable this performance improvement
>> ([1], [2]), but there were objections and the conversation fizzled out.
>> Now that I have more compelling performance data, I'm hoping there is
>> stronger justification and we can find a path forwards.
>>
>> What I'd Like to Cover:
>>
>>  - Describe how text memory should ideally be mapped and why it
>>    benefits performance.
>>
>>  - Brief review of performance data.
>>
>>  - Discuss options for the best way to encourage text into large folios:
>>      - Let the architecture request a preferred size
>>      - Extend VMA attributes to include a preferred THP size hint
>
> We might need this for a couple of other cases.
>
> 1. The native heap (for example, jemalloc) can configure the base
> "granularity" and then use MADV_DONTNEED/FREE at that granularity to
> manage memory. Currently, the default granularity is PAGE_SIZE, which
> can lead to excessive folio splitting. For instance, if we set
> jemalloc's granularity to 16KB while sysfs supports 16KB, 32KB, 64KB,
> etc., splitting can still occur. Therefore, in some cases, I believe
> the kernel should be aware of how userspace is managing memory.
>
> 2. Java heap GC compaction - userfaultfd_move() things. I am
> considering adding support for batched PTE/folio moves in
> userfaultfd_move(). If sysfs enables 16KB, 32KB, 64KB, 128KB, etc.,
> but the userspace Java heap moves memory at a 16KB granularity, it
> could lead to excessive folio splitting.

Would these heaps ever use a 64K granule, or is that too big? If they can
use 64K, then one simple solution would be to only enable mTHP sizes up to
64K (which is the magic size for arm64).

Alternatively, they could use MADV_NOHUGEPAGE today and be guaranteed that
the memory would remain mapped as small folios. But I see the potential
problem if you want to benefit from HPA with a 16K granule there but still
enable 64K globally.

We have briefly discussed the idea of supporting MADV_HUGEPAGE via
process_madvise() in the past; that has an extra param that could encode
the size hint(s).
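To make that concrete: process_madvise() is real, but the kernel currently
insists on flags == 0, so the PMADV_SIZE_HINT() encoding below is purely
invented, just to illustrate how a size hint might ride along with the
advice:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hypothetical: pack the preferred folio order into the flags argument. */
#define PMADV_SIZE_HINT(order)	((unsigned int)(order) << 8)

static long hint_thp_order(int pidfd, void *addr, size_t len, int order)
{
	struct iovec iov = {
		.iov_base = addr,
		.iov_len  = len,
	};

	/* glibc provides no wrapper for process_madvise(); use syscall(2). */
	return syscall(__NR_process_madvise, pidfd, &iov, 1,
		       MADV_HUGEPAGE, PMADV_SIZE_HINT(order));
}

An allocator could then declare its management granularity once at startup
(e.g. order 2 for a 16K granule with 4K pages, using a pidfd for itself
from pidfd_open()), and the kernel would know not to build folios larger
than the unit userspace will later MADV_DONTNEED or move.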
>
> For exec, it seems we need a userspace-transparent approach. Asking each
> application to modify its code to madvise the kernel on its preferred
> exec folio size seems cumbersome.

I would much prefer a transparent approach. If we did take the approach of
using a per-VMA size hint, I was thinking that could be handled by the
dynamic linker. Then it's only one place to update.

>
> I mean, we could whitelist all execs by default unless an application
> explicitly requests to disable it?

I guess the explicit disable would be MADV_NOHUGEPAGE. But I don't believe
the pagecache honours this right now; presumably because the memory is
shared. What would you do if one process disabled and another didn't?

Thanks,
Ryan

>
>>      - Provide a sysfs knob
>>      - Plug into the "mapping min folio order" infrastructure
>>      - Other approaches?
>>
>> [1] https://lore.kernel.org/all/20240215154059.2863126-1-ryan.roberts@xxxxxxx/
>> [2] https://lore.kernel.org/all/20240717071257.4141363-1-ryan.roberts@xxxxxxx/
>>
>> Thanks,
>> Ryan
>
> Thanks
> Barry