Re: [RFC PATCH] mm: show mthp_fault_alloc and mthp_fault_fallback of multi-size THPs

On 26.03.24 23:19, Barry Song wrote:
On Tue, Mar 26, 2024 at 4:40 PM Barry Song <21cnbao@xxxxxxxxx> wrote:

On Tue, Mar 26, 2024 at 4:25 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:

On Tue, Mar 26, 2024 at 04:01:03PM +1300, Barry Song wrote:
Profiling a system with mTHP has become challenging due to the
lack of visibility into its operations. While displaying additional
statistics such as partial map/unmap actions may spark debate,
presenting the success rate of mTHP allocations appears to be a
straightforward and pressing need.

Ummm ... no?  Not like this anyway.  It has the bad assumption that
"mTHP" only comes in one size.


I had initially considered per-size allocation and fallback counters
before sending the RFC. However, to prompt discussion and exploration
of the profiling possibilities, I opted to send the simplest code instead.

We could consider two options for displaying per-size statistics.

1. A single file could be used to display data for all sizes.
1024KiB fault allocation:
1024KiB fault fallback:
512KiB fault allocation:
512KiB fault fallback:
....
64KiB fault allocation:
64KiB fault fallback:

2. A separate file for each size
For example,

/sys/kernel/debug/transparent_hugepage/hugepages-1024kB/vmstat
/sys/kernel/debug/transparent_hugepage/hugepages-512kB/vmstat
...
/sys/kernel/debug/transparent_hugepage/hugepages-64kB/vmstat
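
For concreteness, a minimal sketch of the kind of per-order counters
that could back either layout. All identifiers here are hypothetical
and not taken from the posted patch:

#include <linux/atomic.h>
#include <linux/huge_mm.h>

/* One counter pair per mTHP order; with 4KiB base pages, order 4
 * is 64KiB and order 8 is 1024KiB. */
enum mthp_stat_item {
        MTHP_STAT_ANON_FAULT_ALLOC,
        MTHP_STAT_ANON_FAULT_FALLBACK,
        __MTHP_STAT_NR,
};

static atomic_long_t mthp_stats[HPAGE_PMD_ORDER + 1][__MTHP_STAT_NR];

static inline void count_mthp_stat(int order, enum mthp_stat_item item)
{
        if (order < 0 || order > HPAGE_PMD_ORDER)
                return;
        atomic_long_inc(&mthp_stats[order][item]);
}

/*
 * The anonymous fault path would then bump one counter or the other,
 * roughly:
 *
 *      folio = vma_alloc_folio(gfp, order, vma, addr, true);
 *      count_mthp_stat(order, folio ? MTHP_STAT_ANON_FAULT_ALLOC
 *                                   : MTHP_STAT_ANON_FAULT_FALLBACK);
 */

Whichever layout wins, the same counters can feed it; only the
presentation differs.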


Hi Ryan, David, Willy, Yu,

Hi!


I'm collecting feedback on whether you'd prefer access to something
like /sys/kernel/debug/transparent_hugepage/hugepages-<size>/stat,
to help determine the direction to take for this patch.

I suggested in the past that we might want to place statistics into sysfs. The idea was to place them in our new hierarchy:

/sys/kernel/mm/transparent_hugepage/hugepages-1024kB/...

following the "one value per file" sysfs design principle.

We could have a new "stats" directory in there that contains one file per statistic we care about.

Of course, we could also place that initially into debugfs in a similar fashion, and move it over once the interface is considered good and stable.

My 2 cents would be to avoid a "single file".
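
As a rough illustration of the "one value per file" idea, reusing the
hypothetical mthp_stats array from the sketch above and assuming this
lives next to the struct thpsize / to_thpsize() helpers backing the
hugepages-<size>kB kobjects:

#include <linux/kobject.h>
#include <linux/sysfs.h>

static ssize_t anon_fault_alloc_show(struct kobject *kobj,
                                     struct kobj_attribute *attr, char *buf)
{
        int order = to_thpsize(kobj)->order;

        return sysfs_emit(buf, "%ld\n", atomic_long_read(
                        &mthp_stats[order][MTHP_STAT_ANON_FAULT_ALLOC]));
}
static struct kobj_attribute anon_fault_alloc_attr =
        __ATTR_RO(anon_fault_alloc);

static struct attribute *stats_attrs[] = {
        &anon_fault_alloc_attr.attr,
        /* anon_fault_fallback would be defined analogously */
        NULL,
};

static const struct attribute_group stats_attr_group = {
        .name = "stats",        /* creates the "stats" subdirectory */
        .attrs = stats_attrs,
};

Registering the group with sysfs_create_group() on each per-size
kobject would then yield e.g.
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_fault_alloc,
one value per file.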


This is important to us because we're keen on understanding how often
folio allocations fail on a system with limited memory, such as a phone.

Presently, I've observed a success rate of under 8% for 64KiB
allocations. However, after integrating Yu's TAO optimization [1] and
establishing an 800MiB nomerge zone on a phone with 8GiB of memory,
the success rate improves substantially, reaching approximately 40%.
I'm still fine-tuning the optimal size for the zone.
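
(Assuming "success rate" here means anon_fault_alloc /
(anon_fault_alloc + anon_fault_fallback) over counters like those
sketched above: an 8% rate is roughly 11-12 fallbacks for every
successful 64KiB allocation, while 40% is fewer than two.)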

Just as a side note:

I haven't had the capacity to comment on the "new zones" proposal in depth so far (I'm hoping / assuming there will be discussions at LSF/MM), but I'm hoping we can avoid it for now and instead improve our pageblock infrastructure, as Johannes is trying to do, to achieve similar gains.

I suspect "some things we can do with new zones we can also do with pageblocks inside a zone". For example, there were discussions in the past to have "sticky movable" pageblocks: pageblocks that may only contain movable data. One could do the same with "pageblocks may not contain allocations < order X" etc. So one could similarly optimize the memmap to some degree for these pageblocks.

IMHO we should first try to make THP allocations <= pageblock size more reliable without using new zones, and I'm happy that Johannes et al. are working in that direction. But that's a longer discussion to be had at LSF/MM.

--
Cheers,

David / dhildenb




