On Tue, Mar 26, 2024 at 4:25 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Tue, Mar 26, 2024 at 04:01:03PM +1300, Barry Song wrote:
> > Profiling a system blindly with mTHP has become challenging due
> > to the lack of visibility into its operations. While displaying
> > additional statistics such as partial map/unmap actions may
> > spark debate, presenting the success rate of mTHP allocations
> > appears to be a straightforward and pressing need.
>
> Ummm ... no?  Not like this anyway.  It has the bad assumption that
> "mTHP" only comes in one size.

I had initially considered per-size allocation and fallback counters
before sending the RFC. However, in order to prompt discussion and
exploration into profiling possibilities, I opted to send the simplest
code instead.

We could consider two options for displaying per-size statistics.

1. A single file could be used to display data for all sizes.

   1024KiB fault allocation:
   1024KiB fault fallback:
   512KiB fault allocation:
   512KiB fault fallback:
   ....
   64KiB fault allocation:
   64KiB fault fallback:

2. A separate file for each size.

   For example,
   /sys/kernel/debug/transparent_hugepage/hugepages-1024kB/vmstat
   /sys/kernel/debug/transparent_hugepage/hugepages-512kB/vmstat
   ...
   /sys/kernel/debug/transparent_hugepage/hugepages-64kB/vmstat

While the latter option may seem more appealing, it presents a
challenge in situations where a 512kB allocation may fall back to
256kB, yet a separate 256kB allocation succeeds. Demonstrating that
the successful 256kB allocation is actually a fallback from the 512kB
allocation can be complex, especially if we begin to support per-VMA
hints for mTHP sizes.

Thanks
Barry
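P.S. Purely to illustrate the per-size idea, here is a rough sketch,
not the RFC code itself: the names mthp_stats/count_mthp_stat are
placeholders I'm making up here, and I'm assuming HPAGE_PMD_ORDER as
the largest order we'd ever count.

#include <linux/atomic.h>
#include <linux/huge_mm.h>

/* hypothetical per-order counters, one row per folio order */
enum mthp_stat_item {
	MTHP_STAT_ANON_FAULT_ALLOC,	/* fault served by an mTHP of this order */
	MTHP_STAT_ANON_FAULT_FALLBACK,	/* allocation at this order fell back */
	__MTHP_NR_STAT,
};

static atomic_long_t mthp_stats[HPAGE_PMD_ORDER + 1][__MTHP_NR_STAT];

static inline void count_mthp_stat(int order, enum mthp_stat_item item)
{
	if (order <= 0 || order > HPAGE_PMD_ORDER)
		return;
	atomic_long_inc(&mthp_stats[order][item]);
}

The anonymous fault path could then bump the counter for the order it
actually attempted, and either option above becomes a matter of how we
print the rows: option 1 dumps every order into one file, option 2
prints only the row matching each hugepages-<size>kB directory.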