On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
>
> On 1/9/24 19:51, Barry Song wrote:
> > On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> ...
> >> Hi Ryan,
> >>
> >> One thing that immediately came up during some recent testing of mTHP
> >> on arm64: the pid requirement is sometimes a little awkward. I'm running
> >> tests on a machine at a time for now, inside various containers and
> >> such, and it would be nice if there were an easy way to get some numbers
> >> for the mTHPs across the whole machine.
> >>
> >> I'm not sure if that changes anything about thpmaps here. Probably
> >> this is fine as-is. But I wanted to give some initial reactions from
> >> just some quick runs: the global state would be convenient.
> >
> > +1. but this seems to be impossible by scanning pagemap?
> > so may we add this statistics information in kernel just like
> > /proc/meminfo or a separate /proc/mthp_info?
> >
>
> Yes. From my perspective, it looks like the global stats are more useful
> initially, and the more detailed per-pid or per-cgroup stats are the
> next level of investigation. So feels odd to start with the more
> detailed stats.

Probably that's because it can be done without modifying the kernel.

The detailed per-pid or per-cgroup stats are still quite useful in my
case, where we set mTHP enabled/disabled and allowed sizes according to
VMA types, e.g. libc_malloc, Java heaps etc. Different VMA types can
have different anon_names, so I can use the detailed info to find out
whether specific VMAs have gotten mTHP properly and how many they have
gotten.

> However, Ryan did clearly say, above, "In future we may wish to
> introduce stats directly into the kernel (e.g. smaps or similar)". And
> earlier he ran into some pushback on trying to set up /proc or /sys
> values because this is still such an early feature.
>
> I wonder if we could put the global stats in debugfs for now?
> That's specifically supposed to be a "we promise *not* to keep this
> ABI stable" location.

+1.

>
> thanks,
> --
> John Hubbard
> NVIDIA

Thanks
Barry
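[For readers following along: the per-pid approach discussed above works by
scanning /proc/<pid>/pagemap, which is also why it cannot give machine-wide
numbers without walking every process. The sketch below is not the actual
thpmaps code; it only illustrates the pagemap mechanism that tooling like
this builds on. The helper name page_present is made up for illustration.]

import ctypes
import mmap
import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def page_present(pid, vaddr):
    """Return True if the page backing vaddr is resident in RAM.

    Each pagemap entry is a little-endian u64 indexed by virtual page
    number; bit 63 is the page-present flag (see
    Documentation/admin-guide/mm/pagemap.rst). Unprivileged readers see
    the flags but get the PFN field zeroed.
    """
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)
        (entry,) = struct.unpack("<Q", f.read(8))
    return bool(entry & (1 << 63))

# Demo: fault in the first page of an anonymous mapping, leave the rest.
buf = mmap.mmap(-1, 4 * PAGE_SIZE)
buf[0] = 0x41  # write fault populates page 0
base = ctypes.addressof(ctypes.c_char.from_buffer(buf))
print(page_present("self", base))                  # page 0: populated
print(page_present("self", base + 3 * PAGE_SIZE))  # page 3: likely still absent

[Inferring mTHP (contiguous large-folio) coverage from this requires the
PFNs of adjacent entries, which is why thpmaps needs privileges and a
target pid; a kernel-side counter avoids both constraints.]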