Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info

On 10/01/2024 09:09, Barry Song wrote:
> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>>
>> On 10/01/2024 08:02, Barry Song wrote:
>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
>>>>
>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
>>>> ...
>>>>>> Hi Ryan,
>>>>>>
>>>>>> One thing that immediately came up during some recent testing of mTHP
>>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
>>>>>> tests on one machine at a time for now, inside various containers and
>>>>>> such, and it would be nice if there were an easy way to get some numbers
>>>>>> for the mTHPs across the whole machine.
>>
>> Just to confirm, you're expecting these "global" stats to be truly global and
>> not per-container? (asking because you explicitly mentioned being in a container).
>> If you want per-container, then you can probably just create the container in a
>> cgroup?
>>
>>>>>>
>>>>>> I'm not sure if that changes anything about thpmaps here. Probably
>>>>>> this is fine as-is. But I wanted to give some initial reactions from
>>>>>> just some quick runs: the global state would be convenient.
>>
>> Thanks for taking this for a spin! Appreciate the feedback.
>>
>>>>>
>>>>> +1. But this seems impossible to get by scanning pagemap, so could we add
>>>>> this statistics information in the kernel, just like /proc/meminfo or a
>>>>> separate /proc/mthp_info?
>>>>>
>>>>
>>>> Yes. From my perspective, it looks like the global stats are more useful
>>>> initially, and the more detailed per-pid or per-cgroup stats are the
>>>> next level of investigation. So it feels odd to start with the more
>>>> detailed stats.
>>>>
>>>
>>> Probably because this can be done without modifying the kernel.
>>
>> Yes indeed, as John said in an earlier thread, my previous attempts to add stats
>> directly in the kernel got pushback; DavidH was concerned that we don't really
>> know exactly how to account mTHPs yet
>> (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up adding
>> the wrong ABI and having to maintain it forever. There has also been some
>> pushback regarding adding more values to multi-value files in sysfs, so David
>> was suggesting coming up with a whole new scheme at some point (I know
>> /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups
>> do live in sysfs).
>>
>> Anyway, this script was my attempt to 1) provide a short term solution to the
>> "we need some stats" request and 2) provide a context in which to explore what
>> the right stats are - this script can evolve without the ABI problem.
>>
>>> The detailed per-pid or per-cgroup info is still quite useful in my case, in
>>> which we set mTHP enabled/disabled and allowed sizes according to VMA types,
>>> e.g. libc_malloc, Java heaps etc.
>>>
>>> Different vma types can have different anon_name. So I can use the detailed
>>> info to find out if specific VMAs have gotten mTHP properly and how many
>>> they have gotten.
>>>
>>>> However, Ryan did clearly say, above, "In future we may wish to
>>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
>>>> earlier he ran into some pushback on trying to set up /proc or /sys
>>>> values because this is still such an early feature.
>>>>
>>>> I wonder if we could put the global stats in debugfs for now? That's
>>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
>>>> location.
>>
>> Now that I think about it, I wonder if we can add a --global mode to the script
>> (or just infer global when neither --pid nor --cgroup are provided). I think I
>> should be able to determine all the physical memory ranges from /proc/iomem,
>> then grab all the info we need from /proc/kpageflags. We should then be able to
>> process it all in much the same way as for --pid/--cgroup and provide the same
>> stats, but it will apply globally. What do you think?
> 
> For debug purposes, it should be good. But imagine there is a health monitor
> which needs to sample the stats of large folios online and periodically; this
> might be too expensive.

Yes, understood - the long-term aim needs to be to get stats into the kernel.
This is intended as a step to help make that happen.
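
As a rough illustration of the /proc/iomem + /proc/kpageflags approach floated
above (not the actual thpmaps implementation): the sketch below assumes 4K base
pages, requires root, and counts compound-head pages carrying the THP flag.
Whether KPF_THP is set for every mTHP size depends on the kernel version, so
treat the numbers as approximate.

#!/usr/bin/env python3
# Sketch only: count THP head pages system-wide by walking the
# "System RAM" ranges in /proc/iomem and reading the per-PFN flag
# words from /proc/kpageflags. Requires root.
import struct

PAGE_SIZE = 4096             # assumption: 4K base pages
KPF_COMPOUND_HEAD = 1 << 15  # bit numbers per Documentation/admin-guide/mm/pagemap.rst
KPF_THP = 1 << 22
CHUNK = 1 << 16              # PFNs per kpageflags read, to bound memory use

def system_ram_ranges():
    """Yield (start_pfn, end_pfn) for each top-level System RAM range."""
    with open('/proc/iomem') as f:
        for line in f:
            if line.startswith(' ') or 'System RAM' not in line:
                continue     # skip child resources and non-RAM ranges
            span = line.split(':')[0].strip()
            start, end = (int(x, 16) for x in span.split('-'))
            yield start // PAGE_SIZE, (end + 1) // PAGE_SIZE

def count_thp_heads():
    """Count pages that are both a compound head and flagged THP."""
    heads = 0
    with open('/proc/kpageflags', 'rb') as f:
        for pfn, end_pfn in system_ram_ranges():
            while pfn < end_pfn:
                n = min(CHUNK, end_pfn - pfn)
                f.seek(pfn * 8)              # one u64 of flags per PFN
                buf = f.read(n * 8)
                for (flags,) in struct.iter_unpack('<Q', buf):
                    if flags & KPF_THP and flags & KPF_COMPOUND_HEAD:
                        heads += 1
                pfn += n
    return heads

if __name__ == '__main__':
    print(f'THP head pages: {count_thp_heads()}')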

> 
>>
>> If we can possibly avoid sysfs/debugfs I would prefer to keep it all in a script
>> for now.
>>
>>>
>>> +1.
>>>
>>>>
>>>>
>>>> thanks,
>>>> --
>>>> John Hubbard
>>>> NVIDIA
>>>>
>>>
> 
> Thanks
> Barry
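
As an aside on the per-VMA use case Barry describes above: with
CONFIG_ANON_VMA_NAME, names set via prctl(PR_SET_VMA_ANON_NAME) appear in
/proc/<pid>/maps as "[anon:<name>]", so a per-pid scan can be limited to just
those ranges. A minimal, illustrative sketch (the tag name passed on the
command line is an example, and this is not part of thpmaps itself):

#!/usr/bin/env python3
# Illustrative sketch: list the VMAs in a process whose anon name
# matches a given tag, e.g. one set with prctl(PR_SET_VMA_ANON_NAME).
import sys

def named_vmas(pid, name):
    """Yield (start, end) for VMAs tagged '[anon:<name>]' in the pid's maps."""
    tag = f'[anon:{name}]'
    with open(f'/proc/{pid}/maps') as f:
        for line in f:
            if line.rstrip().endswith(tag):
                start, end = (int(x, 16) for x in line.split()[0].split('-'))
                yield start, end

if __name__ == '__main__':
    pid, name = sys.argv[1], sys.argv[2]  # e.g. 1234 libc_malloc
    for start, end in named_vmas(pid, name):
        print(f'{start:016x}-{end:016x}  {(end - start) >> 10} KiB')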




