Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info


 



On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> On 10/01/2024 09:09, Barry Song wrote:
> > On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>
> >> On 10/01/2024 08:02, Barry Song wrote:
> >>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> >>>>
> >>>> On 1/9/24 19:51, Barry Song wrote:
> >>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> >>>> ...
> >>>>>> Hi Ryan,
> >>>>>>
> >>>>>> One thing that immediately came up during some recent testing of mTHP
> >>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
> >>>>>> tests on one machine at a time for now, inside various containers and
> >>>>>> such, and it would be nice if there were an easy way to get some numbers
> >>>>>> for the mTHPs across the whole machine.
> >>
> >> Just to confirm, you're expecting these "global" stats to be truly global and not
> >> per-container? (asking because you explicitly mentioned being in a container).
> >> If you want per-container, then you can probably just create the container in a
> >> cgroup?
> >>
> >>>>>>
> >>>>>> I'm not sure if that changes anything about thpmaps here. Probably
> >>>>>> this is fine as-is. But I wanted to give some initial reactions from
> >>>>>> just some quick runs: the global stats would be convenient.
> >>
> >> Thanks for taking this for a spin! Appreciate the feedback.
> >>
> >>>>>
> >>>>> +1. But this seems to be impossible by scanning pagemap, so could we
> >>>>> add these statistics in the kernel, just like /proc/meminfo, or in a
> >>>>> separate /proc/mthp_info?
> >>>>>
> >>>>
> >>>> Yes. From my perspective, it looks like the global stats are more useful
> >>>> initially, and the more detailed per-pid or per-cgroup stats are the
> >>>> next level of investigation. So it feels odd to start with the more
> >>>> detailed stats.
> >>>>
> >>>
> >>> Probably because this can be done without modifying the kernel.
> >>
> >> Yes indeed, as John said in an earlier thread, my previous attempts to add stats
> >> directly in the kernel got pushback; DavidH was concerned that we don't really
> >> know exactly how to account mTHPs yet
> >> (whole/partial/aligned/unaligned/per-size/etc), so he didn't want to end up adding
> >> the wrong ABI and having to maintain it forever. There has also been some
> >> pushback regarding adding more values to multi-value files in sysfs, so David
> >> was suggesting coming up with a whole new scheme at some point (I know
> >> /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups
> >> do live in sysfs).
> >>
> >> Anyway, this script was my attempt to 1) provide a short term solution to the
> >> "we need some stats" request and 2) provide a context in which to explore what
> >> the right stats are - this script can evolve without the ABI problem.
> >>
> >>> The detailed per-pid or per-cgroup info is still quite useful in my case, in
> >>> which we set mTHP enabled/disabled and allowed sizes according to VMA types,
> >>> e.g. libc_malloc, java heaps etc.
> >>>
> >>> Different VMA types can have different anon_names, so I can use the detailed
> >>> info to find out whether specific VMAs have gotten mTHP properly and how many
> >>> they have gotten.
> >>>
> >>>> However, Ryan did clearly say, above, "In future we may wish to
> >>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
> >>>> earlier he ran into some pushback on trying to set up /proc or /sys
> >>>> values because this is still such an early feature.
> >>>>
> >>>> I wonder if we could put the global stats in debugfs for now? That's
> >>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
> >>>> location.
> >>
> >> Now that I think about it, I wonder if we can add a --global mode to the script
> >> (or just infer global when neither --pid nor --cgroup is provided). I think I
> >> should be able to determine all the physical memory ranges from /proc/iomem,
> >> then grab all the info we need from /proc/kpageflags. We should then be able to
> >> process it all in much the same way as for --pid/--cgroup and provide the same
> >> stats, but it will apply globally. What do you think?
>
> Having now thought about this for a few mins (in the shower, if anyone wants the
> complete picture :) ), this won't quite work. This approach doesn't have the
> virtual mapping information so the best it can do is tell us "how many of each
> size of THP are allocated?" - it doesn't tell us anything about whether they are
> fully or partially mapped or what their alignment is (all necessary if we want
> to know if they are contpte-mapped). So I don't think this approach is going to
> be particularly useful.
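
(For reference, a rough sketch of what such a global scan might look like; this
is not part of thpmaps, just illustrative. It assumes the flag bit numbers from
Documentation/admin-guide/mm/pagemap.rst, needs to run as root, and, as noted
above, can only count allocated THP pages, not how they are mapped.)

#!/usr/bin/env python3
# Rough sketch (not part of thpmaps): count THP-flagged pages system-wide
# by walking the "System RAM" ranges in /proc/iomem and reading
# /proc/kpageflags. Flag bit numbers per
# Documentation/admin-guide/mm/pagemap.rst; requires root.
import struct

PAGE_SIZE = 4096
KPF_COMPOUND_HEAD = 15   # head page of a compound page (folio)
KPF_THP = 22             # page belongs to a transparent huge page
CHUNK = 1 << 16          # pfns to read per iteration

def system_ram_ranges():
    with open('/proc/iomem') as f:
        for line in f:
            if line.strip().endswith('System RAM'):
                span = line.split(':')[0].strip()
                start, end = (int(x, 16) for x in span.split('-'))
                yield start, end

def count_thp():
    pages = heads = 0
    with open('/proc/kpageflags', 'rb') as f:
        for start, end in system_ram_ranges():
            remaining = (end + 1 - start) // PAGE_SIZE
            f.seek((start // PAGE_SIZE) * 8)
            while remaining > 0:
                data = f.read(min(remaining, CHUNK) * 8)
                if not data:
                    break
                for (flags,) in struct.iter_unpack('Q', data):
                    if flags & (1 << KPF_THP):
                        pages += 1
                        if flags & (1 << KPF_COMPOUND_HEAD):
                            heads += 1
                remaining -= len(data) // 8
    return pages, heads

if __name__ == '__main__':
    pages, heads = count_thp()
    print(f'THP-flagged pages: {pages}, THP folios (heads): {heads}')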
>
> And this is also the big problem if we want to gather stats inside the kernel;
> if we want something equivalent to /proc/meminfo's
> AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the
> allocation of the THP but also whether it is mapped. That's easy for
> PMD-mappings, because there is only one entry to consider - when you set it, you
> increment the number of PMD-mapped THPs, when you clear it, you decrement. But
> for PTE-mappings it's harder; you know the size when you are mapping so it's easy
> to increment, but you can do a partial unmap, so you would need to scan the PTEs
> to figure out if we are unmapping the first page of a previously
> fully-PTE-mapped THP, which is expensive. We would need a cheap mechanism to
> determine "is this folio fully and contiguously mapped in at least one process?".

As with OPPO's approach, which I shared with you before, we maintain two mapcounts:
1. an entire mapcount
2. the subpages' mapcounts
3. if both 1 and 2 exist, the folio is double-mapped.

So this isn't a problem for us: every time we do a partial unmap, we do an
explicit cont_pte split, which decreases the entire mapcount and increases the
subpages' mapcounts.

But its downside is that we expose this info to mm-core.
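
Roughly, the accounting works like the toy model below (illustrative only, not
the actual kernel implementation; the names are made up):

# Toy model of the two-mapcount idea above (illustrative only, not the
# actual kernel code). A folio keeps an "entire" mapcount for whole,
# aligned cont_pte mappings and per-subpage mapcounts for everything else.
class FolioMapcounts:
    def __init__(self, nr_pages):
        self.nr_pages = nr_pages
        self.entire = 0                      # whole-folio (cont_pte) maps
        self.subpage = [0] * nr_pages        # per-subpage maps

    def map_entire(self):
        self.entire += 1

    def partial_unmap(self, first, count):
        # A partial unmap forces an explicit cont_pte split: drop one
        # entire mapcount and account the still-mapped pages individually.
        self.entire -= 1
        for i in range(self.nr_pages):
            self.subpage[i] += 1
        for i in range(first, first + count):
            self.subpage[i] -= 1

    def state(self):
        sub = any(self.subpage)
        if self.entire and sub:
            return 'double-mapped'
        if self.entire:
            return 'fully mapped (cont_pte)'
        return 'partially mapped' if sub else 'unmapped'

# e.g. two mappers; one of them partially unmaps the last 16 pages:
folio = FolioMapcounts(64)
folio.map_entire()
folio.map_entire()
folio.partial_unmap(48, 16)
print(folio.state())    # 'double-mapped'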

>
> So depending on what global stats you actually need, the route to getting them
> cheaply may not be easy. (My previous attempt to add stats cheated and didn't
> try to track "fully mapped" vs "partially mapped" - instead it just counted the
> number of pages belonging to a THP (of any size) that were mapped.)
>
> If you need the global mapping state, then the short-term way to do this would
> be to provide the root cgroup, then have the script recurse through all child
> cgroups; that would pick up all the processes and iterate through them:
>
>   $ thpmaps --cgroup /sys/fs/cgroup --summary ...
>
> This won't quite work with the current version because it doesn't yet recurse
> through the cgroup children, but that would be easy to add.
>
>
> >
> > For debug purposes, it should be good. But imagine there is a health monitor
> > which needs to sample the stats of large folios online and periodically; this
> > might be too expensive.
> >
> >>
> >> If we can possibly avoid sysfs/debugfs I would prefer to keep it all in a script
> >> for now.
> >>
> >>>
> >>> +1.
> >>>
> >>>>
> >>>>
> >>>> thanks,
> >>>> --
> >>>> John Hubbard
> >>>> NVIDIA
> >>>>
> >>>
> >

Thanks
Barry




