Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info

On Fri, Jan 12, 2024 at 2:18 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 11.01.24 13:25, Ryan Roberts wrote:
> > On 10/01/2024 22:14, Barry Song wrote:
> >> On Wed, Jan 10, 2024 at 7:59 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>>
> >>> On 10/01/2024 11:38, Barry Song wrote:
> >>>> On Wed, Jan 10, 2024 at 7:21 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>>>>
> >>>>> On 10/01/2024 11:00, David Hildenbrand wrote:
> >>>>>> On 10.01.24 11:55, Ryan Roberts wrote:
> >>>>>>> On 10/01/2024 10:42, David Hildenbrand wrote:
> >>>>>>>> On 10.01.24 11:38, Ryan Roberts wrote:
> >>>>>>>>> On 10/01/2024 10:30, Barry Song wrote:
> >>>>>>>>>> On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> On 10/01/2024 09:09, Barry Song wrote:
> >>>>>>>>>>>> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On 10/01/2024 08:02, Barry Song wrote:
> >>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On 1/9/24 19:51, Barry Song wrote:
> >>>>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@xxxxxxxxxx>
> >>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>> ...
> >>>>>>>>>>>>>>>>> Hi Ryan,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> One thing that immediately came up during some recent testing of mTHP
> >>>>>>>>>>>>>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
> >>>>>>>>>>>>>>>>> tests on a machine at a time for now, inside various containers and
> >>>>>>>>>>>>>>>>> such, and it would be nice if there were an easy way to get some numbers
> >>>>>>>>>>>>>>>>> for the mTHPs across the whole machine.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Just to confirm, you're expecting these "global" stats to be truly global
> >>>>>>>>>>>>> and not per-container? (asking because you explicitly mentioned being in a
> >>>>>>>>>>>>> container). If you want per-container, then you can probably just create
> >>>>>>>>>>>>> the container in a cgroup?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I'm not sure if that changes anything about thpmaps here. Probably
> >>>>>>>>>>>>>>>>> this is fine as-is. But I wanted to give some initial reactions from
> >>>>>>>>>>>>>>>>> just some quick runs: the global state would be convenient.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Thanks for taking this for a spin! Appreciate the feedback.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> +1. But this seems to be impossible by scanning pagemap?
> >>>>>>>>>>>>>>>> So may we add this statistics information in the kernel, just like
> >>>>>>>>>>>>>>>> /proc/meminfo, or in a separate /proc/mthp_info?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Yes. From my perspective, it looks like the global stats are more useful
> >>>>>>>>>>>>>>> initially, and the more detailed per-pid or per-cgroup stats are the
> >>>>>>>>>>>>>>> next level of investigation. So it feels odd to start with the more
> >>>>>>>>>>>>>>> detailed stats.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Probably because this can be done without modifying the kernel.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Yes indeed, as John said in an earlier thread, my previous attempts to add
> >>>>>>>>>>>>> stats directly in the kernel got pushback; DavidH was concerned that we
> >>>>>>>>>>>>> don't really know exactly how to account mTHPs yet
> >>>>>>>>>>>>> (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up
> >>>>>>>>>>>>> adding the wrong ABI and having to maintain it forever. There has also been
> >>>>>>>>>>>>> some pushback regarding adding more values to multi-value files in sysfs,
> >>>>>>>>>>>>> so David was suggesting coming up with a whole new scheme at some point
> >>>>>>>>>>>>> (I know /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes
> >>>>>>>>>>>>> and cgroups do live in sysfs).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Anyway, this script was my attempt to 1) provide a short term solution to
> >>>>>>>>>>>>> the "we need some stats" request and 2) provide a context in which to
> >>>>>>>>>>>>> explore what the right stats are - this script can evolve without the ABI
> >>>>>>>>>>>>> problem.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> The detailed per-pid or per-cgroup info is still quite useful in my case,
> >>>>>>>>>>>>>> where we set mTHP enabled/disabled and allowed sizes according to vma
> >>>>>>>>>>>>>> types, e.g. libc_malloc, java heaps etc.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Different vma types can have different anon_name. So I can use the
> >>>>>>>>>>>>>> detailed info to find out whether specific VMAs have gotten mTHP properly
> >>>>>>>>>>>>>> and how many they have gotten.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> However, Ryan did clearly say, above, "In future we may wish to
> >>>>>>>>>>>>>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
> >>>>>>>>>>>>>>> earlier he ran into some pushback on trying to set up /proc or /sys
> >>>>>>>>>>>>>>> values because this is still such an early feature.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I wonder if we could put the global stats in debugfs for now? That's
> >>>>>>>>>>>>>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
> >>>>>>>>>>>>>>> location.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Now that I think about it, I wonder if we can add a --global mode to the
> >>>>>>>>>>>>> script (or just infer global when neither --pid nor --cgroup are provided).
> >>>>>>>>>>>>> I think I should be able to determine all the physical memory ranges from
> >>>>>>>>>>>>> /proc/iomem, then grab all the info we need from /proc/kpageflags. We
> >>>>>>>>>>>>> should then be able to process it all in much the same way as for
> >>>>>>>>>>>>> --pid/--cgroup and provide the same stats, but it will apply globally. What
> >>>>>>>>>>>>> do you think?
> >>>>>>>>>>>
> >>>>>>>>>>> Having now thought about this for a few mins (in the shower, if anyone wants
> >>>>>>>>>>> the complete picture :) ), this won't quite work. This approach doesn't have
> >>>>>>>>>>> the virtual mapping information, so the best it can do is tell us "how many
> >>>>>>>>>>> of each size of THP are allocated?" - it doesn't tell us anything about
> >>>>>>>>>>> whether they are fully or partially mapped or what their alignment is (all
> >>>>>>>>>>> necessary if we want to know if they are contpte-mapped). So I don't think
> >>>>>>>>>>> this approach is going to be particularly useful.
> >>>>>>>>>>>
> >>>>>>>>>>> And this is also the big problem if we want to gather stats inside the
> >>>>>>>>>>> kernel; if we want something equivalent to /proc/meminfo's
> >>>>>>>>>>> AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the
> >>>>>>>>>>> allocation of the THP but also whether it is mapped. That's easy for
> >>>>>>>>>>> PMD-mappings, because there is only one entry to consider - when you set it,
> >>>>>>>>>>> you increment the number of PMD-mapped THPs, and when you clear it, you
> >>>>>>>>>>> decrement. But for PTE-mappings it's harder; you know the size when you are
> >>>>>>>>>>> mapping so it's easy to increment, but you can do a partial unmap, so you
> >>>>>>>>>>> would need to scan the PTEs to figure out if we are unmapping the first page
> >>>>>>>>>>> of a previously fully-PTE-mapped THP, which is expensive. We would need a
> >>>>>>>>>>> cheap mechanism to determine "is this folio fully and contiguously mapped in
> >>>>>>>>>>> at least one process?".
> >>>>>>>>>>
> >>>>>>>>>> As in OPPO's approach, which I shared with you before, we maintain two
> >>>>>>>>>> mapcounts:
> >>>>>>>>>> 1. entire map
> >>>>>>>>>> 2. subpage's map
> >>>>>>>>>> 3. if 1 and 2 both exist, it is DoubleMapped.
> >>>>>>>>>>
> >>>>>>>>>> This isn't a problem for us, and every time we do a partial unmap, we have
> >>>>>>>>>> an explicit cont_pte split which decreases the entire mapcount and increases
> >>>>>>>>>> the subpage's mapcount.
> >>>>>>>>>>
> >>>>>>>>>> But its downside is that we expose this info to mm-core.
> >>>>>>>>>
> >>>>>>>>> OK, but I think we have a slightly more generic situation going on with the
> >>>>>>>>> upstream; if I've understood correctly, you are using the PTE_CONT bit in the
> >>>>>>>>> PTE to determine if it's fully mapped? That works for your case where you only
> >>>>>>>>> have 1 size of THP that you care about (contpte-size). But for the upstream, we
> >>>>>>>>> have multi-size THP so we can't use the PTE_CONT bit to determine if it's fully
> >>>>>>>>> mapped, because we can only use that bit if the THP is at least 64K and aligned,
> >>>>>>>>> and only on arm64. We would need a SW bit for this purpose, and the mm would
> >>>>>>>>> need to update that SW bit for every PTE on the full -> partial map
> >>>>>>>>> transition.
> >>>>>>>>
> >>>>>>>> Oh no. Let's not make everything more complicated for the purpose of some stats.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Indeed, I was intending to argue *against* doing it this way. Fundamentally, if
> >>>>>>> we want to know what's fully mapped and what's not, then I don't see any way
> >>>>>>> other than by scanning the page tables and we might as well do that in user
> >>>>>>> space with this script.
> >>>>>>>
> >>>>>>> Although, I expect you will shortly make a proposal that is simple to implement
> >>>>>>> and prove me wrong ;-)
> >>>>>>
> >>>>>> Unlikely :) As you said, once you have multiple folio sizes, it stops really
> >>>>>> making sense.
> >>>>>>
> >>>>>> Assume you have a 128 kiB pagecache folio, and half of that is mapped. You can
> >>>>>> set cont-pte bits on that half and all is fine. Or AMD can benefit from its
> >>>>>> optimizations without the cont-pte bit and everything is fine.
> >>>>>
> >>>>> Yes, but for debug and optimization, it's useful to know when THPs are
> >>>>> fully/partially mapped, when they are unaligned etc. Anyway, the script does
> >>>>> that for us, and I think we are tending towards agreement that there are
> >>>>> unlikely to be any cost benefits by moving it into the kernel.
> >>>>
> >>>> Frequent partial unmaps can defeat the whole purpose of using large folios.
> >>>> Just imagine a large folio being split soon after it is formed: we lose the
> >>>> performance gain and might get a regression instead.
> >>>
> >>> nit: just because a THP gets partially unmapped in a process doesn't mean it
> >>> gets split into order-0 pages. If the folio still has all its pages mapped at
> >>> least once then no further action is taken. If the page being unmapped was the
> >>> last mapping of that page, then the THP is put on the deferred split queue, so
> >>> that it can be split in future if needed.
> >>>>
> >>>> And this can be very frequent, for example, when a userspace heap manager
> >>>> releases memory page by page.
> >>>>
> >>>> In our real product deployment, we might not care about the second partial
> >>>> unmap, but we do care about the first partial unmap, as we can use it to
> >>>> know whether a split has ever happened on this large folio. A partially
> >>>> unmapped subpage is unlikely to be re-mapped back.
> >>>>
> >>>> So I guess the 1st unmap is probably enough, at least for my product. I
> >>>> mean, we care more about whether a partial unmap has ever happened on a
> >>>> large folio than about exactly how it is partially unmapped :-)
> >>>
> >>> I'm not sure what you are suggesting here? A global boolean that tells you if
> >>> any folio in the system has ever been partially unmapped? That will almost
> >>> certainly always be true, even for a very well tuned system.
> >>
> >> Not a global boolean, but a per-folio boolean. If userspace maps a region and
> >> does no management of its own, then we are fine, as partial map/unmap is
> >> unlikely; if userspace maps a region but manages it by itself, such as heap
> >> things, we might end up with lots of partial maps/unmaps, which can lead to 3
> >> problems:
> >> 1. potential memory footprint increase; for example, while userspace releases
> >> some pages in a folio, we might still keep the whole folio, as frequently
> >> splitting it into base pages and releasing the unmapped subpages might be too
> >> expensive.
> >> 2. if cont-pte is involved, frequent cont-pte dropping/TLB shootdowns might
> >> happen.
> >> 3. other maintenance overhead such as splitting large folios etc.
> >>
> >> We'd like to know how seriously partial mapping is happening, so we can either
> >> disable mTHP for this kind of VMA, or optimize userspace to do some alignment
> >> according to the size of large folios.
> >>
> >> On Android phones, we have examined lots of apps, and found that some apps
> >> might do things like
> >> 1. mprotect on some pages within a large folio
> >> 2. mlock on some pages within a large folio
> >> 3. madv_free on some pages within a large folio
> >> 4. madv_pageout on some pages within a large folio.
> >>
> >> It would be good if we had a per-folio boolean to know how seriously userspace
> >> is breaking large folios. For example, if more than 50% of the folios in a vma
> >> have this problem, we can find that out and take some action.
> >
> > The high level value of these stats seems clear - I agree we need to be able to
> > get these insights. I think the issues are more around the implementation
> > though. I'm struggling to understand exactly how we could implement a lot of
> > these things cheaply (either in the kernel or in user space).
> >
> > Let me try to work through what I think you are suggesting:
> >
> >   - every THP is initially fully mapped
>
> Not for pagecache folios.
>
> >   - when an operation causes a partial unmap, mark the folio as having at least
> >     one partial mapping
> >   - on transition from "no partial mappings" to "at least one partial mapping"
> >     increment a "anon-partial-<size>kB" (one for each supported folio size)
> >     counter by the folio size
> >   - on transition from "at least one partial mapping" to "fully unmapped
> >     everywhere" decrement the counter by the folio size
> >
> > I think the issue with this is that a folio that is fully mapped in a process
> > that gets forked, then is partially unmapped in 1 process, will be accounted as
> > partially mapped even after the process that partially unmapped it exits, even
> > though that folio is now fully mapped in all processes that map it. Is that a
> > problem? Perhaps not; I'm not sure.
>
> What I can offer with my total mapcount I am working on (+ entire/pmd
> mapcount, but let's put that aside):
>
> 1) total_mapcount is not a multiple of folio_nr_pages -> at least one process
> currently maps the folio partially
>
> 2) total_mapcount is less than folio_nr_pages -> surely partially mapped
>
> I think for most of anon memory (note that most folios are always
> exclusive in our system, not cow-shared) 2) would already be sufficient.
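
Right - just to make those two checks concrete (assuming a
folio_total_mapcount()-style helper along the lines of what you describe):

/* 1) at least one process currently maps the folio partially */
bool maybe_partial = folio_total_mapcount(folio) % folio_nr_pages(folio) != 0;

/* 2) surely partially mapped */
bool surely_partial = folio_total_mapcount(folio) < folio_nr_pages(folio);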

Suppose we improve Ryan's "mm: Batch-copy PTE ranges during fork()" to
pass nr_pages into the rmap call in copy_pte_range(), i.e.

copy_pte_range()
{
        /* dup the rmap for all nr_pages PTEs of the folio in one call */
        folio_try_dup_anon_rmap_ptes(...nr_pages...);
}

and, at the same time, have zap_pte_range() remove the whole anon rmap
in one call when the zapped range covers the whole folio.

We would then replace the per-page loops

for (i = 0; i < nr; i++, page++) {
        add_rmap(1);
}
for (i = 0; i < nr; i++, page++) {
        remove_rmap(1);
}

with a single add_rmap(nr_pages) / remove_rmap(nr_pages) call whenever we
are mapping or unmapping the entire folio.
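Roughly, the batched call sites might look like this (the *_ptes() names
follow the batched rmap API being discussed in Ryan's and David's series;
treat it as an illustrative sketch with error handling omitted, not a real
diff):

/* fork path: dup the rmap for all nr PTEs of the folio in one call */
folio_try_dup_anon_rmap_ptes(folio, page, nr, src_vma);

/* zap path: if the zapped range covers the whole folio, drop it in one call */
if (nr == folio_nr_pages(folio))
        folio_remove_rmap_ptes(folio, page, nr, vma);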

Then we might be able to TestAndSetPartialMapped on the folio whenever
1. someone is adding an rmap with a count not equal to nr_pages, or
2. someone is removing an rmap with a count not equal to nr_pages.
Either means we are doing a partial mapping or unmapping, so on that first
transition we increment partialmap_count by 1 and let debugfs or somewhere
present this count.

When the folio is released to the buddy allocator or split into normal
pages, we clear the flag and decrease partialmap_count by 1.
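
To be concrete about what I mean, a very rough sketch (the flag helpers and
the counter are invented names for illustration; none of this exists in any
tree):

static atomic_long_t partialmap_count = ATOMIC_LONG_INIT(0);

/* called from the batched rmap add/remove paths with the batch size */
static inline void folio_note_partial_mapping(struct folio *folio, int nr)
{
        if (nr == folio_nr_pages(folio))
                return;         /* whole-folio map/unmap, nothing to record */

        /* count each folio at most once, on its first partial map/unmap */
        if (!folio_test_set_partially_mapped(folio))
                atomic_long_inc(&partialmap_count);
}

/* when the folio is freed or split, drop it from the count again */
static inline void folio_forget_partial_mapping(struct folio *folio)
{
        if (folio_test_clear_partially_mapped(folio))
                atomic_long_dec(&partialmap_count);
}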

>
> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry




