Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info

On 10.01.24 13:05, Barry Song wrote:
On Wed, Jan 10, 2024 at 7:59 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

On 10/01/2024 11:38, Barry Song wrote:
On Wed, Jan 10, 2024 at 7:21 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

On 10/01/2024 11:00, David Hildenbrand wrote:
On 10.01.24 11:55, Ryan Roberts wrote:
On 10/01/2024 10:42, David Hildenbrand wrote:
On 10.01.24 11:38, Ryan Roberts wrote:
On 10/01/2024 10:30, Barry Song wrote:
On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

On 10/01/2024 09:09, Barry Song wrote:
On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

On 10/01/2024 08:02, Barry Song wrote:
On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:

On 1/9/24 19:51, Barry Song wrote:
On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@xxxxxxxxxx>
wrote:
...
Hi Ryan,

One thing that immediately came up during some recent testing of mTHP on arm64: the pid requirement is sometimes a little awkward. I'm running tests on a machine at a time for now, inside various containers and such, and it would be nice if there were an easy way to get some numbers for the mTHPs across the whole machine.

Just to confirm, you're expecting these "global" stats to be truly global and not per-container? (Asking because you explicitly mentioned being in a container.) If you want per-container, then you can probably just create the container in a cgroup?
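(For what it's worth, a per-container mode mostly boils down to enumerating the member pids of the cgroup and then doing the same per-pid scan for each. A rough, untested sketch, assuming cgroup v2 and a hypothetical cgroup path:)

    def cgroup_pids(cgroup_path):
        # Enumerate the member processes of a cgroup v2 via its cgroup.procs
        # file, e.g. cgroup_path = '/sys/fs/cgroup/mycontainer' (hypothetical).
        # Note: cgroup.procs only lists processes directly in this cgroup;
        # descendant cgroups would need to be walked as well.
        with open(f'{cgroup_path}/cgroup.procs') as f:
            return [int(pid) for pid in f.read().split()]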


I'm not sure if that changes anything about thpmaps here. Probably this is fine as-is. But I wanted to give some initial reactions from just some quick runs: the global stats would be convenient.

Thanks for taking this for a spin! Appreciate the feedback.


+1. But this seems to be impossible by scanning pagemap, so could we add these statistics in the kernel, just like /proc/meminfo, or in a separate /proc/mthp_info?


Yes. From my perspective, it looks like the global stats are more useful initially, and the more detailed per-pid or per-cgroup stats are the next level of investigation. So it feels odd to start with the more detailed stats.


Probably because this can be done without modifying the kernel.

Yes indeed; as John said in an earlier thread, my previous attempts to add stats directly in the kernel got pushback. DavidH was concerned that we don't really know exactly how to account mTHPs yet (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up adding the wrong ABI and having to maintain it forever. There has also been some pushback regarding adding more values to multi-value files in sysfs, so David was suggesting coming up with a whole new scheme at some point (I know /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups do live in sysfs).

Anyway, this script was my attempt to 1) provide a short-term solution to the "we need some stats" request and 2) provide a context in which to explore what the right stats are - this script can evolve without the ABI problem.

The detailed per-pid or per-cgroup stats are still quite useful in my case, where we set mTHP enabled/disabled and allowed sizes according to VMA types, e.g. libc_malloc, Java heaps, etc.

Different VMA types can have different anon_names, so I can use the detailed info to find out whether specific VMAs have gotten mTHP properly and how many they have gotten.
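(For concreteness, a rough, untested sketch of that kind of per-VMA filtering; it assumes CONFIG_ANON_VMA_NAME is enabled so named anon VMAs show up as "[anon:<name>]" in /proc/<pid>/maps, and 'libc_malloc' is just an example name:)

    import sys

    def named_anon_vmas(pid, name):
        # Named anon VMAs appear as "[anon:<name>]" in /proc/<pid>/maps when
        # CONFIG_ANON_VMA_NAME is enabled (set via prctl(PR_SET_VMA_ANON_NAME)).
        vmas = []
        with open(f'/proc/{pid}/maps') as f:
            for line in f:
                fields = line.split()
                if fields and fields[-1] == f'[anon:{name}]':
                    start, end = (int(x, 16) for x in fields[0].split('-'))
                    vmas.append((start, end))
        return vmas

    if __name__ == '__main__':
        pid, name = int(sys.argv[1]), sys.argv[2]   # e.g. 1234 libc_malloc
        for start, end in named_anon_vmas(pid, name):
            print(f'{start:x}-{end:x}  {(end - start) // 1024} KiB')

The selected ranges can then be fed into the same pagemap/kpageflags scan the script already does for whole processes.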

However, Ryan did clearly say, above, "In future we may wish to
introduce stats directly into the kernel (e.g. smaps or similar)". And
earlier he ran into some pushback on trying to set up /proc or /sys
values because this is still such an early feature.

I wonder if we could put the global stats in debugfs for now? That's
specifically supposed to be a "we promise *not* to keep this ABI stable"
location.

Now that I think about it, I wonder if we can add a --global mode to the script (or just infer global when neither --pid nor --cgroup are provided). I think I should be able to determine all the physical memory ranges from /proc/iomem, then grab all the info we need from /proc/kpageflags. We should then be able to process it all in much the same way as for --pid/--cgroup and provide the same stats, but it will apply globally. What do you think?

Having now thought about this for a few mins (in the shower, if anyone wants the complete picture :) ), this won't quite work. This approach doesn't have the virtual mapping information, so the best it can do is tell us "how many of each size of THP are allocated?" - it doesn't tell us anything about whether they are fully or partially mapped or what their alignment is (all necessary if we want to know if they are contpte-mapped). So I don't think this approach is going to be particularly useful.
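(For the archive, a rough, untested sketch of that kind of global scan. It assumes the KPF_COMPOUND_HEAD=15, KPF_COMPOUND_TAIL=16 and KPF_THP=22 bit numbers from include/uapi/linux/kernel-page-flags.h, needs root, and - as above - can only report how many THPs of each size are allocated, nothing about how they are mapped:)

    import struct
    from collections import Counter

    PAGE_SIZE = 4096
    KPF_COMPOUND_HEAD, KPF_COMPOUND_TAIL, KPF_THP = 15, 16, 22

    def system_ram_pfns():
        # Yield (first_pfn, npages) for each "System RAM" range in /proc/iomem.
        with open('/proc/iomem') as f:
            for line in f:
                if line.rstrip().endswith('System RAM'):
                    span = line.split(':')[0].strip()
                    start, end = (int(x, 16) for x in span.split('-'))
                    yield start // PAGE_SIZE, (end - start + 1) // PAGE_SIZE

    def thp_size_histogram():
        # Count allocated THPs by size: find a compound head page with the THP
        # flag set, then count the compound tail pages that follow it.
        sizes = Counter()
        with open('/proc/kpageflags', 'rb') as f:
            for first_pfn, npages in system_ram_pfns():
                f.seek(first_pfn * 8)
                flags = struct.unpack(f'<{npages}Q', f.read(npages * 8))
                i = 0
                while i < npages:
                    if flags[i] & (1 << KPF_COMPOUND_HEAD) and flags[i] & (1 << KPF_THP):
                        n = 1
                        while i + n < npages and flags[i + n] & (1 << KPF_COMPOUND_TAIL):
                            n += 1
                        sizes[n * PAGE_SIZE // 1024] += 1   # folio size in KiB
                        i += n
                    else:
                        i += 1
        return sizes

    if __name__ == '__main__':
        for kib, count in sorted(thp_size_histogram().items()):
            print(f'{kib:6} KiB: {count}')

A real implementation would want to read kpageflags in chunks rather than a whole range at a time, but the principle is the same.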

And this is also the big problem if we want to gather stats inside the kernel; if we want something equivalent to /proc/meminfo's AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the allocation of the THP but also whether it is mapped. That's easy for PMD-mappings, because there is only one entry to consider - when you set it, you increment the number of PMD-mapped THPs, and when you clear it, you decrement. But for PTE-mappings it's harder; you know the size when you are mapping so it's easy to increment, but you can do a partial unmap, so you would need to scan the PTEs to figure out if we are unmapping the first page of a previously fully-PTE-mapped THP, which is expensive. We would need a cheap mechanism to determine "is this folio fully and contiguously mapped in at least one process?".

As in OPPO's approach, which I shared with you before, we maintain two mapcounts:
1. an entire map count
2. a subpage mapcount
3. if both 1 and 2 exist, the folio is DoubleMapped.

This isn't a problem for us, and every time we do a partial unmap, we have an explicit cont_pte split which decreases the entire mapcount and increases the subpage's mapcount.

But its downside is that we expose this info to mm-core.

OK, but I think we have a slightly more generic situation going on with the upstream; if I've understood correctly, you are using the PTE_CONT bit in the PTE to determine if it's fully mapped? That works for your case, where you only have 1 size of THP that you care about (contpte-size). But for the upstream, we have multi-size THP, so we can't use the PTE_CONT bit to determine if it's fully mapped, because we can only use that bit if the THP is at least 64K and aligned, and only on arm64. We would need a SW bit for this purpose, and the mm would need to update that SW bit for every PTE on the full -> partial map transition.

Oh no. Let's not make everything more complicated for the purpose of some stats.


Indeed, I was intending to argue *against* doing it this way. Fundamentally, if we want to know what's fully mapped and what's not, then I don't see any way other than by scanning the page tables, and we might as well do that in user space with this script.
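(To illustrate the kind of scan that means - a simplified, untested sketch, not literally what the script does, that reads /proc/<pid>/pagemap for one VMA and finds runs of contiguously mapped PFNs, which is the building block for working out whether a folio is fully mapped and how it is aligned. Bit 63 of a pagemap entry is "present" and bits 0-54 hold the PFN; you need root to see the PFNs:)

    import struct

    PAGE_SIZE = 4096
    PM_PRESENT = 1 << 63          # page present in memory
    PM_PFN_MASK = (1 << 55) - 1   # bits 0-54 hold the PFN when present

    def vma_pfns(pid, start, end):
        # Return the PFN for each page of the VMA [start, end), or None if the
        # page is not present.
        npages = (end - start) // PAGE_SIZE
        with open(f'/proc/{pid}/pagemap', 'rb') as f:
            f.seek((start // PAGE_SIZE) * 8)
            data = f.read(npages * 8)
        return [entry & PM_PFN_MASK if entry & PM_PRESENT else None
                for (entry,) in struct.iter_unpack('<Q', data)]

    def contig_runs(pfns):
        # Yield (start_index, length) for each run of contiguous, present PFNs;
        # the run length and the alignment of pfns[start_index] tell you whether
        # a large folio could be fully mapped and naturally aligned here.
        i = 0
        while i < len(pfns):
            if pfns[i] is None:
                i += 1
                continue
            n = 1
            while i + n < len(pfns) and pfns[i + n] == pfns[i] + n:
                n += 1
            yield i, n
            i += n

The real script additionally consults /proc/kpageflags to confirm which of those PFNs actually belong to the same compound folio.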

Although, I expect you will shortly make a proposal that is simple to implement and prove me wrong ;-)

Unlikely :) As you said, once you have multiple folio sizes, it stops really making sense.

Assume you have a 128 KiB pagecache folio, and half of that is mapped. You can set cont-pte bits on that half and all is fine. Or AMD can benefit from its optimizations without the cont-pte bit and everything is fine.

Yes, but for debug and optimization, it's useful to know when THPs are fully/partially mapped, when they are unaligned, etc. Anyway, the script does that for us, and I think we are tending towards agreement that there are unlikely to be any cost benefits from moving it into the kernel.

Frequent partial unmap can defeat the whole purpose of using large folios; just imagine a large folio being split soon after it is formed. We lose the performance gain and might get a regression instead.

nit: just because a THP gets partially unmapped in a process doesn't mean it gets split into order-0 pages. If the folio still has all its pages mapped at least once then no further action is taken. If the page being unmapped was the last mapping of that page, then the THP is put on the deferred split queue, so that it can be split in future if needed.

Yes, that is exactly what the kernel is doing, but this is not so important for us in resolving performance issues.


And this can be very frequent; for example, a userspace heap manager releasing memory page by page.

In our real product deployment, we might not care about the second partial unmap, but we do care about the first partial unmap, as we can use it to know whether a split has ever happened on a large folio. A partially unmapped subpage is unlikely to be re-mapped back.

So I guess the first unmap is probably enough, at least for my product. I mean, we care more about whether a partial unmap has ever happened on a large folio than about exactly how it was partially unmapped :-)

I'm not sure what you are suggesting here? A global boolean that tells you if
any folio in the system has ever been partially unmapped? That will almost
certainly always be true, even for a very well tuned system.




We want simple stats that tell us which folio sizes are actually allocated. For
everything else, just scan the process to figure out what exactly is going on.


Certainly that's much easier to do. But is it valuable? It might be if we also keep stats for the number of failures to allocate the various sizes - then we can see what percentage of high-order allocation attempts are successful, which is probably useful.

My point is that we split large folios into two simple categories:
1. large folios which have never been partially unmapped
2. large folios which have ever been partially unmapped.


With the rmap batching stuff I am working on, you get the complete thing unmapped in most cases (as long as they are in one VMA) -- for example during munmap()/exit()/etc.

Only when multiple VMAs are involved, or when someone COWs / PTE_DONTNEEDs / munmaps some subpages, do you get a single page of a large folio unmapped.

That could be used to simply flag the folio in your case.

But I'm not sure if that has to be handled at the rmap level. It could be handled higher up in the call chain (esp. pte-dontneed).

--
Cheers,

David / dhildenb




