Re: [LSF/MM/BPF TOPIC] TAO: THP Allocator Optimizations

On 6 Mar 2024, at 10:51, Johannes Weiner wrote:

> On Thu, Feb 29, 2024 at 11:34:32AM -0700, Yu Zhao wrote:
>> TAO is an umbrella project aiming at a better economy of physical
>> contiguity viewed as a valuable resource. A few examples are:
>> 1. A multi-tenant system can have guaranteed THP coverage while
>>    hosting abusers/misusers of the resource.
>> 2. Abusers/misusers, e.g., workloads excessively requesting and then
>>    splitting THPs, should be punished if necessary.
>> 3. Good citizens should be rewarded with, e.g., lower allocation
>>    latency and lower metadata cost (struct page).
>> 4. Better interoperability with userspace memory allocators when
>>    transacting the resource.
>>
>> This project puts the same emphasis on the established use case for
>> servers and the emerging use case for clients so that client workloads
>> like Android and ChromeOS can leverage the recent multi-sized THPs
>> [1][2].
>>
>> Chapter One introduces the cornerstone of TAO: an abstraction called
>> policy (virtual) zones, which are overlaid on the physical zones.
>> This is in line with item 1 above.
>
> This is a very interesting topic to me. Meta has collaborated with CMU
> to research this as well, the results of which are typed up here:
> https://dl.acm.org/doi/pdf/10.1145/3579371.3589079
>
> We had used a dynamic CMA region, but unless I'm missing something
> about the policy zones this is just another way to skin the cat.
>
> The other difference is that we made the policy about migratetypes
> rather than order. The advantage of doing it by order is of course
> that you can forego a lot of compaction work to begin with. The
> downside is that you have to be more precise and proactive about
> sizing the THP vs non-THP regions correctly, as it's more restrictive
> than saying "this region just has to remain compactable, but is good
> for small and large pages" - most workloads will have a mix of those.
>
> For region sizing, I see that for now you have boot parameters. But
> the exact composition of orders that a system needs is going to vary
> by workload, and likely within workloads over time. IMO some form of
> auto-sizing inside the kernel will make the difference between this
> being a general-purpose OS feature and "this is useful to hyperscalers
> that control their whole stack, have resources to profile their
> applications in-depth, and can tailor-make kernel policies around the
> results" - not unlike hugetlb itself.
>
> What we had experimented with is a feedback system between the
> regions. It tracks the amount of memory pressure that exists for the
> pages in each section - i.e. how much reclaim and compaction is needed
> to satisfy allocations from a given region, and how many refaults and
> swapins are occurring in them - and then moves the boundaries
> accordingly if there is an imbalance.
>
> The first draft of this was an extension to psi to track pressure by
> allocation context. This worked quite well, but was a little fat on
> the scheduler cacheline footprint. Kaiyang (CC'd) has been working on
> tracking these input metrics in a leaner fashion.
>
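
A minimal sketch of what such a feedback loop could look like, modeled as a
standalone userspace C program for illustration only; struct region_stats,
region_pressure(), rebalance_regions() and the 2x imbalance threshold are
all made-up names and numbers, not the CMU/Meta implementation and not TAO
code:

/*
 * Hypothetical model of the boundary feedback loop described above.
 * Nothing here is kernel code; the names and thresholds are invented.
 */
#include <stdio.h>
#include <string.h>

struct region_stats {
	unsigned long reclaim_scanned;	/* reclaim work needed in this region */
	unsigned long compact_scanned;	/* compaction work needed */
	unsigned long refaults;		/* refaults + swapins, i.e. thrashing */
};

/* One pressure score per region: how hard allocations from it have it. */
static unsigned long region_pressure(const struct region_stats *s)
{
	return s->reclaim_scanned + s->compact_scanned + 2 * s->refaults;
}

/*
 * Periodically move the THP/non-THP boundary toward whichever region is
 * under more pressure, one 2MB pageblock (512 4k pages) at a time, then
 * reset the counters so the loop only reacts to recent behaviour.
 */
static void rebalance_regions(struct region_stats *thp,
			      struct region_stats *base,
			      long *thp_region_pages)
{
	const long step = 512;
	unsigned long p_thp  = region_pressure(thp);
	unsigned long p_base = region_pressure(base);

	if (p_thp > 2 * p_base)
		*thp_region_pages += step;	/* grow the THP region */
	else if (p_base > 2 * p_thp)
		*thp_region_pages -= step;	/* shrink it */

	memset(thp, 0, sizeof(*thp));
	memset(base, 0, sizeof(*base));
}

int main(void)
{
	struct region_stats thp  = { .compact_scanned = 4096, .refaults = 100 };
	struct region_stats base = { .reclaim_scanned = 512 };
	long thp_region_pages = 262144;		/* 1GB worth of 4k pages */

	rebalance_regions(&thp, &base, &thp_region_pages);
	printf("THP region is now %ld pages\n", thp_region_pages);
	return 0;
}

In a real kernel the counters would presumably come from vmstat/psi-style
accounting per allocation context, and the boundary move would have to
respect pageblock and zone alignment.
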
> You mentioned a pageblock-oriented solution also in Chapter One. I had
> proposed one before, so I'm obviously biased, but my gut feeling is
> that we likely need both - one for 2MB and smaller, and one for
> 1GB. My thinking is this:
>
> 1. Contiguous zones are more difficult and less reliable to resize at
>    runtime, and the huge page size you're trying to grow and shrink
>    the regions for matters. Assuming 4k pages (wild, I know) there are
>    512 pages in a 2MB folio, but a quarter million pages in a 1GB
>    folio. It's much easier for a single die-hard kernel allocation to
>    get in the way of expanding the THP region by another 1GB page than
>    finding 512 disjunct 2MB pageblocks somewhere.
>
>    Basically, dynamic adaptiveness of the pool seems necessary for a
>    general-purpose THP[tm] feature, but I also think adaptiveness for 1G
>    huge pages is going to be difficult to pull off reliably, simply
>    because we have no control over the lifetime of kernel allocations.
>
> 2. I think there also remains a difference in audience. Reliable
>    coverage of up to 2MB would be a huge boon for most workloads,
>    especially the majority of those that are not optimized much for
>    contiguity. IIRC Willy mentioned somewhere before that nowadays the
>    optimal average page size is still in the multi-k range.
>
>    1G huge pages are immensely useful for specific loads - we
>    certainly have our share of those as well. But the step size to 1GB
>    is so large that:
>
>    1) fewer applications can benefit in the first place
>
>    2) it requires applications to participate more proactively in the
>       contiguity efforts to keep internal fragmentation reasonable
>
>    3) the 1G huge pages are more expensive and less reliable when it
>       comes to growing the THP region by another page at runtime,
>       which remains a forcing function for static, boot-time configs
>
>    4) the performance impact of falling back from 1G to 2MB or 4k
>       would be quite large compared to falling back from 2M. Setups
>       that invest to overcome all of the above difficulties in order
>       to squeeze more cycles out of their systems are going to be less
>       tolerant of just falling back to smaller pages
>
>    As you can see, points 2-4 take a lot of the "transparent" out of
>    "transparent huge pages".

Also, there are implementation challenges for 1GB THP, based on my past
experience:

1) I had triple mapping (PTE, PMD, PUD) support for 1GB THP in my
   original patchset, but the implementation was quite hacky and complicated.
   Subpage mapcount is going to be a headache to maintain. We probably
   do not want to support triple mapping.

2) Page migration was not in my patchset due to high migration overheads,
   although the implementation might not be hard. At the very least,
   splitting a 1GB THP upon migration should be added to make it movable;
   otherwise, it might cause performance issues on NUMA systems.

3) Creating a 1GB THP at page fault time might cause long latencies. When
   to create one, and who can trigger the creation, will need to be
   discussed; khugepaged and process_madvise are candidates (see the
   sketch of the existing 2MB-scale interface below).

So we are more likely to end up with a 1GB large folio without much of the
"transparent" part.
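
For reference on point 3: the 2MB-scale collapse interface already exists
as MADV_COLLAPSE (Linux 6.1+), which asks for a synchronous PMD-sized
collapse via madvise() and, if I remember correctly, is also accepted by
process_madvise() for remote processes. The sketch below only shows that
existing PMD-sized path; nothing in it is 1GB-specific, and a PUD-sized
variant is purely speculative:

/*
 * Sketch of the existing 2MB-scale synchronous collapse, for reference.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* from <linux/mman.h>, for older libc headers */
#endif

#define SZ_2M	(2UL << 20)

int main(void)
{
	/* A 2MB-aligned, 2MB-sized anonymous region. */
	void *buf = aligned_alloc(SZ_2M, SZ_2M);
	if (!buf)
		return 1;

	/* Populate it so there is something to collapse. */
	memset(buf, 0, SZ_2M);

	/*
	 * Ask the kernel to collapse the range into a PMD-mapped THP now,
	 * instead of waiting for khugepaged to get around to it.
	 */
	if (madvise(buf, SZ_2M, MADV_COLLAPSE))
		perror("madvise(MADV_COLLAPSE)");

	free(buf);
	return 0;
}

A hypothetical 1GB knob would presumably look similar, but would need a
PUD-aligned region and a far more expensive and failure-prone collapse,
which is exactly the latency concern above.
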

--
Best Regards,
Yan, Zi


