Re: [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages

On Tue, Nov 19, 2024 at 9:51 AM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> On Tue, Nov 19, 2024 at 9:29 AM Usama Arif <usamaarif642@xxxxxxxxx> wrote:
> >
> >
> >
> > On 18/11/2024 02:27, Barry Song wrote:
> > > On Tue, Nov 12, 2024 at 10:37 AM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > >>
> > >> On Tue, Nov 12, 2024 at 8:30 AM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> > >>>
> > >>> On Thu, Nov 7, 2024 at 2:10 AM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > >>>>
> > >>>> From: Barry Song <v-songbaohua@xxxxxxxx>
> > >>>>
> > >>>> When large folios are compressed at a larger granularity, we observe
> > >>>> a notable reduction in CPU usage and a significant improvement in
> > >>>> compression ratios.
> > >>>>
> > >>>> mTHP's ability to be swapped out without splitting and swapped back in
> > >>>> as a whole allows compression and decompression at larger granularities.
> > >>>>
> > >>>> This patchset enhances zsmalloc and zram by adding support for dividing
> > >>>> large folios into multi-page blocks, typically configured with a
> > >>>> 2-order granularity. Without this patchset, a large folio is always
> > >>>> divided into `nr_pages` 4KiB blocks.
> > >>>>
> > >>>> The granularity can be set using the `ZSMALLOC_MULTI_PAGES_ORDER`
> > >>>> setting, where the default of 2 allows all anonymous THP to benefit.
> > >>>>
> > >>>> Examples include:
> > >>>> * A 16KiB large folio will be compressed and stored as a single 16KiB
> > >>>>   block.
> > >>>> * A 64KiB large folio will be compressed and stored as four 16KiB
> > >>>>   blocks.
> > >>>>
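(As a quick illustration of the knob above: with 4KiB base pages and the
default order of 2, the arithmetic works out as below. This is just a small
userspace sketch of the size calculation, not code from the series.)

#include <stdio.h>

#define PAGE_SHIFT                 12  /* assuming 4KiB base pages */
#define ZSMALLOC_MULTI_PAGES_ORDER 2   /* the default discussed above */

int main(void)
{
	unsigned long block = 1UL << (PAGE_SHIFT + ZSMALLOC_MULTI_PAGES_ORDER);
	unsigned long folios[] = { 16UL << 10, 64UL << 10 };

	for (int i = 0; i < 2; i++)
		/* 16KiB folio -> one 16KiB block; 64KiB folio -> four 16KiB blocks */
		printf("%luKiB folio -> %lu block(s) of %luKiB\n",
		       folios[i] >> 10, folios[i] / block, block >> 10);
	return 0;
}
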
> > >>>> For example, swapping out and swapping in 100MiB of typical anonymous
> > >>>> data 100 times (with 16KB mTHP enabled) using zstd yields the following
> > >>>> results:
> > >>>>
> > >>>>                         w/o patches        w/ patches
> > >>>> swap-out time(ms)       68711              49908
> > >>>> swap-in time(ms)        30687              20685
> > >>>> compression ratio       20.49%             16.9%
> > >>>
> > >>> The data looks very promising :) My understanding is that it also
> > >>> results in memory savings, right? Since zstd operates better on bigger
> > >>> inputs.
> > >>>
> > >>> Is there any end-to-end benchmarking? My intuition is that this patch
> > >>> series will improve the situation overall, assuming we don't fall back
> > >>> to individual zero-order page swap-ins too often, but it'd be nice if
> > >>> there were some data backing this intuition (especially with the
> > >>> upstream setup, i.e. without any private patches). If the fallback
> > >>> scenario happens frequently, the patch series can make a page fault
> > >>> more expensive (since we have to decompress the entire chunk and
> > >>> discard everything but the single page being loaded in), so it might
> > >>> make a difference.
> > >>>
> > >>> Not super qualified to comment on zram changes otherwise - just a
> > >>> casual observer to see if we can adopt this for zswap. zswap has the
> > >>> added complexity of not supporting THP zswap in (until Usama's patch
> > >>> series lands), and the presence of mixed backing states (due to zswap
> > >>> writeback), increasing the likelihood of fallback :)
> > >>
> > >> Correct. As I mentioned to Usama[1], this could be a problem, and we are
> > >> collecting data. The simplest approach to work around the issue is to fall
> > >> back to four small folios instead of just one, which would prevent the need
> > >> for three extra decompressions.
> > >>
> > >> [1] https://lore.kernel.org/linux-mm/CAGsJ_4yuZLOE0_yMOZj=KkRTyTotHw4g5g-t91W=MvS5zA4rYw@xxxxxxxxxxxxxx/
> > >>
> > >
> > > Hi Nhat, Usama, Ying,
> > >
> > > I committed to providing data for cases where large folio allocation fails and
> > > swap-in falls back to swapping in small folios. Here is the data that Tangquan
> > > helped collect:
> > >
> > > * zstd, 100MB of typical anon memory, swapout + swapin 100 times
> > >
> > > 1. 16KiB mTHP swapout + 16KiB mTHP swapin,
> > >    w/o zsmalloc large block (de)compression
> > >    swap-out(ms)  63151
> > >    swap-in(ms)   31551
> > > 2. 16KiB mTHP swapout + 16KiB mTHP swapin,
> > >    w/ zsmalloc large block (de)compression
> > >    swap-out(ms)  43925
> > >    swap-in(ms)   21763
> > > 3. 16KiB mTHP swapout + 100% fallback to small-folio swap-in,
> > >    w/ zsmalloc large block (de)compression
> > >    swap-out(ms)  43423
> > >    swap-in(ms)   68660
> > >
> >
> > Hi Barry,
> >
> > Thanks for the numbers!
> >
> > Under what condition was it falling back to small folios? Did you just add
> > a hack in alloc_swap_folio to jump straight to the fallback, or was it due
> > to cgroup-limited memory pressure?

Usama,
I realized you might be asking how test 3 was done.
Yes, it is a simple hack that forces a 100% fallback to small folios.
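Roughly along these lines (illustrative only, not the exact diff Tangquan
used for the test):

static struct folio *alloc_swap_folio(struct vm_fault *vmf)
{
	/*
	 * Test-3 hack: skip the mTHP orders entirely and always take the
	 * order-0 path, so every swap-in of a large compressed block goes
	 * through the fallback case.
	 */
	return __alloc_swap_folio(vmf);
}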


>
> In real scenarios, even without memcg, fallbacks mainly occur due to memory
> fragmentation, which prevents the allocation of mTHP (contiguous pages) from
> the buddy system. While cgroup memory pressure isn't the primary issue here,
> it can also contribute to fallbacks.
>
> Note that this fallback occurs universally for both do_anonymous_page() and
> filesystem mTHP.
>
> >
> > Would it be good to test with something like a kernel build (or something
> > else that causes swap thrashing) to see if the regression worsens with
> > large-granularity decompression? i.e. it would be good to have numbers for
> > real-world applications.
>
> I’m confident that the data will be reliable as long as memory isn’t fragmented,
> but fragmentation depends on when the case is run. For example, on a fresh
> system, memory is not fragmented at all, but after running various workloads
> for a few hours, serious fragmentation may occur.
>
> I recall reporting that a phone using 64KB mTHP had a high mTHP allocation
> success rate in the first hour, but this dropped to less than 10% after a few
> hours of use.
>
> In my understanding, the performance of mTHP can vary significantly depending
> on the system's fragmentation state. This is why efforts like Yu Zhao's TAO are
> being developed to address the mTHP allocation success rate issue.
>
> >
> > > Thus, "swap-in(ms) 68660," where mTHP allocation always fails, is significantly
> > > slower than "swap-in(ms) 21763," where mTHP allocation succeeds.
> > >
> > > If there are no objections, I could send a v3 patch to fall back to 4
> > > small folios instead of one. However, this would significantly increase
> > > the complexity of do_swap_page(). My gut feeling is that the added
> > > complexity might not be well-received :-)
> > >
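For reference, the fallback path would look roughly like this (an untested
sketch only; the helper name is made up, and the per-page swap entry / PTE
bookkeeping that makes do_swap_page() messy is omitted):

/*
 * Sketch: when the mTHP order cannot be allocated, grab nr order-0 folios
 * covering the same compressed multi-page block, so a single decompression
 * can fill all of them instead of being repeated and mostly discarded.
 */
static int swapin_fallback_small_folios(struct vm_fault *vmf,
					struct folio **folios, int nr)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
	int i;

	for (i = 0; i < nr; i++) {
		/* order-0 allocations rarely fail, unlike the mTHP order */
		folios[i] = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
					    addr + i * PAGE_SIZE);
		if (!folios[i])
			goto put;
	}
	return 0;
put:
	while (i--)
		folio_put(folios[i]);
	return -ENOMEM;
}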
> >
> > If there is space for 4 small folios, then maybe it's worth passing
> > __GFP_DIRECT_RECLAIM, as that can trigger compaction and give a large folio?
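(For clarity, I read this suggestion as roughly the following, with
vma_thp_gfp_mask() being my guess at where the flag would go; illustrative
only:)

	gfp_t gfp = vma_thp_gfp_mask(vma) | __GFP_DIRECT_RECLAIM;

	/* retry the mTHP order with direct reclaim/compaction allowed */
	folio = vma_alloc_folio(gfp, order, vma, vmf->address);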
> >
>
> Small folios are always much *easier* to obtain from the system. Triggering
> compaction won't necessarily yield a large folio if unmovable small folios
> are scattered.
>
> For small folios, memcg reclamation already happens, since a small folio is
> charged with GFP_KERNEL, as before:
>
> static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
> {
>         struct vm_area_struct *vma = vmf->vma;
>         struct folio *folio;
>         swp_entry_t entry;
>
>         folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address);
>         if (!folio)
>                 return NULL;
>
>         entry = pte_to_swp_entry(vmf->orig_pte);
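>         /* the GFP_KERNEL charge below can already trigger memcg reclaim */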
>         if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>                                            GFP_KERNEL, entry)) {
>                 folio_put(folio);
>                 return NULL;
>         }
>
>         return folio;
> }
>
> Thanks
> Barry




