Re: [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages

On Tue, Nov 12, 2024 at 10:37 AM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> On Tue, Nov 12, 2024 at 8:30 AM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> >
> > On Thu, Nov 7, 2024 at 2:10 AM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > >
> > > From: Barry Song <v-songbaohua@xxxxxxxx>
> > >
> > > When large folios are compressed at a larger granularity, we observe
> > > a notable reduction in CPU usage and a significant improvement in
> > > compression ratios.
> > >
> > > mTHP's ability to be swapped out without splitting and swapped back in
> > > as a whole allows compression and decompression at larger granularities.
> > >
> > > This patchset enhances zsmalloc and zram by adding support for dividing
> > > large folios into multi-page blocks, typically configured with order-2
> > > granularity. Without this patchset, a large folio is always divided
> > > into `nr_pages` 4KiB blocks.
> > >
> > > The granularity can be set using the `ZSMALLOC_MULTI_PAGES_ORDER`
> > > setting, where the default of 2 allows all anonymous THP to benefit.
> > >
> > > Examples include:
> > > * A 16KiB large folio will be compressed and stored as a single 16KiB
> > >   block.
> > > * A 64KiB large folio will be compressed and stored as four 16KiB
> > >   blocks.
> > >
> > > For example, swapping out and swapping in 100MiB of typical anonymous
> > > data 100 times (with 16KiB mTHP enabled) using zstd yields the
> > > following results:
> > >
> > >                         w/o patches        w/ patches
> > > swap-out time(ms)       68711              49908
> > > swap-in time(ms)        30687              20685
> > > compression ratio       20.49%             16.9%
> >
> > The data looks very promising :) My understanding is that it also
> > results in memory savings, right? Since zstd operates better on bigger
> > inputs.
> >
> > Is there any end-to-end benchmarking? My intuition is that this patch
> > series will improve the situation overall, assuming we don't fall back
> > to individual zero-order page swap-in too often, but it'd be nice if
> > there were some data backing this intuition (especially with the
> > upstream setup, i.e., without any private patches). If the fallback
> > scenario happens frequently, the patch series can make a page fault
> > more expensive (since we have to decompress the entire chunk and
> > discard everything but the single page being loaded in), so it might
> > make a difference.
> >
> > Otherwise I'm not super qualified to comment on the zram changes - just
> > a casual observer seeing if we can adopt this for zswap. zswap has the
> > added complexity of not supporting THP zswap-in (until Usama's patch
> > series lands), and the presence of mixed backing states (due to zswap
> > writeback), which increases the likelihood of fallback :)
>
> Correct. As I mentioned to Usama[1], this could be a problem, and we are
> collecting data. The simplest approach to working around the issue is to
> fall back to four small folios instead of just one, which would avoid the
> three extra decompressions.
>
> [1] https://lore.kernel.org/linux-mm/CAGsJ_4yuZLOE0_yMOZj=KkRTyTotHw4g5g-t91W=MvS5zA4rYw@xxxxxxxxxxxxxx/
>

Hi Nhat, Usama, Ying,

I committed to providing data for the case where large folio allocation fails
and swap-in falls back to small folios. Here is the data that Tangquan helped
collect:

* zstd, 100MiB of typical anon memory, swapped out and swapped in 100 times

1. 16KiB mTHP swap-out + 16KiB mTHP swap-in, w/o zsmalloc large-block
   (de)compression
   swap-out(ms)  63151
   swap-in(ms)   31551
2. 16KiB mTHP swap-out + 16KiB mTHP swap-in, w/ zsmalloc large-block
   (de)compression
   swap-out(ms)  43925
   swap-in(ms)   21763
3. 16KiB mTHP swap-out + 100% fallback to small-folio swap-in, w/ zsmalloc
   large-block (de)compression
   swap-out(ms)  43423
   swap-in(ms)   68660

Thus, "swap-in(ms) 68660," where mTHP allocation always fails, is significantly
slower than "swap-in(ms) 21763," where mTHP allocation succeeds.

If there are no objections, I could send a v3 patch that falls back to four
small folios instead of one. However, this would significantly increase the
complexity of do_swap_page(). My gut feeling is that the added complexity
might not be well-received :-)
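
To make the idea concrete, here is a rough userspace sketch of the two
fallback strategies. decompress_block(), swapin_one_page() and
swapin_four_pages() are made-up names for illustration (the stub just
memcpy()s where the real path would run zstd); none of them are kernel
functions:

    #include <string.h>

    #define PAGE_SIZE   4096
    #define BLOCK_PAGES 4                       /* order-2 block: 16KiB */
    #define BLOCK_SIZE  (PAGE_SIZE * BLOCK_PAGES)

    /* Stand-in for decompressing one 16KiB multi-page block. */
    static void decompress_block(const void *src, void *dst)
    {
            memcpy(dst, src, BLOCK_SIZE);       /* real code would run zstd */
    }

    /* Current fallback: each 4KiB fault decompresses the whole block
     * and keeps a single page, discarding the other three. */
    void swapin_one_page(const void *block, void *page, int idx)
    {
            char buf[BLOCK_SIZE];

            decompress_block(block, buf);
            memcpy(page, buf + idx * PAGE_SIZE, PAGE_SIZE);
    }

    /* Proposed fallback: decompress once and install all four order-0
     * pages, so no decompressed data is thrown away. */
    void swapin_four_pages(const void *block, void *pages[BLOCK_PAGES])
    {
            char buf[BLOCK_SIZE];
            int i;

            decompress_block(block, buf);
            for (i = 0; i < BLOCK_PAGES; i++)
                    memcpy(pages[i], buf + i * PAGE_SIZE, PAGE_SIZE);
    }

In do_swap_page() terms, the second variant presumably means allocating four
order-0 folios and mapping all of them from a single decompression, which is
where the added complexity would come from.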

Thanks
Barry
