RE: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios

> -----Original Message-----
> From: Nhat Pham <nphamcs@xxxxxxxxx>
> Sent: Thursday, August 29, 2024 5:07 PM
> To: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>; linux-
> kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; hannes@xxxxxxxxxxx;
> chengming.zhou@xxxxxxxxx; usamaarif642@xxxxxxxxx;
> ryan.roberts@xxxxxxx; Huang, Ying <ying.huang@xxxxxxxxx>;
> 21cnbao@xxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; Zou, Nanhai
> <nanhai.zou@xxxxxxxxx>; Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>;
> Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> 
> On Thu, Aug 29, 2024 at 4:55 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> wrote:
> >
> > On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@xxxxxxxxx>
> wrote:
> > I think it's also the fact that the processes exit right after they
> > are done allocating the memory. So I think in the case of SSD, when we
> > stall waiting for IO some processes get to exit and free up memory, so
> > we need to do less swapping out in general because the processes are
> > more serialized. With zswap, all processes try to access memory at the
> > same time so the required amount of memory at any given point is
> > higher, leading to more thrashing.
> >
> > I suggested keeping the memory allocated for a long time to even the
> > playing field, or we can make the processes keep looping and accessing
> > the memory (or part of it) for a while.
> >
> > That being said, I think this may be a signal that the memory.high
> > throttling is not performing as expected in the zswap case. Not sure
> > tbh, but I don't think SSD swap should perform better than zswap in
> > that case.
> 
> Yeah, something is fishy there. That said, the benchmarking in v4 is wack:
> 
> 1. We use lz4, which has a really poor compression factor.
> 
> 2. The swapfile is really small, so we occasionally see problems with
> swap allocation failure.
> 
> Both of these factors affect benchmarking validity and stability a
> lot. I think in this version's benchmarks, with zstd as the software
> compressor + a much larger swapfile (albeit on top of a ZRAM block
> device), we no longer see memory.high violation, even at a lower
> memory.high value...? The performance number is wack indeed - not a
> lot of values in the case 2 section.

Hopefully the latest data from the two sets of experiments (4G SSD with
usemem --sleep 10, and 179G SSD) makes better sense?
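
For reference, the --sleep 10 in the 4G SSD runs is meant to address the
point above about processes exiting right after allocating: each usemem
process holds on to its allocation for a while after touching it, so all
processes stay resident concurrently and the peak memory demand is
comparable between the SSD and zswap cases. Each process then roughly
behaves like the sketch below (this is only an illustration of the
workload shape, not usemem's actual code; the 1G size and 4K step are
placeholders for whatever the run actually uses):

/*
 * Illustrative sketch of the per-process workload: allocate a buffer,
 * touch every page so it is faulted in (and later reclaimed/swapped),
 * then keep the memory resident for a while before exiting.
 */
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	size_t size = 1UL << 30;	/* per-process allocation, e.g. 1G */
	size_t step = 4096;		/* touch one byte per 4K page */
	char *buf = malloc(size);

	if (!buf)
		return 1;

	for (size_t off = 0; off < size; off += step)
		buf[off] = 1;		/* fault every page in */

	sleep(10);			/* hold memory before exit (--sleep 10) */

	free(buf);
	return 0;
}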

Thanks,
Kanchana



