RE: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios

> -----Original Message-----
> From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Sent: Thursday, August 29, 2024 4:55 PM
> To: Nhat Pham <nphamcs@xxxxxxxxx>
> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>;
> linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; hannes@xxxxxxxxxxx;
> chengming.zhou@xxxxxxxxx; usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx;
> Huang, Ying <ying.huang@xxxxxxxxx>; 21cnbao@xxxxxxxxx;
> akpm@xxxxxxxxxxxxxxxxxxxx; Zou, Nanhai <nanhai.zou@xxxxxxxxx>;
> Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>;
> Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> 
> On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> >
> > On Thu, Aug 29, 2024 at 3:49 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> > wrote:
> > >
> > > On Thu, Aug 29, 2024 at 2:27 PM Kanchana P Sridhar
> > > <kanchana.p.sridhar@xxxxxxxxx> wrote:
> > >
> > > We are basically comparing zram with zswap in this case, and it's not
> > > fair because, as you mentioned, the zswap compressed data is being
> > > accounted for while the zram compressed data isn't. I am not really
> > > sure how valuable these test results are. Even if we remove the cgroup
> > > accounting from zswap, we won't see an improvement; we should
> > > expect performance similar to zram.
> > >
> > > I think the test results that are really valuable are case 1, where
> > > zswap users are currently disabling CONFIG_THP_SWAP, and get to enable
> > > it after this series.
> >
> > Ah, this is a good point.
> >
> > I think the point of comparing mTHP zswap vs. mTHP (SSD) swap is more
> > of a sanity check. IOW, if mTHP (SSD) swap outperforms mTHP zswap,
> > then something is wrong (otherwise why would anyone enable zswap -
> > might as well just use swap, since SSD swap with mTHP >>> zswap with
> > mTHP >>> zswap without mTHP).
> 
> Yeah, good point, but as you mention below...
> 
> >
> > That said, I don't think this benchmark can show it anyway. The access
> > pattern here is such that all the allocated memory is really cold,
> > so swapping to disk (or to zram, which does not account memory usage
> > towards the cgroup) is better by definition... And Kanchana does not
> > seem to have access to a setup with larger SSD swapfiles? :)
> 
> I think it's also the fact that the processes exit right after they
> are done allocating the memory. So I think in the case of SSD, when we
> stall waiting for IO, some processes get to exit and free up memory,
> so we need to do less swapping out overall because the processes are
> more serialized. With zswap, all processes try to access memory at the
> same time, so the amount of memory required at any given point is
> higher, leading to more thrashing.
> 
> I suggested keeping the memory allocated for a long time to level the
> playing field, or we can make the processes keep looping over and
> accessing the memory (or part of it) for a while.

Thanks for the suggestion, Yosry. I have shared the data in my earlier
response today, which seems to confirm your hypothesis. Please do let
me know if you have any other suggestions.

We generally see better usemem throughput with zswap-mTHP than with
SSD-mTHP.
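
For reference, the shape of the change being discussed is roughly the
sketch below: instead of exiting right after allocating, each process
keeps re-reading its buffer for several passes, so all the processes
stay live and keep faulting pages back in from zswap or SSD. This is
only an illustrative sketch; the 1 GiB buffer, 4 KiB stride, and 10
passes are values picked for the example, not the actual usemem
parameters.

/*
 * Illustrative sketch of a usemem-style worker that keeps its
 * memory allocated and re-touches it, rather than exiting as soon
 * as the allocation pass is done. Sizes and pass counts are
 * examples only, not the actual benchmark parameters.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE  (1UL << 30)	/* 1 GiB per process, illustrative */
#define STRIDE    4096UL	/* one touch per 4 KiB page */
#define NR_PASSES 10		/* keep accessing memory for a while */

int main(void)
{
	unsigned char *buf = malloc(BUF_SIZE);
	unsigned long i, pass, sum = 0;

	if (!buf) {
		perror("malloc");
		return 1;
	}

	/* Write pass: dirty every page so it has to be swapped out. */
	memset(buf, 0xa5, BUF_SIZE);

	/*
	 * Read passes: touch one byte per page so cold pages are
	 * faulted back in from zswap/SSD instead of simply being
	 * freed when the process exits.
	 */
	for (pass = 0; pass < NR_PASSES; pass++)
		for (i = 0; i < BUF_SIZE; i += STRIDE)
			sum += buf[i];

	/* Consume sum so the read loops are not optimized away. */
	printf("checksum: %lu\n", sum);
	free(buf);
	return 0;
}

With several such processes running concurrently under the same
memory.high limit, memory stays allocated for the whole run, so the
SSD and zswap cases see comparable memory pressure.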

Thanks,
Kanchana

> 
> That being said, I think this may be a signal that the memory.high
> throttling is not performing as expected in the zswap case. Not sure
> tbh, but I don't think SSD swap should perform better than zswap in
> that case.



