Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

On Sat, Dec 9, 2023 at 7:56 AM Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> Hi Nhat,
>
> On Thu, Dec 7, 2023 at 5:03 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> >
> > On Thu, Dec 7, 2023 at 4:19 PM Chris Li <chrisl@xxxxxxxxxx> wrote:
> > >
> > > Hi Nhat,
> > >
> > >
> > > On Thu, Dec 7, 2023 at 11:24 AM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> > > >
> > > > During our experiment with zswap, we sometimes observe swap IOs due to
> > > > occasional zswap store failures and writebacks-to-swap. These swapping
> > > > IOs prevent many users who cannot tolerate swapping from adopting zswap
> > > > to save memory and improve performance where possible.
> > > >
> > > > This patch adds the option to disable this behavior entirely: do not
> > > > write back to the backing swap device when a zswap store attempt
> > > > fails, and do not write pages in the zswap pool back to the backing
> > > > swap device (both when the pool is full, and when the new zswap
> > > > shrinker is called).
> > > >
> > > > This new behavior can be opted in/out of on a per-cgroup basis via a
> > > > new cgroup file. By default, writeback to the swap device is enabled,
> > > > which matches the previous behavior. Initially, writeback is enabled
> > > > for the root cgroup, and a newly created cgroup will inherit the
> > > > current setting of its parent.
> > > >
> > > > Note that this is subtly different from setting memory.swap.max to 0, as
> > > > it still allows for pages to be stored in the zswap pool (which itself
> > > > consumes swap space in its current form).
> > > >
> > > > This patch should be applied on top of the zswap shrinker series:
> > > >
> > > > https://lore.kernel.org/linux-mm/20231130194023.4102148-1-nphamcs@xxxxxxxxx/
> > > >
> > > > as it also disables the zswap shrinker, a major source of zswap
> > > > writebacks.
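
For reference, toggling the per-cgroup knob described above is just a write
to the new cgroup file. A minimal sketch (the cgroup name "workload" and the
cgroup v2 mount at /sys/fs/cgroup are assumptions, not part of the patch):

#include <stdio.h>

int main(void)
{
	/* Hypothetical cgroup path; memory.zswap.writeback is the new knob. */
	const char *knob = "/sys/fs/cgroup/workload/memory.zswap.writeback";
	FILE *f = fopen(knob, "w");

	if (!f) {
		perror(knob);
		return 1;
	}
	fputs("0\n", f);	/* 0 = no writeback, 1 = writeback allowed (default) */
	return fclose(f) ? 1 : 0;
}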
> > >
> > > I am wondering about the status of the "memory.swap.tiers"
> > > proof-of-concept patch. Are we still on board to have these two patches
> > > merged together somehow, so that "memory.swap.tiers" == "all" and
> > > "memory.swap.tiers" == "zswap" cover the memory.zswap.writeback == 1
> > > and memory.zswap.writeback == 0 cases?
> > >
> > > Thanks
> > >
> > > Chris
> > >
> >
> > Hi Chris,
> >
> > I briefly summarized my recent discussion with Johannes here:
> >
> > https://lore.kernel.org/all/CAKEwX=NwGGRAtXoNPfq63YnNLBCF0ZDOdLVRsvzUmYhK4jxzHA@xxxxxxxxxxxxxx/
>
> Sorry, I am traveling in a different time zone, so I was not able to get
> to that email sooner. That email was sent out less than one day before
> the v6 patch, right?
>
> >
> > TL;DR is we acknowledge the potential usefulness of the swap.tiers
> > interface, but the use case is not quite there yet, so it does not
>
> I disagree about there being no use case. No use case for Meta != no use
> case for the rest of the Linux kernel community. That mindset really
> needs to shift for Linux kernel development. Respect others' use cases.
> It is not just Meta's Linux kernel; it is everybody's Linux kernel.
>
> I can give you three use cases right now:
> 1) Google's production kernel uses SSD-only swap; it is currently in
> pilot. This is not expressible by memory.zswap.writeback alone. You can
> set memory.zswap.max = 0 and memory.zswap.writeback = 1 to get an
> SSD-backed swapfile, but the whole thing feels very clunky: what you
> really want is SSD-only swap, yet you need to do all this zswap config
> dance. Google has an internal memory.swapfile feature that implements a
> per-cgroup swap file type of "zswap only", "real swap file only", "both",
> or "none" (the exact keywords might be different), which has been running
> in production for almost 10 years. The need for more than zswap-type
> per-cgroup control is really there.
>
> 2) As indicated by this discussion, Tencent has a use case for SSD and
> hard disk swap as overflow.
> https://lore.kernel.org/linux-mm/20231119194740.94101-9-ryncsn@xxxxxxxxx/
> +Kairui

Yes, we are not using zswap. We are using ZRAM for swap, since we have
many different varieties of workload instances with very flexible
storage setups. Some of them don't have the ability to set up a
swapfile, so we built a set of kernel infrastructure based on ZRAM,
which has worked pretty well so far.

The concern from some teams is that ZRAM (or zswap) can't always free
up memory, so it may lead to a higher risk of OOM compared to a
physical swap device, and they do have devices suitable for swap on
some of their machines. So secondary swap support is very helpful in
case of a memory usage peak.

Besides this, another requirement is that different containers may
have different priorities: some containers can tolerate high swap
overhead while others cannot, so swap tiering is useful for us in many
ways.

And thanks to cloud infrastructure, the disk setup can change from
time to time depending on workload requirements, so our requirement is
to support ZRAM (always) + SSD (optional) + HDD (also optional) as
swap backends, without making things too complex to maintain.
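
To illustrate the stacking (just a minimal sketch, not our actual tooling;
the device paths and priority values are made up, and it needs
CAP_SYS_ADMIN): attaching the optional devices as lower-priority overflow
tiers only takes swapon() with explicit priorities:

#include <stdio.h>
#include <sys/swap.h>

static int swapon_prio(const char *path, int prio)
{
	/* Encode the priority; higher-priority devices are filled first. */
	int flags = SWAP_FLAG_PREFER |
		    ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

	if (swapon(path, flags)) {
		perror(path);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* ZRAM takes swap first; SSD, then HDD, only take the overflow. */
	swapon_prio("/dev/zram0", 100);
	swapon_prio("/swap/ssd-swapfile", 50);
	swapon_prio("/swap/hdd-swapfile", 10);
	return 0;
}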

Currently we have implemented cgroup-based ZRAM compression algorithm
control, per-cgroup ZRAM accounting and limits, and an experimental
kernel worker that migrates cold swap entries from the high-priority
device to the low-priority device at very small scale (we lack the
basic mechanics to do this at large scale; however, since the slow
device has low IOPS and cold pages are rarely accessed, this hasn't
been too much of a problem so far, just a bit ugly). The rest of the
swapping (e.g. secondary swap when ZRAM is full) depends on the
kernel's native ability.

So far it works, though not in the best form, and it needs more patches
to work better (e.g. the swapin/readahead patch I sent previously). Some
of our design may also need to change in the long term, and we also want
a well-built interface and kernel mechanics to manage multi-tier swap.
I'm very willing to talk and collaborate on this.




