Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

On Fri, Dec 08, 2023 at 03:55:59PM -0800, Chris Li wrote:
> I can give you three use cases right now:
> 1) Google's production kernel uses SSD-only swap; it is currently in
> pilot. This is not expressible with memory.zswap.writeback alone. You
> can set memory.zswap.max = 0 and memory.zswap.writeback = 1 to get an
> SSD-backed swapfile, but the whole thing feels very clunky: what you
> really want is SSD-only swap, yet you have to do this whole zswap
> config dance. Google has an internal memory.swapfile feature that
> sets a per-cgroup swap file type of "zswap only", "real swap file
> only", "both", or "none" (the exact keywords might be different); it
> has been running in production for almost 10 years. The need for
> per-cgroup control of more than just zswap is really there.

We use regular swap on SSD without zswap just fine. Of course it's
expressible.

On dedicated systems, zswap is disabled in sysfs. On shared hosts,
where the choice depends on which workload is scheduled, zswap is
generally enabled through sysfs, and individual cgroup access is
controlled via memory.zswap.max - which is what this knob is for.

This is analogous to enabling swap globally, and then opting
individual cgroups in and out with memory.swap.max.
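
To make this concrete, here is a rough sketch of what that looks like
from userspace with the existing knobs. It assumes cgroup2 is mounted
at /sys/fs/cgroup, and the cgroup name is made up:

#!/usr/bin/env python3
# Sketch: SSD-only swap for one cgroup using the existing interface.
# Zswap stays enabled system-wide; this cgroup just can't use it.
from pathlib import Path

cg = Path("/sys/fs/cgroup/workload.slice")    # hypothetical cgroup

(cg / "memory.zswap.max").write_text("0")     # no zswap for this cgroup
(cg / "memory.swap.max").write_text("max")    # unrestricted disk swap

Other cgroups on the same host keep using zswap as usual.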

So this usecase is very much already supported, and it's expressed in
a way that's pretty natural for how cgroups express access and lack of
access to certain resources.

I don't see how memory.swap.type or memory.swap.tiers would improve
this in any way. On the contrary, they would overlap and conflict with
the existing controls for managing swap and zswap on a per-cgroup
basis.

> 2) As indicated by this discussion, Tencent has a use case for SSD
> and hard disk swap as overflow.
> https://lore.kernel.org/linux-mm/20231119194740.94101-9-ryncsn@xxxxxxxxx/
> +Kairui

Multiple swap devices for round robin or with different priorities
aren't new; they have been supported for a very, very long time. So
far nobody has proposed controlling the exact behavior on a per-cgroup
basis, and I didn't see anybody in this thread asking for it either.
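
For reference, those priorities are a per-device, system-wide property
today - there is nothing per-cgroup about them. A quick sketch that
just parses /proc/swaps to show the existing state:

#!/usr/bin/env python3
# Sketch: list swap devices and their priorities from /proc/swaps.
# The priority is global state; there is no per-cgroup dimension.
with open("/proc/swaps") as f:
    next(f)                                  # skip the header line
    for line in f:
        fields = line.split()
        print(f"{fields[0]}: priority {fields[-1]}")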

So I don't see how this counts as an obvious and automatic usecase for
memory.swap.tiers.

> 3) Android has some fancy swap ideas, as shown by these patches:
> https://lore.kernel.org/linux-mm/20230710221659.2473460-1-minchan@xxxxxxxxxx/
> They got shot down due to the removal of frontswap, but the use case
> and product requirement are still there.
> +Minchan

This looks like an optimization for zram to bypass the block layer and
hook directly into the swap code. Correct me if I'm wrong, but this
doesn't appear to have anything to do with per-cgroup backend control.

> > zswap.writeback is a more urgent need, and does not prevent swap.tiers
> > if we do decide to implement it.
> 
> I respect that urgent need; that is why I acked the V5 patch, with
> the understanding that zswap.writeback is not carved in stone. When
> a better interface comes along, this one can be obsoleted. Frankly
> speaking, I would much prefer not to introduce a cgroup API that
> will soon be obsolete.
> 
> If you think zswap.writeback cannot be removed once a better
> alternative is available, please voice that now.
> 
> If you squash in my minimal memory.swap.tiers patch, it will also
> address your urgent need to merge "zswap.writeback", no?

We can always deprecate ABI if something better comes along.

However, it's quite bold to claim that memory.swap.tiers is the best
way to implement backend control on a per-cgroup basis, and that it'll
definitely be needed in the future. You might see this as a foregone
conclusion, but I very much doubt it.

Even if such a file were to show up, I'm not convinced it should even
include zswap as one of the tiers. Zswap isn't a regular swap backend:
it doesn't show up in /proc/swaps, it can't be a second tier, the way
it interacts with its backing swapfile is very different from how two
swapfiles of different priorities interact with each other, and it's
already controllable with memory.zswap.max.
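
Just to illustrate that asymmetry: zswap is a global module toggle
plus a per-cgroup cap, not another device that could slot into a
priority list. A rough sketch (the cgroup name is made up):

#!/usr/bin/env python3
# Sketch: where zswap control actually lives today. It is a module
# parameter plus a per-cgroup cap; it never appears as a device in
# /proc/swaps the way real swapfiles do.
from pathlib import Path

enabled = Path("/sys/module/zswap/parameters/enabled").read_text().strip()
print("zswap enabled globally:", enabled)    # 'Y' or 'N'

devices = Path("/proc/swaps").read_text().splitlines()[1:]
print("swap devices:", [line.split()[0] for line in devices])

cap = Path("/sys/fs/cgroup/workload.slice/memory.zswap.max")  # hypothetical
print("per-cgroup zswap cap:", cap.read_text().strip())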

I'm open to discussing usecases and proposals for more fine-grained
per-cgroup backend control. We've had discussions about per-cgroup
swapfiles in the past. Cgroup parameters for swapon are another
thought. There are several options and many considerations. The
memory.swap.tiers idea is the newest, has probably had the least
amount of discussion among them, and looks the least convincing to me.

Let's work out the requirements first.

The "conflict" with memory.zswap.writeback is a red herring - it's no
more of a conflict than setting memory.swap.tiers to "zswap" or "all"
and then setting memory.zswap.max or memory.swap.max to 0.

So the notion that we have to rush in a minimal version of a MUCH
bigger concept just to support zswap writeback disabling, and then
hope that the format holds up as the concept evolves and real usecases
materialize, is misguided. There is no reason to take that risk.



