Re: swap on ZRAM, zswap, and Rust was: Better interactivity in low-memory situations

----- Original Message -----
> From: "Chris Murphy" <lists@xxxxxxxxxxxxxxxxx>
> To: "Development discussions related to Fedora" <devel@xxxxxxxxxxxxxxxxxxxxxxx>
> Sent: Friday, August 30, 2019 9:55:52 PM
> Subject: swap on ZRAM, zswap, and Rust was: Better interactivity in low-memory situations
> 
> Hi,
> This is yet another follow-up for this thread:
> https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx/message/XUZLHJ5O32OX24LG44R7UZ2TMN6NY47N/
> 
> 
> Basics:
> "zswap" compresses swap and uses a defined memory pool as a cache,
> with spill over (still compressed) going into a conventional swap
> partition. The memory pool doesn't appear as a separate block device.
> A conventional swap partition on a drive is required.
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/Documentation/vm/zswap.rst?h=v5.2.9
> 
> "swap on ZRAM" A ZRAM device appears as a block device, and is
> effectively a compressed RAM disk. It's common for this to be the
> exclusive swap device; of course it is volatile, so in that
> configuration your system can't hibernate. But it's also possible to
> use swap priority in fstab to cause the ZRAM device to be used with
> higher priority, and a conventional swap partition on a drive with a
> lower priority.
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/Documentation/blockdev/zram.txt?h=v5.2.9

Just a slight addition to this comparison - AFAIK there is a difference in how
zswap and zram handle the in-RAM pool becoming full and how they make use of
the swap device on the hard drive.

If the zswap pool becomes full, zswap will, according to the docs, free up
space in RAM by writing the least recently used pages out to the disk-based
swap, so that the "hot" pages stay in RAM and new pages can still be placed
there.

In comparison, AFAIK there is no such mechanism for zram: the priority value
simply determines which swap device is used first, and once it becomes full,
new pages simply go to the next swap device with lower priority. Please
correct me if I am completely wrong and the Linux swap allocation code
actually moves pages between swap devices based on priority. :)
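
For reference, such a split-priority setup would look roughly like this in
/etc/fstab (device names and priority values are just illustrative):

/dev/zram0   none   swap   defaults,pri=100   0 0
/dev/sda3    none   swap   defaults,pri=10    0 0

swapon --show then lists the active swap devices with their priorities, which
makes it easy to verify that the ZRAM device is actually preferred.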

> 
> 
> What they do:
> Either strategy can help avoid swap thrashing, by moderating the
> transition from exclusively RAM-based work to heavy swapping on disk.
> In my testing, the most aggressive memory starved workloads still
> result in an unresponsive system. Neither are a complete solution,
> they really seem to just be moderators that kick the can down the
> road. But I do think it's an improvement, especially in the incidental
> swap use case, where the transition from memory to swap isn't noticeable.
> 
> 
> Which is better?
> I don't know. Seriously, that's what all of my testing has come down
> to. A user won't likely notice the difference. Both dynamically
> allocate memory to their "memory pools" on demand. But otherwise, they
> really are two very different implementations. Regardless, Fedora
> Workstation, and probably even Fedora Server, should use one of them by
> default out of the box.
> 
> IoT folks are already using swap on ZRAM by default, in lieu of a
> disk-based swap partition. And Anaconda folks are doing the same for low
> memory devices when the installer is launched. I've been using zswap
> on Fedora Workstation edition on my laptop, and Fedora Server on an
> Intel NUC, for maybe two years (earlier this summer I switched both of
> them to swap on ZRAM to compare).
> 
> How are they different?
> There are several "swap on ZRAM" implementations. The zram package in
> Fedora right now is what the IoT folks are using; it installs a systemd
> service unit that sets up the ZRAM block device, runs mkswap on it, and
> then swapon, during system startup. Simple.
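
For anyone who wants to poke at this by hand, the service essentially boils
down to a handful of commands along these lines (device name, size, and
compressor are just illustrative; the packaged unit may differ):

modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm
echo 2G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0

Note that comp_algorithm has to be written before disksize, since the
algorithm can't be changed once the device has been initialized.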
> 
> The ideal scenario is to get everyone on the same page, and so far it
> looks like systemd's zram-generator, built in Rust, meets all the
> requirements. That needs to be confirmed, but also right now there's a
> small problem: it's not working. So we kinda need someone familiar
> with Rust and systemd to take this on, if we want to use the same
> thing everywhere.
> https://github.com/systemd/zram-generator/issues/4
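
For what it's worth, zram-generator is driven by a config file at
/etc/systemd/zram-generator.conf. The exact keys have shifted between
versions, so take this only as a rough sketch of what such a config looks
like:

[zram0]
zram-size = ram / 2
compression-algorithm = lz4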
> 
> Whereas zswap is set up with boot parameters, which we could have
> the installer set, contingent on a conventional swap partition being
> created.
> zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20
> zswap.zpool=zbud
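
Those parameters are also exposed at runtime under
/sys/module/zswap/parameters/, which makes it easy to check the current
state or experiment without rebooting (the writes need root):

cat /sys/module/zswap/parameters/enabled
echo 1 > /sys/module/zswap/parameters/enabled
echo 20 > /sys/module/zswap/parameters/max_pool_percent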
> 
> Zswap upstream tells me they're close to dropping the experimental
> status, hopefully by the end of the summer. It might be a bit longer
> before they're as confident with zpool type z3fold.
Indeed, I've had stability issues in the past when I tried the z3fold option,
but no issues with the default zbud zpool in the last ~year, so zswap really
does seem to be ready with the defaults.

> 
> Hackfest anyone?
> 
> 
> 
> --
> Chris Murphy
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



