On 6/5/20 6:59 PM, Chris Murphy wrote:
On Fri, Jun 5, 2020 at 6:47 PM Samuel Sieb <samuel@xxxxxxxx> wrote:
I installed the zram package and noticed the systemd-swap package, so
installed that also.
There are conflicting implementations:
anaconda package provides zram.service
zram package provides zram-swap.service
systemd-swap package provides systemd-swap.service
Did you leave something out?
Are you saying that zram and systemd-swap both provide configuration for
zram?
I've only casually tested the systemd-swap package. Note this isn't an
upstream systemd project, whereas the proposed Rust zram-generator is
"upstream" in that it's maintained by the same folks, but it's not
provided by the systemd package, I think because it's written in Rust.
Ok, I was thinking the generator might require rebooting to get it to
work. And I saw the systemd-swap package and thought that sounded
useful to try.
There shouldn't be any weird/bad interactions between them, but it is
possible for the user to become very confused about which one is
working. It *should* be zram-generator, because it runs much earlier
during boot than the others. But I have not thoroughly tested for
conflicting interactions, mainly just sanity testing to make sure
things don't blow up.
I only started the one service, so I don't think there are any conflicts.
I adjusted the zram setting to 4G and reduced zswap a bit. I have no
idea what that is doing; it doesn't seem to affect anything I can
measure. The overall improvement in responsiveness is very nice.
It might be that you're modifying the configuration of a different
implementation from the one that's actually setting up swap on zram.
No, it was quite clear that I was modifying the right config. It's the
/etc/systemd/swap.conf as described in the man page and it was affecting
the result.
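Since zram and zswap are easy to conflate, it may help to confirm whether zswap is enabled at all before tuning it. A minimal sketch reading the standard sysfs parameter paths (these files only exist if the kernel was built with zswap support):

```shell
#!/bin/sh
# Sketch: report basic zswap state from sysfs. The parameter files are
# standard kernel paths, but are absent on kernels built without zswap.
p=/sys/module/zswap/parameters
if [ -d "$p" ]; then
    echo "zswap enabled:    $(cat "$p/enabled")"
    echo "compressor:       $(cat "$p/compressor")"
    echo "max_pool_percent: $(cat "$p/max_pool_percent")"
else
    echo "no zswap support in this kernel"
fi
```

If `enabled` reads `N`, that would explain why changing zswap settings doesn't affect anything measurable.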
I don't understand the numbers I'm getting for these. I disabled my
swap partition to force as much to go to zram as possible and then
turned it back on.
# swapon
NAME       TYPE      SIZE USED PRIO
/dev/sda3  partition  16G 1.9G   -2
/dev/zram0 partition   4G   4G 32767
This looks like I'm using all 4G of allocated space in the zram swap, but:
# zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR  TOTAL  STREAMS MOUNTPOINT
/dev/zram0 lz4             4G 1.8G 658.5M 679.6M       4
This suggests that it's only using 1.8G. Can you explain what this means?
Yeah, that's confusing. zramctl just gets its info from sysfs, but you
can double-check it with:
cat /sys/block/zram0/mm_stat
The first value should match the "DATA" column in zramctl (mm_stat
reports bytes, zramctl human-readable sizes).
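To make that comparison concrete, here is a self-contained sketch of how the first mm_stat fields line up with zramctl's columns. The sample line is fabricated to roughly match the numbers quoted in this thread; on a real system you would read /sys/block/zram0/mm_stat instead:

```shell
#!/bin/sh
# Sketch: map the first three mm_stat fields onto zramctl's columns.
# Sample values approximate the thread's numbers; on a live system use:
#   read -r orig compr total rest < /sys/block/zram0/mm_stat
sample='1932735283 690487296 712568832 0 712568832 0 0'
set -- $sample
orig=$1    # orig_data_size: uncompressed bytes stored  (zramctl DATA)
compr=$2   # compr_data_size: compressed bytes          (zramctl COMPR)
total=$3   # mem_used_total: memory incl. metadata      (zramctl TOTAL)
echo "DATA  = $((orig  / 1048576)) MiB"
echo "COMPR = $((compr / 1048576)) MiB"
echo "TOTAL = $((total / 1048576)) MiB"
```

With these sample values, DATA comes out around 1843 MiB, i.e. the ~1.8G zramctl shows, not the 4G that swapon reports as USED.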
While the kernel has long supported using up to 32 swap devices at the
same time, this is seldom done in practice, so it could be an artifact
of that: the swap subsystem reports all of this swap as "in use", while
zramctl is telling you the truth about what the zram kernel module is
actually using. Is it a cosmetic reporting bug or intentional? Good
question. I'll try to reproduce and report it upstream and see what
they say. But if you beat me to it, that would be great, and then I
can just write the email for linux-mm and cite your bug report. :D
Part of my concern is that if it's not actually full, then why is it
using so much of the disk swap?
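One piece of this puzzle is swap priority: the kernel sends new swap-outs to the highest-priority device first and only spills to lower-priority devices. A self-contained sketch using the priorities quoted above (on a live system, replace the here-doc with the output of `swapon --show=NAME,PRIO --noheadings`):

```shell
#!/bin/sh
# Sketch: pick the swap device the kernel will fill first (highest
# PRIO). Canned input copied from the swapon output in this thread;
# on a live system pipe in:  swapon --show=NAME,PRIO --noheadings
preferred=$(sort -k2,2 -rn <<'EOF' | awk 'NR==1 {print $1}'
/dev/sda3 -2
/dev/zram0 32767
EOF
)
echo "preferred swap device: $preferred"
```

So with priority 32767 the zram device should absorb new swap-outs first, and the partition should only fill once zram is full (or was full, or disabled, when the pages went out).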
For upstream, do you mean the kernel?
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx