On Wed, Feb 19, 2020 at 10:13 PM Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> My test: Fedora Workstation 31, laptop with 8G RAM, 8G swap partition,
> fill up memory using Firefox tabs pointed to various websites, and
> then I followed [1] to issue two commands:
>
> # echo reboot > /sys/power/disk
> # echo disk > /sys/power/state
>
> I experience twice as many failures as successes. Curiously, the
> successes show pageout does happen. Before hibernate there is no swap
> in-use, but after resume ~2GiB swap is in-use and RAM usage is about
> 50%.
I'm sorry for having confused this discussion :-/
In case it's interesting, my testing approach was to open GIMP, open a picture, and enlarge it to 40000 px wide, which takes over 8GB of RAM. In total, I then have about 9GB of RAM in use out of 16GB. Then I issued "systemctl hibernate". With vm.swappiness=0, I get the out-of-memory error I already posted before. I verified that the same occurs when I instruct the kernel to hibernate directly:
# echo disk > /sys/power/state
-bash: echo: write error: Cannot allocate memory
-bash: echo: write error: Cannot allocate memory
When I switch to vm.swappiness=1 (or the default 60), I can hibernate just fine, and I resume with ~6GB in RAM and ~3GB in swap. If it is still relevant, I can provide exact numbers from /proc/meminfo. But now that we see it doesn't affect people with the default settings, I guess it's no longer that important. This also invalidates most of my previous suggestions about the ideal swap size. OTOH I'm very happy that you proved me wrong and that I discovered this, because now I can again hibernate even when my memory is quite full.
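
In case it helps anyone reproduce this, below is roughly what I run to check and switch swappiness and to capture the numbers. The sysctl.d file name is just an example I made up; the rest are the standard sysctl/procfs interfaces.

Check the current value:

# cat /proc/sys/vm/swappiness

Change it temporarily (reverts on reboot):

# sysctl -w vm.swappiness=1

Make it persistent (any file under /etc/sysctl.d/ works; the name here is arbitrary):

# echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
# sysctl --system

Snapshot memory/swap usage before hibernating and again after resume:

# grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo
# free -h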