Hello Robert,

On Sun, 30 Jun 2019 09:54:14 -0400 Robert Moskowitz <rgm@xxxxxxxxxxxxxxx> wrote:
> And the Fedora21 image is limited to 1GB:
>
> Jun 30 01:53:35 lx140e kernel: [ 13357] 107 13357 884212 215032 3043328 72575 0 qemu-system-x86
> Jun 30 01:53:35 lx140e kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/machine.slice/machine-qemu\x2d3\x2dfedora21.scope,task=qemu-system-x86,pid=13357,uid=107
> Jun 30 01:53:35 lx140e kernel: Out of memory: Killed process 13357 (qemu-system-x86) total-vm:3536848kB, anon-rss:860128kB, file-rss:0kB, shmem-rss:0kB
> Jun 30 01:53:35 lx140e kernel: oom_reaper: reaped process 13357 (qemu-system-x86), now anon-rss:0kB, file-rss:12kB, shmem-rss:0kB
> Jun 30 01:53:36 lx140e journal[878]: internal error: End of file from qemu monitor
> Jun 30 01:53:36 lx140e systemd[1]: machine-qemu\x2d3\x2dfedora21.scope: Succeeded.
> Jun 30 01:53:36 lx140e systemd-machined[760]: Machine qemu-3-fedora21 terminated.
>
> A new kernel came out today, and I installed that and rebooted. So let's see what happens tonight...

Looking at your other post, it seems that some process requests a large amount of memory during the night, and the kernel kills the biggest memory consumers (chosen according to its own criteria, not just size) to make room for it. I had this exact same behavior when an rsync-based backup was running overnight: it was listing the files of an enormous disk, the memory needed for that was huge, and firefox and other big memory eaters were killed nearly every night.

Regards,

-- 
wwp
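P.S. If you want to see which processes the kernel would pick first the next time it runs out of memory, something like this rough sketch can help. It only reads the standard /proc/<pid>/oom_score and /proc/<pid>/comm files (higher score means killed sooner); the output format is just my own choice.

#!/usr/bin/env python3
# List the 10 processes the kernel currently ranks highest for OOM killing,
# based on /proc/<pid>/oom_score (a higher score is killed first).
import os

scores = []
for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open(f'/proc/{pid}/oom_score') as f:
            score = int(f.read())
        with open(f'/proc/{pid}/comm') as f:
            comm = f.read().strip()
        scores.append((score, int(pid), comm))
    except OSError:
        # The process exited between listing /proc and reading its files.
        continue

for score, pid, comm in sorted(scores, reverse=True)[:10]:
    print(f'{score:6d}  {pid:7d}  {comm}')

Running it while the backup (or whatever eats memory overnight) is active should show whether qemu-system-x86 is near the top of the list.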