On Sun, Jan 5, 2020 at 2:18 AM Zbigniew Jędrzejewski-Szmek <zbyszek@xxxxxxxxx> wrote:
>
> On Sat, Jan 04, 2020 at 04:38:19PM -0700, Chris Murphy wrote:
> > My understanding of systemd OOMPolicy= behavior is that it looks for
> > the kernel's oom-killer messages and acts upon those. While earlyoom
> > uses the same metric (oom_score) as the oom-killer, it does not
> > invoke the oom-killer. Therefore systemd probably does not get the
> > proper hint to implement OOMPolicy=.
>
> Yes. The kernel reports oom events in the cgroup file memory.events,
> and systemd waits for an inotify event on that file; OOMPolicy=stop is
> implemented that way. And the OOMPolicy=kill option is "implemented"
> by setting memory.oom.group=1 in the kernel [1] and having the kernel
> kill all the processes. So systemd is providing a thin wrapper around
> the kernel functionality.
>
> If processes are not killed by the kernel but through a signal from
> userspace, none of this will work.

The gotcha with the kernel oom-killer on the desktop is that by the time
it is needed, it is already far too late, and it may never trigger at all.

The central problem to be solved isn't even what does the OOM killing, or
when: it's the ridiculously bad system responsiveness during heavy swap
usage. My top criticism of the feature proposal is that it doesn't address
the responsiveness problem head on. It only reduces the duration of the
badness, and that reduction isn't nearly enough.

One thing that helps the heavy-swap problem today? A much smaller swap
partition. In fact, no swap partition at all avoids the problem entirely,
but of course that has other consequences (which the working group is
discussing in #120).

--
Chris Murphy
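P.S. For anyone curious what the memory.events/inotify mechanism described
above looks like in practice, here is a minimal C sketch. It is not
systemd's actual code; the service cgroup path is a made-up example, and
it assumes cgroup v2 is mounted at /sys/fs/cgroup. It just blocks on
inotify and reports when the kernel's oom_kill counter for that cgroup
goes above zero.

/* oomwatch.c - minimal sketch of watching a cgroup's memory.events.
 * Not systemd's code; the cgroup path below is a hypothetical example.
 * Assumes cgroup v2 mounted at /sys/fs/cgroup.
 * Build: cc -o oomwatch oomwatch.c */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    const char *path =
        "/sys/fs/cgroup/system.slice/example.service/memory.events";

    int ifd = inotify_init1(IN_CLOEXEC);
    if (ifd < 0) {
        perror("inotify_init1");
        return 1;
    }

    /* cgroup v2 event files generate a file-modified event whenever
     * one of their counters changes. */
    if (inotify_add_watch(ifd, path, IN_MODIFY) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        char buf[4096];

        /* Block until the kernel updates memory.events. */
        if (read(ifd, buf, sizeof(buf)) < 0) {
            perror("read");
            return 1;
        }

        /* Re-read the file and check the oom_kill counter, which the
         * kernel bumps each time it kills a process in this cgroup. */
        FILE *f = fopen(path, "re");
        if (!f)
            continue;

        char line[256];
        while (fgets(line, sizeof(line), f)) {
            unsigned long n;
            if (sscanf(line, "oom_kill %lu", &n) == 1 && n > 0)
                printf("kernel oom-killer acted %lu time(s) in this cgroup\n", n);
        }
        fclose(f);
    }
}

The point of the sketch is only to show why an earlyoom-style userspace
kill (a plain SIGTERM/SIGKILL) never shows up here: the oom_kill counter
is bumped by the kernel's oom-killer, not by ordinary signals, so a
watcher like this (or systemd's OOMPolicy=stop) never sees anything.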