On Fri, Mar 15, 2019 at 8:56 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Thu, Mar 14, 2019 at 9:37 PM Daniel Colascione <dancol@xxxxxxxxxx> wrote:
> >
> > On Thu, Mar 14, 2019 at 8:16 PM Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> > >
> > > On Thu, 14 Mar 2019 13:49:11 -0700
> > > Sultan Alsawaf <sultan@xxxxxxxxxxxxxxx> wrote:
> > >
> > > > Perhaps I'm missing something, but if you want to know when a
> > > > process has died after sending a SIGKILL to it, then why not just
> > > > make the SIGKILL optionally block until the process has died
> > > > completely? It'd be rather trivial to just store a pointer to an
> > > > on-stack completion inside the victim process' task_struct, and
> > > > then complete it in free_task().
> > >
> > > How would you implement such a method in userspace? kill() doesn't
> > > take any parameters but the pid of the process you want to send a
> > > signal to, and the signal to send. This would require a new system
> > > call, and be quite a bit of work.
> >
> > That's what the pidfd work is for. Please read the original threads
> > about the motivation and design of that facility.
> >
> > > If you can solve this with an ebpf program, I strongly suggest you
> > > do that instead.
> >
> > Regarding process death notification: I will absolutely not support
> > putting eBPF and perf trace events on the critical path of core
> > system memory management functionality. Tracing and monitoring
> > facilities are great for learning about the system, but they were
> > never intended to be load-bearing. The proposed eBPF
> > process-monitoring approach is just a variant of the netlink
> > proposal we discussed previously on the pidfd threads, and it shares
> > all of that proposal's drawbacks. We really need a core system call
> > --- really, we've needed robust process management since the
> > creation of Unix --- and I'm glad that we're finally getting it.
> > Adding new system calls is not expensive; going to great lengths to
> > avoid adding one is like calling a helicopter to avoid crossing the
> > street. I don't think we should present an abuse of the debugging
> > and performance monitoring infrastructure as an alternative to a
> > robust and desperately needed bit of core functionality that's
> > neither hard to add nor complex to implement nor expensive to use.
> >
> > Regarding the proposal for a new kernel-side lmkd: when possible,
> > the kernel should provide mechanism, not policy. Putting the low
> > memory killer back into the kernel would undo the significant
> > effort we've spent making it possible for userspace to do that job.
> > Compared to kernel code, userspace code is more easily understood,
> > more easily debugged, more easily updated, and much safer. If we
> > *can* move something out of the kernel, we should. This patch moves
> > us in exactly the wrong direction. Yes, we need *something* that
> > sits synchronously astride the page allocation path and does
> > *something* to stop a busy-beaver allocator that eats all the
> > available memory before lmkd, even mlocked and realtime, can
> > respond. The OOM killer is adequate for this very rare case.
> >
> > With respect to kill timing: Tim is right about the need for two
> > levels of policy: first, a high-level process prioritization and
> > memory-demand balancing scheme (which is what the OOM score
> > adjustment code in ActivityManager amounts to); and second, a
> > low-level process-killing methodology that maximizes sustainable
> > memory reclaim and minimizes unwanted side effects while killing
> > those processes that should be dead.
> > Both of these policies belong in userspace --- because they *can*
> > be in userspace --- and userspace needs only a few tools, most of
> > which already exist, to do a perfectly adequate job.
> >
> > We do want killed processes to die promptly. That's why I support
> > boosting a process's priority somehow when lmkd is about to kill
> > it. The precise way in which we do that --- involving not only
> > actual priority, but scheduler knobs, cgroup assignment, core
> > affinity, and so on --- is a complex topic best left to userspace.
> > lmkd already has all the knobs it needs to implement whatever
> > priority-boosting policy it wants.
> >
> > Hell, once we add a pidfd_wait --- which I plan to work on,
> > assuming nobody beats me to it, after pidfd_send_signal lands ---
> > you can imagine a general-purpose priority inheritance mechanism
> > expediting process death when a high-priority process waits on a
> > pidfd_wait handle for a condemned process. You know you're on the
> > right track design-wise when you start seeing this kind of elegant
> > constructive interference between seemingly unrelated features.
> > What we don't need is some kind of blocking SIGKILL alternative or
> > backdoor event delivery system.
>
> When talking about pidfd_wait functionality do you mean something
> like this: https://lore.kernel.org/patchwork/patch/345098/ ? I missed
> the discussion about it, could you please point me to it?

That directory-polling approach came up in the discussion. It's a bad
idea, mostly for API reasons. I'm talking about something more like
https://lore.kernel.org/lkml/20181029175322.189042-1-dancol@xxxxxxxxxx/,
albeit in system call form instead of in the form of a new per-task
proc file.
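
For concreteness, here is roughly what the kill-and-wait flow could
look like from userspace. This is only a sketch under stated
assumptions, not working code: pidfd_send_signal is the syscall from
the pending series (number 424 on x86-64 in that series), a /proc/<pid>
directory fd stands in for the pidfd as in that series, and pidfd_wait
is purely hypothetical, so it appears only as a comment:

/*
 * Sketch of a kill-and-wait flow. Assumes pidfd_send_signal() from
 * the pending series; pidfd_wait() is hypothetical and shown only as
 * a comment, since nothing about its shape has been decided.
 */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_send_signal
#define __NR_pidfd_send_signal 424
#endif

int main(int argc, char **argv)
{
	char path[64];
	int pidfd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	/* In the pending series, a /proc/<pid> dirfd acts as the pidfd. */
	snprintf(path, sizeof(path), "/proc/%s", argv[1]);
	pidfd = open(path, O_DIRECTORY | O_RDONLY | O_CLOEXEC);
	if (pidfd < 0) {
		perror("open");
		return 1;
	}

	if (syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0) {
		perror("pidfd_send_signal");
		return 1;
	}

	/*
	 * Hypothetical: block until the victim is fully dead, e.g.
	 *
	 *	syscall(__NR_pidfd_wait, pidfd, &status, 0);
	 *
	 * With the per-task proc file in the link above, this step
	 * would instead be a blocking read or poll of that file.
	 */
	close(pidfd);
	return 0;
}

The point of routing both operations through the same fd is that the
fd is a stable handle on process identity, so the wait can't race with
pid reuse the way a plain kill()/waitpid() pair on a non-child can.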
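For clarity about the "blocking SIGKILL" idea quoted at the top: in
kernel terms it would amount to fragments like the ones below. None of
this exists; the death_notify field and the call sites are invented
purely to illustrate the suggestion being discussed:

/* in struct task_struct (invented field): */
	struct completion *death_notify;	/* set by a blocking killer */

/* in free_task(), once the task is truly gone: */
	if (tsk->death_notify)
		complete(tsk->death_notify);

/* in a hypothetical blocking-kill path: */
	DECLARE_COMPLETION_ONSTACK(done);

	task->death_notify = &done;
	send_sig(SIGKILL, task, 1);
	wait_for_completion(&done);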
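And on the priority-boosting point: lmkd can already do something like
the following with existing interfaces. The SCHED_FIFO choice and the
priority value are made-up policy for the example, not a
recommendation:

/*
 * Illustrative only: one way an lmkd-style daemon could boost a
 * victim with existing knobs before killing it. The policy values
 * here are invented.
 */
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static int boost_and_kill(pid_t pid)
{
	struct sched_param sp = { .sched_priority = 1 };

	/* Give the victim enough CPU to run its exit path promptly. */
	if (sched_setscheduler(pid, SCHED_FIFO, &sp) < 0)
		perror("sched_setscheduler");	/* non-fatal; kill anyway */

	return kill(pid, SIGKILL);
}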