Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android

On Sun, Mar 17, 2019 at 10:11:10AM -0700, Daniel Colascione wrote:
> On Sun, Mar 17, 2019 at 9:35 AM Serge E. Hallyn <serge@xxxxxxxxxx> wrote:
> >
> > On Sun, Mar 17, 2019 at 12:42:40PM +0100, Christian Brauner wrote:
> > > On Sat, Mar 16, 2019 at 09:53:06PM -0400, Joel Fernandes wrote:
> > > > On Sat, Mar 16, 2019 at 12:37:18PM -0700, Suren Baghdasaryan wrote:
> > > > > On Sat, Mar 16, 2019 at 11:57 AM Christian Brauner <christian@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Sat, Mar 16, 2019 at 11:00:10AM -0700, Daniel Colascione wrote:
> > > > > > > On Sat, Mar 16, 2019 at 10:31 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > On Fri, Mar 15, 2019 at 11:49 AM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Mar 15, 2019 at 07:24:28PM +0100, Christian Brauner wrote:
> > > > > > > > > [..]
> > > > > > > > > > > why do we want to add a new syscall (pidfd_wait) though? Why not just use
> > > > > > > > > > > standard poll/epoll interface on the proc fd like Daniel was suggesting.
> > > > > > > > > > > AFAIK, once the proc file is opened, the struct pid is essentially pinned
> > > > > > > > > > > even though the proc number may be reused. Then the caller can just poll.
> > > > > > > > > > > We can add a waitqueue to struct pid, and wake up any waiters on process
> > > > > > > > > > > death (A quick look shows task_struct can be mapped to its struct pid) and
> > > > > > > > > > > also possibly optimize it using Steve's TIF flag idea. No new syscall is
> > > > > > > > > > > needed then, let me know if I missed something?
> > > > > > > > > >
> > > > > > > > > > Huh, I thought that Daniel was against the poll/epoll solution?
> > > > > > > > >
> > > > > > > > > Hmm, going through earlier threads, I believe so now. Here was Daniel's
> > > > > > > > > reasoning about avoiding a notification about process death through proc
> > > > > > > > > directory fd: http://lkml.iu.edu/hypermail/linux/kernel/1811.0/00232.html
> > > > > > > > >
> > > > > > > > > Maybe a dedicated syscall for this would be cleaner after all.
> > > > > > > >
> > > > > > > > Ah, I wish I'd seen that discussion before...
> > > > > > > > A syscall makes sense, and it can be non-blocking; we can use
> > > > > > > > select/poll/epoll if we use an eventfd.
> > > > > > >
> > > > > > > Thanks for taking a look.
> > > > > > >
> > > > > > > > I would strongly advocate for a
> > > > > > > > non-blocking version, or at least a non-blocking option.
> > > > > > >
> > > > > > > Waiting for FD readiness is *already* blocking or non-blocking
> > > > > > > according to the caller's desire --- users can pass options they want
> > > > > > > to poll(2) or whatever. There's no need for any kind of special
> > > > > > > configuration knob or non-blocking option. We already *have* a
> > > > > > > non-blocking option that works universally for everything.
> > > > > > >
> > > > > > > As I mentioned in the linked thread, waiting for process exit should
> > > > > > > work just like waiting for bytes to appear on a pipe. Process exit
> > > > > > > status is just another blob of bytes that a process might receive. A
> > > > > > > process exit handle ought to be just another information source. The
> > > > > > > reason the unix process API is so awful is that for whatever reason
> > > > > > > the original designers treated processes as some kind of special kind
> > > > > > > of resource instead of fitting them into the otherwise general-purpose
> > > > > > > unix data-handling API. Let's not repeat that mistake.
> > > > > > >
> > > > > > > > Something like this:
> > > > > > > >
> > > > > > > > evfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> > > > > > > > // register eventfd to receive death notification
> > > > > > > > pidfd_wait(pid_to_kill, evfd);
> > > > > > > > // kill the process
> > > > > > > > pidfd_send_signal(pid_to_kill, ...)
> > > > > > > > // tend to other things
> > > > > > >
> > > > > > > Now you've lost me. pidfd_wait should return a *new* FD, not wire up
> > > > > > > an eventfd.
> > > > > > >
> > > > >
> > > > > Ok, I probably misunderstood your post linked by Joel. I thought your
> > > > > original proposal was based on being able to poll a file under
> > > > > /proc/pid and then you changed your mind to have a separate syscall
> > > > > which I assumed would be a blocking one to wait for process exit.
> > > > > Maybe you can describe the new interface you are thinking about in
> > > > > terms of userspace usage like I did above? Several lines of code would
> > > > > explain more than paragraphs of text.
> > > >
> > > > Hey, Thanks Suren for the eventfd idea. I agree with Daniel on this. The idea
> > > > from Daniel here is to wait for process death and exit events by just
> > > > referring to a stable fd, independent of whatever is going on in /proc.
> > > >
> > > > What is needed is something like this (in highly pseudo-code form):
> > > >
> > > > pidfd = opendir("/proc/<pid>",..);
> > > > wait_fd = pidfd_wait(pidfd);
> > > > read or poll wait_fd (non-blocking or blocking whichever)
> > > >
> > > > wait_fd will block until the task has either died or been reaped. In both these
> > > > cases, it can return a suitable string such as "dead" or "reaped" although an
> > > > integer with some predefined meaning is also Ok.
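Spelled out slightly more concretely (still only a sketch: pidfd_wait() is the syscall being proposed in this thread and does not exist, and the "dead"/"reaped" strings are the tentative return values from the pseudo-code above):

```c
/* Sketch only -- pidfd_wait() is hypothetical; error handling omitted. */
int pidfd = open("/proc/1234", O_RDONLY | O_DIRECTORY);
int wait_fd = pidfd_wait(pidfd);           /* proposed syscall */

struct pollfd p = { .fd = wait_fd, .events = POLLIN };
poll(&p, 1, -1);                           /* or timeout 0 for non-blocking */

char status[16] = {0};
read(wait_fd, status, sizeof(status) - 1); /* e.g. "dead" or "reaped" */
```

Because wait_fd was derived from an fd on the task's /proc entry, not from a raw PID, the notification cannot be confused with a recycled PID.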
> > > >
> > > > What that guarantees is, even if the task's PID has been reused, or the task
> > > > has already died, or has already died and been reaped, none of these events can
> > > > race with the code above, and the information passed to the user is race-free
> > > > and stable / guaranteed.
> > > >
> > > > An eventfd seems to not fit well, because AFAICS passing the raw PID to
> > > > eventfd as in your example would still race since the PID could have been
> > > > reused by another process by the time the eventfd is created.
> > > >
> > > > Also Andy's idea in [1] seems to use poll flags to communicate various things
> > > > which is still not as explicit about the PID's status so that's a poor API
> > > > choice compared to the explicit syscall.
> > > >
> > > > I am planning to work on a prototype patch based on Daniel's idea and post
> > > > something soon (chatted with Daniel about it and will reference him in the
> > > > posting as well). In that posting I will also summarize all the previous
> > > > discussions and come up with some tests. I hope to have something soon.
> > >
> > > Having pidfd_wait() return another fd will make the syscall harder to
> > > swallow for a lot of people I reckon.
> > > What exactly prevents us from making the pidfd itself readable/pollable
> > > for the exit status? They are "special" fds anyway. I would really like
> > > to avoid polluting the API with multiple different types of fds if possible.
> > >
> > > ret = pidfd_wait(pidfd);
> > > read or poll pidfd
> >
> > I'm not quite clear on what the two steps are doing here.  Is pidfd_wait()
> > doing a waitpid(2), and the read gets exit status info?
> 
> pidfd_wait on an open pidfd returns a "wait handle" FD. The wait

That is what you are proposing.  I'm not sure that's what Christian
was proposing.  'ret' is ambiguous there.  Christian?

> handle works just like a pipe: you can select/epoll/whatever for
> readability. read(2) on the wait handle (which blocks unless you set
> O_NONBLOCK, just like a pipe) completes with a siginfo_t when the
> process to which the wait handle is attached exits. Roughly,
> 
> int kill_and_wait_for_exit(int pidfd) {
>   int wait_handle = pidfd_wait(pidfd);
>   pidfd_send_signal(pidfd, ...);
>   siginfo_t exit_info;
>   // Blocks because we haven't configured non-blocking behavior, just
>   // like a pipe.
>   read(wait_handle, &exit_info, sizeof(exit_info));
>   close(wait_handle);
>   return exit_info.si_status;
> }
> 
> >
> > > (Note that I'm traveling so my responses might be delayed quite a bit.)
> > > (Ccing a few people that might have an opinion here.)
> > >
> > > Christian
> >
> > On its own, what you (Christian) show seems nicer.  But think about a main event
> > loop (like in lxc), where we just loop over epoll_wait() on various descriptors.
> > If we want to wait for any of several types of events - maybe a signalfd, socket
> > traffic, or a process death - it would be nice if we can treat them all the same
> > way, without having to setup a separate thread to watch the pidfd and send
> > data over another fd.  Is there a nice way we can provide that with what you've
> > got above?
> 
> Nobody is proposing any kind of mechanism that would require a
> separate thread. What I'm proposing works with poll and read and
> should be trivial to integrate into any existing event loop: from the
> perspective of the event loop, it looks just like a pipe.

(yes, I understood your proposal)

-serge
_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


