Re: [PATCH RFC 1/2] Add polling support to pidfd

On Fri, Apr 19, 2019 at 4:12 PM Christian Brauner <christian@xxxxxxxxxx> wrote:
>
> On Sat, Apr 20, 2019 at 12:46 AM Daniel Colascione <dancol@xxxxxxxxxx> wrote:
> >
> > On Fri, Apr 19, 2019 at 3:02 PM Christian Brauner <christian@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Apr 19, 2019 at 11:48 PM Christian Brauner <christian@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Apr 19, 2019 at 11:21 PM Daniel Colascione <dancol@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Apr 19, 2019 at 1:57 PM Christian Brauner <christian@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Fri, Apr 19, 2019 at 10:34 PM Daniel Colascione <dancol@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Fri, Apr 19, 2019 at 12:49 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > On Fri, Apr 19, 2019 at 09:18:59PM +0200, Christian Brauner wrote:
> > > > > > > > > On Fri, Apr 19, 2019 at 03:02:47PM -0400, Joel Fernandes wrote:
> > > > > > > > > > On Thu, Apr 18, 2019 at 07:26:44PM +0200, Christian Brauner wrote:
> > > > > > > > > > > On April 18, 2019 7:23:38 PM GMT+02:00, Jann Horn <jannh@xxxxxxxxxx> wrote:
> > > > > > > > > > > >On Wed, Apr 17, 2019 at 3:09 PM Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
> > > > > > > > > > > >> On 04/16, Joel Fernandes wrote:
> > > > > > > > > > > >> > On Tue, Apr 16, 2019 at 02:04:31PM +0200, Oleg Nesterov wrote:
> > > > > > > > > > > >> > >
> > > > > > > > > > > >> > > Could you explain when it should return POLLIN? When the whole
> > > > > > > > > > > >process exits?
> > > > > > > > > > > >> >
> > > > > > > > > > > >> > It returns POLLIN when the task is dead or doesn't exist anymore,
> > > > > > > > > > > >or when it
> > > > > > > > > > > >> > is in a zombie state and there's no other thread in the thread
> > > > > > > > > > > >group.
> > > > > > > > > > > >>
> > > > > > > > > > > >> IOW, when the whole thread group exits, so it can't be used to
> > > > > > > > > > > >monitor sub-threads.
> > > > > > > > > > > >>
> > > > > > > > > > > >> just in case... speaking of this patch it doesn't modify
> > > > > > > > > > > >proc_tid_base_operations,
> > > > > > > > > > > >> so you can't poll("/proc/sub-thread-tid") anyway, but iiuc you are
> > > > > > > > > > > >going to use
> > > > > > > > > > > >> the anonymous file returned by CLONE_PIDFD ?
> > > > > > > > > > > >
> > > > > > > > > > > >I don't think procfs works that way. /proc/sub-thread-tid has
> > > > > > > > > > > >proc_tgid_base_operations despite not being a thread group leader.
> > > > > > > > > > > >(Yes, that's kinda weird.) AFAICS the WARN_ON_ONCE() in this code can
> > > > > > > > > > > >be hit trivially, and then the code will misbehave.
> > > > > > > > > > > >
> > > > > > > > > > > >@Joel: I think you'll have to either rewrite this to explicitly bail
> > > > > > > > > > > >out if you're dealing with a thread group leader, or make the code
> > > > > > > > > > > >work for threads, too.
> > > > > > > > > > >
> > > > > > > > > > > The latter case probably being preferred if this API is supposed to be
> > > > > > > > > > > useable for thread management in userspace.
> > > > > > > > > >
> > > > > > > > > > At the moment, we are not planning to use this for sub-thread management. I
> > > > > > > > > > am reworking this patch to only work on clone(2) pidfds which makes the above
> > > > > > > > >
> > > > > > > > > Indeed and agreed.
> > > > > > > > >
> > > > > > > > > > discussion about /proc a bit unnecessary I think. Per the latest CLONE_PIDFD
> > > > > > > > > > patches, CLONE_THREAD with pidfd is not supported.
> > > > > > > > >
> > > > > > > > > Yes. We have no one asking for it right now and we can easily add this
> > > > > > > > > later.
> > > > > > > > >
> > > > > > > > > Admittedly I haven't gotten around to reviewing the patches here yet
> > > > > > > > > completely. But one thing about using POLLIN. FreeBSD is using POLLHUP
> > > > > > > > > on process exit which I think is nice as well. How about returning
> > > > > > > > > POLLIN | POLLHUP on process exit?
> > > > > > > > > We already do things like this. For example, when you proxy between
> > > > > > > > > ttys. If the process that you're reading data from has exited and
> > > > > > > > > closed its end, you usually still can't simply exit because it might
> > > > > > > > > still have buffered data that you want to read. The way one can deal
> > > > > > > > > with this from userspace is to observe a (POLLHUP | POLLIN) event and
> > > > > > > > > keep on reading until you only observe a POLLHUP without a POLLIN
> > > > > > > > > event, at which point you know you have read all data.
> > > > > > > > > I like the semantics for pidfds as well as it would indicate:
> > > > > > > > > - POLLHUP -> process has exited
> > > > > > > > > - POLLIN  -> information can be read
> > > > > > > >
> > > > > > > > Actually I think a bit different about this, in my opinion the pidfd should
> > > > > > > > always be readable (we would store the exit status somewhere in the future
> > > > > > > > which would be readable, even after task_struct is dead). So I was thinking
> > > > > > > > we always return EPOLLIN.  If process has not exited, then it blocks.
> > > > > > >
> > > > > > > ITYM that a pidfd polls as readable *once a task exits* and stays
> > > > > > > readable forever. Before a task exit, a poll on a pidfd should *not*
> > > > > > > yield POLLIN and reading that pidfd should *not* complete immediately.
> > > > > > > There's no way that, having observed POLLIN on a pidfd, you should
> > > > > > > ever then *not* see POLLIN on that pidfd in the future --- it's a
> > > > > > > one-way transition from not-ready-to-get-exit-status to
> > > > > > > ready-to-get-exit-status.
> > > > > >
> > > > > > What do you consider interesting state transitions? A listener on a pidfd
> > > > > > in epoll_wait() might be interested if the process execs for example.
> > > > > > That's a very valid use-case for e.g. systemd.
> > > > >
> > > > > Sure, but systemd is specialized.
> > > >
> > > > So is Android, and we're not designing an interface for Android but
> > > > for all of userspace. I hope this is clear. Service managers are
> > > > quite important, systemd is the largest one, and they can make good
> > > > use of this feature.
> > > >
> > > > >
> > > > > There are two broad classes of programs that care about process exit
> > > > > status: 1) those that just want to do something and wait for it to
> > > > > complete, and 2) programs that want to perform detailed monitoring of
> > > > > processes and intervention in their state. #1 is overwhelmingly more
> > > > > common. The basic pidfd feature should take care of case #1 only, as
> > > > > wait*() in file descriptor form. I definitely don't think we should be
> > > > > complicating the interface and making it more error-prone (see below)
> > > > > for the sake of that rare program that cares about non-exit
> > > > > notification conditions. You're proposing a complicated combination of
> > > > > poll bit flags that most users (the ones who just want to wait for
> > > > > processes) don't care about and that risk making the facility hard to
> > > > > use with existing event loops, which generally recognize readability
> > > > > and writability as the only properties that are worth monitoring.
> > > >
> > > > That whole paragraph is about dismissing a range of valid use-cases
> > > > based on assumptions such as "way more common" and even argues that
> > > > service managers are special cases and therefore not really worth
> > > > considering. I would like to be more open to other use cases.
> > > >
> > > > >
> > > > > > We can't use EPOLLIN for that too otherwise you'd need to call waitid(_WNOHANG)
> > > > > > to check whether an exit status can be read which is not nice and then you
> > > > > > multiplex different meanings on the same bit.
> > > > > > I would prefer if the exit status can only be read from the parent which is
> > > > > > clean and the least complicated semantics, i.e. Linus waitid() idea.
> > > > >
> > > > > Exit status information should be *at least* as broadly available
> > > > > through pidfds as it is through the last field of /proc/pid/stat
> > > > > today, and probably more broadly. I've been saying for six months now
> > > > > that we need to talk about *who* should have access to exit status
> > > > > information. We haven't had that conversation yet. My preference is to
> > > > > just make exit status information globally available, as FreeBSD seems
> > >
> > > Totally aside from whether or not this is a good idea but since you
> > > keep bringing
> > > this up and I'm really curious about this where is this documented and how
> > > does this work, please?
> >
> > According to the kqueue FreeBSD man page [1] (I'm reading the FreeBSD
> > 12 version), it's possible to register a PID of interest in a kqueue
> > via EVFILT_PROC and receive a NOTE_EXIT notification when that process
> > dies. NOTE_EXIT comes with the exit status of the process that died. I
> > don't see any requirement that EVFILT_PROC work only on child
> > processes of the waiter: on the contrary, the man page states that "if
> > a process can normally see another process, it can attach an event to
> > it." This documentation reads to me like process exit status is much
> > more widely available on FreeBSD than it is on Linux. Am I missing
> > something?
>
> So in fact FreeBSD has what I'm proposing fully for pids but partially
> for pidfds: state transition monitoring via NOTE_EXIT, NOTE_FORK,
> NOTE_EXEC, and with NOTE_TRACK even more. For NOTE_EXIT you register a
> pid or pidfd in an epoll_wait()/kqueue loop; you get an event and, in
> the case of kqueue, you can access that data by looking at the "data"
> member or by getting another event flag. I was putting the idea on the
> table to do this via EPOLLIN and then looking at a simple struct that
> contains that information.

If you turn pidfd into an event stream, reads have to be destructive.
If reads are destructive, you can't share pidfd instances between
multiple readers. If you can't get a pidfd except via clone, you can't
have more than one pidfd instance for a single process. The overall
result is that we're back in the same place we were before with the
old wait system, i.e., only one entity can monitor a process for
interesting state transitions and everyone else gets a racy,
inadequate interface via /proc. FreeBSD doesn't have this problem
because you can create an *arbitrary* number of *different* kqueue
objects, register a PID in each of them, and get an independent
destructively-read event stream in each context. It's worth noting
that the FreeBSD process file descriptor from pdfork(2) is *NOT* an
event stream, as you're describing, but a level-triggered
one-transition facility of the sort that I'm advocating.

In other words, FreeBSD already implements the model I'm describing:
level-triggered simple exit notification for pidfd and a separate
edge-triggered monitoring facility.

> I like this idea to be honest.

I'm not opposed to some facility that delivers a stream of events
relating to some process. That could even be epoll, as our rough
equivalent to kqueue. I don't see a need to make the pidfd the channel
through which we deliver these events. There's room for both an event
stream like the one FreeBSD provides and a level-triggered "did this
process exit or not?" indication via pidfd.


