Kay Sievers wrote:

<Snip. Apologies for snipping all your questions. I think it's become a
little confused and this is the most important issue.>

[Could run_program() avoid forking twice in a threaded udevd, and kill
timed-out commands so they don't become zombies when they eventually
finish?]

> On Tue, May 26, 2009 at 20:05, Alan Jenkins <alan-jenkins@xxxxxxxxxxxxxx> wrote:
>
>> There are other workarounds for the lack of a timeout in sys_wait(), so
>> I don't think that's a problem. We can require that commands close
>> stdout & stderr pipes when they exit - i.e. do not pass them on to a
>> long-running child. (At the moment there's a debian script "net.agent"
>> which has to do this for debug mode - it would need fixing to always
>> do it).
>>
>
> I don't think we can really assume anything from called programs. :)
>
> Thanks,
> Kay

Sounds like the voice of experience.

The main constraint I'm working around is the lack of a timeout in
sys_wait*(). Forking twice allows the parent process to block on a pipe
instead, and use a select() timeout (rough sketch at the end of this
mail).

If we don't want to fork twice, and we can't rely on EOF on the
command's stdout pipe, I think the only alternative is to wait for
SIGCHLD. The signal handler would check for finished PIDs with WNOHANG,
and look them up in a list to see which thread needs waking up.

In that case, it's not essential to kill timed-out commands. The signal
handler can reap them just as easily as it reaps a command which is
being waited for by an event.

Forking twice is simpler, though. It's more direct, and it also works
easily with threading disabled. At the moment my thread-specific code is
reasonably contained. I have a ./configure option which switches back to
using separate processes, in case pthreads support is missing or broken
on some platforms.

Thanks
Alan
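
For clarity, here is roughly the shape of the double-fork version I have
in mind. This is only a sketch, not the actual run_program() code;
run_with_timeout() and all the other names are made up, and most error
handling is left out.

/* Rough sketch only: an intermediate "waiter" process does the blocking
 * waitpid() and reports the result over a pipe that only it holds, so
 * the caller can select() on that pipe with a timeout no matter what
 * the command does with its own stdout/stderr. */
#include <signal.h>
#include <stdlib.h>
#include <sys/select.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static int run_with_timeout(char *const argv[], int timeout_sec)
{
	int pipefd[2];
	pid_t waiter;
	fd_set fds;
	struct timeval tv;
	int status = -1;

	if (pipe(pipefd) < 0)
		return -1;

	waiter = fork();
	if (waiter < 0) {
		close(pipefd[0]);
		close(pipefd[1]);
		return -1;
	}

	if (waiter == 0) {
		/* waiter: fork the real command and block in waitpid();
		 * no timeout is needed here, this process owns nothing else */
		pid_t child;
		int st = EXIT_FAILURE;

		close(pipefd[0]);
		child = fork();
		if (child == 0) {
			/* don't leak the report pipe into the command */
			close(pipefd[1]);
			execvp(argv[0], argv);
			_exit(EXIT_FAILURE);
		}
		if (child > 0 && waitpid(child, &st, 0) == child)
			st = WIFEXITED(st) ? WEXITSTATUS(st) : EXIT_FAILURE;

		if (write(pipefd[1], &st, sizeof(st)) < 0)
			_exit(EXIT_FAILURE);
		_exit(0);
	}

	/* caller: block on the pipe with a select() timeout instead of
	 * blocking in waitpid(), which has no timeout */
	close(pipefd[1]);
	FD_ZERO(&fds);
	FD_SET(pipefd[0], &fds);
	tv.tv_sec = timeout_sec;
	tv.tv_usec = 0;

	if (select(pipefd[0] + 1, &fds, NULL, NULL, &tv) <= 0) {
		/* timed out (or select failed): get rid of the waiter.
		 * The command itself gets reparented to init and reaped
		 * there whenever it finally exits, so it never becomes
		 * a zombie of udevd. */
		kill(waiter, SIGKILL);
	} else if (read(pipefd[0], &status, sizeof(status)) != sizeof(status)) {
		status = -1;
	}

	close(pipefd[0]);
	/* the waiter exits (or was just killed) promptly, so this won't hang */
	waitpid(waiter, NULL, 0);
	return status;
}

The SIGCHLD alternative would drop the extra fork() and replace the
select() above with the event thread sleeping until a global handler has
done waitpid(-1, ..., WNOHANG), looked the pid up in a list and woken
the right thread - doable, but quite a bit more fiddly.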