On Thu, Aug 6, 2009 at 4:40 AM, Paul Menage <menage@xxxxxxxxxx> wrote:
> On Thu, Aug 6, 2009 at 4:24 AM, Louis Rilling <Louis.Rilling@xxxxxxxxxxx> wrote:
>>
>> You meant signal_struct, right? sighand_struct can be shared by several
>> thread groups, while signal_struct can't.
>>
>
> No, I meant sighand_struct. I realise that it *can* be shared between
> processes, but I didn't think that NPTL actually did so. (Are there
> common cases of this happening?) And in cases where it was shared, it
> wouldn't affect correctness, but simply create the potential for a
> little more contention.
>
> I agree that signal_struct might in principle be a better place for
> it, but the first cacheline of signal_struct appears to be occupied
> with performance-sensitive things (a couple of counters and a queue
> used in do_wait()) already, whereas the first cacheline of
> sighand_struct only appears to be incremented/decremented during
> fork/exit, and when delivering a bunch of mostly-fatal signals.
>
> But having said that, if having it in signal_struct isn't considered a
> potential performance hit, it would be fine there too.
>
> Paul

I'm presently rewriting the locking scheme so that the rwsem lives in
sighand_struct, and writing two new functions, lock_threadgroup_fork and
unlock_threadgroup_fork (for use in cgroup_attach_proc), which will live in
cgroup.c but are generic enough that they could be used by anybody who wants
to make threadgroup-wide, fork-sensitive changes.

I'm also putting the fork_lock under an #ifdef CONFIG_CGROUPS, which could be
expanded to, say, CONFIG_FORK_LOCK_THREADGROUP (which CGROUPS would depend
upon) if somebody else ever wanted to use this lock as well. For generality,
I'll have the down_read and up_read on the lock in do_fork() directly
(instead of in cgroup_fork and cgroup_post_fork as in this version of the
patch).

If there are no more comments/discussion on the locking scheme, I'll resubmit
the patch series with these changes approximately Monday.
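
For concreteness, something along these lines is what I have in mind. This is
only a rough sketch, not the actual patch: the field name
threadgroup_fork_lock, the helper signatures (taking a task_struct *), and the
initialization site are placeholders and may well change in the real series.

/* include/linux/sched.h */
struct sighand_struct {
	/* ... existing fields (count, action, siglock, signalfd_wqh) ... */
#ifdef CONFIG_CGROUPS
	/*
	 * Taken for reading around copy_process(); taken for writing by
	 * anyone (e.g. cgroup_attach_proc) who needs the thread group's
	 * membership to stay stable while forks are excluded.
	 * Initialized with init_rwsem() wherever the sighand_struct is
	 * set up.
	 */
	struct rw_semaphore	threadgroup_fork_lock;
#endif
};

/* kernel/cgroup.c -- write-side helpers, generic enough for any caller
 * that wants to make a threadgroup-wide, fork-sensitive change. */
void lock_threadgroup_fork(struct task_struct *leader)
{
	down_write(&leader->sighand->threadgroup_fork_lock);
}

void unlock_threadgroup_fork(struct task_struct *leader)
{
	up_write(&leader->sighand->threadgroup_fork_lock);
}

/* kernel/fork.c -- read side taken in do_fork() directly, bracketing
 * copy_process(), rather than in cgroup_fork()/cgroup_post_fork(): */

#ifdef CONFIG_CGROUPS
	down_read(&current->sighand->threadgroup_fork_lock);
#endif
	p = copy_process(...);
#ifdef CONFIG_CGROUPS
	up_read(&current->sighand->threadgroup_fork_lock);
#endif

cgroup_attach_proc would then just wrap its walk over the thread group in
lock_threadgroup_fork()/unlock_threadgroup_fork(), and any other user who
needs fork exclusion across a thread group could do the same.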