On Thu, Aug 6, 2009 at 3:34 AM, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> On Thu, 2009-08-06 at 03:28 -0700, Paul Menage wrote:
>
>> OK, well if lockdep can't currently handle the "writer takes a lock on
>> every thread" model,
>
> I haven't read what this is about, but simply looking at that sentence
> makes me want to hit someone with a cluebat. Have you any idea how
> expensive that is?

For lockdep to track that many locks, or just the concept of taking
that many locks generally?

The basic idea is that in order to implement a "procs" file in cgroups
that can migrate all threads in a process atomically, we need to
synchronize with concurrent clone() calls. But since thread clones are
likely to occur far more often than "procs" writes, and we wanted to
avoid introducing overhead into the clone path, one approach was to
give each thread a fork mutex, which it could take around the relevant
parts of the fork/clone operation, and have the "procs" writer deal
with obtaining the fork mutex for every thread in the process being
moved, pushing the overhead on to the "procs" writer.

I don't think it's a deficiency of lockdep that it would have trouble
dealing with this - in fact, my original plan was that we'd just have
to accept that anyone doing a "procs" move on a massive process would
see lockdep print an overflow warning.

But given that AFAICS we can eliminate the overhead associated with a
single lock by piggy-backing on the cache line containing
sighand->count, hopefully this won't be an issue any more.

Paul
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers