On Mon, Feb 16, 2009 at 09:29:00AM +0800, Li Zefan wrote:
> >> 		struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu);
> >> 		spin_lock(&cpu_writer->lock);
> >> 		if (cpu_writer->mnt != mnt) {
> >> 			spin_unlock(&cpu_writer->lock);
> >> 			continue;
> >> 		}
> >> prevents the problem, OK?
> >>
> >
> > Sure, I'll try. :)
> >
>
> Not a single warning for the whole weekend, so I think above change works.

OK...  So here's what we really want:
	* we know that nobody will set cpu_writer->mnt to mnt from now on
	* all changes to that sucker are done under cpu_writer->lock
	* we want the laziest equivalent of

	spin_lock(&cpu_writer->lock);
	if (likely(cpu_writer->mnt != mnt)) {
		spin_unlock(&cpu_writer->lock);
		continue;
	}
	/* do stuff */

that would make sure we won't miss earlier setting of ->mnt done by
another CPU.

Anyway, for now (HEAD and all -stable starting with 2.6.26) we want this:

--- fs/namespace.c	2009-01-25 21:45:31.000000000 -0500
+++ fs/namespace.c	2009-02-15 21:31:14.000000000 -0500
@@ -614,9 +614,11 @@
 	 */
 	for_each_possible_cpu(cpu) {
 		struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu);
-		if (cpu_writer->mnt != mnt)
-			continue;
 		spin_lock(&cpu_writer->lock);
+		if (cpu_writer->mnt != mnt) {
+			spin_unlock(&cpu_writer->lock);
+			continue;
+		}
 		atomic_add(cpu_writer->count, &mnt->__mnt_writers);
 		cpu_writer->count = 0;
 		/*

_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers