On Wed, Aug 18, 2010 at 04:05:39PM +0200, Andi Kleen wrote:
> Nick Piggin <npiggin@xxxxxxxxx> writes:
>
> BTW, one way to make the slow path faster would be to start sharing
> per-cpu locks inside a core, on SMT at least. Threads on the same core
> have the same caches, and sharing cache lines between them is free.
> That would cut it in half on a 2x HT system.

Yes, it's possible. The brlock code is encapsulated, so you could
experiment. One problem is that the vfsmount lock gets held for read
for a relatively long time in the store-free path walk patches, so you
could get multiple threads contending on it.

> > -
> >  static int event;
> >  static DEFINE_IDA(mnt_id_ida);
> >  static DEFINE_IDA(mnt_group_ida);
> > +static DEFINE_SPINLOCK(mnt_id_lock);
>
> Can you add a scope comment to that lock?

It protects mnt_id_ida; I should have commented that explicitly. I'll
put a patch to do that at the head of my next queue to submit. Thanks
for reviewing.

> > @@ -623,39 +653,43 @@ static inline void __mntput(struct vfsmo
> >  void mntput_no_expire(struct vfsmount *mnt)
> >  {
> >  repeat:
> > -	if (atomic_dec_and_lock(&mnt->mnt_count, &vfsmount_lock)) {
> > -		if (likely(!mnt->mnt_pinned)) {
> > -			spin_unlock(&vfsmount_lock);
> > -			__mntput(mnt);
> > -			return;
> > -		}
> > -		atomic_add(mnt->mnt_pinned + 1, &mnt->mnt_count);
> > -		mnt->mnt_pinned = 0;
> > -		spin_unlock(&vfsmount_lock);
> > -		acct_auto_close_mnt(mnt);
> > -		goto repeat;
> > +	if (atomic_add_unless(&mnt->mnt_count, -1, 1))
> > +		return;
>
> Hmm, that's an unrelated change?

It's because we don't have atomic_dec_and_br_lock()...

> The rest looks all good and quite straightforward.
>
> Reviewed-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>

Thanks,
Nick
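
P.S. For anyone who wants to experiment with the SMT idea, here is the
brlock pattern in miniature as a self-contained userspace sketch. This
is illustrative only, not the kernel code; NSLOTS and all names are
made up. Readers take just their own slot's lock; a writer takes every
slot in a fixed order. Sharing a lock between hyperthread siblings then
amounts to deriving the slot index from the core rather than the cpu.

/*
 * Toy userspace model of a brlock: the read side touches only one
 * (normally cpu-local) lock, so uncontended reads stay cache-local;
 * the write side acquires all slots in ascending order, which keeps
 * writers from deadlocking against each other.
 */
#include <pthread.h>

#define NSLOTS 8	/* one per cpu, or one per core if siblings share */

static pthread_mutex_t slot[NSLOTS] = {
	[0 ... NSLOTS - 1] = PTHREAD_MUTEX_INITIALIZER	/* GNU extension */
};

static void demo_read_lock(int s)    { pthread_mutex_lock(&slot[s]); }
static void demo_read_unlock(int s)  { pthread_mutex_unlock(&slot[s]); }

static void demo_write_lock(void)
{
	for (int i = 0; i < NSLOTS; i++)
		pthread_mutex_lock(&slot[i]);
}

static void demo_write_unlock(void)
{
	for (int i = NSLOTS - 1; i >= 0; i--)
		pthread_mutex_unlock(&slot[i]);
}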
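
Re the scope comment: what I have in mind is just a one-liner above the
definition, something like the following (final wording may differ):

	/* protects mnt_id_ida */
	static DEFINE_SPINLOCK(mnt_id_lock);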
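
And to expand on the atomic_add_unless() answer: since there is no
atomic_dec_and_br_lock(), the fast path can only drop a reference when
it is provably not the last one (count > 1); a possibly-final put falls
back to taking the whole brlock for write and redoing the decrement
under it. Roughly this shape (simplified sketch, not the exact patch):

void mntput_no_expire(struct vfsmount *mnt)
{
repeat:
	/* Fast path: lock-free decrement, valid only while this
	 * cannot be the final reference. */
	if (atomic_add_unless(&mnt->mnt_count, -1, 1))
		return;

	/* Possibly the last reference: serialize against everyone
	 * by taking the brlock for write, then recheck. */
	br_write_lock(vfsmount_lock);
	if (!atomic_dec_and_test(&mnt->mnt_count)) {
		br_write_unlock(vfsmount_lock);
		return;
	}
	if (likely(!mnt->mnt_pinned)) {
		br_write_unlock(vfsmount_lock);
		__mntput(mnt);
		return;
	}
	atomic_add(mnt->mnt_pinned + 1, &mnt->mnt_count);
	mnt->mnt_pinned = 0;
	br_write_unlock(vfsmount_lock);
	acct_auto_close_mnt(mnt);
	goto repeat;
}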