Quoting Matt Helsley (matthltc@xxxxxxxxxx):
> When lookup succeeds we don't need the "new" user struct which hasn't
> been linked into the uidhash. So we can immediately drop the lock and
> then free "new" rather than free it with the lock held.
>
> Signed-off-by: Matt Helsley <matthltc@xxxxxxxxxx>
> Cc: David Howells <dhowells@xxxxxxxxxx>
> Cc: Pavel Emelyanov <xemul@xxxxxxxxxxxxx>
> Cc: Alexey Dobriyan <adobriyan@xxxxxxxxx>
> Cc: "Serge E. Hallyn" <serge@xxxxxxxxxx>

Acked-by: Serge E. Hallyn <serge@xxxxxxxxxx>

And might I say that the label 'out_unlock' in that function is sadly
named :)

> Cc: containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> ---
>  kernel/user.c |   12 +++++++-----
>  1 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/user.c b/kernel/user.c
> index 5c598ca..4ea8e58 100644
> --- a/kernel/user.c
> +++ b/kernel/user.c
> @@ -157,16 +157,18 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid)
>  		 */
>  		spin_lock_irq(&uidhash_lock);
>  		up = uid_hash_find(uid, hashent);
> -		if (up) {
> +		if (!up) {
> +			uid_hash_insert(new, hashent);
> +			up = new;
> +		}
> +		spin_unlock_irq(&uidhash_lock);
> +
> +		if (up != new) {
>  			put_user_ns(ns);
>  			key_put(new->uid_keyring);
>  			key_put(new->session_keyring);
>  			kmem_cache_free(uid_cachep, new);
> -		} else {
> -			uid_hash_insert(new, hashent);
> -			up = new;
>  		}
> -		spin_unlock_irq(&uidhash_lock);
>  	}
>
>  	return up;
> --
> 1.6.3.3
>
> _______________________________________________
> Containers mailing list
> Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> https://lists.linux-foundation.org/mailman/listinfo/containers