Dmitry Vyukov <dvyukov@xxxxxxxxxx> writes:

>
> Nice. I think it should work.
>
> I would also do:
>
> diff --git a/kernel/ucount.c b/kernel/ucount.c
> index 8a11fc0cb459..233c8e46acd5 100644
> --- a/kernel/ucount.c
> +++ b/kernel/ucount.c
> @@ -143,19 +143,18 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
>
>  		new->ns = ns;
>  		new->uid = uid;
> -		atomic_set(&new->count, 0);
> +		atomic_set(&new->count, 1);
>
>  		spin_lock_irq(&ucounts_lock);
>  		ucounts = find_ucounts(ns, uid, hashent);
>  		if (ucounts) {
> +			atomic_inc(&ucounts->count);
>  			kfree(new);
>  		} else {
>  			hlist_add_head(&new->node, hashent);
>  			ucounts = new;
>  		}
>  	}
> -	if (!atomic_add_unless(&ucounts->count, 1, INT_MAX))
> -		ucounts = NULL;
>  	spin_unlock_irq(&ucounts_lock);
>  	return ucounts;
> }

No.  As that allows ucounts->count to be incremented to the point it
goes negative.  Counter wrap-around is just as bad as imperfect locking
if you can trigger it, and has been a cause of use-after-free errors
etc.

So it is a feature that if the count is maxed out for a given kuid
that get_ucounts will fail.

Eric
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers