On Tue, Dec 17, 2024 at 01:35:09PM +0100, Miklos Szeredi wrote:
> On Tue, 17 Dec 2024 at 13:23, Christian Brauner <brauner@xxxxxxxxxx> wrote:
> > @@ -270,18 +270,19 @@ static inline struct hlist_head *mp_hash(struct dentry *dentry)
> >
> >  static int mnt_alloc_id(struct mount *mnt)
> >  {
> > -        int res = ida_alloc(&mnt_id_ida, GFP_KERNEL);
> > +        int res;
> >
> > -        if (res < 0)
> > -                return res;
> > -        mnt->mnt_id = res;
> > -        mnt->mnt_id_unique = atomic64_inc_return(&mnt_id_ctr);
> > +        xa_lock(&mnt_id_xa);
> > +        res = __xa_alloc(&mnt_id_xa, &mnt->mnt_id, mnt, XA_LIMIT(1, INT_MAX), GFP_KERNEL);
>
> This uses a different allocation strategy, right? That would be a
> user visible change, which is somewhat risky.

Maybe, but afaict, xa_alloc() just uses the first available key, similar
to ida_alloc(). A while ago I even asked Karel whether he would mind
allocating the old mount id cyclically via ida_alloc_cyclic(), and he
said he wouldn't care and it wouldn't matter (to him at least). I doubt
that userspace expects mount ids to come in any specific sequence; a
long time ago we even did cyclic allocation and then switched to
non-cyclic allocation.

Right now, if I mount and unmount immediately afterwards and no one
managed to get their mount in between, I get the same id assigned.
That's true of xa_alloc() as well in my testing. So I think we can just
risk it.

> > +        if (!res)
> > +                mnt->mnt_id_unique = ++mnt_id_ctr;
> > +        xa_unlock(&mnt_id_xa);
> >          return 0;
>
> return res;

Bah, thanks. Fixed.
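FWIW, with that folded in, the function should end up looking like this
(reconstructed from the quoted hunks, so not necessarily the exact
committed version):

static int mnt_alloc_id(struct mount *mnt)
{
        int res;

        xa_lock(&mnt_id_xa);
        res = __xa_alloc(&mnt_id_xa, &mnt->mnt_id, mnt,
                         XA_LIMIT(1, INT_MAX), GFP_KERNEL);
        if (!res)
                mnt->mnt_id_unique = ++mnt_id_ctr;
        xa_unlock(&mnt_id_xa);
        /* Propagate the __xa_alloc() error instead of always returning 0. */
        return res;
}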
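And to make the allocation-strategy point above concrete, here's a toy
userspace model (not kernel code; all names are made up for
illustration) contrasting the "lowest free id" behaviour of ida_alloc()
and xa_alloc() with the cyclic behaviour of ida_alloc_cyclic():

#include <stdbool.h>
#include <stdio.h>

#define MAX_IDS 8

static bool used[MAX_IDS];
static int next_hint;        /* where the cyclic allocator resumes */

/* Lowest-free: always hand out the smallest unused id. */
static int alloc_lowest(void)
{
        for (int id = 0; id < MAX_IDS; id++) {
                if (!used[id]) {
                        used[id] = true;
                        return id;
                }
        }
        return -1;
}

/* Cyclic: start searching after the last id handed out. */
static int alloc_cyclic(void)
{
        for (int i = 0; i < MAX_IDS; i++) {
                int id = (next_hint + i) % MAX_IDS;
                if (!used[id]) {
                        used[id] = true;
                        next_hint = id + 1;
                        return id;
                }
        }
        return -1;
}

static void release(int id)
{
        used[id] = false;
}

int main(void)
{
        /* mount, unmount, mount again: lowest-free reuses the id */
        int a = alloc_lowest();
        release(a);
        int b = alloc_lowest();
        printf("lowest-free: %d then %d\n", a, b);   /* 0 then 0 */

        /* the cyclic allocator moves on to the next id instead */
        int c = alloc_cyclic();
        release(c);
        int d = alloc_cyclic();
        printf("cyclic:      %d then %d\n", c, d);
        return 0;
}

The lowest-free run prints the same id twice, which is exactly the
mount/unmount/mount behaviour described above; the cyclic run advances
to the next id instead. Since ida_alloc() and xa_alloc() both behave
like the lowest-free variant, the strategy shouldn't actually change.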