On Thu, Aug 15, 2024 at 08:54:02AM -0700, Shakeel Butt wrote:
> At the moment memcg IDs are managed through IDR which requires external
> synchronization mechanisms and makes the allocation code a bit awkward.
> Let's switch to xarray and make the code simpler.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Reviewed-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Reviewed-by: Muchun Song <muchun.song@xxxxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
> Changes since v1:
> - Fix error path in mem_cgroup_alloc (Dan Carpenter)
>
>  mm/memcontrol.c | 39 ++++++++++-----------------------------
>  1 file changed, 10 insertions(+), 29 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index df84683a0e1c..e8e03a5e1e5e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3408,29 +3408,12 @@ static void memcg_wb_domain_size_changed(struct mem_cgroup *memcg)
>   */
>
>  #define MEM_CGROUP_ID_MAX	((1UL << MEM_CGROUP_ID_SHIFT) - 1)
> -static DEFINE_IDR(mem_cgroup_idr);
> -static DEFINE_SPINLOCK(memcg_idr_lock);
> -
> -static int mem_cgroup_alloc_id(void)
> -{
> -	int ret;
> -
> -	idr_preload(GFP_KERNEL);
> -	spin_lock(&memcg_idr_lock);
> -	ret = idr_alloc(&mem_cgroup_idr, NULL, 1, MEM_CGROUP_ID_MAX + 1,
> -			GFP_NOWAIT);
> -	spin_unlock(&memcg_idr_lock);
> -	idr_preload_end();
> -	return ret;
> -}
> +static DEFINE_XARRAY_ALLOC1(mem_cgroup_ids);
>
>  static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
>  {
>  	if (memcg->id.id > 0) {
> -		spin_lock(&memcg_idr_lock);
> -		idr_remove(&mem_cgroup_idr, memcg->id.id);
> -		spin_unlock(&memcg_idr_lock);
> -
> +		xa_erase(&mem_cgroup_ids, memcg->id.id);
>  		memcg->id.id = 0;
>  	}
>  }
> @@ -3465,7 +3448,7 @@ static inline void mem_cgroup_id_put(struct mem_cgroup *memcg)
>  struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
>  {
>  	WARN_ON_ONCE(!rcu_read_lock_held());
> -	return idr_find(&mem_cgroup_idr, id);
> +	return xa_load(&mem_cgroup_ids, id);
>  }
>
>  #ifdef CONFIG_SHRINKER_DEBUG
> @@ -3558,17 +3541,17 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
>  	struct mem_cgroup *memcg;
>  	int node, cpu;
>  	int __maybe_unused i;
> -	long error = -ENOMEM;
> +	long error;
>
>  	memcg = kzalloc(struct_size(memcg, nodeinfo, nr_node_ids), GFP_KERNEL);
>  	if (!memcg)
> -		return ERR_PTR(error);
> +		return ERR_PTR(-ENOMEM);
>
> -	memcg->id.id = mem_cgroup_alloc_id();
> -	if (memcg->id.id < 0) {
> -		error = memcg->id.id;
> +	error = xa_alloc(&mem_cgroup_ids, &memcg->id.id, NULL,
> +			 XA_LIMIT(1, MEM_CGROUP_ID_MAX), GFP_KERNEL);
> +	if (error)
>  		goto fail;
> -	}
> +	error = -ENOMEM;

There is another subtle change here: xa_alloc() returns -EBUSY when the
ID space is exhausted, while the old code returned -ENOSPC. It's unlikely
to be a big practical problem.
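
If preserving the -ENOSPC semantics for callers ever mattered, the
translation could be done in a small wrapper around the allocation. A
minimal sketch, not part of the patch, with a hypothetical helper name:

/*
 * Hypothetical helper, for illustration only: xa_alloc() reports an
 * exhausted allocation range as -EBUSY, whereas the old idr_alloc()
 * path returned -ENOSPC. Map the error back so callers keep seeing
 * the historical value.
 */
static int mem_cgroup_alloc_id(struct mem_cgroup *memcg)
{
	int err = xa_alloc(&mem_cgroup_ids, &memcg->id.id, NULL,
			   XA_LIMIT(1, MEM_CGROUP_ID_MAX), GFP_KERNEL);

	return err == -EBUSY ? -ENOSPC : err;
}

Keeping the mapping in one helper would also avoid sprinkling an -EBUSY
special case across callers, but as noted above it is probably not worth
the churn.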