On Thu, Mar 23, 2023 at 02:12:53PM -0700, Andrew Morton wrote:
> On Thu, 23 Mar 2023 20:21:11 +0100 "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx> wrote:
>
> > A global vmap_blocks xarray can be contended under heavy usage
> > of the vm_map_ram()/vm_unmap_ram() APIs. lock_stat shows that
> > the "vmap_blocks.xa_lock" lock is second in the top list of
> > contention points:
> >
> > ...
> >
> > This patch does not fix the vmap_area_lock/free_vmap_area_lock and
> > purge_vmap_area_lock bottlenecks; that is a separate rework.
> >
> > ...
> >
> > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
> >
> > ...
> >
> > +static struct vmap_block_queue *
> > +addr_to_vbq(unsigned long addr)
> > +{
> > +	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > +
> > +	return &per_cpu(vmap_block_queue, cpu);
> > +}
>
> Seems strange.  vmap_block_queue is not a per-cpu thing in this usage.
> Instead it's a hash table, indexed off the (hashed) address, not off
> smp_processor_id().
>
> Yet in other places, vmap_block_queue *is* used in the conventional
> cpu-local fashion.
>
> So we can have CPU A using the cpu-local entry in vmap_block_queue
> while CPU B is simultaneously using it, having looked it up via `addr'.
>
> AFAICT this all works OK, no races.
>
> But still, what it's doing is mixing an addr-indexed hashtable with the
> CPU-indexed array in surprising ways.  It would be clearer to make the
> vmap_blocks array a separate thing from the per-cpu array, although it
> would presumably use a bit more memory.
>
> Can we please at least get a big fat comment in an appropriate place
> which explains all this to the reader?
>
Yep, I will send out a v2 with all the explanation. Indeed, I have to
add a detailed comment.

Thanks!

--
Uladzislau Rezki
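
To make the addressing scheme discussed above concrete, below is a small
userspace sketch (not the kernel code) of the idea behind addr_to_vbq():
the same array that holds per-CPU queues is also treated as a hash table
indexed by (addr / VMAP_BLOCK_SIZE) % num_possible_cpus(). The names
NR_CPUS_SIM, VMAP_BLOCK_SIZE value, struct vmap_block_queue_sim and
addr_to_vbq_sim() are illustrative stand-ins, not kernel identifiers.

/*
 * Userspace sketch of the hashing scheme from the quoted patch.
 * It only demonstrates that the bucket chosen for a given address is
 * a pure function of that address, so any CPU freeing a mapping can
 * find the bucket another CPU used when it stored the block there,
 * which is why the two indexing styles can coexist without races.
 */
#include <stdio.h>

#define NR_CPUS_SIM     8UL             /* stand-in for num_possible_cpus() */
#define VMAP_BLOCK_SIZE (64UL * 4096)   /* illustrative block size only     */

struct vmap_block_queue_sim {
	/* the real struct carries a spinlock, a free list and an xarray */
	int id;
};

static struct vmap_block_queue_sim queues[NR_CPUS_SIM];

/* Mirror of addr_to_vbq(): address -> bucket, independent of the caller's CPU. */
static struct vmap_block_queue_sim *addr_to_vbq_sim(unsigned long addr)
{
	return &queues[(addr / VMAP_BLOCK_SIZE) % NR_CPUS_SIM];
}

int main(void)
{
	unsigned long addr = 0xc9000000UL;	/* arbitrary example address */
	unsigned long i;

	for (i = 0; i < NR_CPUS_SIM; i++)
		queues[i].id = (int)i;

	/* The same address always hashes to the same bucket... */
	printf("addr %#lx -> bucket %d\n", addr, addr_to_vbq_sim(addr)->id);
	/* ...while a different block lands in a different bucket. */
	printf("addr %#lx -> bucket %d\n", addr + VMAP_BLOCK_SIZE,
	       addr_to_vbq_sim(addr + VMAP_BLOCK_SIZE)->id);
	return 0;
}

The cpu-local path (raw smp_processor_id()-style access) and the
addr-hashed path may well pick different entries of the same array for
unrelated operations; what matters, as noted above, is that lookups by
address are deterministic, so per-bucket locking still covers each block
consistently.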