On Wed, 26. Jun 18:51, Baoquan He wrote:
> On 06/26/24 at 06:03pm, Hailong Liu wrote:
> > On Wed, 26. Jun 11:15, Uladzislau Rezki wrote:
> > > On Wed, Jun 26, 2024 at 01:12:06PM +0800, Hailong Liu wrote:
> > > > On Tue, 25. Jun 22:05, Uladzislau Rezki wrote:
> > > > > > > > > > /**
> > > > > > > > > >  * cpumask_next - get the next cpu in a cpumask
> > > > > > > > > >  * @n: the cpu prior to the place to search (i.e. return will be > @n)
> > > > > > > > > >  * @srcp: the cpumask pointer
> > > > > > > > > >  *
> > > > > > > > > >  * Return: >= nr_cpu_ids if no further cpus set.
> > > > > > > > >
> > > > > > > > > Ah, I got what you mean. In the vbq case, it may not have a chance to get
> > > > > > > > > a return number as nr_cpu_ids. Because the hashed index limits the
> > > > > > > > > range to [0, nr_cpu_ids-1], and cpu_possible(index) will guarantee it
> > > > > > > > > won't be the highest cpu number [nr_cpu_ids-1], since CPU[nr_cpu_ids-1] must
> > > > > > > > > be a possible CPU.
> > > > > > > > >
> > > > > > > > > Do I miss some corner cases?
> > > > > > > > >
> > > > > > > > Right. We guarantee that the highest CPU is available by doing: % nr_cpu_ids.
> > > > > > > > So we do not need to use the *next_wrap() variant. You do not miss anything :)
> > > > > > > >
> > > > > > > > Hailong Liu has proposed a simpler version:
> > > > > > > >
> > > > > > > > <snip>
> > > > > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > > > > index 11fe5ea208aa..e1e63ffb9c57 100644
> > > > > > > > --- a/mm/vmalloc.c
> > > > > > > > +++ b/mm/vmalloc.c
> > > > > > > > @@ -1994,8 +1994,9 @@ static struct xarray *
> > > > > > > >  addr_to_vb_xa(unsigned long addr)
> > > > > > > >  {
> > > > > > > >  	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > > > > > > > +	int cpu = cpumask_nth(index, cpu_possible_mask);
> > > > > > > >
> > > > > > > > -	return &per_cpu(vmap_block_queue, index).vmap_blocks;
> > > > > > > > +	return &per_cpu(vmap_block_queue, cpu).vmap_blocks;
> > > > > > > > <snip>
> > > > > > > >
> > > > > > > > which just takes the next CPU if an index is not set in the cpu_possible_mask.
> > > > > > > >
> > > > > > > > The only thing that can be updated in the patch is to replace num_possible_cpus()
> > > > > > > > by nr_cpu_ids.
> > > > > > > >
> > > > > > > > Any thoughts? I think we need to fix it by a minor change so it is
> > > > > > > > easier to back-port to stable kernels.
> > > > > > >
> > > > > > > Yeah, sounds good since the regression commit is merged in v6.3.
> > > > > > > Please feel free to post this and the hash array patch separately for
> > > > > > > formal reviewing.
> > > > > > >
> > > > > > Agreed! The patch about the hash array I will post later.
> > > > > >
> > > > > > > By the way, when I am replying to this mail, I check the cpumask_nth()
> > > > > > > again. I doubt it may take more checking than cpu_possible(), given most
> > > > > > > systems don't have gaps in cpu_possible_mask. I could be dizzy at
> > > > > > > this moment.
> > > > > > >
> > > > > > > static inline unsigned int cpumask_nth(unsigned int cpu, const struct cpumask *srcp)
> > > > > > > {
> > > > > > > 	return find_nth_bit(cpumask_bits(srcp), small_cpumask_bits, cpumask_check(cpu));
> > > > > > > }
> > > > > > >
> > > > > > Yep, I do not think it is a big problem based on your noted fact.
> > > > > >
> > > > > Checked. There is a difference:
> > > > >
> > > > > 1. Default
> > > > >
> > > > > <snip>
> > > > > ...
> > > > > + 15.95%  6.05%  [kernel]  [k] __vmap_pages_range_noflush
> > > > > + 15.91%  1.74%  [kernel]  [k] addr_to_vb_xa      <---------------
> > > > > + 15.13% 12.05%  [kernel]  [k] vunmap_p4d_range
> > > > > + 14.17% 13.38%  [kernel]  [k] __find_nth_bit     <--------------
> > > > > + 10.62%  0.00%  [kernel]  [k] ret_from_fork_asm
> > > > > + 10.62%  0.00%  [kernel]  [k] ret_from_fork
> > > > > + 10.62%  0.00%  [kernel]  [k] kthread
> > > > > ...
> > > > > <snip>
> > > > >
> > > > > 2. Check if cpu_possible() and then fall back to cpumask_nth() if not
> > > > >
> > > > > <snip>
> > > > > ...
> > > > > + 6.84%  0.29%  [kernel]  [k] alloc_vmap_area
> > > > > + 6.80%  6.70%  [kernel]  [k] native_queued_spin_lock_slowpath
> > > > > + 4.24%  0.09%  [kernel]  [k] free_vmap_block
> > > > > + 2.41%  2.38%  [kernel]  [k] addr_to_vb_xa        <-----------
> > > > > + 1.94%  1.91%  [kernel]  [k] xas_start
> > > > > ...
> > > > > <snip>
> > > > >
> > > > > It is _worth_ checking if an index is in the possible mask:
> > > > >
> > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > index 45e1506d58c3..af20f78c2cbf 100644
> > > > > --- a/mm/vmalloc.c
> > > > > +++ b/mm/vmalloc.c
> > > > > @@ -2542,7 +2542,10 @@ static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
> > > > >  static struct xarray *
> > > > >  addr_to_vb_xa(unsigned long addr)
> > > > >  {
> > > > > -	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > > > > +	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;
> > > > IIUC, using nr_cpu_ids here may be incorrect.
> > > >
> > > > Take b101 as an example: nr_cpu_ids is 3. If index is 2, cpumask_nth(2, cpu_possible_mask)
> > > > might return 64.
> > > >
> > > But then a CPU2 becomes possible? Cutting by % nr_cpu_ids generates values < nr_cpu_ids.
> > > So, the last CPU is always possible and we never do cpumask_nth() on the last possible CPU.
> > >
> > > What am I missing here?
> > >
> > Sorry, I forgot to reply to all :). I wrote a demo to test as follows:
> >
> > static int cpumask_init(void)
> > {
> > 	struct cpumask mask;
> > 	unsigned int cpu_id;
> > 	unsigned int i = 0;
> >
> > 	cpumask_clear(&mask);
> >
> > 	cpumask_set_cpu(1, &mask);
> > 	cpumask_set_cpu(3, &mask);
> > 	cpumask_set_cpu(5, &mask);
> >
> > 	cpu_id = find_last_bit(cpumask_bits(&mask), NR_CPUS) + 1;
> > 	pr_info("cpu_id:%d\n", cpu_id);
> >
> > 	for (; i < nr_cpu_ids; i++) {
> > 		pr_info("%d: cpu_%d\n", i, cpumask_nth(i, &mask));
> > 	}
> >
> > 	return 0;
> > }
> >
> > [    1.337020][    T1] cpu_id:6
> > [    1.337338][    T1] 0: cpu_1
> > [    1.337558][    T1] 1: cpu_3
> > [    1.337751][    T1] 2: cpu_5
> > [    1.337960][    T1] 3: cpu_64
> > [    1.338183][    T1] 4: cpu_64
> > [    1.338387][    T1] 5: cpu_64
> > [    1.338594][    T1] 6: cpu_64
> >
> > In summary, nr_cpu_ids = last_bit + 1, and cpumask_nth() returns the nth cpu_id.
>
> I think just using the below change for a quick fix is enough. It doesn't
> have the issue cpumask_nth() has and is very simple. For most systems,
> it only adds an extra cpu_possible(index) check.
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 633363997dec..59a8951cc6c0 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2542,7 +2542,10 @@ static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
>  static struct xarray *
>  addr_to_vb_xa(unsigned long addr)
>  {
> -	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> +	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;
> +
> +	if (!cpu_possible(index))
> +		index = cpumask_next(index, cpu_possible_mask);
>
>  	return &per_cpu(vmap_block_queue, index).vmap_blocks;
>  }
>
Agreed! This is a very simple solution.
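
Just to make the mapping concrete, here is a quick user-space sketch (toy
code, not the kernel implementation; the b1000001 possible mask and the
CPU count are only assumptions for illustration) of how the hashed index is
mapped to a CPU with the cpu_possible()/cpumask_next() fallback above:

<snip>
#include <stdio.h>

/* Toy model: possible mask b1000001 -> only CPU0 and CPU6 are possible,
 * so nr_cpu_ids == 7 (highest possible CPU + 1). Not kernel code. */
#define NR_CPU_IDS	7
static const int possible[NR_CPU_IDS] = { 1, 0, 0, 0, 0, 0, 1 };

/* Mimics cpumask_next(): first possible CPU strictly after @n. */
static int next_possible(int n)
{
	int cpu;

	for (cpu = n + 1; cpu < NR_CPU_IDS; cpu++)
		if (possible[cpu])
			return cpu;
	return NR_CPU_IDS;	/* like >= nr_cpu_ids; never hit here */
}

int main(void)
{
	int index, cpu;

	/* index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids in the patch above */
	for (index = 0; index < NR_CPU_IDS; index++) {
		cpu = possible[index] ? index : next_possible(index);
		printf("index %d -> CPU%d\n", index, cpu);
	}
	return 0;
}
<snip>

With this mask, index 0 maps to CPU0 and indexes 1..6 all fall back to CPU6,
i.e. most vmap blocks end up in the xarray of the last possible CPU.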
If the cpumask is b1000001, distributing addresses across CPUs could
theoretically lead to the situation below, but it has not been encountered
in practice. I'm just pointing out the possibility here.

CPU_0  CPU_6  CPU_6  CPU_6  CPU_6  CPU_6
  |      |      |      |      |      |
  V      V      V      V      V      V
  0      10     20     30     40     50     60
  |------|------|------|------|------|------|..

Thanks again for your reply, I learned a lot.

--
help you, help me,
Hailong.