The patch titled
     Subject: lib/group_cpus: optimize inner loop in grp_spread_init_one()
has been added to the -mm mm-nonmm-unstable branch.  Its filename is
     lib-group_cpus-optimize-inner-loop-in-grp_spread_init_one.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-group_cpus-optimize-inner-loop-in-grp_spread_init_one.patch

This patch will later appear in the mm-nonmm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yury Norov <yury.norov@xxxxxxxxx>
Subject: lib/group_cpus: optimize inner loop in grp_spread_init_one()
Date: Thu, 7 Dec 2023 12:38:57 -0800

The loop starts from the beginning every time we switch to the next
sibling mask.  This is Schlemiel the Painter's style of coding because we
know for sure that nmsk is clear up to the current CPU, and we can just
continue from the next CPU.

Also, we can do this more cleanly by leveraging the dedicated for_each()
iterator.

Link: https://lkml.kernel.org/r/20231207203900.859776-4-yury.norov@xxxxxxxxx
Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
Cc: Andy Shevchenko <andriy.shevchenko@xxxxxxxxxxxxxxx>
Cc: Ming Lei <ming.lei@xxxxxxxxxx>
Cc: Rasmus Villemoes <linux@xxxxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 lib/group_cpus.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/lib/group_cpus.c~lib-group_cpus-optimize-inner-loop-in-grp_spread_init_one
+++ a/lib/group_cpus.c
@@ -30,13 +30,13 @@ static void grp_spread_init_one(struct c
 
 		/* If the cpu has siblings, use them first */
 		siblmsk = topology_sibling_cpumask(cpu);
-		for (sibl = -1; cpus_per_grp > 0; ) {
-			sibl = cpumask_next(sibl, siblmsk);
-			if (sibl >= nr_cpu_ids)
-				break;
+		sibl = cpu + 1;
+
+		for_each_cpu_and_from(sibl, siblmsk, nmsk) {
 			__cpumask_clear_cpu(sibl, nmsk);
 			__cpumask_set_cpu(sibl, irqmsk);
-			cpus_per_grp--;
+			if (cpus_per_grp-- == 0)
+				return;
 		}
 	}
 }
_

Patches currently in -mm which might be from yury.norov@xxxxxxxxx are

cpumask-introduce-for_each_cpu_and_from.patch
lib-group_cpus-relax-atomicity-requirement-in-grp_spread_init_one.patch
lib-group_cpus-optimize-inner-loop-in-grp_spread_init_one.patch
lib-group_cpus-optimize-outer-loop-in-grp_spread_init_one.patch
lib-cgroup_cpusc-dont-zero-cpumasks-in-group_cpus_evenly-on-allocation.patch
lib-group_cpusc-drop-unneeded-cpumask_empty-call-in-__group_cpus_evenly.patch
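
For readers who want to see the scanning pattern in isolation, here is a
minimal, standalone C sketch of the idea behind the patch: instead of
rescanning the sibling mask from index -1 on every pass, the search resumes
from cpu + 1, so the combined walk over the sibling mask stays linear.  The
bool-array masks and the next_set_bit_and() helper are illustrative
stand-ins invented for this sketch only, not the kernel cpumask API or the
for_each_cpu_and_from() iterator, and the sketch uses a simple pre-decrement
termination check rather than mirroring the patch's exact condition.

/*
 * Standalone illustration (not kernel code): continue the sibling scan
 * from cpu + 1 instead of restarting it from the beginning each time.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* Return the first index >= start that is set in both masks. */
static int next_set_bit_and(const bool *a, const bool *b, int start)
{
	for (int i = start; i < NR_CPUS; i++)
		if (a[i] && b[i])
			return i;
	return NR_CPUS;		/* plays the role of "sibl >= nr_cpu_ids" */
}

int main(void)
{
	/* nmsk: CPUs still unassigned; siblmsk: siblings of CPU 0 */
	bool nmsk[NR_CPUS]    = { false, true, true, true, false, false, false, false };
	bool siblmsk[NR_CPUS] = { true,  true, true, true, false, false, false, false };
	bool irqmsk[NR_CPUS]  = { true,  false, false, false, false, false, false, false };
	int cpus_per_grp = 3, cpu = 0;

	/*
	 * Resume each lookup from the previous hit + 1 (starting at cpu + 1),
	 * so no position in the sibling mask is examined twice.
	 */
	for (int sibl = next_set_bit_and(siblmsk, nmsk, cpu + 1);
	     sibl < NR_CPUS;
	     sibl = next_set_bit_and(siblmsk, nmsk, sibl + 1)) {
		nmsk[sibl] = false;	/* take the CPU out of the pool */
		irqmsk[sibl] = true;	/* and add it to the group */
		if (--cpus_per_grp == 0)
			break;
	}

	for (int i = 0; i < NR_CPUS; i++)
		printf("cpu%d -> %s\n", i, irqmsk[i] ? "grp" : "-");
	return 0;
}

Built with e.g. "gcc -std=c99", it prints which CPUs end up in the group;
the point is only that next_set_bit_and() is always called with the last
position plus one, which is the same quadratic-to-linear change the patch
makes by replacing the cpumask_next()-from-(-1) loop with an iterator that
continues from the current CPU.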