On Thu, Apr 20, 2023 at 11:27:26AM +0300, Tariq Toukan wrote:
> I like this clean API. Thanks :)
>
> nit:
> Previously cpu_online_mask was used here. Is this change intentional?
> We can fix it in a followup patch if this is the only comment on the
> series.
>
> Reviewed-by: Tariq Toukan <tariqt@xxxxxxxxxx>

The only CPUs listed in the sched_domains_numa_masks are 'available',
i.e. online CPUs. for_each_numa_cpu() ANDs the user-provided cpumask
with the mask associated with the hop, which means that if we AND with
the possible mask, we'll eventually walk online CPUs only.

To make sure, I experimented with the modified test:

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 6becb044a66f..c8d557731080 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -760,8 +760,13 @@ static void __init test_for_each_numa(void)
 		unsigned int hop, c = 0;
 
 		rcu_read_lock();
-		for_each_numa_cpu(cpu, hop, node, cpu_online_mask)
+		pr_err("Node %d:\t", node);
+		for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
 			expect_eq_uint(cpumask_local_spread(c++, node), cpu);
+			pr_cont("%3d", cpu);
+
+		}
+		pr_err("\n");
 		rcu_read_unlock();
 	}
 }

This is the NUMA topology of my test machine after the boot:

root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3
node 0 size: 1861 MB
node 0 free: 1792 MB
node 1 cpus: 4 5
node 1 size: 1914 MB
node 1 free: 1823 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1915 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7862 MB
node 3 free: 7259 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

And this is what the test prints:

root@debian:~# insmod test_bitmap.ko
test_bitmap: loaded.
test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 472
test_bitmap: bitmap_print_to_pagebuf: input is '0-32767 ', Time: 2665
test_bitmap: Node 0:	  0  1  2  3  6  7  4  5  8  9 10 11 12 13 14 15
test_bitmap:
test_bitmap: Node 1:	  4  5  8  9 10 11 12 13 14 15  0  1  2  3  6  7
test_bitmap:
test_bitmap: Node 2:	  6  7  0  1  2  3  8  9 10 11 12 13 14 15  4  5
test_bitmap:
test_bitmap: Node 3:	  8  9 10 11 12 13 14 15  4  5  6  7  0  1  2  3
test_bitmap:
test_bitmap: all 6614 tests passed

Now, disable a couple of CPUs:

root@debian:~# chcpu -d 1-2
smpboot: CPU 1 is now offline
CPU 1 disabled
smpboot: CPU 2 is now offline
CPU 2 disabled

And try again:

root@debian:~# rmmod test_bitmap
rmmod: ERROR: ../libkmod/libkmod[  320.275904] test_bitmap: unloaded.
root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 3
node 0 size: 1861 MB
node 0 free: 1792 MB
node 1 cpus: 4 5
node 1 size: 1914 MB
node 1 free: 1823 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1915 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7862 MB
node 3 free: 7259 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

root@debian:~# insmod test_bitmap.ko
test_bitmap: loaded.
test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 491
test_bitmap: bitmap_print_to_pagebuf: input is '0-32767 ', Time: 2174
test_bitmap: Node 0:	  0  3  6  7  4  5  8  9 10 11 12 13 14 15
test_bitmap:
test_bitmap: Node 1:	  4  5  8  9 10 11 12 13 14 15  0  3  6  7
test_bitmap:
test_bitmap: Node 2:	  6  7  0  3  8  9 10 11 12 13 14 15  4  5
test_bitmap:
test_bitmap: Node 3:	  8  9 10 11 12 13 14 15  4  5  6  7  0  3
test_bitmap:
test_bitmap: all 6606 tests passed

I used cpu_possible_mask because I wanted to keep the patch consistent:
before, we traversed the NUMA hop masks; now we traverse the same hop
masks ANDed with a user-provided mask, so the latter should include all
possible CPUs.
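To illustrate the point, here is a minimal, hypothetical caller sketch
(not the actual mlx5 change; the helper name and parameters are made up
for the example). It collects the CPUs nearest to a node, and it walks
exactly the same CPUs whether it passes cpu_possible_mask or
cpu_online_mask, because the AND with the hop masks drops offline CPUs
either way:

/*
 * Hypothetical example: collect up to @nvec CPUs in order of
 * increasing NUMA distance from @node, e.g. to spread IRQ affinity
 * hints. Passing cpu_possible_mask instead of cpu_online_mask makes
 * no difference, because the hop masks that the iterator ANDs against
 * contain online CPUs only.
 */
static unsigned int collect_nearest_cpus(int node, unsigned int nvec,
					 unsigned int *out)
{
	unsigned int cpu, hop, n = 0;

	rcu_read_lock();
	for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
		if (n >= nvec)
			break;
		out[n++] = cpu;
	}
	rcu_read_unlock();

	return n;
}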
If you think it's better to have cpu_online_mask in the driver, let's
do it in a separate patch?

Thanks,
Yury