On some NUMA platforms the CPU indexes in each NUMA node grow sequentially,
while on other platforms they can be non-consecutive, e.g.:

% numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 131058 MB
node 0 free: 86531 MB
node 1 cpus: 1 5 9 13 17 21 25 29
node 1 size: 131072 MB
node 1 free: 127070 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 131072 MB
node 2 free: 127758 MB
node 3 cpus: 3 7 11 15 19 23 27 31
node 3 size: 131072 MB
node 3 free: 127226 MB
node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10

This patch fixes the problem by using the CPU indexes recorded in
caps->host.numaCell[i]->cpus[j] to set the bitmask, instead of assuming
that the CPU indexes of the NUMA nodes are always sequential.
---
 src/qemu/qemu_process.c |   11 ++---------
 1 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 19bb22a..58ba5bf 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1825,20 +1825,13 @@ qemuProcessInitCpuAffinity(struct qemud_driver *driver,
     if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) {
         VIR_DEBUG("Set CPU affinity with advisory nodeset from numad");
         /* numad returns the NUMA node list, convert it to cpumap */
-        int prev_total_ncpus = 0;
         for (i = 0; i < driver->caps->host.nnumaCell; i++) {
             int j;
             int cur_ncpus = driver->caps->host.numaCell[i]->ncpus;
             if (nodemask[i]) {
-                for (j = prev_total_ncpus;
-                     j < cur_ncpus + prev_total_ncpus &&
-                     j < maxcpu &&
-                     j < VIR_DOMAIN_CPUMASK_LEN;
-                     j++) {
-                    VIR_USE_CPU(cpumap, j);
-                }
+                for (j = 0; j < cur_ncpus; j++)
+                    VIR_USE_CPU(cpumap, driver->caps->host.numaCell[i]->cpus[j]);
             }
-            prev_total_ncpus += cur_ncpus;
         }
     } else {
         VIR_DEBUG("Set CPU affinity with specified cpuset");
--
1.7.7.3
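
For illustration only, not part of the patch: a minimal standalone sketch of the
node-list-to-cpumap conversion the new loop performs. It uses hypothetical
arrays that mirror the numactl output above instead of the real driver->caps
structures, to show why indexing by the node's own cpus[] list handles
interleaved CPU numbering where cumulative counting with prev_total_ncpus
did not.

/* Sketch only: toy topology matching the numactl output above. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_CPUS 32

int main(void)
{
    /* CPU indexes per NUMA node on the example machine. */
    static const int node_cpus[4][8] = {
        { 0, 4,  8, 12, 16, 20, 24, 28 },   /* node 0 */
        { 1, 5,  9, 13, 17, 21, 25, 29 },   /* node 1 */
        { 2, 6, 10, 14, 18, 22, 26, 30 },   /* node 2 */
        { 3, 7, 11, 15, 19, 23, 27, 31 },   /* node 3 */
    };
    /* Advisory nodeset, e.g. as returned by numad: use nodes 1 and 3. */
    const bool nodemask[4] = { false, true, false, true };

    bool cpumap[MAX_CPUS] = { false };

    /* For every selected node, mark each of its CPUs by its real index
     * instead of assuming node i owns a contiguous block of indexes. */
    for (int i = 0; i < 4; i++) {
        if (!nodemask[i])
            continue;
        for (int j = 0; j < 8; j++)
            cpumap[node_cpus[i][j]] = true;
    }

    printf("CPUs pinned:");
    for (int c = 0; c < MAX_CPUS; c++)
        if (cpumap[c])
            printf(" %d", c);
    printf("\n");    /* prints 1 3 5 ... 29 31 */
    return 0;
}

On this topology, pinning to nodes 1 and 3 selects CPUs 1 3 5 ... 31. The old
cumulative counting would instead have marked CPUs 8-15 and 24-31, which on
this machine are spread across all four nodes.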