On Wed, Jun 07, 2023 at 04:40:59PM +0200, Michal Privoznik wrote:
> In an ideal world my plan was perfect: we allow the union of all host
> nodes in cpuset.mems and, once QEMU has allocated its memory, we 'fix
> up' the restriction of its emulator thread by writing the original
> value we wanted to set all along. But in fact we can't do that,
> because the write triggers memory movement. For instance, consider
> the following <numatune/>:
>
>   <numatune>
>     <memory mode="strict" nodeset="0"/>
>     <memnode cellid="1" mode="strict" nodeset="1"/>
>   </numatune>
>
>   <numa>
>     <cell id="0" cpus="0-1" memory="1024000" unit="KiB"/>
>     <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
>   </numa>
>
> This is meant to create a 1:1 mapping between guest and host NUMA
> nodes. So we start QEMU with cpuset.mems set to "0-1" (so that it can
> allocate memory even for guest node #1 and have that memory come from
> host node #1) and then set cpuset.mems to "0" (because that is where
> we wanted the emulator thread to live). But that in turn moves all of
> the memory (even the already allocated part) to host NUMA node #0.
>
> Therefore, we have to leave cpuset.mems untouched and rely on the
> .host-nodes property passed on the QEMU command line. Placement still
> suffers because of the cpuset.mems set for vCPUs and iothreads, but
> that is fixed in the next commit.
>
> Fixes: 3ec6d586bc3ec7a8cf406b1b6363e87d50aa159c
>
> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
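For anyone following along, a rough sketch of the per-node binding the
commit relies on instead of the cgroup fix-up (not part of the patch;
the object IDs and exact syntax are only illustrative of what libvirt
generates for the XML above, with sizes in bytes):

  -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":1048576000,"host-nodes":[0],"policy":"bind"}' \
  -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \
  -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":1073741824,"host-nodes":[1],"policy":"bind"}' \
  -numa node,nodeid=1,cpus=2-3,memdev=ram-node1

while the emulator thread's cpuset.mems is left at the union "0-1";
writing the narrower "0" after allocation is what would migrate the
already placed pages, as described above.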
Reviewed-by: Martin Kletzander <mkletzan@xxxxxxxxxx>