On Thu, Jan 31, 2019 at 10:34:17AM +0100, Michal Privoznik wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1503284
>
> The way we currently start qemu, from a CPU affinity POV, is as
> follows:
>
>   1) the child process has its affinity set to all online CPUs
>      (unless some vcpu pinning was given in the domain XML)
>
>   2) once qemu is running, the cpuset cgroup is configured taking
>      memory pinning into account
>
> The problem is that we let qemu allocate its memory just anywhere
> in 1) and then rely on 2) being able to move the memory to the
> configured NUMA nodes. This might not always be possible (e.g.
> qemu might lock some parts of its memory) and is very suboptimal
> (copying large amounts of memory between NUMA nodes takes a
> significant amount of time).
>
> The solution is to set affinity to one of (in priority order):
>  - The CPUs associated with the NUMA memory affinity mask
>  - The CPUs associated with emulator pinning
>  - All online host CPUs
>
> Later (once QEMU has allocated its memory) we change this again
> to (again in priority order):
>  - The CPUs associated with emulator pinning
>  - The CPUs returned by numad
>  - The CPUs associated with vCPU pinning
>  - All online host CPUs
>
> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
> ---
>
> diff to v1 (both points suggested by Dan):
> - Expanded the commit message
> - Fixed qemuProcessGetAllCpuAffinity so that it returns an online
>   CPU map only
>
>  src/qemu/qemu_process.c | 132 +++++++++++++++++++---------------------
>  1 file changed, 63 insertions(+), 69 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@xxxxxxxxxx>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org       -o-          https://fstop138.berrange.com     :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
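
For illustration, here is a minimal standalone C sketch of the two-phase,
priority-ordered mask selection the commit message describes. All names,
helpers, and mask values below (pick_first_mask, the pre/post arrays, the
example bitmasks) are hypothetical stand-ins for exposition, not the actual
qemu_process.c code or the libvirt virBitmap API:

/*
 * Sketch only: model "pick the first applicable CPU mask, in priority
 * order, falling back to all online host CPUs" with plain bitmasks.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Return the first non-empty candidate mask, else all online CPUs. */
static uint64_t
pick_first_mask(const uint64_t *candidates, size_t n, uint64_t all_online)
{
    for (size_t i = 0; i < n; i++) {
        if (candidates[i] != 0)
            return candidates[i];
    }
    return all_online;
}

int main(void)
{
    uint64_t all_online = 0xff;   /* pretend the host has 8 online CPUs */
    uint64_t numa_mask = 0x0f;    /* CPUs of the configured NUMA nodes  */
    uint64_t emulator_pin = 0;    /* no <emulatorpin> in this example   */
    uint64_t numad_mask = 0;      /* numad not consulted here           */
    uint64_t vcpu_pin = 0x03;     /* <vcpupin> covers CPUs 0-1          */

    /* Phase 1: before QEMU allocates its memory, so allocations land
     * on the right NUMA nodes in the first place. */
    uint64_t pre[] = { numa_mask, emulator_pin };
    uint64_t pre_mask = pick_first_mask(pre, 2, all_online);

    /* Phase 2: after memory allocation, switch to the runtime policy. */
    uint64_t post[] = { emulator_pin, numad_mask, vcpu_pin };
    uint64_t post_mask = pick_first_mask(post, 3, all_online);

    printf("pre-allocation affinity:  0x%llx\n",
           (unsigned long long)pre_mask);   /* prints 0xf  */
    printf("post-allocation affinity: 0x%llx\n",
           (unsigned long long)post_mask);  /* prints 0x3  */
    return 0;
}

With these example masks, phase 1 selects the NUMA mask (0xf) so the
initial allocations happen on the right nodes, and phase 2 narrows to the
vCPU pinning (0x3) once the memory is already placed, which is exactly why
doing the placement before allocation avoids the costly cross-node copy.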