* Alexander Graf (agraf@xxxxxxx) wrote:
> On 29.10.2011, at 20:45, Bharata B Rao wrote:
> > As guests become NUMA aware, it becomes important for the guests to
> > have correct NUMA policies when they run on NUMA aware hosts.
> > Currently, limited support for NUMA binding is available via libvirt,
> > where it is possible to apply a NUMA policy to the guest as a whole.
> > However, multinode guests would benefit if guest memory belonging to
> > different guest nodes is mapped appropriately to different host NUMA nodes.
> >
> > To achieve this we would need QEMU to expose information about
> > guest RAM ranges (Guest Physical Address - GPA) and their host virtual
> > address mappings (Host Virtual Address - HVA). Using GPA and HVA, an
> > external tool like libvirt would be able to divide the guest RAM as per
> > the guest NUMA node geometry and bind guest memory nodes to corresponding
> > host memory nodes using HVA. This needs both QEMU (and libvirt) changes
> > as well as changes in the kernel.
>
> Ok, let's take a step back here. You are basically growing libvirt into a
> memory resource manager that knows how much memory is available on which
> nodes and how these nodes would possibly fit into the host's memory layout.
>
> Shouldn't that be the kernel's job? It seems to me that architecturally
> the kernel is the place I would want my memory resource controls to be in.

I think that both Peter and Andrea are looking at this. Before we commit an
API to QEMU that has a different semantic than a possible new kernel
interface (one that QEMU could perhaps use directly to inform the kernel of
the binding/relationship between a vcpu thread and its memory at VM startup),
it would be useful to see what these guys are working on...

thanks,
-chris
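
For illustration only (this is not part of the proposal above): the sketch
below shows roughly what binding one guest-node RAM range, identified by its
HVA and length, to a host NUMA node looks like with mbind(2). mbind() only
affects the calling process's address space, so something like this would
have to run inside QEMU itself; letting an external tool such as libvirt
apply a policy to QEMU's memory is precisely the part that needs the kernel
changes discussed in the thread. The helper name and the mmap'd stand-in
region are hypothetical.

/* Illustrative sketch only -- not QEMU or libvirt code.
 * Bind one guest-node RAM range (given by HVA and length) to a host
 * NUMA node with mbind(2).  Build with: gcc bind_range.c -lnuma
 */
#include <numaif.h>      /* mbind(), MPOL_BIND, MPOL_MF_* (libnuma) */
#include <sys/mman.h>
#include <stdio.h>

/* Hypothetical helper: apply a strict bind policy to [hva, hva+len). */
static int bind_hva_range_to_host_node(void *hva, size_t len, int host_node)
{
    unsigned long nodemask = 1UL << host_node;         /* single-node mask */
    unsigned long maxnode  = sizeof(nodemask) * 8 + 1; /* bits in mask (+1,
                                                          libnuma convention) */

    if (mbind(hva, len, MPOL_BIND, &nodemask, maxnode,
              MPOL_MF_MOVE | MPOL_MF_STRICT) != 0) {
        perror("mbind");
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Stand-in for one guest node's RAM; in the proposal this would be
     * the HVA range QEMU reports for a given GPA range. */
    size_t len = 64UL << 20;
    void *hva = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (hva == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    return bind_hva_range_to_host_node(hva, len, 0) ? 1 : 0;
}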