On 05/19/2011 03:34 AM, Daniel Veillard wrote:
On Sun, May 15, 2011 at 09:37:21PM -0400, Mark Wagner wrote:
On 05/12/2011 06:45 AM, Daniel P. Berrange wrote:
On Thu, May 12, 2011 at 06:22:49PM +0800, Osier Yang wrote:
Hi, All
This series adopts Daniel's suggestion on v1, using libnuma
rather than invoking numactl to set the NUMA policy. It adds
support for the "interleave" and "preferred" modes, in addition
to the "strict" mode already supported in v1.
The new XML is like:
<numatune>
<memory model="interleave" nodeset="+0-4,8-12"/>
</numatune>
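(Editorial aside: for readers unfamiliar with libnuma, the sketch below
shows one plausible way a mode/nodeset pair like the above could be
applied to the calling process through the libnuma v2 API; the
apply_numa_policy() helper is a hypothetical illustration, not the code
from this series. Link with -lnuma.)

#include <stdio.h>
#include <string.h>
#include <numa.h>

/* Illustrative only: apply a numatune-style mode/nodeset to the
 * calling process using libnuma. */
static int apply_numa_policy(const char *mode, const char *nodeset)
{
    struct bitmask *mask;
    int node;

    if (numa_available() < 0) {
        fprintf(stderr, "host has no NUMA support\n");
        return -1;
    }

    /* numa_parse_nodestring() accepts the numactl syntax, including
     * ranges and the '+' (cpuset-relative) prefix, e.g. "+0-4,8-12". */
    mask = numa_parse_nodestring(nodeset);
    if (!mask) {
        fprintf(stderr, "failed to parse nodeset '%s'\n", nodeset);
        return -1;
    }

    if (strcmp(mode, "strict") == 0) {
        numa_set_membind(mask);            /* allocate only from these nodes */
    } else if (strcmp(mode, "interleave") == 0) {
        numa_set_interleave_mask(mask);    /* round-robin across these nodes */
    } else if (strcmp(mode, "preferred") == 0) {
        /* "preferred" takes a single node: use the first one in the mask */
        for (node = 0; node <= numa_max_node(); node++) {
            if (numa_bitmask_isbitset(mask, node)) {
                numa_set_preferred(node);
                break;
            }
        }
    }

    numa_bitmask_free(mask);
    return 0;
}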
I have kept the numactl nodeset syntax to represent the
"nodeset", as I think the purpose of adding NUMA tuning
support is to serve NUMA users, and keeping the syntax the
same as numactl's will make it more familiar to them.
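(For reference, with numactl itself the same policy would be requested
with something like "numactl --interleave=+0-4,8-12 <command>", where the
'+' makes the node numbers relative to the process's cpuset.)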
Compatibility with numactl syntax is an explicit non-goal.
numactl is just one platform-specific impl. Compatibility
with numactl syntax is of no interest to the ESX or VirtualBox
drivers. The libvirt NUMA syntax should be using other
existing libvirt XML as the design compatibility target.
I won't argue the semantics of XML with you, but please keep in mind
that one of the main differences between using a numactl-like
mechanism and taskset is that the NUMA mechanisms also let you
bind to the memory of specific NUMA nodes, as well as specify the
access type.
So from the outside looking in, keeping things in terms of cpusets
would seem not to be in full agreement with the RFE for NUMA support.
I would think that the specification of NUMA binding would need to
include NUMA nodes and specify memory bindings as well as the
access type. From a performance perspective, support for true
NUMA is the last hurdle keeping libvirt from being
used in high-performance situations.
I think that specifying things in terms of nodes instead of
CPUs will make it easier for the end user. So I guess I need
to withdraw the part about not arguing XML...
Hi Mark,
I'm not 100% sure I understand what you are disagreeing with:
- it seems to me that the proposed model does allow the specification
of the nodes and the associated memory binding
- I wonder if you just object to the "nodeset" attribute name here
- please note that "Node" in the context of libvirt has the specific
meaning of the whole physical machine (see http://libvirt.org/goals.html);
that terminology was set up 5 years ago and is present in many places
in the libvirt API. On the other hand, "nodeset" is being used in
other places to specify a set of CPU nodes in a NUMA context.
Could you help us clarify your point of view?
thanks!
Daniel
Daniel
I think that maybe I didn't fully understand the entire context.
My main goal is to make sure that we consider the differences
between NUMA and simple CPU pinning. After rereading the threads
and having some conversations, it appears that you are doing that.
Sorry for the noise on this issue.
btw - I must say that I think the libvirt team is doing a
great job overall in adding support for the features needed
to achieve good-to-top performance from KVM. I actually based a lot of
my Summit presentation around using libvirt and virt-manager. Once NUMA
support is in, I expect that we will see some SPECvirt submissions that are
based on libvirt. Thanks for all of the hard work!
-mark
--
Mark Wagner
Principal SW Engineer - Performance
Red Hat