Re: [PATCH 2/5] docs: Define XML schema for numa tuning and add docs

On 05/05/2011 23:29, Daniel P. Berrange wrote:
On Thu, May 05, 2011 at 05:38:27PM +0800, Osier Yang wrote:
Currently we only want to use "membind" function of numactl, but
perhaps more other functions in future, so introduce element
"<numatune>", future NUMA tuning related XML stuffs should go
into it.
---
  docs/formatdomain.html.in |   17 +++++++++++++++++
  docs/schemas/domain.rng   |   20 ++++++++++++++++++++
  2 files changed, 37 insertions(+), 0 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 5013c48..6da6465 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -288,6 +288,9 @@
      &lt;min_guarantee&gt;65536&lt;/min_guarantee&gt;
    &lt;/memtune&gt;
    &lt;vcpu cpuset="1-4,^3,6" current="1"&gt;2&lt;/vcpu&gt;
+  &lt;numatune&gt;
+    &lt;membind nodeset="1,2,!3-6"/&gt;
+  &lt;/numatune&gt;

I don't think we should be creating a new <numatune> element here since
it is not actually covering all aspects of NUMA tuning. We already have
CPU NUMA pinning in the separate <vcpu> element. NUMA memory pinning
should likely be either in the <memtune> or <memoryBacking> elements,
probably the latter.

Agreed that it doesn't cover all aspects of NUMA tuning; we also
have <vcpupin>. The reason I didn't put it into <memtune> is that
I'm not sure whether we will also support other tuning features.
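
For reference, a rough sketch of what the <memoryBacking> placement
you suggest could look like (purely illustrative; the <membind>
child shown here is hypothetical and not part of this patch):

    <memoryBacking>
      <hugepages/>
      <membind nodeset="1,2,!3-6"/>
    </memoryBacking>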


Also, it is not very nice to use a different syntax for negation in
the VCPU specification vs the memory node specification ("^3" vs "!3").

NUMA tuning uses a different syntax; it also has "+", which is not
used by the VCPU specification, so IMHO once we have to accept "+",
"!" should be accepted too. Or we could do a conversion from "^" to "!"?


Looking to the future, we may want to consider how we'd allow host NUMA
mapping on a fine-grained basis, per guest NUMA node. E.g. it is possible
with QEMU to actually define a guest-visible NUMA topology for the virtual
CPUs and memory using

     -numa node[,mem=size][,cpus=cpu[-cpu]][,nodeid=node]

We don't support that yet, which is something we ought to do. At which
point you probably also want to be able to map guest NUMA nodes to host
NUMA nodes.
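
For example (with illustrative values), a two-node guest topology
could be described with that option as:

    qemu-kvm -smp 4 -m 1024 \
        -numa node,mem=512,cpus=0-1,nodeid=0 \
        -numa node,mem=512,cpus=2-3,nodeid=1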

As far as I understand it, don't we need a standalone
<numatune> for things like this?
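
For instance, a future standalone <numatune> could hypothetically map
guest cells to host nodes like this (the cellid attribute is invented
here purely for the sake of discussion):

    <numatune>
      <membind cellid="0" nodeset="0-1"/>
      <membind cellid="1" nodeset="2,3"/>
    </numatune>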

Thanks
Osier

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


