Re: Start NUMA work

Daniel Veillard wrote:
  Okay, enclosed is a first patch to add the new entry point for getting
the available memory in the NUMA cells:

/**
 * virNodeGetCellsFreeMemory:
 * @conn: pointer to the hypervisor connection
 * @freeMems: pointer to the array of unsigned long
 * @nbCells: number of entries available in freeMems
 *
 * This call queries the amount of free memory in each NUMA cell.
 * The @freeMems array must be allocated by the caller and will be filled
 * with the amount of free memory in kilobytes for each cell, starting
 * from cell #0 up to @nbCells - 1 or the number of cells in the node,
 * whichever is smaller (the cell count can be found using virNodeGetInfo();
 * see the nodes entry in the structure).
 *
 * Returns the number of entries filled in freeMems, or -1 in case of error.
 */

int virNodeGetCellsFreeMemory(virConnectPtr conn, unsigned long *freeMems,
                              int nbCells)
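
For what it's worth, here's roughly how I'd expect a caller to drive it, just a sketch against the proposed signature, untested:

/* Sketch only: print the free memory of every NUMA cell, assuming the
 * proposed signature above and an already-open connection. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static int print_cell_free_memory(virConnectPtr conn)
{
    virNodeInfo info;
    unsigned long *freeMems;
    int i, nbCells;

    if (virNodeGetInfo(conn, &info) < 0)
        return -1;

    /* One entry per NUMA cell; the cell count is the nodes field. */
    freeMems = malloc(info.nodes * sizeof(*freeMems));
    if (freeMems == NULL)
        return -1;

    nbCells = virNodeGetCellsFreeMemory(conn, freeMems, info.nodes);
    if (nbCells < 0) {
        free(freeMems);
        return -1;
    }

    for (i = 0; i < nbCells; i++)
        printf("cell %d: %lu KB free\n", i, freeMems[i]);

    free(freeMems);
    return 0;
}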

So you're using "unsigned long" here to mean 32 bits on 32 bit archs, and 64 bits on 64 bit archs?

A purely 32 bit freeMems entry allows up to 4095 GB of RAM per cell, but in practice only up to 2047 GB, because the mappings in other languages will probably be signed.

High-end users are already decking out PCs with 128 GB of RAM. If they double the RAM every year, we'll hit this limit in 4 years[1]. So is it worth using an explicit 64 bit quantity here, or using another base (MB instead of KB for example)? Or do we just think that all such vast machines will be 64 bit?
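
To spell out the arithmetic behind those figures (a quick check of kilobyte counts held in 32 bit quantities, nothing to do with the patch itself):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Unsigned 32 bit KB counter: (2^32 - 1) KB is just under 4096 GB. */
    printf("unsigned 32 bit limit: %llu GB per cell\n",
           (unsigned long long)UINT32_MAX / (1024 * 1024));

    /* Signed 32 bit KB counter, which is what signed language bindings
     * effectively see: (2^31 - 1) KB is just under 2048 GB. */
    printf("signed 32 bit limit:   %llu GB per cell\n",
           (unsigned long long)INT32_MAX / (1024 * 1024));

    return 0;
}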

  based on the feedback, it seems it's better to provide an API checking
a range of cells. This version always starts at cell 0; it could be
extended to start at a base cell number, which is not a big change. Is it needed?

On the one hand, subranges of cells could be useful for simple hierarchical archs. On the other hand (hypercubes) useful subranges aren't likely to be contiguous anyway!
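
If a start cell ever did prove necessary, I imagine it would just mean one extra parameter, something like this (hypothetical sketch, the parameter names are mine):

/* Hypothetical extension: fill freeMems[0 .. maxCells-1] with the free
 * memory of cells startCell .. startCell + maxCells - 1; the current
 * proposal is then the special case startCell == 0. */
int virNodeGetCellsFreeMemory(virConnectPtr conn, unsigned long *freeMems,
                              int startCell, int maxCells);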

The patch adds it to the driver interfaces and puts the entry point
needed in the xen_internal.c module, xenHypervisorNodeGetCellsFreeMemory().

As for the actual patch, I'm guessing nothing will be committed until we have a working prototype? Apart from lack of remote support it looks fine.

  Now for extending virConnectGetCapabilities(), it is a bit messy, but not
that much. First, it's implemented on Xen using xenHypervisorGetCapabilities();
unfortunately it seems the easiest way to get the NUMA capabilities is by
asking through xend. Calling xend_internal.c from xen_internal.c is not
nice, but xenHypervisorGetCapabilities() is actually not using any
hypervisor call as far as I can see; it's all about opening/parsing
files from /proc and /sys and returning the result as XML, so this could
just as well be done in the xend_internal.c (or xen_unified.c) module.

Yeah, best just to move that common code up to xen_unified.c probably. In any case the Xen "driver" is so intertwined that it's really just one big lump, so calling between the sub-drivers is unlikely to be a problem.

Rich.

[1] This analysis ignores two factors: (a) the 128 GB figure covers the whole machine rather than individual cells; (b) on the other hand, perhaps flash memory (which has dramatically higher density) will become fast enough to replace conventional RAM.

--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
Street, Windsor, Berkshire, SL4 1TE, United Kingdom.  Registered in
England and Wales under Company Registration No. 03798903


