On Tue, Sep 11, 2007 at 04:28:00PM +0100, Richard W.M. Jones wrote:
> Daniel Veillard wrote:
> > Okay, enclosed is a first patch to add the new entry point for getting
> > the available memory in the NUMA cells:
> >
> > /**
> >  * virNodeGetCellsFreeMemory:
> >  * @conn: pointer to the hypervisor connection
> >  * @freeMems: pointer to the array of unsigned long
> >  * @nbCells: number of entries available in freeMems
> >  *
> >  * This call asks for the amount of free memory in each NUMA cell.
> >  * The @freeMems array must be allocated by the caller and will be filled
> >  * with the amounts of free memory in kilobytes for each cell, starting
> >  * from cell #0 and up to @nbCells - 1, or the number of cells in the Node
> >  * (which can be found using virNodeGetInfo(); see the nodes entry in the
> >  * structure).
> >  *
> >  * Returns the number of entries filled in freeMems, or -1 in case of
> >  * error.
> >  */
> >
> > int
> > virNodeGetCellsFreeMemory(virConnectPtr conn, unsigned long *freeMems,
> >                           int nbCells)
>
> So you're using "unsigned long" here to mean 32 bits on 32 bit archs,
> and 64 bits on 64 bit archs?
>
> A purely 32 bit freeMem will allow up to 4095 GB of RAM per cell. But
> in reality up to 2047 GB of RAM, because mappings in other languages will
> probably be signed.
>
> High-end users are already decking out PCs with 128 GB of RAM. If they
> double the RAM every year, we'll hit this limit in 4 years[1]. So is it
> worth using an explicit 64 bit quantity here, or using another base (MB
> instead of KB, for example)? Or do we just think that all such vast
> machines will be 64 bit?

  Well, we already use unsigned long in KB for memory quantities in libvirt,
so I just reused that. I doubt we will ever see more than 64GB on a 32-bit
CPU; that's already stretching the limits.

> > Based on the feedback, it seems it's better to provide an API checking
> > a range of cells.
> > This version suggests always starting at cell 0; it could be
> > extended to start at a base cell number. Not a big change, is it needed?
>
> On the one hand, subranges of cells could be useful for simple
> hierarchical archs. On the other hand (hypercubes) useful subranges
> aren't likely to be contiguous anyway!

  For anything non-flat it's hard to guess, and for anything flat, placement
basically means checking all cells to find the optimum.

> > The patch adds it to the driver interfaces and puts the entry point
> > needed in the xen_internal.c module, xenHypervisorNodeGetCellsFreeMemory().
>
> As for the actual patch, I'm guessing nothing will be committed until we
> have a working prototype? Apart from lack of remote support it looks fine.

  Yes, and right, remote is something I haven't tried to look at yet; I hope
returning arrays of values won't be a problem.

> > Now for extending virConnectGetCapabilities(): it is a bit messy, but not
> > that much. First, it's implemented on Xen using
> > xenHypervisorGetCapabilities(); unfortunately it seems the easiest way to
> > get the NUMA capabilities is by asking through xend. Calling
> > xend_internals.c from xen_internals.c is not nice, but
> > xenHypervisorGetCapabilities() is actually not using any hypervisor call
> > as far as I can see; it's all about opening/parsing files from /proc and
> > /sys and returning the result as XML, so this could as well be done in
> > the xend_internals (or xen_unified.c) module.
>
> Yeah, best just to move that common code up to xen_unified.c probably.

  Yes, my thought too, except for the global variable used.

> In any case the Xen "driver" is so intertwined that it's really just one
> big lump, so calling between the sub-drivers is unlikely to be a problem.
  heh :-\

Daniel

-- 
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard      | virtualization library http://libvirt.org/
veillard@xxxxxxxxxx  | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

--
Libvir-list mailing list
Libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list