On Wed, 2012-12-12 at 20:49 -0800, Dave Hansen wrote:
> On 12/12/2012 06:03 PM, Davidlohr Bueso wrote:
> > On Wed, 2012-12-12 at 17:48 -0800, Dave Hansen wrote:
> >> But if we went and did it per-DIMM (showing which physical addresses and
> >> NUMA nodes a DIMM maps to), wouldn't that be redundant with this
> >> proposed interface?
> >
> > If DIMMs overlap between nodes, then we wouldn't have an exact range for
> > a node in question. Having both approaches would complement each other.
>
> How is that possible? If NUMA nodes are defined by distances from CPUs
> to memory, how could a DIMM have more than a single distance to any
> given CPU?

Can't this occur when interleaving emulated nodes with physical ones?

> >> How do you plan to use this in practice, btw?
> >
> > It started because I needed to recognize the address of a node to remove
> > it from the e820 mappings and have the system "ignore" the node's
> > memory.
>
> Actually, now that I think about it, can you check in the
> /sys/devices/system/ directories for memory and nodes? We have linkages
> there for each memory section to every NUMA node, and you can also
> derive the physical address from the phys_index in each section. That
> should allow you to work out physical addresses for a given node.

I had looked at the memory-hotplug interface, but found that 'phys_index'
doesn't include holes, while ->node_spanned_pages does.

Thanks,
Davidlohr
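
For reference, here is a minimal userspace sketch of the approach Dave
describes: walking the memoryN links under a node's sysfs directory and
deriving each section's physical range from block_size_bytes. This assumes
the standard memory-hotplug sysfs layout; the node number and output format
are only illustrative, and (as noted above) the ranges cover sections, not
holes within them.

/*
 * Sketch: print the physical address range of each memory section
 * linked under /sys/devices/system/node/node<N>/.
 * Assumes CONFIG_MEMORY_HOTPLUG sysfs files are present.
 */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *node = argc > 1 ? argv[1] : "0";	/* illustrative default */
	char path[256];
	unsigned long long block_size;
	FILE *f;
	DIR *dir;
	struct dirent *de;

	/* Each memoryN block spans block_size_bytes of physical address space. */
	f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	if (!f || fscanf(f, "%llx", &block_size) != 1) {
		perror("block_size_bytes");
		return 1;
	}
	fclose(f);

	/* The nodeN directory has a memoryX symlink per section on that node. */
	snprintf(path, sizeof(path), "/sys/devices/system/node/node%s", node);
	dir = opendir(path);
	if (!dir) {
		perror(path);
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		unsigned long long idx;

		if (sscanf(de->d_name, "memory%llu", &idx) != 1)
			continue;
		/* The block index matches phys_index; start = index * block size. */
		printf("node%s %s: 0x%llx-0x%llx\n", node, de->d_name,
		       idx * block_size, (idx + 1) * block_size - 1);
	}
	closedir(dir);
	return 0;
}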