On 12/13/2012 03:15 PM, Davidlohr Bueso wrote:
> On Wed, 2012-12-12 at 20:49 -0800, Dave Hansen wrote:
>> How is that possible?  If NUMA nodes are defined by distances from CPUs
>> to memory, how could a DIMM have more than a single distance to any
>> given CPU?
>
> Can't this occur when interleaving emulated nodes with physical ones?

I'm glad you mentioned numa=fake.  Its interleaving node configuration
would also make the patch you've proposed completely useless.

Let's say you've got a two-node system with 16GB of RAM:

	|       0       |       1       |

and you use numa=fake=1G; you'll get the nodes interleaved like this:

	|0|1|0|1|0|1|0|1|0|1|0|1|0|1|0|1|

The information exported by the interface you're proposing would be:

	node0: start_pfn=0  and spanned_pages=15G
	node1: start_pfn=1G and spanned_pages=15G

In that situation there is no way to figure out which DIMMs back a
given node, since the node ranges overlap.

>>>> How do you plan to use this in practice, btw?
>>>
>>> It started because I needed to recognize the address of a node to remove
>>> it from the e820 mappings and have the system "ignore" the node's
>>> memory.
>>
>> Actually, now that I think about it, can you check in the
>> /sys/devices/system/ directories for memory and nodes?  We have linkages
>> there for each memory section to every NUMA node, and you can also
>> derive the physical address from the phys_index in each section.  That
>> should allow you to work out physical addresses for a given node.
>
> I had looked at the memory-hotplug interface but found that this
> 'phys_index' doesn't include holes, while ->node_spanned_pages does.

I'm not sure what you mean.  Each memory section in sysfs accounts for
SECTION_SIZE, and sections are 128MB by default on x86_64.
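
FWIW, something along these lines (an untested sketch, assuming the
memory<X> links under /sys/devices/system/node/node<N>/ and the
block_size_bytes file are present, and that the <X> in the block's
directory name matches its phys_index) should be enough to print the
physical ranges backing a node:

/*
 * Untested sketch: walk the sysfs links mentioned above and print the
 * physical address range of every memory block that belongs to a node.
 * Assumes /sys/devices/system/memory/block_size_bytes and the
 * node<N>/memory<X> links exist, and that <X> matches phys_index.
 */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	unsigned long long block_size, idx;
	char path[128];
	struct dirent *de;
	DIR *dir;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <node number>\n", argv[0]);
		return 1;
	}

	/* block_size_bytes is SECTION_SIZE * sections_per_block, in hex */
	f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	if (!f || fscanf(f, "%llx", &block_size) != 1)
		return 1;
	fclose(f);

	snprintf(path, sizeof(path), "/sys/devices/system/node/node%s", argv[1]);
	dir = opendir(path);
	if (!dir)
		return 1;

	/* every memory<X> link under node<N>/ is a block backed by that node */
	while ((de = readdir(dir)) != NULL) {
		if (sscanf(de->d_name, "memory%llu", &idx) != 1)
			continue;
		printf("memory%llu: %#llx-%#llx\n", idx,
		       idx * block_size, (idx + 1) * block_size - 1);
	}
	closedir(dir);
	return 0;
}

Contiguous blocks could then be merged to recover per-node physical
ranges, holes and all.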