On Thu, 28 May 2009, Christoph Lameter wrote:

> I'm having a bit of trouble with the NUMA allocator in the kernel. This
> is in a numa=fake test setup (though this shouldn't matter, I guess).

Not sure how fake numa works. This could affect the result.

page_to_nid() shows a different number than the node from which the page
actually came? Sounds broken.

> node 7 - "numactl --hardware" doesn't show any allocations from node 7.
> In fact it seems that the memory is allocated from the first node with
> free pages until these run out. Only then are pages from the selected
> (last) node given out. Once the selected node is full,
> alloc_pages_node(... GFP_THISNODE ...) returns NULL - as it should -
> and I fall back to a normal allocation, which then also reports a
> different node ID from page_to_nid() (cf. the attached diff).
>
> The strange thing is that a simple test module (attached as well) works
> as expected. The allocation succeeds, reports the selected node in
> page_to_nid(), *and* the free memory reported by "numactl --hardware"
> for the selected node decreases.
>
> Any insight as to why the KVM allocation might be special would be much
> appreciated. I tried to follow the call path, but didn't find any red
> flags that would indicate the difference.

Please verify your numbers using /proc/zoneinfo.
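
[For readers without the attachments, here is a minimal sketch of the
allocation pattern described above, as a standalone test module. The target
node, the fallback gfp mask, and all names are illustrative assumptions,
not the poster's actual code or diff.]

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int nid = 7;	/* illustrative target node, not from the thread */

static int __init thisnode_demo_init(void)
{
	struct page *page;

	/* GFP_THISNODE forbids fallback to other nodes, so a NULL
	 * return means the target node itself is out of free pages. */
	page = alloc_pages_node(nid, GFP_THISNODE, 0);
	if (!page) {
		/* Fall back to an ordinary allocation; GFP_KERNEL
		 * merely prefers nid and may return a page from any
		 * node with free memory. */
		page = alloc_pages_node(nid, GFP_KERNEL, 0);
		if (!page)
			return -ENOMEM;
	}

	/* Compare the node we asked for against where the page
	 * actually lives. */
	printk(KERN_INFO "requested node %d, page_to_nid() reports %d\n",
	       nid, page_to_nid(page));

	__free_pages(page, 0);
	return 0;
}

static void __exit thisnode_demo_exit(void)
{
}

module_init(thisnode_demo_init);
module_exit(thisnode_demo_exit);
MODULE_LICENSE("GPL");

[To cross-check against /proc/zoneinfo as suggested, compare the per-zone
nr_free_pages counters for the target node before and after loading such a
module; a node-local allocation should decrement only that node's counters.]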