On 01/31/2017 12:04 PM, Dave Hansen wrote:
> On 01/30/2017 11:25 PM, John Hubbard wrote:
>> I also don't like having these policies hard-coded, and your 100x
>> example above helps clarify what can go wrong about it. It would be
>> nicer if, instead, we could better express the "distance" between nodes
>> (bandwidth, latency, relative to sysmem, perhaps), and let the NUMA
>> system figure out the Right Thing To Do.
>>
>> I realize that this is not quite possible with NUMA just yet, but I
>> wonder if that's a reasonable direction to go with this?
>
> In the end, I don't think the kernel can make the "right" decision very
> widely here.
>
> Intel's Xeon Phis have some high-bandwidth memory (MCDRAM) that
> evidently has a higher latency than DRAM. Given a plain malloc(), how
> is the kernel to know that the memory will be used for AVX-512
> instructions that need lots of bandwidth vs. some random data structure
> that's latency-sensitive?
>
> In the end, I think all we can do is keep the kernel's existing default
> of "low latency to the CPU that allocated it", and let apps override
> when that policy doesn't fit them.

I think John's point is that latency may no longer be the predominant
factor for certain parts of the CPU and GPU world.

What if a Phi has MCDRAM physically attached, but its DDR4, reached over
QPI, still has lower total latency? (That might be a stretch for Phi, but
it is not a stretch for GPUs with deep request-sorting memory
controllers.) Lowest latency is probably the wrong choice there.

Latency has really been a numeric proxy for physical proximity, under the
assumption that the most closely coupled memory is the right placement,
but HBM/MCDRAM is causing that relationship to break down in all sorts of
interesting ways.
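For what it's worth, the per-allocation override Dave mentions already has
a plausible userspace shape today. Below is a minimal sketch using libnuma,
assuming the high-bandwidth memory shows up as its own NUMA node and
assuming node 1 purely for illustration; on a real KNL box the MCDRAM node
IDs depend on the cluster/memory mode, and on a GPU system they depend on
how the device memory is onlined:

    /* Sketch: bind one bandwidth-hungry buffer to an assumed HBM node,
     * leaving every other allocation on the kernel's default policy.
     * Build with: gcc -o hbm_alloc hbm_alloc.c -lnuma
     */
    #include <stdio.h>
    #include <string.h>
    #include <numa.h>

    #define HBM_NODE 1                /* assumed MCDRAM/HBM node; query it in practice */
    #define BUF_SIZE (64UL << 20)     /* 64 MiB working buffer */

    int main(void)
    {
            if (numa_available() < 0) {
                    fprintf(stderr, "NUMA not supported on this system\n");
                    return 1;
            }

            /* Bind only this allocation to the high-bandwidth node. */
            void *buf = numa_alloc_onnode(BUF_SIZE, HBM_NODE);
            if (!buf) {
                    perror("numa_alloc_onnode");
                    return 1;
            }

            /* Touch the pages so they are actually placed on the node. */
            memset(buf, 0, BUF_SIZE);

            /* ... bandwidth-bound work (e.g. AVX-512 streaming loops) ... */

            numa_free(buf, BUF_SIZE);
            return 0;
    }

The binding is per-buffer, so the default "low latency to the allocating
CPU" policy stays intact for everything the application doesn't explicitly
opt out of. The open question in this thread is really what the default
should be once "closest" and "fastest for this workload" stop meaning the
same thing.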