On Mon, Jul 02, 2012 at 12:24:36AM -0400, Rik van Riel wrote:
> On 06/28/2012 08:56 AM, Andrea Arcangeli wrote:
> > If any of the ptes that khugepaged is collapsing was a pte_numa, the
> > resulting trans huge pmd will be a pmd_numa too.
>
> Why?
>
> If some of the ptes already got faulted in and made really
> resident again, why do you want to incur a new NUMA fault
> on the newly collapsed hugepage?

If we don't set pmd_numa on the collapsed hugepage, the result is that
we'll underestimate the thread's NUMA affinity to the node where the
hugepage is located (mm affinity is recorded independently by the NUMA
hinting page faults). Whether that is better or worse depends on luck;
we just lose information. I'd guess that overestimating the affinity to
a node where hugepages were just collapsed is better than
underestimating it, more often than not.

I doubt it matters much whether just one pte_numa or all of them being
pte_numa creates a pmd_numa. With the pmd scan mode (enabled by
default) we fault in at pmd granularity regardless of THP, so either
way it's the same; this is only an issue when you set
knuma_scand/pmd = 0 at runtime.

> Is there something going on we should know about?
>
> If so, could you document it?

I'll add a note.
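
Roughly, the idea at collapse time looks like the sketch below: while
khugepaged walks the ptes it is about to replace, remember whether any
of them carried the NUMA hinting bit, and if so transfer it to the new
huge pmd. This is only an illustration of the logic under discussion,
not the verbatim patch; pte_numa()/pmd_mknuma()/mk_huge_pmd() are
assumed helper names in the style of the NUMA hinting code.

	/*
	 * Sketch only: build the huge pmd for the freshly collapsed
	 * hugepage, propagating the NUMA hinting bit if any source
	 * pte had it, so the hinting fault information is not lost.
	 */
	static pmd_t collapse_mk_huge_pmd(struct page *hpage,
					  struct vm_area_struct *vma,
					  pte_t *pte)
	{
		bool any_numa = false;
		pmd_t entry;
		int i;

		/* scan the 512 ptes being collapsed */
		for (i = 0; i < HPAGE_PMD_NR; i++)
			if (pte_numa(pte[i]))
				any_numa = true;

		entry = mk_huge_pmd(hpage, vma->vm_page_prot);
		if (any_numa)
			/* keep triggering NUMA hinting faults on it */
			entry = pmd_mknuma(entry);
		return entry;
	}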