If a PTE or PMD is already marked NUMA when scanning to mark entries for
NUMA hinting then it is not necessary to update the entry and incur a
TLB flush penalty. Avoid the overhead where possible.

Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
---
 mm/huge_memory.c | 14 ++++++++------
 mm/mprotect.c    |  4 ++++
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8546654..f2bf521 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1524,12 +1524,14 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		return 0;
 	}
 
-	ret = 1;
-	entry = pmdp_get_and_clear_notify(mm, addr, pmd);
-	entry = pmd_modify(entry, newprot);
-	ret = HPAGE_PMD_NR;
-	set_pmd_at(mm, addr, pmd, entry);
-	BUG_ON(pmd_write(entry));
+	if (!prot_numa || !pmd_protnone(*pmd)) {
+		ret = 1;
+		entry = pmdp_get_and_clear_notify(mm, addr, pmd);
+		entry = pmd_modify(entry, newprot);
+		ret = HPAGE_PMD_NR;
+		set_pmd_at(mm, addr, pmd, entry);
+		BUG_ON(pmd_write(entry));
+	}
 
 	spin_unlock(ptl);
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 33dfafb..109e7aa 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -86,6 +86,10 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			page = vm_normal_page(vma, addr, oldpte);
 			if (!page || PageKsm(page))
 				continue;
+
+			/* Avoid TLB flush if possible */
+			if (pte_protnone(oldpte))
+				continue;
 		}
 
 		ptent = ptep_modify_prot_start(mm, addr, pte);
--
2.1.2
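
For reference, below is a minimal userspace sketch of the pattern the patch
applies: skip the clear/modify/write-back cycle, and the TLB flush it
implies, when the entry already carries the state being set. The names here
(entry_t, ENTRY_PROTNONE, update_entry) are hypothetical stand-ins for
illustration only, not kernel APIs.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a page-table entry and its PROT_NONE bit. */
typedef unsigned long entry_t;
#define ENTRY_PROTNONE 0x1UL

static bool entry_protnone(entry_t e)
{
	return e & ENTRY_PROTNONE;
}

/*
 * Stand-in for the expensive update path (clear, modify, write back).
 * Returns nonzero when the caller would have to flush the TLB.
 */
static int update_entry(entry_t *e, bool prot_numa)
{
	/*
	 * The patch's key idea: during a NUMA hinting scan (prot_numa),
	 * an entry that is already PROT_NONE needs no update, so skip
	 * the rewrite and the TLB flush that would follow it.
	 */
	if (prot_numa && entry_protnone(*e))
		return 0;

	*e |= ENTRY_PROTNONE;	/* stands in for pmd_modify()/set_pmd_at() */
	return 1;
}

int main(void)
{
	entry_t e = 0;

	printf("first scan flushed:  %d\n", update_entry(&e, true));	/* 1 */
	printf("second scan flushed: %d\n", update_entry(&e, true));	/* 0 */
	return 0;
}

On a second scan over the same range, every entry is already PROT_NONE, so
the whole pass completes without touching the page tables or flushing the
TLB, which is the saving the patch is after.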