The patch titled
     thp: mprotect: pass vma down to page table walkers
has been added to the -mm tree.  Its filename is
     thp-mprotect-pass-vma-down-to-page-table-walkers.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find
out what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: thp: mprotect: pass vma down to page table walkers
From: Johannes Weiner <hannes@xxxxxxxxxxx>

Flushing the TLB for huge pmds requires the vma's anon_vma, so pass the
vma down to the page table walkers instead of the mm; the mm can always
be derived from vma->vm_mm where it is still needed.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Reviewed-by: Rik van Riel <riel@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mprotect.c |   15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff -puN mm/mprotect.c~thp-mprotect-pass-vma-down-to-page-table-walkers mm/mprotect.c
--- a/mm/mprotect.c~thp-mprotect-pass-vma-down-to-page-table-walkers
+++ a/mm/mprotect.c
@@ -78,7 +78,7 @@ static void change_pte_range(struct mm_s
 	pte_unmap_unlock(pte - 1, ptl);
 }
 
-static inline void change_pmd_range(struct mm_struct *mm, pud_t *pud,
+static inline void change_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
@@ -88,14 +88,15 @@ static inline void change_pmd_range(stru
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		split_huge_page_pmd(mm, pmd);
+		split_huge_page_pmd(vma->vm_mm, pmd);
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
-		change_pte_range(mm, pmd, addr, next, newprot, dirty_accountable);
+		change_pte_range(vma->vm_mm, pmd, addr, next, newprot,
+				 dirty_accountable);
 	} while (pmd++, addr = next, addr != end);
 }
 
-static inline void change_pud_range(struct mm_struct *mm, pgd_t *pgd,
+static inline void change_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
@@ -107,7 +108,8 @@ static inline void change_pud_range(stru
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		change_pmd_range(mm, pud, addr, next, newprot, dirty_accountable);
+		change_pmd_range(vma, pud, addr, next, newprot,
+				 dirty_accountable);
 	} while (pud++, addr = next, addr != end);
 }
 
@@ -127,7 +129,8 @@ static void change_protection(struct vm_
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		change_pud_range(mm, pgd, addr, next, newprot, dirty_accountable);
+		change_pud_range(vma, pgd, addr, next, newprot,
+				 dirty_accountable);
 	} while (pgd++, addr = next, addr != end);
 	flush_tlb_range(vma, start, end);
 }
_
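The reason the walkers want the whole vma rather than the bare mm is that
the TLB-flush and THP paths at the pmd level are vma-based.  As a rough
sketch (not part of this patch): once change_pmd_range() carries the vma,
the follow-up thp-mprotect-transparent-huge-page-support patch can change
the protection of a huge pmd in place, along these lines.  change_huge_pmd()
is the helper that follow-up adds; the exact call site shown here is an
illustration, not the applied code:

	/*
	 * Sketch only: change_huge_pmd() comes from the follow-up THP
	 * mprotect patch, and this call site is illustrative.
	 */
	do {
		next = pmd_addr_end(addr, end);
		if (pmd_trans_huge(*pmd)) {
			if (next - addr != HPAGE_PMD_SIZE)
				/* partial range: fall back to splitting */
				split_huge_page_pmd(vma->vm_mm, pmd);
			else if (change_huge_pmd(vma, pmd, addr, newprot))
				/* whole huge pmd updated, skip the pte walk */
				continue;
			/* fall through to the pte level */
		}
		if (pmd_none_or_clear_bad(pmd))
			continue;
		change_pte_range(vma->vm_mm, pmd, addr, next, newprot,
				 dirty_accountable);
	} while (pmd++, addr = next, addr != end);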
Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-compactionc-avoid-double-mem_cgroup_del_lru.patch
writeback-integrated-background-writeback-work.patch
writeback-trace-wakeup-event-for-background-writeback.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works-update.patch
writeback-avoid-livelocking-wb_sync_all-writeback.patch
writeback-avoid-livelocking-wb_sync_all-writeback-update.patch
writeback-check-skipped-pages-on-wb_sync_all.patch
writeback-check-skipped-pages-on-wb_sync_all-update.patch
writeback-check-skipped-pages-on-wb_sync_all-update-fix.patch
mm-compaction-add-trace-events-for-memory-compaction-activity.patch
mm-vmscan-convert-lumpy_mode-into-a-bitmask.patch
mm-vmscan-reclaim-order-0-and-use-compaction-instead-of-lumpy-reclaim.patch
mm-vmscan-reclaim-order-0-and-use-compaction-instead-of-lumpy-reclaim-fix.patch
mm-migration-allow-migration-to-operate-asynchronously-and-avoid-synchronous-compaction-in-the-faster-path.patch
mm-migration-allow-migration-to-operate-asynchronously-and-avoid-synchronous-compaction-in-the-faster-path-fix.patch
mm-migration-cleanup-migrate_pages-api-by-matching-types-for-offlining-and-sync.patch
mm-compaction-perform-a-faster-migration-scan-when-migrating-asynchronously.patch
mm-vmscan-rename-lumpy_mode-to-reclaim_mode.patch
mm-vmscan-rename-lumpy_mode-to-reclaim_mode-fix.patch
mm-deactivate-invalidated-pages.patch
mm-deactivate-invalidated-pages-fix.patch
mm-kswapd-stop-high-order-balancing-when-any-suitable-zone-is-balanced.patch
mm-kswapd-keep-kswapd-awake-for-high-order-allocations-until-a-percentage-of-the-node-is-balanced.patch
mm-kswapd-use-the-order-that-kswapd-was-reclaiming-at-for-sleeping_prematurely.patch
mm-kswapd-reset-kswapd_max_order-and-classzone_idx-after-reading.patch
mm-kswapd-treat-zone-all_unreclaimable-in-sleeping_prematurely-similar-to-balance_pgdat.patch
mm-kswapd-use-the-classzone-idx-that-kswapd-was-using-for-sleeping_prematurely.patch
thp-split_huge_page_mm-vma.patch
thp-transparent-hugepage-core.patch
thp-add-x86-32bit-support.patch
thp-mincore-transparent-hugepage-support.patch
thp-add-pmd_modify.patch
thp-mprotect-pass-vma-down-to-page-table-walkers.patch
thp-mprotect-transparent-huge-page-support.patch
memcg-fix-unit-mismatch-in-memcg-oom-limit-calculation.patch