The patch titled
     Subject: mm: improve mprotect(R|W) efficiency on pages referenced once
has been removed from the -mm tree.  Its filename was
     mm-improve-mprotectrw-efficiency-on-pages-referenced-once-v5.patch

This patch was dropped because it was folded into mm-improve-mprotectrw-efficiency-on-pages-referenced-once.patch

------------------------------------------------------
From: Peter Collingbourne <pcc@xxxxxxxxxx>
Subject: mm: improve mprotect(R|W) efficiency on pages referenced once

add comments, prohibit optimization for NUMA pages

Link: https://lkml.kernel.org/r/20210601185926.2623183-1-pcc@xxxxxxxxxx
Signed-off-by: Peter Collingbourne <pcc@xxxxxxxxxx>
Link: https://linux-review.googlesource.com/id/I98d75ef90e20330c578871c87494d64b1df3f1b8
Link: [1] https://source.android.com/devices/tech/debug/scudo
Link: [2] https://cs.android.com/android/platform/superproject/+/master:bionic/benchmarks/stdlib_benchmark.cpp;l=53;drc=e8693e78711e8f45ccd2b610e4dbe0b94d551cc9
Link: [3] https://github.com/pcc/llvm-project/commit/scudo-mprotect-secondary2
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Evgenii Stepanov <eugenis@xxxxxxxxxx>
Cc: Kostya Kortchinsky <kostyak@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mprotect.c |   23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

--- a/mm/mprotect.c~mm-improve-mprotectrw-efficiency-on-pages-referenced-once-v5
+++ a/mm/mprotect.c
@@ -35,10 +35,16 @@

 #include "internal.h"

+/* Determine whether we can avoid taking write faults for known dirty pages. */
 static bool may_avoid_write_fault(pte_t pte, struct vm_area_struct *vma,
 				  unsigned long cp_flags)
 {
+	/*
+	 * The dirty accountable bit indicates that we can always make the page
+	 * writable regardless of the number of references.
+	 */
 	if (!(cp_flags & MM_CP_DIRTY_ACCT)) {
+		/* Otherwise, we must have exclusive access to the page. */
 		if (!(vma_is_anonymous(vma) && (vma->vm_flags & VM_WRITE)))
 			return false;

@@ -46,15 +52,31 @@ static bool may_avoid_write_fault(pte_t
 			return false;
 	}

+	/*
+	 * Don't do this optimization for clean pages as we need to be notified
+	 * of the transition from clean to dirty.
+	 */
 	if (!pte_dirty(pte))
 		return false;

+	/* Same for softdirty. */
 	if (!pte_soft_dirty(pte) && (vma->vm_flags & VM_SOFTDIRTY))
 		return false;

+	/*
+	 * For userfaultfd the user program needs to monitor write faults so we
+	 * can't do this optimization.
+	 */
 	if (pte_uffd_wp(pte))
 		return false;

+	/*
+	 * It is unclear whether this optimization can be done safely for NUMA
+	 * pages.
+	 */
+	if (cp_flags & MM_CP_PROT_NUMA)
+		return false;
+
 	return true;
 }

@@ -153,7 +175,6 @@ static unsigned long change_pte_range(st
 			ptent = pte_clear_uffd_wp(ptent);
 		}

-		/* Avoid taking write faults for known dirty pages */
 		if (may_avoid_write_fault(ptent, vma, cp_flags))
 			ptent = pte_mkwrite(ptent);
 		ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
_

Patches currently in -mm which might be from pcc@xxxxxxxxxx are

mm-improve-mprotectrw-efficiency-on-pages-referenced-once.patch