> On Oct 3, 2021, at 5:10 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:
>
>> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
>>  	struct mm_struct *mm = vma->vm_mm;
>>  	pgd_t *pgd;
>>  	unsigned long next;
>> -	unsigned long start = addr;
>>  	unsigned long pages = 0;
>> +	struct mmu_gather tlb;
>>
>>  	BUG_ON(addr >= end);
>>  	pgd = pgd_offset(mm, addr);
>>  	flush_cache_range(vma, addr, end);
>>  	inc_tlb_flush_pending(mm);
>
> That seems unbalanced...

Bad rebase. Thanks for catching it!

>
>> +	tlb_gather_mmu(&tlb, mm);
>> +	tlb_start_vma(&tlb, vma);
>>  	do {
>>  		next = pgd_addr_end(addr, end);
>>  		if (pgd_none_or_clear_bad(pgd))
>>  			continue;
>> -		pages += change_p4d_range(vma, pgd, addr, next, newprot,
>> +		pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
>>  					  cp_flags);
>>  	} while (pgd++, addr = next, addr != end);
>>
>> -	/* Only flush the TLB if we actually modified any entries: */
>> -	if (pages)
>> -		flush_tlb_range(vma, start, end);
>> -	dec_tlb_flush_pending(mm);
>
> ... seeing you do remove the extra decrement.

Is it really needed? We do not put this comment elsewhere for tlb_finish_mmu(). But no problem, I’ll keep it.
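
For reference, below is a sketch of how the corrected function might read once the
stray inc_tlb_flush_pending() is dropped: tlb_gather_mmu() and tlb_finish_mmu()
already do the inc/dec_tlb_flush_pending() accounting internally, so neither call
should remain open-coded here. Treat this as a sketch only: the full parameter
list, the return statement, and the tlb_end_vma() pairing are reconstructed from
context rather than copied from the patch.

/* context: mm/mprotect.c; relies on <linux/mm.h> and <asm/tlb.h> */
static unsigned long change_protection_range(struct vm_area_struct *vma,
		unsigned long addr, unsigned long end, pgprot_t newprot,
		unsigned long cp_flags)
{
	struct mm_struct *mm = vma->vm_mm;
	pgd_t *pgd;
	unsigned long next;
	unsigned long pages = 0;
	struct mmu_gather tlb;

	BUG_ON(addr >= end);
	pgd = pgd_offset(mm, addr);
	flush_cache_range(vma, addr, end);

	/* Does inc_tlb_flush_pending(mm) internally. */
	tlb_gather_mmu(&tlb, mm);
	tlb_start_vma(&tlb, vma);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		pages += change_p4d_range(&tlb, vma, pgd, addr, next,
					  newprot, cp_flags);
	} while (pgd++, addr = next, addr != end);
	tlb_end_vma(&tlb, vma);

	/*
	 * Only flush the TLB if we actually modified any entries:
	 * tlb_finish_mmu() flushes just the ranges recorded in the
	 * mmu_gather by the change_*_range() helpers, and it also does
	 * the dec_tlb_flush_pending(mm) that used to be open-coded here.
	 */
	tlb_finish_mmu(&tlb);

	return pages;
}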