On Sat, Jan 30, 2021 at 5:17 PM Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
>
> > On Jan 30, 2021, at 5:07 PM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> >
> > Adding Andrew Cooper, who has a distressingly extensive understanding
> > of the x86 PTE magic.
> >
> > On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
> >> From: Nadav Amit <namit@xxxxxxxxxx>
> >>
> >> Currently, using mprotect() to unprotect a memory region or uffd to
> >> unprotect a memory region causes a TLB flush. At least on x86, as
> >> protection is promoted, no TLB flush is needed.
> >>
> >> Add an arch-specific pte_may_need_flush() which tells whether a TLB
> >> flush is needed based on the old PTE and the new one. Implement an x86
> >> pte_may_need_flush().
> >>
> >> For x86, besides the simple logic that PTE protection promotion or
> >> changes of software bits does require a flush, also add logic that
> >> considers the dirty-bit. If the dirty-bit is clear and write-protect is
> >> set, no TLB flush is needed, as x86 updates the dirty-bit atomically
> >> on write, and if the bit is clear, the PTE is reread.
> >>
> >> Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
> >> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> >> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> >> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> >> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> >> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> >> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> >> Cc: Will Deacon <will@xxxxxxxxxx>
> >> Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
> >> Cc: Nick Piggin <npiggin@xxxxxxxxx>
> >> Cc: x86@xxxxxxxxxx
> >> ---
> >>  arch/x86/include/asm/tlbflush.h | 44 +++++++++++++++++++++++++++++++++
> >>  include/asm-generic/tlb.h       |  4 +++
> >>  mm/mprotect.c                   |  3 ++-
> >>  3 files changed, 50 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> >> index 8c87a2e0b660..a617dc0a9b06 100644
> >> --- a/arch/x86/include/asm/tlbflush.h
> >> +++ b/arch/x86/include/asm/tlbflush.h
> >> @@ -255,6 +255,50 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> >>
> >>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
> >>
> >> +static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte)
> >> +{
> >> +       const pteval_t ignore_mask = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
> >> +                                    _PAGE_SOFTW3 | _PAGE_ACCESSED;
> >
> > Why is accessed ignored? Surely clearing the accessed bit needs a
> > flush if the old PTE is present.
>
> I am just following the current scheme in the kernel (x86):
>
> int ptep_clear_flush_young(struct vm_area_struct *vma,
>                            unsigned long address, pte_t *ptep)
> {
>         /*
>          * On x86 CPUs, clearing the accessed bit without a TLB flush
>          * doesn't cause data corruption. [ It could cause incorrect
>          * page aging and the (mistaken) reclaim of hot pages, but the
>          * chance of that should be relatively low. ]
>          *

If anyone ever implements the optimization of skipping the flush when
unmapping a !accessed page, then this will cause data corruption. If
nothing else, this deserves a nice comment in the new code.

>          * So as a performance optimization don't flush the TLB when
>          * clearing the accessed bit, it will eventually be flushed by
>          * a context switch or a VM operation anyway. [ In the rare
>          * event of it not getting flushed for a long time the delay
>          * shouldn't really matter because there's no real memory
>          * pressure for swapout to react to. ]
>          */
>         return ptep_test_and_clear_young(vma, address, ptep);
> }
>
> >
> >
> >> +       const pteval_t enable_mask = _PAGE_RW | _PAGE_DIRTY | _PAGE_GLOBAL;
> >> +       pteval_t oldval = pte_val(oldpte);
> >> +       pteval_t newval = pte_val(newpte);
> >> +       pteval_t diff = oldval ^ newval;
> >> +       pteval_t disable_mask = 0;
> >> +
> >> +       if (IS_ENABLED(CONFIG_X86_64) || IS_ENABLED(CONFIG_X86_PAE))
> >> +               disable_mask = _PAGE_NX;
> >> +
> >> +       /* new is non-present: need only if old is present */
> >> +       if (pte_none(newpte))
> >> +               return !pte_none(oldpte);
> >> +
> >> +       /*
> >> +        * If, excluding the ignored bits, only RW and dirty are cleared and the
> >> +        * old PTE does not have the dirty-bit set, we can avoid a flush. This
> >> +        * is possible since x86 architecture set the dirty bit atomically while
> >
> > s/set/sets/
> >
> >> +        * it caches the PTE in the TLB.
> >> +        *
> >> +        * The condition considers any change to RW and dirty as not requiring
> >> +        * flush if the old PTE is not dirty or not writable for simplification
> >> +        * of the code and to consider (unlikely) cases of changing dirty-bit of
> >> +        * write-protected PTE.
> >> +        */
> >> +       if (!(diff & ~(_PAGE_RW | _PAGE_DIRTY | ignore_mask)) &&
> >> +           (!(pte_dirty(oldpte) || !pte_write(oldpte))))
> >> +               return false;
> >
> > This logic seems confusing to me. Is your goal to say that, if the
> > old PTE was clean and writable and the new PTE is write-protected,
> > then no flush is needed?
>
> Yes.
>
> > If so, I would believe you're right, but I'm
> > not convinced you've actually implemented this. Also, there may be
> > other things going on that need flushing, e.g. a change of the address
> > or an accessed bit or NX change.
>
> The first part (diff & ~(_PAGE_RW | _PAGE_DIRTY | ignore_mask)) is supposed
> to capture changes of address, NX-bit, etc.
>
> The second part is indeed wrong. It should have been:
> (!pte_dirty(oldpte) || !pte_write(oldpte))
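
So with the fix folded in, the hunk would read something like this
(untested, just restating your correction):

        if (!(diff & ~(_PAGE_RW | _PAGE_DIRTY | ignore_mask)) &&
            (!pte_dirty(oldpte) || !pte_write(oldpte)))
                return false;

i.e. skip the flush only when nothing outside RW, the dirty bit, and
the ignored bits changed, and the old PTE was clean or write-protected.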
>
> >
> > Also, CET makes this extra bizarre.
>
> I saw something about the not-writeable-and-dirty considered differently. I
> need to have a look, but I am not sure it affects anything.
>

It affects everyone's sanity. I don't yet have an opinion as to
whether it affects correctness :)
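
FWIW, here is a quick user-space model of the check with that fix, to
make the cases concrete. This is only a sketch: the bit positions are
hand-copied from the x86 architectural PTE layout, pte_none() is
approximated by the present bit, and may_need_flush() is a made-up
stand-in for the real helper:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Architectural x86 PTE bits, hard-coded for the model. */
#define X_PRESENT   (1ULL << 0)
#define X_RW        (1ULL << 1)
#define X_ACCESSED  (1ULL << 5)
#define X_DIRTY     (1ULL << 6)
#define X_SOFTW1    (1ULL << 9)
#define X_SOFTW2    (1ULL << 10)
#define X_SOFTW3    (1ULL << 11)
#define X_NX        (1ULL << 63)

static bool may_need_flush(uint64_t oldval, uint64_t newval)
{
        const uint64_t ignore_mask = X_SOFTW1 | X_SOFTW2 | X_SOFTW3 |
                                     X_ACCESSED;
        uint64_t diff = oldval ^ newval;

        /* new is non-present: flush only if old was present */
        if (!(newval & X_PRESENT))
                return oldval & X_PRESENT;

        /*
         * Only RW/dirty (plus ignored bits) changed, and the old PTE
         * was clean or write-protected: the CPU sets the dirty bit
         * atomically and rereads the PTE on write, so skipping the
         * flush cannot lose a dirty bit or permit a stale write.
         */
        if (!(diff & ~(X_RW | X_DIRTY | ignore_mask)) &&
            (!(oldval & X_DIRTY) || !(oldval & X_RW)))
                return false;

        return true;
}

int main(void)
{
        uint64_t base = X_PRESENT | X_ACCESSED;

        /* clean+writable -> write-protected: prints 0 (no flush) */
        printf("%d\n", may_need_flush(base | X_RW, base));
        /* dirty+writable -> write-protected: prints 1 (flush) */
        printf("%d\n", may_need_flush(base | X_RW | X_DIRTY, base));
        /* NX toggled: prints 1 (flush) */
        printf("%d\n", may_need_flush(base, base | X_NX));
        return 0;
}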