On Fri, Jul 24, 2020 at 06:29:43PM -0700, Linus Torvalds wrote:
> On Fri, Jul 24, 2020 at 5:37 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
> > A follow-up question about your comment in the previous email "The
> > notion of "this is a retry, so let's do nothing" is fundamentally
> > wrong.", do you mean it is not safe?
>
> I mean it fails my "smell test".
>
> The patch didn't just avoid the TLB flush, it avoided all the other
> "mark it dirty and young" things too. And that made me go "why would
> RETRY be different in this regard"?

I had a similar concern but couldn't convince myself it's entirely
safe. Even if it is safe now, some mm change in the future may break
the current assumptions. The arm64 do_page_fault() sets
FAULT_FLAG_TRIED if a previous handle_mm_fault() returned
VM_FAULT_RETRY, and a quick grep for VM_FAULT_RETRY shows a few more
instances than the ones Yang listed. Maybe they are all safe; I just
couldn't get my head around it.

> For any architecture that guarantees that a page fault will always
> flush the old TLB entry for this kind of situation, that
> flush_tlb_fix_spurious_fault() thing can be a no-op.
>
> So that's why on x86, we just do
>
>    #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
>
> and have no issues.
>
> Note that it does *not* need to do any cross-CPU flushing or anything
> like that. So it's actually wrong (I think) to have that default
> fallback for
>
>    #define flush_tlb_fix_spurious_fault(vma, address) \
>        flush_tlb_page(vma, address)
>
> because flush_tlb_page() is the serious "do cross CPU etc".
>
> Does the arm64 flush_tlb_page() perhaps do the whole expensive
> cross-CPU thing rather than the much cheaper "just local invalidate"
> version?

I think it makes sense to have a local-only
flush_tlb_fix_spurious_fault(), but with ptep_set_access_flags()
updated to still issue the full broadcast TLBI. In addition, I added a
minor optimisation to avoid the TLB flush altogether if the old pte
was not accessible: in a read-access fault case (followed by mkyoung),
the TLB couldn't have cached a non-accessible pte (not sure it makes
much difference to Yang's case). Anyway, from ARMv8.1 onwards, the
hardware updates the access flag automatically, so an access-flag-only
fault shouldn't occur there in the first place.

I'm not sure the first dsb(nshst) below is of much use in this case:
if we got a spurious fault, the write to the pte happened on a
different CPU (and, IIUC, we shouldn't return to user with updated
ptes without a TLB flush on the same CPU). Anyway, we can refine this
if it solves Yang's performance regression.
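To make the patch easier to follow, the caller we're all talking about
is handle_pte_fault() in mm/memory.c. Paraphrasing from memory (5.8-ish,
and the comments are mine, so don't take this as a verbatim quote):

	entry = vmf->orig_pte;
	if (vmf->flags & FAULT_FLAG_WRITE) {
		if (!pte_write(entry))
			return do_wp_page(vmf);
		entry = pte_mkdirty(entry);	/* the "mark it dirty"... */
	}
	entry = pte_mkyoung(entry);		/* ..."and young" part */
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		/* pte changed: the arch code issues the broadcast TLBI */
		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
	} else {
		/*
		 * pte unchanged, i.e. a spurious fault: another CPU
		 * already updated it, so at most this CPU's TLB holds a
		 * stale entry and a local invalidate is sufficient.
		 */
		if (vmf->flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vmf->vma, vmf->address);
	}

So with the change below, the ptep_set_access_flags() leg keeps the
broadcast flush (now gated on the old pte being accessible) while the
spurious fault leg drops to a local-only invalidate.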
-------------8<-----------------------
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d493174415db..d1401cbad7d4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -268,6 +268,20 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long uaddr)
+{
+	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+
+	dsb(nshst);
+	__tlbi(vale1, addr);
+	__tlbi_user(vale1, addr);
+	dsb(nsh);
+}
+
+#define flush_tlb_fix_spurious_fault(vma, address)	\
+	local_flush_tlb_page(vma, address)
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 8afb238ff335..0accee714cc2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
 	} while (pteval != old_pteval);
 
-	flush_tlb_fix_spurious_fault(vma, address);
+	if (pte_accessible(vma->vm_mm, pte))
+		flush_tlb_page(vma, address);
+
 	return 1;
 }
 

-- 
Catalin