Re: [patch 01/15] mm/memory.c: avoid access flag update TLB flush for retried page fault

On Sat, Jul 25, 2020 at 04:58:43PM +0100, Catalin Marinas wrote:
> On Fri, Jul 24, 2020 at 06:29:43PM -0700, Linus Torvalds wrote:
> > On Fri, Jul 24, 2020 at 5:37 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
> > > A follow-up question about your comment in the previous email "The
> > > notion of "this is a retry, so let's do nothing" is fundamentally
> > > wrong.", do you mean it is not safe?
> > 
> > I mean it fails my "smell test".
> > 
> > The patch didn't just avoid the TLB flush, it avoided all the other
> > "mark it dirty and young" things too. And that made me go "why would
> > RETRY be different in this regard"?
> 
> I had a similar concern, couldn't convince myself it's entirely safe.
> Even if it is safe now, some mm change in the future may break the
> current assumptions.
> 
> The arm64 do_page_fault() sets FAULT_FLAG_TRIED if a previous
> handle_mm_fault() returns VM_FAULT_RETRY. A quick grep for
> VM_FAULT_RETRY shows a few more instances than what Yang listed. Maybe
> they are all safe, I just couldn't get my head around it.
> 
> > For any architecture that guarantees that a page fault will always
> > flush the old TLB entry for this kind of situation, that
> > flush_tlb_fix_spurious_fault() thing can be a no-op.
> > 
> > So that's why on x86, we just do
> > 
> >   #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
> > 
> > and have no issues.
> > 
> > Note that it does *not* need to do any cross-CPU flushing or anything
> > like that. So it's actually wrong (I think) to have that default
> > fallback for
> > 
> >    #define flush_tlb_fix_spurious_fault(vma, address)
> > flush_tlb_page(vma, address)
> > 
> > because flush_tlb_page() is the serious "do cross CPU etc".
> > 
> > Does the arm64 flush_tlb_page() perhaps do the whole expensive
> > cross-CPU thing rather than the much cheaper "just local invalidate"
> > version?
> 
> I think it makes sense to have a local-only
> flush_tlb_fix_spurious_fault(), but with ptep_set_access_flags() updated
> to still issue the full broadcast TLBI. In addition, I added a minor
> optimisation to avoid the TLB flush if the old pte was not accessible.
> In a read-access fault case (followed by mkyoung), the TLB wouldn't have
> cached a non-accessible pte (not sure it makes much difference to Yang's
> case). Anyway, from ARMv8.1 onwards, the hardware handles the access
> flag automatically.
> 
> I'm not sure the first dsb(nshst) below is of much use in this case. If
> we got a spurious fault, the write to the pte happened on a different
> CPU (IIUC, we shouldn't return to user with updated ptes without a TLB
> flush on the same CPU). Anyway, we can refine this if it solves Yang's
> performance regression.
> 
> -------------8<-----------------------
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index d493174415db..d1401cbad7d4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -268,6 +268,20 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>  
> +static inline void local_flush_tlb_page(struct vm_area_struct *vma,
> +					unsigned long uaddr)
> +{
> +	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
> +
> +	dsb(nshst);
> +	__tlbi(vale1, addr);
> +	__tlbi_user(vale1, addr);
> +	dsb(nsh);
> +}
> +
> +#define flush_tlb_fix_spurious_fault(vma, address) \
> +	local_flush_tlb_page(vma, address)
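
[The quoted hunk only shows the tlbflush.h side; a minimal sketch of the
ptep_set_access_flags() tail Catalin describes above might look like the
following, assuming the existing arm64/generic helpers pte_accessible()
and flush_tlb_page(), with the atomic flag-merging loop elided. This is
illustrative only, not the actual posted patch.]

	int ptep_set_access_flags(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty)
	{
		pte_t old_pte = READ_ONCE(*ptep);

		/* ... atomically merge the access/dirty/write bits into *ptep ... */

		/*
		 * Only a pte that was previously accessible can be sitting in
		 * a TLB, so a fault on a PROT_NONE or not-yet-mapped pte needs
		 * no invalidation at all; otherwise keep the broadcast flush.
		 */
		if (pte_accessible(vma->vm_mm, old_pte))
			flush_tlb_page(vma, address);
		return 1;
	}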

Why can't we just have flush_tlb_fix_spurious_fault() be a NOP on arm64?

We only perform local TLB invalidation in a few special places (e.g. ASID
rollover, hibernate) so everywhere else that TLB invalidation is required
to prevent a CPU from re-taking a fault, it will have been broadcast.

Since the architecture prohibits the TLB from caching invalid entries,
software access/dirty updates are fine without additional flushing. The only
problematic case I can think of is on the invalid->valid (i.e. map) path,
where we elide the expensive DSB instruction because (a) most CPUs have a
walker that can snoop the store buffer and (b) even if they don't, the
store buffer tends to drain by the time we get back to userspace. Even
if that were a problem, flush_tlb_fix_spurious_fault() wouldn't be the
right hook, since the DSB must occur on the CPU that did the pte update.

So I'd be inclined to drop flush_tlb_fix_spurious_fault() altogether,
as I don't see why it's needed.
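
In code terms, that would presumably amount to arm64 carrying the same
empty definition x86 already has, overriding the generic flush_tlb_page()
fallback (a sketch, not a posted patch):

	/*
	 * Sketch: if broadcast invalidation already covers every case where a
	 * stale entry could be re-faulted on, the spurious-fault hook can be
	 * empty on arm64, as it is on x86.
	 */
	#define flush_tlb_fix_spurious_fault(vma, address)	do { } while (0)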

Have I missed something?

Will



