Re: [PATCH 12/18] arch/tlb: Clean up simple architectures

On Wed, Oct 03, 2018 at 05:03:50PM +0000, Vineet Gupta wrote:
> On 09/26/2018 04:56 AM, Peter Zijlstra wrote:
> > There are generally two cases:
> >
> >  1) either the platform has an efficient flush_tlb_range() and
> >     asm-generic/tlb.h doesn't need any overrides at all.
> >
> >  2) or an architecture lacks an efficient flush_tlb_range() and
> >     we override tlb_end_vma() and tlb_flush().
> >
> > Convert all 'simple' architectures to one of these two forms.
> >

> > --- a/arch/arc/include/asm/tlb.h
> > +++ b/arch/arc/include/asm/tlb.h
> > @@ -9,29 +9,6 @@
> >  #ifndef _ASM_ARC_TLB_H
> >  #define _ASM_ARC_TLB_H
> >  
> > -#define tlb_flush(tlb)				\
> > -do {						\
> > -	if (tlb->fullmm)			\
> > -		flush_tlb_mm((tlb)->mm);	\
> > -} while (0)
> > -
> > -/*
> > - * This pair is called at time of munmap/exit to flush cache and TLB entries
> > - * for mappings being torn down.
> > - * 1) cache-flush part -implemented via tlb_start_vma( ) for VIPT aliasing D$
> > - * 2) tlb-flush part - implemted via tlb_end_vma( ) flushes the TLB range
> > - *
> > - * Note, read http://lkml.org/lkml/2004/1/15/6
> > - */
> > -
> > -#define tlb_end_vma(tlb, vma)						\
> > -do {									\
> > -	if (!tlb->fullmm)						\
> > -		flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
> > -} while (0)
> > -
> > -#define __tlb_remove_tlb_entry(tlb, ptep, address)
> > -
> >  #include <linux/pagemap.h>
> >  #include <asm-generic/tlb.h>
> 
> LGTM per discussion in an earlier thread. However, given that for "simpler" arches
> the whole series doesn't apply, can you please beef up the changelog so I don't go
> scratching my head 2 years down the line. It currently describes the hows of
> things but not exactly the whys: shift_arg_pages missing tlb_start_vma,
> move_page_tables looks dodgy, yada yadda?

Right you are. Thanks for pointing out the somewhat sparse Changelog;
typically I end up kicking myself a few years down the line.

I think I will in fact change the implementation a little and provide a
Kconfig symbol to switch the default implementation between
flush_tlb_vma() and flush_tlb_mm().
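
Something along these lines in asm-generic/tlb.h -- a minimal sketch only,
assuming the symbol ends up being called MMU_GATHER_NO_RANGE (naming not
final): the no-range case simply turns any non-empty gather into a full
mm invalidate.

#ifdef CONFIG_MMU_GATHER_NO_RANGE
/*
 * Sketch: no (efficient) flush_tlb_range() available, so a non-empty
 * gather range degrades to a full mm invalidate.
 */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->end)
		flush_tlb_mm(tlb->mm);
}
#endif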

That avoids some of the repetition. But here is a preview of the new
Changelog; does that clarify things enough?

---
Subject: arch/tlb: Clean up simple architectures
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Tue Sep 4 17:04:07 CEST 2018

The generic mmu_gather implementation is geared towards range tracking,
and provided the architecture has a fairly efficient flush_tlb_range()
implementation (or supplies a custom tlb_flush() implementation),
things will work well.

The one case this doesn't cover well is where there is no (efficient)
range invalidate at all. In this case we can select
MMU_GATHER_NO_RANGE.

So this reduces to two cases:

 1) either the platform has an efficient flush_tlb_range() and
    asm-generic/tlb.h doesn't need any overrides at all.

 2) or an architecture lacks an efficient flush_tlb_range() and
    we need to select MMU_GATHER_NO_RANGE.

Convert all 'simple' architectures to one of these two forms.

alpha:	    has no range invalidate -> 2
arc:	    already used flush_tlb_range() -> 1
c6x:	    has no range invalidate -> 2
hexagon:    has an efficient flush_tlb_range() -> 1
            (flush_tlb_mm() is in fact a full range invalidate,
	     so no need to shoot down everything)
m68k:	    has inefficient flush_tlb_range() -> 2
microblaze: has no flush_tlb_range() -> 2
mips:	    has efficient flush_tlb_range() -> 1
	    (even though it currently seems to use flush_tlb_mm())
nds32:	    already uses flush_tlb_range() -> 1
nios2:	    has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
openrisc:   has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
parisc:	    already uses flush_tlb_range() -> 1
sparc32:    already uses flush_tlb_range() -> 1
unicore32:  has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
xtensa:	    has efficient flush_tlb_range() -> 1

Note this also fixes a bug in the existing code for a number of
platforms. Those platforms that did:

  tlb_end_vma() -> if (!fullmm) flush_tlb_*()
  tlb_flush()   -> if (fullmm) flush_tlb_mm()

missed the case of shift_arg_pages(), which doesn't have @fullmm set
and doesn't call into tlb_*vma(), but still frees page-tables and thus
needs an invalidate. The new code handles this by detecting a non-empty
range, and either issuing the matching range invalidate or a full
invalidate, depending on the capabilities.
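
Roughly, the pattern in question (paraphrased from fs/exec.c:shift_arg_pages(),
not the exact code):

	tlb_gather_mmu(&tlb, mm, old_start, old_end);	/* range gather, @fullmm not set */
	free_pgd_range(&tlb, new_end, old_end, new_end, ...);	/* frees page-tables */
	tlb_finish_mmu(&tlb, old_start, old_end);	/* old code: no TLB invalidate issued */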

Cc: Nick Piggin <npiggin@xxxxxxxxx>
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Michal Simek <monstr@xxxxxxxxx>
Cc: Helge Deller <deller@xxxxxx>
Cc: Greentime Hu <green.hu@xxxxxxxxx>
Cc: Richard Henderson <rth@xxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Cc: Ley Foon Tan <lftan@xxxxxxxxxx>
Cc: Jonas Bonn <jonas@xxxxxxxxxxxx>
Cc: Mark Salter <msalter@xxxxxxxxxx>
Cc: Richard Kuo <rkuo@xxxxxxxxxxxxxx>
Cc: Vineet Gupta <vgupta@xxxxxxxxxxxx>
Cc: Paul Burton <paul.burton@xxxxxxxx>
Cc: Max Filippov <jcmvbkbc@xxxxxxxxx>
Cc: Guan Xuetao <gxt@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>



