On Wed, 22 Aug 2018 17:55:27 +0200 Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

> On Wed, Aug 22, 2018 at 05:30:15PM +0200, Peter Zijlstra wrote:
>
> > ARM which later used this put an explicit TLB invalidate in their
> > __p*_free_tlb() functions, and PowerPC-radix followed that example.
>
> > +/*
> > + * If we want tlb_remove_table() to imply TLB invalidates.
> > + */
> > +static inline void tlb_table_invalidate(struct mmu_gather *tlb)
> > +{
> > +#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
> > +	/*
> > +	 * Invalidate page-table caches used by hardware walkers. Then we still
> > +	 * need to RCU-sched wait while freeing the pages because software
> > +	 * walkers can still be in-flight.
> > +	 */
> > +	__tlb_flush_mmu_tlbonly(tlb);
> > +#endif
> > +}
>
> Nick, Will is already looking at using this to remove the synchronous
> invalidation from __p*_free_tlb() for ARM, could you have a look to see
> if PowerPC-radix could benefit from that too?

powerpc/radix has no such issue, it already does this tracking.

We were discussing this a couple of months ago; I wasn't aware of ARM's
issue, but I suggested x86 could go the same way as powerpc.

It would have been good to get some feedback on the approaches proposed
there, because it's not just PWC tracking: if you do this, you also don't
need those silly hacks in generic code to expand the TLB address range.
So powerpc has no fundamental problem with making this stuff generic.

If you need a fix for x86 and ARM for this merge window though, I would
suggest just copying what powerpc already has. Next time we can consider
consolidating them all into generic code.

Thanks,
Nick

> Basically, using a patch like the below would give your tlb_flush()
> information on whether tables were removed or not.
> ---
>
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -96,12 +96,22 @@ struct mmu_gather {
>  #endif
>  	unsigned long		start;
>  	unsigned long		end;
> -	/* we are in the middle of an operation to clear
> -	 * a full mm and can make some optimizations */
> -	unsigned int		fullmm : 1,
> -	/* we have performed an operation which
> -	 * requires a complete flush of the tlb */
> -				need_flush_all : 1;
> +	/*
> +	 * we are in the middle of an operation to clear
> +	 * a full mm and can make some optimizations
> +	 */
> +	unsigned int		fullmm : 1;
> +
> +	/*
> +	 * we have performed an operation which
> +	 * requires a complete flush of the tlb
> +	 */
> +	unsigned int		need_flush_all : 1;
> +
> +	/*
> +	 * we have removed page directories
> +	 */
> +	unsigned int		freed_tables : 1;
>
>  	struct mmu_gather_batch	*active;
>  	struct mmu_gather_batch	local;
> @@ -136,6 +146,7 @@ static inline void __tlb_reset_range(str
>  		tlb->start = TASK_SIZE;
>  		tlb->end = 0;
>  	}
> +	tlb->freed_tables = 0;
>  }
>
>  static inline void tlb_remove_page_size(struct mmu_gather *tlb,
> @@ -269,6 +280,7 @@ static inline void tlb_remove_check_page
>  #define pte_free_tlb(tlb, ptep, address)			\
>  	do {							\
>  		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		tlb->freed_tables = 1;				\
>  		__pte_free_tlb(tlb, ptep, address);		\
>  	} while (0)
>  #endif
>
> @@ -276,7 +288,8 @@ static inline void tlb_remove_check_page
>  #ifndef pmd_free_tlb
>  #define pmd_free_tlb(tlb, pmdp, address)			\
>  	do {							\
> -		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		tlb->freed_tables = 1;				\
>  		__pmd_free_tlb(tlb, pmdp, address);		\
>  	} while (0)
>  #endif
>
> @@ -286,6 +299,7 @@ static inline void tlb_remove_check_page
>  #define pud_free_tlb(tlb, pudp, address)			\
>  	do {							\
>  		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		tlb->freed_tables = 1;				\
>  		__pud_free_tlb(tlb, pudp, address);		\
>  	} while (0)
>  #endif
>
> @@ -295,7 +309,8 @@ static inline void tlb_remove_check_page
>  #ifndef p4d_free_tlb
>  #define p4d_free_tlb(tlb, pudp, address)			\
>  	do {							\
> -		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
> +		tlb->freed_tables = 1;				\
>  		__p4d_free_tlb(tlb, pudp, address);		\
>  	} while (0)
>  #endif