Re: [RFC PATCH 1/2] mm/mmu_gather: Invalidate TLB correctly on batch allocation failure and flush

On 12/17/19 2:39 PM, Peter Zijlstra wrote:
> On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
>> Architectures with hardware walkers of the Linux page table should
>> flush the TLB on mmu_gather batch allocation failure and batch flush. Some
>> architectures, like POWER, support multiple translation modes (hash and radix),
>> and in the case of POWER only the radix translation mode needs the above TLBI.
>> This is because in hash translation mode the kernel wants to avoid this extra
>> flush, since there are no hardware walkers of the Linux page table. With radix
>> translation, the hardware also walks the Linux page table, and so the kernel
>> needs to invalidate the TLB page-walk cache before page-table pages are
>> freed.
>>
>> Based on changes from Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>
> AFAICT it is all my patch ;-)

Yes. I rebased the changes you had onto upstream. I can update the From: in the next version if you are ok with that?


> Anyway, this commit:
>
>> More details in
>> commit: d86564a2f085 ("mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE")
>
> states that you do an explicit invalidate in __p*_free_tlb(), which, if
> I'm not mistaken, is still there:
>
>    arch/powerpc/include/asm/nohash/pgalloc.h:      tlb_flush_pgtable(tlb, address);


nohash is not really radix. So we still do the TLB flush from __pte_free_tlb() for nohash, and for PPC-radix we let tlb_table_invalidate() do the flush.


> Or am I reading this wrong? I'm thinking you can remove that now.
>
> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
> index b2c0be93929d..feea1a09bbce 100644
> --- a/arch/powerpc/include/asm/tlb.h
> +++ b/arch/powerpc/include/asm/tlb.h
> @@ -27,6 +27,10 @@
>  #define tlb_flush tlb_flush
>  extern void tlb_flush(struct mmu_gather *tlb);
> +#ifdef CONFIG_HAVE_RCU_TABLE_FREE
> +/*
> + * PPC-Hash does not use the linux page-tables, so we can avoid
> + * the TLBI for page-table freeing, PPC-Radix otoh does use the
> + * page-tables and needs the TLBI.
> + */
> +#define tlb_needs_table_invalidate()	radix_enabled()
> +#endif
>
> Also, are you really sure about the !SMP case? Esp. on Radix I'm
> thinking that the PWC (page-walk-cache) can give trouble even on UP,
> when we get preempted in the middle of mmu_gather. Hmm?


Yes, looking at !SMP, I guess we do have an issue there. We free the page-table pages directly in __p*_free_tlb() with the current code. That will definitely not work. Are you suggesting we enable HAVE_RCU_TABLE_FREE even for !SMP?

>  /* Get the generic bits... */
>  #include <asm-generic/tlb.h>



-aneesh
