On Sat, Dec 30, 2023 at 07:26:11PM +0100, Alexandre Ghiti wrote:
> Hi Jisheng,

Hi Alex,

>
> On 28/12/2023 09:46, Jisheng Zhang wrote:
> > The mmu_gather code sets fullmm=1 when tearing down the entire address
> > space for an mm_struct on exit or execve. So if the underlying platform
> > supports ASID, the tlb flushing can be avoided because the ASID
> > allocator will never re-allocate a dirty ASID.
> >
> > The performance of Process creation in unixbench on the T-HEAD TH1520
> > platform is improved by about 4%.
> >
> > Signed-off-by: Jisheng Zhang <jszhang@xxxxxxxxxx>
> > ---
> >  arch/riscv/include/asm/tlb.h | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
> > index 1eb5682b2af6..35f3c214332e 100644
> > --- a/arch/riscv/include/asm/tlb.h
> > +++ b/arch/riscv/include/asm/tlb.h
> > @@ -12,10 +12,19 @@ static void tlb_flush(struct mmu_gather *tlb);
> >  #define tlb_flush tlb_flush
> >  #include <asm-generic/tlb.h>
> > +#include <asm/mmu_context.h>
> >  static inline void tlb_flush(struct mmu_gather *tlb)
> >  {
> >  #ifdef CONFIG_MMU
> > +	/*
> > +	 * If ASID is supported, the ASID allocator will either invalidate the
> > +	 * ASID or mark it as used. So we can avoid TLB invalidation when
> > +	 * pulling down a full mm.
> > +	 */
>
>
> Given that the number of bits for the ASID is limited, at some point we'll
> reuse previously allocated ASIDs, so the ASID allocator must make sure to
> invalidate the entries when reusing an ASID: can you point to where this
> is done?

Per my understanding of the code, the path would be (a condensed sketch of
this path is included at the bottom of this mail):

set_mm_asid()
  __new_context()
    __flush_context() // set context_tlb_flush_pending
  if (need_flush_tlb)
    local_flush_tlb_all()

Thanks

>
> > +	if (static_branch_likely(&use_asid_allocator) && tlb->fullmm)
> > +		return;
> > +
> >  	if (tlb->fullmm || tlb->need_flush_all)
> >  		flush_tlb_mm(tlb->mm);
> >  	else
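For completeness, here is a heavily condensed sketch of that allocator path,
paraphrased from my reading of arch/riscv/mm/context.c. The locking, the
reserved-ASID bookkeeping and the satp programming are omitted, and local
variable names may not match the tree exactly; it is only meant to show why
a recycled ASID is never used before the local TLB has been flushed:

  /* Called with context_lock held once the ASID space is exhausted */
  static void __flush_context(void)
  {
          /* Start over: forget all previously handed-out ASIDs */
          bitmap_clear(context_asid_map, 0, num_asids);

          /* ... ASIDs still live on other CPUs are re-reserved here ... */

          /* Make every CPU flush its TLB before running with a recycled ASID */
          cpumask_setall(&context_tlb_flush_pending);
  }

  static void set_mm_asid(struct mm_struct *mm, unsigned int cpu)
  {
          bool need_flush_tlb = false;
          unsigned long cntx = atomic_long_read(&mm->context.id);

          if (need_new_context(mm, cntx)) {
                  /* May roll the generation over and call __flush_context() */
                  cntx = __new_context(mm);
                  atomic_long_set(&mm->context.id, cntx);
          }

          if (cpumask_test_and_clear_cpu(cpu, &context_tlb_flush_pending))
                  need_flush_tlb = true;

          /* ... satp is written with the (possibly recycled) ASID here ... */

          if (need_flush_tlb)
                  local_flush_tlb_all();
  }

So by the time a recycled ASID becomes visible to the hardware on a given
CPU, that CPU has already done a local_flush_tlb_all(), which is why
skipping the flush in tlb_flush() for the fullmm teardown case looks safe
to me when use_asid_allocator is enabled.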