Excerpts from Andy Lutomirski's message of November 29, 2020 10:36 am:
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <npiggin@xxxxxxxxx> wrote:
>>
>> NOMMU systems could easily go without this and save a bit of code
>> and the refcount atomics, because their mm switch is a no-op. I
>> haven't flipped them over because haven't audited all arch code to
>> convert over to using the _lazy_tlb refcounting.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
>> ---
>>  arch/Kconfig             | 11 +++++++
>>  include/linux/sched/mm.h | 13 ++++++--
>>  kernel/sched/core.c      | 68 +++++++++++++++++++++++++++++-----------
>>  kernel/sched/sched.h     |  4 ++-
>>  4 files changed, 75 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/Kconfig b/arch/Kconfig
>> index 56b6ccc0e32d..596bf589d74b 100644
>> --- a/arch/Kconfig
>> +++ b/arch/Kconfig
>> @@ -430,6 +430,17 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
>>  	  irqs disabled over activate_mm. Architectures that do IPI based TLB
>>  	  shootdowns should enable this.
>>
>> +# Should make this depend on MMU, because there is little use for lazy mm switching
>> +# with NOMMU. Must audit NOMMU architecture code for lazy mm refcounting first.
>> +config MMU_LAZY_TLB
>> +	def_bool y
>> +	help
>> +	  Enable "lazy TLB" mmu context switching for kernel threads.
>> +
>> +config MMU_LAZY_TLB_REFCOUNT
>> +	def_bool y
>> +	depends on MMU_LAZY_TLB
>> +
>
> This could use some documentation as to what "no" means.

Sure, I can add a bit more.
>
>> config ARCH_HAVE_NMI_SAFE_CMPXCHG
>> 	bool
>>
>> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
>> index 7157c0f6fef8..bd0f27402d4b 100644
>> --- a/include/linux/sched/mm.h
>> +++ b/include/linux/sched/mm.h
>> @@ -51,12 +51,21 @@ static inline void mmdrop(struct mm_struct *mm)
>>  /* Helpers for lazy TLB mm refcounting */
>>  static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
>>  {
>> -	mmgrab(mm);
>> +	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
>> +		mmgrab(mm);
>>  }
>>
>>  static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
>>  {
>> -	mmdrop(mm);
>> +	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
>> +		mmdrop(mm);
>> +	} else {
>> +		/*
>> +		 * mmdrop_lazy_tlb must provide a full memory barrier, see the
>> +		 * membarrier comment finish_task_switch.
>
> "membarrier comment in finish_task_switch()", perhaps?

Sure.

Thanks,
Nick