The patch titled
     Subject: lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix
has been added to the -mm tree.  Its filename is
     lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Nicholas Piggin <npiggin@xxxxxxxxx>
Subject: lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix

Fix a refcounting bug in kthread_use_mm() (the mm reference is increased
unconditionally now, but the lazy tlb refcount is still dropped only if
mm != active_mm).

Link: https://lkml.kernel.org/r/1623125298.bx63h3mopj.astroid@xxxxxxxxx
Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 kernel/kthread.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/kernel/kthread.c~lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix
+++ a/kernel/kthread.c
@@ -1314,6 +1314,11 @@ void kthread_use_mm(struct mm_struct *mm
 	WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
 	WARN_ON_ONCE(tsk->mm);
 
+	/*
+	 * It's possible that tsk->active_mm == mm here, but we must
+	 * still mmgrab(mm) and mmdrop_lazy_tlb(active_mm), because lazy
+	 * mm may not have its own refcount (see mmgrab/drop_lazy_tlb()).
+	 */
 	mmgrab(mm);
 
 	task_lock(tsk);
@@ -1338,12 +1343,9 @@ void kthread_use_mm(struct mm_struct *mm
 	 * memory barrier after storing to tsk->mm, before accessing
 	 * user-space memory. A full memory barrier for membarrier
 	 * {PRIVATE,GLOBAL}_EXPEDITED is implicitly provided by
-	 * mmdrop(), or explicitly with smp_mb().
+	 * mmdrop_lazy_tlb().
 	 */
-	if (active_mm != mm)
-		mmdrop_lazy_tlb(active_mm);
-	else
-		smp_mb();
+	mmdrop_lazy_tlb(active_mm);
 
 	to_kthread(tsk)->oldfs = force_uaccess_begin();
 }
_

Patches currently in -mm which might be from npiggin@xxxxxxxxx are

lazy-tlb-introduce-lazy-mm-refcount-helper-functions.patch
lazy-tlb-introduce-lazy-mm-refcount-helper-functions-fix.patch
lazy-tlb-allow-lazy-tlb-mm-refcounting-to-be-configurable.patch
lazy-tlb-shoot-lazies-a-non-refcounting-lazy-tlb-option.patch
powerpc-64s-enable-mmu_lazy_tlb_shootdown.patch
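
For readers following the series: the mmgrab_lazy_tlb()/mmdrop_lazy_tlb()
helpers referenced by the hunks above are introduced by
lazy-tlb-introduce-lazy-mm-refcount-helper-functions.patch and made
conditional by the configurable-refcounting patch.  The snippet below is
only a rough sketch of that shape, assuming a MMU_LAZY_TLB_REFCOUNT config
option as described in the series (the authoritative definitions live in
those patches, not here).  It illustrates why kthread_use_mm() must take a
real reference with mmgrab(mm) while dropping the lazy reference
unconditionally with mmdrop_lazy_tlb(active_mm): the lazy reference may not
be backed by a real refcount at all.

/* Sketch only -- see the helper-introduction patch for the real code. */
static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
{
	/* Pin the mm for lazy tlb use only when lazy refcounting is enabled. */
	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
		mmgrab(mm);
}

static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
{
	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
		mmdrop(mm);
	} else {
		/*
		 * mmdrop() normally provides the full barrier that
		 * membarrier {PRIVATE,GLOBAL}_EXPEDITED relies on after
		 * switching tsk->mm; supply it explicitly here.
		 */
		smp_mb();
	}
}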