The quilt patch titled
     Subject: x86/mm: further clarify switch_mm_irqs_off() documentation
has been removed from the -mm tree.  Its filename was
     x86-mm-further-clarify-switch_mm_irqs_off-documentation.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Subject: x86/mm: further clarify switch_mm_irqs_off() documentation
Date: Thu, 22 Feb 2024 19:09:10 +0000

Commit accf6b23d1e5a ("x86/mm: clarify "prev" usage in
switch_mm_irqs_off()") attempted to clarify x86's usage of the arguments
passed by generic code, specifically the "prev" argument that is unused by
x86.  However, it could have done a better job with the comment above
switch_mm_irqs_off().  Rewrite this comment according to Dave Hansen's
suggestion.

Link: https://lkml.kernel.org/r/20240222190911.1903054-1-yosryahmed@xxxxxxxxxx
Fixes: 3cfd6625a6cf ("x86/mm: clarify "prev" usage in switch_mm_irqs_off()")
Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Suggested-by: Dave Hansen <dave.hansen@xxxxxxxxx>
Acked-by: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Borislav Petkov (AMD) <bp@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/x86/mm/tlb.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/x86/mm/tlb.c~x86-mm-further-clarify-switch_mm_irqs_off-documentation
+++ a/arch/x86/mm/tlb.c
@@ -493,10 +493,10 @@ static inline void cr4_update_pce_mm(str
 #endif
 
 /*
- * The "prev" argument passed by the caller does not always match CR3. For
- * example, the scheduler passes in active_mm when switching from lazy TLB mode
- * to normal mode, but switch_mm_irqs_off() can be called from x86 code without
- * updating active_mm. Use cpu_tlbstate.loaded_mm instead.
+ * This optimizes when not actually switching mm's. Some architectures use the
+ * 'unused' argument for this optimization, but x86 must use
+ * 'cpu_tlbstate.loaded_mm' instead because it does not always keep
+ * 'current->active_mm' up to date.
  */
 void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 			struct task_struct *tsk)
_

Patches currently in -mm which might be from yosryahmed@xxxxxxxxxx are
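
For readers following the comment change, below is a minimal userspace sketch
of the pattern the new comment describes: the switch routine ignores the
caller-supplied "prev"/"unused" mm and instead compares "next" against its own
per-CPU record (standing in for cpu_tlbstate.loaded_mm) to decide whether any
work is needed. This is an illustration only, not the actual
switch_mm_irqs_off() implementation; the names loaded_mm, example_switch_mm()
and mm_struct's "name" field are invented for the example.

/*
 * Sketch of "skip the switch when the mm is unchanged", keyed off the
 * switcher's own bookkeeping rather than the caller's idea of "prev".
 * Build with: cc -Wall -o switch_sketch switch_sketch.c
 */
#include <stdio.h>

struct mm_struct { const char *name; };

/* Stands in for this CPU's cpu_tlbstate.loaded_mm. */
static struct mm_struct *loaded_mm;

static void example_switch_mm(struct mm_struct *unused, struct mm_struct *next)
{
	/* Trust our own record, not the caller's "prev" argument. */
	struct mm_struct *real_prev = loaded_mm;

	(void)unused;

	if (real_prev == next) {
		printf("same mm (%s), skipping the expensive switch\n",
		       next->name);
		return;
	}

	printf("switching from %s to %s\n",
	       real_prev ? real_prev->name : "(none)", next->name);
	loaded_mm = next;
}

int main(void)
{
	struct mm_struct a = { "mm_a" }, b = { "mm_b" };

	example_switch_mm(NULL, &a);	/* real switch */
	/* Caller passes a stale "prev"; the result is still correct. */
	example_switch_mm(&b, &a);	/* no-op: next is already loaded */
	example_switch_mm(&a, &b);	/* real switch */
	return 0;
}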