This is a note to let you know that I've just added the patch titled

    x86/alternative: Don't call text_poke() in lazy TLB mode

to the 5.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-alternative-don-t-call-text_poke-in-lazy-tlb-mode.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From abee7c494d8c41bb388839bccc47e06247f0d7de Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@xxxxxxxx>
Date: Fri, 9 Oct 2020 16:42:25 +0200
Subject: x86/alternative: Don't call text_poke() in lazy TLB mode

From: Juergen Gross <jgross@xxxxxxxx>

commit abee7c494d8c41bb388839bccc47e06247f0d7de upstream.

When running in lazy TLB mode the currently active page tables might be
the ones of a previous process, e.g. when running a kernel thread.

This can be problematic in case kernel code is being modified via
text_poke() in a kernel thread, and on another processor exit_mmap() is
active for the process which was running on the first cpu before the
kernel thread.

As text_poke() is using a temporary address space and the former address
space (obtained via cpu_tlbstate.loaded_mm) is restored afterwards, there
is a race possible in case the cpu on which exit_mmap() is running wants
to make sure there are no stale references to that address space on any
cpu active (this e.g. is required when running as a Xen PV guest, where
this problem has been observed and analyzed).

In order to avoid that, drop off TLB lazy mode before switching to the
temporary address space.

Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20201009144225.12019-1-jgross@xxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/include/asm/mmu_context.h |    9 +++++++++
 1 file changed, 9 insertions(+)

--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -379,6 +379,15 @@ static inline temp_mm_state_t use_tempor
 	temp_mm_state_t temp_state;

 	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
+	 * with a stale address space WITHOUT being in lazy mode after
+	 * restoring the previous mm.
+	 */
+	if (this_cpu_read(cpu_tlbstate.is_lazy))
+		leave_mm(smp_processor_id());
+
 	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	switch_mm_irqs_off(NULL, mm, current);


Patches currently in stable-queue which might be from jgross@xxxxxxxx are

queue-5.4/xen-events-close-evtchn-after-mapping-cleanup.patch
queue-5.4/x86-alternative-don-t-call-text_poke-in-lazy-tlb-mode.patch
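
To make the race easier to see, below is a condensed sketch in C of the
save/switch/restore pattern that use_temporary_mm() and its counterpart
unuse_temporary_mm() implement, with the fix applied. This is an
illustration based on the hunk above, not the verbatim 5.4 source; in
particular the unuse_temporary_mm() body and the struct layout are
paraphrased, and the code only builds inside the kernel tree (it relies
on <asm/tlbflush.h>, <asm/mmu_context.h> and friends):

    /* Kernel-internal sketch; names mirror the real helpers. */
    typedef struct {
            struct mm_struct *mm;           /* mm to restore afterwards */
    } temp_mm_state_t;

    static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
    {
            temp_mm_state_t temp_state;

            lockdep_assert_irqs_disabled();

            /*
             * The fix: leave lazy TLB mode first, so the loaded_mm read
             * below is a live reference held by this CPU rather than the
             * possibly-exiting mm of whatever task ran here before the
             * kernel thread.
             */
            if (this_cpu_read(cpu_tlbstate.is_lazy))
                    leave_mm(smp_processor_id());

            /* Save the current address space, then switch to 'mm'. */
            temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
            switch_mm_irqs_off(NULL, mm, current);

            return temp_state;
    }

    static inline void unuse_temporary_mm(temp_mm_state_t prev_state)
    {
            lockdep_assert_irqs_disabled();

            /*
             * Restore the saved mm. Without the leave_mm() above, this
             * could make the CPU an active, non-lazy user of an address
             * space that exit_mmap() on another CPU has already decided
             * has no remaining users.
             */
            switch_mm_irqs_off(NULL, prev_state.mm, current);
    }

text_poke() brackets its write through the temporary mapping with these
two helpers, so the guard guarantees that the mm saved in temp_state (and
restored on the way out) is one this CPU legitimately owns, closing the
window observed on Xen PV.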