This is a note to let you know that I've just added the patch titled

    x86/entry_64: Add VERW just before userspace transition

to the 5.15-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-entry_64-add-verw-just-before-userspace-transition.patch
and it can be found in the queue-5.15 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable+bounces-27525-greg=kroah.com@xxxxxxxxxxxxxxx Tue Mar 12 22:11:02 2024
From: Pawan Gupta <pawan.kumar.gupta@xxxxxxxxxxxxxxx>
Date: Tue, 12 Mar 2024 14:10:51 -0700
Subject: x86/entry_64: Add VERW just before userspace transition
To: stable@xxxxxxxxxxxxxxx
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Message-ID: <20240312-delay-verw-backport-5-15-y-v2-3-e0f71d17ed1b@xxxxxxxxxxxxxxx>
Content-Disposition: inline

From: Pawan Gupta <pawan.kumar.gupta@xxxxxxxxxxxxxxx>

commit 3c7501722e6b31a6e56edd23cea5e77dbb9ffd1a upstream.

The mitigation for MDS is to use the VERW instruction to clear any
secrets in the CPU buffers. Data from memory accesses made after VERW
executes can still remain in the CPU buffers, so it is safer to execute
VERW late in the return-to-user path to minimize the window in which
kernel data can end up in the CPU buffers. There are not many kernel
secrets to be had after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state
is restored. This helps minimize the chances of kernel data ending up
in the CPU buffers after executing VERW.

Note that the mitigation at the new location is not yet enabled.

  Corner case not handled
  =======================
  Interrupts returning to kernel don't clear CPU buffers, since the
  exit-to-user path is expected to do that anyway. But there could be
  a case where an NMI is generated in the kernel after the exit-to-user
  path has cleared the buffers. This case is not handled, and an NMI
  returning to kernel doesn't clear CPU buffers, because:

  1. It is rare to get an NMI after VERW, but before returning to user.
  2. For an unprivileged user, there is no known way to make that NMI
     less rare or to target it.
  3. It would take a large number of these precisely-timed NMIs to
     mount an actual attack. There's presumably not enough bandwidth.
  4. The NMI in question occurs after a VERW, i.e. when user state is
     restored and most interesting data is already scrubbed. What's
     left is only the data that the NMI touches, and that may or may
     not be of any interest.

[ pawan: resolved conflict for hunk swapgs_restore_regs_and_return_to_usermode ]

Suggested-by: Dave Hansen <dave.hansen@xxxxxxxxx>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@xxxxxxxxxxxxxxx>
Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-2-a6216d83edb7%40linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
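Note: the CLEAR_CPU_BUFFERS macro used throughout the hunks below is
introduced by x86-bugs-add-asm-helpers-for-executing-verw.patch in this
same queue (the _ASM_RIP helper comes from the queued _ASM_RIP patch).
As a rough sketch, it expands to:

	/*
	 * No-op unless X86_FEATURE_CLEAR_CPU_BUF is set; otherwise a VERW
	 * with a memory operand (the only form that clears CPU buffers),
	 * pointing at a valid selector word.
	 */
	.macro CLEAR_CPU_BUFFERS
		ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
	.endm

VERW clobbers only CFLAGS.ZF, and user flags are reloaded afterwards
(from R11 by SYSRET, or from the IRET frame), which is why it is safe
to run this late, after the user registers have been restored.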
 arch/x86/entry/entry_64.S        |   11 +++++++++++
 arch/x86/entry/entry_64_compat.S |    1 +
 2 files changed, 12 insertions(+)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -219,6 +219,7 @@ syscall_return_via_sysret:
 	popq	%rdi
 	popq	%rsp
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_CODE_END(entry_SYSCALL_64)
 
@@ -637,6 +638,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_
 	/* Restore RDI. */
 	popq	%rdi
 	SWAPGS
+	CLEAR_CPU_BUFFERS
 	INTERRUPT_RETURN
 
 
@@ -743,6 +745,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1466,6 +1470,12 @@ nmi_restore:
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
 	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
+	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction. We are returning to kernel mode, so this
 	 * cannot result in a fault. Similarly, we don't need to worry
@@ -1482,6 +1492,7 @@ SYM_CODE_END(asm_exc_nmi)
 SYM_CODE_START(ignore_sysret)
 	UNWIND_HINT_EMPTY
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(ignore_sysret)
 #endif
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -319,6 +319,7 @@ sysret32_from_system_call:
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(entry_SYSCALL_compat)
 


Patches currently in stable-queue which might be from kroah.com@xxxxxxxxxxxxxxx are

queue-5.15/x86-rfds-mitigate-register-file-data-sampling-rfds.patch
queue-5.15/x86-entry_32-add-verw-just-before-userspace-transition.patch
queue-5.15/x86-bugs-add-asm-helpers-for-executing-verw.patch
queue-5.15/kvm-x86-export-rfds_no-and-rfds_clear-to-guests.patch
queue-5.15/x86-asm-add-_asm_rip-macro-for-x86-64-rip-suffix.patch
queue-5.15/x86-entry_64-add-verw-just-before-userspace-transition.patch
queue-5.15/x86-mmio-disable-kvm-mitigation-when-x86_feature_clear_cpu_buf-is-set.patch
queue-5.15/x86-bugs-use-alternative-instead-of-mds_user_clear-static-key.patch
queue-5.15/documentation-hw-vuln-add-documentation-for-rfds.patch
queue-5.15/kvm-vmx-use-bt-jnc-i.e.-eflags.cf-to-select-vmresume-vs.-vmlaunch.patch
queue-5.15/kvm-vmx-move-verw-closer-to-vmentry-for-mds-mitigation.patch