v4:
- Fill unused part of mds_verw_sel cacheline with int3. (Andrew)
- Fix the formatting in documentation (0-day CI).
- s/inspite/in spite/ (Sean).
- Explicitly skip FB_CLEAR optimization when MDS affected (Sean).

v3: https://lore.kernel.org/r/20231025-delay-verw-v3-0-52663677ee35@xxxxxxxxxxxxxxx
- Use .entry.text section for VERW memory operand. (Andrew/PeterZ)
- Fix the duplicate header inclusion. (Chao)

v2: https://lore.kernel.org/r/20231024-delay-verw-v2-0-f1881340c807@xxxxxxxxxxxxxxx
- Removed the extra EXEC_VERW macro layers. (Sean)
- Move NOPL before VERW. (Sean)
- s/USER_CLEAR_CPU_BUFFERS/CLEAR_CPU_BUFFERS/. (Josh/Dave)
- Removed the comments before CLEAR_CPU_BUFFERS. (Josh)
- Remove CLEAR_CPU_BUFFERS from NMI returning to kernel and document the
  reason. (Josh/Dave)
- Reformat comment in md_clear_update_mitigation(). (Josh)
- Squash "x86/bugs: Cleanup mds_user_clear" patch. (Nikolay)
- s/GUEST_CLEAR_CPU_BUFFERS/CLEAR_CPU_BUFFERS/. (Josh)
- Added a patch from Sean to use CFLAGS.CF for VMLAUNCH/VMRESUME selection.
  This facilitates a single CLEAR_CPU_BUFFERS location for both VMLAUNCH
  and VMRESUME. (Sean)

v1: https://lore.kernel.org/r/20231020-delay-verw-v1-0-cff54096326d@xxxxxxxxxxxxxxx

Hi,

The legacy VERW instruction was overloaded by some processors to clear
micro-architectural CPU buffers as a mitigation for CPU bugs. This series
moves VERW execution to a later point in the exit-to-user path. This is
needed because in some cases it may be possible for kernel data to be
accessed after VERW in arch_exit_to_user_mode(). Such accesses may put
data into MDS-affected CPU buffers, for example:

  1. Kernel data accessed by an NMI between VERW and return-to-user can
     remain in CPU buffers (since an NMI returning to kernel does not
     execute VERW to clear CPU buffers).

  2. Alyssa reported that after VERW is executed,
     CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
     call. Memory accesses during stack scrubbing can move kernel stack
     contents into CPU buffers.

  3. When caller-saved registers are restored after a return from a
     function executing VERW, the kernel stack accesses can remain in
     CPU buffers (since they occur after VERW).

Although these cases are less practical to exploit, moving VERW closer
to the ring transition reduces the attack surface.
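
For reference, below is a rough sketch of the kind of asm helper the
series adds (Patch 1 in the overview below). It is not the actual patch:
the names mds_verw_sel and CLEAR_CPU_BUFFERS, the .entry.text placement,
the int3 fill and the use of ALTERNATIVE() follow the changelog and patch
titles, while the feature flag name, the RIP-relative addressing and the
exact layout are illustrative.

  /* Sketch only; needs <linux/linkage.h>, <asm/segment.h>, <asm/alternative.h>. */
  .pushsection .entry.text, "ax"
  .align 64, 0xcc                       /* own cacheline, fill bytes are int3 */
  SYM_CODE_START_NOALIGN(mds_verw_sel)
          .word __KERNEL_DS             /* a writable data segment selector for VERW */
  .align 64, 0xcc                       /* pad the rest of the cacheline with int3 */
  SYM_CODE_END(mds_verw_sel)
  .popsection

  /* Patched in via alternatives, only on CPUs that need the mitigation. */
  .macro CLEAR_CPU_BUFFERS
          ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
  .endm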
Peter Anvin" <hpa@xxxxxxxxx> To: Peter Zijlstra <peterz@xxxxxxxxxxxxx> To: Josh Poimboeuf <jpoimboe@xxxxxxxxxx> To: Andy Lutomirski <luto@xxxxxxxxxx> To: Jonathan Corbet <corbet@xxxxxxx> To: Sean Christopherson <seanjc@xxxxxxxxxx> To: Paolo Bonzini <pbonzini@xxxxxxxxxx> To: tony.luck@xxxxxxxxx To: ak@xxxxxxxxxxxxxxx To: tim.c.chen@xxxxxxxxxxxxxxx To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx> To: Nikolay Borisov <nik.borisov@xxxxxxxx> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@xxxxxxxxxxxxxxx> --- Pawan Gupta (5): x86/bugs: Add asm helpers for executing VERW x86/entry_64: Add VERW just before userspace transition x86/entry_32: Add VERW just before userspace transition x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key KVM: VMX: Move VERW closer to VMentry for MDS mitigation Sean Christopherson (1): KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH Documentation/arch/x86/mds.rst | 38 +++++++++++++++++++++++++----------- arch/x86/entry/entry.S | 17 ++++++++++++++++ arch/x86/entry/entry_32.S | 3 +++ arch/x86/entry/entry_64.S | 11 +++++++++++ arch/x86/entry/entry_64_compat.S | 1 + arch/x86/include/asm/cpufeatures.h | 2 +- arch/x86/include/asm/entry-common.h | 1 - arch/x86/include/asm/nospec-branch.h | 27 +++++++++++++------------ arch/x86/kernel/cpu/bugs.c | 15 ++++++-------- arch/x86/kernel/nmi.c | 2 -- arch/x86/kvm/vmx/run_flags.h | 7 +++++-- arch/x86/kvm/vmx/vmenter.S | 9 ++++++--- arch/x86/kvm/vmx/vmx.c | 19 +++++++++++++----- 13 files changed, 106 insertions(+), 46 deletions(-) --- base-commit: 05d3ef8bba77c1b5f98d941d8b2d4aeab8118ef1 change-id: 20231011-delay-verw-d0474986b2c3 Best regards, -- Thanks, Pawan