On Sat, Mar 19, 2022 at 02:41:20PM +0100, Paolo Bonzini wrote:
> Nah, don't worry. I'll take care of it, I'm still not 100% on top of things
> but I can handle one patch. :)

Well, if you take it, then you'll have to give us an immutable branch,
please, to merge it into x86/core so that peterz can do his IBT fix on
top before he sends the stuff during the merge window.

In any case, here's the final version (did some commit message fixups +
added tags).

Thx.

---
From: Borislav Petkov <bp@xxxxxxx>
Date: Wed, 16 Mar 2022 22:05:52 +0100
Subject: [PATCH] kvm/emulate: Fix SETcc emulation function offsets with SLS

The commit in Fixes started adding an INT3 after RETs as a mitigation
against straight-line speculation.

The fastop SETcc implementation in kvm's insn emulator uses macro magic
to generate all possible SETcc functions and to jump to them when
emulating the respective instruction.

However, it hardcodes the size and alignment of those functions to 4: a
three-byte SETcc insn and a single-byte RET. BUT, with SLS, there's an
INT3 that gets slapped after the RET, which brings the whole scheme out
of alignment:

  15:   0f 90 c0                seto   %al
  18:   c3                      ret
  19:   cc                      int3
  1a:   0f 1f 00                nopl   (%rax)
  1d:   0f 91 c0                setno  %al
  20:   c3                      ret
  21:   cc                      int3
  22:   0f 1f 00                nopl   (%rax)
  25:   0f 92 c0                setb   %al
  28:   c3                      ret
  29:   cc                      int3

and this explodes like this:

  int3: 0000 [#1] PREEMPT SMP PTI
  CPU: 0 PID: 2435 Comm: qemu-system-x86 Not tainted 5.17.0-rc8-sls #1
  Hardware name: Dell Inc. Precision WorkStation T3400  /0TP412, BIOS A14 04/30/2012
  RIP: 0010:setc+0x5/0x8 [kvm]
  Code: 00 00 0f 1f 00 0f b6 05 43 24 06 00 c3 cc 0f 1f 80 00 00 00 00 0f 90 c0 c3 cc 0f \
	1f 00 0f 91 c0 c3 cc 0f 1f 00 0f 92 c0 c3 cc <0f> 1f 00 0f 93 c0 c3 cc 0f 1f 00 \
	0f 94 c0 c3 cc 0f 1f 00 0f 95 c0
  Call Trace:
   <TASK>
   ? x86_emulate_insn [kvm]
   ? x86_emulate_instruction [kvm]
   ? vmx_handle_exit [kvm_intel]
   ? kvm_arch_vcpu_ioctl_run [kvm]
   ? kvm_vcpu_ioctl [kvm]
   ? __x64_sys_ioctl
   ? do_syscall_64
   ? entry_SYSCALL_64_after_hwframe
   </TASK>

Raise the alignment value when SLS is enabled and use a macro for that
instead of hard-coding naked numbers.

Fixes: e463a09af2f0 ("x86: Add straight-line-speculation mitigation")
Reported-by: Jamie Heilman <jamie@xxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Tested-by: Jamie Heilman <jamie@xxxxxxxxxxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/YjGzJwjrvxg5YZ0Z@xxxxxxxxxxxxxxxxxxxxx
---
 arch/x86/kvm/emulate.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 5719d8cfdbd9..f321abb9a4a8 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -429,8 +429,11 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
 	FOP_END
 
 /* Special case for SETcc - 1 instruction per cc */
+
+#define SETCC_ALIGN	(4 * (1 + IS_ENABLED(CONFIG_SLS)))
+
 #define FOP_SETCC(op) \
-	".align 4 \n\t" \
+	".align " __stringify(SETCC_ALIGN) " \n\t" \
 	".type " #op ", @function \n\t" \
 	#op ": \n\t" \
 	#op " %al \n\t" \
@@ -1047,7 +1050,7 @@ static int em_bsr_c(struct x86_emulate_ctxt *ctxt)
 static __always_inline u8 test_cc(unsigned int condition, unsigned long flags)
 {
 	u8 rc;
-	void (*fop)(void) = (void *)em_setcc + 4 * (condition & 0xf);
+	void (*fop)(void) = (void *)em_setcc + SETCC_ALIGN * (condition & 0xf);
 
 	flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF;
 	asm("push %[flags]; popf; " CALL_NOSPEC
-- 
2.29.2

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
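
For reference, here's a minimal userspace sketch of the stride
arithmetic (illustrative only, not kernel code: STUB_BYTES and the sls
parameter are made-up stand-ins mirroring the SETCC_ALIGN macro from
the patch), modeling the stub-table indexing with and without SLS:

#include <stdio.h>

/* setcc (3 bytes) + ret (1 byte), plus int3 (1 byte) when SLS is on */
#define STUB_BYTES(sls)		(3 + 1 + (sls))

/* stride .align pads every stub to -- mirrors SETCC_ALIGN in the patch */
#define SETCC_ALIGN(sls)	(4 * (1 + (sls)))

int main(void)
{
	for (int sls = 0; sls <= 1; sls++) {
		int stride = SETCC_ALIGN(sls);

		printf("SLS=%d: stub is %d bytes, padded stride is %d\n",
		       sls, STUB_BYTES(sls), stride);

		/*
		 * The buggy lookup always stepped by the hardcoded 4: with
		 * SLS on, condition 1 computes offset 4, which is the INT3
		 * of stub 0, not the SETcc of stub 1 -- hence the #BP splat
		 * above.
		 */
		for (int cond = 0; cond < 4; cond++)
			printf("  cond %d -> fixed off %2d, buggy off %2d\n",
			       cond, cond * stride, cond * 4);
	}
	return 0;
}

With SLS enabled each stub grows to 5 bytes and .align rounds it up to
8, so stepping by the old constant 4 lands in the middle of the
previous stub, right on its INT3.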