Stable bugfix backport request of "KVM: x86: smm: preserve interrupt shadow in SMRAM"?

Hi Maxim and Paolo, 

This is a linux-stable backport request for the following patch.

KVM: x86: smm: preserve interrupt shadow in SMRAM
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fb28875fd7da184079150295da7ee8d80a70917e

According to the link below, there may be a backport to the stable kernels,
but I do not see one in the stable trees.

https://gitlab.com/qemu-project/qemu/-/issues/1198

Would you mind sharing whether a backport already exists? If not, please let
me know whether I may send one to linux-stable.

Backporting only this patch, rather than the entire patchset, results in many
conflicts. For example, I chose 0x7f1a (32-bit) and 0x7ecb (64-bit) as the
int_shadow offsets in the SMRAM state-save area.
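
To illustrate why those offsets land on the same SMRAM bytes as the upstream
layout, here is a minimal standalone sketch (mock code, not from the kernel
tree). It assumes, as I read the 5.15 code, that the 512-byte state-save
buffer is written to smbase + 0xFE00 and that put_smstate()/GET_SMSTATE()
index it as (offset - 0x7e00):

/*
 * Mock illustration only: map an emulator smstate offset to the guest
 * physical address it ends up at in SMRAM, assuming the 512-byte buffer
 * is written to smbase + 0xFE00 and indexed as (offset - 0x7e00).
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t smram_gpa(uint32_t smbase, uint32_t smstate_off)
{
        return smbase + 0xfe00 + (smstate_off - 0x7e00);
}

int main(void)
{
        uint32_t smbase = 0x30000;      /* power-on default SMBASE */

        /* 0x7f1a -> smbase + 0xff1a; 0x7ecb -> smbase + 0xfecb */
        printf("32-bit int_shadow at 0x%x\n", smram_gpa(smbase, 0x7f1a));
        printf("64-bit int_shadow at 0x%x\n", smram_gpa(smbase, 0x7ecb));
        return 0;
}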

--------------------------------

From 90f492c865a4b7ca6187a4fc9eebe451f3d6c17e Mon Sep 17 00:00:00 2001
From: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
Date: Fri, 26 Jan 2024 14:03:59 -0800
Subject: [PATCH linux-5.15.y 1/1] KVM: x86: smm: preserve interrupt shadow in SMRAM

[ Upstream commit fb28875fd7da184079150295da7ee8d80a70917e ]

When #SMI is asserted, the CPU can be in interrupt shadow due to sti or
mov ss.

It is not mandatory in the Intel/AMD PRM to have the #SMI blocked during the
shadow, and on top of that, since neither SVM nor VMX has true support for
the SMI window, waiting for one instruction would mean single stepping
the guest.

Instead, allow #SMI in this case, but both reset the interrupt window and
stash its value in SMRAM to restore it on exit from SMM.

This fixes rare failures seen mostly on Windows guests on VMX, when #SMI
falls on the sti instruction, which manifests as a VM entry failure due
to EFLAGS.IF not being set but the STI interrupt window still being set
in the VMCS.

Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
Message-Id: <20221025124741.228045-24-mlevitsk@xxxxxxxxxx>
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

Backport of fb28875fd7da184079150295da7ee8d80a70917e, which was merged as
part of a larger series:

[PATCH RESEND v4 00/23] SMM emulation and interrupt shadow fixes
https://lore.kernel.org/all/20221025124741.228045-1-mlevitsk@xxxxxxxxxx/

Since only the last patch is backported, there are many conflicts.

The core idea of the patch:

- Save the interruptibility before entering SMM.
- Load the interruptibility after leaving SMM.

Although the real offsets in the SMRAM buffer are the same, the bugfix and
UEK6 use different offsets in the function calls. Here are some examples.

32-bit:
              bugfix      UEK6
smbase     -> 0xFEF8  -> 0x7ef8
cr4        -> 0xFF14  -> 0x7f14
int_shadow -> 0xFF1A  ->  n/a
eip        -> 0xFFF0  -> 0x7ff0
cr0        -> 0xFFFC  -> 0x7ffc

64-bit:
              bugfix      UEK6
int_shadow -> 0xFECB  ->  n/a
efer       -> 0xFED0  -> 0x7ed0
smbase     -> 0xFF00  -> 0x7f00
cr4        -> 0xFF48  -> 0x7f48
cr0        -> 0xFF58  -> 0x7f58
rip        -> 0xFF78  -> 0x7f78

Therefore, we choose the following offsets for int_shadow:

32-bit: int_shadow = 0x7f1a
64-bit: int_shadow = 0x7ecb

Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
---
 arch/x86/kvm/emulate.c | 15 +++++++++++++--
 arch/x86/kvm/x86.c     |  6 ++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 98b25a7..00df781b 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2438,7 +2438,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 	struct desc_ptr dt;
 	u16 selector;
 	u32 val, cr0, cr3, cr4;
-	int i;
+	int i, r;

 	cr0 =                      GET_SMSTATE(u32, smstate, 0x7ffc);
 	cr3 =                      GET_SMSTATE(u32, smstate, 0x7ff8);
@@ -2488,7 +2488,15 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,

 	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));

-	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+
+	if (r != X86EMUL_CONTINUE)
+		return r;
+
+	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
+	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7f1a);
+
+	return r;
 }

 #ifdef CONFIG_X86_64
@@ -2559,6 +2567,9 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 			return r;
 	}

+	static_call(kvm_x86_set_interrupt_shadow)(ctxt->vcpu, 0);
+	ctxt->interruptibility = GET_SMSTATE(u8, smstate, 0x7ecb);
+
 	return X86EMUL_CONTINUE;
 }
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index aa6f700..6b30d40 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9400,6 +9400,8 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	/* revision id */
 	put_smstate(u32, buf, 0x7efc, 0x00020000);
 	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
+
+	put_smstate(u8, buf, 0x7f1a, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
 }

 #ifdef CONFIG_X86_64
@@ -9454,6 +9456,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)

 	for (i = 0; i < 6; i++)
 		enter_smm_save_seg_64(vcpu, buf, i);
+
+	put_smstate(u8, buf, 0x7ecb, static_call(kvm_x86_get_interrupt_shadow)(vcpu));
 }
 #endif

@@ -9490,6 +9494,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);

+	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
+
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
 	static_call(kvm_x86_set_cr0)(vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
--
1.8.3.1

--------------------------------

Thank you very much!

Dongli Zhang



