[RFC 1/5] sbm: x86: fix SBM error entry path

From: Petr Tesarik <petr.tesarik1@xxxxxxxxxxxxxxxxxxx>

Normal interrupt entry from SBM should generally be treated as an entry from
kernel mode (no swapgs, no speculation mitigations), but since there is a
CPL change, the interrupt handler runs on the trampoline stack, which may
get reused if the current task is re-scheduled.

Make sure to switch to the SBM exception stack.
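
Conceptually, the stack switch works like the minimal user-space C sketch
below. The struct layout, sbm_sync_regs() and sbm_exc_stack are simplified
stand-ins invented for illustration; in the kernel the actual copy is
performed by the existing sync_regs() in arch/x86/kernel/traps.c:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct pt_regs {			/* heavily simplified frame */
	uint64_t ip, cs, flags, sp, ss;
};

/* Stand-in for the per-task SBM exception stack (illustration only). */
static unsigned char sbm_exc_stack[4096];

/*
 * Copy the register frame off the short-lived trampoline stack onto a
 * stack that stays valid if the task is re-scheduled, and hand back
 * the new frame pointer, as sync_regs() does for the thread stack.
 */
static struct pt_regs *sbm_sync_regs(struct pt_regs *eregs)
{
	struct pt_regs *regs = (struct pt_regs *)
		(sbm_exc_stack + sizeof(sbm_exc_stack)) - 1;

	if (regs != eregs)
		memcpy(regs, eregs, sizeof(*regs));
	return regs;
}

int main(void)
{
	struct pt_regs trampoline = { .ip = 0xffffffff81000000ULL };
	struct pt_regs *moved = sbm_sync_regs(&trampoline);

	printf("frame moved to %p, ip=%#jx\n",
	       (void *)moved, (uintmax_t)moved->ip);
	return 0;
}

In the patch this is a tail call (jmp rather than call): error_entry was
itself reached via call, so the saved pt_regs start at 8(%rsp), which is
loaded into %rdi as the argument, and sync_regs returns the relocated
frame pointer in %rax directly to error_entry's caller, much like the
existing from-user path in error_entry.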

Signed-off-by: Petr Tesarik <petr.tesarik1@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/entry/entry_64.S | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4ba3eea38102..96830591302d 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1062,14 +1062,20 @@ SYM_CODE_START(error_entry)
 	/*
 	 * If sandbox mode was active, adjust the saved CS,
 	 * unconditionally switch to kernel CR3 and continue
-	 * as if the interrupt was from kernel space.
+	 * as if the interrupt was from kernel space, but
+	 * switch away from the trampoline stack.
 	 */
 	movq	x86_sbm_state + SBM_kernel_cr3, %rcx
 	jrcxz	.Lerror_swapgs
 
 	andb	$~3, CS+8(%rsp)
 	movq	%rcx, %cr3
-	jmp	.Lerror_entry_done_lfence
+
+	FENCE_SWAPGS_KERNEL_ENTRY
+	CALL_DEPTH_ACCOUNT
+	leaq	8(%rsp), %rdi
+	/* Put us onto the SBM exception stack. */
+	jmp	sync_regs
 #endif
 
 .Lerror_swapgs:
-- 
2.34.1