[PATCH 2/3] x86/entry/32: Check for VM86 mode in slow-path check

From: Joerg Roedel <jroedel@xxxxxxx>

The SWITCH_TO_KERNEL_STACK macro checks only for CPL == 0
to decide whether to take the slow, paranoid entry path.
The problem is that this check can also evaluate true when
coming from VM86 mode, because there the saved CS holds a
real-mode segment value whose low bits are not an RPL and
can be zero. This is not a correctness problem, as the
paranoid path handles VM86 stack-frames just fine, but it
is unnecessary, because the normal code path handles VM86
mode as well (and faster).

Extend the check to include VM86 mode. This also makes an
optimization of the paranoid path possible.

Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
---
 arch/x86/entry/entry_32.S | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
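
For illustration only, not part of the patch: a minimal,
standalone C sketch of the extended check. The constant
values match the kernel's definitions (X86_EFLAGS_VM is bit
17 of EFLAGS; SEGMENT_RPL_MASK and USER_RPL are both 3), but
the helper name entry_from_kernel() is made up for this
sketch.

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define X86_EFLAGS_VM    (1u << 17) /* EFLAGS.VM: virtual-8086 mode */
  #define SEGMENT_RPL_MASK 0x3u       /* low two selector bits        */
  #define USER_RPL         0x3u       /* requested privilege level 3  */

  /*
   * Mirrors the new assembly: fold EFLAGS.VM and the RPL bits
   * of the saved CS into one value so that a single unsigned
   * compare against USER_RPL decides. Only a frame with VM
   * clear and RPL < 3 came from the kernel; a VM86 frame has
   * VM set and therefore compares well above USER_RPL even
   * when the low CS bits happen to be zero.
   */
  static bool entry_from_kernel(uint32_t eflags, uint32_t cs)
  {
          uint32_t mixed = (eflags & X86_EFLAGS_VM) |
                           (cs & SEGMENT_RPL_MASK);

          return mixed < USER_RPL;
  }

  int main(void)
  {
          /*
           * VM86 frame with a real-mode CS of 0x0000: the old
           * test, (CS & SEGMENT_RPL_MASK) == 0, would take the
           * kernel path here; the extended check does not.
           */
          assert(!entry_from_kernel(X86_EFLAGS_VM, 0x0000));
          assert(!entry_from_kernel(0, 0x0023)); /* ring-3 user CS   */
          assert( entry_from_kernel(0, 0x0010)); /* ring-0 kernel CS */

          return 0;
  }

The unsigned compare is what the jb in the patch relies on:
any frame with the VM bit set mixes in a value far above
USER_RPL, so it can never be mistaken for a kernel frame.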

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 010cdb4..2767c62 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -414,8 +414,16 @@
 	andl	$(0x0000ffff), PT_CS(%esp)
 
 	/* Special case - entry from kernel mode via entry stack */
-	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
-	jz	.Lentry_from_kernel_\@
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %cl
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+	movl	PT_CS(%esp), %ecx
+	andl	$SEGMENT_RPL_MASK, %ecx
+#endif
+	cmpl	$USER_RPL, %ecx
+	jb	.Lentry_from_kernel_\@
 
 	/* Bytes to copy */
 	movl	$PTREGS_SIZE, %ecx
-- 
2.7.4



