From: Masami Hiramatsu <mhiramat@xxxxxxxxxx>

commit ee6a7354a3629f9b65bc18dbe393503e9440d6f5 upstream.

Since the MOV SS and POP SS instructions delay exceptions until the next
instruction is executed, single-stepping on them by kprobes must be
prohibited.

However, kprobes usually executes those instructions directly on the
trampoline buffer (a.k.a. kprobe-booster), except for kprobes which have
a post_handler. Thus if a kprobe user probes MOV SS with a post_handler,
it will single-step on the MOV SS.

This means it is safe if MOV SS is probed via ftrace or perf/bpf, since
those don't use the post_handler.

Anyway, since the stack switching is a rare case, it is safer to just
reject kprobes on such instructions.

Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
Cc: Francis Deslauriers <francis.deslauriers@xxxxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: "H . Peter Anvin" <hpa@xxxxxxxxx>
Cc: Yonghong Song <yhs@xxxxxx>
Cc: Borislav Petkov <bp@xxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: "David S . Miller" <davem@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/152587069574.17316.3311695234863248641.stgit@devbox
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/include/asm/insn.h    | 18 ++++++++++++++++++
 arch/x86/kernel/kprobes/core.c |  4 ++++
 2 files changed, 22 insertions(+)

--- a/arch/x86/include/asm/insn.h
+++ b/arch/x86/include/asm/insn.h
@@ -198,4 +198,22 @@ static inline int insn_offset_immediate(
 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
 }
 
+#define POP_SS_OPCODE 0x1f
+#define MOV_SREG_OPCODE 0x8e
+
+/*
+ * Intel SDM Vol.3A 6.8.3 states;
+ * "Any single-step trap that would be delivered following the MOV to SS
+ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
+ * suppressed."
+ * This function returns true if @insn is MOV SS or POP SS. On these
+ * instructions, single stepping is suppressed.
+ */
+static inline int insn_masking_exception(struct insn *insn)
+{
+	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
+		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
+		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
+}
+
 #endif /* _ASM_X86_INSN_H */
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -372,6 +372,10 @@ int __copy_instruction(u8 *dest, u8 *src
 		return 0;
 	memcpy(dest, insn.kaddr, length);
 
+	/* We should not singlestep on the exception masking instructions */
+	if (insn_masking_exception(&insn))
+		return 0;
+
 #ifdef CONFIG_X86_64
 	if (insn_rip_relative(&insn)) {
 		s64 newdisp;
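
For illustration only, the following is a minimal standalone userspace sketch
of the check that insn_masking_exception() performs; it is not part of the
patch. It reuses the opcode values defined above and reimplements the ModRM
reg-field extraction locally (the kernel uses its own insn decoder and the
X86_MODRM_REG() macro instead). The encoded byte pairs in main() are
illustrative assumptions: MOV to SS is opcode 0x8e with ModRM.reg == 2
selecting the SS segment register.

/*
 * Standalone sketch of the insn_masking_exception() test above.
 * NOT the kernel implementation: the kernel decodes the full instruction
 * with its insn decoder, while this only inspects the first opcode byte
 * and the ModRM byte.
 */
#include <stdio.h>

#define POP_SS_OPCODE	0x1f	/* value used by the patch above */
#define MOV_SREG_OPCODE	0x8e	/* MOV Sreg, r/m */

/* Local stand-in for the kernel's X86_MODRM_REG(): bits 5:3 of ModRM. */
#define MODRM_REG(modrm)	(((modrm) >> 3) & 0x7)

/*
 * Return 1 if the instruction is MOV to SS or POP SS, i.e. it would
 * suppress the single-step trap that kprobes relies on.
 */
static int masks_exceptions(unsigned char opcode, unsigned char modrm)
{
	return opcode == POP_SS_OPCODE ||
	       (opcode == MOV_SREG_OPCODE && MODRM_REG(modrm) == 2);
}

int main(void)
{
	/* mov %eax,%ss: opcode 0x8e, ModRM 0xd0 (reg field 2 selects SS) */
	printf("mov %%eax,%%ss -> %d\n", masks_exceptions(0x8e, 0xd0));
	/* mov %eax,%ds: opcode 0x8e, ModRM 0xd8 (reg field 3 selects DS) */
	printf("mov %%eax,%%ds -> %d\n", masks_exceptions(0x8e, 0xd8));
	/* first opcode byte equal to POP_SS_OPCODE, ModRM irrelevant */
	printf("POP_SS_OPCODE  -> %d\n", masks_exceptions(POP_SS_OPCODE, 0));
	return 0;
}

In the kernel, this test runs early in __copy_instruction(), so a kprobe on
such an instruction is rejected before it is armed, rather than risking a
single-step over MOV SS / POP SS when a post_handler is attached.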