These functions can be executed on the int3 stack, so kprobes are
dangerous. Tracing is probably a bad idea, too.

Fixes: b645af2d5905c4e32399005b867987919cbfc3ae
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
---

This is only lightly tested, but it seems like a good idea. Sorry I
missed this over the weekend.

I don't think there's any rush on this, but I'll discourage the
-stable people from applying the bad_iret fix until this is in.

Actually, it's probably best for the -stable kernels to hold off on
both b645af2d5905 ("x86_64, traps: Rework bad_iret") and af726f21ed8a
("x86_64, traps: Fix the espfix64 #DF fixup and rewrite it in C") for
a couple weeks so that they can accumulate some more testing. They're
security fixes, but the impact is probably so low that there's no
rush.

 arch/x86/kernel/traps.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index de801f22128a..07ab8e9733c5 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -387,7 +387,7 @@ NOKPROBE_SYMBOL(do_int3);
  * for scheduling or signal handling. The actual stack switch is done in
  * entry.S
  */
-asmlinkage __visible struct pt_regs *sync_regs(struct pt_regs *eregs)
+asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs)
 {
 	struct pt_regs *regs = eregs;
 	/* Did already sync */
@@ -413,7 +413,7 @@ struct bad_iret_stack {
 	struct pt_regs regs;
 };
 
-asmlinkage __visible
+asmlinkage __visible notrace
 struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
 {
 	/*
@@ -436,6 +436,7 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
 	BUG_ON(!user_mode_vm(&new_stack->regs));
 	return new_stack;
 }
+NOKPROBE_SYMBOL(fixup_bad_iret);
 #endif
 
 /*
-- 
1.9.3
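
For anyone unfamiliar with the annotations being added, here is a minimal
sketch of the pattern the patch applies to sync_regs() and fixup_bad_iret().
The function name do_fragile_entry_work() is made up for illustration; it is
not a real kernel symbol.

/*
 * Illustrative sketch only -- not part of the patch above.
 *
 * notrace (from <linux/compiler.h>) keeps the function tracer from
 * instrumenting the function, and NOKPROBE_SYMBOL() puts it on the
 * kprobes blacklist, so neither can fire while we are running on a
 * stack (such as the int3 stack) where re-entering via a breakpoint
 * or tracing hook would be unsafe.
 */
#include <linux/kprobes.h>	/* NOKPROBE_SYMBOL() */
#include <linux/linkage.h>	/* asmlinkage, __visible */

asmlinkage __visible notrace void do_fragile_entry_work(void)
{
	/* work that must not be probed or traced */
}
NOKPROBE_SYMBOL(do_fragile_entry_work);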