This ensures that a full set of registers is always available when
PTRACE_EVENT_EXIT is reported, something that is not guaranteed for
callers of do_exit.

Additionally, this guarantees that PTRACE_EVENT_EXIT will not cause
havoc with abnormal exits.

Signed-off-by: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>
---
 kernel/exit.c   | 2 --
 kernel/signal.c | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 51e0c82b3f7d..309f1d71e340 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -763,8 +763,6 @@ void __noreturn do_exit(long code)
 	profile_task_exit(tsk);
 	kcov_task_exit(tsk);
 
-	ptrace_event(PTRACE_EVENT_EXIT, code);
-
 	validate_creds_for_do_exit(tsk);
 
 	/*
diff --git a/kernel/signal.c b/kernel/signal.c
index 63fda9b6bbf9..7214331836bc 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2890,6 +2890,8 @@ bool get_signal(struct ksignal *ksig)
 		if (exit_code & 0x7f)
 			current->flags |= PF_SIGNALED;
 
+		ptrace_event(PTRACE_EVENT_EXIT, exit_code);
+
 		/*
 		 * PF_IO_WORKER threads will catch and exit on fatal signals
 		 * themselves. They have cleanup that must be performed, so
-- 
2.20.1
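
[Not part of the patch: a minimal userspace tracer sketch, assuming
x86-64 and the standard ptrace(2) API, showing the register state that
this change makes reliably observable at the PTRACE_EVENT_EXIT stop.
The tracer enables PTRACE_O_TRACEEXIT and reads the tracee's registers
when the exit event is reported.]

#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/user.h>
#include <signal.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		/* Tracee: request tracing and stop before exiting. */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		exit(42);
	}

	/* Wait for the initial SIGSTOP, then ask for exit notifications. */
	waitpid(child, NULL, 0);
	ptrace(PTRACE_SETOPTIONS, child, NULL, (void *)PTRACE_O_TRACEEXIT);
	ptrace(PTRACE_CONT, child, NULL, NULL);

	int status;
	waitpid(child, &status, 0);

	/* PTRACE_EVENT_EXIT stop: status encodes SIGTRAP | (event << 8). */
	if (WIFSTOPPED(status) &&
	    status >> 8 == (SIGTRAP | (PTRACE_EVENT_EXIT << 8))) {
		struct user_regs_struct regs;

		/* The tracee's registers are fully populated at this stop. */
		ptrace(PTRACE_GETREGS, child, NULL, &regs);
		printf("tracee stopped in PTRACE_EVENT_EXIT, rip=%#llx\n",
		       (unsigned long long)regs.rip);
	}

	/* Let the tracee finish exiting. */
	ptrace(PTRACE_CONT, child, NULL, NULL);
	waitpid(child, &status, 0);
	return 0;
}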