Parallel calls to add_device_randomness() contend on the random subsystem's
own lock. The clone side already makes the call outside of tasklist_lock;
on the exit side the call is made with tasklist_lock write-held, so
contention on the random-private lock extends the tasklist_lock hold time.
Move the call in release_task() to after the lock is dropped.

Signed-off-by: Mateusz Guzik <mjguzik@xxxxxxxxx>
---
 kernel/exit.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 3485e5fc499e..1eb2e7d36ce4 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -174,9 +174,6 @@ static void __exit_signal(struct task_struct *tsk)
 			sig->curr_target = next_thread(tsk);
 	}
 
-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-			      sizeof(unsigned long long));
-
 	/*
 	 * Accumulate here the counters for all threads as they die. We could
 	 * skip the group leader because it is the last user of signal_struct,
@@ -278,6 +275,8 @@ void release_task(struct task_struct *p)
 	write_unlock_irq(&tasklist_lock);
 	proc_flush_pid(thread_pid);
 	put_pid(thread_pid);
+	add_device_randomness((const void*) &p->se.sum_exec_runtime,
+			      sizeof(unsigned long long));
 	release_thread(p);
 	put_task_struct_rcu_user(p);
 
-- 
2.43.0
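
Not part of the patch: below is a small standalone userspace sketch of the
general technique the change applies, i.e. hoisting a call that serializes on
its own lock out of another lock's critical section so the outer lock is held
for less time. All names here (feed_entropy, exit_side_before/after) are made
up for illustration, and pthread locks stand in for the kernel's
tasklist_lock and the random subsystem's internal lock.

/* Standalone sketch, not kernel code. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t tasklist_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t random_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for add_device_randomness(): serialized on its own lock. */
static void feed_entropy(unsigned long long sample)
{
	(void)sample;
	pthread_mutex_lock(&random_lock);
	/* mix 'sample' into an entropy pool here */
	pthread_mutex_unlock(&random_lock);
}

/* Before: entropy is fed while the outer lock is still write-held,
 * so waiting on random_lock extends the tasklist_lock hold time. */
static void exit_side_before(unsigned long long runtime)
{
	pthread_rwlock_wrlock(&tasklist_lock);
	/* ... per-task teardown under the lock ... */
	feed_entropy(runtime);
	pthread_rwlock_unlock(&tasklist_lock);
}

/* After: the contended call happens once the outer lock is dropped. */
static void exit_side_after(unsigned long long runtime)
{
	pthread_rwlock_wrlock(&tasklist_lock);
	/* ... per-task teardown under the lock ... */
	pthread_rwlock_unlock(&tasklist_lock);
	feed_entropy(runtime);
}

int main(void)
{
	exit_side_before(123);
	exit_side_after(456);
	puts("ok");
	return 0;
}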