do_mq_timedreceive calls wq_sleep with a stack-local address. The sender
(do_mq_timedsend) uses this address to later call pipelined_send. This
leads to a very hard-to-trigger race where a do_mq_timedreceive call
might return and leave do_mq_timedsend to rely on an invalid address,
causing the following crash:

[  240.739977] RIP: 0010:wake_q_add_safe+0x13/0x60
[  240.739991] Call Trace:
[  240.739999]  __x64_sys_mq_timedsend+0x2a9/0x490
[  240.740003]  ? auditd_test_task+0x38/0x40
[  240.740007]  ? auditd_test_task+0x38/0x40
[  240.740011]  do_syscall_64+0x80/0x680
[  240.740017]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  240.740019] RIP: 0033:0x7f5928e40343

The race occurs as:

1. do_mq_timedreceive calls wq_sleep with the address of a struct
   ext_wait_queue on its function stack (aliased as `ewq_addr` here) -
   it holds a valid `struct ext_wait_queue *` only as long as the
   stack has not been overwritten.

2. `ewq_addr` gets added to info->e_wait_q[RECV].list in wq_add, and
   do_mq_timedsend receives it via wq_get_first_waiter(info, RECV) to
   call __pipelined_op.

3. The sender calls __pipelined_op::smp_store_release(&this->state,
   STATE_READY). Here is where the race window begins. (`this` is
   `ewq_addr`.)

4. If the receiver wakes up now in do_mq_timedreceive::wq_sleep, it
   will see `state == STATE_READY` and break. `ewq_addr` gets removed
   from info->e_wait_q[RECV].list.

5. do_mq_timedreceive returns, and `ewq_addr` is no longer guaranteed
   to be a `struct ext_wait_queue *` since it was on
   do_mq_timedreceive's stack. (Although the address may not get
   overwritten until another function happens to touch it, which means
   it can persist for an indefinite time.)

6. do_mq_timedsend::__pipelined_op() still believes `ewq_addr` is a
   `struct ext_wait_queue *`, and uses it to find a task_struct to
   pass to the wake_q_add_safe call. In the lucky case where nothing
   has overwritten `ewq_addr` yet, `ewq_addr->task` is the right
   task_struct. In the unlucky case, __pipelined_op::wake_q_add_safe
   gets handed a bogus address as the receiver's task_struct, causing
   the crash.

do_mq_timedsend::__pipelined_op() should not dereference `this` after
setting STATE_READY, as the receiver counterpart is now free to return.
Change __pipelined_op to call wake_q_add before setting STATE_READY,
which ensures that the receiver's task_struct can still be found via
`this`.

Fixes: c5b2cbdbdac563 ("ipc/mqueue.c: update/document memory barriers")
Signed-off-by: Varad Gautam <varad.gautam@xxxxxxxx>
Reported-by: Matthias von Faber <matthias.vonfaber@xxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # 5.6
Cc: Christian Brauner <christian.brauner@xxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>
Cc: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Davidlohr Bueso <dbueso@xxxxxxx>
---
v2: Call wake_q_add before smp_store_release, instead of using a
    get_task_struct/wake_q_add_safe combination across
    smp_store_release. (Davidlohr Bueso)

 ipc/mqueue.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index 8031464ed4ae..bfcb6f81a824 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -78,11 +78,13 @@ struct posix_msg_tree_node {
  * MQ_BARRIER:
  * To achieve proper release/acquire memory barrier pairing, the state is set to
  * STATE_READY with smp_store_release(), and it is read with READ_ONCE followed
- * by smp_acquire__after_ctrl_dep(). In addition, wake_q_add_safe() is used.
+ * by smp_acquire__after_ctrl_dep(). The state change to STATE_READY must be
+ * the last write operation, after which the blocked task can immediately
+ * return and exit.
  *
  * This prevents the following races:
  *
- * 1) With the simple wake_q_add(), the task could be gone already before
+ * 1) With wake_q_add(), the task could be gone already before
  *    the increase of the reference happens
  *	Thread A
  *				Thread B
@@ -97,10 +99,25 @@ struct posix_msg_tree_node {
  *	sys_exit()
  *				get_task_struct() // UaF
  *
- * Solution: Use wake_q_add_safe() and perform the get_task_struct() before
- * the smp_store_release() that does ->state = STATE_READY.
+ * 2) With wake_q_add(), the receiver task could have returned from the
+ *    syscall and had its stack-allocated waiter overwritten before the
+ *    sender could add it to the wake_q
+ *	Thread A
+ *				Thread B
+ *	WRITE_ONCE(wait.state, STATE_NONE);
+ *	schedule_hrtimeout()
+ *				->state = STATE_READY
+ *	<timeout returns>
+ *	if (wait.state == STATE_READY) return;
+ *	sysret to user space
+ *	overwrite receiver's stack
+ *				wake_q_add(A)
+ *				if (cmpxchg()) // corrupted waiter
  *
- * 2) Without proper _release/_acquire barriers, the woken up task
+ * Solution: Queue the task for wakeup before the smp_store_release() that
+ *    does ->state = STATE_READY.
+ *
+ * 3) Without proper _release/_acquire barriers, the woken up task
  *    could read stale data
  *
  *	Thread A
@@ -116,7 +133,7 @@ struct posix_msg_tree_node {
  *
  * Solution: use _release and _acquire barriers.
  *
- * 3) There is intentionally no barrier when setting current->state
+ * 4) There is intentionally no barrier when setting current->state
  *    to TASK_INTERRUPTIBLE: spin_unlock(&info->lock) provides the
  *    release memory barrier, and the wakeup is triggered when holding
  *    info->lock, i.e. spin_lock(&info->lock) provided a pairing
@@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
 				  struct ext_wait_queue *this)
 {
 	list_del(&this->list);
-	get_task_struct(this->task);
-
+	wake_q_add(wake_q, this->task);
 	/* see MQ_BARRIER for purpose/pairing */
 	smp_store_release(&this->state, STATE_READY);
-	wake_q_add_safe(wake_q, this->task);
 }
 
 /* pipelined_send() - send a message directly to the task waiting in
-- 
2.30.2
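
P.S. An illustrative aside for reviewers, separate from the patch
itself: below is a minimal userspace model of the fixed protocol, not
kernel code. Every name in it (struct waiter, slot, sleeper, waker) is
invented for the example, and C11 release/acquire atomics stand in for
smp_store_release()/smp_acquire__after_ctrl_dep(). It demonstrates the
rule the patch enforces: the waker must take everything it needs from
the stack-resident waiter before the release store of STATE_READY,
because that store is the point after which the sleeper may return and
its stack slot ceases to exist.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { STATE_NONE, STATE_READY };

struct waiter {
	_Atomic int state;
	int msg;			/* payload the waker fills in */
};

/* stands in for info->e_wait_q[] */
static _Atomic(struct waiter *) slot;

static void *waker(void *unused)
{
	struct waiter *w;

	while (!(w = atomic_load(&slot)))
		;			/* spin until the sleeper registers */
	w->msg = 42;			/* plays the role of wake_q_add(): done first */
	/*
	 * Hand-off point. After this release store the sleeper may
	 * return, so *w must never be dereferenced again.
	 */
	atomic_store_explicit(&w->state, STATE_READY, memory_order_release);
	return NULL;
}

static void *sleeper(void *unused)
{
	/* stack-local, like the ext_wait_queue filled in by wq_sleep() */
	struct waiter w = { .state = STATE_NONE };

	atomic_store(&slot, &w);	/* register, like wq_add() */
	while (atomic_load_explicit(&w.state,
				    memory_order_acquire) != STATE_READY)
		;			/* the kernel sleeps here instead */
	printf("woken, msg=%d\n", w.msg);
	return NULL;			/* w's storage dies here */
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, sleeper, NULL);
	pthread_create(&b, NULL, waker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Build with `gcc -pthread`. Moving the `w->msg = 42;` write below the
atomic_store_explicit() recreates the bug class this patch fixes: a
dereference of a pointer whose target may already be off the stack.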