On Tue, Jul 11, 2023 at 06:47:01PM -0600, Jens Axboe wrote:
> Add support for FUTEX_WAKE/WAIT primitives.
>
> IORING_OP_FUTEX_WAKE is a mix of FUTEX_WAKE and FUTEX_WAKE_BITSET, as
> it does support passing in a bitset.
>
> Similarly, IORING_OP_FUTEX_WAIT is a mix of FUTEX_WAIT and
> FUTEX_WAIT_BITSET.
>
> FUTEX_WAKE is straightforward, as we can always just do those inline.
> FUTEX_WAIT will queue the futex with an appropriate callback, and
> that callback will in turn post a CQE when it has triggered.
>
> Cancelations are supported, both from the application point-of-view
> and to be able to cancel pending waits if the ring exits before all
> events have occurred.
>
> This is just the barebones wait/wake support. PI or REQUEUE support is
> not added at this point, unclear if we might look into that later.
>
> Likewise, explicit timeouts are not supported either. It is expected
> that users who need timeouts will handle them via the usual io_uring
> mechanism for that, linked timeouts.
>
> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>

I'm not sure I'm qualified to review this :/ I really don't know
anything about how io-uring works. And the above doesn't really begin
to explain things.

> +static void io_futex_wake_fn(struct wake_q_head *wake_q, struct futex_q *q)
> +{
> +	struct io_futex_data *ifd = container_of(q, struct io_futex_data, q);
> +	struct io_kiocb *req = ifd->req;
> +
> +	__futex_unqueue(q);
> +	smp_store_release(&q->lock_ptr, NULL);
> +
> +	io_req_set_res(req, 0, 0);
> +	req->io_task_work.func = io_futex_complete;
> +	io_req_task_work_add(req);
> +}

I'm noting the WARN from futex_wake_mark() went walk-about.

Perhaps something like so?

diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index ba01b9408203..07758d48d5db 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -106,20 +106,11 @@
  * double_lock_hb() and double_unlock_hb(), respectively.
  */
 
-/*
- * The hash bucket lock must be held when this is called.
- * Afterwards, the futex_q must not be accessed. Callers
- * must ensure to later call wake_up_q() for the actual
- * wakeups to occur.
- */
-void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q)
+bool __futex_wake_mark(struct futex_q *q)
 {
-	struct task_struct *p = q->task;
-
 	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
-		return;
+		return false;
 
-	get_task_struct(p);
 	__futex_unqueue(q);
 	/*
 	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
@@ -130,6 +121,26 @@ void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q)
 	 */
 	smp_store_release(&q->lock_ptr, NULL);
 
+	return true;
+}
+
+/*
+ * The hash bucket lock must be held when this is called.
+ * Afterwards, the futex_q must not be accessed. Callers
+ * must ensure to later call wake_up_q() for the actual
+ * wakeups to occur.
+ */
+void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q)
+{
+	struct task_struct *p = q->task;
+
+	get_task_struct(p);
+
+	if (!__futex_wake_mark(q)) {
+		put_task_struct(p);
+		return;
+	}
+
 	/*
 	 * Queue the task for later wakeup for after we've released
 	 * the hb->lock.
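
For illustration only: assuming a refactor along the lines of the above,
the io_uring wake callback from the quoted patch could then call the new
__futex_wake_mark() helper instead of open-coding the unqueue, which
keeps the "refusing to wake PI futex" WARN. Rough, untested sketch using
the names from the quoted patch:

static void io_futex_wake_fn(struct wake_q_head *wake_q, struct futex_q *q)
{
	struct io_futex_data *ifd = container_of(q, struct io_futex_data, q);
	struct io_kiocb *req = ifd->req;

	/*
	 * __futex_wake_mark() does the unqueue and the lock_ptr release,
	 * and refuses (with a WARN) to wake a PI futex. Bail out if it
	 * declined the wakeup.
	 */
	if (!__futex_wake_mark(q))
		return;

	io_req_set_res(req, 0, 0);
	req->io_task_work.func = io_futex_complete;
	io_req_task_work_add(req);
}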