On 23/11/2020 07:34, Hillf Danton wrote:
[...]
> After staring at the report and 311daef8013a a bit more, it seems that we
> can have a simpler fix without the help of wakeup. It is implemented
> by busy waiting until there is no more request in flight found.

I think it's better not to. It doesn't happen instantly, so it may take a
lot of spinning in some cases. Moreover, I don't want an unkillable task
eating up all the CPU if this hangs again (e.g. because of some other
case). And I'd love to keep it working similarly to
__io_uring_task_cancel(), so we don't have to think twice about all the
corner cases.

> 
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6077,13 +6077,10 @@ static int io_req_defer(struct io_kiocb
>  static void io_req_drop_files(struct io_kiocb *req)
>  {
>  	struct io_ring_ctx *ctx = req->ctx;
> -	struct io_uring_task *tctx = req->task->io_uring;
>  	unsigned long flags;
>  
>  	spin_lock_irqsave(&ctx->inflight_lock, flags);
>  	list_del(&req->inflight_entry);
> -	if (atomic_read(&tctx->in_idle))
> -		wake_up(&tctx->wait);
>  	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
>  	req->flags &= ~REQ_F_INFLIGHT;
>  	put_files_struct(req->work.identity->files);
> @@ -8706,7 +8703,6 @@ static void io_uring_cancel_files(struct
>  	while (!list_empty_careful(&ctx->inflight_list)) {
>  		struct io_task_cancel cancel = { .task = task, .files = NULL, };
>  		struct io_kiocb *req;
> -		DEFINE_WAIT(wait);
>  		bool found = false;
>  
>  		spin_lock_irq(&ctx->inflight_lock);
> @@ -8718,9 +8714,6 @@ static void io_uring_cancel_files(struct
>  			found = true;
>  			break;
>  		}
> -		if (found)
> -			prepare_to_wait(&task->io_uring->wait, &wait,
> -					TASK_UNINTERRUPTIBLE);
>  		spin_unlock_irq(&ctx->inflight_lock);
>  
>  		/* We need to keep going until we don't find a matching req */
> @@ -8733,7 +8726,6 @@ static void io_uring_cancel_files(struct
>  		/* cancellations _may_ trigger task work */
>  		io_run_task_work();
>  		schedule();
> -		finish_wait(&task->io_uring->wait, &wait);
>  	}
>  }
> 

-- 
Pavel Begunkov