On 27/05/2020 13:23, Pavel Begunkov wrote:
> Overflowed requests in io_uring_cancel_files() should be shed only of
> their inflight and overflow refs. All other remaining references are
> owned by someone else, e.g. a submission ref held by __io_queue_sqe()
> that hasn't yet reached the point of being released.
> 
> However, if an overflowed request in io_uring_cancel_files() has extra
> refs, then after the refcount_sub_and_test(2) check fails, the code
> 
> - tries to cancel the req, which is already going away. That's
>   pointless; just go for the next lap of inflight waiting.
> 
> - calls io_put_req(), underflowing req->refs of a potentially freed
>   request.

Probably needs v2, please disregard this for now.

> Fixes: 2ca10259b418 ("io_uring: prune request from overflow list on flush")
> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> ---
>  fs/io_uring.c | 16 +++++-----------
>  1 file changed, 5 insertions(+), 11 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index de6547e68626..01851a74bb12 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -7483,19 +7483,13 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
>  			WRITE_ONCE(ctx->rings->cq_overflow,
>  				atomic_inc_return(&ctx->cached_cq_overflow));
>  
> -			/*
> -			 * Put inflight ref and overflow ref. If that's
> -			 * all we had, then we're done with this request.
> -			 */
> -			if (refcount_sub_and_test(2, &cancel_req->refs)) {
> -				io_free_req(cancel_req);
> -				finish_wait(&ctx->inflight_wait, &wait);
> -				continue;
> -			}
> +			/* Put inflight ref and overflow ref. */
> +			io_double_put_req(cancel_req);
> +		} else {
> +			io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
> +			io_put_req(cancel_req);
>  		}
>  
> -		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
> -		io_put_req(cancel_req);
>  		schedule();
>  		finish_wait(&ctx->inflight_wait, &wait);
>  	}

-- 
Pavel Begunkov
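
P.S. To make the problem a bit more concrete, here is roughly what the
current (pre-patch) overflow path boils down to, with the two problems
from the commit message marked as comments. This is a hand-written
sketch based on the removed lines above and my reading of the tree; the
surrounding loop and the REQ_F_OVERFLOW check are paraphrased, not
quoted verbatim:

	while (!list_empty_careful(&ctx->inflight_list)) {
		/* ... pick cancel_req off the inflight list ... */

		if (cancel_req->flags & REQ_F_OVERFLOW) {
			/* ... unlink it from the overflow list ... */

			/* drop the inflight ref and the overflow ref */
			if (refcount_sub_and_test(2, &cancel_req->refs)) {
				io_free_req(cancel_req);
				finish_wait(&ctx->inflight_wait, &wait);
				continue;
			}
			/*
			 * Extra refs are left, but they are owned by
			 * someone else (e.g. the submission ref in
			 * __io_queue_sqe()), so we fall through ...
			 */
		}

		/*
		 * ... pointlessly trying to cancel a request that is
		 * already going away ...
		 */
		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
		/*
		 * ... and dropping a ref we don't own, which can underflow
		 * req->refs of an already freed request.
		 */
		io_put_req(cancel_req);

		schedule();
		finish_wait(&ctx->inflight_wait, &wait);
	}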