Re: Polled I/O cannot find completions

On 3/31/2020 11:43 AM, Bijan Mottahedeh wrote:
>> Does io_uring though have to deal with BLK_QC_T_NONE at all?  Or are
>> you saying that it should never receive that result? That's one of
>> the things I'm not clear about.
>
> BLK_QC_T_* are block cookies; they are only valid inside the block
> layer. Only the poll handler should have to deal with them, inside
> its f_op->iopoll() implementation. The cookie is simply passed from
> the queue side to the poll side.
>
> So no, io_uring shouldn't have to deal with them at all.
>
> The problem, as I see it, is if the block layer returns BLK_QC_T_NONE
> while the IO was actually queued and requires polling to be found.
> We'd end up with IO timeouts for those requests, and that's not a
> good thing...
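
For reference, that cookie round trip is visible in the block device's
own poll handler; below is roughly what fs/block_dev.c had around v5.6
(reproduced from memory, so treat it as a sketch rather than verbatim).
The blk_qc_t stored at submission in kiocb->ki_cookie is handed straight
back to blk_poll():

/*
 * Sketch of the ->iopoll() side of the cookie round trip, roughly as in
 * fs/block_dev.c around v5.6 (from memory, not verbatim).  The cookie
 * never leaves the block layer: submission stashes it in ki_cookie and
 * the poll handler passes it back to blk_poll().
 */
static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
{
	struct block_device *bdev = I_BDEV(kiocb->ki_filp->f_mapping->host);
	struct request_queue *q = bdev_get_queue(bdev);

	return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
}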

I see requests in io_do_iopoll() on the poll_list with req->result == -EAGAIN; I think that's because the completion happened after the issued request was added to the poll_list in io_iopoll_req_issued().
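
For reference, the iopoll completion callback at the time looked roughly
like this (fs/io_uring.c, ~v5.6, reproduced from memory, so a sketch
rather than verbatim). Note that it records the result but deliberately
leaves REQ_F_IOPOLL_COMPLETED clear for -EAGAIN, which is how such a
request can sit on the poll_list:

/*
 * Sketch of the iopoll completion callback, roughly as in fs/io_uring.c
 * around v5.6 (from memory, not verbatim).  req->result records the
 * completion status, but REQ_F_IOPOLL_COMPLETED is not set for -EAGAIN,
 * so io_do_iopoll() can find the request on the poll_list with
 * req->result == -EAGAIN and no completed flag.
 */
static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	if (kiocb->ki_flags & IOCB_WRITE)
		kiocb_end_write(req);

	if (res != req->result)
		req_set_fail_links(req);
	req->result = res;
	if (res != -EAGAIN)
		req->flags |= REQ_F_IOPOLL_COMPLETED;
}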

How should we deal with such a request: reissue it unconditionally, or something else?


As a test, I mimicked the done-list processing of io_iopoll_complete() for the -EAGAIN case (patch below). I can now get further and don't see the polling threads hang; in fact, I eventually see I/O timeouts, as you noted.

It seems that there might be two separate issues here. Does that make sense?

Thanks.

--bijan

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 62bd410..a3e3a4e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1738,11 +1738,24 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx,
        io_free_req_many(ctx, &rb);
 }

+static void io_iopoll_queue(struct list_head *again)
+{
+       struct io_kiocb *req;
+
+       while (!list_empty(again)) {
+               req = list_first_entry(again, struct io_kiocb, list);
+               list_del(&req->list);
+               refcount_inc(&req->refs);
+               io_queue_async_work(req);
+       }
+}
+
 static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
                        long min)
 {
        struct io_kiocb *req, *tmp;
        LIST_HEAD(done);
+       LIST_HEAD(again);
        bool spin;
        int ret;

@@ -1757,9 +1770,9 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned
                struct kiocb *kiocb = &req->rw.kiocb;

                /*
-                * Move completed entries to our local list. If we find a
-                * request that requires polling, break out and complete
-                * the done list first, if we have entries there.
+                * Move completed and retryable entries to our local lists.
+                * If we find a request that requires polling, break out
+                * and complete those lists first, if we have entries there.
                 */
                if (req->flags & REQ_F_IOPOLL_COMPLETED) {
                        list_move_tail(&req->list, &done);
@@ -1768,6 +1781,13 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned
                if (!list_empty(&done))
                        break;

+               if (req->result == -EAGAIN) {
+                       list_move_tail(&req->list, &again);
+                       continue;
+               }
+               if (!list_empty(&again))
+                       break;
+
                ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
                if (ret < 0)
                        break;
@@ -1780,6 +1800,9 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned
        if (!list_empty(&done))
                io_iopoll_complete(ctx, nr_events, &done);

+       if (!list_empty(&again))
+               io_iopoll_queue(&again);
+
        return ret;
 }
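
In case it is useful for reproducing this, here is a minimal,
hypothetical liburing test sketch (the device path and buffer size are
assumptions, not from this thread) that exercises the IOPOLL reaping
path patched above:

/*
 * Hypothetical repro sketch, not from this thread: a polled ring plus
 * an O_DIRECT read drives the f_op->iopoll() reaping path patched
 * above.  The device path and buffer size are assumptions.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov;
	void *buf;
	int fd;

	if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL))
		return 1;
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 4096;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_readv(sqe, fd, &iov, 1, 0);
	io_uring_submit(&ring);

	/*
	 * With IORING_SETUP_IOPOLL, waiting for the CQE is what polls
	 * for (reaps) the completion rather than an interrupt posting it.
	 */
	if (io_uring_wait_cqe(&ring, &cqe))
		return 1;
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}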




