On 2/17/2020 1:23 AM, Jens Axboe wrote:
> On 2/16/20 12:06 PM, Pavel Begunkov wrote:
>> On 15/02/2020 09:01, Jens Axboe wrote:
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index fb94b8bac638..530dcd91fa53 100644
>>> @@ -4630,6 +4753,14 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>>>  	 */
>>>  	if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
>>>  	    (req->flags & REQ_F_MUST_PUNT))) {
>>> +
>>> +		if (io_arm_poll_handler(req, &retry_count)) {
>>> +			if (retry_count == 1)
>>> +				goto issue;
>>
>> Better to set sqe=NULL before retrying, so it won't re-read the sqe and
>> try to init the req twice.
>
> Good point, that should get cleared after issue.
>
>> Also, the second sync issue may return -EAGAIN again, and as I remember,
>> read/write/etc. will try to copy the iovec into req->io. But the iovec is
>> already in req->io, so it will memcpy() onto itself. Not a good thing.
>
> I'll look into those details, that has indeed reared its head before.
>
>>> +			else if (!retry_count)
>>> +				goto done_req;
>>> +			INIT_IO_WORK(&req->work, io_wq_submit_work);
>>
>> It's not nice to reset it like this:
>> - prep() could have set some work.flags
>> - a custom work.func is more performant (this adds an extra switch)
>> - some requests may rely on the specified work.func being called, e.g.
>>   close(), even though it doesn't participate in the scheme
>
> It's totally a hack as-is for the "can't do it, go async". I did clean

And I don't understand the lifetimes yet... I'll probably have a couple of
questions later.

> this up a bit (if you check the git version, it's changed quite a bit),

That's what I've been looking at.

> but it's still a mess in terms of that and ->work vs union ownership.
> The commit message also has a note about that.
>
> So more work is needed in that area for sure.

Right, I just checked a couple of things for you.

-- 
Pavel Begunkov