On 6/15/20 3:24 AM, Xiaoguang Wang wrote:
> In io_complete_rw_iopoll(), the stores to io_kiocb's result and
> iopoll_completed are two independent store operations. To ensure that
> once iopoll_completed is true, req->result is also visible to the cpu
> executing io_do_iopoll(), a proper memory barrier should be used.
>
> And in io_do_iopoll(), we check whether req->result is EAGAIN; if it is,
> we'll need to issue this io request using io-wq again. In order to issue
> just a single smp_rmb() on the completion side, move the re-submit work
> to io_iopoll_complete().

Did you actually test this one?

> @@ -1736,11 +1748,20 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
>  {
>         struct req_batch rb;
>         struct io_kiocb *req;
> +       LIST_HEAD(again);
> +
> +       /* order with ->result store in io_complete_rw_iopoll() */
> +       smp_rmb();
>
>         rb.to_free = rb.need_iter = 0;
>         while (!list_empty(done)) {
>                 int cflags = 0;
>
> +               if (READ_ONCE(req->result) == -EAGAIN) {
> +                       req->iopoll_completed = 0;
> +                       list_move_tail(&req->list, &again);
> +                       continue;
> +               }
>                 req = list_first_entry(done, struct io_kiocb, list);
>                 list_del(&req->list);

You're using 'req' here before you initialize it...
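Just a sketch of what was presumably intended (untested): the request has
to be pulled off the list before its ->result can be inspected, so the new
check would need to move below the list_first_entry(), along these lines:

        while (!list_empty(done)) {
                int cflags = 0;

                /* grab the request first, then look at its ->result */
                req = list_first_entry(done, struct io_kiocb, list);
                if (READ_ONCE(req->result) == -EAGAIN) {
                        req->iopoll_completed = 0;
                        list_move_tail(&req->list, &again);
                        continue;
                }
                list_del(&req->list);
                ...

--
Jens Axboe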