On Thu, Jun 17, 2021 at 06:35:49PM +0800, Ming Lei wrote:
> Support bio(REQ_POLLED) polling in the following approach:
>
> 1) only support io polling on normal READ/WRITE, and other abnormal IOs
> still fall back to IRQ mode, so the target io is exactly inside the dm
> io.
>
> 2) hold one refcnt on io->io_count after submitting this dm bio with
> REQ_POLLED
>
> 3) support dm native bio splitting; any dm io instance associated with
> the current bio will be added into one list whose head reuses
> bio->bi_end_io, which will be recovered before ending this bio
>
> 4) implement .poll_bio() callback, call bio_poll() on the single target
> bio inside the dm io which is retrieved via bio->bi_bio_drv_data; call
> dec_pending() after the target io is done in .poll_bio()
>
> 5) enable QUEUE_FLAG_POLL if all underlying queues enable QUEUE_FLAG_POLL,
> which is based on Jeffle's previous patch.
>
> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>

...

> @@ -938,8 +945,12 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
>  		end_io_acct(io);
>  		free_io(md, io);
>
> -		if (io_error == BLK_STS_DM_REQUEUE)
> +		if (io_error == BLK_STS_DM_REQUEUE) {
> +			/* not poll any more in case of requeue */
> +			if (bio->bi_opf & REQ_POLLED)
> +				bio->bi_opf &= ~REQ_POLLED;

It is no longer necessary to clear REQ_POLLED before requeuing, since
every dm_io is added to the hlist_head that reuses bio->bi_end_io, so
every dm_io (including the one to be requeued) will still be polled.

Thanks,
Ming

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel
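
For reference, a rough sketch of the .poll_bio() flow described in 3) and 4)
above, under the assumption that the dm_io instances hang off an hlist_head
recovered from the storage reused in bio->bi_end_io. This is only an
illustration of the described approach, not the patch itself: the helper
dm_get_poll_list() and the dm_io fields ->node and ->tio.clone are
hypothetical names, and the callback signature follows the mainline
bio_poll()/io_comp_batch interface, which may differ from the interface this
patch was written against.

	/*
	 * Sketch only (dm.c context assumed): poll each dm_io hanging off
	 * the bio, and drop the extra reference taken at submission time
	 * once its target bio has completed.
	 */
	static int dm_poll_bio_sketch(struct bio *bio, struct io_comp_batch *iob,
				      unsigned int flags)
	{
		/* hypothetical helper: recover the hlist_head stored in the
		 * space reused from bio->bi_end_io */
		struct hlist_head *head = dm_get_poll_list(bio);
		struct hlist_node *tmp;
		struct dm_io *io;
		int done = 1;

		hlist_for_each_entry_safe(io, tmp, head, node) {
			/* poll the single target bio inside this dm_io */
			if (bio_poll(&io->tio.clone, iob, flags)) {
				/* target io is done: drop the refcnt held
				 * on io->io_count since submission */
				hlist_del_init(&io->node);
				dec_pending(io, BLK_STS_OK);
			} else {
				done = 0;
			}
		}

		/* report completion only once every dm_io has finished */
		return done;
	}

Since every dm_io, including one that ends up being requeued, stays reachable
from that list and keeps getting polled, this also illustrates why clearing
REQ_POLLED in dec_pending() is unnecessary, as noted above.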