On Tue, Oct 20, 2020 at 02:54:20PM +0800, Jeffle Xu wrote:
> The cookie was initially designed as a per-bio concept. It doesn't
> work well when bio splitting is needed, and that becomes a real issue
> when adding iopoll support for dm devices.
>
> The current algorithm implementation is simple. The cookie returned
> by the dm device is actually not used, since it is just the cookie of
> one of the cloned bios. Polling a dm device actually polls all
> hardware queues (in poll mode) of all underlying target devices.
>
> Signed-off-by: Jeffle Xu <jefflexu@xxxxxxxxxxxxxxxxx>
> ---
>  drivers/md/dm-core.h  |  1 +
>  drivers/md/dm-table.c | 30 ++++++++++++++++++++++++++++++
>  drivers/md/dm.c       | 39 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 70 insertions(+)
>
> diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
> index d522093cb39d..f18e066beffe 100644
> --- a/drivers/md/dm-core.h
> +++ b/drivers/md/dm-core.h
> @@ -187,4 +187,5 @@ extern atomic_t dm_global_event_nr;
>  extern wait_queue_head_t dm_global_eventq;
>  void dm_issue_global_event(void);
>
> +int dm_io_poll(struct request_queue *q, blk_qc_t cookie);
>  #endif
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index ce543b761be7..634b79842519 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1809,6 +1809,31 @@ static bool dm_table_requires_stable_pages(struct dm_table *t)
>  	return false;
>  }
>
> +static int device_not_support_poll(struct dm_target *ti, struct dm_dev *dev,
> +				   sector_t start, sector_t len, void *data)
> +{
> +	struct request_queue *q = bdev_get_queue(dev->bdev);
> +
> +	return q && !(q->queue_flags & QUEUE_FLAG_POLL);
> +}
> +
> +bool dm_table_supports_poll(struct dm_table *t)
> +{
> +	struct dm_target *ti;
> +	unsigned int i;
> +
> +	/* Ensure that all targets support polling. */
> +	for (i = 0; i < dm_table_get_num_targets(t); i++) {
> +		ti = dm_table_get_target(t, i);
> +
> +		if (!ti->type->iterate_devices ||
> +		    ti->type->iterate_devices(ti, device_not_support_poll, NULL))
> +			return false;
> +	}
> +
> +	return true;
> +}
> +
>  void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>  			       struct queue_limits *limits)
>  {
> @@ -1901,6 +1926,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>  #endif
>
>  	blk_queue_update_readahead(q);
> +
> +	if (dm_table_supports_poll(t)) {
> +		q->poll_fn = dm_io_poll;
> +		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> +	}
>  }
>
>  unsigned int dm_table_get_num_targets(struct dm_table *t)
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index c18fc2548518..4eceaf87ffd4 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1666,6 +1666,45 @@ static blk_qc_t dm_submit_bio(struct bio *bio)
>  	return ret;
>  }
>
> +static int dm_poll_one_dev(struct request_queue *q, blk_qc_t cookie)
> +{
> +	/* Iterate polling on all polling queues for mq device */
> +	if (queue_is_mq(q)) {
> +		struct blk_mq_hw_ctx *hctx;
> +		int i, ret = 0;
> +
> +		if (!percpu_ref_tryget(&q->q_usage_counter))
> +			return 0;
> +
> +		queue_for_each_poll_hw_ctx(q, hctx, i) {
> +			ret += q->mq_ops->poll(hctx);
> +		}

IMO, this approach may not be acceptable from a performance viewpoint:
->poll() often takes a per-hw-queue lock, so with more than one io
thread, contention/cache ping-pong on the hw queue resources can be
very serious. I guess you may have to find a way to pass the correct
cookie to ->poll().
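Just for illustration, the direction I mean is something like the
following untested sketch. It relies on the existing cookie helpers
blk_qc_t_valid() and blk_qc_t_to_queue_num(), and it assumes dm can
somehow hand a usable per-device cookie down to here, which is exactly
the open problem with bio splitting:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

static int dm_poll_one_dev(struct request_queue *q, blk_qc_t cookie)
{
	struct blk_mq_hw_ctx *hctx;
	int ret;

	/* Without a valid cookie we cannot pick a hw queue. */
	if (!blk_qc_t_valid(cookie) || !queue_is_mq(q))
		return 0;

	if (!percpu_ref_tryget(&q->q_usage_counter))
		return 0;

	/* Poll only the hw queue the cookie points at. */
	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
	ret = q->mq_ops->poll(hctx);

	percpu_ref_put(&q->q_usage_counter);
	return ret;
}

This way each io thread touches a single hw queue, instead of bouncing
the per-hw-queue lock across all poll queues of every underlying
device.

Thanks,
Ming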