On 4/16/21 5:07 PM, Ming Lei wrote:
> On Fri, Apr 16, 2021 at 04:00:37PM +0800, Jeffle Xu wrote:
>> Hi,
>> How about this patch to remove the extra poll_capable() method?
>>
>> And the following 'dm: support IO polling for bio-based dm device' needs
>> the following change.
>>
>> ```
>> +	/*
>> +	 * The check for request-based devices remains in
>> +	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
>> +	 * For bio-based devices, only set QUEUE_FLAG_POLL when all underlying
>> +	 * devices support polling.
>> +	 */
>> +	if (__table_type_bio_based(t->type)) {
>> +		if (dm_table_supports_poll(t)) {
>> +			blk_queue_flag_set(QUEUE_FLAG_POLL_CAP, q);
>> +			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>> +		} else {
>> +			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
>> +			blk_queue_flag_clear(QUEUE_FLAG_POLL_CAP, q);
>> +		}
>> +	}
>> ```
>
> Frankly speaking, I don't see any value in using QUEUE_FLAG_POLL_CAP for
> DM, and the result is basically a subset of treating DM as always being
> capable of polling.
>
> Also, underlying queue changes (either limits or flags) won't be propagated
> to DM/MD automatically. Strictly speaking, it doesn't matter whether all
> underlying queues are capable of polling at the exact time of
> 'write sysfs/poll', because any of them may change in the future.
>
> So why not start with the simplest approach (always capable of polling),
> which does meet the normal bio-based polling requirement?
>

I've found one scenario where this issue may matter. Consider the case where
HIPRI bios are submitted to a DM device even though **all** underlying devices
have polling disabled. In this case a **valid** cookie (the pid of the current
submitting process) is still returned. Then, if @spin of the following
blk_poll() is true, blk_poll() gets stuck in a dead loop because blk_mq_poll()
always returns 0, since the previously submitted bios were all enqueued onto
IRQ hw queues.

Maybe you need to remove the bio from the poll context again if the returned
cookie is BLK_QC_T_NONE? Something like:

-static blk_qc_t __submit_bio_noacct(struct bio *bio)
+static blk_qc_t __submit_bio_noacct_ctx(struct bio *bio, struct io_context *ioc)
 {
 	struct bio_list bio_list_on_stack[2];
 	blk_qc_t ret = BLK_QC_T_NONE;
@@ -1047,7 +1163,15 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
 		bio_list_on_stack[1] = bio_list_on_stack[0];
 		bio_list_init(&bio_list_on_stack[0]);

 		if (ioc && queue_is_mq(q) && (bio->bi_opf & REQ_HIPRI)) {
 			bool queued = blk_bio_poll_prep_submit(ioc, bio);

 			ret = __submit_bio(bio);
+			if (queued && !blk_qc_t_valid(ret))
+				/* TODO: remove bio from poll_context */

 			bio_set_private_data(bio, ret);
 		} else {
 			ret = __submit_bio(bio);
 		}

Then if you'd like to fix it this way, .submit_bio() of DM/MD also needs to
return BLK_QC_T_NONE now. Currently .submit_bio() of DM actually returns the
cookie of the last split bio (submitted to the underlying mq device).

--
Thanks,
Jeffle

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel
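
For illustration only, the TODO above could be filled in roughly as sketched
below. blk_bio_poll_discard() is a hypothetical helper assumed to undo whatever
blk_bio_poll_prep_submit() recorded in the per-task poll context; it is not
part of the posted series.

```
		if (ioc && queue_is_mq(q) && (bio->bi_opf & REQ_HIPRI)) {
			bool queued = blk_bio_poll_prep_submit(ioc, bio);

			ret = __submit_bio(bio);
			if (queued) {
				if (blk_qc_t_valid(ret)) {
					/* pollable cookie: remember it for blk_poll() */
					bio_set_private_data(bio, ret);
				} else {
					/*
					 * Hypothetical cleanup: drop the bio from the
					 * poll context again so a later blk_poll() does
					 * not spin on a bio that went to an IRQ hw queue.
					 */
					blk_bio_poll_discard(ioc, bio);
				}
			}
		} else {
			ret = __submit_bio(bio);
		}
```

Either way, this cleanup only triggers if ->submit_bio() of DM/MD actually
returns BLK_QC_T_NONE when none of its split bios can be polled, as noted
above.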