>>> As the passthrough path can now support request caching via blk_mq_alloc_request(),
>>> and it uses blk_execute_rq_nowait(), bad things can happen at least for zoned
>>> devices:
>>>
>>> static inline struct blk_plug *blk_mq_plug( struct bio *bio)
>>> {
>>> 	/* Zoned block device write operation case: do not plug the BIO */
>>> 	if (bdev_is_zoned(bio->bi_bdev) && op_is_write(bio_op(bio)))
>>> 		return NULL;
>>> ..
>>
>> Thinking more about it, even this will not fix it, because the op is
>> REQ_OP_DRV_OUT if it is an NVMe write for passthrough requests.
>>
>> @Damien, should the condition in blk_mq_plug() be changed to:
>>
>> static inline struct blk_plug *blk_mq_plug( struct bio *bio)
>> {
>> 	/* Zoned block device write operation case: do not plug the BIO */
>> 	if (bdev_is_zoned(bio->bi_bdev) && !op_is_read(bio_op(bio)))
>> 		return NULL;
>
> That looks reasonable to me. It'll prevent plug optimizations even
> for passthrough on zoned devices, but that's probably fine.
>

Do you want me to send a separate patch for this change, or will you fold it
into the existing series?

-- 
Pankaj
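
P.S. For context on why a passthrough write never shows up as a regular
write op: the NVMe driver picks a driver-private op from the command's
data direction when it allocates the passthrough request. Roughly, and
paraphrasing nvme_is_write() from include/linux/nvme.h plus the
request-op selection in drivers/nvme/host/ioctl.c (simplified sketch,
not the exact source):

static inline bool nvme_is_write(struct nvme_command *cmd)
{
	/* Per the NVMe spec, odd opcodes transfer data host-to-device */
	return cmd->common.opcode & 1;
}

static unsigned int nvme_req_op(struct nvme_command *cmd)
{
	/*
	 * Passthrough requests are allocated with a driver-private op,
	 * never REQ_OP_WRITE, so an NVMe Write command reaches the
	 * block layer (and bio_op()) as REQ_OP_DRV_OUT.
	 */
	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
}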