Issue a kernel warning if a zoned write is passed directly to the block
driver instead of to the I/O scheduler, since bypassing the I/O
scheduler may cause zoned writes to be reordered.

Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Damien Le Moal <dlemoal@xxxxxxxxxx>
Cc: Ming Lei <ming.lei@xxxxxxxxxx>
Cc: Mike Snitzer <snitzer@xxxxxxxxxx>
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 block/blk-mq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index bc52a57641e2..9ef6fa5d7471 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2429,6 +2429,9 @@ static void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
 {
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
+	WARN_ON_ONCE(rq->rq_flags & RQF_USE_SCHED &&
+		     blk_rq_is_seq_zoned_write(rq));
+
 	spin_lock(&hctx->lock);
 	if (flags & BLK_MQ_INSERT_AT_HEAD)
 		list_add(&rq->queuelist, &hctx->dispatch);
@@ -2562,6 +2565,9 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	};
 	blk_status_t ret;
 
+	WARN_ON_ONCE(rq->rq_flags & RQF_USE_SCHED &&
+		     blk_rq_is_seq_zoned_write(rq));
+
 	/*
 	 * For OK queue, we are done. For error, caller may kill it.
 	 * Any other error (busy), just add it to our list as we