On 5/23/23 00:19, Christoph Hellwig wrote:
> On Mon, May 22, 2023 at 11:38:39AM -0700, Bart Van Assche wrote:
>> @@ -2429,6 +2429,9 @@ static void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
>>  {
>>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>> +	WARN_ON_ONCE(rq->rq_flags & RQF_USE_SCHED &&
>> +		     blk_rq_is_seq_zoned_write(rq));
>> +
>>  	spin_lock(&hctx->lock);
>>  	if (flags & BLK_MQ_INSERT_AT_HEAD)
>>  		list_add(&rq->queuelist, &hctx->dispatch);
>> @@ -2562,6 +2565,9 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>>  	};
>>  	blk_status_t ret;
>> +	WARN_ON_ONCE(rq->rq_flags & RQF_USE_SCHED &&
>> +		     blk_rq_is_seq_zoned_write(rq));
>
> What makes sequential writes here special vs other requests that are
> supposed to be using the scheduler and not a bypass?
Hi Christoph,
If some REQ_OP_WRITE or REQ_OP_WRITE_ZEROES requests are submitted to
the I/O scheduler while others bypass it, the writes may be reordered.
Hence this patch, which triggers a kernel warning if any REQ_OP_WRITE or
REQ_OP_WRITE_ZEROES request aimed at a sequential zone bypasses the I/O
scheduler.
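
For reference, the check the new WARN_ON_ONCE() calls rely on looks
roughly like the sketch below; this assumes blk_rq_is_seq_zoned_write()
is implemented on top of the existing req_op() and blk_rq_zone_is_seq()
accessors, so it only evaluates to true for writes that target a
sequential write required zone:

static inline bool blk_rq_is_seq_zoned_write(struct request *rq)
{
	/* Only writes have to be submitted in order. */
	switch (req_op(rq)) {
	case REQ_OP_WRITE:
	case REQ_OP_WRITE_ZEROES:
		/* True only if the request targets a sequential zone. */
		return blk_rq_zone_is_seq(rq);
	default:
		return false;
	}
}

Other operations, e.g. reads or zone management commands, do not have
this ordering requirement, which is why the warning is restricted to
sequential zoned writes.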
Thanks,
Bart.