On Mon, Apr 10, 2023 at 05:15:44PM -0700, Bart Van Assche wrote:
> Subject: [PATCH] block: Send flush requests to the I/O scheduler
>
> Send flush requests to the I/O scheduler such that I/O scheduler policies
> are applied to writes with the FUA flag set. Separate the I/O scheduler
> members from the flush members in struct request since with this patch
> applied a request may pass through both an I/O scheduler and the flush
> machinery.
>
> This change affects the statistics of I/O schedulers that track I/O
> statistics (BFQ and mq-deadline).

This looks reasonable to me, as these special cases are nasty.  But
we'll need very careful testing, including performance testing, to
ensure this doesn't regress.

> +	blk_mq_sched_insert_request(rq, /*at_head=*/false,
> +				    /*run_queue=*/true, /*async=*/true);

And please drop these silly comments.  If you want to do something about
this rather suboptimal interface, convert the three booleans to a flags
argument with properly named flags (a rough sketch of what I mean is at
the end of this mail).

> -	if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
> -		return true;
> -
> -	return false;
> +	return req_op(rq) == REQ_OP_FLUSH || blk_rq_is_passthrough(rq);

This just seems like an arbitrary reformatting.  While I also prefer
your new version, I don't think it belongs in this patch.
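
For the flags conversion mentioned above, something along these lines
is what I have in mind.  Entirely untested, and the type and flag names
are made up for illustration, they don't exist in the tree:

/*
 * Sketch only: a single flags argument replacing the at_head /
 * run_queue / async boolean triple.  Names are invented here.
 */
typedef unsigned int blk_insert_t;

#define BLK_MQ_INSERT_AT_HEAD		(1U << 0)
#define BLK_MQ_INSERT_RUN_QUEUE		(1U << 1)
#define BLK_MQ_INSERT_ASYNC		(1U << 2)

void blk_mq_sched_insert_request(struct request *rq, blk_insert_t flags);

With that, the call site quoted above becomes self-documenting without
any comments:

	blk_mq_sched_insert_request(rq,
			BLK_MQ_INSERT_RUN_QUEUE | BLK_MQ_INSERT_ASYNC);

That way each caller spells out exactly the behavior it wants instead
of passing an opaque true/false triple.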