From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

Now we unconditionally call blk_rq_init_flush() to replace rq->end_io,
so that rq returns to the flush state machine a second time for
post-flush.

Obviously, non post-flush requests don't need it: they don't have to
end twice, so they don't need their rq->end_io callback replaced. The
same goes for requests with the FUA bit set on hardware with FUA
support.

There are some other good points as well:
1. All requests on hardware with FUA support have no post-flush, so
   none of them need to end twice.
2. Non post-flush requests won't have RQF_FLUSH_SEQ set in rq_flags,
   so they can merge like normal requests.
3. Non post-flush requests are no longer accounted in
   flush_data_in_flight, since there is no point in deferring a
   pending flush for them.

Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
---
 block/blk-flush.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index ed195c760617..a299dae65350 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -178,7 +178,8 @@ static void blk_end_flush(struct request *rq, struct blk_flush_queue *fq,
 	 * normal completion and end it.
 	 */
 	list_del_init(&rq->queuelist);
-	blk_flush_restore_request(rq);
+	if (rq->rq_flags & RQF_FLUSH_SEQ)
+		blk_flush_restore_request(rq);
 	blk_mq_end_request(rq, error);
 
 	blk_kick_flush(q, fq);
@@ -461,7 +462,8 @@ bool blk_insert_flush(struct request *rq)
 	 * Mark the request as part of a flush sequence and submit it
 	 * for further processing to the flush state machine.
 	 */
-	blk_rq_init_flush(rq);
+	if (policy & REQ_FSEQ_POSTFLUSH)
+		blk_rq_init_flush(rq);
 	spin_lock_irq(&fq->mq_flush_lock);
 	blk_enqueue_preflush(rq, fq);
 	spin_unlock_irq(&fq->mq_flush_lock);
-- 
2.41.0
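
For illustration only, a minimal standalone userspace sketch of the
decision blk_insert_flush() makes after this patch; toy_request,
toy_insert_flush() and the ends_twice flag are made-up stand-ins, not
kernel code. A request whose policy still includes a post-flush needs
its completion routed back through the flush state machine; everything
else ends once like a normal request:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's REQ_FSEQ_* policy bits. */
#define REQ_FSEQ_PREFLUSH  (1u << 0)
#define REQ_FSEQ_DATA      (1u << 1)
#define REQ_FSEQ_POSTFLUSH (1u << 2)

struct toy_request {
	unsigned int policy;	/* REQ_FSEQ_* bits */
	bool ends_twice;	/* completion hooked into the flush machine? */
};

/* Models the patched blk_insert_flush(): only hook the request's
 * completion back into the flush state machine (the real code's
 * blk_rq_init_flush()/RQF_FLUSH_SEQ) when a post-flush is still
 * required. */
static void toy_insert_flush(struct toy_request *rq)
{
	rq->ends_twice = !!(rq->policy & REQ_FSEQ_POSTFLUSH);
}

int main(void)
{
	/* Write on FUA-capable hardware: no post-flush in the policy. */
	struct toy_request fua_hw = {
		.policy = REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA,
	};
	/* FUA write on hardware without FUA: post-flush still needed. */
	struct toy_request no_fua = {
		.policy = REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH,
	};

	toy_insert_flush(&fua_hw);
	toy_insert_flush(&no_fua);

	printf("FUA-capable hw write:     ends twice? %d\n", fua_hw.ends_twice);
	printf("write needing post-flush: ends twice? %d\n", no_fua.ends_twice);
	return 0;
}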