Simultaneously writing to a sequential zone of a zoned block device
from multiple contexts requires mutual exclusion for BIO issuing to
ensure that writes happen sequentially. However, even for a well
behaved user correctly implementing such synchronization, BIO plugging
may interfere and result in BIOs from different contexts being
reordered if plugging is done outside of the mutual exclusion section,
e.g. the plug was started by a function higher in the call chain than
the function issuing BIOs.

         Context A                           Context B

   | blk_start_plug()
   | ...
   | seq_write_zone()
     | mutex_lock(zone)
     | submit_bio(bio-0)
     | submit_bio(bio-1)
     | mutex_unlock(zone)
     | return
   | ------------------------------>  | seq_write_zone()
                                        | mutex_lock(zone)
                                        | submit_bio(bio-2)
                                        | mutex_unlock(zone)
   | <------------------------------  |
   | blk_finish_plug()

In the above example, despite the mutex synchronization resulting in
the correct BIO issuing order 0, 1, 2, context A BIOs 0 and 1 end up
being issued after BIO 2, when the plug is released with
blk_finish_plug().

To fix this problem, introduce the internal helper function
blk_mq_plug() to access the current context plug. blk_mq_plug()
returns the current plug only if the target device is not a zoned
block device or if the BIO to be plugged is not a write operation.
Otherwise, the plug is ignored and NULL is returned, resulting in
writes to zoned block devices never being plugged.
Signed-off-by: Damien Le Moal <damien.lemoal@xxxxxxx>
---
 block/blk-core.c |  2 +-
 block/blk-mq.c   |  2 +-
 block/blk-mq.h   | 12 ++++++++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8340f69670d8..3957ea6811c3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -645,7 +645,7 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 	struct request *rq;
 	struct list_head *plug_list;
 
-	plug = current->plug;
+	plug = blk_mq_plug(q, bio);
 	if (!plug)
 		return false;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ce0f5f4ede70..90be5bb6fa1b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1969,7 +1969,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 
 	cookie = request_to_qc_t(data.hctx, rq);
 
-	plug = current->plug;
+	plug = blk_mq_plug(q, bio);
 	if (unlikely(is_flush_fua)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 633a5a77ee8b..d9b1e94b82a4 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -238,4 +238,16 @@ static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
 		qmap->mq_map[cpu] = 0;
 }
 
+static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
+					   struct bio *bio)
+{
+	struct blk_plug *plug = current->plug;
+
+	if (!blk_queue_is_zoned(q) || !op_is_write(bio_op(bio)))
+		return plug;
+
+	/* Zoned block device write case: do not plug the BIO */
+	return NULL;
+}
+
 #endif
-- 
2.21.0