If a WRITE_ZEROES/WRITE_SAME operation is not supported by the storage,
blk_cloned_rq_check_limits() returns an IO error, which causes
device-mapper to fail the paths.

Instead, if the relevant queue limit is set to 0, return
BLK_STS_NOTSUPP. BLK_STS_NOTSUPP is ignored by device-mapper and does
not fail the paths.

Suggested-by: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Signed-off-by: Ritika Srivastava <ritika.srivastava@xxxxxxxxxx>
Reviewed-by: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
---
Changelog:
v4: Explain the window of error in the comments.
    Add Reviewed-by tag from Martin.
v3: Formatting changes; subject updated from the previous version
    "block: return BLK_STS_NOTSUPP if operation is not supported".
v2: Document the scenario and the SCSI error encountered in a comment
    in blk_cloned_rq_check_limits().
---
 block/blk-core.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 4c1af34..d857344 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1147,10 +1147,24 @@ blk_qc_t submit_bio(struct bio *bio)
 static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
 				      struct request *rq)
 {
-	if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, req_op(rq))) {
+	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+
+	if (blk_rq_sectors(rq) > max_sectors) {
+		/*
+		 * SCSI device does not have a good way to return if
+		 * Write Same/Zero is actually supported. If a device rejects
+		 * a non-read/write command (discard, write same, etc.) the
+		 * low-level device driver will set the relevant queue limit to
+		 * 0 to prevent blk-lib from issuing more of the offending
+		 * operations. Commands queued prior to the queue limit being
+		 * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O
+		 * errors being propagated to upper layers.
+		 */
+		if (max_sectors == 0)
+			return BLK_STS_NOTSUPP;
+
 		printk(KERN_ERR "%s: over max size limit. (%u > %u)\n",
-			__func__, blk_rq_sectors(rq),
-			blk_queue_get_max_sectors(q, req_op(rq)));
+			__func__, blk_rq_sectors(rq), max_sectors);
 		return BLK_STS_IOERR;
 	}
 
@@ -1177,8 +1191,11 @@ static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
  */
 blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 {
-	if (blk_cloned_rq_check_limits(q, rq))
-		return BLK_STS_IOERR;
+	blk_status_t ret;
+
+	ret = blk_cloned_rq_check_limits(q, rq);
+	if (ret != BLK_STS_OK)
+		return ret;
 
 	if (rq->rq_disk &&
 	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
-- 
1.8.3.1
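
For context on why this is safe for multipathing: device-mapper
classifies completion statuses with blk_path_error(), and dm-mpath's
multipath_end_io() only fails a path when that helper returns true.
As of current mainline the helper looks roughly like this (see
include/linux/blk_types.h in your tree):

	static inline bool blk_path_error(blk_status_t error)
	{
		switch (error) {
		case BLK_STS_NOTSUPP:
		case BLK_STS_NOSPC:
		case BLK_STS_TARGET:
		case BLK_STS_NEXUS:
		case BLK_STS_MEDIUM:
		case BLK_STS_PROTECTION:
			return false;
		}

		/* Anything else could be a path failure, so should be retried */
		return true;
	}

BLK_STS_NOTSUPP is classified as a target-side error that a failover
retry cannot fix, so the original request completes with the error
while the path itself stays usable.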
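
The window described in the new comment exists because the limit is
only zeroed after the first rejection completes. A minimal sketch of
the low-level-driver side, using the real
blk_queue_max_write_same_sectors()/blk_queue_max_write_zeroes_sectors()
helpers; the function name and surrounding error handling here are
hypothetical (sd does something comparable when a target rejects the
command with ILLEGAL REQUEST sense):

	#include <linux/blkdev.h>

	/* Sketch only: LLD reaction to a target rejecting a WRITE SAME or
	 * WRITE ZEROES command. */
	static void my_lld_cmd_rejected(struct request_queue *q, enum req_opf op)
	{
		switch (op) {
		case REQ_OP_WRITE_SAME:
			/* Stop blk-lib from issuing further WRITE SAME requests. */
			blk_queue_max_write_same_sectors(q, 0);
			break;
		case REQ_OP_WRITE_ZEROES:
			blk_queue_max_write_zeroes_sectors(q, 0);
			break;
		default:
			break;
		}
		/* Clones checked against the old non-zero limit may already be
		 * in flight; clones checked after this point now get
		 * BLK_STS_NOTSUPP from blk_cloned_rq_check_limits() instead of
		 * BLK_STS_IOERR. */
	}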
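
With this change blk_insert_cloned_request() propagates the specific
status instead of flattening everything to BLK_STS_IOERR, so a
request-based dm caller can tell transient resource shortages from
hard failures. A hypothetical dispatch path would branch along these
lines (sketch only; dm-rq's real handling differs in detail):

	blk_status_t ret = blk_insert_cloned_request(clone->q, clone);

	switch (ret) {
	case BLK_STS_OK:
		break;
	case BLK_STS_RESOURCE:
	case BLK_STS_DEV_RESOURCE:
		/* Transient: unprep the clone and requeue the original. */
		break;
	default:
		/* Hard error, including BLK_STS_NOTSUPP: complete the original
		 * request with this status so the target's end_io hook can
		 * classify it via blk_path_error(). */
		break;
	}

BLK_STS_RESOURCE/BLK_STS_DEV_RESOURCE are the statuses blk-mq already
uses for out-of-resource conditions; everything else is completed
toward the submitter with the specific status.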