The concept looks fine, but some of the formatting, especially in the
comments, is strange.  Also we should not print the message for this
case, but just return the real error.  Updated version with my
suggestions below.

Also, don't you need a third patch that makes dm-multipath stop sending
Write Same/Zeroes commands when this happens?

---
>From c056b0523173f17cd3d8ca77a8cfca4e45fe8cb7 Mon Sep 17 00:00:00 2001
From: Ritika Srivastava <ritika.srivastava@xxxxxxxxxx>
Date: Wed, 29 Jul 2020 15:47:58 -0700
Subject: block: better deal with the delayed not supported case in
 blk_cloned_rq_check_limits

If a WRITE_ZEROES/WRITE_SAME operation is not supported by the storage,
blk_cloned_rq_check_limits() will return an I/O error, which will cause
device-mapper to fail the paths.

Instead, if the queue limit is set to 0, return BLK_STS_NOTSUPP.
BLK_STS_NOTSUPP will be ignored by device-mapper and will not fail the
paths.

Suggested-by: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Signed-off-by: Ritika Srivastava <ritika.srivastava@xxxxxxxxxx>
---
 block/blk-core.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e04ee2c8da2e95..81b830c24b5b4f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1296,10 +1296,21 @@ EXPORT_SYMBOL(submit_bio);
 static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
 				      struct request *rq)
 {
-	if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, req_op(rq))) {
+	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+
+	if (blk_rq_sectors(rq) > max_sectors) {
+		/*
+		 * At least SCSI devices do not have a good way to return if
+		 * Write Same is actually supported, so we first try to issue
+		 * one and clear the max sectors value if it fails.  If this
+		 * occurs on the lower device, we need to propagate the right
+		 * error code up.
+		 */
+		if (max_sectors == 0)
+			return BLK_STS_NOTSUPP;
+
 		printk(KERN_ERR "%s: over max size limit. (%u > %u)\n",
-			__func__, blk_rq_sectors(rq),
-			blk_queue_get_max_sectors(q, req_op(rq)));
+			__func__, blk_rq_sectors(rq), max_sectors);
 		return BLK_STS_IOERR;
 	}
 
@@ -1326,8 +1337,11 @@ static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
  */
 blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 {
-	if (blk_cloned_rq_check_limits(q, rq))
-		return BLK_STS_IOERR;
+	blk_status_t ret;
+
+	ret = blk_cloned_rq_check_limits(q, rq);
+	if (ret != BLK_STS_OK)
+		return ret;
 
 	if (rq->rq_disk &&
 	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
-- 
2.28.0