Currently, if the underlying device is discard capable and discard_passdown
is enabled, discard_granularity is inherited from that device. This poses a
problem when the device's discard_granularity is smaller than the thin
volume chunk size, because discard requests will then not be chunk-size
aligned and will be ignored by dm-thin. Fix this by setting the thin volume
discard granularity to the bigger of the two:
max(device discard_granularity, thin volume chunk size).

Strictly speaking it is not necessary to take the bigger of the two,
because the thin volume chunk size will always be >= the device
discard_granularity. However, I believe that this holds only because
dm-thin cannot handle discard requests bigger than the chunk size, which
will hopefully change soon. Taking the maximum makes the code future proof.

RHBZ: 1106856

Reported-by: Zdenek Kabelac <zkabelac@xxxxxxxxxxxxxxxxx>
Signed-off-by: Lukas Czerner <lczerner@xxxxxxxxxx>
---
 drivers/md/dm-thin.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 242ac2e..fdd7089 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3068,7 +3068,9 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	 */
 	if (pt->adjusted_pf.discard_passdown) {
 		data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits;
-		limits->discard_granularity = data_limits->discard_granularity;
+		limits->discard_granularity =
+			max(data_limits->discard_granularity,
+			    pool->sectors_per_block << SECTOR_SHIFT);
 	} else
 		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
 }
-- 
1.8.3.1

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
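
For illustration, the effect of the change can be sketched in plain userspace C (function names and the standalone setting are hypothetical, not the kernel's; the kernel patch uses the max() macro on the queue_limits fields directly):

```c
#include <assert.h>

/*
 * Standalone sketch of the granularity selection in set_discard_limits():
 * with discard_passdown enabled, advertise the larger of the device's
 * discard_granularity and the thin-pool chunk size (both in bytes), so
 * that discards issued at the advertised granularity stay chunk aligned.
 */
static unsigned long long
effective_granularity(unsigned long long dev_granularity,
		      unsigned long long chunk_size)
{
	return dev_granularity > chunk_size ? dev_granularity : chunk_size;
}

/*
 * A discard offset aligned to the advertised granularity is also aligned
 * to the chunk size whenever the granularity is a multiple of it.
 */
static int chunk_aligned(unsigned long long offset,
			 unsigned long long chunk_size)
{
	return offset % chunk_size == 0;
}
```

With, say, a 4 KiB device discard_granularity and a 64 KiB chunk size, the old code advertised 4 KiB, so a 4 KiB-aligned discard at offset 4096 is not chunk aligned and dm-thin ignores it; the new code advertises 64 KiB, and every aligned discard falls on a chunk boundary.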