On Mon, May 20, 2024 at 11:39:14AM -0400, Mike Snitzer wrote:
> On Mon, May 20, 2024 at 05:06:53PM +0200, Christoph Hellwig wrote:
> >
> > This is probably my fault, I actually found this right at the time
> > of the original revert of switching dm to the limits API, and then
> > let it slip as the patch was reverted.  The fact that you readded
> > the commit somehow went past my attention window.
>
> It's fine, all we can do now is work through how best to fix it.  Open
> to suggestions.  But this next hunk, which you trimmed in your reply,
> _seems_ needed to actually fix the issue Ted reported -- given the
> current validate method in blk-settings.c (resharing here to just
> continue this thread in a natural way):
>
> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 4793ad2aa1f7..c196f39579af 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -4497,7 +4499,8 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
>
>  	if (pool->pf.discard_enabled) {
>  		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
> -		limits->max_discard_sectors = pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
> +		limits->max_hw_discard_sectors = limits->max_user_discard_sectors =
> +			pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
>  	}
>  }

Maybe update blk_validate_limits() to ensure max_discard_sectors is a
multiple of discard_granularity?  That way thin_io_hints() (and
equivalent functions in other DM targets) just need to be
audited/updated to ensure they are setting both discard_granularity
and max_hw_discard_sectors?
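To make that concrete, something along these lines is what I'm
imagining for blk-settings.c (completely untested sketch; the helper
name blk_validate_discard_limits() is invented, the queue_limits
fields are the existing ones):

static int blk_validate_discard_limits(struct queue_limits *lim)
{
	unsigned int granularity_sectors;

	if (!lim->max_hw_discard_sectors)
		return 0;

	if (lim->discard_granularity) {
		granularity_sectors =
			lim->discard_granularity >> SECTOR_SHIFT;

		/* a granularity outside the usable range is bogus */
		if (!granularity_sectors ||
		    granularity_sectors > lim->max_hw_discard_sectors)
			return -EINVAL;

		/*
		 * rounddown(), not round_down(): the thinp block size
		 * (and so discard_granularity) needn't be a power of 2.
		 */
		lim->max_hw_discard_sectors =
			rounddown(lim->max_hw_discard_sectors,
				  granularity_sectors);
	}

	/* the user limit, if set, may only restrict the hw limit */
	lim->max_discard_sectors = lim->max_hw_discard_sectors;
	if (lim->max_user_discard_sectors)
		lim->max_discard_sectors =
			min(lim->max_discard_sectors,
			    lim->max_user_discard_sectors);
	return 0;
}

If I'm reading dm-table.c right, queue_limits_set() already runs
blk_validate_limits() after DM has stacked the targets' ->io_hints, so
thin_io_hints() would only need to set discard_granularity and
max_hw_discard_sectors and the rounding would fall out of validation.

Mike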