On Mon, May 20, 2024 at 11:39:14AM -0400, Mike Snitzer wrote:
> That's fair. My criticism was more about having to fix up DM targets
> to cope with the new normal of max_discard_sectors being set as a
> function of max_hw_discard_sectors and max_user_discard_sectors.
>
> With stacked devices in particular it is _very_ hard for the user to
> know their exerting control over a max discard limit is correct.

The user forcing a limit is always very sketchy, which is why I'm not
a fan of it.

> Yeah, but my concern is that if a user sets a value that is too low
> it'll break targets like DM thinp (which Ted reported). So forcibly
> setting both to indirectly set the required max_discard_sectors seems
> necessary.

dm-thin requiring a minimum discard size is a rather odd requirement.
Is this just a debug assert, or is there a real technical reason for
it? If so, we can introduce a knob to force a minimum size, or disable
setting the value from userspace entirely.

> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 4793ad2aa1f7..c196f39579af 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -4497,7 +4499,8 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
> 
>  	if (pool->pf.discard_enabled) {
>  		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
> -		limits->max_discard_sectors = pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
> +		limits->max_hw_discard_sectors = limits->max_user_discard_sectors =
> +			pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
>  	}

Drivers really have no business setting max_user_discard_sectors; the
whole point of the field is to separate device/driver capabilities from
user policy. So if dm-thin really has no way of handling smaller
discards, we need to ensure they can't be set.

I'm also kind of curious what actually sets a user limit in Ted's case,
as that feels weird.
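
For reference, the split the series introduces looks roughly like this
(a sketch from memory of the blk-settings.c helper, so treat it as
illustrative rather than the exact code): the driver-facing setter only
records the hardware capability, and the effective limit is derived by
capping that with the user policy:

	void blk_queue_max_discard_sectors(struct request_queue *q,
			unsigned int max_discard_sectors)
	{
		struct queue_limits *lim = &q->limits;

		/* record what the device/driver can actually do */
		lim->max_hw_discard_sectors = max_discard_sectors;

		/* effective limit is the capability capped by user policy */
		lim->max_discard_sectors =
			min(max_discard_sectors, lim->max_user_discard_sectors);
	}

With that model thin_io_hints() would only set discard_granularity and
max_hw_discard_sectors and leave max_user_discard_sectors alone; if
dm-thin can't cope with a smaller effective limit, the fix belongs on
the policy side (rejecting or refusing too-small user values), not in
the driver overwriting the user field.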