On Fri, May 12, 2023 at 10:32 AM Mike Snitzer <snitzer@xxxxxxxxxx> wrote:
>
> On Sat, May 06 2023 at 2:29P -0400,
> Sarthak Kukreti <sarthakkukreti@xxxxxxxxxxxx> wrote:
>
> > dm-thinpool uses the provision request to provision
> > blocks for a dm-thin device. dm-thinpool currently does not
> > pass through REQ_OP_PROVISION to underlying devices.
> >
> > For shared blocks, provision requests will break sharing and copy the
> > contents of the entire block. Additionally, if 'skip_block_zeroing'
> > is not set, dm-thin will opt to zero out the entire range as a part
> > of provisioning.
> >
> > Signed-off-by: Sarthak Kukreti <sarthakkukreti@xxxxxxxxxxxx>
> > ---
> >  drivers/md/dm-thin.c | 70 +++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 66 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> > index 2b13c949bd72..3f94f53ac956 100644
> > --- a/drivers/md/dm-thin.c
> > +++ b/drivers/md/dm-thin.c
> ...
> > @@ -4114,6 +4171,8 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
> >  	 * The pool uses the same discard limits as the underlying data
> >  	 * device. DM core has already set this up.
> >  	 */
> > +
> > +	limits->max_provision_sectors = pool->sectors_per_block;
>
> Just noticed that setting limits->max_provision_sectors needs to move
> above the pool_io_hints code that sets up discards -- otherwise the early
> return from if (!pt->adjusted_pf.discard_enabled) will cause setting
> max_provision_sectors to be skipped.
>
> Here is a roll-up of the fixes that need to be folded into this patch:
>
Ah right, thanks for pointing that out! I'll fold this into v7.

Best
Sarthak

> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 3f94f53ac956..90c8e36cb327 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -4151,6 +4151,8 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  		blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
>  	}
>
> +	limits->max_provision_sectors = pool->sectors_per_block;
> +
>  	/*
>  	 * pt->adjusted_pf is a staging area for the actual features to use.
>  	 * They get transferred to the live pool in bind_control_target()
> @@ -4171,8 +4173,6 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	 * The pool uses the same discard limits as the underlying data
>  	 * device. DM core has already set this up.
>  	 */
> -
> -	limits->max_provision_sectors = pool->sectors_per_block;
>  }
>
>  static struct target_type pool_target = {
> @@ -4349,6 +4349,7 @@ static int thin_ctr(struct dm_target *ti, unsigned int argc, char **argv)
>
>  	ti->num_provision_bios = 1;
>  	ti->provision_supported = true;
> +	ti->max_provision_granularity = true;
>
>  	mutex_unlock(&dm_thin_pool_table.mutex);
>
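
For readers following the thread: below is a minimal, standalone sketch of the control-flow issue Mike describes. The struct and function names here are hypothetical stand-ins, not the real dm-thin.c types; it only illustrates why an assignment placed after the discard early-return never runs when discards are disabled, and why the roll-up moves it above that check.

/* Simplified sketch (hypothetical names, not dm-thin.c) of the io_hints
 * ordering fix: set the provision limit before the discard early-return. */
#include <stdio.h>
#include <stdbool.h>

struct limits_sketch {
	unsigned int max_provision_sectors;
};

static void io_hints_sketch(bool discard_enabled, unsigned int sectors_per_block,
			    struct limits_sketch *limits)
{
	/* Assigning here (before the discard check), as in the roll-up,
	 * means the limit is set whether or not discards are enabled. */
	limits->max_provision_sectors = sectors_per_block;

	if (!discard_enabled)
		return;	/* the original placement, below this return, was skipped */

	/* discard limit setup would follow here */
}

int main(void)
{
	struct limits_sketch limits = { 0 };

	/* With discards disabled the limit is still applied: prints 128. */
	io_hints_sketch(false, 128, &limits);
	printf("max_provision_sectors = %u\n", limits.max_provision_sectors);
	return 0;
}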