On 3/10/25 07:29, Benjamin Marzinski wrote:
> dm_revalidate_zones() only allowed devices that had no zone resources
> set up to call blk_revalidate_disk_zones(). If the device already had
> zone resources, disk->nr_zones would always equal md->nr_zones so
> dm_revalidate_zones() returned without doing any work. Instead, always
> call blk_revalidate_disk_zones() if you are loading a new zoned table.
>
> However, if the device emulates zone append operations and already has
> zone append emulation resources, the table size cannot change when
> loading a new table. Otherwise, all those resources will be garbage.
>
> If emulated zone append operations are needed and the zone write pointer
> offsets of the new table do not match those of the old table, writes to
> the device will still fail. This patch allows users to safely grow and
> shrink zone devices. But swapping arbitrary zoned tables will still not
> work.

I do not think that this patch correctly addresses the shrinking of a dm
zoned device: blk_revalidate_disk_zones() will look at a smaller set of
zones, which will leave already hashed zone write plugs outside of that
new zone range in the disk zwplug hash table.
disk_revalidate_zone_resources() does not clean up and reallocate the
hash table if the number of zones shrinks. For a physical drive, this
can only happen if the drive is reformatted with some magic vendor
unique command, which is why this was never implemented, as that is not
a valid production use case.

To make things simpler, I think we should not allow growing/shrinking
zoned device tables, and much less swapping tables between zoned and
not-zoned. I am more inclined to avoid all these corner cases by simply
not supporting table switching for zoned devices. That would be much
safer I think. No-one complained about any issue with table switching
until now, which likely means that no-one is using it. So what about
simply returning an error for table switching for a zoned device?
If someone requests this support, we can revisit this.

> Signed-off-by: Benjamin Marzinski <bmarzins@xxxxxxxxxx>
> ---
>  drivers/md/dm-zone.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
> index ac86011640c3..7e9ebeee7eac 100644
> --- a/drivers/md/dm-zone.c
> +++ b/drivers/md/dm-zone.c
> @@ -164,16 +164,8 @@ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q)
>  	if (!get_capacity(disk))
>  		return 0;
>
> -	/* Revalidate only if something changed. */
> -	if (!disk->nr_zones || disk->nr_zones != md->nr_zones) {
> -		DMINFO("%s using %s zone append",
> -		       disk->disk_name,
> -		       queue_emulates_zone_append(q) ? "emulated" : "native");
> -		md->nr_zones = 0;
> -	}
> -
> -	if (md->nr_zones)
> -		return 0;
> +	DMINFO("%s using %s zone append", disk->disk_name,
> +	       queue_emulates_zone_append(q) ? "emulated" : "native");
>
>  	/*
>  	 * Our table is not live yet. So the call to dm_get_live_table()
> @@ -392,6 +384,17 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
>  		return 0;
>  	}
>
> +	/*
> +	 * If the device needs zone append emulation, and the device already has
> +	 * zone append emulation resources, make sure that the chunk_sectors
> +	 * hasn't changed size. Otherwise those resources will be garbage.
> +	 */
> +	if (!lim->max_hw_zone_append_sectors && disk->zone_wplugs_hash &&
> +	    q->limits.chunk_sectors != lim->chunk_sectors) {
> +		DMERR("Cannot change zone size when swapping tables");
> +		return -EINVAL;
> +	}
> +
>  	/*
>  	 * Warn once (when the capacity is not yet set) if the mapped device is
>  	 * partially using zone resources of the target devices as that leads to

--
Damien Le Moal
Western Digital Research