On 07/10/2020 01:33, Damien Le Moal wrote:
[...]
> Hmmm. That is one more tunable knob, and one that the user/sysadmin may not
> consider without knowing that the FS is actually using zone append. E.g. btrfs
> does, f2fs does not. I was thinking of something simpler:
>
> * Keep the soft limit zone_append_max_bytes/max_zone_append_sectors as RO
> * Change its value when the generic soft limit max_sectors is changed.
>
> Something like this:
>
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 7dda709f3ccb..78817d7acb66 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -246,6 +246,11 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
>  	spin_lock_irq(&q->queue_lock);
>  	q->limits.max_sectors = max_sectors_kb << 1;
>  	q->backing_dev_info->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10);
> +
> +	q->limits.max_zone_append_sectors =
> +		min(q->limits.max_sectors,
> +		    q->limits.max_hw_zone_append_sectors);
> +
>  	spin_unlock_irq(&q->queue_lock);
>
>  	return ret;
>
> The reasoning is that zone appends are a variation of write commands, and since
> max_sectors will gate the size of all read and write commands, it should also
> gate the size of zone append writes. And that avoids adding yet another tuning
> knob for users to get confused about.

True, but my thought was to have two different knobs so an administrator can
fine-tune the normal write path vs. the zone-append path separately. But that
may indeed be over-engineering.

Byte,
	Johannes
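
P.S.: For illustration only, a rough sketch of what a separate writable knob
could look like (completely untested, and assuming the max_hw_zone_append_sectors
hard limit field from your diff exists to clamp against; it would still need to
be wired up as a RW queue attribute next to zone_append_max_bytes):

static ssize_t queue_zone_append_max_store(struct request_queue *q,
					   const char *page, size_t count)
{
	unsigned long max_zone_append_kb;
	ssize_t ret = queue_var_store(&max_zone_append_kb, page, count);

	if (ret < 0)
		return ret;

	/*
	 * KB -> 512B sectors, as for max_sectors_kb. Never let the soft
	 * limit exceed what the hardware can actually do.
	 */
	spin_lock_irq(&q->queue_lock);
	q->limits.max_zone_append_sectors =
		min_t(unsigned int, max_zone_append_kb << 1,
		      q->limits.max_hw_zone_append_sectors);
	spin_unlock_irq(&q->queue_lock);

	return ret;
}

That way max_sectors_kb keeps gating regular writes while zone appends get
their own ceiling, which is exactly the extra knob you are arguing against.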