On 07/10/2020 08:24, Damien Le Moal wrote:
> On 2020/10/07 14:50, Christoph Hellwig wrote:
>>> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
>>> index 7dda709f3ccb..78817d7acb66 100644
>>> --- a/block/blk-sysfs.c
>>> +++ b/block/blk-sysfs.c
>>> @@ -246,6 +246,11 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
>>>  	spin_lock_irq(&q->queue_lock);
>>>  	q->limits.max_sectors = max_sectors_kb << 1;
>>>  	q->backing_dev_info->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10);
>>> +
>>> +	q->limits.max_zone_append_sectors =
>>> +		min(q->limits.max_sectors,
>>> +		    q->limits.max_hw_zone_append_sectors);
>>> +
>>>  	spin_unlock_irq(&q->queue_lock);
>>>
>>>  	return ret;
>>
>> Yes, this looks pretty sensible. I'm not even sure we need the field,
>> just do the min where we build the bio instead of introducing another
>> field that needs to be maintained.
>
> Indeed, that would be even simpler. But that would also mean repeating that min
> call for every user. So maybe we should just add a simple helper
> queue_get_max_zone_append_sectors() ?
>
>
>

Like this?

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cf80e61b4c5e..967cd76f16d4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1406,7 +1406,10 @@ static inline unsigned int queue_max_segment_size(const struct request_queue *q)
 static inline unsigned int
 queue_max_zone_append_sectors(const struct request_queue *q)
 {
-	return q->limits.max_zone_append_sectors;
+
+	const struct queue_limits *l = &q->limits;
+
+	return min(l->max_zone_append_sectors, l->max_sectors);
 }

 static inline unsigned queue_logical_block_size(const struct request_queue *q)

That's indeed much simpler; we'd just need to make sure everyone is using
queue_max_zone_append_sectors() and isn't directly poking into the
queue_limits.
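
For illustration, a call site would then look something along these lines
(just a sketch; zone_append_max_bytes() is a made-up helper name here, not
an existing function):

/*
 * Sketch only: callers size zone append I/O through the helper, so the
 * clamp to max_sectors is always applied, instead of reading
 * q->limits.max_zone_append_sectors directly.
 */
static unsigned int zone_append_max_bytes(struct request_queue *q)
{
	return queue_max_zone_append_sectors(q) << SECTOR_SHIFT;
}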