This is a note to let you know that I've just added the patch titled

    block: properly handle REQ_OP_ZONE_APPEND in __bio_split_to_limits

to the 6.11-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     block-properly-handle-req_op_zone_append-in-__bio_sp.patch
and it can be found in the queue-6.11 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit becf8078393cd8e497c0ad8f0be8effa2376e464
Author: Christoph Hellwig <hch@xxxxxx>
Date:   Mon Aug 26 19:37:56 2024 +0200

    block: properly handle REQ_OP_ZONE_APPEND in __bio_split_to_limits

    [ Upstream commit 1e8a7f6af926e266cc1d7ac49b56bd064057d625 ]

    Currently REQ_OP_ZONE_APPEND is handled by the bio_split_rw case in
    __bio_split_to_limits.  This is harmful because REQ_OP_ZONE_APPEND
    bios do not adhere to the soft max_limits value but instead use their
    own capped version of max_hw_sectors, leading to incorrect splits that
    later blow up in bio_split.

    We still need the bio_split_rw logic to count nr_segs for blk-mq code,
    so add a new wrapper that passes in the right limit, and turns any bio
    that would need a split into an error as an additional debugging aid.

    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Damien Le Moal <dlemoal@xxxxxxxxxx>
    Tested-by: Hans Holmberg <hans.holmberg@xxxxxxx>
    Reviewed-by: Hans Holmberg <hans.holmberg@xxxxxxx>
    Link: https://lore.kernel.org/r/20240826173820.1690925-4-hch@xxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Stable-dep-of: 60dc5ea6bcfd ("block: take chunk_sectors into account in bio_split_write_zeroes")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/block/blk-merge.c b/block/blk-merge.c
index c7222c4685e06..56769c4bcd799 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -378,6 +378,26 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 			get_max_io_size(bio, lim) << SECTOR_SHIFT));
 }
 
+/*
+ * REQ_OP_ZONE_APPEND bios must never be split by the block layer.
+ *
+ * But we want the nr_segs calculation provided by bio_split_rw_at, and having
+ * a good sanity check that the submitter built the bio correctly is nice to
+ * have as well.
+ */
+struct bio *bio_split_zone_append(struct bio *bio,
+		const struct queue_limits *lim, unsigned *nr_segs)
+{
+	unsigned int max_sectors = queue_limits_max_zone_append_sectors(lim);
+	int split_sectors;
+
+	split_sectors = bio_split_rw_at(bio, lim, nr_segs,
+			max_sectors << SECTOR_SHIFT);
+	if (WARN_ON_ONCE(split_sectors > 0))
+		split_sectors = -EINVAL;
+	return bio_submit_split(bio, split_sectors);
+}
+
 /**
  * bio_split_to_limits - split a bio to fit the queue limits
  * @bio: bio to be split
diff --git a/block/blk.h b/block/blk.h
index 0d8cd64c12606..61c2afa67daab 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -337,6 +337,8 @@ struct bio *bio_split_write_zeroes(struct bio *bio,
 		const struct queue_limits *lim, unsigned *nsegs);
 struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 		unsigned *nr_segs);
+struct bio *bio_split_zone_append(struct bio *bio,
+		const struct queue_limits *lim, unsigned *nr_segs);
 
 /*
  * All drivers must accept single-segments bios that are smaller than PAGE_SIZE.
@@ -375,6 +377,8 @@ static inline struct bio *__bio_split_to_limits(struct bio *bio,
 			return bio_split_rw(bio, lim, nr_segs);
 		*nr_segs = 1;
 		return bio;
+	case REQ_OP_ZONE_APPEND:
+		return bio_split_zone_append(bio, lim, nr_segs);
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 		return bio_split_discard(bio, lim, nr_segs);
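As an aside for reviewers, the sketch below is not part of the patch and uses a made-up helper name; it only illustrates the limit mismatch described in the commit message. A zone append bio is built against the capped zone append limit, while the generic bio_split_rw path would bound it by the soft max_sectors limit, so the old code could decide to split a bio that the submitter had already sized correctly, and that split then blows up in bio_split.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Illustrative sketch only -- not part of the patch above.  The helper
 * name is hypothetical; it just shows the condition under which the old
 * bio_split_rw path would have tried to split a zone append bio, using
 * only fields and helpers that the patch itself references.
 */
static bool zone_append_bio_would_be_oversplit(struct bio *bio,
		const struct queue_limits *lim)
{
	/* limit the submitter honoured when building the zone append bio */
	unsigned int append_max = queue_limits_max_zone_append_sectors(lim);
	/* soft limit the generic read/write split path would have applied */
	unsigned int rw_max = lim->max_sectors;

	/*
	 * Fits within the zone append limit, but exceeds the soft limit:
	 * the pre-patch path would attempt a split here, which is exactly
	 * the case bio_split() cannot handle for REQ_OP_ZONE_APPEND.
	 */
	return bio_sectors(bio) <= append_max && bio_sectors(bio) > rw_max;
}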