On Thu, Dec 03 2020 at 10:59pm -0500,
Ming Lei <ming.lei@xxxxxxxxxx> wrote:

> On Thu, Dec 03, 2020 at 09:03:43PM -0500, Mike Snitzer wrote:
> > On Thu, Dec 03 2020 at 8:12pm -0500,
> > Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > > On Thu, Dec 03, 2020 at 09:33:59AM -0500, Mike Snitzer wrote:
> > > > On Wed, Dec 02 2020 at 10:26pm -0500,
> > > > Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> > > >
> > > > > On Tue, Dec 01, 2020 at 11:07:09AM -0500, Mike Snitzer wrote:
> > > > > > commit 22ada802ede8 ("block: use lcm_not_zero() when stacking
> > > > > > chunk_sectors") broke chunk_sectors limit stacking. chunk_sectors must
> > > > > > reflect the most limited of all devices in the IO stack.
> > > > > >
> > > > > > Otherwise malformed IO may result. E.g.: prior to this fix,
> > > > > > ->chunk_sectors = lcm_not_zero(8, 128) would result in
> > > > > > blk_max_size_offset() splitting IO at 128 sectors rather than the
> > > > > > required more restrictive 8 sectors.
> > > > >
> > > > > What is the user-visible result of splitting IO at 128 sectors?
> > > >
> > > > The VDO dm target fails because it requires IO it receives to be split
> > > > as it advertised (8 sectors).
> > >
> > > OK, looks VDO's chunk_sector limit is one hard constraint, even though it
> > > is one DM device, so I guess you are talking about DM over VDO?
> > >
> > > Another reason should be that VDO doesn't use blk_queue_split(), otherwise it
> > > won't be a trouble, right?
> > >
> > > Frankly speaking, if the stacking driver/device has its own hard queue limit
> > > like normal hardware drive, the driver should be responsible for the splitting.
> >
> > DM core does the splitting for VDO (just like any other DM target).
> > In 5.9 I updated DM to use chunk_sectors, use blk_stack_limits()
> > stacking of it, and also use blk_max_size_offset().
> >
> > But all that block core code has shown itself to be too rigid for DM. I
> > tried to force the issue by stacking DM targets' ti->max_io_len with
> > chunk_sectors. But really I'd need to be able to pass in the per-target
> > max_io_len to blk_max_size_offset() to salvage using it.
> >
> > Stacking chunk_sectors seems ill-conceived. One size-fits-all splitting
> > is too rigid.
>
> DM/VDO knows exactly it is one hard chunk_sectors limit, and DM shouldn't play
> the stacking trick on VDO's chunk_sectors limit, should it?

Feel like I already answered this in detail but... correct, DM cannot
and should not use stacked chunk_sectors as the basis for splitting.

Up until 5.9, when I changed DM core to set and then use chunk_sectors
for splitting via blk_max_size_offset(), DM only used its own
per-target ti->max_io_len in drivers/md/dm.c:max_io_len().

But I reverted back to DM's pre-5.9 splitting in this stable@ fix that
I'll be sending to Linus today for 5.10-rcX:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-5.10-rcX&id=6bb38bcc33bf3093c08bd1b71e4f20c82bb60dd1

DM is now back to pre-5.9 behavior where it doesn't even consider
chunk_sectors for splitting (NOTE: dm-zoned sets ti->max_io_len though,
so it effectively achieves the same boundary splits via max_io_len).

With that baseline established, what I'm now saying is: if DM, the most
common limits-stacking consumer, cannot benefit from stacked
chunk_sectors, then what stacked device does benefit?

Could be block core's stacked chunk_sectors-based splitting is good
enough for others, I'm just not yet seeing how.
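To make the failure mode concrete, here is a minimal userspace sketch
(illustrative only, not the kernel code; the helper name is made up)
of the boundary math that blk_max_size_offset() style splitting
performs for a power-of-2 chunk_sectors. It shows why an lcm-stacked
chunk_sectors of 128 lets an IO straddle the bottom device's 8-sector
boundaries while a min-stacked value of 8 does not:

/*
 * Sketch of blk_max_size_offset()'s boundary math for power-of-2
 * chunk_sectors.  max_sectors_at() answers: how many sectors may an
 * IO starting at 'offset' cover before crossing a chunk boundary?
 */
#include <stdio.h>

static unsigned int max_sectors_at(unsigned int chunk_sectors,
				   unsigned int offset)
{
	return chunk_sectors - (offset & (chunk_sectors - 1));
}

int main(void)
{
	unsigned int offset = 8;	/* IO starts 8 sectors in */

	/* lcm_not_zero(8, 128) == 128: allows up to 120 sectors,
	 * straddling the bottom device's 8-sector boundaries. */
	printf("lcm stacking: %u sectors\n", max_sectors_at(128, offset));

	/* min_not_zero(8, 128) == 8: allows at most 8 sectors,
	 * honoring the most restrictive device in the stack. */
	printf("min stacking: %u sectors\n", max_sectors_at(8, offset));
	return 0;
}

With those numbers the lcm-stacked limit permits a 120-sector IO at
offset 8, which an 8-sector-aligned bottom device (e.g. VDO) cannot
accept unsplit; the min-stacked limit correctly caps it at 8.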
Feels like it predates blk_queue_split() and the stacking of
chunk_sectors could/should be removed now.

All said, I'm fine with leaving stacked chunk_sectors for others to
care about... think I've raised enough awareness on this topic now ;)

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel