On Thu, Dec 03, 2020 at 09:33:59AM -0500, Mike Snitzer wrote:
> On Wed, Dec 02 2020 at 10:26pm -0500,
> Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> 
> > On Tue, Dec 01, 2020 at 11:07:09AM -0500, Mike Snitzer wrote:
> > > commit 22ada802ede8 ("block: use lcm_not_zero() when stacking
> > > chunk_sectors") broke chunk_sectors limit stacking. chunk_sectors must
> > > reflect the most limited of all devices in the IO stack.
> > > 
> > > Otherwise malformed IO may result. E.g.: prior to this fix,
> > > ->chunk_sectors = lcm_not_zero(8, 128) would result in
> > > blk_max_size_offset() splitting IO at 128 sectors rather than the
> > > required more restrictive 8 sectors.
> > 
> > What is the user-visible result of splitting IO at 128 sectors?
> 
> The VDO dm target fails because it requires IO it receives to be split
> as it advertised (8 sectors).

OK, so VDO's chunk_sectors limit is a hard constraint. Even though VDO
is itself a DM device, I guess you are talking about stacking DM over
VDO?

Another reason must be that VDO doesn't use blk_queue_split();
otherwise this wouldn't be a problem, right?

Frankly speaking, if a stacking driver/device has its own hard queue
limit, like a normal hardware drive does, the driver should be
responsible for doing the splitting itself.

> > I understand it isn't related with correctness, because the underlying
> > queue can split by its own chunk_sectors limit further. So is the issue
> > too much further splitting on a queue with chunk_sectors of 8, so that
> > CPU utilization is increased? Or some other issue?
> 
> No, this is all about correctness.
> 
> Seems you're confining the definition of the possible stacking so that
> the top-level device isn't allowed to have its own hard requirements on

I just didn't know this background; thanks for your clarification.

As I mentioned above, if the stacking driver has its own hard queue
limit, it should be the driver's responsibility to respect it via
blk_queue_split() or whatever.

Thanks,
Ming
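
For reference, the lcm-vs-min arithmetic at issue can be reproduced with
a small user-space sketch (plain C, not kernel code; gcd(), lcm_not_zero()
and min_not_zero() are re-implemented here to mirror the kernel helpers,
and max_size_at() mimics only the power-of-2 fast path of
blk_max_size_offset()):

#include <stdio.h>

/* Re-implementations of the kernel helpers, for illustration only. */
static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned int lcm_not_zero(unsigned int a, unsigned int b)
{
	if (!a || !b)
		return a ? a : b;
	return (a / gcd(a, b)) * b;
}

static unsigned int min_not_zero(unsigned int a, unsigned int b)
{
	if (!a || !b)
		return a ? a : b;
	return a < b ? a : b;
}

/*
 * Power-of-2 fast path of blk_max_size_offset(): number of sectors
 * from 'offset' to the next chunk_sectors boundary, i.e. the largest
 * split the limit permits at that offset.
 */
static unsigned int max_size_at(unsigned int chunk_sectors, unsigned int offset)
{
	return chunk_sectors - (offset & (chunk_sectors - 1));
}

int main(void)
{
	unsigned int top = 8, bottom = 128; /* chunk_sectors of stacked devices */
	unsigned int s_lcm = lcm_not_zero(top, bottom);
	unsigned int s_min = min_not_zero(top, bottom);

	printf("lcm stacking: chunk_sectors=%u, first split at %u sectors\n",
	       s_lcm, max_size_at(s_lcm, 0));
	printf("min stacking: chunk_sectors=%u, first split at %u sectors\n",
	       s_min, max_size_at(s_min, 0));
	return 0;
}

With top=8 and bottom=128, lcm stacking yields chunk_sectors=128 and
permits a 128-sector bio where the top device requires splitting at 8
sectors; min stacking yields 8, the most limited device in the stack,
which is the behaviour the patch argues for.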
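
And to make the "driver should split for itself" point concrete, a rough
sketch of a bio-based driver calling blk_queue_split() from its
->submit_bio hook, written against the ~v5.10 interfaces current when
this thread took place (everything other than blk_queue_split() and the
submit_bio hook is made up here; treat exact signatures as assumptions):

static blk_qc_t my_submit_bio(struct bio *bio)
{
	/* Split 'bio' according to our own queue limits before using it. */
	blk_queue_split(&bio);

	/* ... remap/clone 'bio' and submit it to the lower device ... */

	return BLK_QC_T_NONE;
}

static const struct block_device_operations my_fops = {
	.owner		= THIS_MODULE,
	.submit_bio	= my_submit_bio,
};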