Re: [PATCH v2] block: use gcd() to fix chunk_sectors limit stacking

On Thu, Dec 03 2020 at  8:12pm -0500,
Ming Lei <ming.lei@xxxxxxxxxx> wrote:

> On Thu, Dec 03, 2020 at 09:33:59AM -0500, Mike Snitzer wrote:
> > On Wed, Dec 02 2020 at 10:26pm -0500,
> > Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> > 
> > > On Tue, Dec 01, 2020 at 11:07:09AM -0500, Mike Snitzer wrote:
> > > > commit 22ada802ede8 ("block: use lcm_not_zero() when stacking
> > > > chunk_sectors") broke chunk_sectors limit stacking. chunk_sectors must
> > > > reflect the most limited of all devices in the IO stack.
> > > > 
> > > > Otherwise malformed IO may result. E.g.: prior to this fix,
> > > > ->chunk_sectors = lcm_not_zero(8, 128) would result in
> > > > blk_max_size_offset() splitting IO at 128 sectors rather than the
> > > > required more restrictive 8 sectors.
> > > 
> > > What is the user-visible result of splitting IO at 128 sectors?
> > 
> > The VDO dm target fails because it requires IO it receives to be split
> > as it advertised (8 sectors).
> 
> OK, so VDO's chunk_sectors limit is a hard constraint even though it is
> a DM device, so I guess you are talking about DM stacked over VDO?
> 
> Another factor is that VDO doesn't use blk_queue_split(); otherwise it
> wouldn't be a problem, right?
> 
> Frankly speaking, if the stacking driver/device has its own hard queue limit,
> like a normal hardware drive, the driver should be responsible for the splitting.

DM core does the splitting for VDO (just like for any other DM target).
In 5.9 I updated DM to use chunk_sectors, to stack it via
blk_stack_limits(), and to use blk_max_size_offset().

But all that block core code has shown itself to be too rigid for DM.  I
tried to force the issue by stacking DM targets' ti->max_io_len with
chunk_sectors.  But really I'd need to be able to pass in the per-target
max_io_len to blk_max_size_offset() to salvage using it.
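
To make that concrete, here is a rough sketch of the kind of variant I'd
want.  The name blk_max_size_offset_limited() and its signature are
hypothetical, and the body only approximates what blk_max_size_offset()
does today, with the boundary passed in by the caller instead of always
being read from q->limits.chunk_sectors:

#include <linux/blkdev.h>
#include <linux/log2.h>

/*
 * Hypothetical sketch, not actual kernel code: like blk_max_size_offset(),
 * but the caller supplies the boundary to honor (e.g. a DM target's
 * ti->max_io_len) instead of always using q->limits.chunk_sectors.
 */
static inline unsigned int blk_max_size_offset_limited(struct request_queue *q,
                                                       sector_t offset,
                                                       unsigned int boundary_sectors)
{
        if (!boundary_sectors)
                return q->limits.max_sectors;

        /* sectors remaining until the next boundary from @offset */
        if (is_power_of_2(boundary_sectors))
                boundary_sectors -= offset & (boundary_sectors - 1);
        else
                boundary_sectors -= sector_div(offset, boundary_sectors);

        return min(boundary_sectors, q->limits.max_sectors);
}

With something like that, DM could keep using the block core helper for
splitting while still honoring each target's own max_io_len.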

Stacking chunk_sectors seems ill-conceived.  One size-fits-all splitting
is too rigid.
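
To spell out the arithmetic from the patch header quoted above, here is a
minimal userspace sketch; gcd() and lcm_not_zero() are reimplemented here
only for the demo (in the kernel they come from linux/gcd.h and
linux/lcm.h):

#include <stdio.h>

/* userspace stand-ins for the kernel's gcd() and lcm_not_zero() */
static unsigned int gcd(unsigned int a, unsigned int b)
{
        while (b) {
                unsigned int t = a % b;
                a = b;
                b = t;
        }
        return a;
}

static unsigned int lcm_not_zero(unsigned int a, unsigned int b)
{
        if (!a || !b)
                return a ? a : b;
        return (a / gcd(a, b)) * b;
}

int main(void)
{
        unsigned int bottom = 8, top = 128;     /* chunk_sectors being stacked */

        /* 22ada802ede8: splitting honors only the 128-sector boundary */
        printf("lcm_not_zero(%u, %u) = %u\n", bottom, top,
               lcm_not_zero(bottom, top));

        /* with gcd(): splitting honors the most limited device, 8 sectors */
        printf("gcd(%u, %u) = %u\n", bottom, top, gcd(bottom, top));
        return 0;
}

The lcm gives 128, so splitting lets 128-sector IO through to a device
that only accepts 8; the gcd gives 8, the most restrictive of the two.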

> > > I understand it isn't related to correctness, because the underlying
> > > queue can split further by its own chunk_sectors limit. So is the issue
> > > that the queue with chunk_sectors 8 does too much further splitting,
> > > which increases CPU utilization? Or is it some other issue?
> > 
> > No, this is all about correctness.
> > 
> > Seems you're confining the definition of the possible stacking so that
> > the top-level device isn't allowed to have its own hard requirements on
> 
> I just didn't know that background, thanks for your clarification.
> 
> As I mentioned above, if the stacking driver has its own hard queue
> limit, it should be the driver's responsibility to respect it via
> blk_queue_split() or whatever.

Again, DM does its own splitting... that aspect of it isn't the issue.
The problem is that the basis for splitting cannot be the stacked-up
chunk_sectors.
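
To show what I mean by "the basis for splitting", here is a simplified
sketch (not verbatim dm.c, and the helper name is made up) of the kind of
per-target calculation DM needs: the split length comes from the target's
own max_io_len relative to the target's start, not from a queue-wide
stacked chunk_sectors:

#include <linux/device-mapper.h>
#include <linux/log2.h>

/*
 * Simplified sketch, name made up: sectors DM may issue to @ti starting
 * at @sector before hitting either the end of the target or the target's
 * own per-target boundary (ti->max_io_len).  No queue-wide stacked
 * chunk_sectors is involved.
 */
static sector_t dm_sectors_to_target_boundary(struct dm_target *ti,
                                              sector_t sector)
{
        sector_t offset = dm_target_offset(ti, sector); /* sector - ti->begin */
        sector_t len = ti->len - offset;        /* never run past the target end */
        unsigned int boundary = ti->max_io_len; /* per-target split boundary */

        if (!boundary)
                return len;

        if (is_power_of_2(boundary))
                boundary -= offset & (boundary - 1);
        else
                boundary -= sector_div(offset, boundary);

        return min_t(sector_t, len, boundary);
}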

Mike



