On 2021/11/23 22:28, hch@xxxxxxxxxxxxx wrote:
> On Tue, Nov 23, 2021 at 11:39:11AM +0000, Johannes Thumshirn wrote:
>> I think we have to differentiate two cases here:
>> A "regular" REQ_OP_ZONE_APPEND bio and a RAID stripe REQ_OP_ZONE_APPEND
>> bio. The first one (i.e. the regular REQ_OP_ZONE_APPEND bio) can't be
>> split because we cannot guarantee the order in which the device writes
>> the data to disk.
> That's correct.
But if we want to move all bio splitting into the chunk layer, we want
an initial bio without any limitation, and then use that bio to create
real REQ_OP_ZONE_APPEND bios with the proper size limits.
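Something like the following rough sketch is what I have in mind
(submit_initial_bio() and zone_start are made-up names, error handling
is omitted, and it only covers the single-device case; it's just to
show the shape of the idea):

#include <linux/bio.h>
#include <linux/blkdev.h>

static void record_written_location(struct bio *bio); /* sketched below */

static void submit_initial_bio(struct bio *bio, struct block_device *bdev,
			       sector_t zone_start)
{
	unsigned int max = queue_max_zone_append_sectors(bdev_get_queue(bdev));

	while (bio_sectors(bio) > max) {
		/* bio_split() shares the bvec table, so no page copying */
		struct bio *split = bio_split(bio, max, GFP_NOFS, &fs_bio_set);

		split->bi_opf = REQ_OP_ZONE_APPEND;
		/* REQ_OP_ZONE_APPEND wants the zone start as the target */
		split->bi_iter.bi_sector = zone_start;
		split->bi_end_io = record_written_location;
		submit_bio(split);
	}
	bio->bi_opf = REQ_OP_ZONE_APPEND;
	bio->bi_iter.bi_sector = zone_start;
	bio->bi_end_io = record_written_location;
	submit_bio(bio);
}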
>> For the RAID stripe bio we can split it into the two (or more) parts
>> that will end up on _different_ devices. All we need to do is a) ensure
>> it doesn't cross the device's zone append limit and b) clamp all
>> bi_iter.bi_sector down to the start of the target zone, i.e. stick to
>> the rules of REQ_OP_ZONE_APPEND.
> Exactly. A stacking driver must never split a REQ_OP_ZONE_APPEND bio.
> But the file system itself can of course split it as long as each split
> off bio has its own bi_end_io handler to record where it has been
> written to.
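For reference, such a per-split end_io could look roughly like this
(struct zone_append_ctx is a made-up container; the one real detail is
that for REQ_OP_ZONE_APPEND the block layer returns the actual written
location in bi_iter.bi_sector at completion time):

#include <linux/bio.h>
#include <linux/completion.h>

/* Made-up container, just to show where the location would go. */
struct zone_append_ctx {
	sector_t written_sector;	/* filled in at completion */
	struct completion done;
};

static void record_written_location(struct bio *bio)
{
	struct zone_append_ctx *ctx = bio->bi_private;

	/* for zone append, bi_sector now holds where the data landed */
	if (!bio->bi_status)
		ctx->written_sector = bio->bi_iter.bi_sector;
	complete(&ctx->done);
	bio_put(bio);
}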
This makes me wonder: can we really forget the zone handling for the
initial bio, so we just create a plain bio without any special
limitation, and let every split condition be handled in the lower
layer? Including the RAID stripe boundary, the zone limitations, etc.
(Yeah, it's still not a pure stacking driver, but it's more
stacking-driver like.)
In that case, the missing piece seems to be a way to convert a split
plain bio into a REQ_OP_ZONE_APPEND bio.
Can this be done without slow bvec copying?
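Maybe bio_clone_fast() already gives us that, since it reuses the
existing bvec table instead of copying it? A rough sketch of what I
mean (the function name is made up, and error handling aside):

#include <linux/bio.h>

static struct bio *plain_to_zone_append(struct bio *src, sector_t zone_start)
{
	/* bio_clone_fast() shares src's bvec table, no bvec copying */
	struct bio *za = bio_clone_fast(src, GFP_NOFS, &fs_bio_set);

	if (!za)
		return NULL;
	za->bi_opf = REQ_OP_ZONE_APPEND;
	za->bi_iter.bi_sector = zone_start;
	return za;
}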
Thanks,
Qu