On 10.03.2022 14:58, Matias Bjørling wrote:
>> Yes, these drives are intended for Linux users that would use the
>> zoned block device. Append is supported, but holes in the LBA space
>> (due to the difference between zone cap and zone size) are still a
>> problem for these users.
>
> With respect to the specific users, what does it break specifically?
> What key features are they missing when there are holes?
What we hear is that it breaks existing mappings in applications, where the
address space is seen as contiguous; with holes, the application needs to
account for the unmapped space. This affects both performance and CPU usage
due to unnecessary splits. This applies to both reads and writes.
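To make the split cost concrete, here is a minimal sketch (illustrative names, not the actual application or kernel code) of why an extent that is contiguous in the application's hole-free view must be split on the device: it turns into one segment per zone-cap boundary it crosses.

```c
#include <stdint.h>
#include <assert.h>

/*
 * Sketch: count how many device-level segments a logically contiguous
 * extent becomes when usable capacity per zone is zone_cap.  Every
 * zone_cap boundary crossed forces a split, because the LBAs between
 * zone cap and zone size are a hole.  Names are illustrative.
 */
static uint64_t nr_segments(uint64_t off, uint64_t len, uint64_t zone_cap)
{
	uint64_t first = off / zone_cap;            /* zone of first byte */
	uint64_t last = (off + len - 1) / zone_cap; /* zone of last byte */

	return last - first + 1;                    /* one segment per zone */
}
```

For example, a read of 20 units starting at offset 90 with a zone cap of 100 straddles one boundary and becomes two segments; with no holes it would have stayed a single I/O.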
For more details, I guess they will have to jump in and share the parts that
they consider proper to share on the mailing list.
I guess we will have more conversations around this as we push the block
layer changes after this series.
Ok, so I hear that one issue is I/O splits. If I assume that reads are
sequential, and that zone cap/size is between 100MiB and 1GiB, then my gut
feeling would tell me it's less CPU-intensive to split every 100MiB to 1GiB
of reads than it would be to use non-power-of-2 zones, due to the extra
per-I/O calculations.
Do I have a faulty assumption about the above, or is there more to it?
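For reference, the "extra per-I/O calculations" boil down to this: with a power-of-2 zone size, the zone number and in-zone offset reduce to a shift and a mask, while a non-power-of-2 size needs 64-bit division and modulo on every I/O. A minimal sketch (illustrative helper names, not the actual kernel functions):

```c
#include <stdint.h>
#include <assert.h>

/* Power-of-2 zone size: zone math is a shift and a mask. */
static uint64_t zone_index_po2(uint64_t lba, unsigned int zone_size_shift)
{
	return lba >> zone_size_shift;      /* zone number */
}

static uint64_t zone_offset_po2(uint64_t lba, uint64_t zone_size)
{
	return lba & (zone_size - 1);       /* offset within zone */
}

/* Non-power-of-2 zone size: 64-bit divide and modulo per I/O. */
static uint64_t zone_index_npo2(uint64_t lba, uint64_t zone_size)
{
	return lba / zone_size;
}

static uint64_t zone_offset_npo2(uint64_t lba, uint64_t zone_size)
{
	return lba % zone_size;
}
```

Whether the per-I/O divide or the extra splits cost more in practice is exactly the open question in this thread.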
I do not have numbers on the number of splits; I can only say that it is
an issue. On top of that, the whole management apparently also costs some
DRAM for an extra mapping, instead of simply doing +1.
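To illustrate the "extra mapping instead of simply doing +1" point, here is a minimal sketch (illustrative names, assumptions mine) of the translation an application needs when zone cap is smaller than zone size: a logically contiguous offset must be remapped to a device LBA that skips the hole at the tail of every zone.

```c
#include <stdint.h>
#include <assert.h>

/*
 * Sketch: translate an offset in the application's contiguous,
 * hole-free view into a device LBA, skipping the unmapped hole
 * between zone_cap and zone_size in every zone.  When
 * zone_cap == zone_size this collapses to the identity, i.e. the
 * next LBA is simply the previous one plus one.
 */
static uint64_t usable_to_lba(uint64_t usable_off,
			      uint64_t zone_size, uint64_t zone_cap)
{
	uint64_t zone = usable_off / zone_cap;    /* which zone */
	uint64_t in_zone = usable_off % zone_cap; /* offset within it */

	return zone * zone_size + in_zone;        /* skip the holes */
}
```

Keeping this translation (or a per-zone lookup table serving the same purpose) resident is where the extra DRAM cost comes from.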
The goal for these customers is to avoid the emulation, so the cost of
the !PO2 path would be 0.
For the existing applications that require PO2 zones, we have the
emulation. In this case, the cost will only be paid on the devices that
implement !PO2 zones.
Hope this answers the question.