On 14/02/2024 12:27, Nilay Shroff wrote:
Support reading atomic write registers to fill in request_queue
properties.
Use the following method to calculate the limits:
atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
You still need to fix that mail client to not add extra blank lines.
atomic_write_unit_min = logical_block_size
atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
atomic_write_boundary = NABSPF
In case the device doesn't support a namespace atomic boundary size (i.e. NABSPF
is zero), then we should allow merging of atomic block-IO.
For example, while front/back merging the atomic block IO, we check whether
a boundary is defined or not. If the boundary is not defined (i.e. it's zero)
then we simply reject the merge attempt (as implemented in
rq_straddles_atomic_write_boundary()).
Are you sure about that? In rq_straddles_atomic_write_boundary(), if
boundary == 0, then we return false, i.e. there is no boundary, so we
can never be crossing it.
static bool rq_straddles_atomic_write_boundary(struct request *rq,
					       unsigned int front,
					       unsigned int back)
{
	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
	unsigned int mask, imask;
	loff_t start, end;

	if (!boundary)
		return false;
	...
}
And then we will not reject a merge for that reason, like:
int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
{
	...
	if (req->cmd_flags & REQ_ATOMIC) {
		if (rq_straddles_atomic_write_boundary(req,
				0, bio->bi_iter.bi_size)) {
			return 0;
		}
	}

	return ll_new_hw_segment(req, bio, nr_segs);
}
I am quoting this from NVMe spec (Command Set Specification, revision 1.0a,
Section 2.1.4.3) : "To ensure backwards compatibility, the values reported for
AWUN, AWUPF, and ACWU shall be set such that they are supported even if a
write crosses an atomic boundary. If a controller does not guarantee
atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and
ACWU to 0h (1 LBA)."
Thanks,
John