On 03/06/2024 10:26, Hannes Reinecke wrote:
>> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
>> +				unsigned int front_adjust,
>> +				unsigned int back_adjust)
>> +{
>> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>> +	u64 mask, start_rq_pos, end_rq_pos;
>> +
>> +	if (!boundary)
>> +		return false;
>> +
>> +	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
>> +	end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
>> +
>> +	start_rq_pos -= front_adjust;
>> +	end_rq_pos += back_adjust;
>> +
>> +	mask = ~(boundary - 1);
>> +
>> +	/* Top bits are different, so crossed a boundary */
>> +	if ((start_rq_pos & mask) != (end_rq_pos & mask))
>> +		return true;
>> +
>> +	return false;
>> +}
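(As an aside, a quick worked example of that mask check, with made-up
numbers and a 64KB boundary, in case it helps anyone following along:

	boundary     = 0x10000			/* 64KB */
	mask         = ~(boundary - 1) = 0xffff0000
	start_rq_pos = 0xfc00			/* 63KB */
	end_rq_pos   = 0x103ff			/* 65KB - 1, i.e. a 2KB request */

	start_rq_pos & mask = 0x00000
	end_rq_pos   & mask = 0x10000

The top bits differ, so the request straddles the 64KB boundary.)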
> But isn't that precisely what 'chunk_sectors' is doing?
> IE ensuring that requests never cross that boundary?
> Q1: Shouldn't we rather use/modify/adapt chunk_sectors for this thing?
So you are saying that we can re-use blk_chunk_sectors_left() to
determine whether merging a bio/req would cross the boundary, right?
It seems ok in principle - we would just need to ensure that it is
watertight.
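Something like the below rough sketch is what I would have in mind - the
function name is made up, and it only covers growing a request at the
back, so a front merge would need the equivalent treatment:

static bool rq_merge_straddles_atomic_write_boundary(struct request *rq,
				unsigned int nr_sectors)
{
	unsigned int boundary_sectors =
		queue_atomic_write_boundary_bytes(rq->q) >> SECTOR_SHIFT;

	if (!boundary_sectors)
		return false;

	/*
	 * blk_chunk_sectors_left() gives the sectors remaining before the
	 * next boundary, counted from the start of the request.
	 */
	return blk_rq_sectors(rq) + nr_sectors >
		blk_chunk_sectors_left(blk_rq_pos(rq), boundary_sectors);
}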
> Q2: If we don't, shouldn't we align the atomic write boundary to the
> chunk_sectors setting to ensure both match up?
Yeah, right. But we can only handle what the HW tells us.
The atomic write boundary is only relevant to NVMe. NVMe NOIOB - which
we use to set chunk_sectors - is an IO optimization hint, AFAIK. However,
the atomic write boundary is a hard limit. So if NOIOB is not aligned
with the atomic write boundary - which seems unlikely - then the atomic
write boundary takes priority.
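To spell out that priority, a sketch only - the function and parameter
names are made up, and in practice this would sit wherever the driver
derives chunk_sectors (nvme_update_disk_info() or thereabouts):

/*
 * Sketch only: choose chunk_sectors when the device reports both NOIOB
 * and an atomic write boundary. All names here are made up.
 */
static unsigned int nvme_pick_chunk_sectors(unsigned int noiob_sectors,
				unsigned int awb_sectors)
{
	/* No hard boundary: just use the NOIOB hint (which may be 0) */
	if (!awb_sectors)
		return noiob_sectors;

	/* NOIOB missing or not aligned to the boundary: boundary wins */
	if (!noiob_sectors || noiob_sectors % awb_sectors)
		return awb_sectors;

	return noiob_sectors;
}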
Thanks,
John