+
+	imask = ~mask;
+
+	/* Top bits are different, so crossed a boundary */
+	if ((start & imask) != (end & imask))
+		return true;
+
+	return false;
+}
+
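For clarity, here is a standalone sketch of how that boundary test behaves (the helper name and the 64 KiB boundary below are purely illustrative, not from the patch):

	#include <stdbool.h>
	#include <stdint.h>

	/* Illustrative standalone version of the boundary-crossing test above */
	static bool crosses_boundary(uint64_t start, uint64_t end, uint64_t boundary)
	{
		uint64_t mask = boundary - 1;	/* boundary is a power of two */
		uint64_t imask = ~mask;

		/* Top bits differ => the range straddles a boundary */
		return (start & imask) != (end & imask);
	}

	/*
	 * e.g. with a 64 KiB boundary (0x10000):
	 *   crosses_boundary(0xff00, 0x10100, 0x10000) -> true
	 *   crosses_boundary(0x0100, 0x0f00,  0x10000) -> false
	 */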
I'm not sure what is going on with your mail client here.
Shall we ensure here that we don't cross the maximum atomic write size supported by
the device? It seems that if the boundary size is not advertised by the device
(in fact, I have one NVMe drive with a boundary size of zero, i.e. nabo/nabspf/
nawupf are all zero but awupf is non-zero), then we unconditionally allow
merging. However, it is possible that after merging, the total size of the
request exceeds the atomic-write-max-unit-size supported by the device, and
if that happens we would most probably only catch it very late, in the driver
code (if the device is NVMe).
So is it a good idea to validate here, before we allow merging, whether we
could potentially exceed the atomic-write-max-unit-size supported by the device?
Note that we have atomic_write_max_bytes and atomic_write_max_unit_size,
and they are not always the same thing.
If the merged request would exceed the atomic-write-max-unit-size, should we
then disallow merging?
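In sketch form, that extra check might look something like this (purely
illustrative; queue_atomic_write_max_bytes() is used here as a hypothetical
helper, not something taken from this series):

	/* Hypothetical extra merge-time check, illustrative only */
	if ((bio->bi_opf & REQ_ATOMIC) &&
	    ((blk_rq_sectors(req) + bio_sectors(bio)) << SECTOR_SHIFT) >
	     queue_atomic_write_max_bytes(req->q))
		return 0;	/* don't merge */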
We check this elsewhere. I just expanded the normal check for max
request size to cover atomic writes.
Normally we check that a merged request would not exceed the max_sectors
value, and this max_sectors value can be obtained from
blk_queue_get_max_sectors().
So if you check a function like ll_back_merge_fn(), we have a merging
size check:
	if (blk_rq_sectors(req) + bio_sectors(bio) >
	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
		req_set_nomerge(req->q, req);
		return 0;
	}
And here the blk_rq_get_max_sectors() -> blk_queue_get_max_sectors()
call now also supports atomic writes (see patch #7):
@@ -167,7 +167,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
{
...
+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
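Putting these two pieces together, a merge that would push an atomic write
past atomic_write_max_sectors is rejected by the same size check that bounds
regular writes by max_sectors. A standalone model of that logic (simplified
and illustrative only; the struct and function names below are made up, this
is not the actual block-layer code):

	#include <stdbool.h>

	/* Simplified model of the relevant queue limits (illustrative only) */
	struct limits {
		unsigned int max_sectors;
		unsigned int atomic_write_max_sectors;
	};

	/* Would merging bio_sectors more sectors into a req_sectors request still fit? */
	static bool merge_fits(const struct limits *lim, bool is_atomic,
			       unsigned int req_sectors, unsigned int bio_sectors)
	{
		unsigned int max = is_atomic ? lim->atomic_write_max_sectors
					     : lim->max_sectors;

		return req_sectors + bio_sectors <= max;
	}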
Note that we do not allow merging of atomic and non-atomic writes.
Thanks,
John