On Wed, Dec 08, 2021 at 10:33:19AM -0800, Bart Van Assche wrote:
> On 12/7/21 4:56 PM, Eric Biggers wrote:
> > +What:		/sys/block/<disk>/queue/virt_boundary_mask
> > +Date:		April 2021
> > +Contact:	linux-block@xxxxxxxxxxxxxxx
> > +Description:
> > +		[RO] This file shows the I/O segment alignment mask for the
> > +		block device.  I/O requests to this device will be split between
> > +		segments wherever either the end of the previous segment or the
> > +		beginning of the current segment is not aligned to
> > +		virt_boundary_mask + 1 bytes.
> 
> "I/O segment alignment" looks confusing to me. My understanding is that this
> attribute refers to the alignment of the internal data buffer boundaries and
> not to the alignment of the offset on the storage medium. The name
> "virt_boundary" refers to the property that if all internal boundaries are a
> multiple of (virt_boundary_mask + 1) then an MMU with page size
> (virt_boundary_mask + 1) can map the entire data buffer onto a contiguous
> range of virtual addresses. E.g. RDMA adapters have an MMU that can do this.
> Several drivers that set this attribute support a storage controller that
> does not have an internal MMU. As an example, the NVMe core sets this mask
> since the NVMe specification requires that only the first element in a PRP
> list has a non-zero offset. From the NVMe specification: "PRP entries
> contained within a PRP List shall have a memory page offset of 0h. If a
> second PRP entry is present within a command, it shall have a memory page
> offset of 0h. In both cases, the entries are memory".

Sure, I meant for it to be talking about the memory addresses.  How about this:

	[RO] This file shows the I/O segment memory alignment mask for the
	block device.  I/O requests to this device will be split between
	segments wherever either the memory address of the end of the
	previous segment or the memory address of the beginning of the
	current segment is not aligned to virt_boundary_mask + 1 bytes.

- Eric