Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64

On Thu, Mar 20, 2025 at 03:18:46PM +0100, Christoph Hellwig wrote:
> On Thu, Mar 20, 2025 at 04:41:11AM -0700, Luis Chamberlain wrote:
> > We've been constrained to a max single 512 KiB IO for a while now on x86_64.
> 
> No, we absolutely haven't.  I'm regularly seeing multi-MB I/O on both
> SCSI and NVMe setup.

Sorry, you're right, I should have been clearer. This is only an issue when
buffered IO is not using large folios, or when the driver lacks scatter list
chaining support.

Or put another way, block drivers which don't support scatter list
chaining will end up with a different max IO possible for direct IO and
io-uring cmd.
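
To put rough numbers on that (only a sketch; it assumes x86_64 with 4 KiB
pages and a 32-byte struct scatterlist, which is where the kernel's
SG_MAX_SINGLE_ALLOC works out to 128 entries):

#include <stdio.h>

int main(void)
{
	/* Assumption: without sg chaining, a driver gets at most one page
	 * worth of scatterlist entries (SG_MAX_SINGLE_ALLOC). */
	unsigned int page_size = 4096;	/* 4 KiB pages */
	unsigned int sg_entry  = 32;	/* sizeof(struct scatterlist) on x86_64 */

	printf("max segments without chaining: %u\n", page_size / sg_entry);
	return 0;
}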

> > This is due to the number of DMA segments and the segment size.
> 
> In nvme the max_segment_size is UINT_MAX, and for most SCSI HBAs it is
> fairly large as well.

For direct IO or io-uring cmd, when large folios cannot be used, the
segments are constrained to the page size.
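
With page-sized segments the ceiling then falls out directly (same
assumptions as the sketch above: 128 segments from an unchained
scatterlist, 4 KiB pages):

#include <stdio.h>

int main(void)
{
	/* Assumption: no large folios, so each segment covers one 4 KiB
	 * page, and the driver is capped at 128 segments. */
	unsigned int max_segs = 128;
	unsigned int seg_size = 4096;

	printf("max IO: %u KiB\n", (max_segs * seg_size) / 1024); /* 512 KiB */
	return 0;
}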

  Luis



