[LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64

We've been constrained to a maximum single IO of 512 KiB for a while now
on x86_64. This is due to the number of DMA segments and the segment size.
With LBS the segments can be much bigger without using huge pages, and so
on a 64 KiB block size filesystem you can now see 2 MiB IOs when using
buffered IO. But direct IO is still crippled: its buffers come from
anonymous memory, and unless you are using mTHP you won't get large
folios. mTHP is also non-deterministic, so if you rely on it for large
folios with direct IO you only get them some of the time, and the IO
patterns end up erratic.
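
To make the arithmetic behind those numbers concrete, here is a small
userspace sketch. The figures in it (128 DMA segments per request, 4 KiB
anonymous pages, 64 KiB LBS folios) are assumptions based on common
defaults, not values taken from the RFC:

  /*
   * Not kernel code, just the arithmetic: with 128 DMA segments per
   * request and anonymous 4 KiB pages, each segment maps a single page,
   * so a request tops out at 512 KiB. Physically contiguous large folios
   * let one segment carry much more, which is what lifts the ceiling;
   * other queue limits (e.g. max_sectors_kb) may still cap it lower.
   */
  #include <stdio.h>

  int main(void)
  {
          unsigned int max_segments = 128;      /* assumed queue default */
          unsigned int page_size    = 4 << 10;  /* 4 KiB anonymous page  */
          unsigned int folio_size   = 64 << 10; /* 64 KiB LBS folio      */

          printf("4 KiB pages:   %u KiB per IO\n",
                 max_segments * page_size >> 10);   /* 512 KiB */
          printf("64 KiB folios: %u KiB per IO\n",
                 max_segments * folio_size >> 10);  /* 8 MiB before other caps */
          return 0;
  }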

As I just posted in a simple RFC [0], I believe the two-step DMA API
helps resolve this. Provided we move the block integrity code to the
new DMA API as well, the only patches really needed to support larger
IOs for direct IO on NVMe are:

  iomap: use BLK_MAX_BLOCK_SIZE for the iomap zero page
  blkdev: lift BLK_MAX_BLOCK_SIZE to page cache limit

The other two nvme-pci patches in that series are just there to help with
experimentation for now and can be ignored.

This raises a few questions:

 - How are we computing the new max single IO anyway? Are we really
   bounded only by what devices support? (See the sketch after this
   list for the queue limits involved.)
 - Do we believe this is the step in the right direction?
 - Is 2 MiB a sensible max block sector size limit for the next few years?
 - What other considerations should we have?
 - Do we want something more deterministic for large folios for direct IO?
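
On the first question, a quick way to see what bounds a single IO on a
given device today is to read the queue limits from sysfs. A minimal
userspace sketch; the nvme0n1 name is only an example:

  /*
   * Userspace only: dump the queue limits that bound a single IO.
   * The effective cap is roughly min(max_sectors_kb,
   * max_segments * achievable segment size).
   */
  #include <stdio.h>

  static unsigned long read_limit(const char *name)
  {
          char path[256];
          unsigned long val = 0;
          FILE *f;

          snprintf(path, sizeof(path), "/sys/block/nvme0n1/queue/%s", name);
          f = fopen(path, "r");
          if (f) {
                  if (fscanf(f, "%lu", &val) != 1)
                          val = 0;
                  fclose(f);
          }
          return val;
  }

  int main(void)
  {
          printf("max_hw_sectors_kb: %lu\n", read_limit("max_hw_sectors_kb"));
          printf("max_sectors_kb:    %lu\n", read_limit("max_sectors_kb"));
          printf("max_segments:      %lu\n", read_limit("max_segments"));
          printf("max_segment_size:  %lu\n", read_limit("max_segment_size"));
          return 0;
  }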

[0] https://lkml.kernel.org/r/20250320111328.2841690-1-mcgrof@xxxxxxxxxx

  Luis



