Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64

On Thu, Mar 20, 2025 at 02:29:56PM +0100, Daniel Gomez wrote:
> On Thu, Mar 20, 2025 at 12:11:47PM +0100, Matthew Wilcox wrote:
> > On Thu, Mar 20, 2025 at 04:41:11AM -0700, Luis Chamberlain wrote:
> > > We've been constrained to a max single 512 KiB IO for a while now on x86_64.
> > ...
> > > It does beg a few questions:
> > > 
> > >  - How are we computing the new max single IO anyway? Are we really
> > >    bounded only by what devices support?
> > >  - Do we believe this is the step in the right direction?
> > >  - Is 2 MiB a sensible max block sector size limit for the next few years?
> > >  - What other considerations should we have?
> > >  - Do we want something more deterministic for large folios for direct IO?
> > 
> > Is the 512KiB limit one that real programs actually hit?  Would we
> > see any benefit from increasing it?  A high end NVMe device has a
> > bandwidth limit around 10GB/s, so that's reached around 20k IOPS,
> > which is almost laughably low.
> 
> Current devices do more than that. A quick search gives me 14GB/s and 2.5M IOPS
> for gen5 devices:
> 
> https://semiconductor.samsung.com/ssd/enterprise-ssd/pm1743/
> 
> A gen6 device goes even further.

That kind of misses my point.  You don't need I/Os larger than 512KiB
to be bandwidth limited.  So what's the ROI of all this work?  Who benefits?
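The arithmetic behind the ~20k IOPS figure can be sketched in a few lines
(a back-of-the-envelope check, not from the thread; the bandwidth numbers
are the ones quoted above):

```python
# IOPS needed to saturate a device's bandwidth at a fixed IO size.
# At the current 512 KiB cap, well under 30k IOPS already saturates
# even a 14 GB/s gen5 device -- far below its quoted 2.5M IOPS ceiling
# (which is measured with small, e.g. 4 KiB, IOs).

KIB = 1024
GB = 1_000_000_000  # vendors quote decimal gigabytes per second

def iops_to_saturate(bandwidth_bytes_per_s: float, io_size_bytes: int) -> float:
    """IOPS required to reach the given bandwidth at a fixed IO size."""
    return bandwidth_bytes_per_s / io_size_bytes

print(round(iops_to_saturate(10 * GB, 512 * KIB)))  # 10 GB/s device: ~19073
print(round(iops_to_saturate(14 * GB, 512 * KIB)))  # 14 GB/s device: ~26703
```

In other words, larger single I/Os trade IOPS for bandwidth, but at 512 KiB
the IOPS needed to hit the bandwidth ceiling is already tiny.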



