Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64

On Fri, Mar 21, 2025 at 07:43:09AM +0530, Ritesh Harjani wrote:
> "Darrick J. Wong" <djwong@xxxxxxxxxx> writes:
> 
> > On Fri, Mar 21, 2025 at 12:16:28AM +0530, Ritesh Harjani wrote:
> >> Luis Chamberlain <mcgrof@xxxxxxxxxx> writes:
> >> 
> >> > We've been constrained to a max single 512 KiB IO for a while now on x86_64.
> >> > This is due to the number of DMA segments and the segment size. With LBS the
> >> > segments can be much bigger without using huge pages, and so on a 64 KiB
> >> > block size filesystem you can now see 2 MiB IOs when using buffered IO.
> >> > But direct IO is still crippled, because allocations are from anonymous
> >> > memory, and unless you are using mTHP you won't get large folios. mTHP
> >> > is also non-deterministic, and so you end up in a worse situation for
> >> > direct IO if you want to rely on large folios, as you may *sometimes*
> >> > end up with large folios and sometimes you might not. IO patterns can
> >> > therefore be erratic.
> >> >
> >> > As I just posted in a simple RFC [0], I believe the two step DMA API
> >> > helps resolve this.  Provided we move the block integrity stuff to the
> >> > new DMA API as well, the only patches really needed to support larger
> >> > IOs for direct IO for NVMe are:
> >> >
> >> >   iomap: use BLK_MAX_BLOCK_SIZE for the iomap zero page
> >> >   blkdev: lift BLK_MAX_BLOCK_SIZE to page cache limit
> >> 
> >> These may be naive questions, but I would like some help from people
> >> who could confirm whether my understanding here is correct.
> >> 
> >> Given that we now support large folios in buffered I/O directly on raw
> >> block devices, applications must carefully serialize direct I/O and
> >> buffered I/O operations on these devices, right?
> >> 
> >> IIUC, until now, mixing buffered I/O and direct I/O (for doing I/O on
> >> /dev/xxx) on separate boundaries (blocksize == pagesize) worked fine,
> >> since direct I/O would only invalidate its corresponding page in the
> >> page cache. This assumes that both direct I/O and buffered I/O use the
> >> same blocksize and pagesize (e.g. both using 4K or both using 64K).
> >> However with large folios now introduced in the buffered I/O path for
> >> block devices, direct I/O may end up invalidating an entire large folio,
> >> which could span across a region where an ongoing direct I/O operation
> >
> > I don't understand the question.  Should this read  ^^^ "buffered"?
> 
> oops, yes.
> 
> > As in, directio submits its write bio, meanwhile another thread
> > initiates a buffered write nearby, the write gets a 2MB folio, and
> > then the post-write invalidation knocks down the entire large folio?
> > Even though the two ranges written are (say) 256k apart?
> >
> 
> Yes, Darrick. That is my question. 
> 
> i.e. w/o large folios in block devices one could do direct-io &
> buffered-io in parallel even just next to each other (assuming 4k pagesize). 
> 
>            |4k-direct-io | 4k-buffered-io | 
> 
> 
> However with large folios now supported in the buffered-io path for
> block devices, the application cannot submit such a direct-io +
> buffered-io pattern in parallel, since direct-io can end up
> invalidating the large folio that spans its 4k range and on which
> buffered-io is in progress.
> 
> So now applications need to be careful not to submit direct-io &
> buffered-io in parallel in patterns like the above on a raw block
> device, correct? That is what I would like to confirm.

I think that's correct, and kind of horrifying if true.  I wonder if
->invalidate_folio might be a reasonable way to clear the uptodate bits
on the relevant parts of a large folio without having to split or remove
it?
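
Something like this completely untested sketch is roughly what I have
in mind; blkdev_invalidate_folio() and blkdev_clear_range_uptodate()
here are made-up names for illustration (iomap tracks an equivalent
per-block uptodate bitmap in its iomap_folio_state, but those helpers
are private to fs/iomap/buffered-io.c today):

static void blkdev_invalidate_folio(struct folio *folio, size_t offset,
				    size_t len)
{
	if (offset == 0 && len == folio_size(folio)) {
		/*
		 * Full invalidation: the caller is dropping the whole
		 * folio from the page cache anyway, so there is nothing
		 * worth preserving -- just tear down any private state.
		 */
		return;
	}

	/*
	 * Partial invalidation: the folio can no longer claim to be
	 * fully uptodate, but the blocks outside [offset, offset + len)
	 * still are.  Clear only the affected bits (hypothetical helper)
	 * so the rest of the large folio keeps serving buffered I/O
	 * without being split or removed.
	 */
	folio_clear_uptodate(folio);
	blkdev_clear_range_uptodate(folio, offset, len);
}

That way a partial invalidation would only force re-reads of the blocks
the directio write actually touched, and the rest of the large folio
stays usable for the concurrent buffered writer.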

--D

> > --D
> >
> >> is taking place. That means, with large folio support in block devices,
> >> application developers must now ensure that direct I/O and buffered I/O
> >> operations on block devices are properly serialized, correct?
> >> 
> >> I was looking at the posix page [1] and I don't think the posix standard
> >> defines the semantics for operations on block devices. So it is really
> >> up to the individual OS implementation, correct?
> >> 
> >> And IIUC, what Linux recommends is to never mix any kind of direct-io
> >> and buffered-io when doing I/O on raw block devices, but I cannot find
> >> this recommendation in any Documentation. So can someone please point
> >> me to where we recommend this?
> 
> And this ^^^ 
> 
> 
> -ritesh
> 
> >> 
> >> [1]: https://pubs.opengroup.org/onlinepubs/9799919799/
> >> 
> >> 
> >> -ritesh
> >> 
> >> >
> >> > The other two nvme-pci patches in that series are to just help with
> >> > experimentation now and they can be ignored.
> >> >
> >> > It does raise a few questions:
> >> >
> >> >  - How are we computing the new max single IO anyway? Are we really
> >> >    bounded only by what devices support?
> >> >  - Do we believe this is the step in the right direction?
> >> >  - Is 2 MiB a sensible max block sector size limit for the next few years?
> >> >  - What other considerations should we have?
> >> >  - Do we want something more deterministic for large folios for direct IO?
> >> >
> >> > [0] https://lkml.kernel.org/r/20250320111328.2841690-1-mcgrof@xxxxxxxxxx
> >> >
> >> >   Luis
> >> 



