Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64

On Thu, Mar 20, 2025 at 08:37:05AM -0700, Bart Van Assche wrote:
> On 3/20/25 7:18 AM, Christoph Hellwig wrote:
> > On Thu, Mar 20, 2025 at 04:41:11AM -0700, Luis Chamberlain wrote:
> > > We've been constrained to a max single 512 KiB IO for a while now on x86_64.
> > 
> > No, we absolutely haven't.  I'm regularly seeing multi-MB I/O on both
> > SCSI and NVMe setup.
> 
> Is NVME_MAX_KB_SZ the current maximum I/O size for PCIe NVMe
> controllers? From drivers/nvme/host/pci.c:

Yes, this is the driver's limit. The device's limit may be lower or
higher.

I allocate out of hugetlbfs to reliably send direct IO at this size
because the nvme driver's segment count is limited to 128. The driver
doesn't impose a segment size limit, though. If each segment is only 4k
(a common occurrence), I guess that's where Luis is getting the 512K
limit?

> /*
>  * These can be higher, but we need to ensure that any command doesn't
>  * require an sg allocation that needs more than a page of data.
>  */
> #define NVME_MAX_KB_SZ	8192
> #define NVME_MAX_SEGS	128
> #define NVME_MAX_META_SEGS 15
> #define NVME_MAX_NR_ALLOCATIONS	5
> 
> > > This is due to the number of DMA segments and the segment size.
> > 
> > In nvme the max_segment_size is UINT_MAX, and for most SCSI HBAs it is
> > fairly large as well.
> 
> I have a question for NVMe device manufacturers. It has been known for
> a long time that submitting large I/Os with the NVMe SGL format
> requires less CPU time than the NVMe PRP format. Is this sufficient to
> motivate NVMe device manufacturers to implement the SGL format? All SCSI
> controllers I know of, including UFS controllers, support something that
> is much closer to the NVMe SGL format than to the NVMe PRP format.

SGL support does seem less common than you'd think. It is more efficient
when you have physically contiguous pages, or an IOMMU that maps
discontiguous pages into a DMA-contiguous IOVA. If you don't have that,
PRP is a little more efficient for memory and CPU usage. But in the
context of large folios, yeah, SGL is the better option.



