Re: IMX8MM PCIe performance evaluated with NVMe

On Fri, Dec 3, 2021 at 3:31 PM Keith Busch <kbusch@xxxxxxxxxx> wrote:
>
> On Fri, Dec 03, 2021 at 01:52:17PM -0800, Tim Harvey wrote:
> > Greetings,
> >
> > I'm using PCIe on the IMX8M Mini and testing PCIe performance with an
> > NVMe drive constrained to 1 lane. The NVMe in question is a Samsung
> > SSD980 500GB which claims 3500MB/s read speed (with a gen3 x4 link).
> >
> > My understanding of PCIe performance would give the following
> > theoretical max bandwidth based on clock and encoding:
> > pcie gen1 x1 : 2500MT/s*1lane*80% (8B/10B encoding) = 2000Mbps = 250MB/s
> > pcie gen2 x1 : 5000MT/s*1lane*80% (8B/10B encoding) = 4000Mbps = 500MB/s
> > pcie gen3 x1 : 8000MT/s*1lane*98.46% (128B/130B encoding) = 7877Mbps = 984.6MB/s
> > pcie gen3 x4 : 8000MT/s*4lane*98.46% (128B/130B encoding) = 31508Mbps = 3938MB/s
> >
> > My assumption is an NVMe would have very little data overhead and thus
> > be a simple way to test PCIe bus performance.
>
> Your 'dd' output is only reporting the user data throughput, but there
> is more happening on the link than just user data.
>
> You've accounted for the bit encoding, but there's more from the PCIe
> protocol: the physical layer (SOS, skip ordered sets), DLLPs (Acks,
> flow control), and TLPs (headers, sequence numbers, checksums).
>
> NVMe itself also adds some overhead in the form of SQEs, CQEs, PRPs,
> and MSI-X interrupts.
>
> All told, the share of the link bandwidth that user data can actually
> use is going to top out around ~85-90%, depending on your PCIe MPS
> (Max Payload Size) setting.
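>
> As a rough worked example, assuming ~20 bytes of per-TLP overhead
> (3-DW header, framing, sequence number, LCRC):
>
>   MPS 256: 256B / (256B + 20B) = ~93% of the post-encoding link rate
>   MPS 128: 128B / (128B + 20B) = ~86%
>
> and the DLLP (Ack/FC) and SOS traffic takes a further slice on top of
> that, which is how you land in that ~85-90% range.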
>
> > Testing this NVMe with 'dd if=/dev/nvme0n1 of=/dev/null bs=1M
> > count=500 iflag=nocache' on various systems gives me the following:
>
> If using 'dd', I think you want to use 'iflag=direct' rather than 'nocache'.
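> For example, the same read with the page cache bypassed would be
> something like:
>
>   dd if=/dev/nvme0n1 of=/dev/null bs=1M count=500 iflag=direct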
>
> > - x86 gen3 x4: 2700MB/s (vs theoretical max of ~4GB/s)
> > - x86 gen3 x1: 840MB/s
> > - x86 gen2 x1: 390MB/s
> > - cn8030 gen3 x1: 352MB/s (Cavium OcteonTX)
> > - cn8030 gen2 x1: 193MB/s (Cavium OcteonTX)
> > - imx8mm gen2 x1: 266MB/s
> >
> > The various x86 tests were not all done on the same PC, kernel, or
> > kernel config... I used whatever I had around with whatever Linux OS
> > was on it, just to get a feel for performance. In all cases except
> > the x4 one, lanes 2/3/4 were masked off with kapton tape to force a
> > 1-lane link.
> >
> > Why do you think the IMX8MM running at gen2 x1 would perform so much
> > worse than expected (266MB/s vs the 390MB/s an x86 gen2 x1 could
> > get)?
> >
> > What would a more appropriate way of testing PCIe performance be?
>
> Beyond the protocol overhead, 'dd' is probably not going to be the best
> way to measure a device's performance. It sends just one command at a
> time, so you are also measuring the full software stack latency, which
> includes a system call and interrupt-driven context switches. The PCIe
> traffic sits idle during all of that overhead when running at just qd1.
>
> I am guessing your x86 is simply faster at executing through this
> software stack than your imx8mm, so the software latency is lower.
>
> A better approach may be to use higher queue depths with batched
> submissions so that your software overhead can occur concurrently with
> your PCIe traffic. Also, you can eliminate interrupt context switches if
> you use polled IO queues.

Thanks for the response!

The roughly 266MB/s I'm getting on the IMX8MM at gen2 x1 with NVMe and
plain old 'dd' is on par with what someone else has measured using a
custom PCIe device of theirs and a simple loopback test, so I don't
think the software stack is the bottleneck here (it's removed entirely
in his setup). I'm leaning towards something like interrupt latency.
I'll have to dig into the NVMe driver and see whether there's a way to
make it poll and what difference that makes.
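
For comparison, something like the following (untested; assuming a
kernel that exposes the nvme 'poll_queues' module parameter and an fio
build with io_uring support) should drive polled, queued reads without
any driver changes:

  # enable polled I/O queues in the nvme driver (or pass
  # nvme.poll_queues=2 on the kernel command line and reboot)
  modprobe -r nvme && modprobe nvme poll_queues=2

  # queued, polled sequential reads straight from the block device
  fio --name=pcietest --filename=/dev/nvme0n1 --direct=1 --rw=read \
      --bs=128k --iodepth=32 --ioengine=io_uring --hipri \
      --runtime=30 --time_based --group_reporting

That should take the interrupt and context-switch latency out of the
picture while keeping the link busy, which ought to show how much of
the gap is really in the software path.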

Best regards,

Tim


