IMX8MM PCIe performance evaluated with NVMe

Greetings,

I'm using PCIe on the IMX8M Mini and testing PCIe performance with an
NVMe drive constrained to 1 lane. The drive in question is a Samsung
SSD 980 500GB, which claims 3500MB/s read speed (with a gen3 x4 link).

My understanding of PCIe would give the following theoretical max
bandwidth based on transfer rate and encoding:
pcie gen1 x1 : 2500MT/s*1lane*80% (8b/10b encoding) = 2000Mbps = 250MB/s
pcie gen2 x1 : 5000MT/s*1lane*80% (8b/10b encoding) = 4000Mbps = 500MB/s
pcie gen3 x1 : 8000MT/s*1lane*~98.5% (128b/130b encoding) = ~7877Mbps = ~985MB/s
pcie gen3 x4 : 8000MT/s*4lanes*~98.5% (128b/130b encoding) = ~31508Mbps = ~3938MB/s
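
For reference, the same arithmetic as a quick shell sketch (this
assumes bc is available; the numbers are raw payload rates before any
TLP/DLLP protocol overhead):

  # MB/s = MT/s * lanes * encoding_efficiency / 8
  echo '2500 * 1 * 0.80      / 8' | bc -l   # gen1 x1 ->  250   MB/s
  echo '5000 * 1 * 0.80      / 8' | bc -l   # gen2 x1 ->  500   MB/s
  echo '8000 * 1 * (128/130) / 8' | bc -l   # gen3 x1 -> ~984.6 MB/s
  echo '8000 * 4 * (128/130) / 8' | bc -l   # gen3 x4 -> ~3938  MB/s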

My assumption is that an NVMe drive adds very little protocol overhead
and is therefore a simple way to test raw PCIe bus performance.

Testing this NVMe with 'dd if=/dev/nvme0n1 of=/dev/null bs=1M
count=500 iflag=nocache' on various systems gives me the following
(a direct-I/O variant of the same test is sketched after the list):
- x86 gen3 x4: 2700MB/s (vs theoretical max of ~4GB/s)
- x86 gen3 x1: 840MB/s
- x86 gen2 x1: 390MB/s
- cn8030 gen3 x1: 352MB/s (Cavium OcteonTX)
- cn8030 gen2 x1: 193MB/s (Cavium OcteonTX)
- imx8mm gen2 x1: 266MB/s
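
In case the page cache / readahead path is a factor, I assume the same
read could also be done with O_DIRECT to bypass the cache entirely (an
untested variant of the command above):

  dd if=/dev/nvme0n1 of=/dev/null bs=1M count=500 iflag=direct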

The various x86 tests were not all done on the same PC, kernel, or
kernel config; I used whatever hardware and Linux OS I had around just
to get a feel for performance. In all cases except the x4 case, lanes
2/3/4 were masked off with kapton tape to force a 1-lane link.

Why do you think the IMX8MM running at gen2 x1 would have such
lower-than-expected performance (266MB/s vs the 390MB/s an x86 gen2 x1
can get)?

What would be a more appropriate way of testing PCIe performance?
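
For example, would something along these lines (fio with direct I/O
and a deeper queue) exercise the link more fully? This is only a guess
on my part:

  fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
      --direct=1 --ioengine=libaio --iodepth=32 --numjobs=1 \
      --runtime=30 --time_based --group_reporting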

Best regards,

Tim


