Re: One Question About PCIe BUS Config Type with pcie_bus_safe or pcie_bus_perf On NVMe Device

On Tue, Jan 23, 2018 at 02:01:27PM +0000, Ron Yuan wrote:
> Just got the log, see attachment. Kernel is under "perf" mode.
> SSD and Ethernet controllers are both set to 256B MRRS.

Here's what I see from lspci:

  3a:00.0 Root Port to 3b     MPS_cap=256 MPS=256 MRRS=128
  3b:00.0 NIC Endpoint        MPS_cap=512 MPS=256 MRRS=256
  3b:00.1 NIC Endpoint        MPS_cap=512 MPS=256 MRRS=256

Here, the NICs support up to MPS=512 but the Root Port only supports
MPS=256.  We must set MPS=256 for the NICs.  Otherwise, the NICs could
do 512-byte DMA writes to system memory, and the Root Port would treat
those TLPs as malformed.
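
(For reference, all three numbers come straight out of the PCI
Express capability: Max_Payload_Size Supported is bits 2:0 of Device
Capabilities, and MPS/MRRS are bits 7:5 and 14:12 of Device Control,
each encoded as 128 << value.  Here's a minimal userspace sketch that
decodes them from the sysfs config file; the path below is just an
example, and non-root reads stop at 64 bytes, so run it as root:

  /* Decode MPS_cap/MPS/MRRS from PCI config space read via sysfs.
   * Usage: ./mpsdump /sys/bus/pci/devices/0000:3b:00.0/config
   * (example path; adjust the BDF for the device of interest) */
  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  #define PCI_CAP_PTR    0x34    /* start of the capability list */
  #define PCI_CAP_ID_EXP 0x10    /* PCI Express capability ID */

  int main(int argc, char **argv)
  {
          uint8_t cfg[256];
          FILE *f;
          int pos;

          memset(cfg, 0, sizeof(cfg));
          if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
                  fprintf(stderr, "usage: %s <config file>\n", argv[0]);
                  return 1;
          }
          if (fread(cfg, 1, sizeof(cfg), f) < 64) {
                  fprintf(stderr, "short read of config space\n");
                  return 1;
          }
          fclose(f);

          /* walk the capability list to the PCI Express capability */
          for (pos = cfg[PCI_CAP_PTR]; pos; pos = cfg[pos + 1]) {
                  if (cfg[pos] != PCI_CAP_ID_EXP)
                          continue;
                  uint16_t devcap = cfg[pos + 4] | cfg[pos + 5] << 8;
                  uint16_t devctl = cfg[pos + 8] | cfg[pos + 9] << 8;

                  printf("MPS_cap=%u MPS=%u MRRS=%u\n",
                         128 << (devcap & 7),           /* DevCap[2:0]   */
                         128 << ((devctl >> 5) & 7),    /* DevCtl[7:5]   */
                         128 << ((devctl >> 12) & 7));  /* DevCtl[14:12] */
                  return 0;
          }
          fprintf(stderr, "no PCIe capability found\n");
          return 1;
  })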

There is no need to limit MRRS because there are no other devices
under this Root Port.  We can't tell from lspci what the maximum MRRS
is, but if the NICs support MRRS=4096, we could use that.
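
(If a specific driver knows its device tolerates large reads, the
in-kernel knob is pcie_set_readrq().  A sketch of what that could
look like from a driver's probe path -- the function name here is
made up, and note that when booted with pcie_bus_perf the core
currently clamps the value back down to the MPS, which is exactly
the behavior in question:

  #include <linux/pci.h>

  /* Hypothetical driver hook: raise MRRS to 4096 for a device we
   * know can handle large read requests.  Whether 4096 is safe is
   * a device-specific guarantee, not discoverable from config
   * space. */
  static void example_tune_mrrs(struct pci_dev *pdev)
  {
          int rc = pcie_set_readrq(pdev, 4096);

          if (rc)
                  dev_warn(&pdev->dev, "can't set MRRS=4096: %d\n", rc);
  })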

  ae:00.0 Root Port to af     MPS_cap=256 MPS=256 MRRS=128
  ae:01.0 Root Port to b0     MPS_cap=256 MPS=256 MRRS=128
  af:00.0 SSD Endpoint        MPS_cap=256 MPS=256 MRRS=256
  b0:00.0 SSD Endpoint        MPS_cap=256 MPS=256 MRRS=256

Everything here supports MPS=256, so that's what we should use.

As with the NICs, there's no need to limit MRRS here.  We don't know
what MRRS the endpoints support, but PERFORMANCE mode is being
unnecessarily conservative when it limits MRRS to the MPS (256 in this
case).
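
(Roughly, the clamp looks like this -- a simplified paraphrase of
the PERFORMANCE-mode logic in drivers/pci/probe.c, not the literal
code:

  /* Simplified sketch: in PERFORMANCE mode, every device's MRRS is
   * forced down to its MPS, so with MPS=256 nothing on this system
   * ever issues a read request larger than 256 bytes. */
  static void pcie_perf_clamp_sketch(struct pci_dev *dev)
  {
          int mps = pcie_get_mps(dev);         /* 256 on this box */

          if (pcie_get_readrq(dev) > mps)      /* MRRS > MPS?  clamp */
                  pcie_set_readrq(dev, mps);
  })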

It's not as simple as just removing the "set MRRS=MPS" part because we
do rely on that in some topologies.  There are interesting topologies
where we *don't* need it, like both of the ones above, and we need to
make Linux smart enough to recognize them.
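
(One possible shape for that check -- entirely hypothetical, and the
helper name is made up: if every device below a root port already
runs at the same MPS, clamping MRRS buys nothing, so skip it.  A
real version would also have to cope with hotplug, since a device
added later can change the answer:

  #include <linux/pci.h>

  /* Hypothetical: return true if the MRRS clamp is unnecessary
   * because everything below this root port shares one MPS. */
  static bool root_port_mps_uniform(struct pci_dev *root)
  {
          struct pci_dev *dev = NULL;
          int mps = pcie_get_mps(root);

          for_each_pci_dev(dev) {
                  if (pcie_find_root_port(dev) != root)
                          continue;
                  if (pcie_get_mps(dev) != mps) {
                          pci_dev_put(dev);
                          return false;   /* mixed MPS: keep the clamp */
                  }
          }
          return true;    /* uniform MPS: leave MRRS alone */
  })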


