Query about setting MaxPayloadSize for the best performance

Hi,
This is about configuring the MPS (MaxPayloadSize) of devices in a PCIe hierarchy during enumeration. I would like to highlight how the MPS configured throughout a hierarchy depends on the MPS value already present in the root port's DevCtl register at enumeration time.

Initial root port's configuration (CASE-A):
The root port is capable of both 128-byte and 256-byte MPS, but its MPS is set to 128 bytes in its DevCtl register.

Observation:
    CASE-A-1:
When a device that supports 256-byte MPS is connected directly to this root port, only 128-byte MPS is set in the device's DevCtl register, even though both the root port and the endpoint support 256 bytes. This results in sub-optimal performance.
    CASE-A-2:
When a device that supports only 128-byte MPS is connected to the root port through a PCIe switch (which supports up to 256-byte MPS), the entire hierarchy is configured for 128-byte MPS, as modeled below.
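To make the clamping concrete, here is a minimal userspace C model of the behavior observed above (the function and variable names are mine, not the kernel's): each device's programmed MPS ends up as the minimum of its parent's programmed MPS and the device's own capability.

    #include <stdio.h>

    /* Model of the observed default policy: a child's programmed MPS
     * never exceeds its parent's programmed MPS. Values are in bytes. */
    static int child_mps(int parent_devctl_mps, int child_cap_mps)
    {
            return parent_devctl_mps < child_cap_mps ?
                   parent_devctl_mps : child_cap_mps;
    }

    int main(void)
    {
            int sw_mps;

            /* CASE-A-1: RP DevCtl = 128, endpoint capable of 256 -> 128. */
            printf("CASE-A-1 endpoint MPS: %d\n", child_mps(128, 256));

            /* CASE-A-2: RP DevCtl = 128, switch capable of 256,
             * endpoint capable of 128 -> whole hierarchy at 128. */
            sw_mps = child_mps(128, 256);
            printf("CASE-A-2 switch MPS:   %d\n", sw_mps);
            printf("CASE-A-2 endpoint MPS: %d\n", child_mps(sw_mps, 128));
            return 0;
    }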

Initial root port's configuration (CASE-B):
The root port is capable of both 128-byte and 256-byte MPS, but its MPS is set to 256 bytes in its DevCtl register.

Observation:
    CASE-B-1:
When a device that supports 256-byte MPS is connected directly to this root port, 256-byte MPS is set in its DevCtl register. This gives the expected performance.
    CASE-B-2:
When a device that supports only 128-byte MPS is connected to the root port through a PCIe switch (which supports up to 256-byte MPS), the rest of the hierarchy gets configured for 256-byte MPS; but since the endpoint behind the switch supports only 128 bytes, the endpoint's functionality gets broken (a receiver must treat a TLP whose payload exceeds its programmed MPS as a Malformed TLP).
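Extending the same model to CASE-B shows why the endpoint breaks: once the root port is programmed for 256-byte payloads, the 128-byte-only endpoint behind the switch can be handed TLPs it must reject. A sketch of that check (again, the names are mine):

    /* A hierarchy is safe for an endpoint only if no upstream sender is
     * programmed with an MPS larger than the endpoint can accept;
     * otherwise oversized TLPs arrive as Malformed TLPs. */
    static int endpoint_is_safe(int upstream_devctl_mps, int endpoint_cap_mps)
    {
            return upstream_devctl_mps <= endpoint_cap_mps;
    }

    /* CASE-B-2: endpoint_is_safe(256, 128) == 0 -> endpoint broken. */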

One way to address this issue is to leave the root port's DevCtl at 128-byte MPS and append 'pci=pcie_bus_perf' to the kernel command line. This changes both the MPS and the MRRS (Max Read Request Size) of the devices in the hierarchy such that the system offers the best performance.
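My understanding of the 'pcie_bus_perf' policy, as a simplified sketch rather than the exact code in drivers/pci/probe.c: each device's MPS is set to the minimum of its own capability and its parent's MPS, and its MRRS is clamped to its own MPS, so read completions returned to a device never exceed what it can accept even when MPS differs across the hierarchy.

    struct pcie_cfg {
            int mps;        /* Max Payload Size, bytes */
            int mrrs;       /* Max Read Request Size, bytes */
    };

    /* Simplified model of the "perf" policy (my naming): MPS follows the
     * parent, and MRRS is clamped to the device's own MPS so that read
     * completions stay within its payload limit. */
    static struct pcie_cfg perf_configure(int parent_mps, int cap_mps)
    {
            struct pcie_cfg c;

            c.mps = parent_mps < cap_mps ? parent_mps : cap_mps;
            c.mrrs = c.mps;
            return c;
    }

If my reading is right, in CASE-B-2 the endpoint would then end up at MPS = MRRS = 128 bytes while the rest of the hierarchy stays at 256 bytes, recovering performance without the malformed-TLP hazard for the endpoint's own read traffic.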

I'm not fully aware of the history of the various 'pcie_bus_xxxx' options, but since I see no downside to making 'pcie_bus_perf' the default, I'm wondering why we can't use 'pcie_bus_perf' as the default configuration instead of the existing default, which has the issues mentioned in CASE-A-1 and CASE-B-2.

Thanks,
Vidya Sagar


