Re: Query about setting MaxPayloadSize for the best performance

On Thu, Jun 22, 2023 at 11:04:03AM +0530, Vidya Sagar wrote:
> 
> Hi,
> This is about configuring the MPS (MaxPayloadSize) in the PCIe hierarchy
> during enumeration. I would like to highlight how the MPS configured across
> a hierarchy depends on the MPS value already present in the root port's
> DevCtl register.
> 
> Initial root port's configuration (CASE-A):
>     Root port is capable of 128 & 256 MPS, but its MPS is set to "128" in
> its DevCtl register.
> 
> Observation:
>     CASE-A-1:
>         When a device with support for 256MPS is connected directly to this
> root port, only 128MPS is set in its DevCtl register (though both root port
> and endpoint support 256MPS). This results in sub-optimal performance.

Yes.  We could set both to 256.  But I think there's a potential issue
for peer-to-peer transactions, isn't there?  E.g., 

  00:01.0 Root Port to [bus 01], MPSS=256 MPS=256
  00:02.0 Root Port to [bus 02], MPSS=256 MPS=128
  01:00.0 Endpoint, MPSS=256 MPS=256
  02:00.0 Endpoint, MPSS=128 MPS=128

02:00.0 is only capable of MPS=128, so it and 00:02.0 are set to that.
Now 01:00.0 does a DMA write to  02.00.0 and sends a single 256-byte
TLP.
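
If we did bump both Root Ports to 256 wherever possible, a peer-to-peer
writer would still have to respect the smallest MPS programmed anywhere
along both paths.  Roughly (a sketch, not existing kernel code;
pcie_get_mps() and pci_upstream_bridge() are the real helpers, the
function itself is made up for illustration):

  #include <linux/pci.h>

  /* Smallest MPS programmed on the path from dev up to its root port */
  static int path_min_mps(struct pci_dev *dev)
  {
          int mps = pcie_get_mps(dev);

          while ((dev = pci_upstream_bridge(dev)))
                  mps = min(mps, pcie_get_mps(dev));

          return mps;
  }

For the 01:00.0 -> 02:00.0 write above, the writer is limited by the
minimum over both upstream paths (128 here), which the per-link MPS
programming we do today doesn't capture.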

>     CASE-A-2:
>         When a device with only support for 128MPS is connected to the root
> port through a PCIe switch (that has support for up to 256MPS), the entire
> hierarchy is configured for 128MPS.
> 
> Initial root port's configuration (CASE-B):
>     Root port is capable of 128 & 256 MPS, but its MPS is set to "256" in
> its DevCtl register.
> 
> Observation:
>     CASE-B-1:
>         When a device with support for 256MPS is connected directly to this
> root port, 256MPS is set in its DevCtl register. This gives the expected
> performance.
>     CASE-B-2:
>         When a device with only support for 128MPS is connected to the root
> port through a PCIe switch (that has support for up to 256MPS), the rest of
> the hierarchy gets configured for 256MPS, but since the endpoint behind the
> switch supports only 128MPS, that endpoint's functionality is broken.
> 
> One solution to address this issue is to leave the DevCtl of RP at 128MPS
> and append 'pci=pcie_bus_perf' to the kernel command line. This would change
> both MPS and MRRS (Max Read Request Size) in the hierarchy in such a way
> that the system offers the best performance.
> 
> I'm not fully aware of the history of the various 'pcie_bus_xxxx' options,
> but since there is no downside to making 'pcie_bus_perf' the default, I'm
> wondering why we can't just use 'pcie_bus_perf' as the default configuration
> instead of the existing default, which has the issues mentioned in CASE-A-1
> and CASE-B-2.

I'm definitely not happy with our MPS configuration.  I guess I should
be glad that at least we don't have build-time config options for it.

Anyway, it would be great if somebody would clean it up and make it
more sensible.
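
For context, what 'pcie_bus_perf' boils down to is roughly the
following (an illustrative sketch only, not the actual
drivers/pci/probe.c code; pcie_get_mps(), pcie_set_mps(),
pcie_set_readrq() and dev->pcie_mpss are the real interfaces, the
function itself is made up):

  #include <linux/pci.h>

  /*
   * Cap the device's MPS capability at whatever its upstream bridge is
   * currently programmed to, then set MRRS to match.
   */
  static void set_perf_mps_mrrs(struct pci_dev *dev)
  {
          struct pci_dev *bridge = pci_upstream_bridge(dev);
          int mps = 128 << dev->pcie_mpss;   /* device capability, bytes */

          if (bridge)
                  mps = min(mps, pcie_get_mps(bridge));

          pcie_set_mps(dev, mps);
          pcie_set_readrq(dev, mps);
  }

That covers a single hierarchy; peer-to-peer is another matter.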

The peer-to-peer thing is a big issue because I don't think the RC is
required or maybe even allowed to split TLPs to accommodate devices
with smaller MPS.

I'm not even sure the RC is required to route TLPs between Root Ports
(see pci_p2pdma_whitelist[]), and I don't think that functionality is
discoverable either directly from PCIe or via a firmware interface.

But there's some new stuff in PCIe r6.0 related to MPS; I haven't
really dug into it, but maybe some of that can help?

You've likely seen "Understanding Performance of PCI Express Systems"
by Jason Lawley, from Oct 28, 2014 [1].  It's a good analysis of MPS,
MRRS, RCB, etc.

Bjorn

[1] https://docs.xilinx.com/v/u/en-US/wp350


