Re: about mpss with pcie_bus_perf

On Wed, Jan 15, 2014 at 10:18 AM, Jon Mason <jdmason@xxxxxxxx> wrote:
>
> If all inter-device communication is removed, then the only
> communication is CPU, Endpoint, and switches in-between.  Going from
> CPU to Endpoint, the MPS is actually going to be the Cache Line size.
> Since the Cache line size is 64B on x86 and most other architectures,
> there is no worry that the endpoint will get a PCIe packet larger than
> the MPS.  Also, using the MRRS to clamp down the endpoint to the MPS
> of the switches should ensure no reads larger than the MPS.  Going
> from Endpoint to CPU, we must ensure that all switches have an MPSS
> large enough for any device under them.  If not, then we must clamp
> down the Endpoint MPS.
>
> If all of this works, then we can ensure a much larger MPS for all of
> the PCI devices under a switch and not be bound by the smallest MPSS
> of an endpoint on the switch.
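
If I read this right, the clamping being described is roughly the following.
This is only a sketch of my own, using the kernel's pcie_get_mps()/pcie_set_mps()
and pcie_get_readrq()/pcie_set_readrq() helpers; it is not the in-tree
pcie_bus_configure_settings() code:

#include <linux/pci.h>

/* Illustrative only: clamp a device's MPS to its upstream port, and its
 * MRRS to the resulting MPS, as I understand the "perf" policy above. */
static void clamp_mps_and_mrrs(struct pci_dev *dev)
{
        struct pci_dev *bridge = pci_upstream_bridge(dev);
        int mps;

        if (!pci_is_pcie(dev) || !bridge)
                return;

        /* Never advertise a payload larger than the upstream port accepts. */
        mps = pcie_get_mps(dev);
        if (mps > pcie_get_mps(bridge)) {
                mps = pcie_get_mps(bridge);
                pcie_set_mps(dev, mps);
        }

        /* Clamp MRRS to MPS so completions for this device's reads never
         * arrive larger than its own MPS. */
        if (pcie_get_readrq(dev) > mps)
                pcie_set_readrq(dev, mps);
}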

That said, I'm confused by the statement quoted above.

On a system with PCIe hotplug support, the BIOS sets the root port MPS to 256
and the endpoint MPS to 256 during POST.
Then, after hot-removing and hot-adding the card, the new endpoint comes up
with the default MPS of 128.
When the driver for that end device loads, we get lots of AER errors about
malformed TLPs, etc.

After changing both the root port MPS and the end device MPS to 128, the AER
errors go away.
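
What I end up checking by hand is essentially the following. Again just a
minimal sketch assuming pcie_get_mps() and pci_upstream_bridge(); the helper
name is my own:

#include <linux/pci.h>

/* Hypothetical helper: warn when a (hot-added) device's MPS ends up
 * smaller than its upstream port's MPS, i.e. the upstream port may send
 * TLPs that the endpoint will flag as malformed. */
static void warn_mps_mismatch(struct pci_dev *dev)
{
        struct pci_dev *bridge = pci_upstream_bridge(dev);

        if (!pci_is_pcie(dev) || !bridge)
                return;

        if (pcie_get_mps(bridge) > pcie_get_mps(dev))
                dev_warn(&dev->dev, "MPS %d smaller than upstream MPS %d\n",
                         pcie_get_mps(dev), pcie_get_mps(bridge));
}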

So the question is: should a root port MPS of 256 together with an end device
MPS of 128 work without any problem?

Also, I have noticed that the BIOS sets the MPS to 256 and the MRRS to 512,
so what is the reason for the current pcie_bus_perf code to limit the MRRS to
the MPS?

Thanks

Yinghai



