pcie_bandwidth_available and USB4/TBT3

Hi,

Recently we've been looking at various issues that come up when an AMD dGPU is put into a TBT3 eGPU enclosure. Several of them are root-caused to bugs in the amdgpu driver that we'll fix there.

However, one thing that stands out is a performance problem: the cards are artificially limited to a lower speed than necessary.

The amdgpu driver uses pcie_bandwidth_available() to decide what values to use for the platform speed cap and bandwidth cap. When the dGPU is behind a TBT3/USB4 tunnel, the speed it returns for the platform cap is always 2.5 GT/s.
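For context, here is a minimal sketch of the call pattern involved (not the actual amdgpu code; example_report_pcie_caps() is a made-up helper). pcie_bandwidth_available() walks every link from the endpoint up to the root complex and reports the slowest one, so the tunneled link's advertised 2.5 GT/s always wins:

#include <linux/pci.h>

/* Hypothetical helper, only to illustrate the call pattern. */
static void example_report_pcie_caps(struct pci_dev *pdev)
{
        enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
        enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
        struct pci_dev *limiting = NULL;
        u32 bw_mbps;

        /* Walks up the hierarchy, returns the bandwidth of the slowest link. */
        bw_mbps = pcie_bandwidth_available(pdev, &limiting, &speed, &width);

        pci_info(pdev, "available bandwidth %u Mb/s, limited by %s\n",
                 bw_mbps, limiting ? pci_name(limiting) : "unknown");
}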

This happens because the USB4 spec explicitly states [1]:

---
11.2.1 PCIe Physical Layer Logical Sub-block
The Logical sub-block shall update the PCIe configuration registers with the following
characteristics:
• PCIe Gen 1 protocol behavior.
• Max Link Speed field in the Link Capabilities Register set to 0001b (data rate of 2.5 GT/s
only).
Note: These settings do not represent actual throughput. Throughput is implementation specific
and based on the USB4 Fabric performance.
---
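The effect is visible directly on the root port backing the tunnel: its Link Capabilities register advertises Gen 1 only. As a hedged illustration (example_link_caps_gen1_only() is hypothetical, not existing kernel code), a check for exactly what the spec text above mandates would look like:

#include <linux/pci.h>

/* Hypothetical check: does this port advertise 2.5 GT/s only? */
static bool example_link_caps_gen1_only(struct pci_dev *port)
{
        u32 lnkcap = 0;

        pcie_capability_read_dword(port, PCI_EXP_LNKCAP, &lnkcap);

        /* Max Link Speed field set to 0001b -> data rate of 2.5 GT/s only. */
        return (lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB;
}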

So I wanted to ask: is it better to
1. Catch this case in pcie_bandwidth_available() and skip PCIe root ports associated with a USB4 controller, or

2. Special-case the use of pcie_bandwidth_available() in amdgpu so that any limiting devices are ignored when dev_is_removable() is true for the dGPU? (A rough sketch of this option follows below.)
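To make option 2 concrete, here is a rough sketch of what the amdgpu-side workaround might look like (hypothetical code, not a proposed patch; example_get_platform_caps() is a made-up name), falling back to the endpoint's own capabilities via pcie_get_speed_cap()/pcie_get_width_cap() when the device sits on a removable link. Option 1 would instead be an equivalent check inside the pcie_bandwidth_available() walk, where the open question is how to reliably identify a tunneled port:

#include <linux/device.h>
#include <linux/pci.h>

/* Hypothetical amdgpu-side workaround (option 2). */
static void example_get_platform_caps(struct pci_dev *pdev,
                                      enum pci_bus_speed *speed,
                                      enum pcie_link_width *width)
{
        if (dev_is_removable(&pdev->dev)) {
                /* Tunneled/removable link: trust the endpoint's own caps. */
                *speed = pcie_get_speed_cap(pdev);
                *width = pcie_get_width_cap(pdev);
                return;
        }

        /* Normal case: honor the slowest link in the upstream chain. */
        pcie_bandwidth_available(pdev, NULL, speed, width);
}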

I'm personally inclined to think it's better to fix this in pcie_bandwidth_available(), because papering over it in amdgpu means that discovering the real upper bound isn't possible if you have to ignore the return value of pcie_bandwidth_available().

Thanks,

[1] https://www.usb.org/document-library/usb4r-specification-v20
    USB4 v2 with Errata and ECN through June 2023 - CLEAN, p. 710


