On Fri, Jun 28, 2019 at 12:23:09PM +0200, Timur Kristóf wrote:
> Hi guys,
>
> I use an AMD RX 570 in a Thunderbolt 3 external GPU box.
> dmesg gives me the following message:
>
> pci 0000:3a:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:04:04.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
>
> Here is a tree view of the devices as well as the output of lspci -vvv:
> https://pastebin.com/CSsS2akZ
>
> The critical path of the device tree looks like this:
>
> 00:1c.4 Intel Corporation Sunrise Point-LP PCI Express Root Port #5 (rev f1)
> 03:00.0 Intel Corporation JHL6540 Thunderbolt 3 Bridge (C step) [Alpine Ridge 4C 2016] (rev 02)
> 04:04.0 Intel Corporation JHL6540 Thunderbolt 3 Bridge (C step) [Alpine Ridge 4C 2016] (rev 02)
> 3a:00.0 Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015]
> 3b:01.0 Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015]
> 3c:00.0 Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev ef)
>
> Here is the weird part:
>
> According to lspci, all of these devices report in their LnkCap that
> they support 8 GT/s, except 04:04.0 and 3a:00.0, which say they only
> support 2.5 GT/s. Contrary to lspci, sysfs says that both of them are
> capable of 8 GT/s as well:
> "/sys/bus/pci/devices/0000:04:04.0/max_link_speed" and
> "/sys/bus/pci/devices/0000:3a:00.0/max_link_speed" both read 8 GT/s.
> There seems to be a discrepancy between what lspci thinks and what
> the devices are actually capable of.
>
> Questions:
>
> 1. Why are there four bridge devices? 04:00.0, 04:01.0 and 04:02.0 look
> superfluous to me and nothing is connected to them. It gives me the
> feeling that the TB3 driver creates four devices with 2.5 GT/s each,
> instead of one device that can do the full 8 GT/s.

Because it is a standard PCIe switch with one upstream port and n
downstream ports.

> 2. Why are some of the bridge devices only capable of 2.5 GT/s
> according to lspci?

You need to talk to the lspci maintainer.

> 3. Is it possible to manually set them to 8 GT/s?

No idea. Are you actually seeing a performance issue because of this,
or are you just curious?
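FWIW, lspci prints the Max Link Speed field of LnkCap, while recent
kernels derive max_link_speed in sysfs from the Supported Link Speeds
vector in LnkCap2 when that register is implemented, which could
explain why the two disagree. If you want to see which register says
what, below is an untested sketch using libpci (pciutils); the
04:04.0 address is just the bridge from your tree, and reading config
space needs root. Build with: cc lnkcap.c -o lnkcap -lpci

  #include <stdio.h>
  #include <pci/pci.h>

  int main(void)
  {
          struct pci_access *pacc = pci_alloc();
          struct pci_dev *dev;
          struct pci_cap *cap;
          u32 lnkcap, lnkcap2;

          pci_init(pacc);
          dev = pci_get_dev(pacc, 0, 0x04, 0x04, 0);  /* 0000:04:04.0 */
          pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CAPS);

          cap = pci_find_cap(dev, PCI_CAP_ID_EXP, PCI_CAP_NORMAL);
          if (!cap) {
                  fprintf(stderr, "no PCIe capability\n");
                  return 1;
          }

          /* LnkCap, offset 0x0c: bits 3:0 = Max Link Speed; this is
           * what lspci prints (1 = 2.5 GT/s, 2 = 5 GT/s, 3 = 8 GT/s) */
          lnkcap = pci_read_long(dev, cap->addr + PCI_EXP_LNKCAP);

          /* LnkCap2, offset 0x2c: bits 7:1 = Supported Link Speeds
           * vector, which the kernel prefers when it is non-zero */
          lnkcap2 = pci_read_long(dev, cap->addr + 0x2c);

          printf("LnkCap  = %08x (Max Link Speed = %u)\n",
                 lnkcap, lnkcap & 0xf);
          printf("LnkCap2 = %08x (Supported Link Speeds = 0x%02x)\n",
                 lnkcap2, (lnkcap2 >> 1) & 0x7f);

          pci_free_dev(dev);
          pci_cleanup(pacc);
          return 0;
  }

If LnkCap really reads 2.5 GT/s while LnkCap2 advertises 8 GT/s, the
mismatch is in what the hardware reports, not in lspci itself.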