> Well that's the extension PCIe downstream port. The other one is
> 04:01.0.
>
> Typically 04:00.0 and 04:00.2 are used to connect TBT (05:00.0) and
> xHCI (39:00.0), but in your case you don't seem to have any USB 3
> devices connected to that, so it is not present. If you plug in a
> USB-C device (non-TBT) you should see the host router xHCI appearing
> as well.
>
> This is pretty standard topology.
>
> > Not sure I understand correctly, are you saying that TB3 can do
> > 40 Gbit/s even though the kernel thinks it can only do 8 Gbit/s?
>
> Yes, the PCIe switch upstream port (3a:00.0) is connected back to the
> host router over a virtual Thunderbolt 40 Gb/s link, so the PCIe gen1
> speeds it reports do not really matter here (same goes for the
> downstream port).
>
> The topology looks like below, if I got it right from the lspci
> output:
>
> 00:1c.4 (root port) 8 GT/s x 4
>    ^
>    | real PCIe link
>    v
> 03:00.0 (upstream port) 8 GT/s x 4
> 04:04.0 (downstream port) 2.5 GT/s x 4
>    ^
>    | virtual link 40 Gb/s
>    v
> 3a:00.0 (upstream port) 2.5 GT/s x 4
> 3b:01.0 (downstream port) 8 GT/s x 4
>    ^
>    | real PCIe link
>    v
> 3c:00.0 (eGPU) 8 GT/s x 4
>
> In other words, all the real PCIe links run at the full 8 GT/s x 4,
> which is what is expected, I think.

It makes sense now. This is hands down the best explanation I've seen
of how TB3 hangs together. Thanks for taking the time to explain it!

I have two more questions:

1. What is the best way to test that the virtual link is indeed
   capable of 40 Gbit/s? So far I've been unable to figure out how to
   measure its maximum throughput.

2. Why can the game only utilize at most 2.5 Gbit/s when it gets
   bottlenecked? The same problem is not present on a desktop computer
   with a "normal" PCIe port.

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
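
P.S. For anyone following along: the "8 Gbit/s" figure the kernel
reports follows directly from the advertised link parameters in the
diagram above. A short sketch of the arithmetic (the helper function
name is mine; the encoding overheads are the standard PCIe gen1/gen3
line codes):

```python
# Effective PCIe bandwidth after line-code overhead (a sketch).
# Gen1/gen2 use 8b/10b encoding, gen3 uses 128b/130b.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int, encoding: str) -> float:
    """Usable bandwidth in Gbit/s for a link at gt_per_s GT/s per lane."""
    payload_fraction = {"8b/10b": 8 / 10, "128b/130b": 128 / 130}[encoding]
    return gt_per_s * lanes * payload_fraction

# The tunnelled ports advertise gen1 2.5 GT/s x 4:
gen1 = pcie_bandwidth_gbps(2.5, 4, "8b/10b")      # 8.0 Gbit/s
# The real links at both ends run gen3 8 GT/s x 4:
gen3 = pcie_bandwidth_gbps(8.0, 4, "128b/130b")   # ~31.5 Gbit/s

print(f"gen1 x4: {gen1:.1f} Gbit/s")
print(f"gen3 x4: {gen3:.1f} Gbit/s")
```

So the 8 Gbit/s the kernel shows is just the nominal gen1 x4 number of
the virtual ports, while the real gen3 x4 links (~31.5 Gbit/s usable)
fit comfortably inside the 40 Gb/s Thunderbolt link.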