> > Can you point me to the place where amdgpu decides the PCIe link
> > speed? I'd like to try to tweak it a little bit to see if that
> > helps at all.
>
> I'm not sure offhand, Alex or anyone?

Thus far, I started by looking at how the pp_dpm_pcie sysfs interface
works, and found smu7_hwmgr, which seems to be the only hwmgr that
actually outputs anything on PP_PCIE:

https://github.com/torvalds/linux/blob/a2d635decbfa9c1e4ae15cb05b68b2559f7f827c/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c#L4462

However, its output is definitely incorrect. It tells me that the
supported PCIe modes are:

cat /sys/class/drm/card1/device/pp_dpm_pcie
0: 2.5GT/s, x8
1: 8.0GT/s, x16

It allows me to change between these two modes, but the change doesn't
seem to have any actual effect on the transfer speeds.

Neither of those modes actually makes sense: amdgpu doesn't seem to be
aware that it is running on an x4 link. In fact, the
smu7_get_current_pcie_lane_number function even has an assertion:

PP_ASSERT_WITH_CODE((7 >= link_width),

On the other hand:

cat /sys/class/drm/card1/device/current_link_width
4

So I don't understand how this can work with PCIe x4 at all; why
doesn't that assertion get triggered on my system?

> > Out of curiosity, is there a performance decrease with small
> > transfers on a "normal" PCIe port too, or is this specific to TB3?
>
> It's not TB3 specific. With a "normal" 8 GT/s x16 port, I get between
> ~256 MB/s for 4 KB transfers and ~12 GB/s for 4 MB transfers (even
> larger transfers seem slightly slower again). This also looks
> consistent with your measurements in that the practical limit seems
> to be around 75% of the theoretical bandwidth.

Sounds like your idea of optimizing Mesa to use larger transfers is a
good one, then.

Best regards,
Tim
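
P.S. For what it's worth, the PCI core exposes the negotiated link
parameters in the same sysfs directory as pp_dpm_pcie, so both views
can be dumped side by side to show the mismatch. A minimal sketch,
assuming the card is still at /sys/class/drm/card1/device (the
current_link_* and max_link_* attributes come from the PCI core, not
from amdgpu):

/* Dump the PCIe link attributes the PCI core reports for the GPU. */
#include <stdio.h>
#include <string.h>

static void dump_attr(const char *dir, const char *name)
{
	char path[256], buf[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	f = fopen(path, "r");
	if (!f) {
		printf("%-20s <unreadable>\n", name);
		return;
	}
	if (fgets(buf, sizeof(buf), f)) {
		buf[strcspn(buf, "\n")] = '\0';	/* strip trailing newline */
		printf("%-20s %s\n", name, buf);
	}
	fclose(f);
}

int main(void)
{
	/* Adjust if the eGPU is not card1 on your system. */
	const char *dev = "/sys/class/drm/card1/device";

	dump_attr(dev, "current_link_speed");
	dump_attr(dev, "current_link_width");
	dump_attr(dev, "max_link_speed");
	dump_attr(dev, "max_link_width");
	return 0;
}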
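
P.P.S. Regarding the ~75% figure, here is a rough back-of-the-envelope
calculation that seems consistent with those numbers, assuming 128b/130b
encoding at 8 GT/s and ignoring TLP/DLLP and flow-control overhead:

/* Back-of-the-envelope raw PCIe payload bandwidth (gen3, 128b/130b). */
#include <stdio.h>

static double raw_gb_per_s(double gt_per_s, int lanes)
{
	/* GT/s -> payload bits/s via 128/130 encoding, then -> GB/s */
	return gt_per_s * 1e9 * 128.0 / 130.0 / 8.0 * lanes / 1e9;
}

int main(void)
{
	double x16 = raw_gb_per_s(8.0, 16);	/* "normal" desktop slot */
	double x4  = raw_gb_per_s(8.0, 4);	/* TB3-limited x4 link   */

	printf("8 GT/s x16: %.2f GB/s raw, 12 GB/s measured = %.0f%%\n",
	       x16, 12.0 / x16 * 100.0);
	printf("8 GT/s x4:  %.2f GB/s raw, 75%% of that = %.2f GB/s\n",
	       x4, 0.75 * x4);
	return 0;
}

That puts an 8 GT/s x16 link at roughly 15.75 GB/s of raw payload
bandwidth, so ~12 GB/s is about 76% of it, and the same ratio on the
x4 TB3 link would land somewhere around 3 GB/s.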