On Fri, 2019-07-05 at 09:36 -0400, Alex Deucher wrote:
> On Thu, Jul 4, 2019 at 6:55 AM Michel Dänzer <michel@xxxxxxxxxxx> wrote:
> > On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
> > > > > There may be other factors, yes. I can't offer a good
> > > > > explanation on what exactly is happening, but it's pretty
> > > > > clear that amdgpu can't take full advantage of the TB3 link,
> > > > > so it seemed like a good idea to start investigating this
> > > > > first.
> > > >
> > > > Yeah, actually it would be consistent with ~16-32 KB granularity
> > > > transfers based on your measurements above, which is plausible.
> > > > So making sure that the driver doesn't artificially limit the
> > > > PCIe bandwidth might indeed help.
> > >
> > > Can you point me to the place where amdgpu decides the PCIe link
> > > speed? I'd like to try to tweak it a little bit to see if that
> > > helps at all.
> >
> > I'm not sure offhand, Alex or anyone?
>
> amdgpu_device_get_pcie_info() in amdgpu_device.c.

Hi Alex,

I took a look at amdgpu_device_get_pcie_info() and found that it uses
pcie_bandwidth_available to determine the capabilities of the PCIe port.
However, pcie_bandwidth_available gives you only the current bandwidth as
set by the PCIe link status register, not the maximum capability.

I think something along these lines would fix it:
https://pastebin.com/LscEMKMc

It seems to me that the PCIe capabilities are only used in a few places in
the code, so this patch fixes pp_dpm_pcie. However, it doesn't affect the
actual performance.

What do you think?

Best regards,
Tim

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
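
To make the distinction above concrete: pcie_bandwidth_available() walks up
the hierarchy via pci_upstream_bridge() and reads the currently negotiated
speed/width from each Link Status register, so on a Thunderbolt chain it
reflects the slowest currently trained link in the path, whereas
pcie_get_speed_cap() and pcie_get_width_cap() return the maximum advertised
in a device's Link Capabilities registers. Below is a minimal sketch of that
difference (it is not the patch linked above, and the helper name
report_pcie_caps is made up for illustration):

/*
 * Illustrative sketch only: contrast the *negotiated* link state in the
 * Link Status register (what pcie_bandwidth_available() looks at) with
 * the *maximum* capability advertised in Link Capabilities.
 */
#include <linux/pci.h>

static void report_pcie_caps(struct pci_dev *pdev)
{
	enum pci_bus_speed cap_speed;
	enum pcie_link_width cap_width;
	u16 lnksta;

	/* Currently negotiated speed/width, from PCI_EXP_LNKSTA. */
	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
	pci_info(pdev, "current: speed code %#x, width x%d\n",
		 lnksta & PCI_EXP_LNKSTA_CLS,
		 (lnksta & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT);

	/* Maximum the device advertises, from PCI_EXP_LNKCAP/LNKCAP2. */
	cap_speed = pcie_get_speed_cap(pdev);
	cap_width = pcie_get_width_cap(pdev);
	pci_info(pdev, "capable: speed enum %d, width x%d\n",
		 cap_speed, cap_width);
}

Calling such a helper on the GPU and on its upstream bridge would show
whether the trained link or the advertised capability is what limits the
figures used by amdgpu_device_get_pcie_info().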