On Thu, Jul 4, 2019 at 6:55 AM Michel Dänzer <michel@xxxxxxxxxxx> wrote:
>
> On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
> >
> >>> There may be other factors, yes. I can't offer a good explanation
> >>> on what exactly is happening, but it's pretty clear that amdgpu
> >>> can't take full advantage of the TB3 link, so it seemed like a
> >>> good idea to start investigating this first.
> >>
> >> Yeah, actually it would be consistent with ~16-32 KB granularity
> >> transfers based on your measurements above, which is plausible. So
> >> making sure that the driver doesn't artificially limit the PCIe
> >> bandwidth might indeed help.
> >
> > Can you point me to the place where amdgpu decides the PCIe link
> > speed? I'd like to try to tweak it a little bit to see if that
> > helps at all.
>
> I'm not sure offhand, Alex or anyone?

amdgpu_device_get_pcie_info() in amdgpu_device.c.

> >> OTOH this also indicates a similar potential for improvement by
> >> using larger transfers in Mesa and/or the kernel.
> >
> > Yes, that sounds like it would be worth looking into.
> >
> > Out of curiosity, is there a performance decrease with small
> > transfers on a "normal" PCIe port too, or is this specific to TB3?
>
> It's not TB3 specific. With a "normal" 8 GT/s x16 port, I get between
> ~256 MB/s for 4 KB transfers and ~12 GB/s for 4 MB transfers (even
> larger transfers seem slightly slower again). This also looks
> consistent with your measurements in that the practical limit seems
> to be around 75% of the theoretical bandwidth.
>
>
> --
> Earthling Michel Dänzer               |              https://www.amd.com
> Libre software enthusiast             |             Mesa and X developer
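
As for finding and tweaking the detected link speed: before patching
amdgpu_device_get_pcie_info(), it can be worth checking what the PCI core
actually negotiated for the GPU (and for any Thunderbolt/bridge ports on
the way there), e.g. via the current_link_speed / max_link_speed and
current_link_width / max_link_width sysfs attributes. For quick
experiments the amdgpu.pcie_gen_cap / amdgpu.pcie_lane_cap module
parameters can also override the autodetected caps without editing the
code, though their accepted values are bitmask-style, so check amd_pcie.h
before using them. A hypothetical little helper for the sysfs side (the
name and layout are mine, not anything from amdgpu):

/*
 * Hypothetical helper: print negotiated vs. maximum PCIe link
 * parameters for a device, e.g.
 *   ./pcie_link /sys/bus/pci/devices/0000:01:00.0
 * These sysfs attributes come from the PCI core, independent of amdgpu.
 */
#include <stdio.h>

static void show(const char *dev, const char *attr)
{
	char path[256], buf[64];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dev, attr);
	f = fopen(path, "r");
	if (f && fgets(buf, sizeof(buf), f))
		printf("%-20s %s", attr, buf);
	if (f)
		fclose(f);
}

int main(int argc, char **argv)
{
	static const char *attrs[] = {
		"current_link_speed", "max_link_speed",
		"current_link_width", "max_link_width",
	};

	if (argc < 2) {
		fprintf(stderr, "usage: %s /sys/bus/pci/devices/<bdf>\n",
			argv[0]);
		return 1;
	}
	for (unsigned int i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		show(argv[1], attrs[i]);
	return 0;
}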
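
As a sanity check on the ~75% figure: an 8 GT/s x16 link carries
8 GT/s * 16 lanes = 128 Gb/s of raw symbols, i.e. 16 GB/s, or roughly
15.75 GB/s of payload after 128b/130b encoding. The measured ~12 GB/s for
4 MB transfers is about 12 / 15.75 ≈ 76% of that, with the remainder
presumably going to TLP/DLLP header and flow-control overhead plus
whatever the copy path on the host side costs.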
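
On the transfer-size dependence itself, here is a minimal sketch of the
kind of sweep that produces numbers like ~256 MB/s at 4 KB vs. ~12 GB/s
at 4 MB. As written it just times memcpy() into ordinary malloc'd memory,
so it only exercises system RAM; reproducing the PCIe numbers above would
require the destination to be a CPU mapping of a GTT/VRAM buffer (e.g.
obtained through libdrm amdgpu), which is assumed and not shown here:

/*
 * Hypothetical transfer-size sweep: copy the same total volume in
 * chunks of 4 KB .. 4 MB and report MB/s per chunk size.  The
 * destination is plain malloc'd memory here; swap it for a mapped
 * GPU buffer to measure actual PCIe throughput.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
	const size_t total = 256UL << 20;	/* 256 MB per chunk size */
	char *src = malloc(total);
	char *dst = malloc(total);		/* stand-in for a mapped BO */

	if (!src || !dst)
		return 1;
	memset(src, 0xaa, total);

	for (size_t chunk = 4096; chunk <= (4UL << 20); chunk *= 2) {
		double t0 = now_sec();

		for (size_t off = 0; off < total; off += chunk)
			memcpy(dst + off, src + off, chunk);

		printf("%7zu KB chunks: %8.1f MB/s\n",
		       chunk >> 10, (total >> 20) / (now_sec() - t0));
	}

	free(src);
	free(dst);
	return 0;
}

The interesting part is only the loop structure: the same total volume is
moved with varying chunk sizes, so per-transfer overhead shows up directly
as lost bandwidth at the small end of the sweep.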