> > Okay, so I booted my system with amdgpu.benchmark=3.
> > You can find the full dmesg log here: https://pastebin.com/zN9FYGw4
> >
> > The result is between 1-5 Gbit/s depending on the transfer size
> > (the higher the better), which corresponds to neither the 8 Gbit/s
> > that the kernel thinks it is limited to, nor the 20 Gbit/s which I
> > measured earlier with pcie_bw.
>
> 5 Gbit/s throughput could be consistent with 8 Gbit/s theoretical
> bandwidth, due to various overheads.

Okay, that's good to know.

> > Since pcie_bw only shows the maximum PCIe packet size (and not the
> > actual size), could it be that it's so inaccurate that the 20 Gbit/s
> > is a fluke?
>
> Seems likely or at least plausible.

Thanks for the confirmation. It also looks like throughput is lowest
with small transfers, which I assume is what Mesa is doing for this
game.

> > There may be other factors, yes. I can't offer a good explanation
> > on what exactly is happening, but it's pretty clear that amdgpu
> > can't take full advantage of the TB3 link, so it seemed like a good
> > idea to start investigating this first.
>
> Yeah, actually it would be consistent with ~16-32 KB granularity
> transfers based on your measurements above, which is plausible. So
> making sure that the driver doesn't artificially limit the PCIe
> bandwidth might indeed help.

Can you point me to the place where amdgpu decides the PCIe link speed?
I'd like to try to tweak it a little bit to see if that helps at all.

> OTOH this also indicates a similar potential for improvement by using
> larger transfers in Mesa and/or the kernel.

Yes, that sounds like it would be worth looking into. Out of curiosity,
is there a performance decrease with small transfers on a "normal" PCIe
port too, or is this specific to TB3?

Best regards,
Tim
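
For context on why the pcie_bw reading can overshoot: the file only exposes packet counts together with the maximum payload size, so any bandwidth derived from it implicitly treats every packet as if it carried a full payload. Below is a minimal user-space sketch of that calculation. The sysfs path, the three-field layout (received count, sent count, max payload in bytes), and the idea that the counts cover one driver sampling interval are assumptions for illustration, not details confirmed in this thread.

    /*
     * Sketch: derive an upper-bound traffic estimate from amdgpu's
     * pcie_bw sysfs file.
     *
     * Assumptions (not confirmed in this thread):
     *   - the file lives at /sys/class/drm/card0/device/pcie_bw
     *   - it holds three fields: packets received, packets sent,
     *     and the maximum payload size in bytes
     *   - the counts cover one sampling interval of the driver
     *
     * Because only the *maximum* payload size is exposed, multiplying it
     * by the packet counts gives an upper bound rather than the actual
     * volume, which is why a ~20 Gbit/s reading can coexist with ~5
     * Gbit/s of real throughput.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *path = "/sys/class/drm/card0/device/pcie_bw";
        unsigned long long received = 0, sent = 0;
        unsigned int max_payload = 0;
        FILE *f;

        f = fopen(path, "r");
        if (!f) {
            perror("fopen pcie_bw");
            return EXIT_FAILURE;
        }
        if (fscanf(f, "%llu %llu %u", &received, &sent, &max_payload) != 3) {
            fprintf(stderr, "unexpected pcie_bw format\n");
            fclose(f);
            return EXIT_FAILURE;
        }
        fclose(f);

        /* Assume every packet carried a full max-size payload. */
        unsigned long long upper_bound_bytes =
            (received + sent) * (unsigned long long)max_payload;

        printf("packets: %llu rx, %llu tx, max payload %u bytes\n",
               received, sent, max_payload);
        printf("upper-bound traffic this interval: %llu bytes (%.2f Gbit)\n",
               upper_bound_bytes, upper_bound_bytes * 8.0 / 1e9);
        printf("divide by the driver's sampling interval to get a rate\n");
        return 0;
    }

Comparing an upper bound like this against what amdgpu.benchmark=3 reports for the same workload would give a rough feel for how much of the gap is just measurement slack rather than a real link limitation.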