On Wed, 20 Sep 2017 16:26:25 +0200
Auger Eric <eric.auger@xxxxxxxxxx> wrote:

> Hi Sinan,
>
> On 20/09/2017 15:01, Sinan Kaya wrote:
> > On 9/20/2017 3:59 AM, Auger Eric wrote:
> >>> My impression is that MRRS is predominantly device and driver
> >>> dependent, not topology dependent.  A device can send a read request
> >>> with a size larger than MPS, which implies that the device supplying
> >>> the read data would split it into multiple TLPs based on MPS.
> >> I read that too on the net. However, in 6.3.4.1 (3.0, Nov 10), Rules
> >> for SW Configuration, it is written:
> >> "Software must set Max_Read_Request_Size of an isochronous-configured
> >> device with a value that does not exceed the Max_Payload_Size set for
> >> the device."
> >>
> >> But on the other hand some drivers are setting the MRRS directly
> >> without further checking the MPS?
> >
> > We discussed this at LPC. MRRS and MPS are two independent concepts and
> > are not related to each other under normal circumstances.
> >
> > The only valid criterion is that MRRS needs to be a multiple of MPS.
> >
> > https://linuxplumbersconf.org/2017/ocw//system/presentations/4732/original/crs.pdf
> >
> > Because completions are required to be a minimum of MPS size. If
> > MRRS > MPS, the read response is sent as multiple completions.
>
> With that patch, you can end up with MRRS < MPS. Do I understand
> correctly that this is an issue?

My impression is that the issue would be inefficiency.  There should be
nothing functionally wrong with a read request smaller than MPS, but
we're not "filling" the TLP as much as the topology allows.  Is that
your understanding as well, Sinan?

It seems like it would be relatively easy to virtualize MRRS like we do
the FLR bit, i.e., evaluate the change the user is trying to make and
update MRRS with pci-core callbacks, capping the lower bound at MPS for
efficiency.  It's possible we'll encounter devices that really do need
an MRRS lower than MPS, but it seems unlikely since this is the setting
the PCI core seems to make by default (MRRS == MPS).  Thanks,

Alex
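
A minimal sketch of what that clamp might look like, built on the
existing pcie_get_mps() and pcie_set_readrq() helpers in the PCI core;
the wrapper name pcie_clamp_readrq_to_mps() and where it would be called
from are hypothetical, not part of any posted patch:

#include <linux/pci.h>

/*
 * Hypothetical helper: apply a caller-requested MRRS, but never let it
 * drop below the device's current MPS, so read completions can still
 * carry a full payload.  pcie_get_mps() and pcie_set_readrq() are the
 * existing PCI core interfaces; only this wrapper is illustrative.
 */
static int pcie_clamp_readrq_to_mps(struct pci_dev *dev, int requested_rq)
{
	int mps = pcie_get_mps(dev);	/* current Max_Payload_Size, in bytes */

	/*
	 * MRRS < MPS is legal but does not fill the TLP as much as the
	 * topology allows, so raise the request up to MPS.
	 */
	if (requested_rq < mps)
		requested_rq = mps;

	/* pcie_set_readrq() rejects values outside 128..4096 bytes */
	return pcie_set_readrq(dev, requested_rq);
}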