Hi Bjorn et al.,

Bjorn Helgaas <helgaas@xxxxxxxxxx> writes:

> I'd really like to have a single implementation of whatever quirk
> works around this. I don't think we should have multiple copies just
> because we assume some firmware takes care of part of this for us.

I second this. I think it should work this way:

MPS applies to whole buses, i.e., packets are not fragmented by PCIe
bridges, and MPS works for both RX and TX. This means the CPU MPS (if
any) must be enforced (set in the registers) over the whole bus
(system). The system may use different (smaller) MPSes for different
devices, though. Perhaps the user should be able to ask for a smaller
value (currently this is done using enum pcie_bus_config_types).

MRRS can be larger than MPS (a single read causes multiple response
packets) and can differ between devices. Still, all devices must be
programmed with at most the system's limit (or less if the user
wishes).

IMHO this means we should use max_mps and max_mrrs for the whole
system, and then e.g. a platform PCIe controller driver or a device
driver could lower them, triggering writes to the PCI config registers
down the buses. Individual devices/drivers could use smaller values
without changing the global variables. A rough sketch of what I mean
is at the end of this mail.

> I have the vague impression that this issue is related to an arm64 AXI
> bus property [2] or maybe a DesignWare controller property [3], so
> this might affect several PCIe controller drivers.

[2] seems to be a bug in a specific TI SoC and revision only.

[3] It seems all DWC PCIe hosts (and maybe devices) need a limit (two
limits), e.g.:
- i.MX6 needs MRRS = 512 (or lower at the user's discretion) and MPS = 128.
- CNS3xxx needs MRRS = MPS = 128, IIRC.

-- 
Krzysztof "Chris" Hałasa

Sieć Badawcza Łukasiewicz
Przemysłowy Instytut Automatyki i Pomiarów PIAP
Al. Jerozolimskie 202, 02-486 Warszawa
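
PS: a minimal, untested sketch of the idea. pcie_bus_max_mps,
pcie_bus_max_mrrs and pcie_clamp_bus_limits() are made-up names for
the proposed system-wide limits, not existing symbols; only the
pcie_get/set_mps(), pcie_get/set_readrq() and pci_walk_bus() helpers
are real.

#include <linux/kernel.h>
#include <linux/pci.h>

/* Proposed system-wide limits (hypothetical, not in the tree today). */
static int pcie_bus_max_mps = 4096;
static int pcie_bus_max_mrrs = 4096;

static int pcie_apply_limits(struct pci_dev *dev, void *data)
{
	/* Never program a device above the system-wide limits. */
	if (pcie_get_mps(dev) > pcie_bus_max_mps)
		pcie_set_mps(dev, pcie_bus_max_mps);
	if (pcie_get_readrq(dev) > pcie_bus_max_mrrs)
		pcie_set_readrq(dev, pcie_bus_max_mrrs);
	return 0;
}

/*
 * A host controller driver (or a device driver) lowers the limits and
 * the core re-walks the bus, rewriting the config registers.
 */
static void pcie_clamp_bus_limits(struct pci_bus *bus, int mps, int mrrs)
{
	pcie_bus_max_mps = min(pcie_bus_max_mps, mps);
	pcie_bus_max_mrrs = min(pcie_bus_max_mrrs, mrrs);
	pci_walk_bus(bus, pcie_apply_limits, NULL);
}

With something like this, the i.MX6 host driver would just call
pcie_clamp_bus_limits(bus, 128, 512) after enumeration, and CNS3xxx
would use 128/128.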