Hi Alex,

Alexandru Elisei <alexandru.elisei@xxxxxxx> writes:

> Hi Punit,
>
> Thank you for working on this!
>
> On 6/15/21 12:04 AM, Punit Agrawal wrote:
>> Alexandru and Qu reported this resource allocation failure on
>> ROCKPro64 v2 and ROCK Pi 4B, both based on the RK3399:
>>
>>   pci_bus 0000:00: root bus resource [mem 0xfa000000-0xfbdfffff 64bit]
>>   pci 0000:00:00.0: PCI bridge to [bus 01]
>>   pci 0000:00:00.0: BAR 14: no space for [mem size 0x00100000]
>>   pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
>>
>> "BAR 14" is the PCI bridge's 32-bit non-prefetchable window, and our
>> PCI allocation code isn't smart enough to allocate it in a host
>> bridge window marked as 64-bit, even though this should work fine.
>>
>> A DT host bridge description includes the windows from the CPU
>> address space to the PCI bus space. On a few architectures
>> (microblaze, powerpc, sparc), the DT may also describe PCI devices
>> themselves, including their BARs.
>>
>> Before 9d57e61bf723 ("of/pci: Add IORESOURCE_MEM_64 to resource
>> flags for 64-bit memory addresses"), of_bus_pci_get_flags() ignored
>> the fact that some DT addresses described 64-bit windows and BARs.
>> That was a problem because the virtio virtual NIC has a 32-bit BAR
>> and a 64-bit BAR, and the driver couldn't distinguish them.
>>
>> 9d57e61bf723 set IORESOURCE_MEM_64 for those 64-bit DT ranges, which
>> fixed the virtio driver. But it also set IORESOURCE_MEM_64 for host
>> bridge windows, which exposed the fact that the PCI allocator isn't
>> smart enough to put 32-bit resources in those 64-bit windows.
>>
>> Clear IORESOURCE_MEM_64 from host bridge windows since we don't need
>> that information.
>
> I've tested the patch on my rockpro64.
> Kernel built from tag v5.13-rc6:
>
> [    0.345676] pci 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth, limited by
> 2.5 GT/s PCIe x4 link at 0000:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
> [    0.359300] pci_bus 0000:01: busn_res: [bus 01-1f] end is updated to 01
> [    0.359343] pci 0000:00:00.0: BAR 14: no space for [mem size 0x00100000]
> [    0.359365] pci 0000:00:00.0: BAR 14: failed to assign [mem size 0x00100000]
> [    0.359387] pci 0000:01:00.0: BAR 0: no space for [mem size 0x00004000 64bit]
> [    0.359407] pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x00004000 64bit]
> [    0.359428] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    0.359862] pcieport 0000:00:00.0: PME: Signaling with IRQ 76
> [    0.360190] pcieport 0000:00:00.0: AER: enabled with IRQ 76
>
> Kernel built from tag v5.13-rc6 with this patch applied:
>
> [    0.345434] pci 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth, limited by
> 2.5 GT/s PCIe x4 link at 0000:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
> [    0.359081] pci_bus 0000:01: busn_res: [bus 01-1f] end is updated to 01
> [    0.359128] pci 0000:00:00.0: BAR 14: assigned [mem 0xfa000000-0xfa0fffff]
> [    0.359155] pci 0000:01:00.0: BAR 0: assigned [mem 0xfa000000-0xfa003fff 64bit]
> [    0.359217] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    0.359239] pci 0000:00:00.0: bridge window [mem 0xfa000000-0xfa0fffff]
> [    0.359422] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
> [    0.359687] pcieport 0000:00:00.0: PME: Signaling with IRQ 76
> [    0.360001] pcieport 0000:00:00.0: AER: enabled with IRQ 76
>
> And the NVME on the PCIE expansion card works as expected:
>
> Tested-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>

Thanks a lot for the retest and the detailed logs.

Punit

[...]