Hi Arnd,

On Friday, January 20, 2023 at 15:00:35 CET, Arnd Bergmann wrote:
> On Fri, Jan 20, 2023, at 13:43, Alexander Stein wrote:
> > On Thursday, January 19, 2023 at 17:07:30 CET, Arnd Bergmann wrote:
> >> On Thu, Jan 19, 2023, at 16:27, Alexander Stein wrote:
> >>
> >> In particular, it seems that the memory map of the PCI address
> >> spaces is configurable, but only within that area you listed.
> >> I see that section "28.4.2 PEX register descriptions" does list
> >> a 64-bit prefetchable address space in addition to the 32-bit
> >> non-prefetchable memory space, but the 64-bit space is not
> >> listed in the DT. It would be a good idea to configure that
> >> as well in order for devices to work that need a larger BAR,
> >> such as a GPU, but it wouldn't help with fitting the PCIe
> >> into non-LPAE 32-bit CPU address space.
> >
> > I'm not sure I can follow you here. Do you have some keywords for
> > what's missing there?
>
> Prefetchable_Memory_Base_Register, section 28.4.2.20 in the
> document you pointed me to.
>
> PCIe addressing is usually split up into I/O space (kilobytes of
> registers), non-prefetchable memory space (megabytes of registers
> and memory) and prefetchable 64-bit memory space (gigabytes of
> device memory).
>
> The prefetchable space is indicated by bit '30' of the first
> word in the ranges property, so if that is configured, you
> would see a third line there starting with 0xc2000000 or
> 0x42000000. Without this, PCIe cards that have prefetchable
> BARs fall back to the non-prefetchable one, which may be
> too small or less efficient. This is usually only relevant
> for framebuffers on a GPU, but there are probably other
> devices as well.

Thanks for the explanation, although I still lack deeper knowledge of how to
configure PCIe properly. I tried adding the following line to the 'ranges'
property:

> <0xc2000000 0x0 0x20000000 0x40 0x20000000 0x0 0x20000000>, /* prefetchable memory */

which was taken from the old example in
Documentation/devicetree/bindings/pci/layerscape-pci.txt, removed in commit
a3b18f5f1d42e ("dt-bindings: pci: layerscape-pci: define AER/PME interrupts",
2022-03-11). But I couldn't detect any difference; maybe that is just due to
the PCIe devices I have available.
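To make sure I did not misread you, the complete 'ranges' I tested then looks
roughly like this (untested sketch: only the middle entry is my addition, and
the I/O and non-prefetchable entries are written from memory after the old
layerscape-pci.txt example, so the exact values may differ from what is in
ls1021a.dtsi):

	ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000>, /* downstream I/O */
		 <0xc2000000 0x0 0x20000000 0x40 0x20000000 0x0 0x20000000>, /* prefetchable memory */
		 <0x82000000 0x0 0x40000000 0x40 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */

If I understand your explanation correctly, the 0xc2000000 in the first cell
is just the usual 32-bit memory space code 0x82000000 with the prefetchable
bit 30 set, so prefetchable BARs should end up in that window.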
> >> In the datasheet I also see that the chip theoretically
> >> supports 8GB of DDR4, which would definitely put it beyond
> >> the highmem limit, even with the 4G:4G memory split. Do you
> >> know if there are ls1021a devices with more than 4GB of
> >> installed memory?
> >
> > Where did you find those 8GB? Section 16.2 mentions it supports up to 4
> > banks/chip-selects, which I would assume allows much more. Also the
> > memory map has a DRAM region 2 for memory region 2-32GB. But yes, this
> > exceeds 32-bit addressing. I'm not aware of ls1021 devices with more
> > than 4GB of memory. Our modules only support up to 2GB.
>
> I think I misread this, as section 2.2 mentions you can have
> four chip-selects that are limited to either 2GB or 8GB each,
> for a theoretical maximum of 26GB. As long as the practical
> limit is 4GB or less, I think we're fine here. Linus Walleij
> is working on a prototype for changing the memory
> management code to handle up to 4GB of contiguous RAM without
> highmem, which will become relevant in the future as we get
> rid of highmem support. On this chip, the first 4GB of
> installed memory are not contiguous in the physical address
> space, so this will need another set of patches on top.
>
> As long as you only use the first chip-select with 2GB
> of installed memory, very little will change for you.
>
> It might be worthwhile to check if your system works
> correctly with ARM_LPAE=y, VMSPLIT_2G=y and HIGHMEM=n,
> which should be the best configuration for your system
> anyway and will keep working after highmem gets removed.

Thanks for that hint. With this configuration (the fragment I changed is
appended below) the board still seems to run as it should.

Best regards,
Alexander
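For reference, the defconfig delta for that test was nothing more than the
three options you listed (just a sketch; the rest of our defconfig is
unchanged):

	CONFIG_ARM_LPAE=y
	CONFIG_VMSPLIT_2G=y
	# CONFIG_HIGHMEM is not set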