On Thu, Oct 17, 2019 at 12:33:24AM +0200, Marek Vasut wrote:
> On 10/17/19 12:26 AM, Rob Herring wrote:
> [...]
> >>>> You can have multiple non-contiguous DRAM banks for example. And an
> >>>> entry for SRAM optionally. Each DRAM bank and/or the SRAM should have a
> >>>> separate dma-ranges entry, right ?
> >>>
> >>> Not necessarily. We really only want to define the minimum we have to.
> >>> The ideal system is no dma-ranges. Is each bank at a different
> >>> relative position compared to the CPU's view of the system. That would
> >>> seem doubtful for just DRAM banks. Perhaps DRAM and SRAM could change.
> >>
> >> Is that a question ? Anyway, yes, there is a bit of DRAM below the 32bit
> >> boundary and some more above the 32bit boundary. These two banks don't
> >> need to be contiguous. And then you could add the SRAM into the mix.
> >
> > Contiguous is irrelevant. My question was, in more specific terms: is
> > (bank1 addr - bank0 addr) different for the CPU's view (i.e. phys addr)
> > vs. the PCI host view (i.e. bus addr)? If not, then that is 1 translation
> > and 1 dma-ranges entry.
>
> I don't think it's different in that aspect. Except the bus has this
> 32bit limitation, where it only sees a subset of the DRAM.
>
> Why should the DMA ranges incorrectly cover also the DRAM which is not
> present ?

I think this is where there is a difference in understanding. If I
understand correctly, the job of the dma-ranges property isn't to
describe *what* ranges the PCI device can access - it's there to
describe *how*, i.e. the mapping between PCI bus addresses and
CPU-visible memory.

The dma-ranges property is a side-effect of how the buses are wired up
between the CPU and the PCI controller - and so it doesn't matter what
is or isn't on those buses. It's the job of other parts of the system
to ensure that PCI devices are told the correct addresses to write to,
e.g. the enumerating software using a valid CPU-visible address,
correctly translated for the PCI device's view (ATS etc.), and of any
IOMMU to enforce that.

It sounds like there is a 1:1 mapping between CPU and PCI - in which
case there isn't a reason for a dma-ranges.
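As a purely illustrative sketch (the node name and addresses are made
up, and it assumes a host bridge with #address-cells = <3>,
#size-cells = <2> and a parent bus using two address cells), a single
fixed translation needs only a single entry, no matter how much of
that window is actually populated with DRAM:

        pcie@fe000000 {
                #address-cells = <3>;
                #size-cells = <2>;
                /*
                 * One translation: PCI bus address 0x0 maps to CPU
                 * address 0x40000000, window size 2 GiB. Which parts
                 * of that window hold real DRAM is not this
                 * property's concern.
                 */
                dma-ranges = <0x42000000 0x0 0x00000000
                              0x0 0x40000000 0x0 0x80000000>;
        };

What matters is the offset between the two views and the size of the
window, not what is populated inside it.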
Thanks,

Andrew Murray

>
> >>> I suppose if your intent is to use inbound windows as a poor man's
> >>> IOMMU to prevent accesses to the holes, then yes you would list them
> >>> out. But I think that's wrong and difficult to maintain. You'd also
> >>> need to deal with reserved-memory regions too.
> >>
> >> What's the problem with that? The bootloader has all that information
> >> and can patch the DT correctly. In fact, in my specific case, I have a
> >> platform which can be populated with differently sized DRAM, so the
> >> holes are also dynamically calculated; there is no one DT then, the
> >> bootloader is responsible for generating the dma-ranges accordingly.
> >
> > The problems are that it doesn't work:
> >
> > Your dma-mask and offset are not going to be correct.
> >
> > You are running out of inbound windows. Your patch does nothing to
> > solve that. The solution would be merging multiple dma-ranges entries
> > into a single inbound window. We'd have to do that both for dma-mask
> > and inbound windows. The former would also have to figure out which
> > entries apply to setting up dma-mask. I'm simply suggesting just do
> > that up front and avoid any pointless splits.
>
> But then the PCI device can trigger a transaction to non-existent DRAM
> and cause undefined behavior. Surely we do not want that ?
>
> > You are setting up random inbound windows. The bootloader can't assume
> > what order the OS parses dma-ranges, and the OS can't assume what
> > order the bootloader writes the entries.
>
> But the OS can assume the ranges are correct and cover only valid
> memory, right ? That is, memory which the PCI controller can safely
> access.
>
> --
> Best regards,
> Marek Vasut