On Wed, Oct 16, 2019 at 1:18 PM Marek Vasut <marek.vasut@xxxxxxxxx> wrote:
>
> On 10/16/19 8:12 PM, Rob Herring wrote:
> > On Wed, Oct 16, 2019 at 11:18 AM Lorenzo Pieralisi
> > <lorenzo.pieralisi@xxxxxxx> wrote:
> >>
> >> [+RobH, Robin]
> >>
> >> On Wed, Oct 16, 2019 at 05:29:50PM +0200, Marek Vasut wrote:
> >>
> >> [...]
> >>
> >>>>> The firmware provides all the ranges which are available and usable,
> >>>>> that's the hardware description and that should be in the DT.
> >>>>
> >>>> If the HW (given that those dma-ranges are declared for the PCI host
> >>>> controller) can't be programmed to enable those DMA ranges - those
> >>>> ranges are neither available nor usable, ergo DT is broken.
> >>>
> >>> The hardware can be programmed to enable those DMA ranges, just not all
> >>> of them at the same time.
> >>
> >> Ok, we are down to DT bindings interpretation then.
> >>
> >>> It's not the job of the bootloader to guess which ranges the next
> >>> stage might like best.
> >>
> >> By the time this series:
> >>
> >> https://patchwork.ozlabs.org/user/todo/linux-pci/?series=132419
> >>
> >> is merged, your policy will require the host controller driver to
> >> remove the DMA ranges that could not be programmed into the inbound
> >> address decoders from the dma_ranges list, otherwise things will
> >> fall apart.
> >
> > I don't think the above series has much impact on this. It's my
> > other series dealing with dma masks that's relevant, because for dma
> > masks we only ever look at the first dma-ranges entry. We either have
> > to support multiple addresses and sizes per device (the only way to
> > really support any possible dma-ranges), merge entries into a single
> > offset/mask, or have some way to select which range entry to use.
> >
> > So things are broken to some extent regardless, unless MAX_NR_INBOUND_MAPS == 1.
> >
> >>>>> The firmware cannot decide the policy for the next stage (Linux in
> >>>>> this case) on which ranges are better to use for Linux and which are
> >>>>> less good. Linux can then decide which ranges are best suited for it
> >>>>> and ignore the other ones.
> >>>>
> >>>> dma-ranges is a property that is used by other kernel subsystems,
> >>>> e.g. the IOMMU, other than the RCAR host controller driver. The policy,
> >>>> provided there is one, should be shared across them. You can't leave a
> >>>> PCI host controller half-programmed and expect other subsystems (that
> >>>> *expect* those ranges to be DMA'ble) to work.
> >>>>
> >>>> I reiterate my point: if firmware is broken it is better to fail
> >>>> the probe rather than limp on hoping that things will keep on
> >>>> working.
> >>>
> >>> But the firmware is not broken?
> >>
> >> See above, it depends on how the dma-ranges property is interpreted;
> >> hopefully we can reach consensus in this thread. I won't merge a patch
> >> that can backfire later unless we all agree that what it does is
> >> correct.
> >
> > Defining more dma-ranges entries than the h/w has inbound windows for
> > sounds like a broken DT to me.
> >
> > What exactly does dma-ranges contain in this case? I'm not really
> > visualizing how different clients would pick different dma-ranges
> > entries.
>
> You can have multiple non-contiguous DRAM banks, for example, and
> optionally an entry for SRAM. Each DRAM bank and/or the SRAM should
> have a separate dma-ranges entry, right?

Not necessarily. We really only want to define the minimum we have to.
The ideal system is no dma-ranges.
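To make that concrete, the sort of multi-entry dma-ranges being
discussed would look roughly like the sketch below. The node name,
window addresses and sizes (including the SRAM window) are made up
purely for illustration and are not taken from any real board DT;
only the standard PCI cell layout (3 child-address cells, 2
parent-address cells, 2 size cells) is assumed:

	pcie@fe000000 {
		...
		/* One entry per DRAM bank plus one for SRAM, i.e.
		 * potentially more entries than the controller has
		 * inbound address-decode windows.
		 */
		dma-ranges = <0x42000000 0x0 0x40000000  0x0 0x40000000  0x0 0x40000000>,
			     <0x43000000 0x4 0x80000000  0x4 0x80000000  0x0 0x80000000>,
			     <0x42000000 0x0 0xe6300000  0x0 0xe6300000  0x0 0x00100000>;
	};

Each entry is (PCI address, CPU address, size), and with identity
mappings like these the only real information is which CPU address
windows are reachable - which is exactly the part Linux currently
only consumes from the first entry when computing DMA masks.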
Is each bank at a different relative position compared to the CPU's view of the system? That seems doubtful for just DRAM banks; perhaps DRAM vs. SRAM could differ. I suppose if your intent is to use the inbound windows as a poor man's IOMMU to prevent accesses to the holes, then yes, you would list them out. But I think that's wrong and difficult to maintain. You'd also need to deal with reserved-memory regions too.

Rob