Re: [PATCH V3 2/3] PCI: rcar: Do not abort on too many inbound dma-ranges

On Wed, Oct 16, 2019 at 4:16 PM Marek Vasut <marek.vasut@xxxxxxxxx> wrote:
>
> On 10/16/19 10:25 PM, Rob Herring wrote:
> [...]
> >>>>>>> The firmware cannot decide the policy for the next stage (Linux in
> >>>>>>> this case) on which ranges are better to use for Linux and which are
> >>>>>>> less good. Linux can then decide which ranges are best suited for it
> >>>>>>> and ignore the other ones.
> >>>>>>
> >>>>>> dma-ranges is a property that is used by other kernel subsystems,
> >>>>>> e.g. the IOMMU, not just the RCAR host controller driver. The
> >>>>>> policy, provided there is one, should be shared across them. You
> >>>>>> can't leave a PCI host controller half-programmed and expect other
> >>>>>> subsystems (that *expect* those ranges to be DMA'able) to work.
> >>>>>>
> >>>>>> I reiterate my point: if firmware is broken it is better to fail
> >>>>>> the probe rather than limp on hoping that things will keep on
> >>>>>> working.
> >>>>>
> >>>>> But the firmware is not broken?
> >>>>
> >>>> See above, it depends on how the dma-ranges property is interpreted;
> >>>> hopefully we can reach consensus in this thread. I won't merge a patch
> >>>> that can backfire later unless we all agree that what it does is
> >>>> correct.
> >>>
> >>> Defining more dma-ranges entries than the h/w has inbound windows for
> >>> sounds like a broken DT to me.
> >>>
> >>> What exactly does dma-ranges contain in this case? I'm not really
> >>> visualizing how different clients would pick different dma-ranges
> >>> entries.
> >>
> >> You can have multiple non-contiguous DRAM banks, for example, and
> >> optionally an entry for SRAM. Each DRAM bank and/or the SRAM should
> >> have a separate dma-ranges entry, right?
> >
> > Not necessarily. We really only want to define the minimum we have to.
> > The ideal system is no dma-ranges. Is each bank at a different
> > relative position compared to the CPU's view of the system? That seems
> > doubtful for just DRAM banks. Perhaps DRAM and SRAM could differ.
>
> Is that a question? Anyway, yes, there is a bit of DRAM below the 32-bit
> boundary and some more above the 32-bit boundary. These two banks don't
> need to be contiguous. And then you could add the SRAM into the mix.

Contiguity is irrelevant. My question, in more specific terms, is: is
(bank1 addr - bank0 addr) different for the CPU's view (i.e. phys addr)
vs. the PCI host's view (i.e. bus addr)? If not, then that is one
translation and one dma-ranges entry.
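
As a rough illustration (self-contained C with made-up bank addresses,
not the actual kernel code path), the check boils down to whether the
bank-to-bank distance is the same in both address spaces:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct bank {
	uint64_t cpu_addr;	/* CPU (physical) address of the bank */
	uint64_t pci_addr;	/* PCI bus address of the same bank */
};

/*
 * If (bank1 - bank0) is the same distance in the CPU view and in the
 * PCI host view, both banks share one translation, and a single
 * dma-ranges entry covers them.
 */
static bool single_dma_range(const struct bank *b0, const struct bank *b1)
{
	return (b1->cpu_addr - b0->cpu_addr) == (b1->pci_addr - b0->pci_addr);
}

int main(void)
{
	/* Hypothetical layout: DRAM below and above the 32-bit boundary. */
	struct bank bank0 = { .cpu_addr = 0x48000000ULL,  .pci_addr = 0x48000000ULL };
	struct bank bank1 = { .cpu_addr = 0x600000000ULL, .pci_addr = 0x600000000ULL };

	printf("one dma-ranges entry is enough: %s\n",
	       single_dma_range(&bank0, &bank1) ? "yes" : "no");
	return 0;
}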

> > I suppose if your intent is to use inbound windows as a poor man's
> > IOMMU to prevent accesses to the holes, then yes, you would list them
> > out. But I think that's wrong and difficult to maintain. You'd also
> > need to deal with reserved-memory regions.
>
> What's the problem with that? The bootloader has all that information
> and can patch the DT correctly. In fact, in my specific case, I have a
> platform which can be populated with differently sized DRAM, so the
> holes are also calculated dynamically; there is no single DT then, and
> the bootloader is responsible for generating the dma-ranges accordingly.

The problem is that it doesn't work:

Your dma-mask and offset are not going to be correct.

You are running out of inbound windows, and your patch does nothing to
solve that. The solution would be merging multiple dma-ranges entries
into a single inbound window. We'd have to do that both for the dma-mask
and for the inbound windows; the former would also have to figure out
which entries apply to setting up the dma-mask. I'm simply suggesting
doing that up front and avoiding any pointless splits.
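
To be concrete about the merging, here is a rough standalone sketch
(plain C, hypothetical struct and addresses, not the driver code path):
entries that share the same CPU-to-PCI offset collapse into one wider
window, holes included, and the highest merged bus address is what the
dma-mask would be derived from:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct dma_range {
	uint64_t pci_addr;	/* bus address */
	uint64_t cpu_addr;	/* CPU physical address */
	uint64_t size;
};

/*
 * Merge entries that use the same CPU->PCI translation into one window
 * covering their whole span (holes included). Entries are assumed to be
 * sorted by cpu_addr. Returns the number of windows actually needed.
 */
static size_t merge_dma_ranges(struct dma_range *r, size_t n)
{
	size_t out = 0;

	if (!n)
		return 0;

	for (size_t i = 1; i < n; i++) {
		struct dma_range *prev = &r[out];

		if (r[i].cpu_addr - r[i].pci_addr ==
		    prev->cpu_addr - prev->pci_addr) {
			/* Same translation: widen the previous window. */
			uint64_t end = r[i].cpu_addr + r[i].size;

			if (end > prev->cpu_addr + prev->size)
				prev->size = end - prev->cpu_addr;
		} else {
			r[++out] = r[i];	/* different offset: new window */
		}
	}
	return out + 1;
}

int main(void)
{
	/* Hypothetical: two DRAM banks with an identity mapping. */
	struct dma_range r[] = {
		{ .pci_addr = 0x48000000ULL,  .cpu_addr = 0x48000000ULL,  .size = 0x38000000ULL },
		{ .pci_addr = 0x600000000ULL, .cpu_addr = 0x600000000ULL, .size = 0x100000000ULL },
	};
	size_t n = merge_dma_ranges(r, sizeof(r) / sizeof(r[0]));

	/* The dma-mask would have to cover the highest merged bus address. */
	printf("windows needed: %zu, highest bus address: 0x%llx\n",
	       n, (unsigned long long)(r[n - 1].pci_addr + r[n - 1].size - 1));
	return 0;
}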

You are setting up random inbound windows. The bootloader can't assume
the order in which the OS parses dma-ranges, and the OS can't assume the
order in which the bootloader writes the entries.

Rob


