Re: [PATCH V3 2/3] PCI: rcar: Do not abort on too many inbound dma-ranges

On Fri, Oct 18, 2019 at 12:17:38PM +0200, Geert Uytterhoeven wrote:
> Hi Andrew,
> 
> On Fri, Oct 18, 2019 at 12:07 PM Andrew Murray <andrew.murray@xxxxxxx> wrote:
> > On Thu, Oct 17, 2019 at 12:33:24AM +0200, Marek Vasut wrote:
> > > On 10/17/19 12:26 AM, Rob Herring wrote:
> > > [...]
> > > >>>> You can have multiple non-contiguous DRAM banks, for example, and
> > > >>>> optionally an entry for SRAM. Each DRAM bank and/or the SRAM should
> > > >>>> have a separate dma-ranges entry, right?
> > > >>>
> > > >>> Not necessarily. We really only want to define the minimum we have to.
> > > >>> The ideal system has no dma-ranges. Is each bank at a different
> > > >>> relative position compared to the CPU's view of the system? That
> > > >>> seems doubtful for just DRAM banks. Perhaps DRAM and SRAM could differ.
> > > >>
> > > >> Is that a question? Anyway, yes, there is a bit of DRAM below the
> > > >> 32-bit boundary and some more above the 32-bit boundary. These two
> > > >> banks don't need to be contiguous. And then you could add the SRAM
> > > >> into the mix.
> > > >
> > > > Contiguity is irrelevant. My question, in more specific terms, is:
> > > > is (bank1 addr - bank0 addr) different for the CPU's view (i.e. phys
> > > > addr) vs. the PCI host's view (i.e. bus addr)? If not, then that is
> > > > 1 translation and 1 dma-ranges entry.
> > >
> > > I don't think it's different in that aspect. Except the bus has this
> > > 32-bit limitation, where it only sees a subset of the DRAM.
> > >
> > > Why should the dma-ranges incorrectly also cover the DRAM which is
> > > not present?
> >
> > I think this is where there is a difference in understanding.
> >
> > If I understand correctly, the job of the dma-ranges property isn't to
> > describe *what* ranges the PCI device can access - it's there to describe
> > *how*, i.e. the mapping between PCI and CPU-visible memory.
> >
> > The dma-ranges property is a side effect of how the buses are wired up
> > between the CPU and the PCI controller - and so it doesn't matter what
> > is or isn't on those buses.
> >
> > It's the job of other parts of the system to ensure that PCI devices
> > are told the correct addresses to write to - e.g. the enumerating
> > software referring to a valid CPU-visible address, correctly translated
> > for the view of the PCI device, ATS, etc. - and of any IOMMU to enforce
> > that.
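
To make the "how" concrete, a minimal sketch: a hypothetical host-bridge
node (the node name, addresses and sizes are made up here, and the parent
bus is assumed to use two address cells - this is not the actual R-Car
dts) with a single dma-ranges entry, stating that inbound accesses to PCI
bus address 0x0 land at CPU physical address 0x40000000, over a 1 GiB
window:

	pcie@f1000000 {
		#address-cells = <3>;
		#size-cells = <2>;
		/* <pci-flags pci-addr> <cpu-addr> <size>: one translation */
		dma-ranges = <0x42000000 0x0 0x00000000
			      0x0 0x40000000
			      0x0 0x40000000>;
	};
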
> 
> Yep, that's what I thought, too.
> 
> > It sounds like there is a 1:1 mapping between CPU and PCI - in which
> > case there isn't a reason for a dma-ranges property.
> 
> There's still the 32-bit limitation: PCI devices can only access
> addresses below the 32-bit boundary.

I guess a single dma-ranges entry that is limited to 32 bits would work here?
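
Something like the following sketch, perhaps (again a hypothetical node,
with the flags cell and cell counts as assumptions rather than the real
R-Car dts): a single 1:1 entry whose size stops at the 32-bit boundary,
so the bus address equals the CPU address and nothing above 4 GiB is
reachable:

	pcie@f1000000 {
		#address-cells = <3>;
		#size-cells = <2>;
		/* identity mapping, limited to the low 4 GiB */
		dma-ranges = <0x42000000 0x0 0x00000000
			      0x0 0x00000000
			      0x1 0x00000000>;
	};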

Thanks,

Andrew Murray

> 
> Gr{oetje,eeting}s,
> 
>                         Geert
> 
> -- 
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@xxxxxxxxxxxxxx
> 
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds


