Re: [PATCH V3 2/3] PCI: rcar: Do not abort on too many inbound dma-ranges

On 18/10/2019 15:26, Marek Vasut wrote:
On 10/18/19 2:53 PM, Robin Murphy wrote:
On 18/10/2019 13:22, Marek Vasut wrote:
On 10/18/19 11:53 AM, Lorenzo Pieralisi wrote:
On Thu, Oct 17, 2019 at 05:01:26PM +0200, Marek Vasut wrote:

[...]

Again, just handling the first N dma-ranges entries and ignoring the
rest is not 'configure the controller correctly'.

It's the best-effort thing to do. It's well possible the next generation of the controller will have more windows, so it could accommodate the whole list of ranges.

In the context of DT describing the platform that doesn't make any
sense. It's like saying it's fine for U-Boot to also describe a bunch of
non-existent CPUs just because future SoCs might have them. Just because
the system would probably still boot doesn't mean it's right.

It's the exact opposite of what you just described -- the last release
of U-Boot currently populates a subset of the DMA ranges, not a
superset. The dma-ranges in the Linux DT currently are a superset of
available DRAM on the platform.

I'm not talking about the overall coverage of addresses - I've already made clear what I think about that - I'm talking about the *number* of individual entries. If the DT binding defines that dma-ranges entries directly represent bridge windows, then the bootloader for a given platform should never generate more entries than that platform has actual windows, because to do otherwise would be bogus.

Thinking about this further, this patch should be OK either way: if there is a DT which defines more DMA ranges than the controller can handle, handling some is better than failing outright -- a PCI that works with a subset of memory is better than a PCI that does not work at all.
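
To illustrate what that means in code: a minimal sketch of the clamping loop, assuming the generic OF range parser. The window count and the window-programming helper are made up for the example, and this is not the literal patch:

#include <linux/of_address.h>

#define MAX_NR_INBOUND_MAPS	6	/* inbound windows on this hardware (assumed) */

/* Hypothetical helper: program one inbound window from one range. */
static void program_inbound_window(struct of_pci_range *range, int idx);

static int sketch_parse_dma_ranges(struct device_node *np)
{
	struct of_pci_range_parser parser;
	struct of_pci_range range;
	int idx = 0;

	if (of_pci_dma_range_parser_init(&parser, np))
		return -EINVAL;

	for_each_of_pci_range(&parser, &range) {
		/*
		 * Old behavior: return -EINVAL here once the entries
		 * outnumber the windows.  New behavior: program what
		 * fits and ignore the rest.
		 */
		if (idx >= MAX_NR_INBOUND_MAPS)
			break;
		program_inbound_window(&range, idx++);
	}

	return 0;
}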

OK, to sum it up, this patch is there to deal with U-Boot adding multiple
dma-ranges to the DT.

Yes, this patch was posted over two months ago, at about the same time this
functionality was posted for inclusion in U-Boot. It made it into the recent
U-Boot release, but there was no feedback on the Linux patch until
recently.

U-Boot can be changed for the next release, assuming we agree on how it
should behave.

I still do not understand the benefit, given that for
DMA masks they are useless, as Rob pointed out, and ditto for inbound
window programming (given that, AFAICS, the PCI controller filters out
any transaction that does not fall within its inbound windows by default,
so adding dma-ranges has the net effect of widening the DMA'able address
space rather than limiting it).

In short, what's the benefit of adding more dma-ranges regions to the
DT (and consequently handling them in the kernel)?

The benefit is programming the controller's inbound windows correctly.
But if there is a better way to do that, I am open to implementing it.
Are there any suggestions / examples of that?

The crucial thing is that once we improve the existing "dma-ranges"
handling in the DMA layer such that it *does* consider multiple entries
properly, platforms presenting ranges which don't actually exist will
almost certainly start going wrong, and are either going to have to fix
their broken bootloaders or try to make a case for platform-specific
workarounds in core code.

Again, this is exactly the other way around: the dma-ranges populated by
U-Boot cover only existing DRAM. The single dma-range in the Linux DT covers
even the holes without existing DRAM.

So even if the Linux dma-ranges handling changes, there should be no
problem.

Say you have a single hardware window, and this DT property (1-cell numbers for simplicity):

	dma-ranges = <0x00000000 0x00000000 0x80000000>;

Driver reads one entry and programs the window to 2GB@0, DMA setup parses the first entry and sets device masks to 0x7fffffff, and everything's fine.
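
For concreteness, the 0x7fffffff figure falls out of the end address of that single entry. A sketch of the derivation, roughly in the spirit of what the of_dma code does today -- the function name is hypothetical; the DMA_BIT_MASK()/fls64() arithmetic is the point:

#include <linux/bitops.h>
#include <linux/dma-mapping.h>
#include <linux/of_address.h>

/*
 * Sketch only: derive a DMA mask from the first parsed dma-ranges
 * entry.  Not the exact kernel code.
 */
static u64 sketch_mask_from_first_range(const struct of_pci_range *range)
{
	u64 end = range->pci_addr + range->size - 1;

	/* 2GB@0: end = 0x7fffffff, fls64(end) = 31 -> mask 0x7fffffff */
	return DMA_BIT_MASK(fls64(end));
}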

Now say we describe the exact same address range this way instead:

	dma-ranges = <0x00000000 0x00000000 0x40000000>,
		     <0x40000000 0x40000000 0x40000000>;

Driver reads one entry and programs the window to 1GB@0, DMA setup parses the first entry and sets device masks to 0x3fffffff, and *today*, things are suboptimal but happen to work.

Now say we finally get round to fixing the of_dma code to properly generate DMA masks that actually include all usable address bits, a user upgrades their kernel package, and reboots with that same DT...

Driver reads one entry and programs the window to 1GB@0, DMA setup parses all entries and sets device masks to 0x7fffffff, devices start randomly failing or throwing DMA errors half the time, angry user looks at the changelog to find that somebody decided their now-corrupted filesystem is less important than the fact that hey, at least the machine didn't refuse to boot because the DT was obviously wrong. Are you sure that shouldn't be a problem?
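
To put numbers on that divergence, here is a standalone sketch (plain C, using the hypothetical two-entry layout above) of the first-entry-only mask versus a mask covering all entries:

#include <stdint.h>
#include <stdio.h>

struct range { uint64_t dma_addr, size; };

/* Smallest all-ones mask covering `end` (power-of-two minus one). */
static uint64_t mask_for(uint64_t end)
{
	return (1ULL << (64 - __builtin_clzll(end))) - 1;
}

int main(void)
{
	struct range r[] = {
		{ 0x00000000, 0x40000000 },	/* entry 1: 1GB@0 */
		{ 0x40000000, 0x40000000 },	/* entry 2: 1GB@1GB */
	};
	int n = sizeof(r) / sizeof(r[0]);

	/* Today: only the first entry feeds the mask. */
	uint64_t today = mask_for(r[0].dma_addr + r[0].size - 1);

	/* Fixed of_dma code: highest end address of ALL entries. */
	uint64_t fixed = today;
	for (int i = 1; i < n; i++) {
		uint64_t m = mask_for(r[i].dma_addr + r[i].size - 1);
		if (m > fixed)
			fixed = m;
	}

	/*
	 * 0x3fffffff vs 0x7fffffff: the wider mask lets devices DMA
	 * above 1GB, which the single 1GB window then rejects.
	 */
	printf("first-entry mask: %#llx\n", (unsigned long long)today);
	printf("all-entries mask: %#llx\n", (unsigned long long)fixed);
	return 0;
}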


Now, if you want to read the DT binding as less strict and let it just describe some arbitrarily-complex set of address ranges that should be valid for DMA, that's not insurmountable; you just need more complex logic in your driver capable of calculating how best to cover *all* those ranges using the available number of windows.
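
One plausible shape for that logic -- purely a sketch, with a made-up window count and R-Car-like example addresses -- is to take the sorted, non-overlapping parsed ranges and greedily merge the pair separated by the smallest gap until they fit the hardware, accepting that merged windows may span holes:

#include <stdint.h>
#include <stdio.h>

#define NR_WINDOWS 2	/* hardware inbound windows available (assumed) */

struct range { uint64_t start, end; };	/* inclusive bounds */

/*
 * Merge the two neighbors with the smallest gap between them until
 * the range count fits the window count.  Trades filtering precision
 * for complete coverage.
 */
static int merge_to_fit(struct range *r, int n, int max)
{
	while (n > max) {
		int best = 0;

		for (int i = 1; i < n - 1; i++)
			if (r[i + 1].start - r[i].end <
			    r[best + 1].start - r[best].end)
				best = i;
		r[best].end = r[best + 1].end;	/* absorb the neighbor */
		for (int i = best + 1; i < n - 1; i++)
			r[i] = r[i + 1];
		n--;
	}
	return n;
}

int main(void)
{
	/* Example only: assumes sorted, non-overlapping input. */
	struct range r[] = {
		{ 0x48000000,  0x7fffffff  },
		{ 0x480000000, 0x4bfffffff },
		{ 0x600000000, 0x63fffffff },
	};
	int n = merge_to_fit(r, 3, NR_WINDOWS);

	for (int i = 0; i < n; i++)
		printf("window %d: %#llx-%#llx\n", i,
		       (unsigned long long)r[i].start,
		       (unsigned long long)r[i].end);
	return 0;
}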

Robin.


