On 10/17/19 4:36 PM, Rob Herring wrote:

[...]

>>>>> The PCI device will trigger transactions to memory only when instructed
>>>>> to do so by Linux, right? Hence if Linux takes into account chosen/memory
>>>>> and dma-ranges, there is no problem?
>>>>
>>>> Unless of course the remote device initiates a transfer. And if the
>>>> controller is programmed such that accesses to the missing DRAM in the
>>>> holes are not filtered out by the controller, then the controller will
>>>> gladly let the transaction through. Do we really want to let this
>>>> happen ?
>>>
>>> If you've got devices making random unsolicited accesses then who's to
>>> say they wouldn't also hit valid windows and corrupt memory? If it's
>>> happening at all you've already lost.
>>
>> Not necessarily. If the controller is programmed with just the ranges
>> that are valid, then it will filter out at least the accesses outside
>> of valid memory. If it is programmed incorrectly, as you suggest, then
>> the accesses will go through, causing undefined behavior.
>>
>> And note that such weird buggy PCI hardware does exist. A slightly
>> unrelated example are some ath9k cards, which generate spurious MSIs
>> even when they are in legacy PCI IRQ mode. If the controller is
>> configured correctly, even those buggy cards work, because it can
>> filter the spurious MSIs out. If not, they do not.
>
> How do those devices work on h/w without inbound window configuration,
> or do they not?

With legacy IRQs.

> How do the spurious MSIs only go to invalid addresses and not valid
> addresses?

They do not; the point was that such broken hardware exists, so the
controller should be programmed correctly.

>> That's why I would prefer to configure the controller correctly, not
>> just hope that nothing bad will come out of misconfiguring it slightly.
>
> Again, just handling the first N dma-ranges entries and ignoring the
> rest is not 'configure the controller correctly'.

It is the best-effort thing to do. It is well possible that the next
generation of the controller will have more windows and could therefore
accommodate the whole list of ranges.

Thinking about this further, this patch should be OK either way: if
there is a DT which defines more DMA ranges than the controller can
handle, handling some of them is better than failing outright -- a PCI
that works with a subset of memory is better than a PCI that does not
work at all. (There is a rough sketch of what I mean at the end of this
mail.)

>>> And realistically, if the address
>>> isn't valid then it's not going to make much difference anyway - in
>>> probably 99% of cases, either the transaction doesn't hit a window and
>>> the host bridge returns a completer abort, or it does hit a window, the
>>> AXI side returns DECERR or SLVERR, and the host bridge translates that
>>> into a completer abort. Consider also that many PCI IPs don't have
>>> discrete windows and just map the entirety of PCI mem space directly to
>>> the system PA space.
>>
>> And in that 1% of cases, are we OK with a failure which could have been
>> easily prevented if the controller had been programmed correctly ? That
>> does not look like a good thing.
>
> You don't need dma-ranges if you want to handle holes in DRAM. Use
> memblock to get this information. Then it will work if you boot using
> UEFI too.

Do you have any further details about this ? I put a rough guess at
what you mean at the end of this mail.

> dma-ranges at the PCI bridge should be restrictions in the PCI bridge,
> not ones somewhere else in the system.
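To make the best-effort handling concrete, here is a rough sketch, not
against any particular driver. NUM_INBOUND_WINDOWS and
foo_pcie_set_inbound_window() are made-up names standing in for
whatever the real hardware provides; the dma_ranges resource list on
struct pci_host_bridge holds the parsed dma-ranges entries:

/*
 * Rough sketch only: walk the parsed dma-ranges and program as many
 * of them as the controller has inbound windows for, instead of
 * failing outright when there are too many entries.
 */
#include <linux/pci.h>
#include <linux/resource_ext.h>

#define NUM_INBOUND_WINDOWS	4	/* assumed hardware limit */

/* Made-up helper which programs one inbound (PCI -> AXI) window. */
static void foo_pcie_set_inbound_window(int idx, u64 pci_addr,
					u64 cpu_addr, u64 size);

static void foo_pcie_map_dma_ranges(struct pci_host_bridge *bridge)
{
	struct resource_entry *entry;
	int window = 0;

	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
		if (window == NUM_INBOUND_WINDOWS) {
			dev_warn(&bridge->dev,
				 "no inbound windows left, remaining dma-ranges ignored\n");
			break;
		}

		/* entry->res is the CPU view; res->start - offset the PCI view. */
		foo_pcie_set_inbound_window(window++,
					    entry->res->start - entry->offset,
					    entry->res->start,
					    resource_size(entry->res));
	}
}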
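And for the memblock idea, is something like this roughly what you have
in mind ? Again only a sketch, assuming the for_each_mem_range()
iterator from <linux/memblock.h> and reusing the made-up window helper
from above:

#include <linux/memblock.h>

static int foo_pcie_map_dram(void)
{
	phys_addr_t start, end;
	int window = 0;
	u64 i;

	/* Walk the actual DRAM banks, which skips the holes between them. */
	for_each_mem_range(i, &start, &end) {
		if (window == NUM_INBOUND_WINDOWS)
			return -ENOSPC;	/* more DRAM banks than windows */

		/* Identity-map this bank through one inbound window. */
		foo_pcie_set_inbound_window(window++, start, start, end - start);
	}

	return 0;
}

That would make the inbound setup depend on the actual DRAM layout
instead of on whatever dma-ranges the DT author wrote, which I suppose
is also why it would keep working when booting via UEFI.

--
Best regards,
Marek Vasut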