Re: [PATCH v2] PCI: dwc: Strengthen the MSI address allocation logic

On 2024-01-16 5:18 pm, Sajid Dalvi wrote:
On Tue, Jan 16, 2024 at 7:30 AM Robin Murphy <robin.murphy@xxxxxxx> wrote:

On 2024-01-11 4:21 am, Ajay Agarwal wrote:
There can be platforms that do not use/have 32-bit DMA addresses
but want to enumerate endpoints which support only 32-bit MSI
addresses. The current implementation of 32-bit IOVA allocation can
fail on such platforms, eventually leading to probe failure.

If there is a memory region reserved for the pci->dev, pick up
the MSI data from this region. This can be used by the platforms
described above.

Else, if the memory region is not reserved, try to allocate a
32-bit IOVA. If this allocation also fails, attempt a 64-bit
allocation so that probe can succeed. If a 64-bit MSI address is
allocated, then EPs that support only 32-bit MSI addresses will
not work.

Signed-off-by: Ajay Agarwal <ajayagarwal@xxxxxxxxxx>
---
Changelog since v1:
   - Use reserved memory, if it exists, to setup the MSI data
   - Fallback to 64-bit IOVA allocation if 32-bit allocation fails

   .../pci/controller/dwc/pcie-designware-host.c | 50 ++++++++++++++-----
   drivers/pci/controller/dwc/pcie-designware.h  |  1 +
   2 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 7991f0e179b2..8c7c77b49ca8 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -331,6 +331,8 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
       u64 *msi_vaddr;
       int ret;
       u32 ctrl, num_ctrls;
+     struct device_node *np;
+     struct resource r;

       for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
               pp->irq_mask[ctrl] = ~0;
@@ -374,20 +376,44 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
        * order not to miss MSI TLPs from those devices the MSI target
        * address has to be within the lowest 4GB.
        *
-      * Note until there is a better alternative found the reservation is
-      * done by allocating from the artificially limited DMA-coherent
-      * memory.
+      * Check if there is memory region reserved for this device. If yes,
+      * pick up the msi_data from this region. This will be helpful for
+      * platforms that do not use/have 32-bit DMA addresses but want to use
+      * endpoints which support only 32-bit MSI address.
+      * Else, if the memory region is not reserved, try to allocate a 32-bit
+      * IOVA. Additionally, if this allocation also fails, attempt a 64-bit
+      * allocation. If the 64-bit MSI address is allocated, then the EPs
+      * supporting 32-bit MSI address will not work.
        */
-     ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
-     if (ret)
-             dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
+     np = of_parse_phandle(dev->of_node, "memory-region", 0);
+     if (np) {
+             ret = of_address_to_resource(np, 0, &r);

This is incorrect in several ways - reserved-memory nodes represent
actual system memory, so can't be used to reserve arbitrary PCI memory
space (which may be different if DMA offsets are involved); the whole
purpose of going through the DMA API is to ensure we get a unique *bus*
address. Obviously we don't want to reserve actual memory for something
that functionally doesn't need it, but conversely having a
reserved-memory region for an address which isn't memory would be
nonsensical. And even if this *were* a viable approach, you haven't
updated the DWC binding to allow it, nor defined a reserved-memory
binding for the node itself.

If it was reasonable to put something in DT at all, then the logical
thing would be a property expressing an MSI address directly on the
controller node itself, but even that would be dictating software policy
rather than describing an aspect of the platform itself. Plus this is
far from the only driver with this concern, so it wouldn't make much
sense to hack just one binding without all the others as well. The rest
of the DT already describes everything an OS needs to know in order to
decide an MSI address to use, it's just a matter of making this
particular OS do a better job of putting it all together.

Thanks,
Robin.


Robin,
I need some clarification.
It seems you are implying that the PCIe device tree node should define
a property for the MSI address within the PCIe address space.
However, you also state that this would not be an ideal solution, and
that you would prefer using existing device tree constructs.
I am not sure what you mean by "The rest of the DT already describes
everything."
Do you mean adding an "msi" entry to reg-names and defining the address
in the reg list?

No, I'm saying the closest this should come to DT at all is the possibility of the low-level driver hard-coding a platform-specific value for pp->msi_data based on some platform-specific compatible, as Serge pointed to on v1.
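
For illustration, a rough sketch of that in a hypothetical DWC glue driver might look like the below. Everything here is invented for the example: the function name, the chosen address, and the assumption that the DWC core would honour a pre-set pp->msi_data instead of doing its own DMA-API allocation (which would need a matching core change).

static int foo_pcie_host_init(struct dw_pcie_rp *pp)
{
	/*
	 * Hypothetical: on this imaginary platform the bus address
	 * 0x80000000 is known never to be handed out as a DMA address,
	 * so it is safe to use as the MSI target. Assumes the core
	 * skips its own allocation when msi_data is already set.
	 */
	pp->msi_data = 0x80000000;

	return 0;
}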

Otherwise, based on the system memory layout and dma-ranges of the controller node we have enough information to figure out what PCI bus address ranges can't collide with any valid DMA mapping of RAM, and thus generate a suitable MSI address, but that really wants to be a generic PCI-layer helper (which could also generically implement the various DMA API tricks as a fallback if necessary).
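
As a rough illustration of that idea (not an actual proposal: the helper name and the naive "step past every inbound window" policy are invented here, and a real version would also need to handle bridges with no dma-ranges at all, unsorted or overlapping entries, and per-device DMA limits):

/*
 * Sketch only: pick a bus address below 4GB that lies outside every
 * inbound (dma-ranges) window of the host bridge, so it can never
 * collide with a DMA mapping of RAM. Assumes the dma_ranges list is
 * sorted by bus address and non-overlapping.
 */
static u64 pci_pick_msi_target_addr(struct pci_host_bridge *bridge)
{
	struct resource_entry *entry;
	u64 addr = SZ_64K;	/* arbitrary non-zero starting point */

	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
		/* entry->res is in CPU space; offset = CPU - bus */
		u64 bus_start = entry->res->start - entry->offset;
		u64 bus_end = entry->res->end - entry->offset;

		if (addr >= bus_start && addr <= bus_end)
			addr = bus_end + 1;
	}

	return addr < SZ_4G ? addr : 0;	/* 0: nothing suitable found */
}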

Thanks,
Robin.



