On Thu, Jan 11, 2024 at 09:51:03AM +0530, Ajay Agarwal wrote:
> There can be platforms that do not use/have 32-bit DMA addresses
> but want to enumerate endpoints which support only 32-bit MSI
> address. The current implementation of 32-bit IOVA allocation can
> fail for such platforms, eventually leading to the probe failure.
>
> If there is a memory region reserved for the pci->dev, pick up
> the MSI data from this region. This can be used by the platforms
> described above.

I don't like this part of the change. Here is why:

1. Once again, the DW PCIe iMSI-RX engine doesn't need any actual
_system memory_ to work! All it needs is a single dword within the
PCIe bus address space! The solution of using a coherent DMA
allocation is mainly a hack/workaround to make sure the system memory
behind the MSI address isn't utilized for something else. The correct
solution would be to reserve PCIe bus address space for MSIs, with no
RAM behind it at all. For instance, if there is no RAM below 4GB, what
prevents us from using the lowest PCIe bus addresses for iMSI-RX? (Not
to mention that the IOMMU and things like the in-/outbound iATU can
also be utilized to set the lowest PCIe bus address space free.) You,
on the contrary, suggest converting a temporary workaround into a
platform DT-bindings convention by defining new "memory-region"
property semantics. This basically propagates a weak software solution
into the DT bindings, which isn't right.

2. Even if we accept the solution of always allocating coherent DMA
memory for iMSI-RX, I don't really see much benefit in reserving a
specific system memory region for it. If there is no actual RAM below
4GB, the reservation won't work. If there is, what's the point in
reserving it when the normal DMA mask and dma_alloc_coherent() will
work just fine?

3. If, as an emergency solution for this problem, you wish to assign a
specific DMA buffer, then you don't need to define new non-standard
"memory-region" property semantics.
What about using the "restricted-dma-pool" reserved-memory region?
https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/shared-dma-pool.yaml

-Serge(y)

> Else, if the memory region is not reserved, try to allocate a
> 32-bit IOVA. Additionally, if this allocation also fails, attempt
> a 64-bit allocation for probe to be successful. If the 64-bit MSI
> address is allocated, then the EPs supporting 32-bit MSI address
> will not work.
>
> Signed-off-by: Ajay Agarwal <ajayagarwal@xxxxxxxxxx>
> ---
> Changelog since v1:
> - Use reserved memory, if it exists, to setup the MSI data
> - Fallback to 64-bit IOVA allocation if 32-bit allocation fails
>
>  .../pci/controller/dwc/pcie-designware-host.c | 50 ++++++++++++++-----
>  drivers/pci/controller/dwc/pcie-designware.h  |  1 +
>  2 files changed, 39 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 7991f0e179b2..8c7c77b49ca8 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -331,6 +331,8 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
>  	u64 *msi_vaddr;
>  	int ret;
>  	u32 ctrl, num_ctrls;
> +	struct device_node *np;
> +	struct resource r;
>
>  	for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
>  		pp->irq_mask[ctrl] = ~0;
> @@ -374,20 +376,44 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
>  	 * order not to miss MSI TLPs from those devices the MSI target
>  	 * address has to be within the lowest 4GB.
>  	 *
> -	 * Note until there is a better alternative found the reservation is
> -	 * done by allocating from the artificially limited DMA-coherent
> -	 * memory.
> +	 * Check if there is memory region reserved for this device. If yes,
> +	 * pick up the msi_data from this region.
> +	 * This will be helpful for platforms that do not use/have 32-bit
> +	 * DMA addresses but want to use endpoints which support only
> +	 * 32-bit MSI address.
> +	 * Else, if the memory region is not reserved, try to allocate a 32-bit
> +	 * IOVA. Additionally, if this allocation also fails, attempt a 64-bit
> +	 * allocation. If the 64-bit MSI address is allocated, then the EPs
> +	 * supporting 32-bit MSI address will not work.
> +	 */
> -	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> -	if (ret)
> -		dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
> +	np = of_parse_phandle(dev->of_node, "memory-region", 0);
> +	if (np) {
> +		ret = of_address_to_resource(np, 0, &r);
> +		if (ret) {
> +			dev_err(dev, "No memory address assigned to the region\n");
> +			return ret;
> +		}
>
> -	msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
> -					GFP_KERNEL);
> -	if (!msi_vaddr) {
> -		dev_err(dev, "Failed to alloc and map MSI data\n");
> -		dw_pcie_free_msi(pp);
> -		return -ENOMEM;
> +		pp->msi_data = r.start;
> +	} else {
> +		dev_dbg(dev, "No %s specified\n", "memory-region");
> +		ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> +		if (ret)
> +			dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
> +
> +		msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
> +						GFP_KERNEL);
> +		if (!msi_vaddr) {
> +			dev_warn(dev, "Failed to alloc 32-bit MSI data. Attempting 64-bit now\n");
> +			dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
> +			msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
> +							GFP_KERNEL);
> +		}
> +
> +		if (!msi_vaddr) {
> +			dev_err(dev, "Failed to alloc and map MSI data\n");
> +			dw_pcie_free_msi(pp);
> +			return -ENOMEM;
> +		}
>  	}
>
>  	return 0;
> diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
> index 55ff76e3d384..c85cf4d56e98 100644
> --- a/drivers/pci/controller/dwc/pcie-designware.h
> +++ b/drivers/pci/controller/dwc/pcie-designware.h
> @@ -317,6 +317,7 @@ struct dw_pcie_rp {
>  	phys_addr_t		io_bus_addr;
>  	u32			io_size;
>  	int			irq;
> +	u8			coherent_dma_bits;
>  	const struct dw_pcie_host_ops	*ops;
>  	int			msi_irq[MAX_MSI_CTRLS];
>  	struct irq_domain	*irq_domain;
> --
> 2.43.0.275.g3460e3d667-goog
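
P.S. To make the "restricted-dma-pool" suggestion a bit more concrete,
here is a rough sketch of what the DT could look like, following the
shared-dma-pool binding. The node names, labels, addresses and sizes
below are made-up placeholders, not a recommendation for any particular
platform:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Hypothetical pool placed below 4GB; reg values are
		 * placeholders and must match the platform memory map.
		 */
		pcie_dma_pool: restricted-dma-pool@bf000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0xbf000000 0x0 0x100000>;
		};
	};

	/* The DWC host node (other required properties omitted) just
	 * references the pool via the standard "memory-region" property.
	 */
	pcie@0 {
		memory-region = <&pcie_dma_pool>;
	};
};
```

As far as I can tell, with CONFIG_DMA_RESTRICTED_POOL enabled the
device's coherent allocations (including the dmam_alloc_coherent() call
above) would then be served from that pool, so no new "memory-region"
semantics would need to be invented for the MSI-target case.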