On 2024-01-11 4:21 am, Ajay Agarwal wrote:
There can be platforms that do not use/have 32-bit DMA addresses
but want to enumerate endpoints that support only 32-bit MSI
addresses. The current implementation of 32-bit IOVA allocation can
fail on such platforms, eventually leading to probe failure.
If there is a memory region reserved for the pci->dev, pick up
the MSI data from this region. This can be used by the platforms
described above.
Otherwise, if no memory region is reserved, try to allocate a
32-bit IOVA. If this allocation also fails, attempt a 64-bit
allocation so that the probe can still succeed. Note that if a
64-bit MSI address is allocated, EPs supporting only 32-bit MSI
addresses will not work.
Signed-off-by: Ajay Agarwal <ajayagarwal@xxxxxxxxxx>
---
Changelog since v1:
- Use reserved memory, if it exists, to setup the MSI data
- Fallback to 64-bit IOVA allocation if 32-bit allocation fails
.../pci/controller/dwc/pcie-designware-host.c | 50 ++++++++++++++-----
drivers/pci/controller/dwc/pcie-designware.h | 1 +
2 files changed, 39 insertions(+), 12 deletions(-)
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 7991f0e179b2..8c7c77b49ca8 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -331,6 +331,8 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
u64 *msi_vaddr;
int ret;
u32 ctrl, num_ctrls;
+ struct device_node *np;
+ struct resource r;
for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
pp->irq_mask[ctrl] = ~0;
@@ -374,20 +376,44 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
* order not to miss MSI TLPs from those devices the MSI target
* address has to be within the lowest 4GB.
*
- * Note until there is a better alternative found the reservation is
- * done by allocating from the artificially limited DMA-coherent
- * memory.
+ * Check if there is memory region reserved for this device. If yes,
+ * pick up the msi_data from this region. This will be helpful for
+ * platforms that do not use/have 32-bit DMA addresses but want to use
+ * endpoints which support only 32-bit MSI address.
+ * Else, if the memory region is not reserved, try to allocate a 32-bit
+ * IOVA. Additionally, if this allocation also fails, attempt a 64-bit
+ * allocation. If the 64-bit MSI address is allocated, then the EPs
+ * supporting 32-bit MSI address will not work.
*/
- ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
- if (ret)
- dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
+ np = of_parse_phandle(dev->of_node, "memory-region", 0);
+ if (np) {
+ ret = of_address_to_resource(np, 0, &r);
This is incorrect in several ways - reserved-memory nodes represent
actual system memory, so can't be used to reserve arbitrary PCI memory
space (which may be different if DMA offsets are involved); the whole
purpose of going through the DMA API is to ensure we get a unique *bus*
address. Obviously we don't want to reserve actual memory for something
that functionally doesn't need it, but conversely having a
reserved-memory region for an address which isn't memory would be
nonsensical. And even if this *were* a viable approach, you haven't
updated the DWC binding to allow it, nor defined a reserved-memory
binding for the node itself.
If it were reasonable to put something in DT at all, then the logical
thing would be a property expressing an MSI address directly on the
controller node itself, but even that would be dictating software policy
rather than describing an aspect of the platform itself. Plus this is
far from the only driver with this concern, so it wouldn't make much
sense to hack just one binding without all the others as well. The rest
of the DT already describes everything an OS needs to know in order to
decide an MSI address to use, it's just a matter of making this
particular OS do a better job of putting it all together.
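(For illustration only, since the above argues against it: such a
controller-node property might have looked something like the sketch
below. The property name "msi-target-addr" is hypothetical; no such
binding exists, and this is exactly the sort of software-policy knob
that does not belong in DT.)

```dts
pcie@40000000 {
	compatible = "snps,dw-pcie";
	/* ... usual reg, ranges, interrupts, etc. ... */

	/* Hypothetical property, NOT a defined binding: it would
	 * dictate an OS policy choice (which MSI target address to
	 * program) rather than describe a property of the hardware. */
	msi-target-addr = <0x0 0xfffff000>;
};
```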
Thanks,
Robin.
+ if (ret) {
+ dev_err(dev, "No memory address assigned to the region\n");
+ return ret;
+ }
- msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
- GFP_KERNEL);
- if (!msi_vaddr) {
- dev_err(dev, "Failed to alloc and map MSI data\n");
- dw_pcie_free_msi(pp);
- return -ENOMEM;
+ pp->msi_data = r.start;
+ } else {
+ dev_dbg(dev, "No %s specified\n", "memory-region");
+ ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+ if (ret)
+ dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
+
+ msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
+ GFP_KERNEL);
+ if (!msi_vaddr) {
+ dev_warn(dev, "Failed to alloc 32-bit MSI data. Attempting 64-bit now\n");
+ dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
+ msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
+ GFP_KERNEL);
+ }
+
+ if (!msi_vaddr) {
+ dev_err(dev, "Failed to alloc and map MSI data\n");
+ dw_pcie_free_msi(pp);
+ return -ENOMEM;
+ }
}
return 0;
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 55ff76e3d384..c85cf4d56e98 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -317,6 +317,7 @@ struct dw_pcie_rp {
phys_addr_t io_bus_addr;
u32 io_size;
int irq;
+ u8 coherent_dma_bits;
const struct dw_pcie_host_ops *ops;
int msi_irq[MAX_MSI_CTRLS];
struct irq_domain *irq_domain;