DW PCIe Root Ports and Endpoints can be equipped with the DW eDMA engine. In
that case it is critical to have the platform device pre-initialized with a
valid DMA mask, so the drivers using the eDMA engine are able to allocate
DMA-able buffers. Meanwhile the MSI-capable data has to be allocated from the
lowest 4GB region, and doing so implies temporarily changing the DMA mask.
Thus we need to restore the mask set by the low-level drivers once the
MSI-data allocation is done.

Signed-off-by: Serge Semin <Sergey.Semin@xxxxxxxxxxxxxxxxxxxx>
---
Changelog v7:
- This is a new patch added on the v7 stage of the series. (@Robin)
---
 drivers/pci/controller/dwc/pcie-designware-host.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 5762bd306261..1a3dae1f6aa2 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -326,7 +326,7 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct device *dev = pci->dev;
 	struct platform_device *pdev = to_platform_device(dev);
-	u64 *msi_vaddr;
+	u64 *msi_vaddr, dma_mask;
 	int ret;
 	u32 ctrl, num_ctrls;
 
@@ -366,6 +366,13 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
 						    dw_chained_msi_isr, pp);
 	}
 
+	/*
+	 * Save and then restore the DMA mask pre-set by the low-level
+	 * drivers after allocating the MSI-capable region. The mask might
+	 * be useful for the controllers with the embedded eDMA engine.
+	 */
+	dma_mask = dma_get_mask(dev);
+
 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
 	if (ret)
 		dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
@@ -378,6 +385,10 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
 		return -ENOMEM;
 	}
 
+	ret = dma_set_mask_and_coherent(dev, dma_mask);
+	if (ret)
+		dev_warn(dev, "Failed to restore the DMA mask\n");
+
 	return 0;
 }
 
-- 
2.38.1
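
P.S. For context, below is a minimal sketch (not part of this patch) of how a
low-level glue driver pre-sets the wider DMA mask that this change now
preserves across the MSI-data allocation. The foo_pcie_probe() name and the
surrounding structure are hypothetical and only illustrate the assumed usage:

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

/*
 * Hypothetical probe fragment for a DW PCIe glue driver with an
 * embedded eDMA engine. Without this patch the 64-bit mask set here
 * would be silently overwritten by dw_pcie_msi_host_init(); with it
 * the mask is restored once the 32-bit MSI-data buffer is allocated.
 */
static int foo_pcie_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Let the eDMA engine address the whole 64-bit space */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;

	/* ... controller setup, then dw_pcie_host_init(pp) ... */

	return 0;
}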