On 2022-09-28 12:41, Serge Semin wrote:
On Thu, Aug 25, 2022 at 06:50:24PM +0000, Will McVicker wrote:
Re-work the msi_msg DMA allocation logic to use dmam_alloc_coherent() which
uses the coherent DMA mask to try to return an allocation within the DMA
mask limits. With that, we now can drop the msi_page parameter in struct
dw_pcie_rp. This allows kernel configurations that disable ZONE_DMA32 to
continue supporting a 32-bit DMA mask. Without this patch, the PCIe host
device will fail to probe when ZONE_DMA32 is disabled.
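In short, the change replaces the alloc_page(GFP_DMA32) + dma_map_page() pair with a single devres-managed coherent allocation; roughly (a sketch only, see the hunks below for the actual change):

	u64 *msi_vaddr;

	/*
	 * Managed allocation: freed automatically on driver detach, and the
	 * bus address returned in pp->msi_data respects the coherent DMA mask.
	 */
	msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
					GFP_KERNEL);
	if (!msi_vaddr)
		return -ENOMEM;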
As Rob already said here
https://lore.kernel.org/all/CAL_JsqJh=d-B51b6yPBRq0tOwbChN=AFPr-a19U1QdQZAE7c1A@xxxxxxxxxxxxxx/
and I mentioned in this thread
https://lore.kernel.org/linux-pci/20220912000211.ct6asuhhmnatje5e@mobilestation/
DW PCIe MSI doesn't cause any DMA due to the way the iMSI-RX engine is
designed. So reserving any real system memory is a waste in this case.
Reserving DMA-coherent memory is even more inappropriate, since it can
be expensive on some platforms (see the note in Part Ia of
Documentation/core-api/dma-api.rst), for instance on MIPS32 with
non-coherent common DMA.
This has been discussed before - in general it is difficult to pick an
arbitrary MSI address that is *guaranteed* not to overlap any valid DMA
address that somebody may try to use later. However there is a very easy
way to guarantee that the DMA API won't give anyone a particular DMA
address, which is to get an address directly from the DMA API and keep
it. Yes, that can technically be done with a streaming mapping *if* you
already have some memory allocated in a suitable physical location, but
coherent allocations are even more foolproof, simpler to clean up
(particularly with devres), and unlikely to be an issue on relevant
platforms (do any MIPS32 systems use this driver?)
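As a concrete illustration of that approach (a minimal sketch under the assumption of a generic struct device; the helper name reserve_msi_doorbell is hypothetical, not part of the patch):

	#include <linux/dma-mapping.h>

	static int reserve_msi_doorbell(struct device *dev, dma_addr_t *msi_target)
	{
		void *dummy;

		/*
		 * Allocate a tiny buffer purely to reserve a bus address: as
		 * long as this allocation is held, the DMA API will never hand
		 * the same address out to anyone else, so it is safe to use as
		 * the MSI doorbell.  Nothing ever reads or writes the memory.
		 */
		dummy = dmam_alloc_coherent(dev, sizeof(u64), msi_target,
					    GFP_KERNEL);
		if (!dummy)
			return -ENOMEM;

		return 0;
	}

Being a devres-managed allocation, it is also released automatically when the device goes away, so no explicit cleanup path is needed.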
Fixes: 35797e672ff0 ("PCI: dwc: Fix MSI msi_msg DMA mapping")
Reported-by: Isaac J. Manjarres <isaacmanjarres@xxxxxxxxxx>
Signed-off-by: Will McVicker <willmcvicker@xxxxxxxxxx>
Acked-by: Jingoo Han <jingoohan1@xxxxxxxxx>
Reviewed-by: Rob Herring <robh@xxxxxxxxxx>
---
.../pci/controller/dwc/pcie-designware-host.c | 28 +++++--------------
drivers/pci/controller/dwc/pcie-designware.h | 1 -
2 files changed, 7 insertions(+), 22 deletions(-)
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 7746f94a715f..39f3b37d4033 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -267,15 +267,6 @@ static void dw_pcie_free_msi(struct dw_pcie_rp *pp)
irq_domain_remove(pp->msi_domain);
irq_domain_remove(pp->irq_domain);
-
- if (pp->msi_data) {
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct device *dev = pci->dev;
-
- dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
- if (pp->msi_page)
- __free_page(pp->msi_page);
- }
}
static void dw_pcie_msi_init(struct dw_pcie_rp *pp)
@@ -336,6 +327,7 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
+ u64 *msi_vaddr;
int ret;
u32 ctrl, num_ctrls;
@@ -375,22 +367,16 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
dw_chained_msi_isr, pp);
}
- ret = dma_set_mask(dev, DMA_BIT_MASK(32));
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
This has been redundant in the first place, since none of the DW PCIe
low-level drivers update the mask and it is 32 bits wide by default
anyway:
https://elixir.bootlin.com/linux/latest/source/drivers/of/platform.c#L167
No, in general drivers should always explicitly set their mask(s) and
check the return value to make sure DMA is possible at all before trying
any other DMA API calls. There's no guarantee that the default mask is
usable (e.g. some systems don't have any 32-bit addressable RAM), or
that it's even always 32 bits (due to crufty reasons of something
of_dma_configure() tried to do a long time ago).
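That is, the usual shape is roughly the following (a sketch only; note the patch below deliberately only warns instead of bailing out):

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	if (ret) {
		/* The platform cannot satisfy 32-bit DMA addressing at all */
		dev_err(dev, "No usable DMA addressing configuration\n");
		return ret;
	}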
Thanks,
Robin.
if (ret)
dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
- pp->msi_page = alloc_page(GFP_DMA32);
- pp->msi_data = dma_map_page(dev, pp->msi_page, 0,
- PAGE_SIZE, DMA_FROM_DEVICE);
- ret = dma_mapping_error(dev, pp->msi_data);
- if (ret) {
- dev_err(pci->dev, "Failed to map MSI data\n");
- __free_page(pp->msi_page);
- pp->msi_page = NULL;
- pp->msi_data = 0;
+ msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
+ GFP_KERNEL);
Changing the whole device DMA mask due to something that doesn't
perform DMA seems inappropriate. I'd suggest preserving the ZONE_DMA32
allocation here until something like what @Robin suggested in the last
paragraph of
https://lore.kernel.org/linux-pci/1e63a581-14ae-b4b5-a5bf-ca8f09c33af6@xxxxxxx/
is implemented, especially since there are still common drivers in the
kernel which rely on that zone.
-Sergey
+ if (!msi_vaddr) {
+ dev_err(dev, "Failed to alloc and map MSI data\n");
dw_pcie_free_msi(pp);
-
- return ret;
+ return -ENOMEM;
}
return 0;
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 09b887093a84..a871ae7eb59e 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -243,7 +243,6 @@ struct dw_pcie_rp {
struct irq_domain *irq_domain;
struct irq_domain *msi_domain;
dma_addr_t msi_data;
- struct page *msi_page;
struct irq_chip *msi_irq_chip;
u32 num_vectors;
u32 irq_mask[MAX_MSI_CTRLS];
--
2.37.2.672.g94769d06f0-goog