On 2019-05-23 3:48 a.m., Koenig, Christian wrote:
> On 23.05.19 at 11:43, Christoph Hellwig wrote:
>> On Thu, May 23, 2019 at 08:12:18AM +0000, Koenig, Christian wrote:
>>>> Are you DMA-mapping the addresses outside the P2PDMA code? If so there's
>>>> a huge mismatch with the existing users of P2PDMA (nvme-fabrics). If
>>>> you're not dma-mapping then I can't see how it could work because the
>>>> IOMMU should reject any requests to access those addresses.
>>> Well, we are using the DMA API (dma_map_resource) for this. If the P2P
>>> code is not using this then I would rather say that the P2P code is
>>> actually broken.
>>>
>>> Adding Christoph as well, because he is usually the one discussing stuff
>>> like that with me.
>> Heh. Actually dma_map_resource-ish APIs are the right thing to do,
>> but I'm not sure how you managed to be able to use it for PCIe P2P
>> yet, as it fails to account for any difference between the PCIe-level
>> "physical" address and the host's view of "physical" addresses.
>>
>> Do these offsets show up on AMD platforms? Do you adjust for them
>> elsewhere?
>
> I don't adjust the address manually anywhere. I just call
> dma_map_resource() and use the resulting DMA address to access the other
> device's PCI BAR.
>
> At least on my test system (AMD CPU + AMD GPUs) this seems to work
> totally fine. Currently trying to find time and an Intel box to test it
> there as well.

I'm sure this will work fine in all cases (assuming the RC/IOMMU supports
P2P). It's just that you're breaking the existing p2pdma users, which
currently work by avoiding the IOMMU and using pci_bus_address() instead.

Logan
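
[Editor's note: a minimal sketch, not from the thread, of the two approaches being contrasted. The helper names pci_p2p_addr_via_dma_api() and pci_p2p_addr_via_bus_addr() are made up for illustration; only dma_map_resource(), pci_bus_address(), and the pci_resource_*() accessors are real kernel APIs.]

/*
 * Illustrative only: two ways a driver could obtain an address that a
 * peer device can use to reach another device's BAR.
 */
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/*
 * Approach described by Christian above: run the BAR's physical address
 * through the DMA API, so the IOMMU (if present) programs a translation
 * and the returned dma_addr_t is what the peer device should use.
 */
static dma_addr_t pci_p2p_addr_via_dma_api(struct device *dma_dev,
					   struct pci_dev *peer, int bar)
{
	phys_addr_t phys = pci_resource_start(peer, bar);

	/* Caller should check the result with dma_mapping_error(). */
	return dma_map_resource(dma_dev, phys, pci_resource_len(peer, bar),
				DMA_BIDIRECTIONAL, 0);
}

/*
 * Approach the existing p2pdma users take, per Logan's point above: use
 * the PCI bus address of the BAR directly, bypassing the IOMMU entirely.
 */
static dma_addr_t pci_p2p_addr_via_bus_addr(struct pci_dev *peer, int bar)
{
	return pci_bus_address(peer, bar);
}

The mismatch Logan describes is that the two return different address spaces whenever an IOMMU (or a root complex with an address offset) sits between the devices, so mixing the two conventions in one P2PDMA path breaks the existing users.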