> On Thu, Jan 23, 2025 at 03:35:21PM +0100, Christian König wrote:
>> Sending it as text mail once more.
>>
>> Am 23.01.25 um 15:32 schrieb Christian König:
>> Am 23.01.25 um 14:59 schrieb Jason Gunthorpe:
>>> On Wed, Jan 22, 2025 at 03:59:11PM +0100, Christian König wrote:
>>>>>> For example we have cases where multiple devices are in the same
>>>>>> IOMMU domain and re-use their DMA address mappings.
>>>>>
>>>>> IMHO this is just another flavour of "private" address flow between
>>>>> two cooperating drivers.
>>>>
>>>> Well that's the point. The importer is not cooperating here.
>>>
>>> If the private address relies on a shared iommu_domain controlled by
>>> the driver, then yes, the importer MUST be cooperating. For instance,
>>> if you send the same private address into RDMA it will explode because
>>> it doesn't have any notion of shared iommu_domain mappings, and it
>>> certainly doesn't set up any such shared domains.
>>
>> Hui? Why the heck should a driver own its iommu domain?
>
> I don't know, you are the one saying the drivers have special shared
> iommu_domains so DMA BUF needs some special design to accommodate it.
>
> I'm aware that DRM drivers do directly call into the iommu subsystem
> and do directly manage their own IOVA. I assumed this is what you were
> talking about. See below.
No, no, there are many more cases where drivers simply assume that different devices are in the same iommu domain, e.g. that different PCI endpoints can use the same dma_addr_t.
For example, the classic sound devices for HDMI audio on graphics cards work like this. It's been a very long time since I looked into that, but I think this is even a HW limitation.
In other words, if the device handled by the generic ALSA driver and the GPU are not in the same iommu domain, you run into trouble.
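As a rough sketch of the implicit assumption (not taken from any real driver, names invented): the buffer is mapped against the GPU's struct device, but the dma_addr_t is then handed to the audio function without a mapping call of its own.

#include <linux/dma-mapping.h>

/* Sketch only: map against the GPU function ... */
static dma_addr_t map_shared_buffer(struct device *gpu_dev,
                                    struct page *page, size_t size)
{
        dma_addr_t addr = dma_map_page(gpu_dev, page, 0, size,
                                       DMA_BIDIRECTIONAL);

        if (dma_mapping_error(gpu_dev, addr))
                return DMA_MAPPING_ERROR;

        /*
         * ... and the HDA/audio function is later programmed with the
         * same dma_addr_t without ever calling the DMA API for its own
         * struct device. That only works while both endpoints sit
         * behind the same IOMMU translation.
         */
        return addr;
}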
>> The domain is owned and assigned by the PCI subsystem under Linux.
>
> That domain is *exclusively* owned by the DMA API and is only accessed
> via maps created by DMA API calls. If you are using the DMA API
> correctly then all of this is abstracted and none of it matters to
> you. There is no concept of "shared domains" in the DMA API.
Well, it might never have been documented, but I know of quite a bunch of different cases that assume a DMA address will ultimately just work for some other device/driver as well.
Off hand I know of at least the generic ALSA driver case, some V4L driver (but that might use the same PCI endpoint, not 100% sure) and a multi-GPU case which works like this.
> You call the DMA API, you get a dma_addr_t that is valid for a
> *single* device, you program it in HW. That is all. There is no reason
> to dig deeper than this.
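For reference, the plain single-device usage described above looks roughly like this (a minimal sketch, not from any patch set):

#include <linux/dma-mapping.h>

static int queue_tx_buffer(struct device *dev, void *buf, size_t len,
                           dma_addr_t *out)
{
        dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /*
         * 'addr' is only defined for 'dev'; the API makes no promise
         * that any other device can use it.
         */
        *out = addr;
        return 0;
}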
>>>> The importer doesn't have the slightest idea that he is sharing its
>>>> DMA addresses with the exporter.
>>>
>>> Of course it does. The importer driver would have had to explicitly
>>> set this up! The normal kernel behavior is that all drivers get
>>> private iommu_domains controlled by the DMA API. If your driver is
>>> doing something else *it did it deliberately*.
>>
>> As far as I know that is simply not correct. Currently IOMMU
>> domains/groups are usually shared between devices.
>
> No, the opposite. The iommu subsystem tries to maximally isolate
> devices up to the HW limit. On server platforms every device is
> expected to get its own iommu domain.
>
>> Especially multi-function devices get only a single IOMMU domain.
>
> Only if the PCI HW doesn't support ACS.
Ah, yes that can certainly be.
> This is all DMA API internal details you shouldn't even be talking
> about at the DMA BUF level. It is all hidden and simply does not
> matter to DMA BUF at all.
Well we somehow need to support the existing use cases with the new API.
>>> The new iommu architecture has the probing driver disable the DMA API
>>> and can then manipulate its iommu domain however it likes, safely.
>>> I.e. the probing driver is aware of and participating in disabling
>>> the DMA API.
>>
>> Why the heck should we do this? That drivers manage all of that on
>> their own sounds like a massive step in the wrong direction.
>
> I am talking about DRM drivers that HAVE to manage their own for some
> reason I don't know. eg:
>
> drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c: tdev->iommu.domain = iommu_domain_alloc(&platform_bus_type);
> drivers/gpu/drm/msm/msm_iommu.c: domain = iommu_paging_domain_alloc(dev);
> drivers/gpu/drm/rockchip/rockchip_drm_drv.c: private->domain = iommu_paging_domain_alloc(private->iommu_dev);
> drivers/gpu/drm/tegra/drm.c: tegra->domain = iommu_paging_domain_alloc(dma_dev);
> drivers/gpu/host1x/dev.c: host->domain = iommu_paging_domain_alloc(host->dev);
>
> Normal simple drivers should never be calling these functions! If you
> are calling these functions you are not using the DMA API, and, yes,
> some cases like tegra n1x are actively sharing these special domains
> across multiple devices and drivers.
>
> If you want to pass an IOVA in one of these special driver-created
> domains then it would be some private address in DMABUF that only
> works on drivers that have understood they attached to these manually
> created domains. No DMA API involvement here.
That won't fly like this. That would break at least the ALSA use case and potentially quite a bunch of others.
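For completeness, the pattern behind the drivers listed above looks roughly like this (a sketch, not copied from any of them, error handling trimmed):

#include <linux/err.h>
#include <linux/iommu.h>

static struct iommu_domain *private_domain_setup(struct device *dev)
{
        struct iommu_domain *domain = iommu_paging_domain_alloc(dev);

        if (IS_ERR(domain))
                return domain;

        if (iommu_attach_device(domain, dev)) {
                iommu_domain_free(domain);
                return ERR_PTR(-ENODEV);
        }

        /*
         * From here on the driver picks its own IOVAs and calls
         * iommu_map() directly; an address in this domain means nothing
         * to a device that was not also attached to it.
         */
        return domain;
}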
>>>> I still strongly think that the exporter should talk with the DMA
>>>> API to set up the access path for the importer and *not* the
>>>> importer directly.
>>>
>>> It is contrary to the design of the new API which wants to
>>> co-optimize mapping and HW setup together as one unit.
>>
>> Yeah, and I'm really questioning this design goal. That sounds like
>> going totally in the wrong direction just because of the RDMA drivers.
>
> Actually it is storage that motivates this. It is just pointless to
> allocate a dma_addr_t list in the fast path when you don't need it.
> You can stream the dma_addr_t directly into HW structures that are
> necessary and already allocated.
That's what I can 100% agree on.
For GPUs it's basically the same, e.g. converting from the dma_addr_t to your native representation is just additional overhead nobody needs.
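A sketch of that fast path (the descriptor layout is invented purely for illustration): the dma_addr_t goes straight into the already-allocated HW structure instead of an intermediate list.

#include <linux/dma-mapping.h>
#include <linux/types.h>

struct hw_sg_entry {            /* invented descriptor layout */
        __le64 addr;
        __le32 len;
        __le32 rsvd;
};

static int map_into_hw_desc(struct device *dev, struct hw_sg_entry *desc,
                            struct page **pages, unsigned int npages)
{
        unsigned int i;

        for (i = 0; i < npages; i++) {
                dma_addr_t addr = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                               DMA_TO_DEVICE);

                if (dma_mapping_error(dev, addr))
                        return -ENOMEM; /* unwind omitted in this sketch */

                /* No intermediate dma_addr_t array, straight into HW. */
                desc[i].addr = cpu_to_le64(addr);
                desc[i].len = cpu_to_le32(PAGE_SIZE);
                desc[i].rsvd = 0;
        }
        return 0;
}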
>>> For instance in RDMA we want to hint and control the way the IOMMU
>>> mapping works in the DMA API to optimize the RDMA HW side. I can't do
>>> those optimizations if I'm not in control of the mapping.
>>
>> Why? What is the technical background here?
>
> dma-iommu.c chooses an IOVA alignment based on its own reasoning that
> is not always compatible with the HW. The HW can optimize if the IOVA
> alignment meets certain restrictions. Much like page tables in a GPU.
Yeah, but why can't we tell the DMA API about those restrictions instead of letting the driver manage the address space itself?
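Just to illustrate the kind of restriction, purely hypothetically: assume the HW can only use its large-entry optimization when the IOVA and length are aligned to the HW page size, which today the driver can only check after dma-iommu.c has already picked the address.

#include <linux/align.h>
#include <linux/sizes.h>
#include <linux/types.h>

#define HW_PAGE_SIZE    SZ_64K  /* hypothetical HW requirement */

static bool hw_can_use_large_entries(dma_addr_t iova, size_t len)
{
        return IS_ALIGNED(iova, HW_PAGE_SIZE) &&
               IS_ALIGNED(len, HW_PAGE_SIZE);
}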
>>> The same is probably true on the GPU side too, you want IOVAs that
>>> have tidy alignment with your PTE structure, but only the importer
>>> understands its own HW to make the correct hints to the DMA API.
>>
>> Yeah, but then express those as requirements to the DMA API and not
>> move all the important decisions into the driver where they are
>> implemented over and over again and potentially broken half the time.
>
> It would be in the DMA API, just the per-mapping portion of the API.
> Same as the multipath, the ATS, and more. It is all per-mapping
> decisions of the executing HW, not global decisions or something like
> that.
So the DMA API has some structure or similar to describe the necessary per-mapping properties?
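Purely as a hypothetical sketch of what I mean, none of these names exist anywhere:

struct dma_mapping_hints {              /* hypothetical, does not exist */
        unsigned long iova_align;       /* preferred IOVA alignment */
        bool use_ats;                   /* allow ATS for this mapping */
        bool allow_bounce;              /* is swiotlb bouncing acceptable? */
};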
Regards,
Christian.
> Jason