Re: [PATCH v1 5/8] iommu/amd: Use iommu_attach/detach_device()

On 2022-02-15 09:11, Joerg Roedel wrote:
> On Mon, Feb 14, 2022 at 11:00:59AM -0400, Jason Gunthorpe wrote:
>> On Mon, Feb 14, 2022 at 03:23:07PM +0100, Joerg Roedel wrote:
>>>
>>> Device drivers calling into iommu_attach_device() is seldom a good
>>> idea.  In this case the sound device has some generic hardware
>>> interface so that an existing sound driver can be re-used. Making this
>>> driver call iommu-specific functions for some devices is hard
>>> to justify.
>>
>> Er, so this is transparent to the generic sound device? I guess
>> something fixed up the dma_api on that device to keep working?
>
> Right, this is completely transparent to the sound device. The IOMMU code
> will not set dma_ops on the device because it uses a direct mapping, so
> the standard implementation will be used.

>> But, then, the requirement is that nobody is using the DMA API when we
>> make this change?
>
> That is the tricky part. The DMA-API keeps working after the change is
> made, because the new domain is also direct mapped. The new domain just
> has the ability to assign host page-tables to device PASIDs, so that DMA
> requests carrying a PASID TLP prefix will be remapped.
>
> It was actually a requirement for this code that when it jumps in, the
> DMA-API mappings stay live. And the reason a direct mapping is used at
> all is that the page-table walker of the IOMMU is a two-dimensional
> walker, which will treat the addresses found in the host page-tables as
> IO-virtual and translate them through the underlying page-table. So to
> use host page-tables the underlying mapping must be direct mapped.

Given how things have evolved since that code was originally written, we seemingly now have the def_domain_type override kicking in as soon as we first see an IOMMUv2-capable device. Do we even need to then subsequently switch to this special unmanaged domain with its pagetable sucked out, or could we just install the PASID table in the default domain itself?

Robin.

>> I don't think it matters how big/small the group is, only that when we
>> change the domain we know everything flowing through the domain is
>> still happy.
>
> Yes, that matters. The group size matters too for DMA-API performance.
> If two devices compete for the same lock in the allocator and/or the
> same cached magazines, things will slow down. That only matters for
> high-throughput devices, but still...
>
> Regards,
>
> 	Joerg



