Re: [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent

On 2022-04-08 14:35, Jason Gunthorpe wrote:
> On Fri, Apr 08, 2022 at 02:11:10PM +0100, Robin Murphy wrote:
>
>>> However, this creates an oddball situation where the vfio_device and
>>> its struct device could become unplugged from the system while the
>>> domain that the struct device spawned continues to exist and remains
>>> attached to other devices in the same group, i.e. the iommu driver has
>>> to be careful not to retain the struct device input.
>>
>> Oh, I rather assumed that VFIO might automatically tear down the
>> container/domain when the last real user disappears.
>
> It does, that isn't quite what I mean..
>
> Let's say a simple case with two groups and two devices.
>
>  Open a VFIO container FD.
>
>  We open group A and SET_CONTAINER it. This results in:
>     domain_A = iommu_domain_alloc(device_A)
>     iommu_attach_group(domain_A, device_A->group)
>
>  We open group B and SET_CONTAINER it. Using the sharing logic we end
>  up doing:
>     iommu_attach_group(domain_A, device_B->group)
>
> Now we close the group A FD, detach device_A->group from domain_A, and
> the driver core hot-unplugs device A completely from the system.
>
> However, domain_A remains in the system, used by group B's open FD.
>
> It is a bit funny at least.. I think it is just something to document
> and be aware of for iommu driver writers that they probably shouldn't
> try to store the allocation device in their domain struct.
>
> IMHO the only purpose of the allocation device is to crystallize the
> configuration of the iommu_domain at allocation time.

Oh, for sure. When I implement the API switch, I can certainly try to document it as clearly as possible that the device argument is only for resolving the correct IOMMU ops and target instance, and the resulting domain is still not in any way tied to that specific device.

I hadn't thought about how it might look to future developers who aren't already familiar with all the history here, so thanks for the nudge!
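
For illustration, a minimal sketch of that rule, assuming a device-based
domain_alloc hook lands; the names (my_iommu_domain, my_domain_alloc) and
the pgsize value are invented for the example, and the only point is that
the device is consulted at allocation time but never stored:

    #include <linux/iommu.h>
    #include <linux/sizes.h>
    #include <linux/slab.h>

    struct my_iommu_domain {
        struct iommu_domain domain;
        /*
         * Configuration crystallized from the allocating device, but
         * deliberately no struct device pointer: the device that
         * spawned the domain may be unplugged while the domain lives
         * on, still attached to other groups.
         */
    };

    /* Hypothetical device-based allocation hook. */
    static struct iommu_domain *my_domain_alloc(struct device *dev)
    {
        struct my_iommu_domain *md = kzalloc(sizeof(*md), GFP_KERNEL);

        if (!md)
            return NULL;

        /*
         * Use dev only to pick the right IOMMU instance and to
         * crystallize the configuration here and now; SZ_4K is a
         * placeholder, the real derivation is driver-specific.
         */
        md->domain.pgsize_bitmap = SZ_4K;
        return &md->domain;
    }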

>> as long as we take care not to release DMA ownership until that point
>> also. As you say, it just looks a bit funny.

> The DMA ownership should be OK, as we take ownership on each group FD
> open.
>
>>> I suppose that is inevitable to have sharing of domains across
>>> devices, so the iommu drivers will have to accommodate this.

>> I think domain lifecycle management is already entirely up to the users and
>> not something that IOMMU drivers need to worry about. Drivers should only
>> need to look at per-device data in attach/detach (and, once I've finished,
>> alloc) from the device argument which can be assumed to be valid at that
>> point. Otherwise, all the relevant internal data for domain ops should
>> belong to the domain already.
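
As a purely illustrative sketch of that split (reusing my_iommu_domain from
the sketch above; my_device_data and my_hw_install_ctx() are likewise
invented), an attach callback would dereference the device argument only
while it is known to be valid, and keep everything it needs afterwards in
domain-owned state:

    /* Hypothetical per-device data the driver set up at probe time. */
    struct my_device_data {
        struct my_hw_ctx *hw;
    };

    static int my_attach_dev(struct iommu_domain *domain, struct device *dev)
    {
        struct my_iommu_domain *md =
            container_of(domain, struct my_iommu_domain, domain);
        struct my_device_data *data = dev_iommu_priv_get(dev);

        if (!data)
            return -ENODEV;

        /*
         * Program this device's hardware context now, while dev is
         * guaranteed valid; nothing retained after this call points
         * back at dev itself, only at state owned by the domain.
         */
        return my_hw_install_ctx(data->hw, md);
    }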

> Making attach/detach take a struct device would be nice - but I would
> expect the attach/detach calls to use a strictly paired struct device, and I
> don't think this trick of selecting an arbitrary vfio_device will
> achieve that.
>
> So, I suppose VFIO would want to attach/detach on every vfio_device
> individually, and it would iterate over the group instead of doing a
> list_first_entry() like above. This would not be hard to do in VFIO.

It feels like we've already beaten that discussion to death in other threads; regardless of what exact argument the iommu_attach/detach operations end up taking, they have to operate on the whole (explicit or implicit) iommu_group at once, because doing anything else would defeat the point of isolation groups, and be impossible for alias groups.

> Not sure what the iommu layer would have to do to accommodate this..

If it's significantly easier for VFIO to just run through a whole list of devices and attach each one without having to keep track of whether they might share an iommu_group which has already been attached, then we can probably relax the API a little such that attaching to a domain which is already the current domain becomes a no-op instead of returning -EBUSY, but I'd rather not create an expectation that anyone *has* to do that. For any other callers that would be forcing *more* iommu_group implementation details onto them, when we all want less.
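
To make that relaxation concrete, something along these lines in the core
group-attach path would be enough (field and helper names are modelled on
drivers/iommu/iommu.c, but treat this as a sketch rather than a literal
patch):

    static int __iommu_attach_group(struct iommu_domain *domain,
                                    struct iommu_group *group)
    {
        int ret;

        /*
         * Proposed relaxation: re-attaching the group's current domain
         * becomes a harmless no-op instead of -EBUSY, so a caller may
         * blindly attach every device's group in turn.
         */
        if (group->domain == domain)
            return 0;

        if (group->domain && group->domain != group->default_domain)
            return -EBUSY;

        ret = __iommu_group_for_each_dev(group, domain,
                                         iommu_group_do_attach_device);
        if (ret == 0)
            group->domain = domain;

        return ret;
    }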

Cheers,
Robin.


