Re: [PATCH v3 2/8] iommu/sva: Use iopf domain attach/detach interface

On 2024/3/9 1:46, Jason Gunthorpe wrote:
> On Mon, Jan 22, 2024 at 03:38:57PM +0800, Lu Baolu wrote:
>> @@ -215,7 +202,23 @@ static struct iopf_group *iopf_group_alloc(struct iommu_fault_param *iopf_param,
>>  		group = abort_group;
>>  	}
>> +	cookie = iopf_pasid_cookie_get(iopf_param->dev, pasid);
>> +	if (!cookie && pasid != IOMMU_NO_PASID)
>> +		cookie = iopf_pasid_cookie_get(iopf_param->dev, IOMMU_NO_PASID);
>> +	if (IS_ERR(cookie) || !cookie) {
>> +		/*
>> +		 * The PASID of this device was not attached by an iopf-capable
>> +		 * domain. Ask the caller to abort handling of this fault.
>> +		 * Otherwise, the reference count will be switched to the new
>> +		 * iopf group and will be released in iopf_free_group().
>> +		 */
>> +		kfree(group);
>> +		group = abort_group;
>> +		cookie = NULL;
>> +	}

> If this is the main point of the cookie mechanism - why not just have
> a refcount inside the domain itself?
>
> I'm really having a hard time making sense of this cookie thing. We
> have a lifetime issue on the domain pointer; why is adding another
> structure the answer?

The whole cookie mechanism aims to address two things:

- Extend the domain's lifetime until all pending page faults are resolved.
- Associate information about the iommu device with each attachment of
  the domain, so that the iommufd device object ID can be quickly
  retrieved in the fault delivery path.


> I see we also need to stick a pointer in the domain for iommufd to get
> back to the hwpt, but that doesn't seem to need such a big system to
> accomplish - just add a void *. It would make sense for the domain to
> have some optional pointer to a struct for all the fault related data
> that becomes allocated when a PRI domain is created.

It's not about getting the hwpt back from the domain; it's about getting
the iommufd_device in the fault delivery path. The iommufd_device is not
per-domain, but per-domain-attachment.


> Also, I thought we'd basically said that domain detach is supposed to
> flush any open PRIs before returning. What happened to that as a
> solution to the lifetime problem?

If I remember correctly, what we discussed was to flush all pending
page faults when disabling PRI. In any case, the current driver behavior
is to flush faults during domain detachment, and if we keep this
behavior, we no longer need to worry about the domain's lifetime.


> Regardless, I think we should split this into two series - improve the
> PRI lifetime model for domains and then put iommufd on top of it.

Yes, agreed. Let's tackle those two points in separate series.

Best regards,
baolu



