Re: [PATCH v5 01/12] iommu: Introduce a replace API for device pasid

On 2024/11/5 15:57, Baolu Lu wrote:
On 2024/11/5 15:49, Yi Liu wrote:
On 2024/11/5 11:58, Baolu Lu wrote:
On 11/4/24 21:25, Yi Liu wrote:
+/**
+ * iommu_replace_device_pasid - Replace the domain that a pasid is attached to
+ * @domain: the new iommu domain
+ * @dev: the attached device.
+ * @pasid: the pasid of the device.
+ * @handle: the attach handle.
+ *
+ * This API allows the pasid to switch domains. Return 0 on success, or an
+ * error. The pasid will keep the old configuration if replacement failed.
+ * This is supposed to be used by iommufd, and iommufd can guarantee that
+ * both iommu_attach_device_pasid() and iommu_replace_device_pasid() would
+ * pass in a valid @handle.
+ */
+int iommu_replace_device_pasid(struct iommu_domain *domain,
+                   struct device *dev, ioasid_t pasid,
+                   struct iommu_attach_handle *handle)
+{
+    /* Caller must be a probed driver on dev */
+    struct iommu_group *group = dev->iommu_group;
+    struct iommu_attach_handle *curr;
+    int ret;
+
+    if (!domain->ops->set_dev_pasid)
+        return -EOPNOTSUPP;
+
+    if (!group)
+        return -ENODEV;
+
+    if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner ||
+        pasid == IOMMU_NO_PASID || !handle)
+        return -EINVAL;
+
+    handle->domain = domain;
+
+    mutex_lock(&group->mutex);
+    /*
+     * The iommu_attach_handle of the pasid becomes inconsistent with the
+     * actual handle per the below operation. The concurrent PRI path will
+     * deliver the PRQs per the new handle, this does not have a functional
+     * impact. The PRI path would eventually become consistent when the
+     * replacement is done.
+     */
+    curr = (struct iommu_attach_handle *)xa_store(&group->pasid_array,
+                              pasid, handle,
+                              GFP_KERNEL);

The iommu drivers can only flush pending PRs in the hardware queue when
__iommu_set_group_pasid() is called. So, it appears more reasonable to
reorder things like this:

     __iommu_set_group_pasid();
     switch_attach_handle();
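
Roughly something like the below (just a sketch; I'm assuming
__iommu_set_group_pasid() keeps its current arguments, and the error
handling is simplified):

    mutex_lock(&group->mutex);

    /*
     * Program the hardware for the new domain first. Any pending PRs
     * that the driver flushes at this point are still delivered via
     * the old attach handle.
     */
    ret = __iommu_set_group_pasid(domain, group, pasid);
    if (ret)
        goto out_unlock;

    /* Switch the attach handle only after the hardware is consistent. */
    curr = (struct iommu_attach_handle *)xa_store(&group->pasid_array,
                                                  pasid, handle, GFP_KERNEL);
out_unlock:
    mutex_unlock(&group->mutex);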

Or is there anything I overlooked?

I don't quite get why this handle is related to the iommu driver flushing PRs.
Before __iommu_set_group_pasid(), the pasid is still attached to the old
domain, and so is the hw configuration.

I meant that in the path of __iommu_set_group_pasid(), the iommu drivers
have the opportunity to flush the PRs pending in the hardware queue. If
the attach_handle is switched (by calling xa_store()) before
__iommu_set_group_pasid(), the pending PRs will be routed to the iopf
handler of the new domain, which is not desirable.

I see. You mean the handling of PRQs. I thought you were talking about PRQ
draining.

Yes, what you described was discussed before [1]. Forwarding PRQs to the new
domain looks to be OK.
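
i.e. once xa_store() has switched the handle, the PRQ path simply picks the
domain out of the pasid_array, conceptually like the below (a simplified
sketch, not the exact io-pgfault.c code; iopf_group/evt are placeholders):

    /* Conceptual view of how a PRQ for (dev, pasid) is dispatched: */
    handle = xa_load(&dev->iommu_group->pasid_array, pasid);
    if (handle)
        /* after the xa_store() above, this is already the new domain */
        handle->domain->iopf_handler(iopf_group);
    else
        iopf_error_response(dev, evt);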

But you reminded me of one thing. What I care about more is the case of
replacing an iopf-capable domain with a non-iopf-capable one. This means the
newly arriving PRQs would be responded to by iopf_error_response(). Do you
see an issue here?

[1] https://lore.kernel.org/linux-iommu/20240429135512.GC941030@xxxxxxxxxx/

--
Regards,
Yi Liu



