On Wed, Aug 04, 2021 at 12:10:45AM +0530, Praveen Kumar wrote:
> On 09-07-2021 17:13, Wei Liu wrote:
> > +static void hv_iommu_domain_free(struct iommu_domain *d)
> > +{
> > +        struct hv_iommu_domain *domain = to_hv_iommu_domain(d);
> > +        unsigned long flags;
> > +        u64 status;
> > +        struct hv_input_delete_device_domain *input;
> > +
> > +        if (is_identity_domain(domain) || is_null_domain(domain))
> > +                return;
> > +
> > +        local_irq_save(flags);
> > +        input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > +        memset(input, 0, sizeof(*input));
> > +
> > +        input->device_domain = domain->device_domain;
> > +
> > +        status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, NULL);
> > +
> > +        local_irq_restore(flags);
> > +
> > +        if (!hv_result_success(status))
> > +                pr_err("%s: hypercall failed, status %lld\n", __func__, status);
>
> Is it OK to deallocate the resources if the hypercall has failed?

It should be fine. We leak some resources in the hypervisor, but Linux
is in a rather wedged state anyway. Refusing to free up resources in
Linux does not do much good.

> Do we have any specific error code, EBUSY (kind of), which we need to
> wait upon?

I have not found a circumstance in which that can happen.

> > +
> > +        ida_free(&domain->hv_iommu->domain_ids, domain->device_domain.domain_id.id);
> > +
> > +        iommu_put_dma_cookie(d);
> > +
> > +        kfree(domain);
> > +}
> > +
> > +static int hv_iommu_attach_dev(struct iommu_domain *d, struct device *dev)
> > +{
> > +        struct hv_iommu_domain *domain = to_hv_iommu_domain(d);
> > +        u64 status;
> > +        unsigned long flags;
> > +        struct hv_input_attach_device_domain *input;
> > +        struct pci_dev *pdev;
> > +        struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > +
> > +        /* Only allow PCI devices for now */
> > +        if (!dev_is_pci(dev))
> > +                return -EINVAL;
> > +
> > +        pdev = to_pci_dev(dev);
> > +
> > +        dev_dbg(dev, "Attaching (%strusted) to %d\n", pdev->untrusted ? "un" : "",
> > +                domain->device_domain.domain_id.id);
> > +
> > +        local_irq_save(flags);
> > +        input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > +        memset(input, 0, sizeof(*input));
> > +
> > +        input->device_domain = domain->device_domain;
> > +        input->device_id = hv_build_pci_dev_id(pdev);
> > +
> > +        status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
> > +        local_irq_restore(flags);
> > +
> > +        if (!hv_result_success(status))
> > +                pr_err("%s: hypercall failed, status %lld\n", __func__, status);
>
> Does it make sense to set vdev->domain = NULL here?

It is already NULL -- there is no other code path that sets it, and the
detach path always sets the field back to NULL.

> > +        else
> > +                vdev->domain = domain;
> > +
> > +        return hv_status_to_errno(status);
> > +}
> > +

[...]

> > +static size_t hv_iommu_unmap(struct iommu_domain *d, unsigned long iova,
> > +                             size_t size, struct iommu_iotlb_gather *gather)
> > +{
> > +        size_t unmapped;
> > +        struct hv_iommu_domain *domain = to_hv_iommu_domain(d);
> > +        unsigned long flags, npages;
> > +        struct hv_input_unmap_device_gpa_pages *input;
> > +        u64 status;
> > +
> > +        unmapped = hv_iommu_del_mappings(domain, iova, size);
> > +        if (unmapped < size)
> > +                return 0;
>
> Is there a case where unmapped > 0 && unmapped < size?

There could be such a case -- hv_iommu_del_mappings can return any
value >= 0, so a partial unmap is possible. Is there a problem with
this predicate?

Wei.
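
P.S. A few sketches to make the above concrete. First, the per-CPU
input page pattern used throughout the quoted code: interrupts stay
disabled for as long as the page is borrowed, because an interrupt
handler issuing its own hypercall on the same CPU would otherwise
reuse the very same page. HVCALL_EXAMPLE_CODE and struct
hv_input_example below are placeholders, not real definitions:

        /* General shape of a hypercall through the per-CPU input page. */
        static u64 hv_do_example_hypercall(void)
        {
                struct hv_input_example *input;
                unsigned long flags;
                u64 status;

                /*
                 * Keep interrupts off while the per-CPU page is in use so
                 * nothing on this CPU can clobber it underneath us.
                 */
                local_irq_save(flags);
                input = *this_cpu_ptr(hyperv_pcpu_input_arg);
                memset(input, 0, sizeof(*input));

                /* ... fill in the call-specific input fields here ... */

                status = hv_do_hypercall(HVCALL_EXAMPLE_CODE, input, NULL);
                local_irq_restore(flags);

                return status;
        }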
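
Second, on vdev->domain: the detach path is not quoted in this thread,
but the invariant relies on it clearing the field. A minimal sketch of
what such a path looks like -- hv_iommu_detach_dev, struct
hv_input_detach_device_domain and HVCALL_DETACH_DEVICE_DOMAIN here are
assumptions mirroring the attach path above, not necessarily the
actual patch code:

        static void hv_iommu_detach_dev(struct iommu_domain *d, struct device *dev)
        {
                struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
                struct hv_input_detach_device_domain *input;
                unsigned long flags;
                u64 status;

                local_irq_save(flags);
                input = *this_cpu_ptr(hyperv_pcpu_input_arg);
                memset(input, 0, sizeof(*input));

                input->device_id = hv_build_pci_dev_id(to_pci_dev(dev));

                status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
                local_irq_restore(flags);

                if (!hv_result_success(status))
                        pr_err("%s: hypercall failed, status %lld\n", __func__, status);

                /* Maintain the invariant: vdev->domain is NULL when detached. */
                vdev->domain = NULL;
        }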
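
Third, on unmapped > 0 && unmapped < size: the partial count falls out
naturally if hv_iommu_del_mappings (also not quoted here) walks the
domain's mappings and only counts what it actually removed. An assumed
shape -- struct hv_iommu_mapping, domain->mappings and
domain->mappings_lock are illustrative, not the actual patch code:

        static size_t hv_iommu_del_mappings(struct hv_iommu_domain *domain,
                                            unsigned long iova, size_t size)
        {
                struct hv_iommu_mapping *mapping, *tmp;
                size_t unmapped = 0;
                unsigned long flags;

                spin_lock_irqsave(&domain->mappings_lock, flags);
                list_for_each_entry_safe(mapping, tmp, &domain->mappings, list) {
                        /* Only mappings fully inside [iova, iova + size) go. */
                        if (mapping->iova >= iova &&
                            mapping->iova + mapping->size <= iova + size) {
                                unmapped += mapping->size;
                                list_del(&mapping->list);
                                kfree(mapping);
                        }
                }
                spin_unlock_irqrestore(&domain->mappings_lock, flags);

                /*
                 * A range only partially covered by existing mappings yields
                 * unmapped > 0 && unmapped < size -- the case asked about.
                 */
                return unmapped;
        }

hv_iommu_unmap then treats anything short of a full unmap as a failure
and returns 0, so the partial case never reaches the hypervisor.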