Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM

On 21-02-09 11:16:08, Robin Murphy wrote:
> On 2021-02-07 09:56, Yi Sun wrote:
> >Hi,
> >
> >On 21-01-28 23:17:41, Keqian Zhu wrote:
> >
> >[...]
> >
> >>+static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> >>+				     struct vfio_dma *dma)
> >>+{
> >>+	struct vfio_domain *d;
> >>+
> >>+	list_for_each_entry(d, &iommu->domain_list, next) {
> >>+		/* Go through all domains anyway even if we fail */
> >>+		iommu_split_block(d->domain, dma->iova, dma->size);
> >>+	}
> >>+}
> >
> >This should be a switch that prepares for dirty log start. Per the
> >Intel VT-d spec, there is an SLADE bit defined in the Scalable-Mode
> >PASID Table Entry; it enables the Accessed/Dirty flags in
> >second-level paging entries. So a generic IOMMU interface is better
> >here: for Intel IOMMU it enables SLADE, and for ARM it splits
> >blocks.
> 
> From a quick look, VT-d's SLADE and SMMU's HTTU appear to be the
> exact same thing. This step isn't about enabling or disabling that
> feature itself (the proposal for SMMU is to simply leave HTTU
> enabled all the time); it's about controlling the granularity at
> which the dirty status can be detected/reported at all, since that's
> tied to the pagetable structure.
> 
> However, if an IOMMU were to come along with some other way of
> reporting dirty status that didn't depend on the granularity of
> individual mappings, then indeed it wouldn't need this operation.
> 
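To make the granularity point above concrete: the hardware sets the
dirty flag in whichever leaf descriptor maps the access, so the
smallest region a single write can dirty equals the mapping size. A
standalone illustration (the constants and function below are my own
sketch, not the patch's code):

#include <stdio.h>

/* Illustration only: mapping sizes a stage-2 pagetable might use. */
#define GRANULE_SIZE (4UL << 10)  /* 4 KiB page mapping */
#define BLOCK_SIZE   (2UL << 20)  /* 2 MiB block mapping */

/*
 * One dirty flag per leaf descriptor: a write dirties the whole
 * mapping it lands in, so smaller mappings mean finer tracking.
 */
static unsigned long trackable_units(unsigned long range,
				     unsigned long map_size)
{
	return range / map_size;
}

int main(void)
{
	unsigned long range = 1UL << 30; /* a 1 GiB DMA range */

	printf("block-mapped: %lu units of %lu KiB each\n",
	       trackable_units(range, BLOCK_SIZE), BLOCK_SIZE >> 10);
	printf("page-mapped:  %lu units of %lu KiB each\n",
	       trackable_units(range, GRANULE_SIZE), GRANULE_SIZE >> 10);
	return 0;
}

Hence iommu_split_block() before tracking starts (finer dirty
reporting) and iommu_merge_page() after it stops (back to fewer,
larger mappings for TLB efficiency).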
In my view, we can use these two start/stop interfaces to let user
space decide when dirty tracking starts and stops. For Intel, I think
we can set the SLADE bit when user space calls the start interface;
I don't think leaving SLADE enabled all the time is necessary for
Intel VT-d. So I suggest a generic interface here.
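For illustration, a minimal sketch of the kind of generic hook I mean;
the names here (iommu_dirty_ops, switch_dirty_log) are hypothetical,
not an existing kernel API. arm-smmu-v3 would implement it by
splitting/merging block mappings, and intel-iommu by toggling SLADE
in the Scalable-Mode PASID Table Entry:

#include <stdbool.h>
#include <stddef.h>

struct iommu_domain; /* opaque for this sketch */

/* One generic hook; each vendor driver decides what "switch" means. */
struct iommu_dirty_ops {
	int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
				unsigned long iova, size_t size, int prot);
};

/*
 * arm-smmu-v3: enable -> split blocks into pages; disable -> merge back.
 * intel-iommu: enable -> set SLADE in the PASID entry; disable -> clear it.
 */
static int iommu_switch_dirty_log(struct iommu_domain *domain,
				  const struct iommu_dirty_ops *ops,
				  bool enable, unsigned long iova,
				  size_t size, int prot)
{
	if (!ops || !ops->switch_dirty_log)
		return -1; /* would be -ENODEV in the kernel */
	return ops->switch_dirty_log(domain, enable, iova, size, prot);
}

With something like this, vfio_dma_dirty_log_start/stop above collapse
into a single call with enable set to true or false, and each vendor
keeps its mechanism behind the hook.

Thanks!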

> Robin.
> 
> >>+
> >>+static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
> >>+				    struct vfio_dma *dma)
> >>+{
> >>+	struct vfio_domain *d;
> >>+
> >>+	list_for_each_entry(d, &iommu->domain_list, next) {
> >>+		/* Go through all domains anyway even if we fail */
> >>+		iommu_merge_page(d->domain, dma->iova, dma->size,
> >>+				 d->prot | dma->prot);
> >>+	}
> >>+}
> >
> >Same as the comment above: a generic interface is required here.
> >
> >>+
> >>+static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
> >>+{
> >>+	struct rb_node *n;
> >>+
> >>+	/* Split and merge even if no iommu supports HWDBM now */
> >>+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> >>+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >>+
> >>+		if (!dma->iommu_mapped)
> >>+			continue;
> >>+
> >>+		/* Go through all dma ranges anyway even if we fail */
> >>+		if (start)
> >>+			vfio_dma_dirty_log_start(iommu, dma);
> >>+		else
> >>+			vfio_dma_dirty_log_stop(iommu, dma);
> >>+	}
> >>+}
> >>+
> >>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
> >>  					unsigned long arg)
> >>  {
> >>@@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
> >>  		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
> >>  		if (!iommu->dirty_page_tracking) {
> >>  			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> >>-			if (!ret)
> >>+			if (!ret) {
> >>  				iommu->dirty_page_tracking = true;
> >>+				vfio_iommu_dirty_log_switch(iommu, true);
> >>+			}
> >>  		}
> >>  		mutex_unlock(&iommu->lock);
> >>  		return ret;
> >>@@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
> >>  		if (iommu->dirty_page_tracking) {
> >>  			iommu->dirty_page_tracking = false;
> >>  			vfio_dma_bitmap_free_all(iommu);
> >>+			vfio_iommu_dirty_log_switch(iommu, false);
> >>  		}
> >>  		mutex_unlock(&iommu->lock);
> >>  		return 0;
> >>-- 
> >>2.19.1