RE: iommufd dirty page logging overview

> From: Tian, Kevin
> Sent: Saturday, March 19, 2022 3:55 PM
> 
> > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > Sent: Friday, March 18, 2022 8:41 PM
> >
> > On Fri, Mar 18, 2022 at 09:23:49AM +0000, Tian, Kevin wrote:
> > > > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > > Sent: Thursday, March 17, 2022 7:51 AM
> > > >
> > > > > Is there a rough idea of what the new dirty page logging will
> > > > > look like? Is this already explained in the email threads and
> > > > > I missed it?
> > > >
> > > > I'm hoping to get something to show in the next few weeks, but what
> > > > I've talked about previously is to have two things:
> > > >
> > > > 1) Control and reporting of dirty tracking via the system IOMMU
> > > >    through the iommu_domain interface exposed by iommufd
> > > >
> > > > 2) Control and reporting of dirty tracking via a VFIO migration
> > > >    capable device's internal tracking through a VFIO_DEVICE_FEATURE
> > > >    interface similar to the v2 migration interface
> > > >
> > > > The two APIs would be semantically very similar but target different
> > > > HW blocks. Userspace would be in charge of deciding which dirty
> > > > tracker to use and how to configure it.
> > > >
> > >
> > > for the 2nd option I suppose userspace is expected to retrieve
> > > dirty bits via VFIO_DEVICE_FEATURE before every iommufd unmap
> > > operation in the precopy phase, just as the dirty bitmap needs to
> > > be returned to userspace in the iommufd unmap interface in the
> > > 1st option. Correct?
> >
> > It would have to be after unmap, not before.
> >
> 
> Why? After unmap, a dirty GPA page in the unmapped range is
> meaningless to userspace since there is no backing PFN for that
> GPA.
> 

Let me make it more specific by taking vIOMMU as an example, without
nesting, i.e. QEMU generates a merged GIOVA->HPA mapping via iommufd.

The iommufd unmap is triggered when emulating a virtual IOTLB
invalidation request, *after* the guest iommu driver has cleared the
guest I/O page table for the specified GIOVA range.
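
To sketch that flow (purely illustrative; the function and type names
below are hypothetical stand-ins, not existing QEMU or iommufd
interfaces):

    /*
     * Hypothetical sketch of the vIOTLB invalidation emulation path.
     * None of these helpers are real QEMU/iommufd APIs.
     */
    static void emulate_viotlb_invalidate(VIOMMUState *s,
                                          uint64_t giova, uint64_t len)
    {
        /*
         * The guest iommu driver has already cleared GIOVA->GPA from
         * the guest I/O page table before issuing this invalidation.
         */
        iommufd_unmap(s->ioas, giova, len);   /* tears down GIOVA->HPA */
    }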

The dirty bits recorded by the device are in terms of the DMA addresses
programmed by the guest, i.e. GIOVA.

Now if QEMU pulls dirty bits from the vfio device after the iommufd
unmap, how would QEMU even know the corresponding PFN/VA for the dirty
GFNs, given that the guest I/O mapping has been cleared?

This might not be a problem for DPDK, where the mapping is managed by
the application itself, so that knowledge is not lost after the iommufd
unmap. But conceptually I feel that pulling dirty bits before destroying
the related mappings makes more sense, as translating dirty bits to the
underlying PFNs is itself a use of the mapping.
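
Concretely, that ordering would look something like the sketch below
(again all hypothetical helper names, not existing uAPIs): the
GIOVA->GPA lookup happens while QEMU's record of the mapping is still
intact, and only then is the host-side mapping torn down.

    static void emulate_viotlb_invalidate(VIOMMUState *s,
                                          uint64_t giova, uint64_t len)
    {
        uint64_t npages = len >> TARGET_PAGE_BITS;
        unsigned long *dirty = bitmap_new(npages);

        /* 1. Pull the device's dirty bits for this GIOVA range first. */
        vfio_device_report_dirty(s->vdev, giova, len, dirty);

        /*
         * 2. Translate each dirty GIOVA page to a GPA while the
         *    GIOVA->GPA record still exists, and mark the GFN dirty
         *    for migration.
         */
        for (uint64_t i = 0; i < npages; i++) {
            if (test_bit(i, dirty)) {
                uint64_t gpa = viommu_translate(s,
                                    giova + (i << TARGET_PAGE_BITS));
                migration_mark_dirty(gpa >> TARGET_PAGE_BITS);
            }
        }

        /* 3. Only now tear down the host-side GIOVA->HPA mapping. */
        iommufd_unmap(s->ioas, giova, len);

        g_free(dirty);
    }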

Thanks
Kevin



