RE: [PATCH v6 00/18] IOMMUFD Dirty Tracking

> -----Original Message-----
> From: Joao Martins <joao.m.martins@xxxxxxxxxx>
> Sent: Wednesday, October 30, 2024 4:57 PM
> To: Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@xxxxxxxxxx>; Jason Gunthorpe
> <jgg@xxxxxxxxxx>; Zhangfei Gao <zhangfei.gao@xxxxxxxxxx>
> Cc: iommu@xxxxxxxxxxxxxxx; Kevin Tian <kevin.tian@xxxxxxxxx>; Lu Baolu
> <baolu.lu@xxxxxxxxxxxxxxx>; Yi Liu <yi.l.liu@xxxxxxxxx>; Yi Y Sun
> <yi.y.sun@xxxxxxxxx>; Nicolin Chen <nicolinc@xxxxxxxxxx>; Joerg Roedel
> <joro@xxxxxxxxxx>; Suravee Suthikulpanit
> <suravee.suthikulpanit@xxxxxxx>; Will Deacon <will@xxxxxxxxxx>; Robin
> Murphy <robin.murphy@xxxxxxx>; Zhenzhong Duan
> <zhenzhong.duan@xxxxxxxxx>; Alex Williamson
> <alex.williamson@xxxxxxxxxx>; kvm@xxxxxxxxxxxxxxx; Shameer Kolothum
> <shamiali2008@xxxxxxxxx>; Wangzhou (B) <wangzhou1@xxxxxxxxxxxxx>
> Subject: Re: [PATCH v6 00/18] IOMMUFD Dirty Tracking
> 
> On 30/10/2024 15:57, Shameerali Kolothum Thodi wrote:
> >> On 30/10/2024 15:36, Jason Gunthorpe wrote:
> >>> On Wed, Oct 30, 2024 at 11:15:02PM +0800, Zhangfei Gao wrote:
> >>>> hw/vfio/migration.c
> >>>>     if (vfio_viommu_preset(vbasedev)) {
> >>>>         error_setg(&err, "%s: Migration is currently not supported "
> >>>>                    "with vIOMMU enabled", vbasedev->name);
> >>>>         goto add_blocker;
> >>>>     }
> >>>
> >>> The viommu driver itself does not support live migration, it would
> >>> need to preserve all the guest configuration and bring it all back. It
> >>> doesn't know how to do that yet.
> >>
> >> It's more of vfio code, not quite related to actually hw vIOMMU.
> >>
> >> There's some vfio migration + vIOMMU support patches I have to follow up
> >> (v5)
> >
> > Are you referring this series here?
> > https://lore.kernel.org/qemu-devel/d5d30f58-31f0-1103-6956-377de34a790c@xxxxxxxxxx/T/
> >
> > Is that enabling migration only if Guest doesn’t do any DMA translations?
> >
> No, it does it when the guest is using the sw vIOMMU too. To be clear: this has
> nothing to do with nested IOMMU or with the guest doing (emulated) dirty
> tracking.

Ok. Thanks for explaining. So just to clarify: this works for Intel VT-d with
"caching-mode=on", i.e. no real two-stage setup is required as on ARM SMMUv3.
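For reference, a minimal sketch of the setup being discussed (the host PCI address and disk image path are placeholders): an emulated Intel vIOMMU with caching-mode=on forces the guest to report IOTLB invalidations to QEMU, which can then shadow the guest's IOVA mappings into the VFIO container, so no hardware nested (two-stage) translation is needed.

```shell
# Hypothetical sketch: emulated Intel vt-d with caching mode for a
# vfio-pci assigned device. caching-mode=on makes the guest flush the
# IOTLB on mapping changes, so QEMU can shadow guest stage-1 mappings.
# intel-iommu requires a split irqchip on the q35 machine type.
qemu-system-x86_64 \
    -machine q35,accel=kvm,kernel-irqchip=split \
    -m 4G -smp 4 \
    -device intel-iommu,caching-mode=on \
    -device vfio-pci,host=0000:3b:00.0 \
    -drive file=guest.qcow2,if=virtio
```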

> When the guest doesn't do DMA translation is this patch:
> 
> https://lore.kernel.org/qemu-devel/20230908120521.50903-1-joao.m.martins@xxxxxxxxxx/

Ok.

> 
> >> but unexpected set backs unrelated to work delay some of my plans for
> >> qemu 9.2. I expect to resume in few weeks. I can point you to a branch
> >> while I don't submit (provided soft-freeze is coming)
> >
> > Also, I think we need a mechanism for page fault handling in case Guest
> > handles the stage 1 plus dirty tracking for stage 1 as well.
> >
> 
> I have emulation for x86 iommus to dirty tracking, but that is unrelated to
> L0 live migration -- It's more for testing in the lack of recent hardware.
> Even emulated page fault handling doesn't affect this unless you have to
> re-map/map new IOVA, which would also be covered in this series I think.
> 
> Unless you are talking about physical IOPF that qemu may terminate, though
> we don't have such support in Qemu atm.

Yeah, I was referring to the ARM SMMUv3 cases, where we need nested SMMUv3
support for vfio-pci assignment. Another use case we have is supporting SVA
in the Guest, with hardware capable of physical IOPF.

I will take a look at your series above and see what else is required
to support ARM. Please CC me if you plan to respin or have a newer branch.
Thanks for your efforts.

Shameer