On 30/10/2024 15:57, Shameerali Kolothum Thodi wrote:
>> On 30/10/2024 15:36, Jason Gunthorpe wrote:
>>> On Wed, Oct 30, 2024 at 11:15:02PM +0800, Zhangfei Gao wrote:
>>>> hw/vfio/migration.c
>>>>     if (vfio_viommu_preset(vbasedev)) {
>>>>         error_setg(&err, "%s: Migration is currently not supported "
>>>>                    "with vIOMMU enabled", vbasedev->name);
>>>>         goto add_blocker;
>>>>     }
>>>
>>> The viommu driver itself does not support live migration; it would
>>> need to preserve all the guest configuration and bring it all back. It
>>> doesn't know how to do that yet.
>>
>> It's more about the vfio code, not quite related to the actual hw vIOMMU.
>>
>> There are some vfio migration + vIOMMU support patches I have to follow
>> up on (v5)
>
> Are you referring to this series here?
> https://lore.kernel.org/qemu-devel/d5d30f58-31f0-1103-6956-377de34a790c@xxxxxxxxxx/T/
>
> Is that enabling migration only if the guest doesn't do any DMA translations?
>

No, it does it when the guest is using the sw vIOMMU too. To be clear: this
has nothing to do with nested IOMMU or with whether the guest is doing
(emulated) dirty tracking.

The case where the guest doesn't do DMA translation is covered by this patch:
https://lore.kernel.org/qemu-devel/20230908120521.50903-1-joao.m.martins@xxxxxxxxxx/

>> but unexpected setbacks unrelated to work delayed some of my plans for
>> qemu 9.2. I expect to resume in a few weeks. I can point you to a branch
>> until I submit (given soft-freeze is coming).
>
> Also, I think we need a mechanism for page fault handling in case the guest
> handles stage 1, plus dirty tracking for stage 1 as well.
>

I have dirty tracking emulation for the x86 iommus, but that is unrelated to
L0 live migration -- it's more for testing in the absence of recent hardware.
Even emulated page fault handling doesn't affect this unless you have to
re-map/map a new IOVA, which would also be covered in this series I think.
Unless you are talking about physical IOPF that qemu may terminate, though we
don't have such support in QEMU atm.
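
For reference, the "goto add_blocker" in the snippet quoted above lands on
QEMU's generic migration blocker machinery. Below is a minimal sketch of that
pattern, assuming the migrate_add_blocker() interface from
include/migration/blocker.h; the helper name is hypothetical, and the exact
signature and error-ownership rules vary between QEMU versions, so treat it
as illustrative rather than the literal hw/vfio/migration.c code:

    /*
     * Hypothetical helper, not the actual tree contents.
     * Uses APIs from "qapi/error.h" and "migration/blocker.h".
     */
    static int vfio_block_viommu_migration(VFIODevice *vbasedev, Error **errp)
    {
        Error *err = NULL;

        error_setg(&err, "%s: Migration is currently not supported "
                   "with vIOMMU enabled", vbasedev->name);

        /*
         * Hand 'err' to the migration core as a blocker reason; while it
         * stays registered, any attempt to start a migration fails with
         * this message. The blocker takes ownership of the reason.
         */
        return migrate_add_blocker(&err, errp);
    }

A series that makes vfio migration work with a vIOMMU would effectively skip
registering this blocker (or later drop it, via migrate_del_blocker()) for
the sw vIOMMU case.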