Re: [PATCH v4 3/6] KVM: arm64: Enable ring-based dirty memory tracking

On Tue, 04 Oct 2022 05:26:23 +0100,
Gavin Shan <gshan@xxxxxxxxxx> wrote:

[...]

> > Why another capability? Just allowing dirty logging to be enabled
> > before saving the GIC state should be enough, shouldn't it?
> > 
> 
> The GIC state is just one case where no vcpu is available to push
> dirty page information. As you mentioned, the SMMU HTTU feature could
> be another case on ARM64, and it's unclear what other architectures
> supporting the dirty ring will need. In QEMU, dirty (bitmap) logging
> is enabled at the beginning of migration and the bitmap is gradually
> synchronized into the global dirty bitmap and each RAMBlock's dirty
> bitmap, as the following backtrace shows. What QEMU probably needs to
> do is retrieve the bitmap at point (A).
> 
> Without the new capability, we would have to rely on the return values
> of the KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG ioctls to detect the
> capability; for example, -ENXIO is returned on old kernels.

Huh. Fair enough.

KVM_CAP_ALLOW_DIRTY_LOG_AND_DIRTY_RING_TOGETHER_UNTIL_THE_NEXT_TIME...


> 
>    migration_thread
>      qemu_savevm_state_setup
>        ram_save_setup
>          ram_init_all
>            ram_init_bitmaps
>              memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION)   // dirty logging enabled
>              migration_bitmap_sync_precopy(rs)
>        :
>      migration_iteration_run                                         // iteration 0
>        qemu_savevm_state_pending
>          migration_bitmap_sync_precopy
>        qemu_savevm_state_iterate
>          ram_save_iterate
>      migration_iteration_run                                        // iteration 1
>        qemu_savevm_state_pending
>          migration_bitmap_sync_precopy
>        qemu_savevm_state_iterate
>          ram_save_iterate
>      migration_iteration_run                                        // iteration 2
>        qemu_savevm_state_pending
>          migration_bitmap_sync_precopy
>        qemu_savevm_state_iterate
>          ram_save_iterate
>        :
>      migration_iteration_run                                       // iteration N
>        qemu_savevm_state_pending
>          migration_bitmap_sync_precopy
>        migration_completion
>          qemu_savevm_state_complete_precopy
>            qemu_savevm_state_complete_precopy_iterable
>              ram_save_complete
>                migration_bitmap_sync_precopy                      // A
>                <send all dirty pages>
> 
> Note: for post-copy and snapshot, I assume we need to save the dirty bitmap
>       in the last synchronization, right after the VM is stopped.

Not only must the VM be stopped, but the devices must also be made
quiescent.

> >> If all of us agree on this, I can send another kernel patch to address
> >> this. QEMU still needs more patches before the feature can be supported.
> > 
> > Yes, this will also need some work.
> > 
> >>>> 
> >>>> To me, this is just a relaxation of an arbitrary limitation, as the
> >>>> current assumption that only vcpus can dirty memory doesn't hold at
> >>>> all.
> >>> 
> >>> The initial dirty ring proposal had a per-vm ring, but after we
> >>> investigated x86 we found that all legal dirty paths have a vcpu
> >>> context (except one outlier in kvmgt, which was fixed within
> >>> itself), so we dropped the per-vm ring.
> >>> 
> >>> One thing to mention is that DMA should not count in this case,
> >>> because that's from the device's perspective; IOW, IOMMU or SMMU
> >>> dirty tracking should be reported through the device driver that
> >>> interacts with userspace, not through KVM interfaces (e.g. vfio
> >>> with VFIO_IOMMU_DIRTY_PAGES). That even includes emulated DMA like
> >>> vhost (VHOST_SET_LOG_BASE).
> >>> 
> >> 
> >> Thanks to Peter for the history of the per-vm ring. As I said above,
> >> let's use the bitmap instead if all of us agree.
> >> 
> >> If I'm correct, Marc may be talking about the SMMU, which is emulated
> >> in the host instead of QEMU. In that case, the DMA target pages are
> >> similar to the pages for the vgic/its tables: both sets of pages are
> >> invisible to QEMU.
> > 
> > No, I'm talking about an actual HW SMMU using the HTTU feature to
> > set the Dirty bit in the PTEs. And people have been working on sharing
> > SMMU and CPU page tables for some time, which would give us the one
> > true source of dirty pages.
> > 
> > In this configuration, the dirty ring mechanism will be pretty useless.
> > 
> 
> Ok. I don't know the details. Marc, is the dirty bitmap helpful in
> this case?

Yes, the dirty bitmap is useful if the source of dirty bits is
obtained from the page tables. The cost of collecting/resetting the
bits is pretty high though.
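To illustrate that cost: harvesting HTTU-style dirty bits means walking every PTE, testing the bit, and clearing it (with the required TLB maintenance, elided here), so a collection pass is O(number of PTEs) no matter how few pages are dirty, unlike a dirty ring, which costs O(number of dirtied pages). The PTE layout and bit position below are illustrative, not the real ARM64 descriptor format.

```c
#include <stdint.h>
#include <stddef.h>

#define PTE_DIRTY (1ull << 55)	/* illustrative dirty bit, not the HW one */

/*
 * One full-scan pass over the page table: copy each entry's dirty bit
 * into a bitmap and reset it. On real hardware the reset would need
 * TLB invalidation to be correct.
 */
static size_t collect_and_reset(uint64_t *ptes, size_t n,
				unsigned char *bitmap)
{
	size_t ndirty = 0;

	for (size_t i = 0; i < n; i++) {
		if (ptes[i] & PTE_DIRTY) {
			bitmap[i / 8] |= 1u << (i % 8);
			ptes[i] &= ~PTE_DIRTY;	/* reset: TLBI on real HW */
			ndirty++;
		}
	}
	return ndirty;
}
```
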

	M.

-- 
Without deviation from the norm, progress is not possible.


