On Fri, 19 Aug 2022 01:55:57 +0100,
Gavin Shan <gshan@xxxxxxxxxx> wrote:
> 
> Ring-based dirty memory tracking has been available and enabled on
> x86 for a while. The feature is beneficial when the number of dirty
> pages is small in a checkpointing system or live migration scenario.
> More details can be found in fb04a1eddb1a ("KVM: X86: Implement
> ring-based dirty memory tracking").
> 
> This enables ring-based dirty memory tracking on ARM64. It's notable
> that no extra reserved ring entries are needed on ARM64 because huge
> pages are always split into base pages when page dirty tracking is
> enabled.

Can you please elaborate on this? Adding a per-CPU ring of course
results in extra memory allocation, so there must be a subtle
x86-specific detail that I'm not aware of...

> 
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> ---
>  Documentation/virt/kvm/api.rst    | 2 +-
>  arch/arm64/include/uapi/asm/kvm.h | 1 +
>  arch/arm64/kvm/Kconfig            | 1 +
>  arch/arm64/kvm/arm.c              | 8 ++++++++
>  4 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index abd7c32126ce..19fa1ac017ed 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -8022,7 +8022,7 @@ regardless of what has actually been exposed through the CPUID leaf.
>  8.29 KVM_CAP_DIRTY_LOG_RING
>  ---------------------------
>  
> -:Architectures: x86
> +:Architectures: x86, arm64
>  :Parameters: args[0] - size of the dirty log ring
>  
>  KVM is capable of tracking dirty memory using ring buffers that are
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 3bb134355874..7e04b0b8d2b2 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -43,6 +43,7 @@
>  #define __KVM_HAVE_VCPU_EVENTS
>  
>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
> +#define KVM_DIRTY_LOG_PAGE_OFFSET 64

For context, the documentation says:

<quote>
- if KVM_CAP_DIRTY_LOG_RING is available, a number of pages at
  KVM_DIRTY_LOG_PAGE_OFFSET * PAGE_SIZE. [...]
</quote>

What is the reason for picking this particular value?

> 
>  #define KVM_REG_SIZE(id) \
>  	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index 815cc118c675..0309b2d0f2da 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -32,6 +32,7 @@ menuconfig KVM
>  	select KVM_VFIO
>  	select HAVE_KVM_EVENTFD
>  	select HAVE_KVM_IRQFD
> +	select HAVE_KVM_DIRTY_RING
>  	select HAVE_KVM_MSI
>  	select HAVE_KVM_IRQCHIP
>  	select HAVE_KVM_IRQ_ROUTING
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 986cee6fbc7f..3de6b9b39db7 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -866,6 +866,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		if (!ret)
>  			ret = 1;
>  
> +		/* Force vcpu exit if its dirty ring is soft-full */
> +		if (unlikely(vcpu->kvm->dirty_ring_size &&
> +			     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> +			trace_kvm_dirty_ring_exit(vcpu);
> +			ret = 0;
> +		}
> +

Why can't this be moved to kvm_vcpu_exit_request() instead? I would
also very much like the check to be made a common helper shared with
x86.

A seemingly better approach would be to make this a request on dirty
log insertion, and avoid the whole "check the log size" on every run,
which adds pointless overhead to unsuspecting users (aka everyone).

Thanks,

	M.
-- 
Without deviation from the norm, progress is not possible.
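
As background for the KVM_DIRTY_LOG_PAGE_OFFSET discussion above, here is
a rough userspace sketch of how the offset is consumed: each vcpu's dirty
ring is mmapped from the vcpu fd at KVM_DIRTY_LOG_PAGE_OFFSET * PAGE_SIZE
and then harvested entry by entry. This is not code from the series;
map_dirty_ring(), harvest_dirty_ring(), the ring-size handling and the
plain flag reads (real code wants load-acquire/store-release) are
illustrative assumptions.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/*
 * Map one vcpu's dirty ring; ring_bytes is the size that was enabled
 * on the VM via KVM_CAP_DIRTY_LOG_RING, and the number of entries is
 * ring_bytes / sizeof(struct kvm_dirty_gfn).
 */
static struct kvm_dirty_gfn *map_dirty_ring(int vcpu_fd, size_t ring_bytes,
					    long page_size)
{
	void *ring = mmap(NULL, ring_bytes, PROT_READ | PROT_WRITE,
			  MAP_SHARED, vcpu_fd,
			  KVM_DIRTY_LOG_PAGE_OFFSET * page_size);

	return ring == MAP_FAILED ? NULL : ring;
}

/*
 * Walk published entries, mark them for reset and advance the cursor.
 * KVM_RESET_DIRTY_RINGS on the VM fd then reclaims the reset entries.
 */
static void harvest_dirty_ring(struct kvm_dirty_gfn *ring,
			       uint32_t nr_entries, uint32_t *cursor)
{
	for (;;) {
		struct kvm_dirty_gfn *e = &ring[*cursor % nr_entries];

		if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY))
			break;

		printf("slot %u, gfn offset %llu is dirty\n",
		       e->slot, (unsigned long long)e->offset);

		e->flags = KVM_DIRTY_GFN_F_RESET;
		(*cursor)++;
	}
}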
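
And a rough sketch of the request-based alternative floated above, in case
it helps visualise it. Nothing here exists in the series as posted:
KVM_REQ_DIRTY_RING_SOFT_FULL and both helpers are made-up names, the
request number is arbitrary, and the wiring into the per-arch request
handling is left out.

/*
 * Hypothetical request bit; it would sit next to the other KVM_REQ_*
 * definitions in include/linux/kvm_host.h (value arbitrary here).
 */
#define KVM_REQ_DIRTY_RING_SOFT_FULL	6

/*
 * Producer side: raised where the dirty entry is pushed, so the cost
 * is only paid when the ring actually approaches soft-full.
 */
static void kvm_dirty_ring_push_and_request(struct kvm_vcpu *vcpu,
					    u32 slot, u64 offset)
{
	kvm_dirty_ring_push(&vcpu->dirty_ring, slot, offset);

	if (kvm_dirty_ring_soft_full(&vcpu->dirty_ring))
		kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
}

/*
 * Consumer side: a common helper callable from the existing
 * request-checking paths on both architectures, instead of an
 * open-coded ring-size test on every KVM_RUN.
 */
static bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
{
	if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
		trace_kvm_dirty_ring_exit(vcpu);
		return true;	/* caller should exit to userspace */
	}

	return false;
}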