On Sun, Nov 14, 2021, Shivam Kumar wrote:
> +static int kvm_vm_ioctl_enable_dirty_quota_migration(struct kvm *kvm,
> +						      bool enabled)
> +{
> +	if (!KVM_DIRTY_LOG_PAGE_OFFSET)

I don't think we should force architectures to opt in.  It would be trivial to
add

	if (kvm_dirty_quota_is_full(vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_FULL;
		r = 0;
		break;
	}

in the run loops of each architecture.  And we can do that in incremental
patches without #ifdeffery since it's only the exiting aspect that requires
arch help.

> +		return -EINVAL;
> +
> +	/*
> +	 * For now, dirty quota migration works with dirty bitmap so don't
> +	 * enable it if dirty ring interface is enabled. In future, dirty
> +	 * quota migration may work with dirty ring interface as well.
> +	 */

Why does KVM care?  This is a very simple concept.  QEMU not using it for the
dirty ring doesn't mean KVM can't support it.

> +	if (kvm->dirty_ring_size)
> +		return -EINVAL;
> +
> +	/* Return if no change */
> +	if (kvm->dirty_quota_migration_enabled == enabled)

Needs to be checked under lock.

> +		return -EINVAL;

Probably more idiomatic to return 0 if the desired value is the current value.

> +	mutex_lock(&kvm->lock);
> +	kvm->dirty_quota_migration_enabled = enabled;

Needs to check vCPU creation.

> +	mutex_unlock(&kvm->lock);
> +
> +	return 0;
> +}
> +
>  int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm,
>  					          struct kvm_enable_cap *cap)
>  {
> @@ -4305,6 +4339,9 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
>  	}
>  	case KVM_CAP_DIRTY_LOG_RING:
>  		return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
> +	case KVM_CAP_DIRTY_QUOTA_MIGRATION:
> +		return kvm_vm_ioctl_enable_dirty_quota_migration(kvm,
> +							         cap->args[0]);
>  	default:
>  		return kvm_vm_ioctl_enable_cap(kvm, cap);
>  	}
> --
> 2.22.3
>
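
Putting the above together, I'm thinking something along these lines.  This is
completely untested and only a sketch: it assumes the existing created_vcpus
rule (reject the cap once any vCPU has been created) is the behavior we want,
and it reuses the dirty_quota_migration_enabled field from your patch.

	static int kvm_vm_ioctl_enable_dirty_quota_migration(struct kvm *kvm,
							      bool enabled)
	{
		int r = 0;

		mutex_lock(&kvm->lock);

		/* Nothing to do if the desired value is already in effect. */
		if (kvm->dirty_quota_migration_enabled == enabled)
			goto out;

		/* Don't allow flipping the setting after vCPUs are created. */
		if (kvm->created_vcpus) {
			r = -EINVAL;
			goto out;
		}

		kvm->dirty_quota_migration_enabled = enabled;
	out:
		mutex_unlock(&kvm->lock);
		return r;
	}

That keeps the read and the write of dirty_quota_migration_enabled under
kvm->lock, returns 0 for a no-op request, and fails only if userspace tries to
change the setting after vCPU creation (assuming vCPUs consume the flag when
they are created).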