On 18/11/21 11:27 pm, Sean Christopherson wrote:
On Sun, Nov 14, 2021, Shivam Kumar wrote:
+static int kvm_vm_ioctl_enable_dirty_quota_migration(struct kvm *kvm,
+ bool enabled)
+{
+ if (!KVM_DIRTY_LOG_PAGE_OFFSET)
I don't think we should force architectures to opt in. It would be trivial to
add
if (kvm_dirty_quota_is_full(vcpu)) {
vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_FULL;
r = 0;
break;
}
in the run loops of each architecture. And we can do that in incremental patches
without #ifdeffery since it's only the exiting aspect that requires arch help.
Noted. Thanks.
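To illustrate the check itself (helper and field names here are made up for the sketch; the real series defines its own), a vCPU's quota would be considered full once the pages it has dirtied reach the budget userspace set:

```c
#include <stdbool.h>

/*
 * Toy model only -- not the actual KVM code. A vCPU tracks how many
 * pages it has dirtied (dirty_counter) against the budget userspace
 * configured (dirty_quota); a quota of 0 means "no quota set".
 */
struct toy_vcpu {
	unsigned long dirty_counter;	/* pages dirtied by this vCPU */
	unsigned long dirty_quota;	/* budget set by userspace */
};

static bool toy_dirty_quota_is_full(const struct toy_vcpu *vcpu)
{
	return vcpu->dirty_quota && vcpu->dirty_counter >= vcpu->dirty_quota;
}
```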
+ return -EINVAL;
+
+ /*
+ * For now, dirty quota migration works with the dirty bitmap, so don't
+ * enable it if the dirty ring interface is enabled. In the future, dirty
+ * quota migration may work with the dirty ring interface as well.
+ */
Why does KVM care? This is a very simple concept. QEMU not using it for the
dirty ring doesn't mean KVM can't support it.
The dirty ring interface, when enabled, bypasses the path that updates the
dirty bitmap, and our current implementation depends on that path. We were
planning to make the changes required for it to work with the dirty ring
in upcoming patches, but we will explore the possibility of doing it in
the next patchset itself.
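To make the dependency concrete, here is a toy model (illustrative names and fields, not the real mark_page_dirty_in_slot()) of how the dirty path picks exactly one reporting mechanism, so any counter update hooked into the bitmap branch is never reached while the ring is enabled:

```c
#include <stddef.h>

/*
 * Sketch only -- models the structure of KVM's dirty path, where a
 * non-zero dirty_ring_size routes dirty pages to the ring and the
 * bitmap branch (where our quota counter currently lives) is skipped.
 */
struct toy_vm {
	unsigned int dirty_ring_size;	/* non-zero => dirty ring in use */
	unsigned long dirty_bitmap;	/* one word stands in for the bitmap */
	unsigned long ring_pushes;	/* entries pushed to the ring */
	unsigned long dirty_counter;	/* hypothetical quota counter */
};

static void toy_mark_page_dirty(struct toy_vm *vm, unsigned int gfn)
{
	if (vm->dirty_ring_size) {
		/* Ring path: bitmap and counter update are skipped. */
		vm->ring_pushes++;
	} else {
		vm->dirty_bitmap |= 1UL << (gfn % 64);
		vm->dirty_counter++;	/* quota accounting lives here today */
	}
}
```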
+ if (kvm->dirty_ring_size)
+ return -EINVAL;
+
+ /* Return if no change */
+ if (kvm->dirty_quota_migration_enabled == enabled)
Needs to be checked under lock.
Noted. Thanks.
+ return -EINVAL;
Probably more idiomatic to return 0 if the desired value is the current value.
Keeping in mind the case where userspace tries to enable it while a
migration is already in progress (which shouldn't happen), we are
returning -EINVAL. Please let me know if 0 still makes more sense.
+ mutex_lock(&kvm->lock);
+ kvm->dirty_quota_migration_enabled = enabled;
Needs to check vCPU creation.
In our current implementation, we use the
KVM_CAP_DIRTY_QUOTA_MIGRATION ioctl to start dirty logging (through the
dirty counter) on the kernel side. This ioctl is called each time a
migration starts, and again when it ends.
The dirty quota context of each vCPU is stored in two variables, the
dirty counter and the quota, which we allocate at vCPU creation and
free at vCPU destruction.
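A minimal sketch of that lifecycle (all names here are illustrative, not the series' actual ones): the per-vCPU quota state is allocated when the vCPU is created and freed when it is destroyed, so the KVM_CAP_DIRTY_QUOTA_MIGRATION toggle itself only has to flip a flag under the lock.

```c
#include <stdlib.h>

/* Toy per-vCPU dirty quota context, allocated at vCPU creation. */
struct toy_dirty_quota_ctx {
	unsigned long dirty_counter;	/* pages dirtied so far */
	unsigned long dirty_quota;	/* budget set by userspace */
};

struct demo_vcpu {
	struct toy_dirty_quota_ctx *quota_ctx;
};

/* Called from vCPU creation: zero-initialized context. */
static int toy_vcpu_create(struct demo_vcpu *vcpu)
{
	vcpu->quota_ctx = calloc(1, sizeof(*vcpu->quota_ctx));
	return vcpu->quota_ctx ? 0 : -1;
}

/* Called from vCPU destruction: release the context. */
static void toy_vcpu_destroy(struct demo_vcpu *vcpu)
{
	free(vcpu->quota_ctx);
	vcpu->quota_ctx = NULL;
}
```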
+ mutex_unlock(&kvm->lock);
+
+ return 0;
+}
+
int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm,
struct kvm_enable_cap *cap)
{
@@ -4305,6 +4339,9 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
}
case KVM_CAP_DIRTY_LOG_RING:
return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
+ case KVM_CAP_DIRTY_QUOTA_MIGRATION:
+ return kvm_vm_ioctl_enable_dirty_quota_migration(kvm,
+ cap->args[0]);
default:
return kvm_vm_ioctl_enable_cap(kvm, cap);
}
--
2.22.3
Thank you very much for your feedback.