Re: [PATCH 3/6] Add KVM_CAP_DIRTY_QUOTA_MIGRATION and handle vCPU page faults.

On 20/11/21 1:51 am, Shivam Kumar wrote:

On 20/11/21 1:36 am, Sean Christopherson wrote:
On Sat, Nov 20, 2021, Shivam Kumar wrote:
On 18/11/21 11:27 pm, Sean Christopherson wrote:
+        return -EINVAL;
Probably more idiomatic to return 0 if the desired value is the current value.
Keeping in mind the case where userspace tries to enable it while a
migration is already in progress (which shouldn't happen), we are returning
-EINVAL. Please let me know if 0 still makes more sense.
If the semantics are not "enable/disable", but rather "(re)set the quota",
then it makes sense to allow changing the quota arbitrarily.
I agree that the semantics are not apt. Will modify it. Thanks.
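For reference, a minimal sketch of the "(re)set the quota" semantics
(based on the handler in this patch, error handling trimmed; purely
illustrative, not the final code) could be:

    case KVM_CAP_DIRTY_QUOTA_MIGRATION:
            /*
             * Treat the cap as "(re)set the quota" rather than strict
             * enable/disable: applying the same or a new value is not
             * an error, so just update it and return 0.
             */
            mutex_lock(&kvm->lock);
            kvm->dirty_quota_migration_enabled = cap->args[0];
            mutex_unlock(&kvm->lock);
            return 0;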

+    mutex_lock(&kvm->lock);
+    kvm->dirty_quota_migration_enabled = enabled;
Needs to check vCPU creation.
In our current implementation, we are using the
KVM_CAP_DIRTY_QUOTA_MIGRATION ioctl to start dirty logging (through the
dirty counter) on the kernel side. This ioctl is called each time a new
migration starts, and again when it ends.
Ah, and from the cover letter discussion, you want the count and quota to be
reset when a new migration occurs.  That makes sense.

Actually, if we go the route of using kvm_run to report and update the
count/quota, we don't even need a capability.  Userspace can signal each
vCPU to induce an exit to userspace, e.g. at the start of migration, then
set the desired quota/count in vcpu->kvm_run and stuff exit_reason so that
KVM updates the quota/count on the subsequent KVM_RUN.  No locking or
requests needed, and userspace can reset the count at will, it just
requires a signal.

It's a little weird to overload exit_reason like that, but if that's a sticking point we could add a flag in kvm_run somewhere.  Requiring an exit to userspace
at the start of migration doesn't seem too onerous.
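Roughly, the KVM side could look something like this (the field names and
the exit reason below are made up for illustration, nothing that exists
today):

    /*
     * Hypothetical: kvm_run grows two fields, dirty_quota (written by
     * userspace) and dirty_count (updated by KVM as pages are dirtied).
     */
    static bool dirty_quota_exhausted(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *run = vcpu->run;

            return run->dirty_quota &&
                   ++run->dirty_count >= run->dirty_quota;
    }

    /*
     * The caller in the dirty-tracking path would then set a (made-up)
     * exit reason, e.g. run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED,
     * and return to userspace.  On the next KVM_RUN, KVM simply reads
     * whatever quota/count userspace left in kvm_run.
     */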
Yes, this path looks flaw-free. We will explore the complexity and how we
can simplify its implementation.

Is it okay to define the per-vcpu dirty quota and dirty count in the
kvm_run structure itself? It can save space and reduce the complexity of
the implementation by a large margin.
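For example, the per-vCPU userspace flow at the start of a migration could
be as simple as the following (illustrative only; vcpu_thread, vcpu_fd,
resume_signal and new_quota are placeholder names, and the quota/count
fields are the hypothetical kvm_run fields discussed above):

    /* run points to the vCPU's mmap'ed struct kvm_run. */
    pthread_kill(vcpu_thread, resume_signal); /* kick the vCPU out of KVM_RUN */
    /* ...wait until the vCPU thread has actually exited to userspace... */
    run->dirty_count = 0;                     /* reset the per-vCPU count     */
    run->dirty_quota = new_quota;             /* (re)set the quota            */
    ioctl(vcpu_fd, KVM_RUN, 0);               /* KVM picks up the new values  */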


