Re: [PATCH 3/6] Add KVM_CAP_DIRTY_QUOTA_MIGRATION and handle vCPU page faults.

On Thu, Nov 25, 2021, Shivam Kumar wrote:
> 
> On 20/11/21 1:51 am, Shivam Kumar wrote:
> > 
> > On 20/11/21 1:36 am, Sean Christopherson wrote:
> > > Actually, if we go the route of using kvm_run to report and update the
> > > count/quota, we don't even need a capability.  Userspace can signal each
> > > vCPU to induce an exit to userspace, e.g. at the start of migration, then
> > > set the desired quota/count in vcpu->kvm_run and stuff exit_reason so
> > > that KVM updates the quota/count on the subsequent KVM_RUN.  No locking
> > > or requests needed, and userspace can reset the count at will, it just
> > > requires a signal.
> > > 
> > > It's a little weird to overload exit_reason like that, but if that's a
> > > sticking point we could add a flag in kvm_run somewhere.  Requiring an
> > > exit to userspace at the start of migration doesn't seem too onerous.
> >
> > Yes, this approach looks sound. We will explore the complexity and see
> > how we can simplify the implementation.
>
> Is it okay to define the per-vCPU dirty quota and dirty count in the kvm_run
> structure itself? It would save space and reduce the complexity of the
> implementation by a large margin.

Paolo, I'm guessing this question is directed at you since I made the suggestion :-)
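
FWIW, a rough sketch of what I have in mind for the userspace side, with
made-up names throughout (the dirty_count/dirty_quota pair, the exit reason
constant); none of this is existing UAPI, and where exactly the fields would
live in kvm_run is entirely up for grabs:

#include <pthread.h>
#include <signal.h>
#include <stdint.h>
#include <linux/kvm.h>

/* Hypothetical exit reason, value picked arbitrarily for the sketch. */
#define KVM_EXIT_DIRTY_QUOTA_EXHAUSTED  64

/* Hypothetical fields that would be added to struct kvm_run. */
struct kvm_dirty_quota {
        uint64_t dirty_count;   /* pages dirtied since the last reset */
        uint64_t dirty_quota;   /* exit to userspace once count reaches this */
};

/*
 * Migration thread, once per vCPU at the start of migration: kick the vCPU
 * out of KVM_RUN with a signal, publish the new count/quota, and stuff
 * exit_reason so that KVM reloads the pair on the subsequent KVM_RUN.  No
 * locking or request machinery needed.
 */
static void vcpu_set_dirty_quota(pthread_t vcpu_thread, struct kvm_run *run,
                                 struct kvm_dirty_quota *dq, uint64_t quota)
{
        /*
         * The vCPU thread's signal handler is a no-op; the signal only
         * forces KVM_RUN to return to userspace.
         */
        pthread_kill(vcpu_thread, SIGUSR1);

        dq->dirty_count = 0;
        dq->dirty_quota = quota;

        /*
         * Tell KVM to pick up the new quota/count before re-entering the
         * guest.
         */
        run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
}

The KVM side would just need to notice the stuffed exit_reason on the next
KVM_RUN and reload its per-vCPU copy of the quota/count.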


