Re: [PATCH RESEND v2 08/17] KVM: X86: Implement ring-based dirty memory tracking

On Wed, Jan 08, 2020 at 06:41:06PM +0100, Paolo Bonzini wrote:
> On 08/01/20 16:52, Peter Xu wrote:
> > here, which is still a bit tricky as a way to work around the kvmgt issue.
> > 
> > Now we still have the waitqueue but it'll only be used for
> > no-vcpu-context dirtyings, so:
> > 
> > - For no-vcpu-context: the thread could wait in the waitqueue if it
> >   makes vcpu0's ring soft-full (note, previously it was hard-full,
> >   so here we make it easier to wait, which makes sure the ring never
> >   goes hard-full)
> > 
> > - For with-vcpu-context: we should never wait, guaranteed by the fact
> >   that KVM_RUN will now return if that vcpu's ring is soft-full, and
> >   the above waitqueue will make sure even vcpu0's ring won't be
> >   filled up by kvmgt
> > 
> > Again, this is still a workaround for kvmgt and I think it should not
> > be needed after the refactoring.  It's just a way to avoid depending
> > on that work, so this should work even with the current kvmgt.
> 
> The kvmgt patches were posted; you could just include them in your
> next series and clean everything up.  You can get them at
> https://patchwork.kernel.org/cover/11316219/.

Good to know!

Maybe I'll simply drop all the redundant parts from the dirty ring
series, assuming that series is there?  These patchsets should not
overlap with each other, so it looks more like an ordering constraint
for merging.
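To make the soft-full behaviour above a bit more concrete, here is a
rough sketch of what I have in mind.  The layout and names (dirty_ring,
soft_limit, KVM_REQ_DIRTY_RING_SOFT_FULL, and so on) are made up for
illustration only, not the actual symbols in the series:

/*
 * Rough sketch only, not the actual patch.  The struct layout and the
 * names below are invented for illustration; the real series defines
 * its own.
 */
#include <linux/wait.h>
#include <linux/kvm_host.h>

struct dirty_ring {
	u32 size;                /* total slots in the ring            */
	u32 used;                /* entries not yet harvested          */
	u32 soft_limit;          /* e.g. size minus some headroom      */
	wait_queue_head_t waitq; /* only no-vcpu-context writers sleep */
};

static bool dirty_ring_soft_full(struct dirty_ring *ring)
{
	return ring->used >= ring->soft_limit;
}

/*
 * With-vcpu-context: never sleep.  Push the gfn, and if the ring just
 * crossed the soft limit, request an exit so the next KVM_RUN returns
 * to userspace to harvest the ring.
 */
static void dirty_ring_push_vcpu(struct kvm_vcpu *vcpu,
				 struct dirty_ring *ring, u64 gfn)
{
	/* ... write gfn into the ring and bump ring->used ... */
	if (dirty_ring_soft_full(ring))
		kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
}

/*
 * No-vcpu-context (the kvmgt case): route to vcpu0's ring, sleeping
 * first if that ring is already soft-full, so it can never be pushed
 * to hard-full from outside vcpu context.
 */
static int dirty_ring_push_no_vcpu(struct dirty_ring *vcpu0_ring, u64 gfn)
{
	int ret = wait_event_interruptible(vcpu0_ring->waitq,
					   !dirty_ring_soft_full(vcpu0_ring));
	if (ret)
		return ret;
	/* ... write gfn into the ring as above ... */
	return 0;
}

/* Called when userspace harvests/resets the ring: unblock writers. */
static void dirty_ring_harvested(struct dirty_ring *ring)
{
	wake_up(&ring->waitq);
}

The waitqueue is only ever touched on the no-vcpu-context path, so the
vcpu threads themselves never sleep on it.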

Thanks,

-- 
Peter Xu



