Re: [PATCH v8 3/7] KVM: Support dirty ring in conjunction with bitmap

On Mon, Nov 07, 2022 at 09:21:35AM +0000, Marc Zyngier wrote:
> On Sun, 06 Nov 2022 21:06:43 +0000,
> Peter Xu <peterx@xxxxxxxxxx> wrote:
> > 
> > On Sun, Nov 06, 2022 at 08:12:22PM +0000, Marc Zyngier wrote:
> > > Hi Peter,
> > > 
> > > On Sun, 06 Nov 2022 16:22:29 +0000,
> > > Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > > 
> > > > Hi, Marc,
> > > > 
> > > > On Sun, Nov 06, 2022 at 03:43:17PM +0000, Marc Zyngier wrote:
> > > > > > +Note that the bitmap here is only a backup of the ring structure, and
> > > > > > +normally should only contain a very small amount of dirty pages, which
> > > > > 
> > > > > I don't think we can claim this. It is whatever amount of memory is
> > > > > dirtied outside of a vcpu context, and we shouldn't make any claim
> > > > > regarding the number of dirty pages.
> > > > 
> > > > The thing is that the current with-bitmap design assumes the two logs
> > > > are collected in different windows of the migration: the ring is
> > > > harvested while the VM is running, while the bitmap is only collected
> > > > after the VM is stopped.  So collecting the dirty bitmap and sending
> > > > the pages it marks will be part of the VM downtime.
> > > > 
> > > > That stops making sense if the dirty bitmap can cover a large portion
> > > > of the guest memory, because then it'd be simpler to just stop the VM,
> > > > transfer the pages, and restart on the destination node without any
> > > > tracking mechanism.
> > > 
> > > Oh, I absolutely agree that the whole vcpu dirty ring makes zero sense
> > > in general. It only makes sense if the source of the dirty pages is
> > > limited to the vcpus, which is literally a corner case. Look at any
> > > real machine, and you'll quickly realise that this isn't the case, and
> > > that DMA *is* a huge source of dirty pages.
> > > 
> > > Here, we're just lucky enough not to have much DMA tracking yet. Once
> > > that happens (and I have it from people doing the actual work that it
> > > *is* happening), you'll realise that the dirty ring story is of very
> > > limited use. So I'd rather drop anything quantitative here, as this is
> > > likely to be wrong.
> > 
> > Must arm64 track device DMA with the same dirty tracking interface,
> > rather than through VFIO or some other interface?
> 
> What does it change? At the end of the day, you want a list of dirty
> pages. How you obtain it is irrelevant.
> 
> > That's definitely not the case for x86, but if it is for arm64, could
> > the DMA be spread across all of the guest pages?  If so, I really don't
> > know how this will work.
> 
> Of course, all pages can be the target of DMA. It works the same way
> it works for the ITS: you sync the state, you obtain the dirty bits,
> you move on.
> 
> And mimicking what x86 does is really not my concern (if you still
> think that arm64 is just another flavour of x86, stay tuned!  ;-).

I didn't mean it that way; I should probably stop mentioning x86. :)

I already had some sense of that from the topics at KVM Forum over the
past few years.  I'll be looking forward to whatever comes next.

> 
> > 
> > With the current protocol we only sync the dirty bitmap once.  If that
> > sync can cover most of the guest memory, it's the same as a non-live
> > migration.  If we sync it periodically instead, it's the same as enabling
> > dirty-log alone and the rings are useless.
> 
> I'm glad that you finally accept it: the rings *ARE* useless in the
> general sense. Only limited, CPU-only workloads can make any use of
> the current design. This probably covers a large proportion of what
> the cloud vendors do, but this doesn't work for general situations
> where you have a stream of dirty pages originating outside of the
> CPUs.

The ring itself is really not the thing to blame; IMHO it's a good attempt
at decoupling the dirty tracking overhead from the guest memory size in
KVM.  It may not be perfect, but it can still serve some of the goals: at
least it gives the user app per-vcpu dirty information, and since there
are ring-full events we can do more than before, like the per-vcpu
throttling that China Telecom does on top of the ring structures.
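
To make that a bit more concrete, a rough userspace sketch (not code from
this series; mark_page_dirty() and throttle_vcpu() are placeholder hooks,
and error handling plus the arm64 ACQ_REL ordering variant are left out)
could look like:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Placeholder hooks -- whatever the VMM does with the information. */
void mark_page_dirty(uint32_t slot, uint64_t offset);
void throttle_vcpu(void);

struct vcpu_ring {
    struct kvm_dirty_gfn *gfns; /* mmap()ed from the vcpu fd at page
                                 * offset KVM_DIRTY_LOG_PAGE_OFFSET */
    uint32_t size;              /* number of entries, power of two */
    uint32_t next;              /* harvest cursor */
};

/* Collect what this vcpu has published, then let KVM recycle the slots. */
static int harvest_one_ring(int vm_fd, struct vcpu_ring *ring)
{
    int count = 0;

    for (;;) {
        struct kvm_dirty_gfn *e = &ring->gfns[ring->next & (ring->size - 1)];

        if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY))
            break;

        /* e->slot is (as_id << 16) | slot id, e->offset the page offset. */
        mark_page_dirty(e->slot, e->offset);

        e->flags = KVM_DIRTY_GFN_F_RESET;   /* hand the entry back to KVM */
        ring->next++;
        count++;
    }

    ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
    return count;
}

/* In the vcpu thread: a full ring shows up as KVM_EXIT_DIRTY_RING_FULL,
 * which doubles as a natural throttling point for that specific vcpu. */
static void handle_dirty_ring_full(int vm_fd, struct vcpu_ring *ring)
{
    harvest_one_ring(vm_fd, ring);
    throttle_vcpu();    /* e.g. sleep proportionally to the dirty rate */
}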

But I agree it's not a generic enough solution.  Hopefully it will still
cover some use cases, so it's not completely pointless.
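
For reference, the flow I have in mind for the with-bitmap mode is roughly
the sketch below (against the uapi this series proposes; harvest_all_rings()
stands in for the per-vcpu harvest above and error handling is dropped):
the rings are reaped while the VM runs, and the per-slot bitmap is fetched
exactly once, after the VM has been stopped, to pick up pages dirtied
without a vcpu context, e.g. the ITS tables saved via
KVM_DEV_ARM_ITS_SAVE_TABLES.

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

void harvest_all_rings(int vm_fd);  /* per-vcpu harvest as sketched above */

/* Enable the ring plus the backup bitmap, early, before vcpus and
 * memslots exist. */
static void enable_ring_with_bitmap(int vm_fd, uint64_t ring_bytes)
{
    struct kvm_enable_cap cap = { 0 };

    cap.cap = KVM_CAP_DIRTY_LOG_RING;    /* or ..._RING_ACQ_REL on arm64 */
    cap.args[0] = ring_bytes;
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

    memset(&cap, 0, sizeof(cap));
    cap.cap = KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP;
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/* After the VM is stopped: one final ring harvest, then a single bitmap
 * sync per memslot.  Everything found here is sent during the downtime. */
static void final_sync(int vm_fd, int nr_slots, void **bitmaps)
{
    harvest_all_rings(vm_fd);

    for (int slot = 0; slot < nr_slots; slot++) {
        struct kvm_dirty_log log = {
            .slot = slot,
            .dirty_bitmap = bitmaps[slot],
        };

        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    }
}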

Thanks,

-- 
Peter Xu



