Re: [PATCH v8 3/7] KVM: Support dirty ring in conjunction with bitmap

On Sun, 06 Nov 2022 21:23:13 +0000,
Gavin Shan <gshan@xxxxxxxxxx> wrote:
> 
> Hi Peter and Marc,
> 
> On 11/7/22 5:06 AM, Peter Xu wrote:
> > On Sun, Nov 06, 2022 at 08:12:22PM +0000, Marc Zyngier wrote:
> >> On Sun, 06 Nov 2022 16:22:29 +0000,
> >> Peter Xu <peterx@xxxxxxxxxx> wrote:
> >>> On Sun, Nov 06, 2022 at 03:43:17PM +0000, Marc Zyngier wrote:
> >>>>> +Note that the bitmap here is only a backup of the ring structure, and
> >>>>> +normally should only contain a very small amount of dirty pages, which
> >>>> 
> >>>> I don't think we can claim this. It is whatever amount of memory is
> >>>> dirtied outside of a vcpu context, and we shouldn't make any claim
> >>>> regarding the number of dirty pages.
> >>> 
> >>> The thing is that the current with-bitmap design assumes the two logs are
> >>> collected in different windows of the migration, while the dirty log is
> >>> only collected after the VM is stopped.  So collecting the dirty bitmap
> >>> and sending the dirty pages within it will be part of the VM downtime.
> >>>
> >>> It stops making sense if the dirty bitmap can contain a large portion
> >>> of the guest memory, because then it'll be simpler to just stop the VM,
> >>> transfer the pages, and restart on the dest node without any tracking
> >>> mechanism.
> >> 
> >> Oh, I absolutely agree that the whole vcpu dirty ring makes zero sense
> >> in general. It only makes sense if the source of the dirty pages is
> >> limited to the vcpus, which is literally a corner case. Look at any
> >> real machine, and you'll quickly realise that this isn't the case, and
> >> that DMA *is* a huge source of dirty pages.
> >> 
> >> Here, we're just lucky enough not to have much DMA tracking yet. Once
> >> that happens (and I have it from people doing the actual work that it
> >> *is* happening), you'll realise that the dirty ring story is of very
> >> limited use. So I'd rather drop anything quantitative here, as this is
> >> likely to be wrong.
> > 
> > Is it a must that arm64 tracks device DMA using the same dirty
> > tracking interface rather than VFIO or any other interface?  It's
> > definitely not the case for x86, but if it's true for arm64, could the
> > DMA be spread across all the guest pages?  If that's also true, I really
> > don't know how this will work...
> > 
> > With the current protocol we're only syncing the dirty bitmap once.  If
> > that can cover most of the guest memory, it's the same as non-live.  If we
> > sync it periodically, then it's the same as enabling dirty-log alone and
> > the rings are useless.
> > 
> 
> For the vgic/its tables, the number of dirty pages can be huge in theory, but
> it is limited in practice. So I tend to agree with Peter that the dirty ring
> should be avoided and the dirty log used instead once the DMA case is
> supported in the future. As Peter said, a small number of dirty pages in
> the bitmap is the condition for using it here. I think it makes sense to
> mention that in the document.

And again, I disagree. This API has *nothing* to do with the ITS. It
is completely general-purpose and should work with anything, because
that is what it was designed for.

The problem is that you're treating RING+BITMAP as a different thing
from BITMAP alone when it comes to non-CPU traffic. It really isn't.
We can't say "only a few pages will be dirtied", because we simply
don't know.
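
To keep the argument concrete, the combined mode boils down to
something like the sketch below. All names are placeholders rather
than QEMU code; the only point it illustrates is that whatever ends up
in the backup bitmap has to be transferred during the downtime:

/*
 * Placeholder sketch of the ring+bitmap flow, not actual VMM code.
 */
struct vm;                                      /* opaque placeholder */

extern int  ready_to_stop(struct vm *vm);
extern void harvest_vcpu_dirty_rings(struct vm *vm);
extern void sync_backup_bitmap(struct vm *vm);  /* e.g. KVM_GET_DIRTY_LOG */
extern void send_dirty_pages(struct vm *vm);
extern void stop_guest(struct vm *vm);
extern void complete_migration(struct vm *vm);

static void migrate_with_ring_and_bitmap(struct vm *vm)
{
        /* Live phase: only vcpu-dirtied pages, delivered via the rings. */
        while (!ready_to_stop(vm)) {
                harvest_vcpu_dirty_rings(vm);
                send_dirty_pages(vm);
        }

        stop_guest(vm);

        /*
         * Blackout phase: one-shot sync of the backup bitmap, i.e. the
         * pages dirtied outside of a vcpu context. Everything found
         * here adds to the downtime, which is why the combination only
         * pays off if this set is small.
         */
        sync_backup_bitmap(vm);
        send_dirty_pages(vm);

        complete_migration(vm);
}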

If you really want a quantitative argument, then say something like:

"The use of the ring+bitmap combination is only beneficial if very
little memory is dirtied by non-CPU agents. Consider using the
stand-alone bitmap API if this isn't the case."

which clearly puts the choice in the hands of the user.
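
For what it's worth, that choice looks roughly like this from
userspace. This is only a sketch: the KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP
name is the one used by this series and may still change, and the error
handling is elided:

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdbool.h>

/*
 * Sketch only: pick the tracking scheme up front, before vcpus and
 * memslots are created. Capability names follow this series.
 */
static int choose_dirty_tracking(int vm_fd, bool little_non_vcpu_dirtying,
                                 __u32 ring_bytes)
{
        if (little_non_vcpu_dirtying) {
                /* Ring for vcpu-dirtied pages... */
                struct kvm_enable_cap ring = {
                        .cap = KVM_CAP_DIRTY_LOG_RING_ACQ_REL,
                        .args = { ring_bytes },
                };
                /* ...plus the backup bitmap for non-vcpu writers. */
                struct kvm_enable_cap bitmap = {
                        .cap = KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP,
                };

                if (ioctl(vm_fd, KVM_ENABLE_CAP, &ring) ||
                    ioctl(vm_fd, KVM_ENABLE_CAP, &bitmap))
                        return -1;
                return 0;
        }

        /*
         * Otherwise rely on the stand-alone bitmap: KVM_MEM_LOG_DIRTY_PAGES
         * on each memslot plus KVM_GET_DIRTY_LOG, which covers all writers.
         */
        return 0;
}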

[...]

> How about avoiding any mention of KVM_CLEAR_DIRTY_LOG here? In this particular
> case, I don't expect QEMU to clear the dirty bitmap after it's collected.

Peter said there is undefined behaviour here. I want to understand
whether that is actually the case. QEMU is only one of the users of
this stuff; all the vendors have their own custom VMMs, and they do
things in funny ways.
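
For reference, the sequence whose behaviour needs to be nailed down is
roughly the one below. A sketch only: it assumes
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 has been enabled on the VM, and the
slot and page range are placeholders:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch of the get+clear sequence on the backup bitmap. Assumes
 * KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 was enabled; values are placeholders.
 */
static int sync_and_clear_slot(int vm_fd, __u32 slot, void *bitmap,
                               __u64 first_page, __u32 num_pages)
{
        struct kvm_dirty_log get = {
                .slot = slot,
                .dirty_bitmap = bitmap,
        };
        struct kvm_clear_dirty_log clear = {
                .slot = slot,
                .num_pages = num_pages,
                .first_page = first_page,
                .dirty_bitmap = bitmap,
        };

        /* Snapshot the (backup) bitmap for this memslot... */
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &get))
                return -1;

        /* ...then clear the harvested range so it is write-protected again. */
        return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
}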

	M.

-- 
Without deviation from the norm, progress is not possible.


