Re: [PATCH v1 0/6] support dirtyrate at the granularity of vcpu

On 2021/6/2 5:54, Peter Xu wrote:
> On Tue, Jun 01, 2021 at 01:02:45AM +0800, huangy81@xxxxxxxxxxxxxxx wrote:
>> From: Hyman Huang(黄勇) <huangy81@xxxxxxxxxxxxxxx>
>>
>> Since the Dirty Ring has recently been merged on the QEMU side, how to
>> use this feature is under consideration.
>>
>> In the context of migration, it is valuable to provide a more accurate
>> interface to track dirty memory than the existing one, so that the
>> upper layer application can make wise decisions. More importantly,
>> dirty rate info at the granularity of a vCPU makes it possible to force
>> migration to converge by imposing restrictions on vCPUs. With the
>> Dirty Ring, we can calculate the dirty rate efficiently and cheaply.
>>
>> The old interface is implemented by sampling pages: it consumes CPU
>> resources, and the larger the guest memory size becomes, the more CPU
>> it consumes; in other words, it is hard to scale. The new interface
>> has no such drawback.

> Yong,
>
> Thanks for working on this!
>
> Some high-level comments:
>
> - The layout of the patch looks a bit odd.  E.g., you introduced the new "vcpu"
>   qmp parameter in patch 3, however it's not yet implemented, meanwhile I feel
>   like you squashed mostly all the rest into patch 6.  It's okay to use a
>   single big patch, but IMHO better to not declare that flag in QMP before it's
>   working, so ideally that should be the last patch to do that.
>
>   From that POV: patch 1/2/4 look ok to be separated; perhaps squash patch
>   3/5/6 into one single patch to enable the new method as the last one?

Yeah, previously the concern was to keep each patch clear and small; however, with a commit message explaining each change, a single big patch seems fine. As you said, it's okay to use one big patch, so I'll adjust the patch set layout based on your advice.
- You used "vcpu" across the patchset to show the per-vcpu new method.  Shall
   we rename it globally to "per_vcpu" or "vcpu_based"?  A raw "vcpu" looks more
   like a struct pointer not a boolean.

Indeed; actually, the initial name of the option was "per_vcpu". :) I'll fix this.
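
For reference, with that rename a QMP invocation might look something like the snippet below (a hypothetical shape built on the existing calc-dirty-rate command; the exact argument name and type are still to be settled in v2):

    { "execute": "calc-dirty-rate",
      "arguments": { "calc-time": 1, "per-vcpu": true } }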

> - Using memory_global_dirty_log_start|stop() may not be wise too IMHO, at least
>   we need to make sure it's not during migration, otherwise we could call the
>   stop() before migration ends then that'll be a problem..

Yeah, this could be a serious problem; thanks for the timely advice.

>   Maybe we can start to make global_dirty_log a bitmask? Then we define:
>
>     GLOBAL_DIRTY_MIGRATION
>     GLOBAL_DIRTY_DIRTY_RATE
>
>   All references to global_dirty_log should mostly be untouched because any bit
>   set there should justify that global dirty logging is enabled (either for
>   migration or for dirty rate measurement).
>
>   Migration starting half-way of dirty rate measurement seems okay too even
>   taking things like init-all-set into account, afaict.. as long as dirty rate
>   code never touches the qemu dirty bitmap, but only do the accounting when
>   collecting the pages...
>
>   Feel free to think more about it on any other potential conflict with
>   migration, but in general seems working to me.

I'll apply this in the next version.
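
To double-check that I understand the idea, here is a self-contained toy sketch of the bitmask approach (illustration only, not QEMU code; the flag names follow your suggestion, and the printf() calls stand in for the real enable/disable paths):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical reason flags, named as suggested above. */
    #define GLOBAL_DIRTY_MIGRATION  (1u << 0)
    #define GLOBAL_DIRTY_DIRTY_RATE (1u << 1)

    static unsigned int global_dirty_log; /* now a bitmask, not a bool */

    static void global_dirty_log_change(unsigned int flags, bool start)
    {
        unsigned int old = global_dirty_log;

        if (start) {
            global_dirty_log |= flags;
            if (!old) {
                printf("enable dirty logging (first user)\n");
            }
        } else {
            global_dirty_log &= ~flags;
            if (old && !global_dirty_log) {
                printf("disable dirty logging (last user gone)\n");
            }
        }
    }

    int main(void)
    {
        /* Dirty rate measurement starts; migration starts half-way. */
        global_dirty_log_change(GLOBAL_DIRTY_DIRTY_RATE, true);
        global_dirty_log_change(GLOBAL_DIRTY_MIGRATION, true);
        /* Dirty rate finishes: logging stays on for migration. */
        global_dirty_log_change(GLOBAL_DIRTY_DIRTY_RATE, false);
        /* Migration completes: only now is logging really disabled. */
        global_dirty_log_change(GLOBAL_DIRTY_MIGRATION, false);
        return 0;
    }

With this, a stop() issued by the dirty rate code can never switch dirty logging off underneath a live migration.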

> - Would you consider picking up my HMP patch and let HMP work from the 1st day?
>
> - Please Cc the author of dirty rate too (Chuan Zheng <zhengchuan@xxxxxxxxxx>),
>   while I already started to do so in this email.

I'd be glad to do both of the above.

> Thanks,


Thanks, Peter!

--
Best regards,

Hyman Huang(黄勇)


