Re: migrate_set_downtime bug


 



On Tue, Sep 29, 2009 at 06:36:57PM +0200, Dietmar Maurer wrote:
> > Also, if this is really the case (buffered), then the bandwidth capping
> > part
> > of migration is also wrong.
> > 
> > Have you compared the reported bandwidth to your actual bandwidth? I
> > suspect
> > the source of the problem can be that we're currently ignoring the time
> > we take
> > to transfer the state of the devices, and maybe it is not negligible.
> > 
> 
> I have a 1GB network (e1000 card), and get values like bwidth=0.98 - which is much too high.
The main reason for not using the whole migration time is that it can lead to values
that are not very helpful in situations where the network load changes too much.

Since the problem you pinpointed does exist, I would suggest measuring the average load over the last,
say, 10 iterations. How would that work for you?
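
For illustration only, here is a minimal sketch of that idea: keep per-iteration
bandwidth samples in a small ring buffer and average over the last 10 of them,
so one figure derived from the whole migration time (including device-state
transfer) does not skew the estimate. The names (bwidth_sample, bwidth_average,
BWIDTH_WINDOW) are hypothetical and not taken from the QEMU source.

/*
 * Sketch: sliding-window bandwidth estimate over the last N iterations.
 * Not QEMU code; names and structure are illustrative assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define BWIDTH_WINDOW 10            /* average over the last 10 iterations */

static double bwidth_window[BWIDTH_WINDOW];
static int bwidth_count;            /* samples recorded so far (<= window) */
static int bwidth_next;             /* ring-buffer write position */

/* Record one iteration: bytes transferred and the time it took (seconds). */
static void bwidth_sample(uint64_t bytes, double seconds)
{
    if (seconds <= 0.0) {
        return;                     /* ignore degenerate samples */
    }
    bwidth_window[bwidth_next] = (double)bytes / seconds;
    bwidth_next = (bwidth_next + 1) % BWIDTH_WINDOW;
    if (bwidth_count < BWIDTH_WINDOW) {
        bwidth_count++;
    }
}

/* Average bandwidth (bytes/s) over the samples currently in the window. */
static double bwidth_average(void)
{
    double sum = 0.0;

    if (bwidth_count == 0) {
        return 0.0;
    }
    for (int i = 0; i < bwidth_count; i++) {
        sum += bwidth_window[i];
    }
    return sum / bwidth_count;
}

int main(void)
{
    /* Fake samples: 12 iterations of ~100 MB moved in varying times. */
    for (int i = 0; i < 12; i++) {
        bwidth_sample(100 * 1024 * 1024, 0.8 + 0.05 * (i % 4));
    }
    printf("estimated bandwidth: %.1f MB/s\n",
           bwidth_average() / (1024.0 * 1024.0));
    return 0;
}

The window keeps the estimate responsive to recent network conditions while
smoothing out a single unusually slow or fast iteration.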
