On 11/30/2010 03:47 PM, Anthony Liguori wrote:
> On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
>> On 11/30/2010 03:11 AM, Anthony Liguori wrote:
>>> BufferedFile should hit the qemu_file_rate_limit check when the socket
>>> buffer gets filled up.
>> The problem is that the file rate limit is not hit because work is
>> done elsewhere.  The rate limit can cap the bandwidth used and make QEMU
>> aware that socket operations may block (because that's what the
>> buffered file freeze/unfreeze logic does), but it cannot be used to
>> limit the _time_ spent in the migration code.
> Yes, it can, if you set the rate limit sufficiently low.
>
> The caveats are: 1) the kvm.ko interface for dirty bits doesn't scale for
> large-memory guests, so we spend a lot more CPU time walking it than we
> should, and 2) zero pages cause us to burn a lot more CPU time than we
> otherwise would because compressing them is so effective.
What's the problem with burning that CPU?  Per guest page, compressing
takes less time than sending.  Is it just an issue of qemu_mutex hold time?
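
For reference, the rate limiting in buffered_file.c is basically a per-slice
byte budget; roughly the shape below (names are illustrative, not the exact
QEMU code).  The check only sees bytes queued for the socket, so an iteration
that burns CPU scanning mostly-clean or zero pages queues almost nothing and
never trips the limit.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the byte-budget idea behind qemu_file_rate_limit(). */
struct rate_limited_file {
    uint64_t bytes_xfer;   /* bytes queued in the current time slice     */
    uint64_t xfer_limit;   /* budget per time slice (the bandwidth cap)  */
};

static bool rate_limit_exceeded(const struct rate_limited_file *f)
{
    /* Only bytes count; the CPU time spent producing them does not. */
    return f->bytes_xfer >= f->xfer_limit;
}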
> In the short term, fixing (2) by accounting zero pages as full sized
> pages should "fix" the problem.
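
One way to read that, building on the sketch above (the helpers here are
placeholders, not existing QEMU functions): charge the limiter a full
TARGET_PAGE_SIZE even when the page goes out as a tiny zero-fill record, so
the bandwidth cap also throttles the CPU spent on zero pages.

#define TARGET_PAGE_SIZE 4096   /* illustrative; the real value is per target */

extern bool page_is_all_zero(const uint8_t *page);
extern void send_zero_page_marker(struct rate_limited_file *f);
extern void send_full_page(struct rate_limited_file *f, const uint8_t *page);

static void save_page(struct rate_limited_file *f, const uint8_t *page)
{
    if (page_is_all_zero(page)) {
        send_zero_page_marker(f);        /* only a few bytes on the wire...  */
    } else {
        send_full_page(f, page);
    }
    f->bytes_xfer += TARGET_PAGE_SIZE;   /* ...but always charge a full page */
}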
> In the long term, we need a new dirty bit interface from kvm.ko that
> uses a multi-level table.  That should dramatically improve scan
> performance.
Why would a multi-level table help? (or rather, please explain what you
mean by a multi-level table).
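
If it means something like a two-level bitmap, with a small summary level
recording which chunks of the per-page dirty bitmap contain any set bits so
the scan can skip whole clean chunks, then a rough sketch (purely
illustrative) would be:

#include <stddef.h>
#include <stdint.h>

#define PAGES_PER_CHUNK (64 * 64)     /* one summary bit covers 64 bitmap words */

struct dirty_table {
    uint64_t *summary;   /* bit c set => chunk c has at least one dirty page */
    uint64_t *bitmap;    /* one bit per guest page                           */
    size_t    nr_chunks;
};

static void scan_dirty(const struct dirty_table *t, void (*send_page)(size_t pfn))
{
    for (size_t c = 0; c < t->nr_chunks; c++) {
        if (!(t->summary[c / 64] & (1ULL << (c % 64)))) {
            continue;                 /* whole chunk clean: skip 4096 page bits */
        }
        for (size_t w = 0; w < PAGES_PER_CHUNK / 64; w++) {
            uint64_t bits = t->bitmap[c * (PAGES_PER_CHUNK / 64) + w];
            while (bits) {
                int b = __builtin_ctzll(bits);
                send_page(c * PAGES_PER_CHUNK + w * 64 + b);
                bits &= bits - 1;     /* clear lowest set bit */
            }
        }
    }
}

A mostly-clean guest would then cost one summary-bit test per 4096 pages
instead of 64 word loads.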
Something we could do is divide memory into more slots and poll each
slot when we start to scan its page range.  That reduces the time
between sampling a page's dirtiness and sending it off, and reduces the
latency incurred by the sampling.  There are also non-interface-changing
ways to reduce this latency, like O(1) write protection, or using dirty
bits instead of write protection when available.
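
As a sketch of the per-slot polling idea (raw ioctl shown for brevity; error
handling and the QEMU wrappers are omitted):

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Fetch-and-clear the dirty log for one slot just before scanning that
 * slot's page range, instead of syncing all of guest memory up front. */
static int sync_slot_dirty_log(int vm_fd, unsigned int slot, void *bitmap)
{
    struct kvm_dirty_log log;

    memset(&log, 0, sizeof(log));
    log.slot = slot;
    log.dirty_bitmap = bitmap;    /* one bit per page in the slot */
    return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}

With many small slots, the window between sampling a page's dirty bit and
putting the page on the wire shrinks from all of guest memory to a single
slot.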
> We also need to implement live migration in a separate thread that
> doesn't carry qemu_mutex while it runs.
IMO that's the biggest hit currently.
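
The shape would be roughly this: hold qemu_mutex only for the short
dirty-bitmap sync and do the long copy-and-send work without it.  Function
names below are placeholders, not existing QEMU APIs, and the races between
copying a page and the guest dirtying it again are glossed over:

#include <pthread.h>
#include <stdbool.h>

extern pthread_mutex_t qemu_mutex;

extern void sync_dirty_bitmap(void);                  /* short, under the lock  */
extern bool send_dirty_pages_until_rate_limit(void);  /* long, outside the lock */

static void *migration_thread(void *opaque)
{
    bool done = false;

    (void)opaque;
    while (!done) {
        pthread_mutex_lock(&qemu_mutex);
        sync_dirty_bitmap();               /* snapshot dirty state, briefly */
        pthread_mutex_unlock(&qemu_mutex);

        /* The expensive part runs without blocking the VCPU/IO threads. */
        done = send_dirty_pages_until_rate_limit();
    }
    return NULL;
}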
--
error compiling committee.c: too many arguments to function