On 12/02/2010 03:31 AM, Takuya Yoshikawa wrote:
> Thanks for the answers Avi, Juan,
>
> Some FYI (not about the bottleneck):
>
> On Wed, 01 Dec 2010 14:35:57 +0200 Avi Kivity <avi@xxxxxxxxxx> wrote:
> > > > - how many dirty pages do we have to care?
> > >
> > > default values and assuming 1Gigabit ethernet for ourselves ~9.5MB of
> > > dirty pages to have only 30ms of downtime.
> >
> > 1Gb/s * 30ms ≈ 100MB/s * 30ms = 3MB.
> > 3MB / 4KB/page ≈ 750 pages.
>
> Then, KVM-side processing is near the theoretical goal!
>
> In my framebuffer test, I tested the nr_dirty_pages/npages = 576/4096 case
> at a rate of 20 updates/s (1 update per 50ms).  With the rmap optimization,
> write protection took only 46,718 TSC cycles.
Yes, using rmap to drive write protection with sparse dirty bitmaps really helps.
> Bitmap copy was not a problem, of course.  The display was working anyway
> at this rate!
>
> My guess is that with fewer than 1,000 dirty pages,
> kvm_vm_ioctl_get_dirty_log() can be processed within 200us or so even for
> a large RAM slot.
>
>  - The rmap optimization depends mainly on nr_dirty_pages, not on npages.
>
> Avi, can you guess the corresponding property of O(1) write protection?
> I want to test the rmap optimization taking these issues into account.
I think we should use O(1) write protection only if there is a large number of dirty pages. With a small number, using rmap guided by the previous dirty bitmap is faster.
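Roughly, "rmap guided by the previous dirty bitmap" means walking only the
gfns whose bits were set in the last dirty log and write protecting them
through their rmap chains, so the cost scales with nr_dirty_pages.  A sketch
only, not the actual implementation -- write_protect_gfn() and the exact
field names are made up:

/*
 * Sketch only, not real KVM code.  Assumes a hypothetical helper
 * write_protect_gfn() that walks the gfn's rmap chain and clears the
 * writable bit in every spte mapping it.
 */
static void rmap_write_protect_dirty(struct kvm *kvm,
                                     struct kvm_memory_slot *slot,
                                     const unsigned long *prev_dirty_bitmap)
{
        unsigned long i;

        /* cost is O(nr_dirty_pages), independent of slot->npages */
        for_each_set_bit(i, prev_dirty_bitmap, slot->npages)
                write_protect_gfn(kvm, slot, slot->base_gfn + i);
}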
So, under normal operation, where only the framebuffer is logged, we'd use rmap write protection; when enabling logging for live migration we'd use O(1) write protection, and after a few iterations, when the number of dirty pages drops, we'd switch back to rmap write protection.
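The selection policy would be something like the sketch below (pure sketch,
not existing code; the threshold is arbitrary and the nr_dirty_pages
bookkeeping would have to be fed back from GET_DIRTY_LOG):

/* Pure sketch, not existing code.  Threshold value is arbitrary. */
#define RMAP_WP_THRESHOLD       1024

static bool want_o1_write_protect(unsigned long nr_dirty_pages)
{
        /*
         * Framebuffer logging and the late migration iterations dirty
         * few pages -> rmap write protection; the first migration
         * iterations dirty many pages -> O(1) write protection of the
         * whole slot.
         */
        return nr_dirty_pages > RMAP_WP_THRESHOLD;
}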
> Of course, Kemari has to continue synchronization, and may see more dirty
> pages.  This will be a future task!
There's yet another option: using dirty bits instead of write protection. Or maybe using write protection in the upper page tables and dirty bits at the lowest level.
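With dirty bits, GET_DIRTY_LOG would scan sptes instead of taking write
faults.  Sketch only -- for_each_slot_spte() is a made-up iterator, and
SPTE_D assumes the EPT dirty bit (bit 9):

/*
 * Sketch of the dirty-bit alternative, not existing code.
 * for_each_slot_spte() is a hypothetical iterator over the sptes
 * backing a memslot.
 */
#define SPTE_D  (1ULL << 9)

static void harvest_dirty_bits(struct kvm_memory_slot *slot,
                               unsigned long *dirty_bitmap)
{
        gfn_t gfn;
        u64 *sptep;

        for_each_slot_spte(slot, gfn, sptep) {
                if (*sptep & SPTE_D) {
                        __set_bit(gfn - slot->base_gfn, dirty_bitmap);
                        *sptep &= ~SPTE_D;      /* needs a TLB flush afterwards */
                }
        }
}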
--
error compiling committee.c: too many arguments to function