The dirty and non-dirty pages are checked one by one in vl.c. Since we believe that most of the memory is not dirty, checking pages in multiple-page units should be much faster than checking them one by one.

We expect mostly two kinds of situations. In one, almost all pages are clean, because only a small number of pages become dirty between each round of dirty bitmap checking. In the other, all pages are dirty, because the whole bitmap is marked dirty at the beginning of migration.

To confirm this expectation, we evaluated the effect of this patch: we compared the runtime of the original ram_save_remaining() with that of ram_save_remaining() using the functions introduced by this patch.

Test environment:
  CPU: 4x Intel Xeon Quad Core 2.66GHz
  Mem size: 6GB
  kvm version: 2.6.31-17-server
  qemu version: commit ed880109f74f0a4dd5b7ec09e6a2d9ba4903d9a5
  Host OS: Ubuntu 9.10 (kernel 2.6.31)
  Guest OS: Debian GNU/Linux lenny (kernel 2.6.26)
  Guest Mem size: 512MB

Experimental conditions:
  Cond1: Guest OS periodically makes 256MB of contiguous dirty pages.
  Cond2: Guest OS periodically makes 256MB of dirty and non-dirty pages in turn.
  Cond3: Guest OS reads a 3GB file, which is bigger than memory.
  Cond4: Guest OS reads/writes a 3GB file, which is bigger than memory.

Experimental results:
  Cond1: 8~16 times speedup
  Cond2: 3~4 times slowdown
  Cond3: 8~16 times speedup
  Cond4: 2~16 times speedup

# The runtime of ram_save_remaining() varies with the number of remaining
# dirty pages.
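The word-wise check described above can be sketched as follows. This is a minimal illustration of the idea, not the actual patch: the function name count_dirty_pages and the flat bitmap layout are hypothetical, and the real QEMU code operates on its own dirty-tracking structures.

```c
#include <stdint.h>

/* Hypothetical sketch: count dirty pages by scanning the dirty bitmap
 * one unsigned long at a time instead of one bit (page) at a time.
 * Words that are all-zero (all pages clean) or all-ones (all pages
 * dirty) are handled without inspecting individual bits, which covers
 * the two common situations described above. */
static int count_dirty_pages(const unsigned long *bitmap, int nr_pages)
{
    const int bits = sizeof(unsigned long) * 8;
    int count = 0;

    for (int i = 0; i < nr_pages / bits; i++) {
        unsigned long word = bitmap[i];
        if (word == 0) {
            continue;                /* fast path: all pages clean */
        } else if (word == ~0UL) {
            count += bits;           /* fast path: all pages dirty */
        } else {
            for (int b = 0; b < bits; b++) {  /* slow path: mixed word */
                count += (word >> b) & 1;
            }
        }
    }
    return count;
}
```

The slowdown seen in Cond2 matches this sketch: when dirty and clean pages alternate, most words are mixed, so the per-bit slow path runs in addition to the per-word test.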