On 02/05/2010 12:18 PM, OHMURA Kei wrote:
> dirty-bitmap-traveling is carried out by byte size in qemu-kvm.c.
> But we think that dirty-bitmap-traveling by long size is faster than by byte
> size, especially when most of memory is not dirty.
>
>
> +
> +static int kvm_get_dirty_pages_log_range_by_long(unsigned long start_addr,
> +                                                 unsigned char *bitmap,
> +                                                 unsigned long offset,
> +                                                 unsigned long mem_size)
> +{
> +    unsigned int i;
> +    unsigned int len;
> +    unsigned long *bitmap_ul = (unsigned long *)bitmap;
> +
> +    /* bitmap-traveling by long size is faster than by byte size
> +     * especially when most of memory is not dirty.
> +     * bitmap should be long-size aligned for traveling by long.
> +     */
> +    if (((unsigned long)bitmap & (TARGET_LONG_SIZE - 1)) == 0) {

Since we allocate the bitmap, we can be sure that it is aligned on a long
boundary (qemu_malloc() should guarantee that).  So you can eliminate the
fallback.

> +        len = ((mem_size / TARGET_PAGE_SIZE) + TARGET_LONG_BITS - 1) /
> +              TARGET_LONG_BITS;
> +        for (i = 0; i < len; i++)
> +            if (bitmap_ul[i] != 0)
> +                kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
> +                    (i + 1) * TARGET_LONG_SIZE, bitmap, offset);

Better to just use the original loop here (since we don't need the function
as a fallback).

> +    /*
> +     * We will check the remaining dirty-bitmap,
> +     * when the mem_size is not a multiple of TARGET_LONG_SIZE.
> +     */
> +    if ((mem_size & (TARGET_LONG_SIZE - 1)) != 0) {
> +        len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
> +        kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
> +            len, bitmap, offset);
> +    }

Seems like the bitmap size is aligned as well (allocated using BITMAP_SIZE,
which aligns using HOST_LONG_BITS), so this is unnecessary too.

--
error compiling committee.c: too many arguments to function
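
For illustration, here is a minimal, self-contained sketch of the kind of
word-wise dirty-bitmap traveling discussed above.  It is not the actual
qemu-kvm code: PAGE_SHIFT, mark_page_dirty() and scan_dirty_bitmap() are
made-up stand-ins for the TARGET_PAGE_SIZE / dirty-page bookkeeping
machinery referenced in the quoted patch.

/* Sketch only: scan the bitmap one unsigned long at a time, skip
 * all-zero words, and only walk individual bits in non-zero words. */
#include <stdio.h>
#include <string.h>
#include <limits.h>

#define PAGE_SHIFT 12                              /* illustrative page size: 4 KiB */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static void mark_page_dirty(unsigned long addr)
{
    /* stand-in for whatever the caller does with a dirty page */
    printf("dirty page at 0x%lx\n", addr);
}

static void scan_dirty_bitmap(unsigned long start_addr,
                              const unsigned long *bitmap,
                              unsigned long nr_pages)
{
    unsigned long nr_words = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;
    unsigned long i, j;

    for (i = 0; i < nr_words; i++) {
        unsigned long word = bitmap[i];

        if (word == 0)          /* common case: no dirty pages in this word */
            continue;

        for (j = 0; j < BITS_PER_LONG; j++) {
            if (word & (1UL << j)) {
                unsigned long page = i * BITS_PER_LONG + j;
                mark_page_dirty(start_addr + (page << PAGE_SHIFT));
            }
        }
    }
}

int main(void)
{
    unsigned long bitmap[4];

    memset(bitmap, 0, sizeof(bitmap));
    bitmap[2] = 0x5;            /* pretend two pages in the third word are dirty */
    scan_dirty_bitmap(0, bitmap, 4 * BITS_PER_LONG);
    return 0;
}

The point of the zero-word test is that when most of memory is clean, the
loop touches one long per word's worth of pages instead of examining every
byte, which is exactly the speedup the patch description claims.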