On 22 November 2014 at 01:50, Mario Smarduch <m.smarduch@xxxxxxxxxxx> wrote:
> QEMU has a global migration bitmap for all regions initially set
> dirty, and it's updated over iterations with KVM's dirty bitmap. Once
> dirty pages are migrated, the bits are cleared. If QEMU updates a
> memory region directly I can't see how that's reflected in the migration
> bitmap that determines which pages should be migrated as it makes
> its passes. On x86, if the host updates guest memory it marks that
> page dirty.
>
> But virtio writes to guest memory directly and that appears to
> work just fine. I read that code some time back, and will need to revisit.

All devices in QEMU that write to guest memory do so via a function in
exec.c (possibly through wrapper functions) which eventually calls
invalidate_and_set_dirty(), which is what is responsible for updating
our dirty bitmaps.

In the specific case of virtio, the virtio device ends up calling
virtqueue_fill(), which does a cpu_physical_memory_unmap(), which just
calls address_space_unmap(). That will either directly call
invalidate_and_set_dirty() or, if a bounce buffer is in use, copy the
bounce buffer back to guest RAM with address_space_write(), which calls
address_space_rw(), which does the invalidate_and_set_dirty().

There's no cache incoherency issue for migration, because the migration
code runs in the QEMU process, so it will read the most recent thing
QEMU wrote whether that data is still in the dcache or has migrated out
to real (host) RAM.

-- PMM
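
[Editor's sketch] To make the pattern described above concrete, here is a
minimal, self-contained C sketch of the idea, not QEMU's actual code: the
names guest_write()/migrate_pass(), the fixed 4 KiB page size, and the
bitmap layout are all assumptions for illustration. The point is only that
every write into guest memory funnels through one helper that both copies
the data and sets bits in a migration dirty bitmap, which the migration
pass then consumes and clears.

/*
 * Illustrative sketch only -- NOT QEMU code.  Shows a single write
 * helper that marks touched pages dirty (the role played by
 * invalidate_and_set_dirty() in QEMU) and a migration pass that
 * consumes and clears the dirty bits.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE      4096UL
#define GUEST_RAM_SIZE (64 * PAGE_SIZE)
#define BITS_PER_LONG  (8 * sizeof(unsigned long))
#define BITMAP_LONGS   ((GUEST_RAM_SIZE / PAGE_SIZE + BITS_PER_LONG - 1) / BITS_PER_LONG)

static uint8_t guest_ram[GUEST_RAM_SIZE];
static unsigned long dirty_bitmap[BITMAP_LONGS];

static void set_dirty(unsigned long page)
{
    dirty_bitmap[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

static int test_and_clear_dirty(unsigned long page)
{
    unsigned long *word = &dirty_bitmap[page / BITS_PER_LONG];
    unsigned long mask = 1UL << (page % BITS_PER_LONG);
    int was_dirty = !!(*word & mask);

    *word &= ~mask;
    return was_dirty;
}

/* Analogue of the invalidate_and_set_dirty() step: copy the data into
 * guest RAM and mark every page touched by [addr, addr+len) dirty. */
static void guest_write(unsigned long addr, const void *buf, size_t len)
{
    unsigned long first = addr / PAGE_SIZE;
    unsigned long last  = (addr + len - 1) / PAGE_SIZE;

    memcpy(guest_ram + addr, buf, len);
    for (unsigned long p = first; p <= last; p++) {
        set_dirty(p);
    }
}

/* One migration iteration: "send" (here just report) every dirty page
 * and clear its bit, the way the migration bitmap is consumed. */
static void migrate_pass(void)
{
    for (unsigned long p = 0; p < GUEST_RAM_SIZE / PAGE_SIZE; p++) {
        if (test_and_clear_dirty(p)) {
            printf("would send page %lu\n", p);
        }
    }
}

int main(void)
{
    const char msg[] = "virtio completion data";

    /* A device write into guest RAM through the helper automatically
     * lands in the migration bitmap... */
    guest_write(3 * PAGE_SIZE + 100, msg, sizeof(msg));

    /* ...so the next migration pass picks the page up. */
    migrate_pass();
    return 0;
}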