On 2013/8/30 0:08, Anthony Liguori wrote:
> Hi Qin,
>
> KVM and Xen represent memory in a very different way. KVM can only
> track when guest mode code dirties memory. It relies on QEMU to track
> when guest memory is dirtied by QEMU. Since vhost is running outside
> of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
>
> I don't think this is a problem with Xen though. I believe (although
> I could be wrong) that Xen is able to track when either the domain or
> dom0 dirties memory.
>
> So I think you can simply ignore the dirty logging with vhost and it
> should Just Work.
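
For context, the mechanism described above is vhost's dirty log:
userspace hands the kernel a bitmap and harvests it during migration.
A minimal sketch of that (not QEMU's actual code; only the
VHOST_SET_LOG_BASE/VHOST_SET_FEATURES ioctls are real, while
enable_vhost_dirty_log(), sync_vhost_dirty_log() and sync_page() are
placeholder names):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#define LOG_PAGE_SHIFT 12      /* vhost logs at 4 KiB granularity */

static uint64_t *log_bitmap;   /* one bit per guest page, shared with vhost */
static uint64_t log_words;

/* Hand vhost a dirty-log bitmap. Once VHOST_F_LOG_ALL has been set via
 * VHOST_SET_FEATURES, the kernel sets a bit for every guest page its
 * worker thread writes. */
int enable_vhost_dirty_log(int vhost_fd, uint64_t guest_mem_bytes)
{
    uint64_t pages = guest_mem_bytes >> LOG_PAGE_SHIFT;
    uint64_t log_base;

    log_words = (pages + 63) / 64;
    log_bitmap = calloc(log_words, sizeof(uint64_t));
    if (!log_bitmap)
        return -1;

    log_base = (uint64_t)(uintptr_t)log_bitmap;
    return ioctl(vhost_fd, VHOST_SET_LOG_BASE, &log_base);
}

/* Migration loop: harvest and clear the dirty bits, handing each dirty
 * pfn to the migration code (sync_page() is a placeholder consumer). */
void sync_vhost_dirty_log(void (*sync_page)(uint64_t pfn))
{
    for (uint64_t i = 0; i < log_words; i++) {
        uint64_t bits = __atomic_exchange_n(&log_bitmap[i], 0,
                                            __ATOMIC_SEQ_CST);
        while (bits) {
            int b = __builtin_ctzll(bits);
            bits &= bits - 1;  /* clear lowest set bit */
            sync_page(i * 64 + b);
        }
    }
}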
Xen tracks the guest's memory during live migration much as KVM does (I
guess it relies on EPT), but it cannot mark dom0's dirty memory
automatically.

I did the same dirty logging with vhost_net, but using Xen's dirty-memory
interface instead of KVM's API; with that change, live migration works.
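
For reference, the hook boils down to roughly this (a sketch, not the
actual patch: xc_hvm_modified_memory() is the real libxenctrl entry
point, while xen_vhost_log_dirty() and the two globals are names I made
up):

#include <stdint.h>
#include <xenctrl.h>

static xc_interface *xch;   /* opened once with xc_interface_open() */
static domid_t domid;       /* the domain being migrated */

/* Called where vhost would otherwise mark pages dirty through KVM's
 * dirty-log API: tell Xen these pfns were modified behind the guest's
 * back, so xc_save re-sends them in the next round. */
void xen_vhost_log_dirty(uint64_t guest_addr, uint64_t len)
{
    uint64_t first_pfn = guest_addr >> 12;   /* 4 KiB pages */
    uint64_t nr = ((guest_addr + len - 1) >> 12) - first_pfn + 1;

    xc_hvm_modified_memory(xch, domid, first_pfn, nr);
}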
--------------------------------------------------------------------
There is a bug in Xen live migration when using a QEMU-emulated NIC
(such as virtio_net).
Current flow:

xc_save->dirty memory copy->suspend->stop_vcpu->last memory copy
stop_qemu->stop_virtio_net
save_qemu->save_virtio_net

This means virtio_net can still dirty guest memory after the last memory
copy.
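
To make the failure concrete: the device side writes into guest RAM
every time it completes a buffer, roughly like this (simplified
used-ring update following the virtio ring layout, not QEMU's exact
code):

#include <stdint.h>

struct vring_used_elem { uint32_t id, len; };
struct vring_used {
    uint16_t flags;
    uint16_t idx;                       /* the index that arrives stale */
    struct vring_used_elem ring[];
};

/* Device-side completion: both stores land in guest RAM. If they happen
 * after the last pre-copy round, the destination never sees them and
 * resumes with a stale used->idx. */
static void vring_push_used(struct vring_used *used, uint16_t qsize,
                            uint32_t head, uint32_t len)
{
    used->ring[used->idx % qsize] =
        (struct vring_used_elem){ .id = head, .len = len };
    __atomic_thread_fence(__ATOMIC_RELEASE);  /* publish element before idx */
    used->idx++;
}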
I have tested both vhost with QEMU and virtio_net emulated in QEMU, and
both show the same problem: the vring index update goes wrong and the
network becomes unreachable. My solution is:
xc_save->dirty memory copy->suspend->stop_vcpu->stop_qemu
->stop_virtio_net->last memory copy
save_qemu->save_virtio_net
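
In other words, the only change is stopping the device model before the
final copy. As a toy ordering (every function below is a stub standing
in for a real xc_save/toolstack step; none of these are actual Xen
APIs):

#include <stdio.h>

static void dirty_memory_rounds(void)   { puts("iterative pre-copy"); }
static void suspend_and_stop_vcpu(void) { puts("suspend + stop_vcpu"); }
static void stop_qemu_and_nic(void)     { puts("stop_qemu + stop_virtio_net"); }
static void last_memory_copy(void)      { puts("last memory copy"); }
static void save_qemu_state(void)       { puts("save_qemu + save_virtio_net"); }

int main(void)
{
    dirty_memory_rounds();
    suspend_and_stop_vcpu();
    stop_qemu_and_nic();   /* moved before the last copy: this is the fix */
    last_memory_copy();    /* now sees every write, including used->idx */
    save_qemu_state();
    return 0;
}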
Xen's own netfront and netback disconnect and flush the IO ring on live
migration, so they are not affected.