Re: Is fallback vhost_net to qemu for live migrate available?


On 2013/8/30 0:08, Anthony Liguori wrote:
Hi Qin,

By changing the memory copy and notify mechanism, virtio-net with
vhost_net can currently run on Xen with good performance.

I think the key in doing this would be to implement a proper
ioeventfd and irqfd interface in the driver domain kernel.  Just
hacking vhost_net with Xen-specific knowledge would be pretty nasty
IMHO.

Yes, I added a kernel module which persists the virtio-net pio_addr and MSI-X address, as the kvm module does. The guest wakes up the vhost thread through a hook function added in evtchn_interrupt.
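
Roughly, the kick path looks like the sketch below: a hook in the event-channel interrupt path signals the eventfd that vhost polls as its kick for that queue. This is only a simplified illustration; the registration table and function names here are placeholders, not the actual patch.

/* Sketch: waking vhost from the Xen event-channel IRQ path in the
 * driver domain.  The per-port table is an assumed structure that
 * would be filled in when userspace registers a queue's notify port
 * with the module (analogous to KVM's ioeventfd). */
#include <linux/eventfd.h>

#define XEN_VHOST_MAX_PORTS 16

static struct eventfd_ctx *vhost_kick[XEN_VHOST_MAX_PORTS];

/* Hypothetical hook called from evtchn_interrupt when a registered
 * port fires; it just signals the eventfd used as the vhost kick. */
static void xen_vhost_notify(unsigned int port)
{
    if (port < XEN_VHOST_MAX_PORTS && vhost_kick[port])
        eventfd_signal(vhost_kick[port], 1);   /* wake the vhost worker */
}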

Did you modify the front end driver to do grant table mapping or is
this all being done by mapping the domain's memory?

Nothing is changed in the front-end driver. Currently I use alloc_vm_area to get address space, and map the domain's memory as QEMU does.
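
For reference, the userspace mapping that QEMU relies on is the foreign-mapping call in libxenctrl; a minimal sketch (domid and pfn are placeholders) would be something like the following. In the kernel module the same frames would instead be installed into a virtual range reserved with alloc_vm_area rather than an mmap'd range.

/* Userspace sketch of foreign mapping with libxenctrl; QEMU's Xen
 * map cache is built on the same mechanism.  domid/pfn are placeholders. */
#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    unsigned long pfn = 0x1000;          /* placeholder guest frame */
    void *va;

    if (!xch)
        return 1;

    /* Map one page of domain 1's memory read/write into our address space. */
    va = xc_map_foreign_range(xch, 1 /* domid */, 4096,
                              PROT_READ | PROT_WRITE, pfn);
    if (va) {
        printf("guest pfn %#lx mapped at %p\n", pfn, va);
        munmap(va, 4096);
    }
    xc_interface_close(xch);
    return 0;
}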

KVM and Xen represent memory in a very different way.  KVM can only
track when guest mode code dirties memory.  It relies on QEMU to track
when guest memory is dirtied by QEMU.  Since vhost is running outside
of QEMU, vhost also needs to tell QEMU when it has dirtied memory.

I don't think this is a problem with Xen though.  I believe (although
could be wrong) that Xen is able to track when either the domain or
dom0 dirties memory.

So I think you can simply ignore the dirty logging with vhost and it
should Just Work.

Thanks for your advice, I have tried it. Without ping it migrates successfully, but if an skb is received during migration, domU crashes. I guess that is because, although Xen tracks domU memory, it can only track memory changed inside DomU; memory changed by Dom0 is not tracked.
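
For reference, vhost's dirty log is just a bitmap that QEMU hands to the backend with the VHOST_SET_LOG_BASE ioctl, one bit per 4 KiB guest page, so anything in Dom0 that writes guest memory during migration has to set the matching bit. A conceptual sketch (not the real vhost_log code, which writes the userspace bitmap with put_user):

/* Conceptual sketch of vhost-style dirty logging: one bit per 4 KiB
 * guest page in a bitmap shared with QEMU (VHOST_SET_LOG_BASE).
 * Any Dom0 write to guest memory during migration must set the bit,
 * otherwise the destination misses the update. */
#include <stdint.h>

#define VHOST_PAGE_SHIFT 12   /* vhost logs at 4 KiB granularity */

static void log_dirty_page(uint64_t *log, uint64_t guest_phys_addr)
{
    uint64_t page = guest_phys_addr >> VHOST_PAGE_SHIFT;

    /* 64 pages per bitmap word. */
    log[page / 64] |= 1ULL << (page % 64);
}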


No, we don't have a mechanism to fall back to QEMU for the datapath.
It would be possible but I think it's a bad idea to mix and match the
two.

Next I will try to fall back the datapath to QEMU, for three reasons:
1: The memory translation mechanism has been changed for vhost_net on Xen, so some corresponding changes would be needed in vhost_log in the kernel.

2: I also mapped the IOREQ_PFN page (which is used for communication between QEMU and Xen) in the kernel notify module, so it would also need to be marked dirty when tx/rx happens during the migration period (see the sketch after this list).

3: Most important of all, Michael S. Tsirkin said that he hadn't considered vhost_net migration on Xen, so some changes would be needed in vhost_log for QEMU as well.
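
For the IOREQ page and other pages written from Dom0, the marking itself could be done from the QEMU side roughly like the sketch below. The helper name is made up and the exact libxc entry point may differ between Xen versions; this is only an illustration of the idea.

/* Sketch: telling Xen's log-dirty tracking that Dom0 touched a guest
 * page, so it gets resent during live migration.  domid and pfn are
 * placeholders; QEMU wraps a call like this for pages it writes on
 * behalf of the guest. */
#include <stdint.h>
#include <xenctrl.h>

static int mark_guest_page_dirty(xc_interface *xch, domid_t domid,
                                 uint64_t pfn)
{
    /* Report one page starting at 'pfn' as modified. */
    return xc_hvm_modified_memory(xch, domid, pfn, 1);
}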

Falling back to QEMU seems much easier, doesn't it?

Regards
Qin chuanyu

