On Tue, Aug 27, 2013 at 11:32:31AM +0800, Qin Chuanyu wrote:
> Hi all
> 
> I am participating in a project that is trying to port vhost_net to Xen.
> 
> By changing the memory-copy and notification mechanisms, virtio-net with
> vhost_net can currently run on Xen with good performance. TCP receive
> throughput of a single vnic went from 2.77 Gbps up to 6 Gbps. On the VM
> receive side, I replaced grant_copy with grant_map plus memcpy, which
> effectively reduces the cost of the grant_table spinlock in dom0, so
> whole-server TCP performance went from 5.33 Gbps up to 9.5 Gbps.
> 
> Now I am considering live migration of vhost_net on Xen. vhost_net uses
> vhost_log for live migration on KVM, but qemu on Xen does not manage the
> whole memory of the VM. So I am trying to fall the datapath back from
> vhost_net to qemu while live migration is in progress, and switch the
> datapath from qemu back to vhost_net once the VM has migrated to the new
> server.
> 
> My questions are:
> Why doesn't vhost_net do the same fallback operation for live migration
> on KVM, instead of using vhost_log to mark the dirty pages?
> Is there any flaw in the mechanism of falling the datapath back from
> vhost_net to qemu for live migration?
> 
> Any questions about the details of vhost_net on Xen are welcome.
> 
> Thanks
> 

It should work, in practice. However, one issue that I see with this
approach is that you are running two instances of virtio-net on the host,
qemu and vhost-net, doubling your security surface for guest-to-host
attacks.

I don't exactly see why it matters that qemu doesn't manage the whole
memory of the VM - vhost only needs to log the memory writes that it
performs.
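
To make that last point concrete, the dirty log is essentially a bitmap
with one bit per guest page: the party that performs a write into guest
memory sets the bit for every page it touches, and the migration code
later harvests and clears the bits. The sketch below is only an
illustration of the idea, not the actual vhost code; log_bitmap and
mark_dirty are made-up names, and in real vhost the bitmap lives in
userspace memory registered through the VHOST_SET_LOG_BASE ioctl.

#include <stdint.h>
#include <stddef.h>

#define DIRTY_PAGE_SHIFT 12                 /* one bit per 4 KiB guest page */

static unsigned long *log_bitmap;           /* shared with the migration code */

/* Mark the guest pages touched by a write of 'len' bytes at address 'gpa'. */
static void mark_dirty(uint64_t gpa, size_t len)
{
        if (!len)
                return;

        uint64_t first = gpa >> DIRTY_PAGE_SHIFT;
        uint64_t last  = (gpa + len - 1) >> DIRTY_PAGE_SHIFT;

        for (uint64_t pfn = first; pfn <= last; pfn++)
                __sync_fetch_and_or(&log_bitmap[pfn / (8 * sizeof(unsigned long))],
                                    1UL << (pfn % (8 * sizeof(unsigned long))));
}

Since it is vhost itself that performs the writes, it can mark them
without qemu having to map or manage all of guest memory; the two sides
only have to agree on the address space that the bitmap indexes.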
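As for the fallback idea itself: switching the datapath away from
vhost-net roughly amounts to detaching the tap backend and reading back
the ring state, so the userspace datapath can resume from the same point.
Below is a rough sketch against the vhost ioctl interface, assuming a
per-queue vhost fd; vhost_detach_ring is an illustrative helper, not an
existing qemu or kernel function, and error handling plus the userspace
takeover are omitted.

#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_detach_ring(int vhost_fd, unsigned int ring_index,
                             unsigned int *last_avail_idx)
{
        struct vhost_vring_file backend = { .index = ring_index, .fd = -1 };
        struct vhost_vring_state state = { .index = ring_index };

        /* fd == -1 detaches the tap backend, stopping the kernel datapath
         * for this virtqueue. */
        if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend) < 0)
                return -1;

        /* Read back where the kernel stopped so that the userspace
         * datapath can continue from the same avail index. */
        if (ioctl(vhost_fd, VHOST_GET_VRING_BASE, &state) < 0)
                return -1;
        *last_avail_idx = state.num;

        return 0;
}

The switch back after migration would go the other way: restore the ring
position with VHOST_SET_VRING_BASE and re-attach the tap fd with
VHOST_NET_SET_BACKEND.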