Re: [RFC v3 00/29] vDPA software assisted live migration

On 2021/5/24 6:37 PM, Eugenio Perez Martin wrote:
On Mon, May 24, 2021 at 11:38 AM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
On Wed, May 19, 2021 at 06:28:34PM +0200, Eugenio Pérez wrote:
Commit 17 introduces the buffer forwarding. The previous ones are
again preparation, and the later ones enable some obvious
optimizations. However, it needs the vdpa device to be able to map
every IOVA space, and some vDPA devices are not able to do so. Checks
for this are added in earlier commits.
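(For reference, that capability check can also be done from userspace
with the VHOST_VDPA_GET_IOVA_RANGE ioctl. A minimal sketch, assuming a
<linux/vhost.h> recent enough to provide it; the device node and the
needed range below are just example values, not anything from the
series:)

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Return 1 if the device can map every IOVA in [needed_first, needed_last],
 * 0 if it cannot, -1 on error. */
static int iova_range_covers(int vdpa_fd, uint64_t needed_first,
                             uint64_t needed_last)
{
    struct vhost_vdpa_iova_range range;

    if (ioctl(vdpa_fd, VHOST_VDPA_GET_IOVA_RANGE, &range) < 0) {
        perror("VHOST_VDPA_GET_IOVA_RANGE");
        return -1;
    }
    return range.first <= needed_first && range.last >= needed_last;
}

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);   /* example device node */

    if (fd < 0)
        return 1;
    /* Example: does the device cover a 39-bit guest physical space? */
    printf("covers: %d\n", iova_range_covers(fd, 0, (1ULL << 39) - 1));
    close(fd);
    return 0;
}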
That might become a significant limitation. And it worries me that
this is such a big patchset which might yet take a while to get
finalized.

Sorry, maybe I've been unclear here: later commits in this series
address this limitation. It is still not perfect: for example, it does
not support adding or removing guest memory at the moment, but this
should be easy to implement on top.

The main issue I'm observing is in the kernel, if I'm not wrong: if I
unmap every address, I cannot re-map it afterwards.
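(In case it helps to narrow it down, a rough standalone sketch of that
map/unmap/re-map sequence over the VHOST_IOTLB_MSG_V2 interface, i.e.
the same path the QEMU vhost-vdpa backend uses for DMA maps. The
device node, IOVA and sizes are made-up example values:)

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vhost.h>

static int iotlb_update(int fd, uint64_t iova, uint64_t size, void *uaddr)
{
    struct vhost_msg_v2 msg = {
        .type = VHOST_IOTLB_MSG_V2,
        .iotlb = {
            .iova  = iova,
            .size  = size,
            .uaddr = (uint64_t)(uintptr_t)uaddr,
            .perm  = VHOST_ACCESS_RW,
            .type  = VHOST_IOTLB_UPDATE,
        },
    };
    return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}

static int iotlb_invalidate(int fd, uint64_t iova, uint64_t size)
{
    struct vhost_msg_v2 msg = {
        .type = VHOST_IOTLB_MSG_V2,
        .iotlb = {
            .iova = iova,
            .size = size,
            .type = VHOST_IOTLB_INVALIDATE,
        },
    };
    return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);   /* example device node */
    size_t sz = 2 * 1024 * 1024;
    void *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (fd < 0 || buf == MAP_FAILED)
        return 1;

    /* The IOTLB path requires ownership of the device first. */
    if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0)
        return 1;

    /* Map, unmap the whole range, then try to map it again. */
    printf("map #1: %d\n", iotlb_update(fd, 0x100000, sz, buf));
    printf("unmap:  %d\n", iotlb_invalidate(fd, 0x100000, sz));
    printf("map #2: %d\n", iotlb_update(fd, 0x100000, sz, buf));
    return 0;
}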


This looks like a bug.

Does this happen only with some specific device (e.g. vp_vdpa), or is it a general issue of vhost-vdpa?


  But the code in this
patchset is mostly final, apart from the comments it may receive on
the mailing list, of course.

I have an idea: how about as a first step we implement a transparent
switch from vdpa to a software virtio in QEMU or a software vhost in
kernel?

This will give us live migration quickly, with performance comparable
to failover but without dependence on guest cooperation.

I think it should be doable. I'm not sure about the amount of work
needed in qemu to hide these "hypervisor-failover devices" from the
guest's view, but it should be comparable to failover, as you say.


Yes, if we want to switch, I'd go with a fallback to the vhost-vdpa network backend instead.

Thanks



Networking should be ok by its nature, although it could require care
on the host hardware setup. But I'm not sure how other types of
vhost/vdpa devices may work that way. How would a disk/scsi device
switch modes? Can the kernel take control of the vdpa device through
vhost, and just start reporting with a dirty bitmap?
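(For what it's worth, the dirty bitmap referred to here is the usual
one-bit-per-guest-page scheme: whoever writes into guest memory marks
the touched pages, and the migration loop fetches-and-clears the bits.
A self-contained sketch with made-up names, just to make the idea
concrete; 4 KiB pages are assumed:)

#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT    12
#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct dirty_bitmap {
    _Atomic unsigned long *bits;   /* one bit per guest page */
    uint64_t npages;
};

/* Mark every page overlapped by [gpa, gpa + len) as dirty. */
static void dirty_bitmap_mark(struct dirty_bitmap *bm, uint64_t gpa,
                              uint64_t len)
{
    uint64_t first = gpa >> PAGE_SHIFT;
    uint64_t last = (gpa + len - 1) >> PAGE_SHIFT;

    for (uint64_t pfn = first; pfn <= last && pfn < bm->npages; pfn++)
        atomic_fetch_or(&bm->bits[pfn / BITS_PER_LONG],
                        1UL << (pfn % BITS_PER_LONG));
}

/* Fetch-and-clear one word of the bitmap; the caller resends those pages. */
static unsigned long dirty_bitmap_collect(struct dirty_bitmap *bm,
                                          uint64_t word)
{
    return atomic_exchange(&bm->bits[word], 0UL);
}

int main(void)
{
    struct dirty_bitmap bm = {
        .npages = 1 << 18,   /* e.g. 1 GiB of guest RAM at 4 KiB pages */
        .bits = calloc((1 << 18) / BITS_PER_LONG, sizeof(unsigned long)),
    };

    if (!bm.bits)
        return 1;
    dirty_bitmap_mark(&bm, 0x12345, 8192);   /* a used buffer written here */
    return dirty_bitmap_collect(&bm, 0) != 0 ? 0 : 1;
}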

Thanks!

Next step could be driving vdpa from userspace while still copying
packets to a pre-registered buffer.
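(A sketch of what "copying packets to a pre-registered buffer" could
look like: a bounce area is DMA-mapped to the device once at setup,
and every descriptor forwarded to the device points into that area
instead of into guest memory. All the names here are illustrative, not
from any existing code:)

#include <stdint.h>
#include <string.h>

struct bounce_pool {
    uint8_t *va;          /* host virtual address of the pre-mapped area */
    uint64_t iova;        /* IOVA it was registered at, once, at setup   */
    uint64_t slot_size;   /* fixed-size slots, e.g. one packet each      */
    uint64_t nslots;
    uint64_t next;        /* trivial ring allocator                      */
};

/* Copy one guest TX buffer into the pool; return the IOVA to put in the
 * descriptor forwarded to the device, or 0 if the payload doesn't fit. */
static uint64_t bounce_tx_copy(struct bounce_pool *p,
                               const void *guest_buf, uint64_t len)
{
    if (len > p->slot_size)
        return 0;

    uint64_t slot = p->next++ % p->nslots;

    memcpy(p->va + slot * p->slot_size, guest_buf, len);
    return p->iova + slot * p->slot_size;
}

int main(void)
{
    static uint8_t area[4 * 4096];
    struct bounce_pool p = {
        .va = area, .iova = 0x1000, .slot_size = 4096, .nslots = 4,
    };
    const char pkt[] = "example packet";

    return bounce_tx_copy(&p, pkt, sizeof(pkt)) == 0x1000 ? 0 : 1;
}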

Finally, your approach will be a performance optimization for devices
that support arbitrary IOVA.

Thoughts?

--
MST





