Hi Stefano,
On 10/17/2023 6:44 AM, Stefano Garzarella wrote:
On Fri, Oct 13, 2023 at 10:29:26AM -0700, Si-Wei Liu wrote:
Hi Stefano,
On 10/13/2023 2:22 AM, Stefano Garzarella wrote:
Hi Si-Wei,
On Fri, Oct 13, 2023 at 01:23:40AM -0700, Si-Wei Liu wrote:
RFC only. Not tested on vdpa-sim-blk with user virtual address.
I can test it, but what should I stress?
Great, thank you! As you can see, my patch moved vhost_iotlb_reset out of
vdpasim_reset for the sake of decoupling mapping from vdpa device
reset. For hardware devices this decoupling makes sense, as the platform
IOMMU already handles it. But I'm not sure whether there's something in the
software device (especially with vdpa-blk and the userspace library stack)
that relies on the current .reset behavior of clearing the
vhost_iotlb. So perhaps you can try to exercise every possible case
involving blk device reset, and see if anything (related to mapping)
breaks?
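For example, something along these lines on the host side should hit the
reset path (a rough sketch only; /dev/vdb and blk0 match the setup below,
and the virtio_vdpa sysfs unbind/bind paths are my assumption for how the
simulator is bound):

# I/O before the reset, to get mappings populated
dd if=/dev/vdb of=/dev/null bs=1M count=16
# unbind/rebind the virtio-vdpa driver, which resets the vdpa device
echo blk0 > /sys/bus/vdpa/drivers/virtio_vdpa/unbind
echo blk0 > /sys/bus/vdpa/drivers/virtio_vdpa/bind
# I/O afterwards should still flow if mapping was handled correctly
dd if=/dev/vdb of=/dev/null bs=1M count=16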
I just tried these steps without using a VM and the host kernel hangs
after adding the device:
[root@f38-vm-build ~]# modprobe virtio-vdpa
[root@f38-vm-build ~]# modprobe vdpa-sim-blk
[root@f38-vm-build ~]# vdpa dev add mgmtdev vdpasim_blk name blk0
[ 35.284575][ T563] virtio_blk virtio6: 1/0/0 default/read/poll queues
[ 35.286372][ T563] virtio_blk virtio6: [vdb] 262144 512-byte logical blocks (134 MB/128 MiB)
[ 35.295271][ T564] vringh:
Reverting this patch (so building only up to "vdpa/mlx5: implement
.reset_map driver op") worked here.
I'm sorry, the previous RFC patch was incomplete - please see the v2 I
just posted. I tested both use_va and !use_va on vdpa-sim-blk, and a raw
disk copy to the vdpa block simulator using dd seems fine. Just let me
know how it goes on your side this time.
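For reference, the copy can be done along these lines (illustrative only;
the image path is made up, /dev/vdb is the simulator disk from your log):

dd if=/path/to/disk.img of=/dev/vdb bs=1M oflag=direct conv=fsync
dd if=/dev/vdb of=/dev/null bs=1M iflag=direct   # read back after the copy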
Thanks,
-Siwei
Works fine with vdpa-sim-net, which uses physical addresses for mapping.
Can you share your tests, so I can try to do the same with blk?
Basically everything involving virtio device reset in the guest,
e.g. rebooting the VM, or removing/unbinding and then reprobing/rebinding
the virtio-net module/driver, then seeing if device I/O (which needs
proper mapping) is still flowing as expected. And then everything else
that could trigger QEMU's vhost_dev_start/stop paths and end up as a
passive vhost-vdpa backend reset, e.g. link status change,
suspend/hibernate, SVQ switch, and live migration. I am not sure if
vdpa-blk supports live migration through SVQ or not; if not, you don't
need to worry about it.
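Concretely, the reset-triggering steps could look like this (all names
are examples: the PCI address depends on the VM, net0 is a placeholder
netdev id; set_link is the QEMU monitor command that toggles link status):

# in the guest: driver- and module-level virtio-net reset
echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/bind
modprobe -r virtio_net && modprobe virtio_net
# on the host, from the QEMU monitor: link status change, which goes
# through the vhost_dev_start/stop paths
(qemu) set_link net0 off
(qemu) set_link net0 on

In each case, check that guest I/O keeps flowing afterwards.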
This patch is based on top of [1].
[1]
https://lore.kernel.org/virtualization/1696928580-7520-1-git-send-email-si-wei.liu@xxxxxxxxxx/
The series does not apply cleanly on master or the vhost tree.
Where should I apply it?
Sent the link through another email offline.
Received, thanks!
Stefano