On Tue, 26 Sep 2023 08:04:51 -0500, Michael Roth <michael.roth@xxxxxxx> wrote:
> On Sun, Sep 03, 2023 at 06:13:38PM -0400, Michael S. Tsirkin wrote:
> > The following changes since commit 2dde18cd1d8fac735875f2e4987f11817cc0bc2c:
> >
> >   Linux 6.5 (2023-08-27 14:49:51 -0700)
> >
> > are available in the Git repository at:
> >
> >   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
> >
> > for you to fetch changes up to 1acfe2c1225899eab5ab724c91b7e1eb2881b9ab:
> >
> >   virtio_ring: fix avail_wrap_counter in virtqueue_add_packed (2023-09-03 18:10:24 -0400)
> >
> > ----------------------------------------------------------------
> > virtio: features
> >
> > A small pull request this time around, mostly because the
> > vduse network device got postponed to the next release so we can be
> > sure we got the security story right.
> >
> > Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> >
> > ----------------------------------------------------------------
> > Eugenio Pérez (4):
> >       vdpa: add VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK flag
> >       vdpa: accept VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK backend feature
> >       vdpa: add get_backend_features vdpa operation
> >       vdpa_sim: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
> >
> > Jason Wang (1):
> >       virtio_vdpa: build affinity masks conditionally
> >
> > Xuan Zhuo (12):
> >       virtio_ring: check use_dma_api before unmap desc for indirect
> >       virtio_ring: put mapping error check in vring_map_one_sg
> >       virtio_ring: introduce virtqueue_set_dma_premapped()
> >       virtio_ring: support add premapped buf
> >       virtio_ring: introduce virtqueue_dma_dev()
> >       virtio_ring: skip unmap for premapped
> >       virtio_ring: correct the expression of the description of virtqueue_resize()
> >       virtio_ring: separate the logic of reset/enable from virtqueue_resize
> >       virtio_ring: introduce virtqueue_reset()
> >       virtio_ring: introduce dma map api for virtqueue
> >       virtio_ring: introduce dma sync api for virtqueue
> >       virtio_net: merge dma operations when filling mergeable buffers
>
> This ^ patch (upstream commit 295525e29a) seems to cause a
> network-related regression when using SWIOTLB in the guest. I noticed
> this initially while testing SEV guests, which use SWIOTLB by default,
> but it can also be seen with normal guests when forcing SWIOTLB via the
> swiotlb=force kernel cmdline option. I see it with both 6.6-rc1 and
> 6.6-rc2 (haven't tried rc3 yet, but don't see any related changes
> there), and reverting 295525e29a seems to avoid the issue.
>
> Steps to reproduce:
>
> 1) Boot a QEMU/KVM guest running 6.6-rc2 with swiotlb=force via something like the following cmdline:
>
>    qemu-system-x86_64 \
>     -machine q35 -smp 4,maxcpus=255 -cpu EPYC-Milan-v2 \
>     -enable-kvm -m 16G,slots=5,maxmem=256G -vga none \
>     -device virtio-scsi-pci,id=scsi0,disable-legacy=on,iommu_platform=true \
>     -drive file=/home/mroth/storage/ubuntu-18.04-seves2.qcow2,if=none,id=drive0,snapshot=off \
>     -device scsi-hd,id=hd0,drive=drive0,bus=scsi0.0 \
>     -device virtio-net-pci,netdev=netdev0,id=net0,disable-legacy=on,iommu_platform=true,romfile= \
>     -netdev tap,script=/home/mroth/qemu-ifup,id=netdev0 \
>     -L /home/mroth/storage/AMDSEV2/snp-release-2023-09-23/usr/local/share/qemu \
>     -drive if=pflash,format=raw,unit=0,file=/home/mroth/storage/AMDSEV2/snp-release-2023-09-23/usr/local/share/qemu/OVMF_CODE.fd,readonly \
>     -drive if=pflash,format=raw,unit=1,file=/home/mroth/storage/AMDSEV2/snp-release-2023-09-23/usr/local/share/qemu/OVMF_VARS.fd \
>     -debugcon file:debug.log -global isa-debugcon.iobase=0x402 -msg timestamp=on \
>     -kernel /boot/vmlinuz-6.6.0-rc2-vanilla0+ \
>     -initrd /boot/initrd.img-6.6.0-rc2-vanilla0+ \
>     -append "root=UUID=d72a6d1c-06cf-4b79-af43-f1bac4f620f9 ro console=ttyS0,115200n8 earlyprintk=serial,ttyS0,115200 debug=1 sev=debug page_poison=0 spec_rstack_overflow=off swiotlb=force"
>
> 2) scp a small file from the host to the guest IP via its virtio-net device.
> Smaller file sizes succeed, but the larger the file the more likely
> it will fail. e.g.:
>
>   mroth@host:~$ dd if=/dev/zero of=test bs=1K count=19
>   19+0 records in
>   19+0 records out
>   19456 bytes (19 kB, 19 KiB) copied, 0.000940134 s, 20.7 MB/s
>   mroth@host:~$ scp test vm0:
>   test                                          100%   19KB  10.1MB/s   00:00
>   mroth@host:~$ dd if=/dev/zero of=test bs=1K count=20
>   20+0 records in
>   20+0 records out
>   20480 bytes (20 kB, 20 KiB) copied, 0.00093774 s, 21.8 MB/s
>   mroth@host:~$ scp test vm0:
>   test                                            0%    0     0.0KB/s --:-- ETA
>   client_loop: send disconnect: Broken pipe
>   lost connection
>   mroth@host:~$

Hi Michael,

Thanks for the report. Could you try this fix? I reproduced this issue,
and the fix works for me.

Thanks.

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 98dc9b49d56b..9ece27dc5144 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -589,16 +589,16 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 
 	--dma->ref;
 
-	if (dma->ref) {
-		if (dma->need_sync && len) {
-			offset = buf - (head + sizeof(*dma));
+	if (dma->need_sync && len) {
+		offset = buf - (head + sizeof(*dma));
 
-			virtqueue_dma_sync_single_range_for_cpu(rq->vq, dma->addr, offset,
-								len, DMA_FROM_DEVICE);
-		}
+		virtqueue_dma_sync_single_range_for_cpu(rq->vq, dma->addr,
+							offset, len,
+							DMA_FROM_DEVICE);
+	}
 
+	if (dma->ref)
 		return;
-	}
 
 	virtqueue_dma_unmap_single_attrs(rq->vq, dma->addr, dma->len,
 					 DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);

> Thanks,
> Mike
>
> > Yuan Yao (1):
> >       virtio_ring: fix avail_wrap_counter in virtqueue_add_packed
> >
> > Yue Haibing (1):
> >       vdpa/mlx5: Remove unused function declarations
> >
> >  drivers/net/virtio_net.c           | 230 ++++++++++++++++++---
> >  drivers/vdpa/mlx5/core/mlx5_vdpa.h |   3 -
> >  drivers/vdpa/vdpa_sim/vdpa_sim.c   |   8 +
> >  drivers/vhost/vdpa.c               |  15 +-
> >  drivers/virtio/virtio_ring.c       | 412 ++++++++++++++++++++++++++++++-----
> >  drivers/virtio/virtio_vdpa.c       |  17 +-
> >  include/linux/vdpa.h               |   4 +
> >  include/linux/virtio.h            |  22 ++
> >  include/uapi/linux/vhost_types.h  |   4 +
> >
> >  9 files changed, 625 insertions(+), 90 deletions(-)

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization