On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >
> > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > ## AF_XDP
> > > > > > >
> > > > > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework.
> > > > > > > The zero-copy feature of xsk (XDP socket) needs support from the
> > > > > > > driver, and the zero-copy performance is very good. mlx5 and Intel
> > > > > > > ixgbe already support this feature. This patch set allows virtio-net
> > > > > > > to support xsk's zero-copy xmit feature.
> > > > > > >
> > > > > > > At present, the preparation work is complete:
> > > > > > >
> > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > 2. virtio-core premapped dma
> > > > > > > 3. virtio-net xdp refactor
> > > > > > >
> > > > > > > So it is time for virtio-net to complete the support for XDP socket
> > > > > > > zero-copy.
> > > > > > >
> > > > > > > Virtio-net cannot increase the number of queues at will, so xsk
> > > > > > > shares queues with the kernel.
> > > > > > >
> > > > > > > On the other hand, virtio-net does not support manually generating an
> > > > > > > interrupt from the driver, so we use a trick for the tx wakeup: if TX
> > > > > > > NAPI last ran on a different CPU, we use an IPI to wake up NAPI on
> > > > > > > that remote CPU; if it last ran on the local CPU, we wake up the NAPI
> > > > > > > directly.
> > > > > > >
> > > > > > > This patch set includes some refactoring of virtio-net to support
> > > > > > > AF_XDP.
> > > > > > >
> > > > > > > ## performance
> > > > > > >
> > > > > > > ENV: QEMU with vhost-user (polling mode).
> > > > > > >
> > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > I use this tool to send UDP packets via kernel syscalls.
> > > > > > >
> > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > >
> > > > > > > I wrote a tool that sends or receives UDP packets via AF_XDP.
> > > > > > >
> > > > > > >                   | Guest APP CPU | Guest Softirq CPU | UDP PPS
> > > > > > > ------------------|---------------|-------------------|-----------
> > > > > > > xmit by syscall   | 100%          |                   |   676,915
> > > > > > > xmit by xsk       | 59.1%         | 100%              | 5,447,168
> > > > > > > recv by syscall   | 60%           | 100%              |   932,288
> > > > > > > recv by xsk       | 35.7%         | 100%              | 3,343,168
> > > > > >
> > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > than the PPS above)?
> > > > >
> > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > >
> > > > Yes.
> > > >
> > > > > Yes. That is probably better, because my tool does more work. It is
> > > > > not the complete testing tool used by our business.
> > > >
> > > > Probably, but it would be appealing for others, especially considering
> > > > DPDK supports an AF_XDP PMD now.
> > >
> > > OK.
> > >
> > > Let me try.
> > >
> > > But could you start reviewing first?
> >
> > Yes, it's in my todo list.
>
> I spoke too fast. If it doesn't take too long, I would wait for the
> result first, as with the netdim series. One reason is that I remember
> AF_XDP claims only a 10% to 20% loss compared to wire speed, so I'd
> expect it to be much faster. I vaguely remember that even vhost can
> give us more than 3M PPS if we disable SMAP, so the numbers here are
> not as impressive as expected.

What is SMAP? Could you give me more information?

So if we use 3M as the wire speed, you would expect the result to reach
2.8M pps/core, right? Now the recv result is 2.5M pps/core (2,463,646 =
3,343,168 / 1.357, where 1.357 is the total CPU consumed: 35.7% APP +
100% softirq). Do you think the difference is big? My tool makes the UDP
packets and looks up routes itself, so it requires more CPU.

I'm confused. Is there something I misunderstood?
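For reference, the xmit side of my tool is essentially the loop below.
This is a simplified sketch using the libxdp xsk helpers, not the tool
itself: the interface name ("eth0") and queue id are placeholders, the
UMEM frames are assumed to be pre-filled with ready-made 64-byte UDP
frames, and error handling and free-frame accounting are omitted.

#include <sys/mman.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>
#include <xdp/xsk.h>

#define NUM_FRAMES 4096
#define FRAME_SIZE XSK_UMEM__DEFAULT_FRAME_SIZE
#define BATCH      64

int main(void)
{
	struct xsk_ring_prod fq, tx;
	struct xsk_ring_cons cq;
	struct xsk_umem *umem;
	struct xsk_socket *xsk;
	__u32 idx, i, done;
	void *bufs;

	/* UMEM: the buffer area the device DMAs from in zero-copy mode. */
	bufs = mmap(NULL, NUM_FRAMES * FRAME_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	xsk_umem__create(&umem, bufs, NUM_FRAMES * FRAME_SIZE, &fq, &cq, NULL);

	struct xsk_socket_config cfg = {
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		/* Fail the bind unless the driver supports zero-copy. */
		.bind_flags = XDP_ZEROCOPY | XDP_USE_NEED_WAKEUP,
	};
	xsk_socket__create(&xsk, "eth0", 0, umem, NULL, &tx, &cfg);

	for (;;) {
		/* Post a batch of descriptors pointing at UMEM frames. */
		if (xsk_ring_prod__reserve(&tx, BATCH, &idx) != BATCH)
			goto reap;
		for (i = 0; i < BATCH; i++) {
			struct xdp_desc *d = xsk_ring_prod__tx_desc(&tx, idx + i);

			d->addr = ((idx + i) % NUM_FRAMES) * (__u64)FRAME_SIZE;
			d->len  = 64;
		}
		xsk_ring_prod__submit(&tx, BATCH);

		/* Kick the kernel only when it asked to be woken. */
		if (xsk_ring_prod__needs_wakeup(&tx))
			sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT,
			       NULL, 0);
reap:
		/* Recycle sent frames from the completion ring. */
		done = xsk_ring_cons__peek(&cq, BATCH, &idx);
		if (done)
			xsk_ring_cons__release(&cq, done);
	}
}

The UDP/IP header construction and route lookup I mentioned happen
before the frames land in the UMEM, which is part of the extra CPU my
tool spends compared to testpmd-style blind forwarding.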
Thanks.

> >
> > Thanks
>
> > > > > What I noticed is that the hotspot is the driver writing the virtio
> > > > > descriptors. Because the device is in busy (polling) mode, there is a
> > > > > race between the driver and the device. So I modified the virtio core
> > > > > to update the avail idx lazily, and then the PPS can reach 10,000,000.
> > > >
> > > > Care to post a draft for this?
> > >
> > > Yes, I am thinking about this. But maybe that only works for split;
> > > the packed mode has some troubles.
> >
> > Ok.
> >
> > Thanks
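To make the "lazily updated avail idx" idea above concrete, the
direction is roughly the sketch below. This is only an illustration in
plain C with simplified stand-in structures; it is not the actual
virtio_ring.c change (which I have not posted yet), and RING_SIZE and
the point at which the batch is flushed are arbitrary choices here.

#include <stdint.h>

#define RING_SIZE 256

struct vring_desc  { uint64_t addr; uint32_t len; uint16_t flags, next; };
struct vring_avail { uint16_t flags, idx, ring[RING_SIZE]; };

struct split_vq {
	struct vring_desc  *desc;
	struct vring_avail *avail;
	uint16_t shadow_idx;	/* driver-private copy of avail->idx */
	uint16_t pending;	/* queued but not yet published */
};

/* Queue a buffer without touching the shared avail->idx field. */
static void vq_add(struct split_vq *vq, uint64_t addr, uint32_t len)
{
	uint16_t slot = (uint16_t)(vq->shadow_idx + vq->pending) % RING_SIZE;

	vq->desc[slot].addr   = addr;
	vq->desc[slot].len    = len;
	vq->avail->ring[slot] = slot;
	vq->pending++;
}

/* Publish the whole batch with a single release store. When the device
 * polls, every store to avail->idx bounces its cacheline between the
 * driver and the device, so writing it once per batch instead of once
 * per packet is where the win comes from. */
static void vq_flush(struct split_vq *vq)
{
	vq->shadow_idx += vq->pending;
	vq->pending = 0;
	__atomic_store_n(&vq->avail->idx, vq->shadow_idx, __ATOMIC_RELEASE);
}

Packed mode is harder because availability is signaled by per-descriptor
flags rather than a single index, so there is no lone avail->idx store
to defer.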
> > > Thanks.
> > > > > Thanks
> > > > > > Thanks
> > > > > > >
> > > > > > > ## maintain
> > > > > > >
> > > > > > > I am currently a reviewer for virtio-net. I commit to maintaining
> > > > > > > AF_XDP support in virtio-net.
> > > > > > >
> > > > > > > Please review.
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > v1:
> > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to
> > > > > > >        xsk: support tx
> > > > > > >     3. fix some warnings
> > > > > > >
> > > > > > > Xuan Zhuo (19):
> > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > >   virtio_net: independent directory
> > > > > > >   virtio_net: move to virtio_net.h
> > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > >   virtio_net: sq support premapped mode
> > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > >   virtio_net: update tx timeout record
> > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > >
> > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > >
> > > > > > > --
> > > > > > > 2.32.0.3.g01195cf9f