On Sat, Jun 30, 2018 at 3:03 PM Jesper Dangaard Brouer <brouer@xxxxxxxxxx> wrote:
>
> On Fri, 29 Jun 2018 23:33:58 -0700 xiangxia.m.yue@xxxxxxxxx wrote:
>
> > From: Tonghao Zhang <xiangxia.m.yue@xxxxxxxxx>
> >
> > This patch improves the guest receive and transmit performance.
> > On the handle_tx side, we poll the sock receive queue at the
> > same time; handle_rx does the same.
> >
> > We set poll-us=100us and use iperf3 to test
>
> Where/how do you configure poll-us=100us?
>
> Are you talking about /proc/sys/net/core/busy_poll?

No, we set poll-us in qemu, e.g.:

  -netdev tap,ifname=tap0,id=hostnet0,vhost=on,script=no,downscript=no,poll-us=100

> p.s. Nice performance boost! :-)
>
> > its bandwidth, and use netperf to test throughput and mean
> > latency. While the tests run, the vhost-net kthread of that
> > VM is always at 100% CPU. The commands are shown below.
> >
> > iperf3 -s -D
> > iperf3 -c IP -i 1 -P 1 -t 20 -M 1400
> >
> > or
> >
> > netserver
> > netperf -H IP -t TCP_RR -l 20 -- -O "THROUGHPUT,MEAN_LATENCY"
> >
> > host -> guest:
> > iperf3:
> > * With the patch:    27.0 Gbits/sec
> > * Without the patch: 14.4 Gbits/sec
> >
> > netperf (TCP_RR):
> > * With the patch:    48039.56 trans/s, 20.64us mean latency
> > * Without the patch: 46027.07 trans/s, 21.58us mean latency
> >
> > This patch also improves the guest transmit performance.
> >
> > guest -> host:
> > iperf3:
> > * With the patch:    27.2 Gbits/sec
> > * Without the patch: 24.4 Gbits/sec
> >
> > netperf (TCP_RR):
> > * With the patch:    47963.25 trans/s, 20.71us mean latency
> > * Without the patch: 45796.70 trans/s, 21.68us mean latency
> >
> > Signed-off-by: Tonghao Zhang <zhangtonghao@xxxxxxxxxxxxxxx>
>
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
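
[Editorial aside] For readers following along, the core idea in the commit
message ("on the handle_tx side, we poll the sock receive queue at the same
time") can be illustrated with a minimal user-space sketch of the control
flow. This is not the actual drivers/vhost/net.c code; the helpers
has_tx_work(), has_rx_work(), service_tx() and service_rx() are illustrative
stand-ins, and only the shape of the bidirectional busy-poll loop is meant to
correspond to the patch.

  /*
   * Minimal user-space sketch of the bidirectional busy-poll idea: while
   * handling TX, also poll for RX work (and vice versa) within a single
   * busy-poll budget, instead of waiting for a separate wakeup per
   * direction.  Helper names are illustrative stand-ins, not the real
   * drivers/vhost/net.c interfaces.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  static uint64_t now_us(void)
  {
          struct timespec ts;

          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000000ULL + (uint64_t)ts.tv_nsec / 1000;
  }

  /* Stand-ins: in the real driver these would inspect the virtqueues and
   * the tap socket's receive queue. */
  static bool has_tx_work(void) { return false; }
  static bool has_rx_work(void) { return false; }
  static void service_tx(void) { /* transmit guest buffers to the socket */ }
  static void service_rx(void) { /* deliver queued packets to the guest */ }

  /*
   * Busy-poll both directions for at most poll_us microseconds; only after
   * the budget expires would the real code go back to sleeping on
   * notifications.
   */
  static void busy_poll_both(uint64_t poll_us)
  {
          const uint64_t deadline = now_us() + poll_us;

          while (now_us() < deadline) {
                  if (has_tx_work())
                          service_tx();
                  if (has_rx_work())
                          service_rx();
          }
  }

  int main(void)
  {
          busy_poll_both(100);    /* mirrors the poll-us=100 setting above */
          printf("busy-poll window of 100us elapsed\n");
          return 0;
  }

The point of the sketch is only that one polling loop covers both directions
for the duration of the configured budget, which is consistent with the
single vhost-net kthread sitting at 100% CPU during the tests quoted above.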