Hello Xiaohui,

On Thu, 2010-07-29 at 19:14 +0800, xiaohui.xin@xxxxxxxxx wrote:
> The idea is simple: just pin the guest VM user space and then let the
> host NIC driver have the chance to DMA directly into it.
> The patches are based on the vhost-net backend driver. We add a device
> which provides proto_ops such as sendmsg/recvmsg to vhost-net, so it
> can send/recv directly to/from the NIC driver. A KVM guest that uses
> the vhost-net backend may bind any ethX interface on the host side to
> get copyless data transfer through the guest virtio-net frontend.

Since vhost-net already supports macvtap/tun backends, do you think it
would be better to implement zero copy in macvtap/tun rather than
introducing a new media passthrough device here?

> Our goal is to improve the bandwidth and reduce the CPU usage.
> Exact performance data will be provided later.

I did some vhost performance measurements over 10Gb ixgbe and found
that, to get consistent BW results, the SMP affinities of the
netperf/netserver, qemu, and vhost threads all have to be set (see the
CPU pinning sketch appended below).

I am looking forward to these results for the small message size
comparison. For large message sizes, the 10Gb ixgbe BW is already
reached by setting vhost SMP affinity with offloading support, so we
will see how much the CPU utilization can be reduced. Please provide
latency results as well.

I did some experiments with macvtap zero copy sendmsg, and what I found
is that the get_user_pages latency is pretty high (a sketch of that
page-pinning step is also appended below).

Thanks
Shirley
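
For reference, here is roughly how I read the proto_ops hookup described
in the patch summary above. This is only a sketch: the mp_* names are
mine, not the names used in the patches, the bodies are stubs, and the
sendmsg/recvmsg signatures are the ones struct proto_ops uses in the
current 2.6.3x kernels.

#include <linux/aio.h>
#include <linux/net.h>
#include <linux/socket.h>

/* Stub only: a real backend would pin the user buffers from
 * m->msg_iov here and hand the pages to the NIC driver for DMA
 * instead of copying them into an skb. */
static int mp_sendmsg(struct kiocb *iocb, struct socket *sock,
                      struct msghdr *m, size_t total_len)
{
        return total_len;
}

/* Stub only: a real backend would complete a receive into previously
 * pinned guest pages here. */
static int mp_recvmsg(struct kiocb *iocb, struct socket *sock,
                      struct msghdr *m, size_t total_len, int flags)
{
        return 0;
}

/* vhost-net only needs sendmsg/recvmsg from the backend socket. */
static const struct proto_ops mp_socket_ops = {
        .family  = AF_UNSPEC,
        .sendmsg = mp_sendmsg,
        .recvmsg = mp_recvmsg,
};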
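
The CPU pinning I mentioned for the BW runs is nothing special; from the
shell it is just taskset, and in C it amounts to a sched_setaffinity()
call like the standalone helper below (not part of any patch, just an
illustration of what was done to the netperf/netserver, qemu, and vhost
threads).

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Pin an existing process/thread to a single CPU: ./pin <pid> <cpu> */
int main(int argc, char **argv)
{
        cpu_set_t mask;
        pid_t pid;
        int cpu;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
                return 1;
        }
        pid = atoi(argv[1]);
        cpu = atoi(argv[2]);

        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);

        if (sched_setaffinity(pid, sizeof(mask), &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
        }
        return 0;
}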
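
And for the latency point, this is roughly the per-sendmsg page-pinning
step whose cost I was measuring. Again only a sketch: the helper name is
made up, and a real zero copy path would also have to keep the pages
pinned until the NIC completes the DMA.

#include <linux/errno.h>
#include <linux/mm.h>

/* Pin up to 'nr' user pages starting at user address 'base'.
 * Returns the number of pages pinned, or a negative errno.
 * It is this get_user_pages_fast() (or the mm-locked
 * get_user_pages()) call that shows up as high latency. */
static int pin_user_iov(unsigned long base, int nr, struct page **pages)
{
        int pinned;

        pinned = get_user_pages_fast(base, nr, 1 /* write */, pages);
        if (pinned < 0)
                return pinned;
        if (pinned < nr) {
                /* Partial pin: release what we got; the caller would
                 * fall back to the copying path. */
                while (pinned > 0)
                        put_page(pages[--pinned]);
                return -EFAULT;
        }
        return pinned;
}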