> It may be possible to make vmdq appear like an sr-iov capable device
> from userspace. sr-iov provides the userspace interfaces to allocate
> interfaces and assign mac addresses. To make it useful, you would have
> to handle tx multiplexing in the driver but that would be much easier to
> consume for kvm

What we have in mind is to support multiple net_dev structures, one per
queue pair of a VMDq adapter, and to present multiple MAC addresses to
user space, so that each MAC can be used by a guest.

What exactly does "tx multiplexing in the driver" mean?

Thanks
Xiaohui

-----Original Message-----
From: Anthony Liguori [mailto:anthony@xxxxxxxxxxxxx]
Sent: Tuesday, September 01, 2009 5:57 AM
To: Avi Kivity
Cc: Xin, Xiaohui; mst@xxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; kvm@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; mingo@xxxxxxx; linux-mm@xxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; hpa@xxxxxxxxx; gregory.haskins@xxxxxxxxx
Subject: Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

Avi Kivity wrote:
> On 08/31/2009 02:42 PM, Xin, Xiaohui wrote:
>> Hi, Michael
>> That's a great job. We are now working on supporting VMDq on KVM, and
>> since the VMDq hardware presents L2 sorting based on MAC addresses
>> and VLAN tags, our target is to implement a zero-copy solution using
>> VMDq. We started from the virtio-net architecture. What we want to
>> propose is to use AIO combined with direct I/O:
>> 1) Modify the virtio-net backend service in Qemu to submit aio
>>    requests composed from the virtqueue.
>> 2) Modify the TUN/TAP device to support aio operations and map the
>>    user-space buffers directly into the host kernel.
>> 3) Let a TUN/TAP device bind to a single rx/tx queue of the NIC.
>> 4) Modify the net_dev and skb structures to permit an allocated skb
>>    to use a directly mapped user-space payload buffer address rather
>>    than one allocated by the kernel.
>>
>> As zero copy is also your goal, we are interested in what's on your
>> mind, and would like to collaborate with you if possible.
>>
>
> One way to share the effort is to make vmdq queues available as normal
> kernel interfaces.

It may be possible to make vmdq appear like an sr-iov capable device
from userspace. sr-iov provides the userspace interfaces to allocate
interfaces and assign mac addresses. To make it useful, you would have
to handle tx multiplexing in the driver, but that would be much easier
to consume for kvm.

Regards,

Anthony Liguori
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html