Hi Michael,

I'll reserve individual patch review until the patches are in a mergeable state, but I do have some comments about the overall integration architecture.

Generally speaking, I think the integration is unnecessarily invasive. It adds things to the virtio infrastructure that shouldn't be there, like the irqfd/queuefd bindings. It also sneaks in things like raw backend support, which really isn't needed. I think we can do better.

Here's what I suggest: the long term goal should be to have a NetDevice interface that looks very much like virtio-net, but as an API, not an ABI. Roughly, it would look something like:

struct NetDevice {
    int add_xmit(NetDevice *dev, struct iovec *iov, int iovcnt, void *token);
    int add_recv(NetDevice *dev, struct iovec *iov, int iovcnt, void *token);
    void *get_xmit(NetDevice *dev);
    void *get_recv(NetDevice *dev);
    void kick(NetDevice *dev);
    ...
};

That gives us a better API for use with virtio-net, e1000, etc. Assuming we had this interface, I think a natural extension would be:

    int add_ring(NetDevice *dev, void *address);
    int add_kickfd(NetDevice *dev, int fd);

Slot management really should happen outside of the NetDevice structure. We'll need a slot notifier mechanism so that we can keep this up to date as things change.

vhost-net becomes a NetDevice. It can support things like the e1000 by doing ring translation behind the scenes. virtio-net can be fast pathed in the case that we're using KVM; otherwise, it would also rely on the ring translation.

N.B. in the case where vhost-net is fast pathed, it requires a different device in QEMU that uses a separate virtio transport. We should reuse as much code as possible, obviously, but it doesn't make sense to have all of the virtio-pci code and virtio-net code in place when we aren't using it.

All this said, I'm *not* suggesting you have to implement all of this to get vhost-net merged. Rather, I'm suggesting that we should try to structure the current vhost-net implementation to complement this architecture, assuming we all agree this is the sane thing to do. That means I would make the following changes to your series:

 - move vhost-net support to a VLANClientState backend.

 - do not introduce a raw socket backend. If for some reason you want to support both tap and raw, those should be options to the vhost-net backend.

 - when fast pathing with vhost-net, we should introduce interfaces to VLANClientState similar to add_ring and add_kickfd. They'll be very specific to vhost-net for now, but that's okay.

 - sort out the layering of vhost-net within the virtio infrastructure. vhost-net should really be its own qdev device. I don't see very much code reuse happening right now, so I don't understand why it's not that way currently.

Regards,

Anthony Liguori
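
P.S. To make the above a little more concrete, here's a rough sketch of what I have in mind, written as plain C with function pointers standing in for the API. The names and the little xmit helper are purely illustrative, not code that exists anywhere today:

/* Hypothetical sketch only: names, types, and signatures here are
 * illustrative of the proposed NetDevice API, not code from the series. */
#include <stddef.h>
#include <sys/uio.h>   /* struct iovec */

typedef struct NetDevice NetDevice;

struct NetDevice {
    /* queue an iovec for transmit/receive; token is handed back on completion */
    int (*add_xmit)(NetDevice *dev, struct iovec *iov, int iovcnt, void *token);
    int (*add_recv)(NetDevice *dev, struct iovec *iov, int iovcnt, void *token);

    /* reap the token of a completed transmit/receive, or NULL if none */
    void *(*get_xmit)(NetDevice *dev);
    void *(*get_recv)(NetDevice *dev);

    /* notify the backend that new buffers have been queued */
    void (*kick)(NetDevice *dev);

    /* fast-path extensions: hand the backend a shared ring and an eventfd
     * to kick it with, so vhost-net can bypass the add/get path entirely */
    int (*add_ring)(NetDevice *dev, void *address);
    int (*add_kickfd)(NetDevice *dev, int fd);
};

/* How a front end like e1000 might push a single linear packet through
 * the slow path (illustrative helper, not an existing QEMU function): */
static inline int net_device_xmit(NetDevice *dev, void *buf, size_t len,
                                  void *token)
{
    struct iovec iov = { .iov_base = buf, .iov_len = len };

    if (dev->add_xmit(dev, &iov, 1, token) < 0) {
        return -1;
    }
    dev->kick(dev);
    return 0;
}

In this model, vhost-net would implement add_ring/add_kickfd directly, a purely userspace backend would ignore them and only service the add/get path, and the front ends wouldn't need to know the difference.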