Hi Jason,

I have an additional comment regarding using vringh.

On Tue, Sep 3, 2019 at 6:42 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> Kind of, in order to address the above limitation, you probably want to
> implement a vringh based netdevice and driver. It will work like,
> instead of trying to represent a virtio-net device to endpoint,
> represent a new type of network device, it uses two vringh ring instead
> virtio ring. The vringh ring is usually used to implement the
> counterpart of virtio driver. The advantages are obvious:
>
> - no need to deal with two sets of features, config space etc.
> - network specific, from the point of endpoint linux, it's not a virtio
> device, no need to care about transport stuffs or embedding internal
> virtio-net specific data structures
> - reuse the exist codes (vringh) to avoid duplicated bugs, implementing
> a virtqueue is kind of challenge

With vringh.c, there is no easy way to interface with virtio_net.c. vringh.c
integrates nicely with vhost/net.c, but it is not easy to interface
vhost/net.c with the network stack of the endpoint kernel either. The vhost
drivers were not designed to create another suite of virtual devices in the
host kernel in the first place. If I write this interfacing code by hand, it
seems I will duplicate the work that virtio_net.c already does.

There are probably two further disadvantages. First, there will be two
layers of overhead: vhost/net.c uses vringh.c to channel data buffers into
sockets, which is the first layer, and the virtual network device would then
have to consume those sockets somehow, which adds a second layer. Second,
probing, initialization and de-initialization of the virtual net_device are
already non-trivial; I will likely end up copying this part almost verbatim
from virtio_net.c, so there will be even more duplicate code.

Thank you for your patience!

Best,
Haotian