On 2019/9/5 5:58 AM, Haotian Wang wrote:
> Hi Jason,
>
> I have an additional comment regarding using vring.
>
> On Tue, Sep 3, 2019 at 6:42 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>> Kind of, in order to address the above limitation, you probably want to
>> implement a vringh based netdevice and driver. It will work like this:
>> instead of trying to present a virtio-net device to the endpoint,
>> present a new type of network device that uses two vringh rings instead
>> of virtio rings. A vringh ring is usually used to implement the
>> counterpart of a virtio driver. The advantages are obvious:
>>
>> - no need to deal with two sets of features, config space etc.
>> - network specific; from the point of view of endpoint Linux it's not a
>>   virtio device, so there's no need to care about transport stuff or to
>>   embed internal virtio-net specific data structures
>> - reuse the existing code (vringh) to avoid duplicated bugs; implementing
>>   a virtqueue is kind of a challenge
>
> With vringh.c, there is no easy way to interface with virtio_net.c.
>
> vringh.c is linked with vhost/net.c nicely

Let me clarify: vhost_net doesn't use vringh at all (though there's a plan to switch to using vringh).

> but again it's not easy to
> interface vhost/net.c with the network stack of the endpoint kernel. The
> vhost drivers were not designed with the purpose of creating another
> suite of virtual devices in the host kernel in the first place. If I try
> to manually write code for this interfacing, it seems that I will do
> duplicate work that virtio_net.c does.

Let me explain:

- I'm not suggesting to use vhost_net, since it can only deal with userspace virtio rings.

- I suggest introducing a netdev that has vringh vrings associated with it. Vringh was designed to deal with virtio rings located in different types of memory. It currently supports userspace vrings and kernel vrings, but it should not be too hard to add support for e.g. an endpoint device that requires DMA or whatever other method to access the vring.
So it was by design to talk directly with e.g. a kernel virtio device.

- In your case, you can read the vring address from the virtio config space through the endpoint framework and then create the vringh. It's as simple as: create a netdev, read the vring address, and initialize the vringh. Then you can use the vringh helpers to get the iov, build an skb, etc. (similar to caif_virtio).

> There will be two more main disadvantages probably.
>
> Firstly, there will be two layers of overhead. vhost/net.c uses
> vringh.c to channel data buffers into some struct sockets. This is the
> first layer of overhead. That the virtual network device will have to
> use these sockets somehow adds another layer of overhead.

As I said, it doesn't work like vhost, and no socket is needed at all.

> Secondly, probing, initialization and de-initialization of the virtual
> network_device are already non-trivial. I'll likely copy this part
> almost verbatim from virtio_net.c in the end. So in the end, there will
> be more duplicate code.

It will be a new type of network device instead of a virtio device; you don't need to care about any virtio stuff other than vringh in your code. So it looks to me like it would be much simpler and more compact.

I'm not saying your method is no way to go, but you would have to deal with lots of other issues, as I described in my previous mail. What you want to achieve is:

1) Host (virtio-pci) <-> virtio ring <-> virtual eth device <-> virtio ring <-> Endpoint (virtio with customized config_ops)

What I suggest is:

2) Host (virtio-pci) <-> virtio ring <-> virtual eth device <-> vringh vring (virtio ring in the Host) <-> network device

The differences are:

- Complexity: In your proposal there will be two virtio devices and 4 virtqueues, which means you need to prepare two sets of features, config ops etc. And dealing with inconsistent features will be a pain. It may work for a simple case like a virtio-net device with only _F_MAC, but it would be hard to extend.
If we decide to go for vringh, there will be a single virtio device and 2 virtqueues. On the endpoint side there will be 2 vringh vrings (which actually point at the same virtqueues from the Host side) and a normal network device. There's no need to deal with inconsistency, since the vringh side basically serves as the device implementation; the feature negotiation is simply between the device (network device with vringh) and the driver (virtio-pci) from the view of the Linux running on the PCI Host.

- Maintainability: Your proposal adds a third path for dealing with virtio rings. We already have vhost and vringh; a third path would add a lot of maintenance overhead. My proposal reuses vringh, so no new path is needed.

- Layer violation: We want to hide the transport details from the device and make the virtio-net device usable without modification, but your code tries to poke at internal information like virtnet_info. My proposal is to have a new networking device that doesn't need to care about virtio at all.

It's not as hard as you might imagine to add a new type of netdev. I suggest taking a look at how caif_virtio is done; it should be helpful.

If you still decide to go with the two-virtio-device model, you probably need to:

- provide two sets of config space and features, and deal with inconsistency
- try to reuse the vringh code
- not refer to internal structures of virtio-net.c

But I recommend first taking a stab at the vringh method, which should be much simpler.

Thanks

> Thank you for your patience!
>
> Best,
> Haotian
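P.S. To make the vringh-plus-netdev suggestion concrete, here is a rough, non-compilable sketch of the endpoint side, in the spirit of caif_virtio's receive path. The vringh_*, vring_init() and core netdev calls are the real kernel APIs; everything else (myeth_*, the epf_* config readers, struct myeth_priv) is a hypothetical placeholder for whatever the endpoint framework actually provides:

```c
/* Sketch only, not compilable as-is. Assumes the Host driver has written
 * the ring's address into config space and the endpoint framework can map
 * it; epf_read_ring_addr()/epf_map() are hypothetical placeholders. */

struct myeth_priv {
	struct net_device *ndev;
	struct vringh rx_vrh;      /* Host's TX ring, driven via vringh */
	struct vringh_kiov riov;   /* iov over the readable descriptors */
};

static int myeth_init_ring(struct myeth_priv *priv, void *epf,
			   unsigned int num, u64 features)
{
	struct vring vr;
	void *ring = epf_map(epf, epf_read_ring_addr(epf)); /* hypothetical */

	/* Compute the desc/avail/used layout inside the mapped ring. */
	vring_init(&vr, num, ring, PAGE_SIZE);

	/* Let vringh_getdesc_kern() allocate the kiov as needed. */
	vringh_kiov_init(&priv->riov, NULL, 0);

	return vringh_init_kern(&priv->rx_vrh, features, num,
				false /* weak_barriers */,
				vr.desc, vr.avail, vr.used);
}

/* Receive path: pull packets the Host queued, hand them to the stack. */
static void myeth_rx(struct myeth_priv *priv)
{
	u16 head;

	/* 1 == got a descriptor chain, 0 == ring empty, <0 == error. */
	while (vringh_getdesc_kern(&priv->rx_vrh, &priv->riov, NULL,
				   &head, GFP_ATOMIC) > 0) {
		size_t len = vringh_kiov_length(&priv->riov);
		struct sk_buff *skb = netdev_alloc_skb(priv->ndev, len);

		if (!skb)
			break;

		/* Copy the packet out of the vring buffers into the skb. */
		vringh_iov_pull_kern(&priv->riov, skb_put(skb, len), len);
		skb->protocol = eth_type_trans(skb, priv->ndev);
		netif_rx(skb);

		/* Put the chain on the used ring so the Host can reclaim it. */
		vringh_complete_kern(&priv->rx_vrh, head, len);
	}
}
```

The transmit side would be roughly symmetric (vringh_iov_push_kern() into the Host's RX ring, then vringh_complete_kern()), and a real driver would also need the interrupt/doorbell plumbing that the endpoint framework dictates.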