On 2020/9/9 12:37 AM, Cornelia Huck wrote:
Then you need something that is functionally equivalent to virtio PCI,
which is actually the concept of vDPA (e.g. vDPA provides alternatives
if the queue_sel register is hard to implement in the EP).
It seems I really need to read up on vDPA more... do you have a pointer
for diving into this aspect of providing alternatives?
See vdpa_config_ops in include/linux/vdpa.h
Especially this part:
int (*set_vq_address)(struct vdpa_device *vdev,
                      u16 idx, u64 desc_area, u64 driver_area,
                      u64 device_area);
This means that a device (e.g. an endpoint device) for which the
virtio-pci layout is hard to implement can use any other register
layout or vendor-specific way to configure the virtqueue.
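
To make that concrete, here is a minimal sketch of such a vendor-specific
layout (the "my_ep" driver and all register names/offsets below are made
up for illustration, not an existing driver): the endpoint hardware
exposes one register block per virtqueue, so set_vq_address() can program
a queue directly, with no queue_sel indirection at all.

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/vdpa.h>

#define MY_EP_VQ_STRIDE		0x40	/* one register block per queue */
#define MY_EP_VQ_DESC_LO	0x00
#define MY_EP_VQ_DESC_HI	0x04
#define MY_EP_VQ_DRIVER_LO	0x08
#define MY_EP_VQ_DRIVER_HI	0x0c
#define MY_EP_VQ_DEVICE_LO	0x10
#define MY_EP_VQ_DEVICE_HI	0x14

struct my_ep_vdpa {
	struct vdpa_device vdpa;
	void __iomem *vq_regs;		/* base of the per-queue blocks */
};

static int my_ep_set_vq_address(struct vdpa_device *vdev, u16 idx,
				u64 desc_area, u64 driver_area,
				u64 device_area)
{
	struct my_ep_vdpa *ep = container_of(vdev, struct my_ep_vdpa, vdpa);
	void __iomem *vq = ep->vq_regs + idx * MY_EP_VQ_STRIDE;

	/* No queue_sel dance: idx directly picks the register block. */
	writel(lower_32_bits(desc_area), vq + MY_EP_VQ_DESC_LO);
	writel(upper_32_bits(desc_area), vq + MY_EP_VQ_DESC_HI);
	writel(lower_32_bits(driver_area), vq + MY_EP_VQ_DRIVER_LO);
	writel(upper_32_bits(driver_area), vq + MY_EP_VQ_DRIVER_HI);
	writel(lower_32_bits(device_area), vq + MY_EP_VQ_DEVICE_LO);
	writel(upper_32_bits(device_area), vq + MY_EP_VQ_DEVICE_HI);

	return 0;
}

static const struct vdpa_config_ops my_ep_vdpa_ops = {
	.set_vq_address	= my_ep_set_vq_address,
	/* remaining ops (set_vq_num, set_features, ...) elided */
};
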
"Virtio Over NTB" should anyways be a new transport.
Does that make any sense?
Yeah. In the approach I used, the initial features are hard-coded in
vhost-rpmsg (they are inherent to rpmsg). But when we have to use an
adapter layer (vhost used only for accessing the virtio ring, with
virtio drivers on both the frontend and the backend), the vhost side
should be configured, based on the functionality (e.g. rpmsg), with the
features to be presented to the virtio side, and that's why an
additional layer or additional APIs will be required, roughly along the
lines of the sketch below.
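
(Every name in this sketch is hypothetical and only shows the direction;
the exception is VIRTIO_RPMSG_F_NS, the real rpmsg name-service feature
bit, which is defined locally in drivers/rpmsg/virtio_rpmsg_bus.c.)

#include <linux/bits.h>
#include <linux/types.h>

#define VIRTIO_RPMSG_F_NS	0	/* rpmsg name-service announcements */

/* Hypothetical adapter-layer device: holds the feature bits that the
 * vhost side will present to the remote virtio frontend. */
struct vhost_adapter_dev {
	u64 device_features;
};

/* Proposed extra API: the function driver (rpmsg, net, ...) configures
 * the features instead of their being hard-coded in the vhost core. */
static int vhost_adapter_set_features(struct vhost_adapter_dev *adev,
				      u64 features)
{
	adev->device_features = features;
	return 0;
}

/* e.g. the rpmsg backend would do this in its probe path: */
static int rpmsg_backend_probe(struct vhost_adapter_dev *adev)
{
	return vhost_adapter_set_features(adev, BIT_ULL(VIRTIO_RPMSG_F_NS));
}
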
A question here: if we go with the vhost bus approach, does it mean the
virtio device can only be implemented in the EP's userspace?
Can we maybe implement an alternative bus as well that would allow us
to support different virtio device implementations (in addition to the
vhost bus + userspace combination)?
That should be fine, but I'm not quite sure that implementing the device
in the kernel (kthread) is a good approach.
Thanks