Re: [PATCH V3 0/7] mdev based hardware virtio offloading support

On Thu, Oct 17, 2019 at 09:42:53AM +0800, Jason Wang wrote:
> 
> > On 2019/10/15 10:37 PM, Stefan Hajnoczi wrote:
> > On Tue, Oct 15, 2019 at 11:37:17AM +0800, Jason Wang wrote:
> > > > On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
> > > > On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
> > > > > There is hardware that can do virtio datapath offloading while
> > > > > having its own control path. This series tries to implement a
> > > > > mdev based unified API to support using the kernel virtio driver
> > > > > to drive those devices. This is done by introducing a new mdev
> > > > > transport for virtio (virtio_mdev) that registers itself as a new
> > > > > kind of mdev driver. It then provides a unified way for the
> > > > > kernel virtio driver to talk with the mdev device implementation.
> > > > > 
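To make the transport idea concrete, the driver-side glue might be shaped
roughly like the sketch below: an mdev driver whose probe() wraps the mdev
in a virtio_device and hands it to the regular virtio core. The structure
layout, names, and the empty config-ops table are illustrative guesses,
not code from the series.

#include <linux/mdev.h>
#include <linux/slab.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Per-device glue: a virtio_device whose config ops forward to the mdev. */
struct virtio_mdev_device {
	struct virtio_device vdev;
	struct mdev_device *mdev;
};

/* Each callback here would translate a virtio config op into a call on
 * the parent's device ops (patch 5 defines the real callback set). */
static const struct virtio_config_ops virtio_mdev_config_ops;

static int virtio_mdev_probe(struct device *dev)
{
	struct mdev_device *mdev = mdev_from_dev(dev);
	struct virtio_mdev_device *vm_dev;

	vm_dev = kzalloc(sizeof(*vm_dev), GFP_KERNEL);
	if (!vm_dev)
		return -ENOMEM;

	vm_dev->mdev = mdev;
	vm_dev->vdev.dev.parent = dev;
	vm_dev->vdev.config = &virtio_mdev_config_ops;
	/* vdev.id.device / id.vendor would be queried from the parent. */

	/* Hand the device to the regular kernel virtio core; error and
	 * cleanup paths are elided in this sketch. */
	return register_virtio_device(&vm_dev->vdev);
}

/* Registered with mdev_register_driver(&virtio_mdev_driver, THIS_MODULE). */
static struct mdev_driver virtio_mdev_driver = {
	.name  = "virtio_mdev",
	.probe = virtio_mdev_probe,
	/* .remove would unregister the virtio device and free vm_dev. */
};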
> > > > > Though the series only contains kernel driver support, the goal
> > > > > is to make the transport generic enough to support userspace
> > > > > drivers. This means vhost-mdev [1] could be built on top as well
> > > > > by reusing the transport.
> > > > > 
> > > > > A sample driver is also implemented which simulates a virtio-net
> > > > > loopback ethernet device on top of vringh + workqueue. This could
> > > > > be used as a reference implementation for real hardware drivers.
> > > > > 
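As described, the simulator's datapath can be a work item that bounces
buffers from the TX vring into the RX vring via vringh. A condensed
sketch, with the device structure, buffer handling, and names invented
for illustration:

#include <linux/vringh.h>
#include <linux/workqueue.h>

struct sim_dev {
	struct work_struct work;
	struct vringh txvrh, rxvrh;	/* set up earlier via vringh_init_kern() */
	u8 buf[2048];			/* bounce buffer for one packet */
};

static void sim_loopback_work(struct work_struct *work)
{
	struct sim_dev *sim = container_of(work, struct sim_dev, work);
	struct kvec tx_kv[8], rx_kv[8];
	struct vringh_kiov riov, wiov;
	u16 txhead, rxhead;
	ssize_t len;

	vringh_kiov_init(&riov, tx_kv, ARRAY_SIZE(tx_kv));
	vringh_kiov_init(&wiov, rx_kv, ARRAY_SIZE(rx_kv));

	/* Pull one readable buffer off the TX ring... */
	if (vringh_getdesc_kern(&sim->txvrh, &riov, NULL, &txhead, GFP_ATOMIC) <= 0)
		return;
	len = vringh_iov_pull_kern(&riov, sim->buf, sizeof(sim->buf));
	vringh_complete_kern(&sim->txvrh, txhead, 0);

	/* ...and mirror it into a writable buffer on the RX ring. */
	if (len > 0 &&
	    vringh_getdesc_kern(&sim->rxvrh, NULL, &wiov, &rxhead, GFP_ATOMIC) > 0) {
		vringh_iov_push_kern(&wiov, sim->buf, len);
		vringh_complete_kern(&sim->rxvrh, rxhead, len);
	}
	/* A real driver would also fire the vq callbacks (interrupts) here. */
}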
> > > > > Since the mdev framework only supports VFIO devices and drivers
> > > > > right now, this series also extends it to support other types.
> > > > > This is done by introducing a class id for the device and pairing
> > > > > it with the id_table claimed by the driver. On top of that, this
> > > > > series also decouples device specific parent ops from the common
> > > > > ones.
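Binding would then work roughly as below; the helper and constant names
are guesses based on the description above, not necessarily the ones in
the patches:

/* Parent side: tag the mdev with a class when it is created. */
static int my_hw_create(struct kobject *kobj, struct mdev_device *mdev)
{
	/* ... allocate and set up per-device state ... */
	mdev_set_class(mdev, MDEV_CLASS_ID_VIRTIO);	/* assumed helper */
	return 0;
}

/* Driver side: only bind to mdevs whose class appears in the id table. */
static const struct mdev_class_id virtio_mdev_ids[] = {
	{ MDEV_CLASS_ID_VIRTIO },
	{ 0 },
};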
> > > > I was curious, so I took a quick look and posted comments.
> > > > 
> > > > I guess this driver runs inside the guest since it registers virtio
> > > > devices?
> > > 
> > > It could run in either the guest or the host, but the main focus is to
> > > run in the host so that we can use virtio drivers in containers.
> > > 
> > > 
> > > > If this is used with physical PCI devices that support datapath
> > > > offloading, then how are physical devices presented to the guest
> > > > without SR-IOV?
> > > 
> > > We will do control path mediation through vhost-mdev [1] and
> > > vhost-vfio [2]. Then we will present a fully virtio compatible
> > > ethernet device to the guest.
> > > 
> > > SR-IOV is not a must; any mdev device that implements the API defined
> > > in patch 5 can be used by this framework.
> > What I'm trying to understand is: if you want to present a virtio-pci
> > device to the guest (e.g. using vhost-mdev or vhost-vfio), then how is
> > that related to this patch series?
> 
> 
> This series introduces some infrastructure that will be used by vhost-mdev:
> 
> 1) allowing new types of mdev devices/drivers other than VFIO (through the
> class id and device ops)
> 
> 2) a set of virtio specific callbacks, defined in patch 5, that will be
> used by both vhost-mdev and virtio-mdev
> 
> Then vhost-mdev can be implemented on top: a new mdev class id that reuses
> the callbacks defined in 2). This way the parent can provide a single set
> of callbacks (device ops) to serve both the kernel virtio driver (through
> virtio-mdev) and a userspace virtio driver (through vhost-mdev).
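Concretely, the shared table would be a set of virtio-level callbacks
shaped roughly like this; the member list here is abridged and guessed
for illustration, and patch 5 has the authoritative definition:

/* Implemented once by the parent; virtio-mdev consumes it for in-kernel
 * virtio drivers, and vhost-mdev would consume the same table on behalf
 * of userspace drivers. */
struct virtio_mdev_device_ops {
	u64  (*get_features)(struct mdev_device *mdev);
	int  (*set_features)(struct mdev_device *mdev, u64 features);
	u8   (*get_status)(struct mdev_device *mdev);
	void (*set_status)(struct mdev_device *mdev, u8 status);
	int  (*set_vq_address)(struct mdev_device *mdev, u16 idx,
			       u64 desc_area, u64 driver_area, u64 device_area);
	void (*set_vq_num)(struct mdev_device *mdev, u16 idx, u32 num);
	void (*kick_vq)(struct mdev_device *mdev, u16 idx);
	void (*get_config)(struct mdev_device *mdev, unsigned int offset,
			   void *buf, unsigned int len);
	/* ... vq state and callbacks, config generation, device/vendor ID ... */
};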

Okay, thanks for explaining!

Stefan

