On Wed, Apr 14, 2021 at 3:35 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> On Wed, Mar 31, 2021 at 04:05:09PM +0800, Xie Yongji wrote:
> > This series introduces a framework, which can be used to implement
> > vDPA Devices in a userspace program. The work consists of two parts:
> > control path forwarding and data path offloading.
> >
> > In the control path, the VDUSE driver will make use of a message
> > mechanism to forward the config operations from the vdpa bus driver
> > to userspace. Userspace can use read()/write() to receive/reply to
> > those control messages.
> >
> > In the data path, the core work is mapping the dma buffer into the
> > VDUSE daemon's address space, which can be implemented in different
> > ways depending on the vdpa bus to which the vDPA device is attached.
> >
> > In the virtio-vdpa case, we implement an MMU-based on-chip IOMMU driver
> > with a bounce-buffering mechanism to achieve that. And in the vhost-vdpa
> > case, the dma buffer resides in a userspace memory region which can be
> > shared with the VDUSE userspace process by transferring the shmfd.
> >
> > The details of our use case are shown below:
> >
> > ------------------------     -------------------------  ----------------------------------------------
> > |       Container      |     |       QEMU(VM)        |  |               VDUSE daemon                 |
> > |      ---------       |     |  -------------------  |  | ------------------------- ---------------- |
> > |      |dev/vdx|       |     |  |/dev/vhost-vdpa-x|  |  | | vDPA device emulation | | block driver | |
> > ------------+-----------     -----------+-------------  -------------+----------------------+---------
> >             |                           |                            |                      |
> >             |                           |                            |                      |
> > ------------+---------------------------+----------------------------+----------------------+---------
> > |    | block device |           | vhost device  |             | vduse driver |         | TCP/IP |    |
> > |    -------+--------           --------+--------             -------+--------         -----+----    |
> > |           |                           |                            |                      |        |
> > | ----------+----------       ----------+-----------          -------+-------               |        |
> > | | virtio-blk driver |       |  vhost-vdpa driver |          | vdpa device |               |        |
> > | ----------+----------       ----------+-----------          -------+-------               |        |
> > |           |    virtio bus           |                            |                      |        |
> > |   --------+----+-----------           |                            |                      |        |
> > |                |                      |                            |                      |        |
> > |      ----------+----------            |                            |                      |        |
> > |      | virtio-blk device |            |                            |                      |        |
> > |      ----------+----------            |                            |                      |        |
> > |                |                      |                            |                      |        |
> > |     -----------+-----------           |                            |                      |        |
> > |     | virtio-vdpa driver  |           |                            |                      |        |
> > |     -----------+-----------           |                            |                      |        |
> > |                |                      |                            |    vdpa bus          |        |
> > |     -----------+----------------------+----------------------------+------------          |        |
> > |                                                                                        ---+---     |
> > -----------------------------------------------------------------------------------------| NIC |------
> >                                                                                          ---+---
> >                                                                                             |
> >                                                                                    ---------+---------
> >                                                                                    | Remote Storages |
> >                                                                                    -------------------
> This all looks quite similar to vhost-user-block except that one
> does not need any kernel support at all.
>
> So I am still scratching my head about its advantages over
> vhost-user-block.
>

It plays the same role as vhost-user-block in VM use cases.

>
> > We make use of it to implement a block device connecting to
> > our distributed storage, which can be used both in containers and
> > VMs. Thus, we can have a unified technology stack in these two cases.
>
> Maybe the container part is the answer. How does that stack look?
>

Yes, it enables containers to reuse the virtio software stack. We can
have one daemon that provides service to both containers and virtual
machines.

Thanks,
Yongji
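
For reference, the control-path forwarding described in the quoted cover
letter boils down to a read()/write() loop over the VDUSE character device:
the kernel forwards a config operation as a message, the daemon reads it,
handles it in the vDPA device emulation, and writes a reply back. The sketch
below only illustrates that flow; the message layout (ctrl_request/ctrl_reply)
and the fd setup are hypothetical placeholders, not the actual VDUSE UAPI.

/*
 * Minimal sketch of the control-path loop, assuming a hypothetical
 * message layout.  The real message structures and device setup come
 * from the VDUSE UAPI header and are not shown here.
 */
#include <stdint.h>
#include <unistd.h>

/* Hypothetical request/reply layout, for illustration only. */
struct ctrl_request {
	uint32_t request_id;	/* echoed back in the reply */
	uint32_t type;		/* e.g. get/set config, set status, ... */
	uint8_t  payload[256];	/* request-specific data */
};

struct ctrl_reply {
	uint32_t request_id;
	uint32_t result;	/* 0 on success */
	uint8_t  payload[256];
};

/* Forward one config operation from the kernel to the device emulation. */
static int handle_one_message(int dev_fd)
{
	struct ctrl_request req;
	struct ctrl_reply reply = { 0 };

	/* Receive the next config request forwarded by the VDUSE driver. */
	if (read(dev_fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
		return -1;

	reply.request_id = req.request_id;
	reply.result = 0;	/* device emulation fills in real results */

	/* Reply so the kernel side can complete the config operation. */
	if (write(dev_fd, &reply, sizeof(reply)) != (ssize_t)sizeof(reply))
		return -1;

	return 0;
}

A real daemon would dispatch on req.type to drive the vDPA device emulation
shown in the diagram; the data path (bounce buffers in the virtio-vdpa case,
or the shared shmfd region in the vhost-vdpa case) is set up separately.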