On Tue, Oct 20, 2020 at 1:16 AM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
On Mon, Oct 19, 2020 at 10:56:19PM +0800, Xie Yongji wrote:
> This series introduces a framework, which can be used to implement
> vDPA devices in a userspace program. The work consists of two parts:
> control path emulation and data path offloading.
>
> In the control path, the VDUSE driver makes use of a message
> mechanism to forward the actions (get/set features, get/set status,
> get/set config space and set virtqueue state) from the virtio-vdpa
> driver to userspace. Userspace can use read()/write() to receive
> and reply to those control messages.
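
To make the control-path flow a bit more concrete, below is a rough
sketch of what the userspace side could look like. The device node
path and the vduse_msg layout are made up for illustration only and
are not the actual VDUSE UAPI:

/* Illustrative only: the message layout and device path below are
 * assumptions for the sake of example, not the real VDUSE interface. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical control request/reply carried over read()/write(). */
struct vduse_msg {
    uint32_t type;       /* e.g. get/set features, status, config, vq state */
    uint32_t request_id; /* echoed back so replies can be matched           */
    uint64_t payload;    /* feature bits, status byte, vq state, ...        */
};

int main(void)
{
    int fd = open("/dev/vduse/vduse0", O_RDWR); /* assumed device path */

    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (;;) {
        struct vduse_msg req, resp;

        /* Block until the VDUSE driver forwards a control request. */
        if (read(fd, &req, sizeof(req)) != sizeof(req))
            break;

        resp = req;
        resp.payload = 0;  /* fill in the real answer for req.type here */

        /* Reply to the in-kernel VDUSE driver. */
        if (write(fd, &resp, sizeof(resp)) != sizeof(resp))
            break;
    }

    close(fd);
    return 0;
}
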
>
> In the data path, the VDUSE driver implements an MMU-based
> on-chip IOMMU driver which supports both direct mapping and
> indirect mapping with a bounce buffer. Userspace can then access
> that IOVA space via mmap(). In addition, the eventfd mechanism is
> used to trigger interrupts and forward virtqueue kicks.
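
Again purely as an illustration (not the actual interface): the mmap
offset/size and the way the kick/interrupt eventfds are obtained from
the driver are assumptions here; the sketch only shows the general
shape of the datapath loop:

/* Sketch of the datapath side. The mmap layout and how the two
 * eventfds are handed over by the VDUSE driver are assumptions. */
#include <stddef.h>
#include <sys/eventfd.h>
#include <sys/mman.h>

#define IOVA_MAP_SIZE (64UL << 20)  /* assumed size of the mapped IOVA region */

struct vq_ctx {
    void *iova_base;  /* userspace view of the device's IOVA space     */
    int kick_fd;      /* signalled by the kernel on a virtqueue kick   */
    int irq_fd;       /* written by userspace to trigger the interrupt */
};

/* Map the IOVA space exported by the VDUSE char device; offset 0 is an
 * assumption, the driver defines the real layout. */
static int map_iova_space(struct vq_ctx *ctx, int dev_fd)
{
    ctx->iova_base = mmap(NULL, IOVA_MAP_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, dev_fd, 0);
    return ctx->iova_base == MAP_FAILED ? -1 : 0;
}

/* Datapath loop: wait for a kick, process descriptors directly in the
 * mapped IOVA space, then raise the virtqueue interrupt. */
static void vq_poll_loop(struct vq_ctx *ctx)
{
    eventfd_t cnt;

    for (;;) {
        if (eventfd_read(ctx->kick_fd, &cnt) < 0)
            break;

        /* ... walk the available ring in ctx->iova_base and complete
         * the requests against the storage backend here ... */

        eventfd_write(ctx->irq_fd, 1);
    }
}
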
>
> The details and our use case are shown below:
>
>  ------------------------      ------------------------------------------------------------
> |          APP           |    |                            QEMU                            |
> |       ---------        |    |  --------------------   -------------------+<-->+------    |
> |       |dev/vdx|        |    |  | device emulation |   | virtio dataplane |    | BDS |    |
>  -----------+------------      -----------+----------------------+-----------------+-------
>             |                             |                      |                 |
>             |                             | emulating            | offloading      |
> ------------+-----------------------------+----------------------+-----------------+----------
> |   | block device |               | vduse driver |       | vdpa device |     | TCP/IP |     |
> |   --------+-------               -------+--------       -------+-------     -----+----     |
> |           |                             |                      |                 |         |
> |           |                             |                      |                 |         |
> | ----------+----------        -----------+----------            |                 |         |
> | | virtio-blk driver |        | virtio-vdpa driver |            |                 |         |
> | ----------+----------        -----------+----------            |                 |         |
> |           |                             |                      |                 |         |
> |           |                             ------------------------                 |         |
> |            -----------------------------------------------------              ---+---      |
> --------------------------------------------------------------------------------| NIC |-------
>                                                                                  ---+---
>                                                                                     |
>                                                                            ---------+---------
>                                                                            | Remote Storages |
>                                                                            -------------------
> We make use of it to implement a block device connecting to
> our distributed storage, which can be used in containers and
> on bare metal.
> What is not exactly clear is what the APP above is doing.
> Taking virtio-blk requests and sending them over the network
> in some proprietary way?
No, the APP doesn't need to know the details of virtio-blk. Maybe
replacing "APP" with "Container" here would be clearer. Our purpose is
to make virtio devices available to containers and bare metal, so that
we can reuse the VM technology stack to provide services, e.g. SPDK's
remote bdev, ovs-dpdk and so on.
> Compared with the qemu-nbd solution, this solution has
> higher performance, and we can have a unified technology stack
> for remote storage in both VMs and containers.
>
> To test it with a host disk (e.g. /dev/sdx):
>
> $ qemu-storage-daemon \
> --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
> --monitor chardev=charmonitor \
> --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/sdx,node-name=disk0 \
> --export vduse-blk,id=test,node-name=disk0,writable=on,vduse-id=1,num-queues=16,queue-size=128
>
> The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
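
Once the virtio-vdpa driver binds the exported device, a block device
node should show up on the host. A trivial way to sanity-check it from
userspace could look like the following; the /dev/vda name is only an
assumption, use whatever node actually appears:

/* Read the first 4 KiB from the VDUSE-backed disk; /dev/vda is an
 * assumed name, the actual node depends on enumeration order. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/dev/vda", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("read 4 KiB from the VDUSE-backed block device\n");
    close(fd);
    return 0;
}
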
>
> Future work:
> - Improve performance (e.g. zero-copy implementation in the datapath)
> - Config interrupt support
> - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
> How does this driver compare with vhost-user-blk (which doesn't need kernel support)?
We want to implement a block device rather than a virtio-blk
dataplane. And with this driver's help, the vhost-user-blk process
could provide storage services to all APPs on the host.
Thanks,
Yongji