On 2020/10/19 10:56 PM, Xie Yongji wrote:
This series introduces a framework which can be used to implement
vDPA devices in a userspace program. The work consists of two parts:
control path emulation and data path offloading.

In the control path, the VDUSE driver makes use of a message
mechanism to forward actions (get/set features, get/set status,
get/set config space and set virtqueue state) from the virtio-vdpa
driver to userspace. Userspace can use read()/write() to
receive/reply to those control messages.
In the data path, the VDUSE driver implements an MMU-based
on-chip IOMMU driver which supports both direct mapping and
indirect mapping with a bounce buffer. Userspace can then access
that IOVA space via mmap(). Besides, the eventfd mechanism is used
to trigger interrupts and forward virtqueue kicks.
This is pretty interesting!
For vhost-vdpa it should work, but for virtio-vdpa I think we should
deal carefully with the IOMMU/DMA ops stuff.
I notice that neither dma_map nor set_map is implemented in
vduse_vdpa_config_ops, which means you want to let vhost-vDPA deal
with the IOMMU domain stuff. Any reason for doing that?
The reasons for the question are:
1) You've implemented an on-chip IOMMU driver but don't expose it to
the generic IOMMU layer (or the generic IOMMU layer may need some
extension to support this)
2) We will probably remove the IOMMU domain management from vhost-vDPA
and move it to the device (parent).
So if it's possible, please implement either set_map() or
dma_map()/dma_unmap(); this may align with our future goal and may
speed up the development.
Btw, it would be helpful to give even more details on how the on-chip
IOMMU driver is implemented.
The details and our user case is shown below:
------------------------     ----------------------------------------------------------
|         APP          |     |                          QEMU                          |
|       ---------      |     | --------------------   -------------------+<-->+------ |
|       |dev/vdx|      |     | | device emulation |   | virtio dataplane |    | BDS | |
------------+-----------     -----------+----------------------+-----------------+-----
            |                           |                      |                 |
            |                           | emulating            | offloading      |
------------+---------------------------+----------------------+-----------------+-------
|    | block device |            | vduse driver |       | vdpa device |      | TCP/IP | |
|    -------+--------            -------+--------       -------+-------      ----+----- |
|           |                           |                      |                 |      |
| ----------+----------       ----------+-----------           |                 |      |
| | virtio-blk driver |       | virtio-vdpa driver |           |                 |      |
| ----------+----------       ----------+-----------           |                 |      |
|           |                           |                      |                 |      |
|           -----------------------------                      |                 |      |
|                                                                             ---+---  |
------------------------------------------------------------------------------| NIC |----
                                                                              ---+---
                                                                                 |
                                                                        ---------+---------
                                                                        | Remote Storages |
                                                                        -------------------
The figure is not very clear to me on the following points:
1) If the device emulation and the virtio dataplane are both implemented
in QEMU, what's the point of doing this? I thought the device should be
a remote process?
2) It would be better to draw a vDPA bus somewhere to help people
understand the architecture.
3) For the "offloading" I guess it should be done via vhost-vDPA, so
it's better to draw a vhost-vDPA block there.
We make use of it to implement a block device connecting to
our distributed storage, which can be used in containers and
on bare metal. Compared with the qemu-nbd solution, this solution
has higher performance, and we can have a unified technology stack
for remote storage in VMs and containers.
To test it with a host disk (e.g. /dev/sdx):
$ qemu-storage-daemon \
--chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
--monitor chardev=charmonitor \
--blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/sdx,node-name=disk0 \
--export vduse-blk,id=test,node-name=disk0,writable=on,vduse-id=1,num-queues=16,queue-size=128
The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
Future work:
- Improve performance (e.g. zero copy implementation in datapath)
- Config interrupt support
- Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
Right, a library will be very useful.
Thanks
Xie Yongji (4):
mm: export zap_page_range() for driver use
vduse: Introduce VDUSE - vDPA Device in Userspace
vduse: grab the module's references until there is no vduse device
vduse: Add memory shrinker to reclaim bounce pages
drivers/vdpa/Kconfig | 8 +
drivers/vdpa/Makefile | 1 +
drivers/vdpa/vdpa_user/Makefile | 5 +
drivers/vdpa/vdpa_user/eventfd.c | 221 ++++++
drivers/vdpa/vdpa_user/eventfd.h | 48 ++
drivers/vdpa/vdpa_user/iova_domain.c | 488 ++++++++++++
drivers/vdpa/vdpa_user/iova_domain.h | 104 +++
drivers/vdpa/vdpa_user/vduse.h | 66 ++
drivers/vdpa/vdpa_user/vduse_dev.c | 1081 ++++++++++++++++++++++++++
include/uapi/linux/vduse.h | 85 ++
mm/memory.c | 1 +
11 files changed, 2108 insertions(+)
create mode 100644 drivers/vdpa/vdpa_user/Makefile
create mode 100644 drivers/vdpa/vdpa_user/eventfd.c
create mode 100644 drivers/vdpa/vdpa_user/eventfd.h
create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
create mode 100644 drivers/vdpa/vdpa_user/vduse.h
create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
create mode 100644 include/uapi/linux/vduse.h