Re: [PATCH v6 10/10] Documentation: Add documentation for VDUSE

On 2021/4/15 3:19 PM, Stefan Hajnoczi wrote:
On Thu, Apr 15, 2021 at 01:38:37PM +0800, Yongji Xie wrote:
On Wed, Apr 14, 2021 at 10:15 PM Stefan Hajnoczi <stefanha@xxxxxxxxxx> wrote:
On Wed, Mar 31, 2021 at 04:05:19PM +0800, Xie Yongji wrote:
VDUSE (vDPA Device in Userspace) is a framework to support
implementing software-emulated vDPA devices in userspace. This
document is intended to clarify the VDUSE design and usage.

Signed-off-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
---
  Documentation/userspace-api/index.rst |   1 +
  Documentation/userspace-api/vduse.rst | 212 ++++++++++++++++++++++++++++++++++
  2 files changed, 213 insertions(+)
  create mode 100644 Documentation/userspace-api/vduse.rst
Just looking over the documentation briefly (I haven't studied the code
yet)...

Thank you!

+How VDUSE works
+---------------
+Each userspace vDPA device is created by the VDUSE_CREATE_DEV ioctl on
+the character device (/dev/vduse/control). Then a device file with the
+specified name (/dev/vduse/$NAME) will appear, which can be used to
+implement the userspace vDPA device's control path and data path.
These steps are taken after sending the VDPA_CMD_DEV_NEW netlink
message? (Please consider reordering the documentation to make it clear
what the sequence of steps are.)

No, VDUSE devices should be created before sending the
VDPA_CMD_DEV_NEW netlink messages which might produce I/Os to VDUSE.
I see. Please include an overview of the steps before going into detail.
Something like:

   VDUSE devices are started as follows:

   1. Create a new VDUSE instance with ioctl(VDUSE_CREATE_DEV) on
      /dev/vduse/control.

   2. Begin processing VDUSE messages from /dev/vduse/$NAME. The first
      messages will arrive while attaching the VDUSE instance to vDPA.

   3. Send the VDPA_CMD_DEV_NEW netlink message to attach the VDUSE
      instance to vDPA.

   VDUSE devices are stopped as follows:

   ...

+     static int netlink_add_vduse(const char *name, int device_id)
+     {
+             struct nl_sock *nlsock;
+             struct nl_msg *msg;
+             int famid;
+
+             nlsock = nl_socket_alloc();
+             if (!nlsock)
+                     return -ENOMEM;
+
+             if (genl_connect(nlsock))
+                     goto free_sock;
+
+             famid = genl_ctrl_resolve(nlsock, VDPA_GENL_NAME);
+             if (famid < 0)
+                     goto close_sock;
+
+             msg = nlmsg_alloc();
+             if (!msg)
+                     goto close_sock;
+
+             if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, famid, 0, 0,
+                 VDPA_CMD_DEV_NEW, 0))
+                     goto nla_put_failure;
+
+             NLA_PUT_STRING(msg, VDPA_ATTR_DEV_NAME, name);
+             NLA_PUT_STRING(msg, VDPA_ATTR_MGMTDEV_DEV_NAME, "vduse");
+             NLA_PUT_U32(msg, VDPA_ATTR_DEV_ID, device_id);
What are the permission/capability requirements for VDUSE?

For now I think we need privileged permissions (root user), because the
userspace daemon is able to directly access the available ring, used ring,
and descriptor table in the kernel driver.
Please state this explicitly at the start of the document. Existing
interfaces like FUSE are designed to avoid trusting userspace.


There's a subtle difference here. VDUSE presents a device to the kernel, which means the IOMMU is probably the only thing preventing a malicious device.


Therefore
people might think the same is the case here. It's critical that people
are aware of this before deploying VDUSE with virtio-vdpa.

We should probably pause here and think about whether it's possible to
avoid trusting userspace. Even if it takes some effort and costs some
performance it would probably be worthwhile.


Since the bounce buffer is used, the only attack surface is the coherent area. If we want to enforce stronger isolation, we need to use a shadow virtqueue (which I proposed in an earlier version) in this case. But I'm not sure it's worth doing that.



Is the security situation different with vhost-vdpa? In that case it
seems more likely that the host kernel doesn't need to trust the
userspace VDUSE device.

Regarding privileges in general: userspace VDUSE processes shouldn't
need to run as root. The VDUSE device lifecycle will require privileges
to attach vhost-vdpa and virtio-vdpa devices, but the actual userspace
process that emulates the device should be able to run unprivileged.
Emulated devices are an attack surface and even if you are comfortable
with running them as root in your specific use case, it will be an issue
as soon as other people want to use VDUSE and could give VDUSE a
reputation for poor security.


In this case, I think it works like other char devices:

- a privileged process creates and destroys the VDUSE device
- the fd is passed via SCM_RIGHTS to an unprivileged process that implements the device



How does VDUSE interact with namespaces?

Not sure I get your point here. Do you mean how the emulated vDPA
device interacts with namespaces? This should work like hardware vDPA
devices do. The VDUSE daemon can reside outside the namespace of a
container which uses the vDPA device.
Can VDUSE devices run inside containers? Are /dev/vduse/$NAME and vDPA
device names global?


I think they're global; we can add namespace support on top.



What is the meaning of VDPA_ATTR_DEV_ID? I don't see it in Linux
v5.12-rc6 drivers/vdpa/vdpa.c:vdpa_nl_cmd_dev_add_set_doit().

It means the device id (e.g. VIRTIO_ID_BLOCK) of the vDPA device and
can be found in include/uapi/linux/vdpa.h.
VDPA_ATTR_DEV_ID is only used by VDPA_CMD_DEV_GET in Linux v5.12-rc6,
not by VDPA_CMD_DEV_NEW.

The example in this document uses VDPA_ATTR_DEV_ID with
VDPA_CMD_DEV_NEW. Is the example outdated?

+MMU-based IOMMU Driver
+----------------------
+The VDUSE framework implements an MMU-based on-chip IOMMU driver to support
+dynamically mapping kernel DMA buffers into a userspace iova region.
+This is mainly designed for the virtio-vdpa case (kernel virtio drivers).
+
+The basic idea behind this driver is to treat the MMU (VA->PA) as an IOMMU
+(IOVA->PA). The driver sets up MMU mappings instead of IOMMU mappings for the
+DMA transfer, so that the userspace process is able to use its virtual
+addresses to access the DMA buffers in the kernel.
+
+To avoid security issues, a bounce-buffering mechanism is introduced to
+prevent userspace from accessing the original buffer directly, which may
+contain other kernel data. During mapping and unmapping, the driver copies
+the data between the original buffer and the bounce buffer, depending on the
+direction of the transfer. The bounce-buffer addresses are mapped into the
+user address space instead of the original ones.
Is mmap(2) the right interface if memory is not actually shared? Why not
just use pread(2)/pwrite(2) to make the copy explicit? That way the copy
semantics are clear. For example, don't expect to be able to busy wait
on the memory, because changes will not be visible to the other side.

(I guess I'm missing something here and that mmap(2) is the right
approach, but maybe this documentation section can be clarified.)
It's for performance considerations on the one hand. We might need to
call pread(2)/pwrite(2) multiple times for each request.
Userspace can keep page-sized pread() buffers around to avoid additional
syscalls during a request.


I'm not sure I get it here. But the length of a request is not necessarily PAGE_SIZE.



mmap() access does reduce the number of syscalls, but it also introduces
page faults (effectively doing the page-sized pread() I mentioned
above).


You can access the data directly once the page has already been faulted in. So mmap() should be much faster in this case.



It's not obvious to me that there is a fundamental difference between
the two approaches in terms of performance.

On the other
hand, we can handle the virtqueue in a unified way for both vhost-vdpa
case and virtio-vdpa case. Otherwise, userspace daemon needs to know
which iova ranges need to be accessed with pread(2)/pwrite(2). And in
the future, we might be able to avoid bouncing in some cases.
Ah, I see. So bounce buffers are not used for vhost-vdpa?


Yes, VDUSE can pass different fds to userspace for mmap().

Thanks



Stefan