Re: [PATCH v7 00/12] Introduce VDUSE - vDPA Device in Userspace

On Tue, May 25, 2021 at 2:48 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> On Tue, May 25, 2021 at 02:40:57PM +0800, Jason Wang wrote:
> >
> > On 2021/5/20 5:06 PM, Yongji Xie wrote:
> > > On Thu, May 20, 2021 at 2:06 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > > > On Mon, May 17, 2021 at 05:55:01PM +0800, Xie Yongji wrote:
> > > > > This series introduces a framework which can be used to implement
> > > > > vDPA devices in a userspace program. The work consists of two parts:
> > > > > control path forwarding and data path offloading.
> > > > >
> > > > > In the control path, the VDUSE driver will make use of a message
> > > > > mechanism to forward config operations from the vdpa bus driver
> > > > > to userspace. Userspace can use read()/write() to receive and reply
> > > > > to those control messages.
> > > > >
> > > > > In the data path, the core job is mapping the DMA buffer into the
> > > > > VDUSE daemon's address space, which can be implemented in different
> > > > > ways depending on the vdpa bus to which the vDPA device is attached.
> > > > >
> > > > > In the virtio-vdpa case, we implement an MMU-based on-chip IOMMU driver
> > > > > with a bounce-buffering mechanism to achieve that. In the vhost-vdpa case,
> > > > > the DMA buffer resides in a userspace memory region which can be shared
> > > > > with the VDUSE userspace process by transferring the shmfd.
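
For the userspace side of that mapping, a rough sketch (based on the
VDUSE_IOTLB_GET_FD ioctl and struct vduse_iotlb_entry as they appear in the
mainline uapi; the details in this series may differ):

    /* Sketch only: obtain an fd backing an IOVA range and mmap() it.
     * In the virtio-vdpa case this fd refers to the bounce buffer; in
     * the vhost-vdpa case it is the shmfd passed in from the VMM. */
    #include <linux/vduse.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *map_iova_range(int dev_fd, uint64_t iova,
                                uint64_t *start, uint64_t *last)
    {
            struct vduse_iotlb_entry entry = {};
            void *addr;
            int fd;

            entry.start = iova;
            entry.last = iova;
            fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
            if (fd < 0)
                    return NULL;

            *start = entry.start;
            *last = entry.last;
            addr = mmap(NULL, entry.last - entry.start + 1,
                        entry.perm == VDUSE_ACCESS_RO ?
                        PROT_READ : PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, entry.offset);
            close(fd);

            /* a buffer at IOVA 'iova' then lives at addr + (iova - *start) */
            return addr == MAP_FAILED ? NULL : addr;
    }
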
> > > > >
> > > > > The details and our use case are shown below:
> > > > >
> > > > > ------------------------    -------------------------   ----------------------------------------------
> > > > > |            Container |    |              QEMU(VM) |   |                               VDUSE daemon |
> > > > > |       ---------      |    |  -------------------  |   | ------------------------- ---------------- |
> > > > > |       |dev/vdx|      |    |  |/dev/vhost-vdpa-x|  |   | | vDPA device emulation | | block driver | |
> > > > > ------------+-----------     -----------+------------   -------------+----------------------+---------
> > > > >              |                           |                            |                      |
> > > > >              |                           |                            |                      |
> > > > > ------------+---------------------------+----------------------------+----------------------+---------
> > > > > |    | block device |           |  vhost device |            | vduse driver |          | TCP/IP |    |
> > > > > |    -------+--------           --------+--------            -------+--------          -----+----    |
> > > > > |           |                           |                           |                       |        |
> > > > > | ----------+----------       ----------+-----------         -------+-------                |        |
> > > > > | | virtio-blk driver |       |  vhost-vdpa driver |         | vdpa device |                |        |
> > > > > | ----------+----------       ----------+-----------         -------+-------                |        |
> > > > > |           |      virtio bus           |                           |                       |        |
> > > > > |   --------+----+-----------           |                           |                       |        |
> > > > > |                |                      |                           |                       |        |
> > > > > |      ----------+----------            |                           |                       |        |
> > > > > |      | virtio-blk device |            |                           |                       |        |
> > > > > |      ----------+----------            |                           |                       |        |
> > > > > |                |                      |                           |                       |        |
> > > > > |     -----------+-----------           |                           |                       |        |
> > > > > |     |  virtio-vdpa driver |           |                           |                       |        |
> > > > > |     -----------+-----------           |                           |                       |        |
> > > > > |                |                      |                           |    vdpa bus           |        |
> > > > > |     -----------+----------------------+---------------------------+------------           |        |
> > > > > |                                                                                        ---+---     |
> > > > > -----------------------------------------------------------------------------------------| NIC |------
> > > > >                                                                                           ---+---
> > > > >                                                                                              |
> > > > >                                                                                     ---------+---------
> > > > >                                                                                     | Remote Storages |
> > > > >                                                                                     -------------------
> > > > >
> > > > > We make use of it to implement a block device connecting to
> > > > > our distributed storage, which can be used both in containers and
> > > > > VMs. Thus, we can have a unified technology stack in these two cases.
> > > > >
> > > > > To test it with null-blk:
> > > > >
> > > > >    $ qemu-storage-daemon \
> > > > >        --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
> > > > >        --monitor chardev=charmonitor \
> > > > >        --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
> > > > >        --export type=vduse-blk,id=test,node-name=disk0,writable=on,name=vduse-null,num-queues=16,queue-size=128
> > > > >
> > > > > The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
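
Creating the export only instantiates the VDUSE device; it still has to be
bound to a vdpa bus driver on the host before /dev/vdx or /dev/vhost-vdpa-x
shows up. With the mainline vdpa management interface that step looks roughly
like the following (the exact procedure for this series may differ):

    $ modprobe virtio-vdpa    # or vhost-vdpa for the VM case
    $ vdpa dev add name vduse-null mgmtdev vduse
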
> > > > >
> > > > > To make userspace VDUSE processes such as qemu-storage-daemon able to
> > > > > run unprivileged, we did some work on the virtio drivers to avoid trusting
> > > > > the device, including:
> > > > >
> > > > >    - validating the device status:
> > > > >
> > > > >      * https://lore.kernel.org/lkml/20210517093428.670-1-xieyongji@xxxxxxxxxxxxx/
> > > > >
> > > > >    - validating the used length:
> > > > >
> > > > >      * https://lore.kernel.org/lkml/20210517090836.533-1-xieyongji@xxxxxxxxxxxxx/
> > > > >
> > > > >    - validating the device config:
> > > > >
> > > > >      * patch 4 ("virtio-blk: Add validation for block size in config space"), illustrated by the sketch after this list
> > > > >
> > > > >    - validating the device response:
> > > > >
> > > > >      * patch 5 ("virtio_scsi: Add validation for residual bytes from response")
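
As an illustration of the kind of check patch 4 adds, a rough sketch (not the
actual patch) of validating the block size read from an untrusted virtio-blk
device's config space, as it could appear inside virtblk_probe():

    /* Illustrative sketch only: refuse a blk_size that an untrusted
     * device could use to confuse the block layer. */
    if (virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE)) {
            u32 blk_size;

            virtio_cread(vdev, struct virtio_blk_config, blk_size, &blk_size);
            if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE ||
                !is_power_of_2(blk_size)) {
                    dev_err(&vdev->dev, "invalid block size: %u\n", blk_size);
                    return -EINVAL;
            }
            blk_queue_logical_block_size(q, blk_size);
    }
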
> > > > >
> > > > > Since I'm not sure whether I'm missing something during auditing, especially
> > > > > in some virtio device drivers that I'm not familiar with, for now we only
> > > > > support emulating a few vDPA devices by default: the virtio-net, virtio-blk,
> > > > > virtio-scsi and virtio-fs devices. This limitation can help to reduce
> > > > > security risks.
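
A hypothetical sketch of how such a default allowlist could look on the VDUSE
side (the helper and parameter declarations here are made up for illustration;
'allow_unsafe_device_emulation' is the knob mentioned further down):

    /* Hypothetical sketch: only permit the audited virtio device types
     * by default, unless the sysadmin opts in to everything. */
    #include <linux/module.h>
    #include <linux/virtio_ids.h>

    static bool allow_unsafe_device_emulation;
    module_param(allow_unsafe_device_emulation, bool, 0444);

    static bool vduse_dev_id_allowed(u32 device_id)
    {
            if (allow_unsafe_device_emulation)
                    return true;

            switch (device_id) {
            case VIRTIO_ID_NET:
            case VIRTIO_ID_BLOCK:
            case VIRTIO_ID_SCSI:
            case VIRTIO_ID_FS:
                    return true;
            default:
                    return false;
            }
    }
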
> > > > I suspect there are a lot of assumptions even with these 4.
> > > > Just what are the security assumptions and guarantees here?
> >
> >
> > Note that VDUSE is not the only device that may suffer from this, here're
> > two others:
> >
> > 1) Encrypted VM
>
> Encrypted VMs are generally understood not to be fully
> protected from attacks by a malicious hypervisor. For example
> a DoS by a hypervisor is currently trivial.

Right, but I mainly meant the emulated virtio-net device in the case
of an encrypted VM. We should not leak information to the
device/hypervisor.

>
> > 2) Smart NICs
>
> More or less the same thing.

In my opinion, this is more similar to VDUSE. Without an encrypted VM,
we trust the hypervisor but not the device, so a DoS from the device should
be eliminated.

Thanks

>
>
> >
> > > The attack surface from a virtio device is limited with the IOMMU enabled.
> > > It should be possible to avoid security risks if we validate all data
> > > coming from the device, such as the config space and used length, in the
> > > device driver.
> > >
> > > > E.g. it seems pretty clear that exposing a malformed FS
> > > > to a random kernel config can cause untold mischief.
> > > >
> > > > Things like virtnet_send_command are also an easy way for
> > > > the device to DOS the kernel.
> >
> >
> > I think virtnet_send_command() needs to use interrupts instead of
> > polling.
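
A very rough sketch of that idea (illustrative only; the 'ctrl_done' completion
field and the callback name are hypothetical, not part of the current struct
virtnet_info): let the ctrl-vq callback wake the waiter instead of busy-polling
virtqueue_get_buf() in a loop.

    /* Illustrative sketch only. */
    static void virtnet_cvq_done(struct virtqueue *cvq)
    {
            struct virtnet_info *vi = cvq->vdev->priv;

            complete(&vi->ctrl_done);      /* hypothetical field */
    }

    /* ...and in virtnet_send_command(), after virtqueue_kick(vi->cvq): */
    wait_for_completion(&vi->ctrl_done);
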
> >
> > Thanks
> >
> >
> > > > And before you try to add
> > > > an arbitrary timeout there - please don't,
> > > > the fix is moving things that must be guaranteed into kernel
> > > > and making things that are not guaranteed asynchronous.
> > > > Right now there are some things that happen with locks taken,
> > > > where if we don't wait for device we lose the ability to report failures
> > > > to userspace. E.g. all kind of netlink things are like this.
> > > > One can think of a bunch of ways to address this, this
> > > > needs to be discussed with the relevant subsystem maintainers.
> > > >
> > > >
> > > > If I were you I would start with one type of device, and as simple one
> > > > as possible.
> > > >
> > > Makes sense to me. The virtio-blk device might be a good start. We
> > > already have some existing interface like NBD to do similar things.
> > >
> > > >
> > > > > When a sysadmin trusts the userspace process enough, they can relax
> > > > > the limitation with an 'allow_unsafe_device_emulation' module parameter.
> > > > That's not a great security interface. It's a global, module-specific knob
> > > > that just allows any userspace to emulate anything at all.
> > > > Coming up with a reasonable interface isn't going to be easy.
> > > > For now maybe just have people patch their kernels if they want to
> > > > move fast and break things.
> > > >
> > > OK. A reasonable interface can be added if we need it in the future.
> > >
> > > Thanks,
> > > Yongji
>




