Re: [External] Re: [RFC 0/4] Introduce VDUSE - vDPA Device in Userspace

On 2020/10/20 3:39 PM, Yongji Xie wrote:


On Tue, Oct 20, 2020 at 11:20 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:


    On 2020/10/19 10:56 PM, Xie Yongji wrote:
    > This series introduces a framework which can be used to implement
    > vDPA devices in a userspace program. To implement it, the work
    > consists of two parts: control path emulation and data path
    > offloading.
    >
    > In the control path, the VDUSE driver will make use of a message
    > mechanism to forward the actions (get/set features, get/set status,
    > get/set config space and set virtqueue states) from the virtio-vdpa
    > driver to userspace. Userspace can use read()/write() to
    > receive/reply to those control messages.
    >
    > In the data path, the VDUSE driver implements an MMU-based
    > on-chip IOMMU driver which supports both direct mapping and
    > indirect mapping with a bounce buffer. Then userspace can access
    > the IOVA space via mmap(). Besides, the eventfd mechanism is used
    > to trigger interrupts and forward virtqueue kicks.


    This is pretty interesting!

    For vhost-vdpa, it should work, but for virtio-vdpa, I think we should
    carefully deal with the IOMMU/DMA ops stuff.


    I notice that neither dma_map nor set_map is implemented in
    vduse_vdpa_config_ops, which means you want to let vhost-vDPA deal
    with the IOMMU domain stuff. Any reason for doing that?

Actually, this series only focuses on the virtio-vdpa case for now. To support vhost-vdpa, as you said, we need to implement dma_map/dma_unmap. But there is a limitation: the VM's memory can't be anonymous pages, which are forbidden in vm_insert_page(). Maybe we need to add some restrictions on vhost-vdpa?
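
For concreteness, a rough sketch of what dma_map/dma_unmap callbacks wired into vduse_vdpa_config_ops could look like; this is not code from the series, the vdpa_to_vduse()/vduse_domain_map()/vduse_domain_unmap() helpers and the dev->domain field are hypothetical, and the callback signatures are assumed to match the vdpa core of that time:

#include <linux/vdpa.h>

static int vduse_vdpa_dma_map(struct vdpa_device *vdpa, u64 iova,
			      u64 size, u64 pa, u32 perm)
{
	struct vduse_dev *dev = vdpa_to_vduse(vdpa);	/* hypothetical helper */

	/* Record the IOVA->PA mapping in the userspace-visible domain. */
	return vduse_domain_map(dev->domain, iova, size, pa, perm);	/* hypothetical */
}

static int vduse_vdpa_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size)
{
	struct vduse_dev *dev = vdpa_to_vduse(vdpa);	/* hypothetical helper */

	vduse_domain_unmap(dev->domain, iova, size);	/* hypothetical */
	return 0;
}

static const struct vdpa_config_ops vduse_vdpa_config_ops = {
	/* ... the ops already provided by the series ... */
	.dma_map	= vduse_vdpa_dma_map,
	.dma_unmap	= vduse_vdpa_dma_unmap,
};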


I'm not sure I get this. Is there any reason you want to use vm_insert_page() on the VM's memory? Or do you mean you want to implement some kind of zero-copy?

I guess the software device implementation in userspace only needs to receive IOVA ranges and map them into its own address space.
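
As a purely illustrative userspace sketch of that model (the use of the device fd for mmap() and the IOVA-as-offset convention are assumptions, not the actual VDUSE UAPI):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Map an IOVA range announced by the kernel into the device process. */
static void *map_iova_range(int vduse_fd, uint64_t iova, uint64_t size)
{
	/* Assumption: the IOVA doubles as the mmap() offset. */
	void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, vduse_fd, (off_t)iova);

	return addr == MAP_FAILED ? NULL : addr;
}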


    The reasons for the questions are:

    1) You've implemented an on-chip IOMMU driver but don't expose it to
    the generic IOMMU layer (or the generic IOMMU layer may need some
    extensions to support this)
    2) We will probably remove the IOMMU domain management in vhost-vDPA
    and move it to the device (parent).

    So if possible, please implement either set_map() or
    dma_map()/dma_unmap(); this may align with our future goal and may
    speed up the development.

    Btw, it would be helpful to give even more details on how the on-chip
    IOMMU driver is implemented.


The basic idea is to treat the MMU (VA->PA) as an IOMMU (IOVA->PA), using vm_insert_page()/zap_page_range() to do the address mapping/unmapping. The address mapping is done in the page fault handler because vm_insert_page() can't be called in atomic context, such as in dma_map_ops->map_page().
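
As a rough illustration of that idea (not code from this series; the vduse_iova_domain lookup helper and the use of vm_private_data are assumptions for the sketch):

#include <linux/mm.h>

static vm_fault_t vduse_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vduse_iova_domain *domain = vma->vm_private_data;	/* assumed */
	unsigned long iova = vmf->pgoff << PAGE_SHIFT;
	struct page *page;

	/* Find the page backing this IOVA (direct or bounce mapping). */
	page = vduse_domain_iova_to_page(domain, iova);	/* hypothetical helper */
	if (!page)
		return VM_FAULT_SIGBUS;

	/* Process context here, so vm_insert_page() is allowed to sleep. */
	if (vm_insert_page(vma, vmf->address, page))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct vduse_vm_ops = {
	.fault = vduse_vm_fault,
};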


OK, please add that to the cover letter or to patch 2 in the next version.



    >
    > The details and our use case are shown below:
    >
    > ------------------------     -------------------------------------------------------------
    > |                  APP |     | QEMU                                                       |
    > |       ---------      |     |  --------------------    -------------------+<-->+------  |
    > |       |dev/vdx|      |     |  | device emulation |    | virtio dataplane |    | BDS |  |
    > ------------+-----------     ------------+-----------------------+-----------------+------
    >             |                            |                       |                 |
    >             |                            | emulating             | offloading      |
    > ------------+----------------------------+-----------------------+-----------------+--------
    > |   | block device |             |  vduse driver |        |  vdpa device |    | TCP/IP |   |
    > |   --------+-------             --------+--------        -------+--------    -----+----   |
    > |           |                            |                       |                 |       |
    > |           |                            |                       |                 |       |
    > | ----------+----------       -----------+----------             |                 |       |
    > | | virtio-blk driver |       | virtio-vdpa driver |             |                 |       |
    > | ----------+----------       -----------+----------             |                 |       |
    > |           |                            |                       |                 |       |
    > |           |                            -------------------------                 |       |
    > |           ------------------------------                                      ---+---    |
    > --------------------------------------------------------------------------------| NIC |----
    >                                                                                 ---+---
    >                                                                                    |
    >                                                                           ---------+---------
    >                                                                           | Remote Storages |
    >                                                                           -------------------


    The figure is not very clear to me on the following points:

    1) If the device emulation and the virtio dataplane are both
    implemented in QEMU, what's the point of doing this? I thought the
    device should be a remote process?

    2) It would be better to draw a vDPA bus somewhere to help people
    understand the architecture.

    3) For the "offloading", I guess it should be done via vhost-vDPA, so
    it's better to draw a vhost-vDPA block there.


This figure only shows the virtio-vdpa case. I will take the vhost-vdpa case into consideration in the next version.


Please do that; otherwise this proposal is incomplete.

Thanks



Thanks,
Yongji





