Re: [External] Re: [RFC 0/4] Introduce VDUSE - vDPA Device in Userspace

On 2020/10/23 10:55 AM, Yongji Xie wrote:


On Tue, Oct 20, 2020 at 5:13 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:


    On 2020/10/20 4:35 PM, Yongji Xie wrote:
    >
    >
    > On Tue, Oct 20, 2020 at 4:01 PM Jason Wang
    > <jasowang@xxxxxxxxxx> wrote:
    >
    >
    >     >     On 2020/10/20 3:39 PM, Yongji Xie wrote:
    >     >
    >     >
    >     > On Tue, Oct 20, 2020 at 11:20 AM Jason Wang
    >     > <jasowang@xxxxxxxxxx> wrote:
    >     >
    >     >
    >     >     On 2020/10/19 10:56 PM, Xie Yongji wrote:
    >     >     > This series introduces a framework which can be used
    >     >     > to implement vDPA devices in a userspace program. The
    >     >     > work consists of two parts: control path emulation and
    >     >     > data path offloading.
    >     >     >
    >     >     > In the control path, the VDUSE driver makes use of a
    >     >     > message mechanism to forward the actions (get/set
    >     >     > features, get/set status, get/set config space and set
    >     >     > virtqueue states) from the virtio-vdpa driver to
    >     >     > userspace. Userspace can use read()/write() to
    >     >     > receive/reply to those control messages.
    >     >     >
    >     >     > In the data path, the VDUSE driver implements an
    >     >     > MMU-based on-chip IOMMU driver which supports both
    >     >     > direct mapping and indirect mapping with a bounce
    >     >     > buffer. Userspace can then access that IOVA space via
    >     >     > mmap(). Besides, the eventfd mechanism is used to
    >     >     > trigger interrupts and forward virtqueue kicks.
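The eventfd part of the data path described above can be sketched roughly as follows. This is only a minimal userspace illustration of how eventfds carry kicks and interrupts; the helper names and the fd they operate on are hypothetical, not the actual VDUSE uapi:

```c
/* Hedged sketch: eventfd-based virtqueue kicks, as described in the
 * cover letter. The fd would in practice be set up between the VDUSE
 * driver and the userspace device; here it is just a plain eventfd. */
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Kick side: signal that a virtqueue has new available buffers. */
static void signal_kick(int vq_kick_fd)
{
    uint64_t one = 1;
    assert(write(vq_kick_fd, &one, sizeof(one)) == sizeof(one));
}

/* Device side: block until at least one kick arrives; the eventfd
 * counter accumulates kicks, and read() returns and resets it. */
static uint64_t wait_for_kick(int vq_kick_fd)
{
    uint64_t n = 0;
    assert(read(vq_kick_fd, &n, sizeof(n)) == sizeof(n));
    return n;
}
```

The same pattern, in the opposite direction, would let the userspace device trigger an interrupt toward the driver.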
    >     >
    >     >
    >     >     This is pretty interesting!
    >     >
    >     >     For vhost-vdpa, it should work, but for virtio-vdpa, I
    >     >     think we should carefully deal with the IOMMU/DMA ops
    >     >     stuff.
    >     >
    >     >     I notice that neither dma_map nor set_map is implemented
    >     >     in vduse_vdpa_config_ops, which means you want to let
    >     >     vhost-vDPA deal with the IOMMU domain stuff. Any reason
    >     >     for doing that?
    >     >
    >     > Actually, this series only focuses on the virtio-vdpa case
    >     > for now. To support vhost-vdpa, as you said, we need to
    >     > implement dma_map/dma_unmap. But there is a limitation that
    >     > the VM's memory can't be anonymous pages, which are
    >     > forbidden in vm_insert_page(). Maybe we need to add some
    >     > limits on vhost-vdpa?
    >
    >
    >     I'm not sure I get this. Any reason that you want to apply
    >     vm_insert_page() to the VM's memory? Or do you mean you want
    >     to implement some kind of zero-copy?
    >
    >
    >
    > If my understanding is right, we will have a QEMU (VM) process
    > and a device emulation process in the vhost-vdpa case, right?
    > When I/O happens, the virtio driver in the VM will put the IOVA
    > into the vring, and the device emulation process will get the
    > IOVA from the vring. Then the device emulation process will
    > translate the IOVA to its own VA to access the DMA buffer, which
    > resides in the VM's memory. That means the device emulation
    > process needs to access the VM's memory, so we should use
    > vm_insert_page() to build the page table of the device emulation
    > process.
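The translation step described here could look roughly like the following in the device emulation process, once the IOVA ranges have been mapped into its address space. This is an illustrative sketch only; the `iova_map` table and its fields are hypothetical, not actual VDUSE code:

```c
/* Hedged sketch: translating an IOVA taken from the vring into a local
 * virtual address. The device process keeps a table of the IOVA ranges
 * it has mapped (e.g. via mmap()), each with its local base address. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct iova_map {
    uint64_t iova;   /* start of the IOVA range */
    uint64_t size;   /* length of the range in bytes */
    void    *va;     /* where the range is mapped in this process */
};

/* Linear lookup: return the local VA for an IOVA, or NULL if the IOVA
 * falls outside every mapped range. A real implementation would likely
 * use an interval tree rather than a linear scan. */
static void *iova_to_va(const struct iova_map *maps, size_t n,
                        uint64_t iova)
{
    for (size_t i = 0; i < n; i++) {
        if (iova >= maps[i].iova && iova < maps[i].iova + maps[i].size)
            return (char *)maps[i].va + (iova - maps[i].iova);
    }
    return NULL;
}
```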


    Ok, I get you now. So it looks to me that the real issue is not the
    limitation to anonymous pages but the comment above
    vm_insert_page():

    "

      * The page has to be a nice clean _individual_ kernel allocation.
    "

    So I suspect that using vm_insert_page() to share pages between
    processes is not legal. We need input from MM experts.


Yes, vm_insert_page() can't be used in this case. So could we add the shmfd to the vhost IOTLB message and pass it to the device emulation process as a new iova_domain, just like vhost-user does?
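The fd-based sharing proposed here could work roughly as in vhost-user's region sharing. A minimal sketch, using memfd_create() as a stand-in for the shmfd carried in an extended IOTLB message, with hypothetical helper names; the actual fd would be passed between processes over a unix socket via SCM_RIGHTS:

```c
/* Hedged sketch: sharing guest memory via an fd instead of
 * vm_insert_page(). Not the vhost uapi, just the mechanism. */
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* "QEMU side": back a guest memory region with an fd that can be
 * handed to the device emulation process. */
static int create_region(size_t size)
{
    int fd = memfd_create("guest-mem", 0);
    assert(fd >= 0 && ftruncate(fd, (off_t)size) == 0);
    return fd;
}

/* "Device side": map the received fd into the local address space;
 * both processes then see the same pages through MAP_SHARED. */
static void *map_region(int fd, size_t size)
{
    void *va = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    assert(va != MAP_FAILED);
    return va;
}
```

Because both mappings are MAP_SHARED views of the same fd, writes by the guest become visible to the device process without any per-page insertion into its page table.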

Thanks,
Yongji


I think vhost-user did that via SET_MEM_TABLE, which is not supported by vDPA. Note that the current IOTLB message will be used when vIOMMU is enabled.

This needs more thought. I will come back if I have any ideas.

Thanks

    >
    >     I guess from the software device implementation in userspace
    >     it only needs to receive IOVA ranges and map them in its own
    >     address space.
    >
    >
    > How to map them in its own address space if we don't use
    > vm_insert_page()?


_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
