Hi all,

This RFC aims to reopen the discussion of Virtio RDMA. It is based on Yuval Shaia's RFC "VirtIO RDMA", which implemented a framework for Virtio RDMA and a simple control path (we are not sure whether Yuval Shaia has any further plans for it). We have extended that work with a simple data path and a complete control path. It currently works with SEND, RECV and REG_MR in the kernel, and this patchset includes a simple test module that can communicate with ibv_rc_pingpong from rdma-core.

While doing this work, we ran into some problems and would like to ask the community for suggestions:

1. Each QP needs two VQs, but QEMU only supports 1024 VQs by default. We think it is possible to multiplex the VQs, since cmd_post_send carries the QPN in the request.

2. The virtio-rdma device's GID must equal the host RDMA device's GID, which means we cannot use the GID cache in the RDMA subsystem. In theory the GID should also equal the IP address of the device's netdev; how can we deal with this conflict?

3. How do we support a DMA MR? The verbs interface on the host cannot provide one, and it seems hard to pin the whole guest physical memory in QEMU.

4. The FRMR API needs to set the key of an MR through IB_WR_REG_MR, but it is impossible to change the key of an MR using uverbs. In our implementation we change the key of the WR at post_send time, but this means the MR can only work with SEND and RECV, since we cannot change the key on the remote side. The final solution may be to implement an urdma device based on rxe in QEMU, which would give us full control of the MR.

5. GSI is not supported yet. We also see a problem here: when the host receives a GSI packet, it does not know which device the packet belongs to.

Any further thoughts are greatly welcome. We also noticed that there seems to be no existing work on a virtio-rdma spec; we are happy to start one from this RFC.

How to test with the test module:

1. Set the test module's SERVER_ADDR and SERVER_PORT.
2. Build the kernel and QEMU.
3.
Build rdmacm-mux in qemu/contrib and run it in the background.
4. Boot the kernel with QEMU via libvirt, using the following configuration:

<interface type='bridge'>
  <mac address='00:16:3e:5d:aa:a8'/>
  <source bridge='virbr0'/>
  <target dev='vnet1'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</interface>
<qemu:commandline>
  <qemu:arg value='-chardev'/>
  <qemu:arg value='socket,path=/var/run/rdmacm-mux-rxe0-1,id=mads'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='virtio-rdma-pci,disable-legacy=on,addr=2.1,ibdev=rxe0,netdev=bridge0,mad-chardev=mads'/>
  <qemu:arg value='-object'/>
  <qemu:arg value='memory-backend-ram,id=mb1,size=1G,share'/>
  <qemu:arg value='-numa'/>
  <qemu:arg value='node,memdev=mb1'/>
</qemu:commandline>

Note that virtio-net and virtio-rdma should be function 0 and function 1 of the same slot.
5. Run "ibv_rc_pingpong -g 1 -n 500 -s 20480" as the server.
6. Run "insmod virtio_rdma_rc_pingping_client.ko" in the guest.

One note regarding the patchset: we know it is not standard to collapse patches from two repos into one series, but we did so anyway in order to present the whole Virtio RDMA work. Thanks.

patch1: RDMA/virtio-rdma: Introduce a new core cap prot (linux)
patch2: RDMA/virtio-rdma: VirtIO RDMA driver (linux)
        The main patch of the virtio-rdma driver in the Linux kernel
patch3: RDMA/virtio-rdma: VirtIO RDMA test module (linux)
        A test module
patch4: virtio-net: Move some virtio-net-pci decl to include/hw/virtio (qemu)
        Patch from Yuval Shaia
patch5: hw/virtio-rdma: VirtIO rdma device (qemu)
        The main patch of the virtio-rdma device in QEMU
--
2.11.0