Hi Bart,

> -----Original Message-----
> From: linux-rdma-owner@vger.kernel.org On Behalf Of Bart Van Assche
> Sent: Tuesday, February 26, 2019 4:48 PM
> To: linux-rdma@vger.kernel.org
> Subject: rdma_rxe and the loopback network interface
>
> Hello,
>
> For security reasons I would like to run some RDMA tests in a virtual
> machine with the rdma_rxe driver attached to the loopback ("lo")
> interface and with no other network interfaces configured. Although it
> is possible to associate rdma_rxe with that interface, I have not yet
> found a way to let the RDMA/CM set up a connection from address ::1 to
> ::1. rdma_bind_addr() fails for address ::1 because
> cma_check_linklocal() does not consider it a link-local address.
> rdma_resolve_route() fails because it expects that bound_dev_if != 0
> before that function is called. Modifying rdma_bind_addr() and
> rdma_resolve_route() such that they recognize the address ::1 causes
> the RDMA/CM at the listener side to reject the incoming connection,
> probably because ::1 does not match the GID of the port of the rdma_rxe
> instance attached to "lo" (fe80:0000:0000:0000:0200:00ff:fe00:0000).
> Using the "lo" GID does not work because it is not in the IPv6 routing
> table.
>
> Has anyone else already looked into this?

What timing. :-) I have a similar request from two different users who want to run RDMA in their VM, limited to the VM, and I have been looking into this. So I will give a slightly long answer and a plan.

The right approach is to have a loopback RDMA interface for loopback traffic. Hence, I created a loopback RoCE device which doesn't go through the complex network stack. With a few changes in rdmacm, this loopback RDMA device supports the IPv6 ::1 and IPv4 127.0.0.1 addresses too.
I have been running ext4 over nvme-fabrics over loopback in a VM on 5.0-rc5 for a few hours nonstop at 10K IOPS with a 4K block size. ib_write_bw perftests are in the range of 7 Gbps (4KB) to 80 Gbps (8MB) with a single QP, and scale with more CPUs. This also matches the netdev model of using the actual netdev for external traffic and the lo device for loopback traffic, and it avoids hacky changes in rdmacm. I cleaned it up so that it does the right GID matching of lo with the corresponding rdma device that has lo-netdev-based GIDs.

Overall, I would like to extend this loopback rdma device driver (2000 lines of wrapper around memcpy() :-)) with all the verbs, for IB too. I haven't figured out how to support QP0 for IB yet, but I will. It has an extremely slim user space driver; my idea is to use a common user space driver for loopback and, at minimum, siw, but I haven't started the siw code review. At best I can post the code on GitHub, once I split it into reviewable patches, for you to try out.