Re: rdma_rxe and the loopback network interface

On 2019/3/2 8:26 AM, Bart Van Assche wrote:
On Sat, 2019-03-02 at 07:39 +0800, Yanjun Zhu wrote:
On 2019/2/27 6:48, Bart Van Assche wrote:
For security reasons I would like to run some RDMA tests in a virtual
machine with the rdma_rxe driver attached to the loopback ("lo") interface
and with no other network interfaces configured. Although it is possible to
associate the rdma_rxe driver with the loopback interface, I have not yet
found a way to let the RDMA/CM set up a connection from address ::1 to ::1.
Sorry. Can you use an IPv4 address instead of an IPv6 address?
Hi Yanjun,

Logging in with IPv4 also fails. The logs that appear when I run blktests on
top of a v5.0-rc6 kernel with several debug patches applied are as follows:

kernel: device-mapper: multipath service-time: version 0.3.0 loaded
kernel: ib_srp:srp_parse_in: ib_srp: 127.0.0.1:5555 -> 127.0.0.1:5555
kernel: ib_srp:srp_create_target: ib_srp: max_sectors = 1024; max_pages_per_mr = 512; mr_page_size = 4096; max_sectors_per_mr = 4096; mr_per_cmd = 2
kernel: ib_srp:srp_create_ch_ib: ib_srp: 1; dev 00000000a502471d (rxe0) <> 0000000061eb80ae (rxe1)
kernel: ib_srp: QP creation failed for dev rxe1: -22
multipathd[850]: mpatho: load table [0 65536 multipath 1 queue_if_no_path 0 1 1 service-time 0 1 1 8:32 1]
kernel: ib_srp:srp_parse_in: ib_srp: [::1]:5555 -> [::1]:5555/0%0
kernel: ib_srp:srp_create_target: ib_srp: max_sectors = 1024; max_pages_per_mr = 512; mr_page_size = 4096; max_sectors_per_mr = 4096; mr_per_cmd = 2
kernel: ib_srp:srp_create_ch_ib: ib_srp: 1; dev 00000000a502471d (rxe0) <> 0000000061eb80ae (rxe1)
multipathd[850]: mpatho: event checker started
multipathd[850]: sdc [8:32]: path added to devmap mpatho
kernel: ib_srp: QP creation failed for dev rxe1: -22

rxe0 is associated with an Ethernet network interface and rxe1 is associated
with the loopback interface. QP creation fails because the RDMA CM ID is
associated with a different RDMA device than the RDMA protection domain. So
the root cause here is that calling rdma_resolve_route() with either
127.0.0.1 or ::1 as the source and/or destination address associates the
RDMA CM ID with the wrong RDMA device.

Thanks a lot. Let me try to reproduce this problem.

Zhu Yanjun


Bart.
