2. Initiator side:
   #nvme connect-all -t rdma -a 172.31.40.4 -s 1023
3. Check the kernel log on both the target and initiator sides.

kernel log:
[ 242.494533] ocrdma0:Using VLAN with PFC is recommended
[ 242.520244] ocrdma0:Using VLAN 0 for this connection
[ 242.652599] ocrdma0:Using VLAN with PFC is recommended
[ 242.676365] ocrdma0:Using VLAN 0 for this connection
[ 242.700476] ocrdma0:Using VLAN with PFC is recommended
[ 242.723497] ocrdma0:Using VLAN 0 for this connection
[ 242.812331] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 172.31.40.4:1023
[ 242.854149] ocrdma0:Using VLAN with PFC is recommended
[ 242.854149] ocrdma0:Using VLAN 0 for this connection
[ 242.854662] ------------[ cut here ]------------
[ 242.854671] WARNING: CPU: 2 PID: 158 at drivers/infiniband/core/verbs.c:1975 __ib_drain_sq+0x182/0x1c0 [ib_core]
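For reference, the WARNING comes from the generic send-queue drain helper in drivers/infiniband/core/verbs.c. A simplified sketch of that path follows (details vary by kernel version, so treat this as an approximation rather than the exact upstream code): the QP is moved to the error state, a marker work request is posted, and the caller waits for its completion. The WARN fires when posting the marker fails.

#include <linux/kernel.h>
#include <linux/completion.h>
#include <rdma/ib_verbs.h>

struct ib_drain_cqe {
	struct ib_cqe cqe;
	struct completion done;
};

static void ib_drain_qp_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct ib_drain_cqe *cqe = container_of(wc->wr_cqe,
						struct ib_drain_cqe, cqe);

	complete(&cqe->done);
}

/* Simplified: move the QP to ERR, post a marker WR, wait for it. */
static void __ib_drain_sq(struct ib_qp *qp)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
	struct ib_drain_cqe sdrain;
	struct ib_send_wr swr = {}, *bad_swr;
	int ret;

	swr.wr_cqe = &sdrain.cqe;
	sdrain.cqe.done = ib_drain_qp_done;
	init_completion(&sdrain.done);

	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
	if (ret) {
		WARN_ONCE(ret, "failed to drain send queue: %d\n", ret);
		return;
	}

	/* This post fails if the provider rejects WRs on a non-RTS QP,
	 * which is what the WARNING in the log above points at. */
	ret = ib_post_send(qp, &swr, &bad_swr);
	if (ret) {
		WARN_ONCE(ret, "failed to drain send queue: %d\n", ret);
		return;
	}

	wait_for_completion(&sdrain.done);
}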
I suspect that ib_drain_sq is not supported on ocrdma. From looking at the code, it seems that ocrdma fails a send post when the QP is not in RTS. If that is the case, I think ocrdma needs to implement its own QP drain logic, similar to cxgb4, nes, i40e, etc.

Devesh (CC'd), has anyone tested ib_drain_qp on ocrdma?
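To make the suggestion concrete, below is a hypothetical sketch of what a provider-specific drain could look like, loosely following the cxgb4 approach of overriding the drain_sq/drain_rq callbacks in struct ib_device instead of relying on the generic post-based drain. The ocrdma_drain_qp layout, the sq_drained completion, and the place it gets completed are assumptions for illustration only, not actual ocrdma code.

#include <linux/kernel.h>
#include <linux/completion.h>
#include <rdma/ib_verbs.h>

/* Hypothetical per-QP bookkeeping; real ocrdma structures differ. */
struct ocrdma_drain_qp {
	struct ib_qp ibqp;
	struct completion sq_drained;
};

static void ocrdma_drain_sq_sketch(struct ib_qp *ibqp)
{
	struct ocrdma_drain_qp *qp =
		container_of(ibqp, struct ocrdma_drain_qp, ibqp);
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };

	/* Move the QP to ERR if teardown has not already done so,
	 * so the hardware flushes all outstanding send WRs. */
	ib_modify_qp(ibqp, &attr, IB_QP_STATE);

	/* Wait for the driver's own flush/CQE processing path to call
	 * complete(&qp->sq_drained) once the last SQ completion has
	 * been reaped; nothing is ever posted to a non-RTS QP. */
	wait_for_completion(&qp->sq_drained);
}

/* Wired up at device registration, e.g.:
 *	dev->ibdev.drain_sq = ocrdma_drain_sq_sketch;
 *	(plus a matching drain_rq for the receive queue)
 */

The key point is that the provider signals drain completion from its own flush path, so the core never has to post a work request to a QP that has already left RTS.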