The current rxe_requester() doesn't generate a completion when it hits
an error after a wqe has already been fetched. Fix the issue by
branching to the err label ("goto err;") instead of exit.

Signed-off-by: Xiao Yang <yangx.jy@xxxxxxxxxxx>
---
 drivers/infiniband/sw/rxe/rxe_req.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ae5fbc79dd5c..e69fe409fbcb 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -648,26 +648,30 @@ int rxe_requester(void *arg)
 		psn_compare(qp->req.psn, (qp->comp.psn +
 				RXE_MAX_UNACKED_PSNS)) > 0)) {
 		qp->req.wait_psn = 1;
-		goto exit;
+		wqe->status = IB_WC_LOC_QP_OP_ERR;
+		goto err;
 	}
 
 	/* Limit the number of inflight SKBs per QP */
 	if (unlikely(atomic_read(&qp->skb_out) >
 		     RXE_INFLIGHT_SKBS_PER_QP_HIGH)) {
 		qp->need_req_skb = 1;
-		goto exit;
+		wqe->status = IB_WC_LOC_QP_OP_ERR;
+		goto err;
 	}
 
 	opcode = next_opcode(qp, wqe, wqe->wr.opcode);
 	if (unlikely(opcode < 0)) {
 		wqe->status = IB_WC_LOC_QP_OP_ERR;
-		goto exit;
+		goto err;
 	}
 
 	mask = rxe_opcode[opcode].mask;
 	if (unlikely(mask & RXE_READ_OR_ATOMIC_MASK)) {
-		if (check_init_depth(qp, wqe))
-			goto exit;
+		if (check_init_depth(qp, wqe)) {
+			wqe->status = IB_WC_LOC_QP_OP_ERR;
+			goto err;
+		}
 	}
 
 	mtu = get_mtu(qp);
-- 
2.25.4
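
A note for readers who don't have rxe_req.c open: the point of switching
from "goto exit;" to "goto err;" is that the err path ends up completing
the failed wqe with the status set just above the goto, whereas the exit
path returns without reporting anything to the consumer. The stand-alone
sketch below only illustrates that control-flow difference; every
identifier in it (toy_wqe, run_completer(), requester_step()) is made up
for the example and is not the actual driver code.

#include <stdio.h>

/* Toy stand-ins for the real ib_wc_status and rxe_send_wqe types. */
enum toy_wc_status { TOY_WC_SUCCESS, TOY_WC_LOC_QP_OP_ERR };

struct toy_wqe {
	enum toy_wc_status status;
	int completed;		/* set to 1 once a completion is generated */
};

/* Stand-in for kicking the completer: it reports the errored wqe. */
static void run_completer(struct toy_wqe *wqe)
{
	wqe->completed = 1;
	printf("completion generated, status=%d\n", (int)wqe->status);
}

static int requester_step(struct toy_wqe *wqe, int take_err_path)
{
	wqe->status = TOY_WC_LOC_QP_OP_ERR;

	if (take_err_path)
		goto err;	/* behaviour after the patch */
	goto exit;		/* behaviour before the patch */

err:
	run_completer(wqe);	/* err: flush the errored wqe to the completer */
exit:
	return -1;		/* exit: stop silently; completed stays 0 */
}

int main(void)
{
	struct toy_wqe silent = { TOY_WC_SUCCESS, 0 };
	struct toy_wqe reported = { TOY_WC_SUCCESS, 0 };

	requester_step(&silent, 0);
	requester_step(&reported, 1);
	printf("silent.completed=%d, reported.completed=%d\n",
	       silent.completed, reported.completed);
	return 0;
}

Built as an ordinary userspace program, the first call leaves
silent.completed at 0 while the second reports the error, which is the
behavioural change the patch makes in the requester.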