This is a note to let you know that I've just added the patch titled

    RDMA/rxe: Don't set BTH_ACK_MASK for UC or UD QPs

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     rdma-rxe-don-t-set-bth_ack_mask-for-uc-or-ud-qps.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 8c22337d29d8a59977cd63d687be9604077b7410
Author: Honggang LI <honggangli@xxxxxxx>
Date:   Mon Jun 24 10:03:48 2024 +0800

    RDMA/rxe: Don't set BTH_ACK_MASK for UC or UD QPs

    [ Upstream commit 4adcaf969d77d3d3aa3871bbadc196258a38aec6 ]

    The BTH_ACK_MASK bit indicates that an acknowledgement (for this
    packet) should be scheduled by the responder. Both UC and UD QPs are
    unacknowledged, so don't set BTH_ACK_MASK for UC or UD QPs.

    Fixes: 8700e3e7c485 ("Soft RoCE driver")
    Signed-off-by: Honggang LI <honggangli@xxxxxxx>
    Link: https://lore.kernel.org/r/20240624020348.494338-1-honggangli@xxxxxxx
    Reviewed-by: Zhu Yanjun <yanjun.zhu@xxxxxxxxx>
    Signed-off-by: Leon Romanovsky <leon@xxxxxxxxxx>
    Signed-off-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 69238f856c677..a034aa8fc5ca5 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -362,7 +362,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	int			solicited;
 	u16			pkey;
 	u32			qp_num;
-	int			ack_req;
+	int			ack_req = 0;
 
 	/* length from start of bth to end of icrc */
 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
@@ -396,8 +396,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
 					 qp->attr.dest_qp_num;
 
-	ack_req = ((pkt->mask & RXE_END_MASK) ||
-		(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+	if (qp_type(qp) != IB_QPT_UD && qp_type(qp) != IB_QPT_UC)
+		ack_req = ((pkt->mask & RXE_END_MASK) ||
+			   (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
 	if (ack_req)
 		qp->req.noack_pkts = 0;
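
For context, here is a stand-alone C sketch of the decision the second hunk
encodes. It is illustrative only, not driver code: the names example_qp,
qp_is_unacknowledged(), want_ack_req() and EX_MAX_PKT_PER_ACK are invented for
the example and do not exist in the rxe driver.

/* Illustrative sketch only -- not the rxe driver's API. */
#include <stdbool.h>

enum example_qp_type { EX_QPT_RC, EX_QPT_UC, EX_QPT_UD };

struct example_qp {
	enum example_qp_type type;
	unsigned int noack_pkts;	/* packets sent since the last ack request */
};

#define EX_MAX_PKT_PER_ACK 64		/* stands in for RXE_MAX_PKT_PER_ACK */

/* UC and UD are unacknowledged transport services. */
static bool qp_is_unacknowledged(const struct example_qp *qp)
{
	return qp->type == EX_QPT_UC || qp->type == EX_QPT_UD;
}

/*
 * Decide whether the BTH ack-request bit should be set for this packet.
 * With the fix applied, UC/UD QPs never request an acknowledgement; an RC
 * QP still requests one at the end of a request or after every
 * EX_MAX_PKT_PER_ACK packets.
 */
static bool want_ack_req(struct example_qp *qp, bool end_of_request)
{
	if (qp_is_unacknowledged(qp))
		return false;

	if (end_of_request || qp->noack_pkts++ > EX_MAX_PKT_PER_ACK) {
		qp->noack_pkts = 0;
		return true;
	}
	return false;
}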