5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Hyunchul Lee <hyc.lee@xxxxxxxxx>

[ Upstream commit 4d02c4fdc0e256b493f9a3b604c7ff18f0019f17 ]

Due to a restriction that prevents handling multiple buffer descriptor
structures, decrease the maximum read/write size for Windows clients.
Also set the maximum fragmented receive size based on the size of the
receive queue.

Acked-by: Namjae Jeon <linkinjeon@xxxxxxxxxx>
Signed-off-by: Hyunchul Lee <hyc.lee@xxxxxxxxx>
Signed-off-by: Steve French <stfrench@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/ksmbd/transport_rdma.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -1914,7 +1914,9 @@ static int smb_direct_prepare(struct ksm
 	st->max_send_size = min_t(int, st->max_send_size,
 				  le32_to_cpu(req->max_receive_size));
 	st->max_fragmented_send_size =
-			le32_to_cpu(req->max_fragmented_size);
+		le32_to_cpu(req->max_fragmented_size);
+	st->max_fragmented_recv_size =
+		(st->recv_credit_max * st->max_recv_size) / 2;
 
 	ret = smb_direct_send_negotiate_response(st, ret);
 out:
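
[Reviewer note: for anyone who wants to sanity-check the new cap outside the
kernel, a minimal user-space sketch of the same arithmetic is below. The
recv_credit_max and max_recv_size values are hypothetical, chosen only for
illustration; they are not taken from the ksmbd defaults.]

#include <stdio.h>

/*
 * Sketch of the cap added by this patch: the maximum fragmented
 * receive size is half of the total receive queue capacity
 * (receive credits times the per-receive buffer size).  The inputs
 * used in main() are illustrative assumptions, not ksmbd values.
 */
static unsigned int fragmented_recv_cap(unsigned int recv_credit_max,
					unsigned int max_recv_size)
{
	return (recv_credit_max * max_recv_size) / 2;
}

int main(void)
{
	unsigned int recv_credit_max = 255;  /* hypothetical credit limit */
	unsigned int max_recv_size = 8192;   /* hypothetical per-recv buffer size */

	printf("max_fragmented_recv_size = %u bytes\n",
	       fragmented_recv_cap(recv_credit_max, max_recv_size));
	return 0;
}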