The Send Queue depth is temporarily reduced to 1 SQE per credit. The new rdma_rw API computes, during QP creation, an increased Send Queue depth sufficient to handle RDMA Read and Write operations. This change has to come before the NFSD code paths are updated to use the rdma_rw API. Without this patch, rdma_rw_init_qp() increases the size of the SQ too much, resulting in memory allocation failures during QP creation.
I agree this needs to happen, but it turns out you don't have any guarantee of the maximum size of the SQ, depending on your max_sge parameter. I'd recommend implementing a fallback shrunken-size SQ allocation like srpt does. We don't have it in nvmet-rdma nor iser, but it's a good thing to have...