> On Mar 21, 2017, at 1:58 PM, Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
>
>> The Send Queue depth is temporarily reduced to 1 SQE per credit. The
>> new rdma_rw API does an internal computation, during QP creation, to
>> increase the depth of the Send Queue to handle RDMA Read and Write
>> operations.
>>
>> This change has to come before the NFSD code paths are updated to
>> use the rdma_rw API. Without this patch, rdma_rw_init_qp() increases
>> the size of the SQ too much, resulting in memory allocation failures
>> during QP creation.
>
> I agree this needs to happen, but it turns out you don't have any
> guarantee of the maximum size of the SQ, depending on your max_sge
> parameter.

That's true. However, this is meant to be temporary while I work out
the details of the rdma_rw API conversion. More work in this area
comes in the next series:

http://git.linux-nfs.org/?p=cel/cel-2.6.git;a=log;h=refs/heads/nfsd-rdma-rw-api

> I'd recommend having a fall-back shrunken-size SQ allocation
> implemented like srpt does.

Agreed, it should be done. Would it be OK to wait until the dust
settles here, or do you think it's a hard requirement for accepting
this series?

> We don't have it in nvmet-rdma nor iser, but it's a good thing to
> have...

--
Chuck Lever