On Thu, 2011-07-21 at 17:41 -0400, J. Bruce Fields wrote:
> On Thu, Jul 21, 2011 at 01:49:02PM -0400, Steve Dickson wrote:
> > Our performance team has noticed that increasing
> > RPCRDMA_MAX_DATA_SEGS from 8 to 64 significantly
> > increases throughput when using the RDMA transport.
>
> The main risk that I can see is that we have these on the stack in
> two places:
>
> rpcrdma_register_fmr_external(struct rpcrdma_mr_seg *seg, ...
> {
> ...
> 	u64 physaddrs[RPCRDMA_MAX_DATA_SEGS];
>
> rpcrdma_register_default_external(struct rpcrdma_mr_seg *seg, ...
> {
> ...
> 	struct ib_phys_buf ipb[RPCRDMA_MAX_DATA_SEGS];
>
> where struct ib_phys_buf is 16 bytes.
>
> So that's 512 bytes in the first case, 1024 in the second. This is
> called from rpciod--what are our rules about allocating memory from
> rpciod?

Is that allocated on the stack? We should always try to avoid
1024-byte allocations on the stack, since that eats up a full 1/8th
(or 1/4 in the case of 4k stacks) of the total stack space.

If, OTOH, that memory is being allocated dynamically, then the rule is
"don't let rpciod sleep".

Cheers
  Trond

--
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com