I am looking into an issue of NFS clients hanging against a set of NFS servers on a large HPC cluster. My investigation took me to the RPC code, in svc_create_socket():

    if (protocol == IPPROTO_TCP) {
        if ((error = kernel_listen(sock, 64)) < 0)
            goto bummer;
    }

A fixed backlog of 64 connections at the server seems like it could be too low on a cluster like this, particularly when the protocol opens and closes the TCP connection. I wondered what the rationale is behind this number, particularly as it is a fixed value. Perhaps there is a reason why this has no effect on nfsd, or is this a FAQ for people on large systems?

The servers show overflow of a listening queue, which I imagine is related:

    $ netstat -s
    [...]
    TcpExt:
        6475 times the listen queue of a socket overflowed
        6475 SYNs to LISTEN sockets ignored

The affected servers are old, running kernel 2.6.9, but this limit of 64 is consistent between that kernel and the latest kernel source.

Thanks
-- Mark

--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html