Hello,

On 1/16/20 2:08 PM, Olga Kornievskaia wrote:
> From: Olga Kornievskaia <kolga@xxxxxxxxxx>
>
> Signed-off-by: Olga Kornievskaia <kolga@xxxxxxxxxx>
> ---
>  fs/nfs/nfs4client.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
> index 460d625..4df3fb0 100644
> --- a/fs/nfs/nfs4client.c
> +++ b/fs/nfs/nfs4client.c
> @@ -881,7 +881,7 @@ static int nfs4_set_client(struct nfs_server *server,
>
>  	if (minorversion == 0)
>  		__set_bit(NFS_CS_REUSEPORT, &cl_init.init_flags);
> -	else if (proto == XPRT_TRANSPORT_TCP)
> +	if (proto == XPRT_TRANSPORT_TCP)
>  		cl_init.nconnect = nconnect;
>
>  	if (server->flags & NFS_MOUNT_NORESVPORT)
>
Tested-by: Steve Dickson <steved@xxxxxxxxxx>

With this patch, v4.0 mounts act just like v4.1/v4.2 mounts.
But is that a good thing? :-)

Here is what I've found in my testing...

    mount -onconnect=12 172.31.1.54:/home/tmp /mnt/tmp

will create 12 TCP connections and maintain those 12 connections until
the umount happens. By "maintain" I mean that if a connection times out,
it is reconnected to keep the count at 12.

# mount -onconnect=12 172.31.1.54:/home/tmp /mnt/tmp
# netstat -an | grep 172.31.1.54 | wc -l
12
# netstat -an | grep 172.31.1.54
tcp        0      0 172.31.1.24:901      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:667      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:746      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:672      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:832      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:895      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:673      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:732      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:795      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:918      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:674      172.31.1.54:2049     ESTABLISHED
tcp        0      0 172.31.1.24:953      172.31.1.54:2049     ESTABLISHED

# umount /mnt/tmp
# netstat -an | grep 172.31.1.54 | wc -l
12
# netstat -an | grep 172.31.1.54
tcp        0      0 172.31.1.24:901      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:667      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:746      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:672      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:832      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:895      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:673      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:732      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:795      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:918      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:674      172.31.1.54:2049     TIME_WAIT
tcp        0      0 172.31.1.24:953      172.31.1.54:2049     TIME_WAIT

Is this the expected behavior? If so, I have a few concerns...

* The connections walk all over the /etc/services namespace, meaning
  they use ports that are reserved for registered services, something
  we have tried to avoid in userland by not binding to privileged ports
  and by blacklisting ports via /etc/bindresvport.blacklist (sketched
  below).

* When the unmount happens, all those connections go into TIME_WAIT on
  privileged ports, and there are only so many of those. Not good
  during mount storms (when a server reboots and thousands of home dirs
  are remounted).

* No man page describing the new feature.

I realize there is not much we can do about some of these (i.e. the
umount ==> TIME_WAIT case), but I think we need to document what we are
doing to people's connection namespace when they use this feature.

steved.
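
P.S. For anyone unfamiliar with the blacklist mechanism mentioned in the
first bullet, here is a minimal, purely illustrative userland sketch of
the bindresvport() path. It is not part of the patch under review, and it
assumes libtirpc, whose bindresvport() consults
/etc/bindresvport.blacklist; the in-kernel RPC client picks its own
reserved ports and, as far as I can tell, never reads that file.

/*
 * Illustrative sketch only: how a userland RPC client typically obtains
 * a reserved source port.  With libtirpc, bindresvport() skips any port
 * listed in /etc/bindresvport.blacklist.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <rpc/rpc.h>		/* bindresvport() */

int main(void)
{
	struct sockaddr_in sin;
	int sock = socket(AF_INET, SOCK_STREAM, 0);

	if (sock < 0) {
		perror("socket");
		return 1;
	}

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;	/* INADDR_ANY; port 0 = "pick one for me" */

	/* Bind to an anonymous reserved port (< 1024), honouring the blacklist. */
	if (bindresvport(sock, &sin) < 0) {
		perror("bindresvport");
		close(sock);
		return 1;
	}

	printf("bound to reserved port %u\n", ntohs(sin.sin_port));
	close(sock);
	return 0;
}

Built against libtirpc with something like
"cc -I/usr/include/tirpc demo.c -ltirpc -o demo" (the file name demo.c is
just an example), and run as root, since binding a port below 1024 needs
CAP_NET_BIND_SERVICE.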