This tcpdump-style summary of multiple NFS streams is hard to read, and the internals of the packets are hidden; a full .cap file would be much more useful.

A few things stick out. A v4.0 mount typically starts with a SETCLIENTID. Yours starts with a PUTROOTFH, which means you already have a 4.0 mount going to this server. "cat /proc/fs/nfsfs/servers" will show you the servers the client currently has mounts to. If you are not expecting an existing 4.0 mount (i.e., your "mount" command doesn't show that server mounted), then things have already gone wrong and you have a stuck mount that might be interfering with further mounts. Are you seeing this on a fresh boot? Do you have the ability/luxury to reboot the client machine?

Your problem description is confusing. Your last network trace shows a failing v4.0 mount, but your initial description talks about mounting with "vers=3" or "vers=2". So is the problem with a specific NFS version, or is the problem with mounting over the 10GB interface with any NFS version?

You can also turn on rpcdebug messages (if your client machine isn't getting a lot of NFS traffic), but given that your trace shows multiple streams, you'll have to dig through a lot of output to follow your own NFS operations. Rough sketches of the commands mentioned above are at the bottom of this mail, below the quoted message.

On Mon, Nov 4, 2019 at 7:29 PM Chandler <admin@xxxxxxxxxxxxxxxxxx> wrote:
>
> Any ideas what's going on here?
> Thanks
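
For the capture, something along these lines; this is a minimal sketch assuming eth0 is the client's 10G interface and 192.168.1.10 is the server, so substitute your own names:

    # capture full packets to a file you can open in wireshark;
    # no port filter so v3 mountd/portmapper traffic is caught as well
    tcpdump -i eth0 -s 0 -w /tmp/nfs-mount.pcap host 192.168.1.10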
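
To check for an existing or stuck mount to that server, roughly (output format varies a bit by kernel version):

    cat /proc/fs/nfsfs/servers   # servers the NFS client currently holds state for
    cat /proc/fs/nfsfs/volumes   # per-mount details, including the NFS version
    grep nfs /proc/mounts        # what the kernel thinks is mounted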
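
If you do try rpcdebug, roughly the following; the output lands in the kernel log (dmesg/syslog) and is very verbose:

    rpcdebug -m nfs -s all    # enable all NFS client debug flags
    rpcdebug -m rpc -s call   # optionally trace RPC calls as well
    # ... reproduce the failing mount, then look at dmesg/syslog ...
    rpcdebug -m nfs -c all    # turn the debugging back off
    rpcdebug -m rpc -c all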