Re: nconnect & repeating BIND_CONN_TO_SESSION?

Hope you don't mind a top post...

If you capture packets, you will probably see the
callback_down flag (SEQ4_STATUS_CB_PATH_DOWN) set in the
Sequence reply for the RPC(s) just before the
BindConnectionToSession. (This is what normally triggers them.)
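
If it helps when reading the trace, here's a rough sketch
(Python; the flag values are the ones defined in RFC 5661,
Sec. 18.46) for decoding the sr_status_flags word in a
Sequence reply:

    # Decode the sr_status_flags word from a SEQUENCE reply.
    # Bit values are from RFC 5661; only the backchannel-related
    # ones are listed here.
    SEQ4_STATUS_CB_PATH_DOWN         = 0x00000001  # the "callback_down" flag
    SEQ4_STATUS_CB_PATH_DOWN_SESSION = 0x00000200
    SEQ4_STATUS_BACKCHANNEL_FAULT    = 0x00000400

    def decode_status_flags(sr_status_flags):
        names = {
            SEQ4_STATUS_CB_PATH_DOWN: "CB_PATH_DOWN",
            SEQ4_STATUS_CB_PATH_DOWN_SESSION: "CB_PATH_DOWN_SESSION",
            SEQ4_STATUS_BACKCHANNEL_FAULT: "BACKCHANNEL_FAULT",
        }
        return [name for bit, name in names.items() if sr_status_flags & bit]

    # decode_status_flags(0x00000001) -> ['CB_PATH_DOWN']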

The question then becomes "why is the server
setting the callback_down flag?".

One possibility might be a timeout on a callback
attempt that is too aggressive.
--> You should be able to see the callbacks in the
      packet trace.

Another might be the server attempting the callbacks
on the wrong TCP connection.
--> The FreeBSD server was broken until recently and
      would use any TCP connection to the client, not
      just the one where the Session had enabled the
      backchannel.
      --> If you happen to be mounting a FreeBSD server,
             you cannot use "nconnect" unless it is very
             up-to-date (the fix was done 10 months ago, but
             it takes quite a while to get out in releases).
Look for the CreateSession RPCs when the mount was
first done and see which ones have a backchannel.
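
A one-liner, if you want to script that check; the flag values
are from RFC 5661, Sec. 18.36:

    # csr_flags is the flags word in the CreateSession reply; the
    # backchannel is enabled on this connection iff CONN_BACK_CHAN is set.
    CREATE_SESSION4_FLAG_PERSIST        = 0x00000001
    CREATE_SESSION4_FLAG_CONN_BACK_CHAN = 0x00000002
    CREATE_SESSION4_FLAG_CONN_RDMA      = 0x00000004

    def has_backchannel(csr_flags):
        return bool(csr_flags & CREATE_SESSION4_FLAG_CONN_BACK_CHAN)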

Btw, unless the client establishes a new TCP connection
(SYN, SYN/ACK,...) before doing the BindConnectionToSession,
the server might reply NFS4ERR_INVAL. The RFC says this is
to be done, but I'll admit the FreeBSD server doesn't bother.
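
For anyone decoding by hand, the BindConnectionToSession
arguments are tiny; a sketch of them (Python; names and values
from RFC 5661, Sec. 18.34 - the op binds whichever connection
it arrives on):

    from dataclasses import dataclass

    # channel_dir_from_client4 values (RFC 5661, Sec. 18.34)
    CDFC4_FORE         = 0x1
    CDFC4_BACK         = 0x2
    CDFC4_FORE_OR_BOTH = 0x3
    CDFC4_BACK_OR_BOTH = 0x7

    @dataclass
    class BindConnToSessionArgs:
        bctsa_sessid: bytes                # 16-byte session id from CreateSession
        bctsa_dir: int                     # one of the CDFC4_* values above
        bctsa_use_conn_in_rdma_mode: bool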

Good luck with it, rick


________________________________________
From: J. Bruce Fields <bfields@xxxxxxxxxxxx>
Sent: Friday, January 7, 2022 12:17 PM
To: Daire Byrne
Cc: linux-nfs
Subject: Re: nconnect & repeating BIND_CONN_TO_SESSION?

On Fri, Jan 07, 2022 at 12:26:07PM +0000, Daire Byrne wrote:
> Hi,
>
> I have been playing around with NFSv4.2 over very high latency
> networks (200ms - nocto,actimeo=3600,nconnect) and I noticed that
> lookups were much slower than expected.
>
> Looking at a normal stat, at first with simple workloads, I see the
> expected LOOKUP/ACCESS pairs for each directory and file in a path.
> But after some period of time and with extra load on the client host
> (I am also re-exporting these mounts but I don't think that's
> relevant), I start to see a BIND_CONN_TO_SESSION call for every
> nconnect connection before every LOOKUP & ACCESS. In the case of a
> high latency network, these extra roundtrips kill performance.
>
> I am using nconnect because it has some clear performance benefits
> when doing sequential reads and writes over high latency connections.
> If I use nconnect=16 then I end up with an extra 16
> BIND_CONN_TO_SESSION roundtrips before every operation. And once it
> gets into this state, there seems to be no way to stop it.
>
> Now this client is actually mounting ~22 servers all with nconnect and
> if I reduce the nconnect for all of them to "8" then I am less likely
> to see these repeating BIND_CONN_TO_SESSION calls (although I still
> see some). If I reduce the nconnect for each mount to 4, then I don't
> see the BIND_CONN_TO_SESSION appear (yet) with our workloads. So I'm
> wondering if there is some limit, like the number of unique servers
> mounted (22) times the number of TCP connections to each?
> So in this case 22 servers x nconnect=8 = 176 client connections.

Hm, doesn't each of these use up a reserved port on the client by
default?  I forget the details of that.  Does "noresvport" help?

On the server (if Linux) there are maximums on the number of
connections.  It should be logging "too many open connections" if you're
hitting that.
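
(For what it's worth, "noresvport" is a per-mount option, so a
test mount would look something like

    mount -t nfs -o vers=4.2,nconnect=8,noresvport server:/export /mnt

with the server name and export path hypothetical. By default each
connection binds to a privileged source port below 1024, so 22
servers x nconnect=8 would want 176 of them.)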

--b.

> Or are there some sequence errors that trigger a BIND_CONN_TO_SESSION,
> such that increasing the number of nconnect connections increases the
> chances of triggering it? The remote servers are a mix of RHEL7 and
> RHEL8 and seem to show the same behaviour.
>
> I tried watching the rpcdebug stream but I'll admit I wasn't really
> sure what to look for. I see the same thing on a bunch of recent
> kernels (I've only tested from 5.12 upwards). This has probably been
> happening for our workloads for quite some time, but it was only when
> the latency became this large that I noticed all these extra round trips.
>
> Any pointers as to why this might be happening?
>
> Cheers,
>
> Daire




