nconnect & repeating BIND_CONN_TO_SESSION?

Hi,

I have been playing around with NFSv4.2 over very high latency
networks (200ms RTT, mounted with nocto,actimeo=3600,nconnect) and I
noticed that lookups were much slower than expected.
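
For reference, the mounts look something like this (the hostname and
export path here are placeholders, not the real ones):

    mount -t nfs -o vers=4.2,nocto,actimeo=3600,nconnect=16 \
        filer1:/export /srv/filer1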

Looking at a normal stat with simple workloads, I initially see the
expected LOOKUP/ACCESS pairs for each directory and file in a path.
But after some period of time and with extra load on the client host
(I am also re-exporting these mounts but I don't think that's
relevant), I start to see a BIND_CONN_TO_SESSION call for every
nconnect connection before every LOOKUP & ACCESS. In the case of a
high latency network, these extra roundtrips kill performance.
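
The pattern is easy to see in a packet capture, e.g. by filtering on
the NFSv4 opcode with tshark (41 is BIND_CONN_TO_SESSION per RFC 5661;
the display filter field name is from memory, so double check it):

    tshark -f 'tcp port 2049' -Y 'nfs.opcode == 41'

nfsstat -c should show the op counts climbing too, if the nfsstat
version knows about the v4.1 session operations.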

I am using nconnect because it has some clear performance benefits
when doing sequential reads and writes over high latency connections.
If I use nconnect=16 then I end up with an extra 16
BIND_CONN_TO_SESSION roundtrips before every operation. And once it
gets into this state, there seems to be no way to stop it.
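
To put numbers on that: at a 200ms RTT, 16 binds add up to
16 x 0.2s = 3.2s of extra latency ahead of a single stat() if they are
issued serially, and even if they go out in parallel it is still at
least one extra 200ms round trip per operation.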

Now this client is actually mounting ~22 servers all with nconnect and
if I reduce the nconnect for all of them to "8" then I am less likely
to see these repeating BIND_CONN_TO_SESSION calls (although I still
see some). If I reduce the nconnect for each mount to 4, then I don't
see the BIND_CONN_TO_SESSION appear (yet) with our workloads. So I'm
wondering if there is some limit, like the number of unique servers
mounted (22) times the number of TCP connections to each? In this case
that would be 22 servers x nconnect=8 = 176 client connections.
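
A quick way to sanity-check the client's connection count (this
assumes the default port 2049, and the -H flag needs a reasonably
recent iproute2):

    ss -Htn state established '( dport = :2049 )' | wc -l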

Or are there sequence errors that trigger a BIND_CONN_TO_SESSION, such
that increasing the number of nconnect connections increases the
chances of triggering one? The remote servers are a mix of RHEL7 and
RHEL8 and both seem to show the same behaviour.
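
One guess on my part: RFC 5661 has the client re-bind connections when
a SEQUENCE reply carries SEQ4_STATUS_CB_PATH_DOWN, and re-binding every
connection would look a lot like what I'm seeing. I could check the
status flags in the SEQUENCE replies just before the binds with
something like (53 is SEQUENCE; again, field name from memory):

    tshark -f 'tcp port 2049' -Y 'nfs.opcode == 53' -V | grep -i status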

I tried watching the rpcdebug stream but I'll admit I wasn't really
sure what to look for. I see the same thing on a bunch of recent
kernels (I've only tested from 5.12 upwards). This has probably been
happening with our workloads for quite some time, but it was only when
the latency became this large that I noticed all the extra round trips.
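
In case it's useful, this is roughly how I was generating the debug
stream (flag names from memory, so double check against rpcdebug(8)):

    rpcdebug -m nfs -s state proc   # NFS client state manager + procs
    rpcdebug -m rpc -s call         # RPC call activity
    dmesg -w                        # follow the log output

    # and to switch it all off again:
    rpcdebug -m nfs -c
    rpcdebug -m rpc -c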

Any pointers as to why this might be happening?

Cheers,

Daire


