Re: [PATCH 0/9] Multiple network connections for a single NFS mount.

On Thu, May 30, 2019 at 1:57 PM Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>
> Hi Neil-
>
> Thanks for chasing this a little further.
>
>
> > On May 29, 2019, at 8:41 PM, NeilBrown <neilb@xxxxxxxx> wrote:
> >
> > This patch set is based on the patches in the multipath_tcp branch of
> > git://git.linux-nfs.org/projects/trondmy/nfs-2.6.git
> >
> > I'd like to add my voice to those supporting this work and wanting to
> > see it land.
> > We have had customers/partners wanting this sort of functionality for
> > years.  In SLES releases prior to SLE15, we've provided a
> > "nosharetransport" mount option, so that several filesystems could be
> > mounted from the same server and each would get its own TCP
> > connection.
>
> Is it well understood why splitting up the TCP connections results
> in better performance?

Historically, NFS has not been able to fill a high-speed pipe.  There
have been studies showing a negative interaction between VM dirty page
flushing and TCP window behavior that leads to bad performance (the VM
flushes too aggressively, which creates TCP congestion, so the window
closes, which then makes the VM stop flushing, and the system
oscillates between bad states and always underperforms).  But that
aside, there may also be server implementations that simply perform
better when multiple connections are used.

I forget the details, but there used to be (and might still be) data
transfer challenges at conferences to see who can transfer the largest
amount of data the fastest.  They have all shown that to accomplish
this you need multiple TCP connections.

But to answer the question: no, I don't think it is "well understood"
why splitting the TCP connection performs better.
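
One contributing factor is plain arithmetic, though: a single TCP
connection can carry at most window/RTT.  As a rough worked example
(my numbers, not from any of the studies above): filling a 100Gb/s
link at 1ms RTT needs a window of 100e9/8 * 0.001 bytes, i.e. about
12.5MB, which is more than the default autotuning limits on many
systems will let a single connection's window grow to.  Splitting the
load over N connections divides that requirement, and also spreads
the per-flow CPU work.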

> > In SLE15 we are using this 'nconnect' feature, which is much nicer.
> >
> > Partners have assured us that it improves total throughput,
> > particularly with bonded networks, but we haven't had any concrete
> > data until Olga Kornievskaia provided some concrete test data - thanks
> > Olga!
> >
> > My understanding, as I explain in one of the patches, is that parallel
> > hardware is normally utilized by distributing flows, rather than
> > packets.  This avoids out-of-order delivery of packets within a flow.
> > So multiple flows are needed to utilize parallel hardware.
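
To make the flow-distribution point concrete, here is a minimal
sketch of why every packet of one connection lands on the same
hardware queue.  The hash below is a toy stand-in, not the kernel's
RSS code (real NICs use a Toeplitz hash over the 4-tuple), but it
shows why a single flow can never spread across queues while multiple
connections can:

    #include <stdint.h>
    #include <stdio.h>

    struct flow {
            uint32_t saddr, daddr;
            uint16_t sport, dport;
    };

    /* Toy stand-in for the NIC's receive-side-scaling hash. */
    static unsigned int rx_queue_for(const struct flow *f,
                                     unsigned int nqueues)
    {
            /* Same 4-tuple => same hash => same queue, always. */
            uint32_t h = f->saddr ^ f->daddr ^
                         ((uint32_t)f->sport << 16 | f->dport);
            h ^= h >> 16;
            return h % nqueues;
    }

    int main(void)
    {
            /* Two connections between the same pair of addresses
             * differ only in source port, so they can land on
             * different queues (and hence different CPUs). */
            struct flow a = { 0x0a000001, 0x0a000002, 50000, 2049 };
            struct flow b = { 0x0a000001, 0x0a000002, 50001, 2049 };

            printf("%u %u\n", rx_queue_for(&a, 8), rx_queue_for(&b, 8));
            return 0;
    }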
>
> Indeed.
>
> However I think one of the problems is what happens in simpler scenarios.
> We had reports that using nconnect > 1 on virtual clients made things
> go slower. It's not always wise to establish multiple connections
> between the same two IP addresses. It depends on the hardware on each
> end, and the network conditions.
>
>
> > An earlier version of this patch set was posted in April 2017 and
> > Chuck raised two issues:
> > 1/ mountstats only reports on one xprt per mount
> > 2/ session establishment needs to happen on a single xprt, as you
> >    cannot bind other xprts to the session until the session is
> >    established.
> > I've added patches to address these, and also to add the extra xprts
> > to the debugfs info.
> >
> > I've also re-arranged the patches a bit, merged two, and removed the
> > restriction to TCP and NFSv4.x (x >= 1).  Discussions seemed to suggest
> > these restrictions were not needed, and I can see no need for them.
>
> RDMA could certainly benefit for exactly the reason you describe above.
>
>
> > There is a bug with the load balancing code from Trond's tree.
> > While an xprt is attached to a client, the queuelen is incremented.
> > Some requests (particularly BIND_CONN_TO_SESSION) pass in an xprt,
> > and the queuelen was not incremented in this case, but it was
> > decremented.  This causes it to go 'negative' and havoc results.
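
The shape of that accounting bug is easy to show in miniature.  This
is a self-contained toy with made-up names, not the SUNRPC code
itself (the real counter is the queuelen in the transport switch):

    #include <stdio.h>

    struct toy_xprt { long queuelen; };

    /* Load-balancer path: picking a transport accounts the request. */
    static struct toy_xprt *pick_xprt(struct toy_xprt *x)
    {
            x->queuelen++;
            return x;
    }

    /* Completion path: always decrements. */
    static void release_xprt(struct toy_xprt *x)
    {
            x->queuelen--;
    }

    int main(void)
    {
            struct toy_xprt x = { 0 };

            release_xprt(pick_xprt(&x));    /* balanced: back to 0 */

            /* A request that passes in its own xprt (as
             * BIND_CONN_TO_SESSION does) skips pick_xprt() but
             * still hits release_xprt() ... */
            release_xprt(&x);

            printf("queuelen = %ld\n", x.queuelen);     /* prints -1 */
            return 0;
    }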
> >
> > I wonder if the last three patches (*Allow multiple connection*) could
> > be merged into a single patch.
> >
> > I haven't given much thought to automatically determining the optimal
> > number of connections, but I doubt it can be done transparently with
> > any reliability.
>
> A Solaris client can open up to 8 connections to a server, but there
> are always some scenarios where the heuristic creates too many
> connections and becomes a performance issue.

It's great that a Solaris client can have multiple connections; let's
not leave the Linux client behind then :-) Given your knowledge in
this area, do you have any words of wisdom/lessons learned that could
help with it?

> We also have concerns about running the client out of privileged port
> space.
>
> The problem with nconnect is that it can work well, but it can also be
> a very easy way to shoot yourself in the foot.

It's an optional feature, so I'd argue that if you've chosen to use
it, you shouldn't complain about the consequences.

> I also share the concerns about dealing properly with retransmission
> and NFSv4 sessions.
>
>
> > When adding a connection improves throughput, then
> > it was almost certainly a good thing to do. When adding a connection
> > doesn't improve throughput, the implications are less obvious.
> > My feeling is that a protocol enhancement where the server suggests an
> > upper limit and the client increases toward that limit when it notices
> > xmit backlog would be about the best we could do.  But we would need
> > a lot more experience with the functionality first.
>
> What about situations where the network capabilities between server and
> client change? Problem is that neither endpoint can detect that; TCP
> usually just deals with it.
>
> Related Work:
>
> We now have a protocol (more like a set of conventions) for a client
> to discover when a server has additional endpoints so that it can
> establish connections to each of them.

Yes, I totally agree that we need a solution for when there are
multiple endpoints.  And we also need the solution being proposed
here, which is to establish multiple connections to a single endpoint.
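
With this series applied, opting in is just a mount option, e.g.
(assuming the option keeps the name used in these patches):

    mount -t nfs -o vers=4.1,nconnect=4 server:/export /mnt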

> https://datatracker.ietf.org/doc/rfc8587/
>
> and
>
> https://datatracker.ietf.org/doc/draft-ietf-nfsv4-rfc5661-msns-update/
>
> Boiled down, the client uses fs_locations and trunking detection to
> figure out when two IP addresses are the same server instance.
>
> This facility can also be used to establish a connection over a
> different path if network connectivity is lost.
>
> There has also been some exploration of MP-TCP. The magic happens
> under the transport socket in the network layer, and the RPC client
> is not involved.
>
>
> > Comments most welcome.  I'd love to see this, or something similar,
> > merged.
> >
> > Thanks,
> > NeilBrown
> >
> > ---
> >
> > NeilBrown (4):
> >      NFS: send state management on a single connection.
> >      SUNRPC: enhance rpc_clnt_show_stats() to report on all xprts.
> >      SUNRPC: add links for all client xprts to debugfs
> >
> > Trond Myklebust (5):
> >      SUNRPC: Add basic load balancing to the transport switch
> >      SUNRPC: Allow creation of RPC clients with multiple connections
> >      NFS: Add a mount option to specify number of TCP connections to use
> >      NFSv4: Allow multiple connections to NFSv4.x servers
> >      pNFS: Allow multiple connections to the DS
> >      NFS: Allow multiple connections to a NFSv2 or NFSv3 server
> >
> >
> > fs/nfs/client.c                      |    3 +
> > fs/nfs/internal.h                    |    2 +
> > fs/nfs/nfs3client.c                  |    1
> > fs/nfs/nfs4client.c                  |   13 ++++-
> > fs/nfs/nfs4proc.c                    |   22 +++++---
> > fs/nfs/super.c                       |   12 ++++
> > include/linux/nfs_fs_sb.h            |    1
> > include/linux/sunrpc/clnt.h          |    1
> > include/linux/sunrpc/sched.h         |    1
> > include/linux/sunrpc/xprt.h          |    1
> > include/linux/sunrpc/xprtmultipath.h |    2 +
> > net/sunrpc/clnt.c                    |   98 ++++++++++++++++++++++++++++++++--
> > net/sunrpc/debugfs.c                 |   46 ++++++++++------
> > net/sunrpc/sched.c                   |    3 +
> > net/sunrpc/stats.c                   |   15 +++--
> > net/sunrpc/sunrpc.h                  |    3 +
> > net/sunrpc/xprtmultipath.c           |   23 +++++++-
> > 17 files changed, 204 insertions(+), 43 deletions(-)
> >
> > --
> > Signature
> >
>
> --
> Chuck Lever
>
>
>


