Re: [PATCH 1/1] NFSv4.1+ add trunking when server trunking detected

On Tue, Jun 8, 2021 at 5:06 PM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
>
>
>
> > On Jun 8, 2021, at 4:56 PM, Olga Kornievskaia <olga.kornievskaia@xxxxxxxxx> wrote:
> >
> > On Tue, Jun 8, 2021 at 4:41 PM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
> >>
> >>
> >>
> >>> On Jun 8, 2021, at 2:45 PM, Olga Kornievskaia <olga.kornievskaia@xxxxxxxxx> wrote:
> >>>
> >>> From: Olga Kornievskaia <kolga@xxxxxxxxxx>
> >>>
> >>> After trunking is discovered in nfs4_discover_server_trunking(),
> >>> add the transport to the old client structure before destroying
> >>> the new client structure (along with its transport).
> >>>
> >>> Signed-off-by: Olga Kornievskaia <kolga@xxxxxxxxxx>
> >>> ---
> >>> fs/nfs/nfs4client.c | 40 ++++++++++++++++++++++++++++++++++++++++
> >>> 1 file changed, 40 insertions(+)
> >>>
> >>> diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
> >>> index 42719384e25f..984c851844d8 100644
> >>> --- a/fs/nfs/nfs4client.c
> >>> +++ b/fs/nfs/nfs4client.c
> >>> @@ -361,6 +361,44 @@ static int nfs4_init_client_minor_version(struct nfs_client *clp)
> >>>      return nfs4_init_callback(clp);
> >>> }
> >>>
> >>> +static void nfs4_add_trunk(struct nfs_client *clp, struct nfs_client *old)
> >>> +{
> >>> +     struct sockaddr_storage clp_addr, old_addr;
> >>> +     struct sockaddr *clp_sap = (struct sockaddr *)&clp_addr;
> >>> +     struct sockaddr *old_sap = (struct sockaddr *)&old_addr;
> >>> +     size_t clp_salen, old_salen;
> >>> +     struct xprt_create xprt_args = {
> >>> +             .ident = old->cl_proto,
> >>> +             .net = old->cl_net,
> >>> +             .servername = old->cl_hostname,
> >>> +     };
> >>> +     struct nfs4_add_xprt_data xprtdata = {
> >>> +             .clp = old,
> >>> +     };
> >>> +     struct rpc_add_xprt_test rpcdata = {
> >>> +             .add_xprt_test = old->cl_mvops->session_trunk,
> >>> +             .data = &xprtdata,
> >>> +     };
> >>> +
> >>> +     if (clp->cl_proto != old->cl_proto)
> >>> +             return;
> >>> +     clp_salen = rpc_peeraddr(clp->cl_rpcclient, clp_sap, sizeof(clp_addr));
> >>> +     old_salen = rpc_peeraddr(old->cl_rpcclient, old_sap, sizeof(old_addr));
> >>> +
> >>> +     if (clp_addr.ss_family != old_addr.ss_family)
> >>> +             return;
> >>> +
> >>> +     xprt_args.dstaddr = clp_sap;
> >>> +     xprt_args.addrlen = clp_salen;
> >>> +
> >>> +     xprtdata.cred = nfs4_get_clid_cred(old);
> >>> +     rpc_clnt_add_xprt(old->cl_rpcclient, &xprt_args,
> >>> +                       rpc_clnt_setup_test_and_add_xprt, &rpcdata);
> >>
> >> Is there an upper bound on the number of transports that
> >> are added to the NFS client's switch?
> >
> > I don't believe any limits exist right now. Why should there be a
> > limit? Are you saying that the client should limit trunking? That's
> > not what's happening here, but say FS_LOCATIONS returned 100 IPs for
> > the server: are you saying the client should limit how many trunkable
> > connections it establishes and pick just a few addresses to try?
> > What's happening with this patch is that, say, there are 100 mounts
> > to 100 IPs (each representing the same server or trunkable servers);
> > without this patch a single connection is kept, with this patch we'll
> > have 100 connections.
>
> The patch description needs to make this behavior more clear. It
> needs to explain "why" -- the body of the patch already explains
> "what". Can you include your last sentence in the description as
> a use case?

I sure can.

> As for the behavior... Seems to me that there are going to be only
> infinitesimal gains after the first few connections, and after
> that, it's going to be a lot for both sides to manage for no real
> gain. I think you do want to cap the total number of connections
> at a reasonable number, even in the FS_LOCATIONS case.

Do you want to cap it at 16 like we do for nconnect? I'm ok with that for now.

I don't understand why, in a setup with X NICs on each side (client and
server) and X mounts, we wouldn't get throughput close to X times the
throughput of a single NIC.
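
If we do cap it, roughly what I have in mind is a check like the one
below, right after the cl_proto comparison in nfs4_add_trunk() -- just a
sketch, not compiled or tested. NFS4_MAX_TRUNKED_XPRTS and
nfs4_xprt_count() are made-up names for illustration; the real count
would have to come from the rpc_xprt_switch behind old->cl_rpcclient,
which already tracks how many transports it holds:

/* Hypothetical cap, mirroring the nconnect limit: */
#define NFS4_MAX_TRUNKED_XPRTS	16

/*
 * Made-up helper for this sketch: report how many transports the
 * client's xprt_switch currently holds.
 */
static unsigned int nfs4_xprt_count(struct rpc_clnt *clnt);

static bool nfs4_trunking_capped(struct nfs_client *old)
{
	/* Stop grafting new transports once the cap is reached. */
	return nfs4_xprt_count(old->cl_rpcclient) >= NFS4_MAX_TRUNKED_XPRTS;
}

nfs4_add_trunk() would then return early when nfs4_trunking_capped(old)
is true, before calling rpc_clnt_add_xprt().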

> >>> +
> >>> +     if (xprtdata.cred)
> >>> +             put_cred(xprtdata.cred);
> >>> +}
> >>> +
> >>> /**
> >>> * nfs4_init_client - Initialise an NFS4 client record
> >>> *
> >>> @@ -434,6 +472,8 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
> >>>               * won't try to use it.
> >>>               */
> >>>              nfs_mark_client_ready(clp, -EPERM);
> >>> +             if (old->cl_mvops->session_trunk)
> >>> +                     nfs4_add_trunk(clp, old);
> >>>      }
> >>>      clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags);
> >>>      nfs_put_client(clp);
> >>> --
> >>> 2.27.0
> >>>
> >>
> >> --
> >> Chuck Lever
>
> --
> Chuck Lever
>
>
>


