On Thu, 2021-06-10 at 11:01 -0400, Olga Kornievskaia wrote:
> On Thu, Jun 10, 2021 at 10:51 AM Chuck Lever III
> <chuck.lever@xxxxxxxxxx> wrote:
> >
> > > On Jun 10, 2021, at 10:29 AM, Olga Kornievskaia
> > > <olga.kornievskaia@xxxxxxxxx> wrote:
> > >
> > > On Thu, Jun 10, 2021 at 9:56 AM Chuck Lever III
> > > <chuck.lever@xxxxxxxxxx> wrote:
> > > >
> > > > > On Jun 10, 2021, at 9:34 AM, Trond Myklebust
> > > > > <trondmy@xxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, 2021-06-10 at 13:30 +0000, Chuck Lever III wrote:
> > > > > >
> > > > > > > On Jun 9, 2021, at 5:53 PM, Olga Kornievskaia
> > > > > > > <olga.kornievskaia@xxxxxxxxx> wrote:
> > > > > > >
> > > > > > > From: Olga Kornievskaia <kolga@xxxxxxxxxx>
> > > > > > >
> > > > > > > This option controls how many xprts the client can establish
> > > > > > > to the server. This patch parses the value and sets up
> > > > > > > structures that keep track of max_connect.
> > > > > > >
> > > > > > > Signed-off-by: Olga Kornievskaia <kolga@xxxxxxxxxx>
> > > > > > > ---
> > > > > > >  fs/nfs/client.c           |  1 +
> > > > > > >  fs/nfs/fs_context.c       |  8 ++++++++
> > > > > > >  fs/nfs/internal.h         |  2 ++
> > > > > > >  fs/nfs/nfs4client.c       | 12 ++++++++++--
> > > > > > >  fs/nfs/super.c            |  2 ++
> > > > > > >  include/linux/nfs_fs_sb.h |  1 +
> > > > > > >  6 files changed, 24 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/nfs/client.c b/fs/nfs/client.c
> > > > > > > index 330f65727c45..486dec59972b 100644
> > > > > > > --- a/fs/nfs/client.c
> > > > > > > +++ b/fs/nfs/client.c
> > > > > > > @@ -179,6 +179,7 @@ struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
> > > > > > >
> > > > > > >         clp->cl_proto = cl_init->proto;
> > > > > > >         clp->cl_nconnect = cl_init->nconnect;
> > > > > > > +       clp->cl_max_connect = cl_init->max_connect ? cl_init->max_connect : 1;
> > > > > >
> > > > > > So, 1 is the default setting, meaning the "add another transport"
> > > > > > facility is disabled by default. Would it be less surprising for
> > > > > > an admin to allow some extra connections by default?
> > > > > >
> > > > > > >         clp->cl_net = get_net(cl_init->net);
> > > > > > >
> > > > > > >         clp->cl_principal = "*";
> > > > > > > diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
> > > > > > > index d95c9a39bc70..cfbff7098f8e 100644
> > > > > > > --- a/fs/nfs/fs_context.c
> > > > > > > +++ b/fs/nfs/fs_context.c
> > > > > > > @@ -29,6 +29,7 @@
> > > > > > >  #endif
> > > > > > >
> > > > > > >  #define NFS_MAX_CONNECTIONS 16
> > > > > > > +#define NFS_MAX_TRANSPORTS 128
> > > > > >
> > > > > > This maximum seems excessive... again, there are diminishing
> > > > > > returns to adding more connections to the same server. What's
> > > > > > wrong with re-using NFS_MAX_CONNECTIONS for the maximum?
> > > > > >
> > > > > > As always, I'm a little queasy about adding yet another mount
> > > > > > option. Are there real use cases where a whole-client setting
> > > > > > (like a sysfs attribute) would be inadequate? Is there a way
> > > > > > the client could figure out a reasonable maximum without human
> > > > > > intervention, say, by counting the number of NICs on the system?
> > > > >
> > > > > Oh, hell no! We're not tying anything to the number of NICs...
> > > >
> > > > That's a bit of an over-reaction. :-) A little more explanation
> > > > would be welcome. I mean, don't you expect someone to ask "How
> > > > do I pick a good value?" and someone might reasonably answer
> > > > "Well, start with the number of NICs on your client times 3" or
> > > > something like that.
> > >
> > > That's what I was thinking, and thank you for at least considering
> > > that it's a reasonable answer.
> > >
> > > > IMO we're about to add another admin setting without understanding
> > > > how it will be used, how to select a good maximum value, or even
> > > > whether this maximum needs to be adjustable. In a previous e-mail
> > > > Olga has already demonstrated that it will be difficult to explain
> > > > how to use this setting with nconnect=.
> > >
> > > I agree that how it will be used is not yet well understood, but I
> > > think nconnect and max_connect represent different capabilities. I
> > > agree that adding nconnect transports leads to diminishing returns
> > > after a certain (relatively low) number. However, I don't believe
> > > the same holds when the xprts are going over different NICs.
> > > Therefore I didn't think max_connect should be bound by the same
> > > numbers as nconnect.
> >
> > Thanks for reminding me, I had forgotten the distinction between
> > the two mount options.
> >
> > I think there's more going on than just the NIC -- lock contention
> > on the client will also be a somewhat limiting factor, as will the
> > number of local CPUs and memory bandwidth. And as Trond points out,
> > the network topology between the client and server will also have
> > some impact.
> >
> > And I'm trying to understand why an admin would want to turn off
> > the "add another xprt" mechanism -- ie, the lower bound. Why is
> > the default setting 1?
>
> I think the reason for having the default as 1 was to address Trond's
> comment that some servers are struggling to support nconnect. So I'm
> trying not to force any current setup to change its mount options to
> specifically say "max_connect=1". I want environments that can support
> trunking to specifically allow for it by adding the new mount option to
> increase the limit.
>
> If this is not a concern, then max_connect's default can just be
> whatever default value we pick for it.

The default needs to preserve existing behaviour, so max_connect=1 is
correct.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
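
For illustration, a hedged sketch of how the two options from this series
would combine on the command line. The export path and counts below are
made up, and the semantics follow the commit message and discussion above:
nconnect sets how many connections are opened to the mount address up
front, while max_connect is an upper bound on how many xprts the client
may establish to the server, including ones added later for trunking over
additional server addresses.

    # illustrative only: open 4 connections at mount time, and allow the
    # client to grow to at most 8 transports if it finds more server paths
    mount -t nfs4 -o nconnect=4,max_connect=8 server.example.com:/export /mnt

    # leaving max_connect unset keeps the existing behaviour (limit of 1),
    # so current mounts do not need to change
    mount -t nfs4 -o nconnect=4 server.example.com:/export /mnt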