Re: [PATCH v2 2/3] NFSv4 introduce max_connect mount options

> On Jun 10, 2021, at 10:29 AM, Olga Kornievskaia <olga.kornievskaia@xxxxxxxxx> wrote:
> 
> On Thu, Jun 10, 2021 at 9:56 AM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
>> 
>> 
>> 
>>> On Jun 10, 2021, at 9:34 AM, Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>>> 
>>> On Thu, 2021-06-10 at 13:30 +0000, Chuck Lever III wrote:
>>>> 
>>>> 
>>>>> On Jun 9, 2021, at 5:53 PM, Olga Kornievskaia <
>>>>> olga.kornievskaia@xxxxxxxxx> wrote:
>>>>> 
>>>>> From: Olga Kornievskaia <kolga@xxxxxxxxxx>
>>>>> 
>>>>> This option controls the maximum number of xprts that the client
>>>>> can establish to the server. This patch parses the value and sets
>>>>> up structures that keep track of max_connect.
>>>>> 
>>>>> Signed-off-by: Olga Kornievskaia <kolga@xxxxxxxxxx>
>>>>> ---
>>>>> fs/nfs/client.c           |  1 +
>>>>> fs/nfs/fs_context.c       |  8 ++++++++
>>>>> fs/nfs/internal.h         |  2 ++
>>>>> fs/nfs/nfs4client.c       | 12 ++++++++++--
>>>>> fs/nfs/super.c            |  2 ++
>>>>> include/linux/nfs_fs_sb.h |  1 +
>>>>> 6 files changed, 24 insertions(+), 2 deletions(-)
>>>>> 
>>>>> diff --git a/fs/nfs/client.c b/fs/nfs/client.c
>>>>> index 330f65727c45..486dec59972b 100644
>>>>> --- a/fs/nfs/client.c
>>>>> +++ b/fs/nfs/client.c
>>>>> @@ -179,6 +179,7 @@ struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
>>>>> 
>>>>>        clp->cl_proto = cl_init->proto;
>>>>>        clp->cl_nconnect = cl_init->nconnect;
>>>>> +       clp->cl_max_connect = cl_init->max_connect ? cl_init->max_connect : 1;
>>>> 
>>>> So, 1 is the default setting, meaning the "add another transport"
>>>> facility is disabled by default. Would it be less surprising for
>>>> an admin to allow some extra connections by default?
>>>> 
>>>> 
>>>>>        clp->cl_net = get_net(cl_init->net);
>>>>> 
>>>>>        clp->cl_principal = "*";
>>>>> diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
>>>>> index d95c9a39bc70..cfbff7098f8e 100644
>>>>> --- a/fs/nfs/fs_context.c
>>>>> +++ b/fs/nfs/fs_context.c
>>>>> @@ -29,6 +29,7 @@
>>>>> #endif
>>>>> 
>>>>> #define NFS_MAX_CONNECTIONS 16
>>>>> +#define NFS_MAX_TRANSPORTS 128
>>>> 
>>>> This maximum seems excessive... again, there are diminishing
>>>> returns to adding more connections to the same server. What's
>>>> wrong with re-using NFS_MAX_CONNECTIONS for the maximum?
>>>> 
>>>> As always, I'm a little queasy about adding yet another mount
>>>> option. Are there real use cases where a whole-client setting
>>>> (like a sysfs attribute) would be inadequate? Is there a way
>>>> the client could figure out a reasonable maximum without a
>>>> human intervention, say, by counting the number of NICs on
>>>> the system?
>>> 
>>> Oh, hell no! We're not tying anything to the number of NICs...
>> 
>> That's a bit of an over-reaction. :-) A little more explanation
>> would be welcome. I mean, don't you expect someone to ask "How
>> do I pick a good value?" and someone might reasonably answer
>> "Well, start with the number of NICs on your client times 3" or
>> something like that.
> 
> That's what I was thinking, and thank you for at least considering
> that it's a reasonable answer.
> 
>> IMO we're about to add another admin setting without understanding
>> how it will be used, how to select a good maximum value, or even
>> whether this maximum needs to be adjustable. In a previous e-mail
>> Olga has already demonstrated that it will be difficult to explain
>> how to use this setting with nconnect=.
> 
> I agree that how it will be used isn't yet well understood, but I
> think nconnect and max_connect represent different capabilities. I
> agree that adding nconnect transports leads to diminishing returns
> after a certain (relatively low) number. However, I don't believe
> the same holds when the xprts are going over different NICs.
> Therefore I didn't think max_connect should be bound by the same
> numbers as nconnect.

Thanks for reminding me, I had forgotten the distinction between
the two mount options.

I think there's more going on than just the NIC -- lock contention
on the client will also be a somewhat limiting factor, as will the
number of local CPUs and memory bandwidth. And as Trond points out,
the network topology between the client and server will also have
some impact.
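
Just to make that concrete: here's a rough sketch of the "count the
NICs" heuristic that's been floated. The helper name is made up and
this isn't in Olga's patch -- it's only to illustrate the idea, and
it deliberately ignores the other factors above:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/*
 * Hypothetical sketch, not in the patch: guess a transport cap from
 * the number of usable network interfaces, along the lines of
 * "number of NICs on your client times 3".
 */
static unsigned int nfs_guess_max_connect(struct net *net)
{
	struct net_device *dev;
	unsigned int nics = 0;

	rtnl_lock();
	for_each_netdev(net, dev) {
		/* count only interfaces that could carry NFS traffic */
		if ((dev->flags & IFF_UP) && !(dev->flags & IFF_LOOPBACK))
			nics++;
	}
	rtnl_unlock();

	return nics ? nics * 3 : 1;
}

Even then, the result would presumably need to be clamped to whatever
hard maximum we settle on below.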

And I'm trying to understand why an admin would want to turn off
the "add another xprt" mechanism -- i.e., the lower bound. Why is
the default setting 1?
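
If the intent were to have the facility enabled by default, I'd have
expected something like the following in nfs_alloc_client() instead.
NFS_DEF_MAX_XPRTS is a made-up name and 4 is only a guess, just to
illustrate:

#define NFS_DEF_MAX_XPRTS 4	/* hypothetical default, not in the patch */

	clp->cl_max_connect = cl_init->max_connect ?
			      cl_init->max_connect : NFS_DEF_MAX_XPRTS;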


> Perhaps 128 is too high a value (for
> reference, I did 8 * nconnect_max).
> 
>> Thus I would favor a (moderate) soldered-in maximum to start with,
>> and then as real world use cases arise, consider adding a tuning
>> mechanism based on actual requirements.
> 
> Can you suggest a moderate number between 16 and 128?

16 is conservative, and there's nothing preventing us from changing
that maximum over time as we learn more.

An in-code comment explaining how the final maximum value was arrived
at would be good to add. Even "This is just a guess" would be valuable
to anyone in the future trying to figure out a new value, IMO.
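
Something like this above the constant in fs_context.c would be
enough -- the value and wording here are only a sketch:

/*
 * Cap on the number of transports that can be added to a client.
 * This is just a guess: past a handful of connections the returns
 * diminish, and client-side lock contention, CPU count, memory
 * bandwidth, and network topology become the limiting factors.
 */
#define NFS_MAX_TRANSPORTS 16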

--
Chuck Lever
