On Wed, 2020-10-07 at 12:05 -0400, Bruce Fields wrote:
> On Wed, Oct 07, 2020 at 10:15:39AM -0400, Chuck Lever wrote:
> > > On Oct 7, 2020, at 10:05 AM, Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> > > 
> > > On Wed, Oct 07, 2020 at 09:45:50AM -0400, Chuck Lever wrote:
> > > > > On Oct 7, 2020, at 8:55 AM, Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
> > > > > 
> > > > > On 7 Oct 2020, at 7:27, Benjamin Coddington wrote:
> > > > > > On 6 Oct 2020, at 20:18, J. Bruce Fields wrote:
> > > > > > > On Tue, Oct 06, 2020 at 05:46:11PM -0400, Olga Kornievskaia wrote:
> > > > > > > > On Tue, Oct 6, 2020 at 3:38 PM Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
> > > > > > > > > On 6 Oct 2020, at 11:13, J. Bruce Fields wrote:
> > > > > > > Looks like nfs4_init_{non}uniform_client_string() stores it in
> > > > > > > cl_owner_id, and I was thinking that meant cl_owner_id would be
> > > > > > > used from then on....
> > > > > > > 
> > > > > > > But actually, I think it may run that again on recovery, yes, so
> > > > > > > I bet changing the nfs4_unique_id parameter midway like this
> > > > > > > could cause bugs on recovery.
> > > > > > 
> > > > > > Ah, that's what I thought as well. Thanks for looking closer, Olga!
> > > > > 
> > > > > Well, no -- it does indeed continue to use the original cl_owner_id.
> > > > > We only jump through nfs4_init_uniquifier_client_string() if
> > > > > cl_owner_id is NULL:
> > > > > 
> > > > > static int
> > > > > nfs4_init_uniform_client_string(struct nfs_client *clp)
> > > > > {
> > > > > 	size_t len;
> > > > > 	char *str;
> > > > > 
> > > > > 	if (clp->cl_owner_id != NULL)
> > > > > 		return 0;
> > > > > 
> > > > > 	if (nfs4_client_id_uniquifier[0] != '\0')
> > > > > 		return nfs4_init_uniquifier_client_string(clp);
> > > > > 
> > > > > Testing proves this out as well for both EXCHANGE_ID and
> > > > > SETCLIENTID.
> > > > > 
> > > > > Is there any precedent for stabilizing module parameters as part of
> > > > > a supported interface? Maybe this ought to be a mount option, so the
> > > > > client can set a uniquifier per mount.
> > > > 
> > > > The protocol is designed as one client-ID per client. FreeBSD is the
> > > > only client I know of that uses one client-ID per mount, FWIW.
> > > > 
> > > > You are suggesting each mount point would have its own lease. There
> > > > would likely be deeper implementation changes needed than just
> > > > specifying a unique client-ID for each mount point.
> > > 
> > > Huh, I thought that should do it.
> > > 
> > > Do you have something specific in mind?
> > 
> > The relationship between the nfs_client and nfs_server structs comes to
> > mind.
> 
> I'm not following. Do you have a specific problem in mind?

The problem is that all locks etc. are tied to the lease, so if you
change the clientid (and hence change the lease) then you need to ensure
that the client knows which lease the locks belong to, and that it is
able to respond appropriately to all delegation recalls, layout recalls,
etc. This need to track things on a per-lease basis is why we have the
struct nfs_client. Things that are tracked on a per-superblock basis are
tracked by the struct nfs_server.
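As a rough illustration of the relationship described above, here is a minimal standalone sketch. The type and field names (`fake_nfs_client`, `fake_nfs_server`, `attach_server`, `cl_count`) are invented stand-ins, not the kernel's actual definitions; the point is only the ownership shape: lease-scoped state (clientid, locks, delegations, layouts) hangs off one shared client object, while each superblock gets its own server object pointing back at it.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Simplified stand-ins for struct nfs_client / struct nfs_server
 * (illustrative only -- not the real kernel structures).
 */
struct fake_nfs_client {
	unsigned long	cl_clientid;	/* one lease per client */
	int		cl_count;	/* number of attached mounts */
};

struct fake_nfs_server {
	struct fake_nfs_client	*nfs_client;	/* shared lease owner */
	const char		*mountpoint;	/* per-superblock state */
};

/* Attach a per-mount server to the shared client that owns the lease. */
static void attach_server(struct fake_nfs_server *srv,
			  struct fake_nfs_client *clp, const char *mnt)
{
	srv->nfs_client = clp;
	srv->mountpoint = mnt;
	clp->cl_count++;
}
```

Under this shape, giving each mount its own clientid would mean giving each mount its own lease-scoped state as well, which is why the change would reach deeper than just the client-ID string.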
However, all this is moot as long as nobody can explain why we'd want to
do all this.

As far as I can tell, this thread started with a complaint that
performance suffers when we don't allow setups that hack the client by
pretending that a multi-homed server is actually multiple different
servers.

AFAICS Tom Talpey's question is the relevant one. Why is there a
performance regression being seen by these setups when they share the
same connection? Is it really the connection, or is it the fact that
they all share the same fixed-slot session?

I did see Igor's claim that there is a QoS issue (which AFAICS would
also affect NFSv3), but why do I care about QoS as a per-mountpoint
feature?

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx