On Thu, 3 Jul 2014 17:35:26 -0400
"J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:

> On Thu, Jul 03, 2014 at 04:32:59PM -0400, J. Bruce Fields wrote:
> > On Mon, Jun 30, 2014 at 11:48:44AM -0400, Jeff Layton wrote:
> > > We want to use the nfsd4_compound_state to cache the nfs4_client in
> > > order to optimise away extra lookups of the clid.
> > >
> > > In the v4.0 case, we use this to ensure that we only have to look up the
> > > client at most once per compound for each call into lookup_clientid. For
> > > v4.1+ we set the pointer in the cstate during SEQUENCE processing so we
> > > should never need to do a search for it.
> >
> > The connectathon locking test is failing for me in the nfsv4/krb5i case
> > as of this commit.
> >
> > Which makes no sense to me whatsoever, so it's entirely possible this is
> > some unrelated problem on my side. I'll let you know when I've figured
> > out anything more.
>
> It's intermittent.
>
> I've reproduced it on the previous commit so I know at least that this
> one isn't at fault.
>
> I doubt it's really dependent on krb5i; at most that's probably just
> making it more likely to reproduce.
>
> --b.

I haven't been able to reproduce it yet, but I suspect you're hitting
this check in lookup_or_create_lock_state:

	/* with an existing lockowner, seqids must be the same */
	status = nfserr_bad_seqid;
	if (!cstate->minorversion &&
	    lock->lk_new_lock_seqid != lo->lo_owner.so_seqid)
		goto out;

Hmmm...there are some changes that go in with this patch w.r.t. lock
seqid handling:

    nfsd: clean up lockowner refcounting when finding them

Perhaps those need to go in earlier? Though when I looked at that
originally, I figured that we wouldn't need them until the refcounting
changes went in (which is why I didn't put them in).

It might be interesting to look at traces and see whether they're
consistent with hitting that check (or maybe put some debug printks in)?

--
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
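
[Editor's note: below is a minimal, standalone sketch of the client-caching
idea described in the quoted commit message -- cache the nfs4_client pointer
in the per-compound state so repeated ops in one compound pay for the clid
lookup only once. It is not the real nfsd code; all names here (compound_state,
client, lookup_client_slow, ...) are illustrative stand-ins, and the real
lookup searches a per-net hash table under the state lock.]

	/* client_cache_sketch.c -- illustrative only, not kernel code */
	#include <stdio.h>

	struct client {
		unsigned long long clid;
		/* lease, open/lock state, etc. elided */
	};

	struct compound_state {
		unsigned int minorversion;
		struct client *clp;	/* NULL until first lookup in this compound */
	};

	/* stand-in for the "expensive" per-clientid hash-table search */
	static struct client *lookup_client_slow(unsigned long long clid)
	{
		static struct client c;

		printf("  slow lookup for clid %llu\n", clid);
		c.clid = clid;
		return &c;
	}

	/* mirrors the shape of the caching lookup: check cstate->clp first */
	static struct client *lookup_clientid(struct compound_state *cstate,
					      unsigned long long clid)
	{
		if (cstate->clp) {
			/* v4.1+: SEQUENCE already set this; v4.0: a prior op cached it */
			if (cstate->clp->clid != clid)
				return NULL;	/* clid mismatch within the compound */
			return cstate->clp;
		}
		cstate->clp = lookup_client_slow(clid);
		return cstate->clp;
	}

	int main(void)
	{
		struct compound_state cstate = { .minorversion = 0, .clp = NULL };
		int i;

		/* three ops in one compound: only the first does the slow search */
		for (i = 0; i < 3; i++) {
			struct client *clp = lookup_clientid(&cstate, 42);

			printf("op %d -> clid %llu\n", i, clp ? clp->clid : 0);
		}
		return 0;
	}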