Re: [PATCH] NFSv4: fix stateid refreshing when CLOSE racing with OPEN

On Fri, Oct 11, 2019 at 10:18 AM Trond Myklebust
<trondmy@xxxxxxxxxxxxxxx> wrote:
>
> On Thu, 2019-10-10 at 13:32 -0400, Olga Kornievskaia wrote:
> > On Thu, Oct 10, 2019 at 3:42 AM Murphy Zhou <jencce.kernel@xxxxxxxxx>
> > wrote:
> > > Since commit:
> > >   [0e0cb35] NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE
> > >
> > > xfstests generic/168 on NFSv4.2 starts to fail because the reflink
> > > call gets:
> > >   +XFS_IOC_CLONE_RANGE: Resource temporarily unavailable
> >
> > I don't believe this failure has to do with getting ERR_OLD_STATEID
> > on the CLOSE. What you see in the network trace is expected: the
> > client sends OPEN and CLOSE in parallel, so the server fails the
> > CLOSE with ERR_OLD_STATEID since it has already updated its stateid
> > for the OPEN.
> >
> > > In the tshark output, NFS4ERR_OLD_STATEID stands out when comparing
> > > with the good ones:
> > >
> > >  5210   NFS 406 V4 Reply (Call In 5209) OPEN StateID: 0xadb5
> > >  5211   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > >  5212   NFS 250 V4 Reply (Call In 5211) GETATTR
> > >  5213   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > >  5214   NFS 250 V4 Reply (Call In 5213) GETATTR
> > >  5216   NFS 422 V4 Call WRITE StateID: 0xa818 Offset: 851968 Len: 65536
> > >  5218   NFS 266 V4 Reply (Call In 5216) WRITE
> > >  5219   NFS 382 V4 Call OPEN DH: 0x8d44a6b1/
> > >  5220   NFS 338 V4 Call CLOSE StateID: 0xadb5
> > >  5222   NFS 406 V4 Reply (Call In 5219) OPEN StateID: 0xa342
> > >  5223   NFS 250 V4 Reply (Call In 5220) CLOSE Status: NFS4ERR_OLD_STATEID
> > >  5225   NFS 338 V4 Call CLOSE StateID: 0xa342
> > >  5226   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > >  5227   NFS 266 V4 Reply (Call In 5225) CLOSE
> > >  5228   NFS 250 V4 Reply (Call In 5226) GETATTR
> >
> > "resource temporarily unavailable" is more likely to do with ulimit
> > limits.
> >
> > I also saw the same error. After I increased the ulimit for the stack
> > size, the problem went away. There might still be a problem somewhere
> > in the kernel.
> >
> > Trond, is it possible that we have too many CLOSE recoveries on the
> > stack that are eating up stack space?
>
> That shouldn't normally happen. CLOSE runs as an asynchronous RPC call,
> so its stack usage should be pretty minimal (limited to whatever each
> callback function uses).
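
(For context on the OLD_STATEID part above: the server rejects the CLOSE
because the racing OPEN has already bumped the stateid's seqid, so the
seqid the in-flight CLOSE still carries is stale. Below is a rough,
illustrative sketch of that seqid check; it is not the actual nfsd code.)

#include <stdint.h>
#include <stdio.h>

#define NFS4_OK			0
#define NFS4ERR_OLD_STATEID	10024
#define NFS4ERR_BAD_STATEID	10025

struct stateid {
	uint32_t seqid;
	uint8_t  other[12];
};

/* compare the seqid a request carries with the server's current one */
static int check_stateid_seqid(const struct stateid *cur,
			       const struct stateid *in)
{
	/* seqid 0 means "whatever the current stateid is" and is never old */
	if (in->seqid == 0 || in->seqid == cur->seqid)
		return NFS4_OK;
	if (in->seqid < cur->seqid)	/* ignoring wraparound for brevity */
		return NFS4ERR_OLD_STATEID;
	return NFS4ERR_BAD_STATEID;
}

int main(void)
{
	struct stateid cur = { .seqid = 3 };	/* after the racing OPEN bumped it */
	struct stateid old = { .seqid = 2 };	/* what the in-flight CLOSE carried */

	printf("CLOSE status: %d\n", check_stateid_seqid(&cur, &old));
	return 0;
}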

Yeah, the stack size wasn't it. I've straced generic/168 and caught
ioctl(clone_file_range) returning EAGAIN.
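
For reference, the call I'm catching with strace looks roughly like the
sketch below (paths and sizes are placeholders; XFS_IOC_CLONE_RANGE in
the test output is the same ioctl number as FICLONERANGE):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FICLONERANGE, struct file_clone_range */

int main(void)
{
	/* placeholder paths for the test files on the NFS mount */
	int src = open("/mnt/nfs/src", O_RDONLY);
	int dst = open("/mnt/nfs/dst", O_WRONLY | O_CREAT, 0644);

	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	struct file_clone_range fcr = {
		.src_fd      = src,
		.src_offset  = 0,
		.src_length  = 65536,	/* one 64k chunk, like the WRITEs in the trace */
		.dest_offset = 0,
	};

	/* this is the ioctl that strace shows returning EAGAIN */
	if (ioctl(dst, FICLONERANGE, &fcr) < 0)
		fprintf(stderr, "FICLONERANGE: %s\n", strerror(errno));

	close(src);
	close(dst);
	return 0;
}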

I've instrumented the kernel to see where we return EAGAIN in
nfs42_proc_clone(): it is failing in nfs42_select_rw_context(), because
nfs4_copy_open_stateid() fails to get the open state. Basically it
looks like we are trying to clone a file that is not open. Still
trying to understand things...
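
In case it helps to see the shape of that, here is a rough userspace
sketch of what the failure looks like from the clone path's point of
view (the struct and function names are made up for illustration, this
is not the in-kernel code): if there is no valid open state for the
file, the stateid lookup fails and EAGAIN bubbles up to the ioctl.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct stateid {
	unsigned int  seqid;
	unsigned char other[12];
};

/* made-up stand-in for the per-file open state the client tracks */
struct open_state {
	bool           valid;		/* roughly: "is NFS_OPEN_STATE still set?" */
	struct stateid open_stateid;
};

/* made-up stand-in for "pick a stateid for the CLONE, or fail" */
static int select_clone_stateid(const struct open_state *st,
				struct stateid *dst)
{
	if (st == NULL || !st->valid)
		return -EAGAIN;		/* no usable open state -> ioctl sees EAGAIN */
	*dst = st->open_stateid;
	return 0;
}

int main(void)
{
	struct open_state closed = { .valid = false };	/* file with no open state */
	struct stateid sid;
	int ret = select_clone_stateid(&closed, &sid);

	if (ret == -EAGAIN)
		printf("clone attempted with no open state: %s\n",
		       strerror(-ret));
	return 0;
}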


