Re: [PATCH] NFSv4: fix stateid refreshing when CLOSE racing with OPEN

On Fri, Oct 18, 2019 at 8:34 PM Olga Kornievskaia <aglo@xxxxxxxxx> wrote:
>
> On Fri, Oct 11, 2019 at 2:50 PM Olga Kornievskaia <aglo@xxxxxxxxx> wrote:
> >
> > On Fri, Oct 11, 2019 at 10:18 AM Trond Myklebust
> > <trondmy@xxxxxxxxxxxxxxx> wrote:
> > >
> > > On Thu, 2019-10-10 at 13:32 -0400, Olga Kornievskaia wrote:
> > > > On Thu, Oct 10, 2019 at 3:42 AM Murphy Zhou <jencce.kernel@xxxxxxxxx>
> > > > wrote:
> > > > > Since commit:
> > > > >   [0e0cb35] NFSv4: Handle NFS4ERR_OLD_STATEID in
> > > > > CLOSE/OPEN_DOWNGRADE
> > > > >
> > > > > xfstests generic/168 on v4.2 starts to fail because reflink call
> > > > > gets:
> > > > >   +XFS_IOC_CLONE_RANGE: Resource temporarily unavailable
> > > >
> > > > I don't believe this failure is related to getting ERR_OLD_STATEID
> > > > on the CLOSE. What you see on the network trace is expected: the
> > > > client sends the OPEN and CLOSE in parallel, so the server fails the
> > > > CLOSE with ERR_OLD_STATEID since it has already updated its stateid
> > > > for the OPEN.
> > > >
> > > > > In tshark output, NFS4ERR_OLD_STATEID stands out when comparing
> > > > > with
> > > > > good ones:
> > > > >
> > > > >  5210   NFS 406 V4 Reply (Call In 5209) OPEN StateID: 0xadb5
> > > > >  5211   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > > > >  5212   NFS 250 V4 Reply (Call In 5211) GETATTR
> > > > >  5213   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > > > >  5214   NFS 250 V4 Reply (Call In 5213) GETATTR
> > > > >  5216   NFS 422 V4 Call WRITE StateID: 0xa818 Offset: 851968 Len:
> > > > > 65536
> > > > >  5218   NFS 266 V4 Reply (Call In 5216) WRITE
> > > > >  5219   NFS 382 V4 Call OPEN DH: 0x8d44a6b1/
> > > > >  5220   NFS 338 V4 Call CLOSE StateID: 0xadb5
> > > > >  5222   NFS 406 V4 Reply (Call In 5219) OPEN StateID: 0xa342
> > > > >  5223   NFS 250 V4 Reply (Call In 5220) CLOSE Status:
> > > > > NFS4ERR_OLD_STATEID
> > > > >  5225   NFS 338 V4 Call CLOSE StateID: 0xa342
> > > > >  5226   NFS 314 V4 Call GETATTR FH: 0x8d44a6b1
> > > > >  5227   NFS 266 V4 Reply (Call In 5225) CLOSE
> > > > >  5228   NFS 250 V4 Reply (Call In 5226) GETATTR
> > > >
> > > > "Resource temporarily unavailable" is more likely related to
> > > > ulimit settings.
> > > >
> > > > I also saw the same error. After I increased the ulimit for the stack
> > > > size, the problem went away. There might still be a problem somewhere
> > > > in the kernel.
> > > >
> > > > Trond, is it possible that we have too many CLOSE recoveries on the
> > > > stack eating up stack space?
> > >
> > > That shouldn't normally happen. CLOSE runs as an asynchronous RPC call,
> > > so its stack usage should be pretty minimal (limited to whatever each
> > > callback function uses).
> >
> > Yeah, that wasn't it. I've straced generic/168 to catch
> > ioctl(clone_file_range) returning EAGAIN.
> >
> > I've instrumented the kernel to see where we are returning an EAGAIN
> > in nfs42_proc_clone(). nfs42_proc_clone is failing on
> > nfs42_select_rw_context() because nfs4_copy_open_stateid() is failing
> > to get the open state. Basically it looks like we are trying to do a
> > clone on a file that's not opened. Still trying to understand
> > things...
>
> Trond,
>
> Generic/168 fails in two ways (though only one leads to the failure in
> xfs_io). One way is that a file is closed and then the client uses its
> stateid for a WRITE and gets BAD_STATEID, which the client recovers
> from (but the fact that the client used the stateid at all is a
> problem). The other is the clone case, where again the file ends up
> "closed" while the clone is still trying to use its stateid.
>
> The problem comes down to a racing CLOSE and OPEN. The client sent the
> CLOSE followed by the OPEN, but the server processed the OPEN first and
> then the CLOSE. The server returns OLD_STATEID to the CLOSE. The code
> then bumps the sequence id and resends the CLOSE, which inadvertently
> closes the file that was just opened. While the IO errors from this are
> recoverable, the clone error is visible to the application (I think
> copy would be another such case).
>
> I don't have a code solution yet, but it'll have to be something where
> we ignore a CLOSE failing with OLD_STATEID when another OPEN has
> happened in the meantime.

Also, besides the clone and copy cases, if the racing OPEN got a
delegation and that delegation was in use when the BAD_STATEID came
back, that's an automatic EIO to the application.

I guess my proposal is to revert 12f275cdd1638. The same problem exists
for a DELEGRETURN racing with an OPEN: if we retry it, we will return a
delegation that was just given to us, while the client still thinks it
holds it.


