Re: [PATCH 1/1] SUNRPC: fix handling of half-closed connection

On Fri, Feb 22, 2019 at 11:32 AM Dave Wysochanski <dwysocha@xxxxxxxxxx> wrote:
>
> On Fri, 2019-02-22 at 09:44 -0500, Olga Kornievskaia wrote:
> > Hi Dave,
> >
> > A reproducer is a server that sends an ACK to the client's FIN/ACK
> > and then does nothing afterwards (I can reproduce it 100% with a
> > hacked-up server; it was discovered with a "broken" server that
> > doesn't fully close a connection). It leaves the client unable to
> > connect to that server again indefinitely (a forever, reboot-
> > required kind of state). At one point it was considered that doing
> > something like that to a client is a form of attack (denial of
> > service), and thus the kernel has a tcp_fin_timeout option after
> > which the kernel will abort the connection. However, that only
> > applies to sockets that have been closed by the client. That is NOT
> > the case here: NFS does not close the connection, and it ignores
> > the kernel's notification of the FIN_WAIT2 state.
> >
> Interesting.  I had the same reproducer, but eventually an RST would
> get sent from the NFS client due to the TCP keepalives.  It sounds
> like that is not the case anymore.  My testing was over a year and a
> half ago, though.

After the keepalives, TCP also sends a FIN/ACK. If the server had
properly sent a FIN/ACK back, then keepalive would indeed be
sufficient. Can you check whether, in your trace, the server in one
case sent just an ACK but in another case sent the FIN/ACK?
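
For reference, here is a minimal userspace sketch of the kind of
half-closing server I mean (my own illustration, not the actual
hacked-up server; the port number is an arbitrary stand-in and error
handling is omitted). The kernel ACKs the client's FIN on this side
automatically, but since the process never closes its socket, no FIN
ever goes back and the peer is left in FIN_WAIT2. tcp_fin_timeout
doesn't rescue the peer either, since that timer only covers orphaned
sockets, i.e. ones the application has already closed:

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	char buf[4096];
	int ls, cs;

	ls = socket(AF_INET, SOCK_STREAM, 0);
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(12049);	/* arbitrary stand-in port */
	bind(ls, (struct sockaddr *)&addr, sizeof(addr));
	listen(ls, 1);

	cs = accept(ls, NULL, NULL);

	/* Drain until the client's FIN (read() returning 0); the
	 * kernel ACKs that FIN on our behalf. */
	while (read(cs, buf, sizeof(buf)) > 0)
		;

	/* Never close(cs) or shutdown(cs): no FIN is ever sent back,
	 * so the peer sits in FIN_WAIT2. */
	for (;;)
		pause();
}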

> > One can argue that this is a broken server and we shouldn't bother.
> > But this patch is an attempt to argue that the client should still
> > care and deal with this condition. However, if the community feels
> > that a broken server is a broken server and this form of attack is
> > not interesting, this patch can live in the archive for later, or
> > never.
> >
>
> This isn't IPoIB, is it?

No, just normal TCP.


> Actually, fwiw, looking back I had speculated on changes in this area.
> I'm adding you to the CC list of this bug which had some of my musings
> on it:
> https://bugzilla.redhat.com/show_bug.cgi?id=1466802#c43
> That bug I ended up closing when we could no longer prove there was any
> state where the NFS client could get stuck in FIN_WAIT2 after the
> keepalive patch.

I can reproduce this behavior with the current upstream code.


> It can happen that the server only sends the ACK back to the client's
> FIN/ACK, so in general the test case is valid.  But then the question
> is how long should one wait for the final data and FIN from the
> server, or are there ever instances where you shouldn't wait forever?
> Is there a way for us to know for sure there is no data left to
> receive, so it's safe to time out?  No RPCs outstanding?

Yep, all good questions to which I don't have answers. I realize, for
instance, that a server might send an ACK and then send a FIN/ACK
after that. But why is it bad for the client to proactively send an
RST (by shutting down the connection in TCP_FIN_WAIT2) if the client
was shutting down the connection anyway?
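
To make the idea concrete, here is a sketch of the TCP-level behavior
in userspace terms. This only illustrates an abortive close; it is not
what the patch itself does at the sunrpc level. The helper name and
the 5-second wait are placeholders of my own:

#include <sys/socket.h>
#include <unistd.h>

static void abort_half_closed(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	/* Half-close: our FIN goes out and, once it is ACKed, the
	 * socket sits in FIN_WAIT2 waiting for the server's FIN. */
	shutdown(fd, SHUT_WR);

	/* Give the server a chance to finish; a real client would
	 * instead poll for read() returning 0 with some timeout. */
	sleep(5);

	/* No FIN came back: abort.  With SO_LINGER set and a zero
	 * linger time, close() discards the connection and makes the
	 * kernel emit an RST rather than waiting forever. */
	setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
	close(fd);
}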

> I don't claim to know many of the subtleties here, such as whether
> the server would wait forever in LAST_ACK or whether implementations
> eventually time out after some time.  It seems like if these last
> packets get lost there is likely a bug somewhere (either a firewall
> or a TCP stack, etc.).
> https://tools.ietf.org/html/rfc793#page-22
>
> It looks like at least some people are putting timeouts into their
> stacks, though I'm not sure whether that's a good idea or not.

I saw only one other driver in the kernel that handles the
TCP_FIN_WAIT2 state (drivers/crypto/chelsio) ...


