linux returns EAGAIN for closed ocserv interfaces

On Fri, 2014-09-26 at 13:39 +0200, Nikos Mavrogiannopoulos wrote:
> On Fri, 2014-09-26 at 11:30 +0100, David Woodhouse wrote:
> > On Sun, 2014-09-21 at 02:00 +0200, Nikos Mavrogiannopoulos wrote:
> > > On Sat, 2014-09-20 at 13:05 +0200, Niels Peen wrote:
> > > > Another possible clue: I upgraded from ocserv 0.3 to 0.8 on September 15th. 0.3 has never caused this problem.
> > > 
> > > I don't see many changes related to that issue from 0.3 to 0.8, but that
> > > looks like a race issue in the kernel and could be caused by different
> > > timings between calls. I've applied the suggested fix anyway as it looks
> > > correct.
> > 
> > It looks very very wrong to me. Linus had it right at
> > https://lkml.org/lkml/2002/7/17/165
> > 
> > The close() system call should *never* fail to close the file
> > descriptor. And as Linus points out, your force_close() hack is very
> > broken in a threaded environment.
> 
> That doesn't matter much for ocserv, as it isn't multi-threaded. It
> was added because it looked reasonable for other OSes which may not
> behave like Linux.

I'm not convinced there are any. I know of no other userspace which will
retry a close() call. And it's hard to imagine how it could ever be
necessary. The *flush* might fail, but closing the file descriptor is
always easy enough because it just involves throwing away local state.

Also, we close all file descriptors when a process exits. If close() can
fail, that makes a horrid mess of the task exit code path. I dread to
think how this could ever be handled. I just don't think it can happen.
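
To make the flush-vs-close distinction concrete, here's a minimal
sketch (my own illustration, not code from ocserv or the kernel):

#include <unistd.h>
#include <errno.h>

/* The flush is the part that can genuinely fail; close() itself only
 * throws away local state, so it is called exactly once, never in a
 * retry loop. */
static int flush_then_close(int fd)
{
    int err = 0;

    if (fsync(fd) < 0)
        err = errno;    /* a real, reportable failure */

    close(fd);          /* the fd is gone whatever close() returns; a
                         * retry in a threaded program could close a
                         * recycled descriptor belonging to another
                         * thread */
    return err ? -1 : 0;
}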

> > Niels seemed to suggest that the client had gone away, which implies
> > that there's no ocserv thread servicing the tun device at all. Is that
> > the case? Is the device in question still *up*? It probably shouldn't
> > be. (You do have a separate tundev per client?)
> > 
> > I'd like to see more information about the state of the system when this
> > failure happens. Reproducing with dnsmasq seems like it would be hard
> > because to hit the race condition you have to disconnect the VPN after
> > sending a query but before receiving the reply. I suppose you could hack
> > openconnect to disconnect after sending a DNS request :)  Or use a
> > different UDP sender on the server side.
> 
> I'd really love to solve that issue, as I also don't believe that
> force_close() is really the fix.

Right. There's a real problem here, and I don't see how this
force_close() thing would *even* paper over it, let alone fix it. Your
loop should just *never* trigger (and perhaps we could add a warning or
at least a debug message if it does?).
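
Concretely, I'd expect the whole thing to look no more complicated
than this (close_tun_fd and the log message are just my illustration
of the shape, not ocserv's actual code):

#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <syslog.h>

/* Close the per-client tun fd exactly once; if that ever fails on
 * Linux it's a surprise worth logging, not something to retry. */
static void close_tun_fd(int fd)
{
    if (close(fd) < 0)
        syslog(LOG_WARNING, "close() on tun fd %d failed: %s",
               fd, strerror(errno));
}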

> > (Has anyone been running VoIP over ocserv connections, btw? This talk of
> > buffers and EAGAIN reminds me that we need to make sure we avoid
> > excessive buffering. cf.
> > http://git.infradead.org/users/dwmw2/openconnect.git/commitdiff/3444f811
> > )
> 
> I am. One can tune it using the output-buffer parameter. Since you
> brought that up, I suspect that this particular commit was responsible
> for the asymmetry between upload and download (it came up on an old
> thread, where openconnect upload was four times slower than download).
> Unfortunately I no longer have the hardware to verify that theory.

Hm, maybe I was a little too aggressive about cutting down the output
buffering? But if we ask for MTU*2, don't we actually get to put *four*
full-MTU packets into the socket before it stalls? Surely that ought to
be enough to keep the packets flowing?
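
(That arithmetic assumes the kernel's usual behaviour of doubling
whatever value is passed to SO_SNDBUF to allow for bookkeeping
overhead. A quick standalone check, with an illustrative 1500-byte
MTU:)

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int mtu = 1500, want = 2 * mtu, got;
    socklen_t len = sizeof(got);

    if (fd < 0)
        return 1;

    /* Linux doubles the requested value, so asking for 2*MTU leaves
     * room for roughly four full-MTU datagrams. */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want));
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &got, &len);
    printf("asked for %d, kernel reports %d\n", want, got);
    return 0;
}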

Was there anything special we needed to test this? Just ocserv and
openconnect running on different hosts on a gigabit Ethernet segment?

-- 
dwmw2