Re: Error Resolution (TunnelStateData::Connection:: error )

On 12/06/2015 5:39 p.m., Iruma Keisuke wrote:
> Thank you Amos.
> I really appreciate your response.
> 
> We analyzed the trend of the FDs on which the error occurred.
> 
> 2015/06/01_08:52:35 nsu01pint-int01 [cache]2015/06/01 08:52:32|
> TunnelStateData::Connection:: error : FD 81: read/write failure: (110)
> Connection timed out
> Active file descriptors:
>  File Type   Tout Nread  * Nwrite * Remote Address        Description
>  ---- ------ ---- -------- -------- ---------------------- -----------
>    81 Socket 86282  8100*   67977  XXX.XXXX.2.136:49907   Reading next request   Date: Sun, 31 May 2015 23:32:30 GMT
>    81 Socket 86302  8100*   16753  XXX.XXXX.2.136:49944   Reading next request   Date: Sun, 31 May 2015 23:37:30 GMT
>    81 Socket 86002  8100*   17395* XXX.XXXX.2.136:49944   Reading next request   Date: Sun, 31 May 2015 23:42:30 GMT
>    81 Socket 85702  8100*   17395* XXX.XXXX.2.136:49944   Reading next request   Date: Sun, 31 May 2015 23:47:30 GMT
>    81 Socket 85402  8100*   17395* XXX.XXXX.2.136:49944   Reading next request   Date: Sun, 31 May 2015 23:52:30 GMT
>    81 Socket 86354 56810*   40697  XXX.XXXX.6.114:49687   Reading next request   Date: Sun, 31 May 2015 23:57:30 GMT
>
>                          Error while writing ?
> 
> All of them seem to have timed out while writing.
> And the time until the timeout is between 10 and 15 minutes.
> 
> Are the "write_timeout" and "read_timeout" directives related to the error?
> "write_timeout" is a directive that does not exist in version 3.1.
> Even though "write_timeout" cannot be set, is it still in effect and
> causing a timeout after 15 minutes?

There is no write timeout; writes either complete immediately or as soon
as space on the network pipe becomes available. The processing Job waiting
for the write to finish can time out, but that is completely different
from the sockets layer.
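
To illustrate the difference (a rough sketch in C, not Squid's actual
code): the write either succeeds straight away, or the kernel reports
that the pipe is full and the caller waits for writability. Any timeout
around that wait belongs to the waiting application code, not to the
socket write itself:

    /* Simplified illustration only -- not Squid source code. */
    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Try to write on a non-blocking socket; if the network pipe is full,
     * wait up to timeout_ms for space.  The timeout here belongs to the
     * waiting code, not to write() itself. */
    ssize_t write_with_app_timeout(int fd, const char *buf, size_t len,
                                   int timeout_ms)
    {
        ssize_t n = write(fd, buf, len);
        if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return n;                      /* wrote at once, or a real error */

        struct pollfd p = { fd, POLLOUT, 0 };
        if (poll(&p, 1, timeout_ms) <= 0)  /* the caller's timer expired */
            return -1;

        return write(fd, buf, len);        /* space became available */
    }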

It might be read_timeout, but IIRC that is internal, and Squid uses
close() on the socket to send the signal to the other end rather than
trying to read() and failing.


> 
> I think this is also related:
> http://www.squid-cache.org/Doc/config/half_closed_clients/
>> Squid can not tell the difference between a half-closed connection, and a fully closed one.
> In version 3.1, "half_closed_clients" is off by default.
> 
> My guess is as follows:
> 
> 1. The client goes into the "half_closed" state.
> 2. Squid writes to the FD.
> 3. The client goes into the "fully_closed" state.
> 4. Since Squid cannot detect the "fully_closed" state, it keeps
> attempting to write to the FD.
> 5. After 15 minutes, a "write_timeout" occurs in Squid.
> 
> Can I get your opinion?

Squid does understand fully closed (the state; there is no such directive).
If the socket/FD were fully closed there would be a "read/write error
connection closed" message in the log.

In the world of sockets, a close event by the remote end shows up as a
signal requesting a read(); when that read() is performed, an error is
returned which explains what happened. I think this is what you are
seeing (e.g. "connection timed out" happened).
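
A minimal sketch of that sequence (illustration only, not Squid source):
the socket reports itself readable, and only the read() that follows
reveals whether the peer closed cleanly (read() returns 0) or the
connection actually failed (read() returns -1 with an errno such as
ETIMEDOUT, the (110) in your log):

    /* Simplified illustration only -- not Squid source code. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Called when the socket has signalled "readable": only the read()
     * result reveals what actually happened on the connection. */
    void report_read_event(int fd)
    {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf));

        if (n > 0)
            printf("FD %d: %zd bytes of ordinary data\n", fd, n);
        else if (n == 0)
            printf("FD %d: peer closed the connection (clean close)\n", fd);
        else
            /* e.g. errno == ETIMEDOUT (110) -> "Connection timed out" */
            printf("FD %d: read/write failure: (%d) %s\n",
                   fd, errno, strerror(errno));
    }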
Why/how it is allowed to get into that state is unclear, and may be
related to a bug somewhere. But it is still unknown whether that bug
would be in Squid, the client, or the server.

A timeout signalled by the TCP stack is possible and normal, though not
exactly common. As you saw, it takes a while, and usually connections get
closed when finished with instead of being left waiting for a long time
like that, so you do not normally see this.
The exception is CONNECT tunnels (TunnelStateData): Squid is not aware
of the protocol inside, so it must keep them open until the very last
byte is passed and either the remote end or the client requesting it
goes away.
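
Roughly like this (a deliberately simplified sketch, not the real
TunnelStateData): the relay just copies bytes in both directions and has
no way to recognise an application-level "end of conversation", so a
read returning EOF or an error is the only thing that ends the tunnel:

    /* Simplified illustration only -- not Squid's TunnelStateData. */
    #include <poll.h>
    #include <unistd.h>

    /* Shuffle bytes between client and server until one side closes or
     * errors out; the relay cannot know when the inner protocol is
     * "finished", so this is the only exit condition. */
    void relay_tunnel(int client_fd, int server_fd)
    {
        struct pollfd fds[2] = {
            { client_fd, POLLIN, 0 },
            { server_fd, POLLIN, 0 },
        };
        char buf[4096];

        for (;;) {
            if (poll(fds, 2, -1) < 0)
                return;
            for (int i = 0; i < 2; ++i) {
                if (!(fds[i].revents & (POLLIN | POLLHUP | POLLERR)))
                    continue;
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n <= 0)        /* EOF or error: the tunnel is over */
                    return;
                /* blocking write for brevity; Squid buffers and retries */
                (void)write(fds[1 - i].fd, buf, (size_t)n);
            }
        }
    }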


PS. Eliezer packages current versions of Squid unofficially for RHEL.
You may want to try upgrading to that. It is not supported by RHEL, but
by Eliezer and us here. If you want to continue with the RHEL package,
then I suggest it is best to get in touch with them about problems (if
this is one).

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users




