On 24/01/2011 06:35, Max Feil wrote:
> Already did use Wireshark. Here is some more info:
>
> If you look through the traces you'll notice that at some point Squid
sends a TCP [FIN, ACK] right in the middle of a connection, for seemingly
no reason, attempting to close the connection. The server ignores this and
sends the rest of the data, which Squid responds to with a TCP RST (reset),
since it now believes the connection to be closed.
>
> From the browser side it seems to be given no notification that the
connection was closed (and indeed I can see no reason why it should be
closed) so it seems to sit around doing nothing as it may have reached
the max connections limit. After about 2 minutes (possibly related to a
persistent connection timeout limit in squid) Squid seems to terminate
all the connections with FIN,ACKs. The browser then seems to realize its
connections are gone and it requests the remaining resources resulting
in a bunch of TCP SYNs followed by the rest of the resources.
>
> Why it does this in the middle of connections we still have no clue;
however, turning off server_persistent_connections seems to make the page
load fast. That is probably a bad idea in general, though...
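For reference, that workaround is a single line in squid.conf (just noting
it here; whether to leave it off long-term is another question):

  server_persistent_connections off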
>
> Max
>
> -----Original Message-----
> From: Henrik Nordström [mailto:henrik@xxxxxxxxxxxxxxxxxxx]
> Sent: Sunday, January 23, 2011 7:16 PM
> To: Max Feil
> Cc: squid-users@xxxxxxxxxxxxxxx
> Subject: RE: Squid 3.x very slow loading on ireport.cnn.com
>
> Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:
>
>> Thanks. I am looking at the squid access.log and the delay is caused by
>> a GET which for some reason does not result in a response from the
>> server. Either there is no response or Squid is missing the response.
>> After a 120 second time-out the page continues loading, but the end
>> result may be malformed due to the object which did not load.
>>
> I would take a peek at the traffic using wireshark to get some insight
> in what is going on there.
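If you want to share a capture, something like this on the Squid box will
grab one for offline viewing in Wireshark (the interface name is just a
placeholder, use whatever your box has):

  tcpdump -i eth0 -s 0 -w squid-cnn.pcap host ireport.cnn.com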
>
> Regards
> Henrik
>
>
Just noticed your reply.
Also, the mail daemon didn't like my log for some reason, so I will send
it to you separately.
Try to make an ACL for the sites/domains in the list below so they are
not cached at all (a sketch follows).
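Something like this in squid.conf should do it (just a sketch; extend the
domain list to cover the full list further down):

  acl cnn_sites dstdomain .cnn.com .turner.com .scorecardresearch.com .ireport.com
  cache deny cnn_sites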
Send a log with as much detail as possible on the requests (headers /
debug mode).
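For that, something like this in squid.conf raises the detail (the section
numbers are from memory, 11 = HTTP and 78 = DNS; plain ALL,2 also works if
you don't mind the volume):

  debug_options ALL,1 11,3 78,3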
My last message is below:
There was another guy with a CNN problem, no?
(Named Max.)
Did you do the basic tests first, like ping and DNS checks?
CNN, like many others, uses a CDN, which can make things a little
problematic sometimes.
Did you compile it yourself?
This is the second time, so try these:
I will give you domain names and IPs.
Also, do you use a local DNS server, your ISP's, or something else?
Try to set the name server for the proxy to 8.8.8.8 (Google DNS), for
example as below.
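In squid.conf that is (or put it in /etc/resolv.conf on the Squid box if
you prefer to change the OS resolver instead):

  dns_nameservers 8.8.8.8 8.8.4.4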
Ping it first.
The page has about 8-10 domains/names it is trying to get:
ireport.cnn.com
i.cdn.turner.com
i2.cdn.turner.com
audience.cnn.com
b.scorecardresearch.com
metrics.cnn.com
metrics.ireport.com
Try to ping and dig each one of them and send the output in the email (a
quick loop is sketched below).
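Something like this shell loop does it in one go (assumes a Linux/Unix box
with dig installed):

  for h in ireport.cnn.com i.cdn.turner.com i2.cdn.turner.com \
           audience.cnn.com b.scorecardresearch.com metrics.cnn.com \
           metrics.ireport.com; do
      echo "== $h =="
      ping -c 3 "$h"
      dig "$h"
  done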
Then try to put these lines in the hosts file on the Squid OS:
157.166.255.213 ireport.cnn.com
207.123.56.126 i.cdn.turner.com
192.12.94.30 i2.cdn.turner.com
157.166.255.80 audience.cnn.com
92.123.69.155 b.scorecardresearch.com
66.235.143.121 metrics.cnn.com
192.33.14.30 metrics.ireport.com
Also try to fetch the IP directly:
http://192.12.94.30/
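With curl, for example, once directly and once through the proxy (the
proxy address and port here are only placeholders, use your own):

  curl -v http://192.12.94.30/
  curl -v -x http://127.0.0.1:3128 http://192.12.94.30/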
Send the results for these.
Another thing:
Send us your settings file (squid.conf).
If Squid is running in transparent mode, specify the IPv4 address.
Even if it's not transparent, set it up so it can be (see the http_port
line below)...
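On Squid 3.1 the interception line looks something like this (older
2.6/3.0 used 'transparent' instead of 'intercept'; the address and port
are just examples, use your own):

  http_port 192.168.0.1:3128 intercept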
The next thing is to make sure the failed DNS cache time is set to 5
seconds, and turn on dns_v4_fallback.
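In squid.conf terms that would be (negative_dns_ttl is my guess at the
directive for the failed-DNS cache time, check your version's docs):

  negative_dns_ttl 5 seconds
  dns_v4_fallback on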
And of course a log would be nice.
I will show you some of mine.