Hello all,
Checking my logs from time to time, I see some requests that return the
TCP_MISS/000 log code. I'm managing a medium-sized Active-Standby
transparent caching proxy (direct routing) which handles around 100
requests per second (daily average). I know what the entry means, but
I'm not sure whether it is normal to see them in such numbers under
normal operating conditions. They amount to roughly 0.1% of total
requests served (on average one entry every 10 seconds). Should I worry
about it, or do others get them too?
How long a duration do they show? Is there any consistency in the type
of requests?
As far as I can see, sometimes a sequence of 000 misses goes to the same
requesting IP (mostly web spiders), but in the meantime those clients do
get tons of other content. Some of them (maybe 20%) come in pairs, like
the samples below (a small parsing sketch follows them):
1366622555.453 1488 87.19.154.90 TCP_MISS/000 0 GET
http://yammo.it/index.php? - DIRECT/151.1.96.198 -
1366622555.454 2327 87.19.154.90 TCP_MISS/000 0 GET
http://yammo.it/index.php? - DIRECT/151.1.96.198 -
1366622571.558 292 82.90.127.184 TCP_MISS/000 0 GET
http://www.forumviaggiatori.com/tabindex.php? - DIRECT/5.134.122.135 -
1366622571.575 242 82.90.127.184 TCP_MISS/000 0 GET
http://www.forumviaggiatori.com/popup%0Bg.png - DIRECT/5.134.122.135 -
1366622596.390 1972 193.32.73.24 TCP_MISS/000 0 GET
http://www.romaintheclub.com/24042013-shed-function-goa -
DIRECT/5.134.122.154 -
1366622596.561 166 193.32.73.24 TCP_MISS/000 0 GET
http://www.romaintheclub.com/24042013-shed-function-goa -
DIRECT/5.134.122.154 -
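
For reference, this is roughly how I pull these entries out of the log.
It's a minimal Python sketch assuming the default native access.log
format shown above; the log path is an assumption, adjust to taste.

    #!/usr/bin/env python3
    """Count TCP_MISS/000 entries in a Squid access.log (native format)."""
    from collections import Counter

    LOG = "/var/log/squid/access.log"  # assumed path

    by_client = Counter()
    total = misses000 = 0
    with open(LOG) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip malformed or wrapped lines
            total += 1
            # native format: timestamp elapsed client code/status bytes method URL ...
            if fields[3].endswith("/000"):
                misses000 += 1
                by_client[fields[2]] += 1

    if total:
        print(f"{misses000} of {total} entries ({100.0 * misses000 / total:.4f}%)")
    print("top clients:", by_client.most_common(10))
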
In normal traffic this could be the result of:
* DNS lookup failure/timeout.
Identified by the lack of upstream server information on the log line.
This is very common, as websites contain broken links and broken XHR
scripts, and some browsers even send garbage FQDNs in requests to probe
network functionality. Not to mention DNS misconfiguration and broken
DNS servers that do not respond to AAAA lookups.
We are not using IPv6 yet, and some of these could indeed be failed DNS
lookups, as I still have to fix some issues we have with our local
resolvers. Details from the DNS stats:
Rcode Matrix:
RCODE  ATTEMPT1  ATTEMPT2  ATTEMPT3
    0     93690         3         0
    1         0         0         0
    2      1525      1522      1522
    3       540         0         0
    4         0         0         0
    5         0         0         0
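
RCODE 2 is SERVFAIL and RCODE 3 is NXDOMAIN, so a noticeable share of
lookups is failing outright. To see whether the /000 entries line up
with resolver trouble, I re-resolve the hostnames taken from those log
lines. A rough sketch (the sample URLs are just the ones above, and of
course this tests the resolver now, not its state at log time):

    import socket
    from urllib.parse import urlsplit

    def resolves(host):
        try:
            socket.getaddrinfo(host, 80)
            return True
        except socket.gaierror:
            return False

    # hostnames taken from the /000 samples above
    for url in ("http://yammo.it/index.php",
                "http://www.forumviaggiatori.com/tabindex.php"):
        host = urlsplit(url).hostname
        print(host, "resolves" if resolves(host) else "FAILS to resolve")
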
* "Happy Eyeballs" clients.
Identified by the short duration of transaction as clients open
multiple connections abort some almost immediately.
Maybe that's why they come in pairs?
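
One way I can check that guess is to look at the distribution of the
elapsed-time column for the /000 entries: client-side aborts should
cluster near zero, while timeouts should cluster around the configured
timeout values. A quick sketch, same assumed log path and format as
above:

    from collections import Counter

    buckets = Counter()
    with open("/var/log/squid/access.log") as f:  # assumed path
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3].endswith("/000"):
                ms = int(fields[1])  # elapsed milliseconds
                if ms < 100:
                    buckets["<100ms (likely client abort)"] += 1
                elif ms < 1000:
                    buckets["<1s"] += 1
                elif ms < 10000:
                    buckets["<10s"] += 1
                else:
                    buckets[">=10s (likely timeout)"] += 1
    print(buckets)
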
* The HTTP Expect: 100-continue feature being used through a Squid that
has "ignore_expect_100 on" configured - or some other proxy doing the
equivalent.
Identified by the long duration of the transaction, the HTTP method
plus an Expect header on the request, and sometimes no body size. The
client sends headers with Expect:, then times out waiting for a
100-continue response that is never going to appear. These clients are
broken, as they are supposed to send the request payload on timeout
anyway, which would let the transaction complete properly.
I did not check this one.
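
If I get around to it, one way to test would be to send a request with
an Expect header through the proxy and watch whether a 100 Continue ever
comes back. A raw-socket sketch against a forward-proxy port (the proxy
address and target URL are placeholders; with interception you would
point it at the origin instead):

    import socket

    PROXY = ("127.0.0.1", 3128)  # placeholder; adjust to the proxy under test
    request = (b"POST http://example.com/ HTTP/1.1\r\n"
               b"Host: example.com\r\n"
               b"Expect: 100-continue\r\n"
               b"Content-Length: 4\r\n"
               b"\r\n")  # body deliberately withheld until 100 Continue arrives

    s = socket.create_connection(PROXY, timeout=10)
    try:
        s.sendall(request)
        s.settimeout(5)
        try:
            print(s.recv(4096).decode(errors="replace"))
        except socket.timeout:
            print("no 100-continue within 5s - matches the ignore_expect_100 case")
    finally:
        s.close()
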
* PMTUd breakage on the upstream routes.
Identified at the TCP level by a complete lack of TCP ACKs to data
packets following a successful TCP SYN + SYN/ACK handshake. This would
account for the intermittent nature of it, as HTTP response sizes vary
and only large packets exceed the MTU size (individual TCP packets,
*not* the HTTP response message size).
I don't think that's the case here.
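
Still, for completeness: on Linux the kernel caches the discovered path
MTU per destination, and it can be read back after a TCP connect. A
quick Linux-only sketch (the IP_MTU value 14 comes from <linux/in.h>; a
true PMTUd blackhole would still need tcpdump to confirm, since here the
connect itself succeeds):

    import socket

    IP_MTU = 14  # socket option from <linux/in.h>; Linux-only

    def path_mtu(host, port=80):
        s = socket.create_connection((host, port), timeout=10)
        try:
            return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        finally:
            s.close()

    # origin host taken from the /000 samples above
    print(path_mtu("www.romaintheclub.com"))
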
Amos
I suspect that most of these misses come from loaded webservers
discarding requests (so Squid never receives a reply) or from firewalls
discarding excess packets.
Any other suggestions?
Martin