Re: [squid-users] Squid TPROXY and TCP_MISS/000 entries

From: Marcin Czupryniak <>
Date: Mon, 22 Apr 2013 11:46:49 +0200

>> Hello all,
>> Checking my logs from time to time, I see some requests that return
>> the TCP_MISS/000 log code. I'm managing a medium-sized Active-Standby
>> transparent caching proxy (direct routing) which handles around 100
>> requests per second (daily average). I know what the entry means, but
>> I'm not exactly sure whether, under normal operating conditions, they
>> are normal to see in such an amount. These entries are less than
>> 0.001% of total requests served (avg 1 entry per 10 seconds). Should
>> I worry about it, or do others get them too?
> How long a duration do they show? Any consistency to the type of
> requests?
As far as I can see, sometimes a sequence of 000 misses is returned to
the same requesting IP (mostly web spiders), but in the meantime those
clients do get tons of other content.
Some of them (maybe 20%) come in pairs, something like:

1366622555.453 1488 TCP_MISS/000 0 GET - DIRECT/ -
1366622555.454 2327 TCP_MISS/000 0 GET - DIRECT/ -

1366622571.558 292 TCP_MISS/000 0 GET - DIRECT/ -
1366622571.575 242 TCP_MISS/000 0 GET - DIRECT/ -

1366622596.390 1972 TCP_MISS/000 0 GET -
1366622596.561 166 TCP_MISS/000 0 GET -
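For what it's worth, a minimal sketch for quantifying the problem: count
what share of all requests log a /000 status. It assumes the default
native access.log format (code/status in field 4) and the usual log
path; adjust both if your setup differs.

```shell
# Share of requests logging a /000 status. Field 4 is the code/status
# column in the default native log format; the path is the common
# default and may differ on your system.
awk '{ total++ }
     $4 ~ /\/000$/ { zero++ }
     END { printf "%d of %d requests (%.4f%%) had status 000\n",
                  zero, total, 100 * zero / total }' /var/log/squid/access.log
```

Run periodically (or from cron) it gives a baseline, so a sudden jump in
the /000 rate stands out from the normal background noise.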

> In normal traffic this could be the result of:
> * DNS lookup failure/timeout.
> Identified by the lack of upstream server information on the log line.
> This is very common as websites contain broken links, broken XHR
> scripts, and even some browsers send garbage FQDN in requests to probe
> network functionality. Not to mention DNS misconfiguration and broken
> DNS servers not responding to AAAA lookups.
We are not using IPv6 yet, and it could be due to actual failed DNS
lookups, as I still have to fix some issues we have with our local
resolvers. Details from the DNS stats:

Rcode Matrix:
     0 93690     3     0
     1     0     0     0
     2  1525  1522  1522
     3   540     0     0
     4     0     0     0
     5     0     0     0
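For reference, the Rcode Matrix rows are DNS RCODE values per RFC 1035:
0=NOERROR, 1=FORMERR, 2=SERVFAIL, 3=NXDOMAIN, 4=NOTIMP, 5=REFUSED.
Assuming the first numeric column counts first-attempt replies, quick
arithmetic on the counts above gives the failing share of lookups:

```shell
# Rough arithmetic on the Rcode Matrix counts above (first column,
# assumed to be first-attempt replies). SERVFAIL here points at the
# local resolver issues mentioned; NXDOMAIN is ordinary broken links.
awk 'BEGIN {
  noerror = 93690; servfail = 1525; nxdomain = 540
  total = noerror + servfail + nxdomain
  printf "SERVFAIL: %.2f%%  NXDOMAIN: %.2f%%\n",
         100 * servfail / total, 100 * nxdomain / total
}'
```

A SERVFAIL rate above one percent is large enough to plausibly account
for a /000 entry every few seconds at ~100 req/s.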

> * "Happy Eyeballs" clients.
> Identified by the short duration of the transaction, as clients open
> multiple connections and abort some almost immediately.
Maybe that's why they come in couples?
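If they really are Happy Eyeballs aborts, the duplicates should mostly
land within the same wall-clock second. A sketch, again assuming the
default native log format (unix timestamp in field 1, code/status in
field 4):

```shell
# Count seconds containing more than one /000 entry - roughly what a
# Happy Eyeballs client aborting one of two parallel connections looks
# like in the log. Field positions assume the default native format.
awk '$4 ~ /\/000$/ { n[int($1)]++ }
     END { for (s in n) if (n[s] > 1) pairs++
           printf "%d seconds contained multiple /000 entries\n", pairs + 0 }' \
    /var/log/squid/access.log
```

If most /000 entries fall in such seconds, the paired pattern in the
samples above is probably client-side aborts rather than upstream drops.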
> * HTTP Expect:100-continue feature being used over a Squid with
> "ignore_expect_100 on" configured - or some other proxy doing the
> equivalent.
> Identified by the long duration of the transaction, the HTTP method
> plus an Expect header on the request, and sometimes no body size. The
> client sends headers with Expect:, then times out waiting for a
> 100-continue response which is never going to appear. These clients
> are broken, as they are supposed to send the request payload on
> timeout anyway, which would make the transaction complete properly.
I did not check this one.
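The Expect header itself isn't in access.log, but the other signs Amos
mentions are. A hedged filter for likely candidates: zero-byte POST/PUT
transactions with a long elapsed time. The 5000 ms threshold is an
arbitrary guess (tune it to your timeouts), and field positions assume
the default native log format (elapsed ms in field 2, bytes in field 5,
method in field 6):

```shell
# Likely Expect:100-continue victims: /000 status, no body bytes, a
# POST or PUT method, and a long elapsed time. The 5000 ms cutoff is
# an arbitrary guess; default native log format assumed.
awk '$4 ~ /\/000$/ && $5 == 0 && ($6 == "POST" || $6 == "PUT") && $2 > 5000' \
    /var/log/squid/access.log
```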
> * PMTUd breakage on the upstream routes.
> Identified at the TCP level by a complete lack of TCP ACKs to data
> packets following a successful TCP SYN + SYN/ACK handshake. This would
> account for the intermittent nature of it, as HTTP response sizes vary
> and only large packets go over the MTU size (individual TCP packets,
> *not* HTTP response message size).
I don't think that's the case here.
> Amos

I suspect that most of these misses come from loaded webservers
discarding requests (so squid never receives a reply) or from firewalls
discarding excess packets.
Any other suggestions?

Received on Mon Apr 22 2013 - 09:47:05 MDT

This archive was generated by hypermail 2.2.0 : Tue Apr 23 2013 - 12:00:05 MDT