RE: [squid-users] code red is making horrible on our network

From: BAARDA, Don <don.baarda@dont-contact.us>
Date: Mon, 13 Aug 2001 09:16:50 +0930

G'day,

Just a few quick comments.

Do you have a Squid ACL denying Code Red URLs?

If not, then Squid will be trying to pass all these requests upstream. This
will require reverse DNS lookups of random IPs... which can take some time
because many of them will be bogus and hence not "negatively cached" for
long. For the ones that do resolve, Squid will then be passing the request
upstream with an invalid URL... once again not cacheable. Also, servers with
nothing running on port 80 can take their time responding with a "service
unavailable" to prevent various DoS attacks, so there are long delays all
round just to find out it was an invalid request.
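If you don't have such an ACL yet, something along these lines in squid.conf
should do it (untested sketch; the ACL name and the exact regex are mine,
tune as needed). Code Red probes request /default.ida with a long query
string, so:

  # Deny Code Red probes before they trigger DNS lookups or upstream fetches.
  # "codered" is just a name I picked; put the deny near the top of your
  # http_access rules so it matches before any allow lines.
  acl codered urlpath_regex -i ^/default\.ida
  http_access deny codered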

Also, it is possible that DNS resolution will block when all the resolvers
are in use... very bad. Using cachemgr, you can check the status of your DNS
lookups.
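The relevant cachemgr pages can also be pulled from the command line with
the squidclient utility (called just "client" in older source trees); the
page names below are from memory, so check mgr:menu for what your version
actually offers:

  squidclient mgr:menu        # list the cachemgr pages your squid knows about
  squidclient mgr:ipcache     # hostname -> IP cache stats and contents
  squidclient mgr:fqdncache   # IP -> hostname (reverse) cache stats
  squidclient mgr:idns        # DNS statistics (mgr:dns with external dnsservers)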

Also, I'm not familiar with transparent proxy setups... do they still
forward the request to Squid with an IP number for the hostname, or do they
attempt a reverse lookup and use a DNS name? If the transparent thingy is
doing reverse lookups for you, then it could be dying too.

It pays to kill these requests ASAP... if you can't do it at the network
routing layer, then get Squid to kill them ASAP by denying them with an ACL.
Depending on when redirectors are applied, it might be a good idea to kill
them with a redirector too (rough sketch below). It would be better to kill
them before they get to Squid though if you can (perhaps get the transparent
redirector thingy to kill them?).
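If you do go the redirector route, a minimal one only has to read
"URL client_ip/fqdn user method" lines on stdin and write back either a
replacement URL or a blank line (blank meaning "leave it alone", at least
with the old-style redirector interface; check your version). Here's a rough
Python sketch; the deny URL is my own invention:

  #!/usr/bin/env python
  # Rough Code Red-killing redirector sketch for Squid.
  import re
  import sys

  # Hypothetical local page to dump Code Red hits onto; adjust to taste.
  DENY_URL = "http://localhost/codered-denied.html"
  CODERED = re.compile(r"/default\.ida\?", re.IGNORECASE)

  while True:
      line = sys.stdin.readline()
      if not line:
          break                              # Squid closed the pipe
      fields = line.split()
      url = fields[0] if fields else ""
      if CODERED.search(url):
          sys.stdout.write(DENY_URL + "\n")  # rewrite the probe
      else:
          sys.stdout.write("\n")             # blank line = no change
      sys.stdout.flush()                     # redirectors must not buffer

Hook it in with a redirect_program line in squid.conf pointing at wherever
you put the script. The ACL approach above is cheaper if it is available to
you.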

ABO
Received on Sun Aug 12 2001 - 17:45:57 MDT
