Remote accelerator and ICP questions

From: Peter C. Norton <spacey@dont-contact.us>
Date: Mon, 7 Sep 1998 22:52:06 -0400

Hi,

I'm fairly new to squid, but am in a situation where it seems to be the
ideal short-term solution to bandwidth problems.

I'm in the process of setting up 2 systems running squid-1.1.22 as
accelerators for my clients' web servers, which will be located on a
network in Brazil for their audience there. In my test environment, I
am using 2 servers as accelerators on port 80 with

httpd_accel virtual 80

defined, and a destination ACL of (assuming our network is 10.0.0.0)

acl our_net dst 10.0.0.0/255.255.255.0
...
http_access deny !our_net

I'm set up this way because there are about 4 different hostnames whose
content will be cached, and the network I'm testing on is very low on
free IP addresses. The test clients use one name server, but the
accelerators use a different one, so they won't loop back on
themselves. The live situation will be set up similarly. Does anything
I've described sound like a bad idea for live deployment?
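
For concreteness, the accelerator-related fragment of each squid.conf
looks roughly like this (the 10.0.0.0 network is a stand-in for ours,
and I'm assuming I have the 1.1 http_access idiom right):

httpd_accel virtual 80

acl all src 0.0.0.0/0.0.0.0
acl our_net dst 10.0.0.0/255.255.255.0
http_access deny !our_net
http_access allow all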

In addition, I have a question about using these 2 servers as ICP peers.

In testing, I've configured each of these systems as a sibling of the other.
On system1, I have

cache_host system2 sibling 80 3130

and on system2 I have

cache_host system1 sibling 80 3130

Both have
icp_access allow all
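
Spelled out, the relevant fragment on system1 is, as far as I can tell:

http_port 80
icp_port 3130
cache_host system2 sibling 80 3130
icp_access allow all

(system2 mirrors it, pointing at system1). The icp_port line is my
assumption that the compiled-in default of 3130 is still in effect; if
it were set to 0, I'd expect ICP to be disabled entirely.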

However, when I look in the access log, I don't see any indication of
ICP contact. And when I check what's listening on either of these
systems, "netstat -an" doesn't show anything bound to port 3130/udp,
which I think it should. I haven't put tcpdump on the job yet, since
the system I have it installed on sits on a different port of the
switch, but if that would shed any light, I'm willing to.
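
For reference, this is what I ran; the exact column layout below is
from memory, but I believe a listening ICP socket ought to show up
something like this:

$ netstat -an | grep 3130
udp        0      0  *.3130                 *.*

and I get no output at all.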

According to the online docs and FAQ, the cache_host line should be all
I need to make the 2 caches work as peers. For what it's worth, the
origin servers I'm contacting are about 14 hops away, while the 2
caches sit on the same switched Fast Ethernet.

These systems are both FreeBSD-2.2.7-RELEASE, installed via ftp within
the last 2 weeks. The kernel has been jacked up to allow up to 30,000
file descriptors, and the shell that squid runs in has a ulimit'd max
of 8020 file descriptors. There is plenty of RAM (384 MB), and 2 GB is
dedicated to the cache, with 9 GB expected when they go live (these
machines are expected to last for the next year, and the site has been
growing a lot).
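
For the record, the limits were set roughly like this (assuming
FreeBSD's kern.maxfiles sysctl and a Bourne-style shell; treat the
exact invocations as a sketch):

sysctl -w kern.maxfiles=30000    # kernel-wide descriptor limit
ulimit -n 8020                   # per-process limit in the shell that starts squid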

I'm also trying to reproduce this on another network with a Linux host
and a Solaris 2.6/SPARC host. Squid under Solaris dies when I use
'virtual', so that attempt to duplicate the experiment hasn't
succeeded so far :(

-Peter

P.S.
I'm also considering setting reload_into_ims so that a single user
hitting reload can't kill the effectiveness of the accelerator cache.
Has this been a good or bad thing in others' experience? And is there
still a way to force a reload in case of a problem?
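
That is, something like this in squid.conf, if I've understood the
directive:

reload_into_ims on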

-- 
Peter C. Norton                      Time comes into it. / Say it.  Say it.
spacey@pobox.com                   | The Universe is made of stories,  
http://spacey.dyn.ml.org           | not of atoms. 
                                   |
                                     Muriel Rukeyser "The Speed of Darkness"