Re: [squid-users] Squid in an ISP environment

From: Robin Stevens <robin.stevens@dont-contact.us>
Date: Fri, 29 Nov 2002 15:54:18 +0000

On Sun, Nov 24, 2002 at 08:29:31PM +1100, Robert Collins wrote:
> On Sat, 2002-11-23 at 02:40, Andrew Veitch wrote:
> > On 23 Nov 2002, Robert Collins wrote:
> > > there was the JaNet cache farm, but I believe that has closed
> > > down. It must have served a pretty big user base as well.
> >
> > Official "switch off" date is 3rd December.
 
> is JANET just going to disappear? What is taking its place?

JANET's staying in much the same form as ever, but the charging model is
changing. Using the JANET cache was a way for institutions to avoid
charges for traffic on the transatlantic links, back in the days when these
were extremely expensive and frequently very congested.

Once JANET moved to multi-gigabit transatlantic feeds early this year [1],
simply getting the traffic data broken down by institution became
impossible, so the charging model had to change, and the incentive to run
the national service went away. It's a shame that it's going when so much
hard work and dedication went into providing the service, but times (and
policies!) change.

Oxford moved to interception caching three years ago, using the JANET cache
as a parent, and saved ourselves a significant sum despite the costs in
hardware and staff time. Additionally, the cache made life considerably
more pleasant while we were stuck behind a highly congested 34Mbit/s link
onto JANET (we're now at 1Gbit/s) - delay pools were wonderful for limiting
multimedia traffic at the time :-) We initially ran a cluster of
Solaris Suns behind an Alteon L4 switch, but moved to Linux on Dell
hardware in mid-2001, with considerable performance and operational
benefits.
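For anyone curious what throttling multimedia traffic with delay pools looks like, a minimal squid.conf sketch is below. The numeric limits are purely illustrative, not the values we actually ran with:

```
# squid.conf sketch: a single class-2 delay pool limiting per-client bandwidth
# (numbers are illustrative only, not Oxford's actual settings)
delay_pools 1
delay_class 1 2
# aggregate unrestricted (-1/-1); each client restored at 8 KB/s, 64 KB burst
delay_parameters 1 -1/-1 8000/64000
delay_access 1 allow all
```

In practice you would restrict delay_access to an ACL matching the traffic you want to throttle (large media types, say) rather than allowing all.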

We stopped using the national cache in April, once we were sure that none
of our current traffic was chargeable, which greatly simplified operation
at our end. In particular it removed the need to maintain a large ACL of
exemptions for the many e-journals to which our libraries had taken out
site-wide licences.
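An exemption list of that sort is typically a dstdomain ACL forcing matching requests direct rather than via the chargeable parent. A sketch, with a hypothetical file path and contents:

```
# squid.conf sketch: send licensed e-journal sites direct, bypassing the
# chargeable parent cache (file path and domain list are hypothetical)
acl ejournals dstdomain "/etc/squid/ejournal-domains.txt"
always_direct allow ejournals
```

The pain point is not the two lines of configuration but keeping the domain file accurate as libraries add and drop subscriptions.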

I'm in the process of preparing for interception caching to be disabled in
January. We can't continue for more than a few months without either
purchasing additional hardware or suffering congestion at peak times, and
the money is no longer there for new servers. We'll keep on a single
server for voluntary use, but I don't expect heavy usage.

Current throughput is up to 28 million requests per day and 200GB of
traffic (peak hourly loads averaging 600 requests/sec, 37Mbit/sec), with
typical hit rates of 60% by requests, 35% by volume. To answer the
original question in this thread, I see absolutely no reason why we
couldn't scale to a significantly larger user base simply by adding further
servers - the key is effective load-balancing across a cluster of squid
boxes. I rather suspect that we tend to see more traffic per user than a
typical ISP, given that almost all our users are on 10 or 100Mbit ethernet
rather than dialup or domestic "broad" band.
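As a rough illustration of the load-balancing idea (our Alteon L4 switch does the equivalent in hardware), one common approach is to hash the destination host so each site's traffic consistently lands on the same box, which keeps per-box hit rates high. A minimal Python sketch; the helper and cache host names are hypothetical:

```python
import hashlib

def pick_cache(dest_host, caches):
    """Map a destination host to one cache box so repeat requests for
    the same site always hit the same cache (hypothetical helper)."""
    digest = hashlib.md5(dest_host.encode()).digest()
    return caches[int.from_bytes(digest[:4], "big") % len(caches)]

# Hypothetical cache host names, not our actual machines.
caches = ["wwwcache1", "wwwcache2", "wwwcache3"]
print(pick_cache("www.example.ac.uk", caches))
```

Adding a server is then just a matter of extending the list, though a naive modulus like this reshuffles most mappings when the cluster size changes; schemes such as CARP-style consistent hashing avoid that.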
  
There's a little more information on our setup at
http://www.oucs.ox.ac.uk/cache/info.xml including a couple of plots of
usage over time. Alternatively feel free to mail me for more information.

        Robin

[1] Using connectivity from two companies, both of whom went into
administration within a few months of installation. Ooops. Thank goodness
the JANET core network is on Worldcom :-)

-- 
--------------- Robin Stevens  <robin.stevens@oucs.ox.ac.uk> -----------------
Oxford University Computing Services ----------- Web: http://www.cynic.org.uk/
------- (+44)(0)1865: 273212 (work) 273275 (fax)  Mobile: 07776 235326 -------