[squid-users] Managing user http bandwidth with squid cache

From: Alan Dawson <aland_at_burngreave.net>
Date: Tue, 16 Oct 2012 16:47:58 +0100

Hi,

I'm at an educational establishment with approx 2500 desktops.
We have a restrictive web access policy, implemented with
a web cache/filtering proxy appliance. User browsers are configured
by a PAC file and Web Proxy Auto-Discovery (WPAD). They authenticate
against the appliance with NTLM.

We plan on changing that policy to something much less restrictive,
but one of the technical issues we expect is an increase in
web traffic.

Currently we use 60 Mbit/s at peak times (97% of that being HTTP
traffic), and our network connection is rated at 100 Mbit/s.

We'd like to manage the amount of bandwidth our site uses when
connecting to high-traffic sites like youtube/vimeo/bbc, so that there is
always capacity for critical web applications, for example online
examinations.

The filter/proxy appliance does not have any options for limiting bandwidth.

One of the ways we are investigating would be to use a squid web cache and
delay_pools. We would try to identify high-bandwidth/popular sites, and
either use a PAC file so clients choose the bandwidth-restricting cache, or
use a cache chaining rule on the filter/proxy appliance to pass requests
for particular sites to the bandwidth-restricting cache.
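For illustration, the sort of delay_pools setup we have in mind is sketched
below. The domain list and the byte rates are placeholders, not a final
configuration:

```
# squid.conf sketch -- example domains and rates only, adjust to taste

# ACL matching the high-bandwidth sites we want to throttle
acl throttled_sites dstdomain .youtube.com .vimeo.com .bbc.co.uk

# One pool, class 1 = a single aggregate bucket shared by all matching traffic
delay_pools 1
delay_class 1 1
delay_access 1 allow throttled_sites
delay_access 1 deny all

# restore/max in bytes per second: cap the aggregate at ~4 MB/s (~32 Mbit/s)
delay_parameters 1 4000000/4000000
```

Requests not matching the ACL bypass the pool entirely, so traffic to
everything else (including the exam systems) would be unthrottled.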

If users connect to the squid cache directly, we would authenticate using
Kerberos/NTLM for Windows clients and Basic auth for others.
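Roughly, the authentication side would look like the fragment below
(helper paths and the service principal are examples and vary by squid
version and distribution):

```
# squid.conf sketch -- helper paths/principal are placeholders

# Negotiate (Kerberos) for domain-joined Windows clients
auth_param negotiate program /usr/lib/squid3/negotiate_kerberos_auth -s HTTP/proxy.example.net
auth_param negotiate children 20
auth_param negotiate keep_alive on

# Basic fallback for everything else
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwd
auth_param basic children 5
auth_param basic realm Web cache
auth_param basic credentialsttl 2 hours

# Require some form of authentication before granting access
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all
```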

Does this approach seem valid?

What kind of resources would the squid cache require (RAM, CPU, ...)?

Regards,

Alan Dawson

-- 
"The introduction of a coordinate system to geometry is an act of violence"

Received on Tue Oct 16 2012 - 15:48:10 MDT
