Re: ACLs and Stuff...

From: Kendall Lister <kendall@dont-contact.us>
Date: Thu, 9 Dec 1999 10:11:47 +1100 (EST)

On Wed, 8 Dec 1999, Andreas Skilitsis wrote:

> I'm really new (yeah... you guessed right...) to the Squid and Linux
> world as a whole... and coming from a Mac world...

I'm no expert either, but here are my suggestions.

> 1. The firewall uses 2 eth cards (Yeah how strange), but I was
> wondering if the inside-of-firewall clients should contact the Proxy
> on the inside IP or the outside IP to serve their requests. Both
> worked with version 1.1 of Squid, which ran ok after a fresh
> compile and without modifications to the default conf file... but I'm
> unsure what the "right" thing is...

Can't help you here, but ...

> 2. This squid is meant to serve around 10-20 users in our company's
> LAN (no more than 3-4 at a time tho) but not all clients are equal...
> so I thought I'll apply a simple rule...
>
> 2.1 Giving "super-users" true IP from the inside-of-firewall subnet
> (195.99.19.20 255.255.255.224 for example), and "normal-users" a fake
> IP like 192.168.1.20 255.255.255.0... (does this need IP Masquerading
> too? We only set a second "gateway address" to 192.168.1.1 on the
> interface)

This seems like a good way to differentiate between the two classes of
users.
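
In squid.conf terms that's just a pair of src ACLs. This is only a
sketch - the ACL names are made up and the subnets are the ones from
your example, so adjust them to whatever you actually use:

  # "super-users" on the real subnet, "normal-users" on the fake one
  acl super_users  src 195.99.19.0/255.255.255.224
  acl normal_users src 192.168.1.0/255.255.255.0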

> 2.2 "Super users" should get all URLs unrestricted... and "Normal
> users" should get all URLs except those matching some strings I'll
> type in... like sex playboy etc... (I think it's a lot easier to
> prevent access to these sites by keyword than to predict all
> domains... :)) ).

Check out the Squid Users' Guide
(http://squid.nlanr.net/Docs/Users-Guide/) - the section "Access Controls"
under "More Configuration Details" gives an example that should fit your
needs exactly, or at least give you a starting point to build on.
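
To give you a feel for the shape it takes, here's a rough sketch in
squid.conf syntax, building on the super_users/normal_users ACLs from
the sketch above (the word list is obviously just an example):

  # case-insensitive keyword match anywhere in the URL
  acl badwords url_regex -i sex playboy
  acl all src 0.0.0.0/0.0.0.0
  # order matters: the first matching http_access line wins
  http_access allow super_users
  http_access deny  normal_users badwords
  http_access allow normal_users
  http_access deny  all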

> 2.3 A possible "extension" of the 2.2 rule... would be if "normal
> users" could get all sites unrestricted but only after 17:00 or so...
> but that's entirely optional... if it messes things up too much...
> I'll better leave it.

If your time requirements are as simple as business hours / after hours,
you could create two config files and use a cron job to copy each of them
into place at the right time and restart Squid - given the small size of
your cache, this shouldn't cause any problems.
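
Something like these crontab entries could do it (purely a sketch - the
paths depend on where you installed Squid, and "squid -k reconfigure"
is the Squid 2 way to reload the config; on 1.1 you'd send the running
process a SIGHUP instead):

  # 09:00 Mon-Fri: restricted config; 17:00 Mon-Fri: open config
  0 9  * * 1-5  cp /usr/local/squid/etc/squid.conf.day   /usr/local/squid/etc/squid.conf && /usr/local/squid/bin/squid -k reconfigure
  0 17 * * 1-5  cp /usr/local/squid/etc/squid.conf.night /usr/local/squid/etc/squid.conf && /usr/local/squid/bin/squid -k reconfigure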

> 3. Our internet provider runs a squid cache too... can I somehow "take
> advantage" of his cached documents but ONLY if they have it already
> cached... I mean I don't want to download everything from his squid...
> just the cached objects... I know this has to do with the
> sibling/parent/child thing... but it really isn't clear to me what
> does what and which of the squid.conf options should be set to actually
> "get the job done".

I've got a feeling that making their proxy a sibling would do the trick,
since Squid only fetches from a sibling when the sibling already has the
object cached, although I'm not sure why you would want to do this when
they are upstream from you.
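
In Squid 2 that's a single cache_peer line (Squid 1.1 spelled it
cache_host); the hostname and ports below are made up, so ask your
provider for the real ones:

  # sibling = only fetch objects they report as ICP hits;
  # on a miss, go direct rather than through them
  cache_peer proxy.your-isp.net sibling 3128 3130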

> 4. (And last ... I promise) I truly am silly enough to believe that
> all the above will be answered... so I also would like to ask what the
> ideal squid.conf memory/disk ratio would be... I have up to 6GB of
> disk for cached objects to spare... and what I already have is 30MB of
> RAM and 4GB for cached objects... how does it sound?

Sounds okay to me - judging from what others post about their setups,
4 GB of disk and 30 MB of RAM should be plenty for your needs. Of
course, each setup is unique, so if you have the flexibility to
experiment you should be able to find the most efficient arrangement
for your site.
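
The two knobs to play with are cache_mem and cache_dir. A minimal
sketch, assuming a Squid 2.x squid.conf and made-up paths:

  # cache_mem covers hot and in-transit objects only, so keep it
  # well below the machine's 30 MB of physical RAM
  cache_mem 8 MB
  # 4000 MB of disk cache with the default 16x256 directory layout
  cache_dir /usr/local/squid/cache 4000 16 256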

--
 Kendall Lister, Systems Operator for Charon I.S. - kendall@charon.net.au
  Charon Information Services - Friendly, Cheap Melbourne ISP: 9589 7781