Re: ACLs and Stuff...

From: Panagiotis Malakoudis <pmal@dont-contact.us>
Date: Thu, 9 Dec 1999 09:09:45 +0200

Hello Andreas,

1. Concerning the first subject, the inside zone using your proxy to reach
the Internet, which sits on the outside interface:
You could grant access through the firewall for the proxy's IP only. That way
your users can access the Internet ONLY by using the proxy server, because
the request that passes through the firewall's access lists is made by a host
that is not subject to them.
On the other hand, if a user tries to access the Internet without your proxy,
the request is made by a host that IS subject to those access lists.

2. This scenario is exactly the same as the one I've implemented for my
network.
I have a group of users who have full access, browse at high speed and have
no download restrictions. I have another group who can browse the Internet,
but not as fast as the privileged ones, and those users can only download
after 16:00, while at the same time any user who tries to access x-rated
sites is limited to a very small bandwidth.
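(In case you wonder how the bandwidth part is done: that is Squid's delay
pools feature, which has to be compiled in with --enable-delay-pools. A
minimal sketch for throttling the x-rated sites to a hypothetical 16 KB/s,
reusing the "xrated" ACL defined further down:

delay_pools 1
delay_class 1 1
delay_parameters 1 16000/16000
delay_access 1 allow xrated

This is only an outline; the per-group speed classes need more pools and
classes than shown here.)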
Now, for your network I would suggest the following:
Don't give some users "true" addresses and others "fake" ones. Use one
default policy for handing out IPs. Then let's look at your Squid
configuration.
You want to define the basic ACLs: one for the "super users" and one for
all the others.
I name the first "priv_users" and the second "all".
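In squid.conf those could look roughly like this (the addresses below are
only examples, substitute the real IPs of your privileged users; "all" is the
usual catch-all source ACL):

acl priv_users src 192.168.1.10 192.168.1.11
acl all src 0.0.0.0/0.0.0.0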
The next thing you want to do is make an ACL for the time restrictions,
something like
acl weekrestr time MTWHF 08:00-16:00
This matches the weekdays from Monday to Friday, from 08:00 to 16:00.
At the http_access statements try the following:
http_access deny weekrestr !priv_users
http_access allow all
This works...

Now, on how to block x-rated sites. You can use a text file (best kept
somewhere under the Squid tree) and add to it any site or keyword you want
to block. Squid reads this text file and matches an access list against it.
You can use the following...
acl xrated urlpath_regex "/usr/local/squid/blocked_sites.txt"
http_access deny xrated
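The file simply holds one regular expression per line; a small, hypothetical
blocked_sites.txt could be:

sex
playboy
hardcore

Two things to keep in mind: http_access rules are checked in order and Squid
stops at the first match, so the "deny xrated" line must come before the
final "http_access allow all"; and urlpath_regex only matches the path part
of the URL, so if you also want to catch keywords in host names (e.g.
playboy.com), use url_regex, which matches the whole URL.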

I've been trying to build a big text file, because there are many sites that
do not use any distinctive keyword.
So far my file is 2.5 kB.

3. You can set up a neighbour cache and use your upstream provider's cache,
but only if they allow you to do so. Your Squid will then also check the
neighbour's cache to see if it can find the documents requested. I'm sorry,
though, that I cannot give you any more info on that, because I've never done
it before.
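From the documentation, the directive to look at is cache_peer; a sibling
relationship is what you want when you only wish to fetch objects the
provider already has cached. A sketch, with a hypothetical hostname and the
default HTTP and ICP ports:

cache_peer proxy.provider.gr sibling 3128 3130

With a sibling, Squid queries the peer via ICP and fetches from it only on a
hit; misses are fetched directly. A parent, by contrast, would also be asked
to fetch misses on your behalf.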

4. Squid is the best proxy around because it can be configured to use large
chunks of memory. The more memory, the faster the browsing. Your hard disk is
sufficient, but you need a major memory upgrade.
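On the squid.conf side, the two knobs are cache_mem and cache_dir. With
128 MB of RAM and 4-6 GB of disk, something in this range is a reasonable
starting point (the numbers and the path are only suggestions):

cache_mem 32 MB
cache_dir /usr/local/squid/cache 4096 16 256

Note that cache_mem is not Squid's total memory use, only the pool for
in-transit and hot objects; the in-memory index of the disk cache takes extra
RAM on top of that, commonly estimated at several MB per GB of cache_dir.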

I had better end this mail now because it is getting too long :-)
I hope I answered some of your questions.

Panagiotis S. Malakoudis

Systems Administrator
SPACE HELLAS S.A.

----- Original Message -----
From: "Andreas Skilitsis" <macstar@avalon.gr>
To: <squid-users@ircache.net>
Sent: Wednesday, December 08, 1999 11:51 PM
Subject: ACLs and Stuff...

> Hi all...
>
> I'm really new (yeah... you guessed right...) to the Squid and Linux
> world in the whole... and coming from a Mac world... I can say this
> is a real difficult step I took... (altho now with LinuxPPC and MacOS
> X in the way... things are getting better for the PPC Platform).
>
> I do have a lot of help from a friend on setting up a Linux box on a
> Pentium III/450, 128MB Ram, and he already has done a lot on
> installing and compiling and stuff... but we're stuck on successfully
> running Squid 2.2 STABLE5 on it. It's a 2.0.36 linux (RedHat) and we
> already run firewall on it... so this makes some things even harder.
>
> Anyway... since we're going to find anything that could be wrong in
> the firewall... I'll keep this list spam-free and only ask some squid
> specific questions:
>
> 1. The firewall uses 2 eth cards (Yeah how strange), but I was
> wondering if the inside-of-firewall clients should contact the Proxy
> in the inside-IP to serve their requests or the outside-IP. Both
> worked with version 1.1 of Squid that got to run ok after a fresh
> compile and without modifications on the default conf file... but I'm
> unsure what the "right" thing is...
>
>
> 2. This squid is meant to serve around 10-20 users in our company's
> LAN (no more than 3-4 at a time tho) but not all clients are equal...
> so I thought I'll apply a simple rule...
>
> 2.1 Giving "super-users" true IP from the inside-of-firewall subnet
> (195.99.19.20 255.255.255.224 for example), and "normal-users" a fake
> IP like 192.168.1.20 255.255.255.0... (does this need IP Masquerading
> too? We only set a second "gateway address" to 192.168.1.1 on the
> interface)
>
> 2.2 "Super users" should get all URLs unrestricted... and "Normal
> users" should get all URLs except those matching some strings I'll
> type in... like sex playboy etc... (I think it's a lot easier to
> prevent access to these sites by keyword than to predict all
> domains... :)) ).
>
> 2.3 A possible "extension" of the 2.2 rule... would be if "normal
> users" could get all sites unrestricted but only after 17:00 or so...
> but that's entirely optional... if it messes things up too much...
> I'll better leave it.
>
>
> 3. Our internet provider runs a squid cache too... can I somehow
> "take advantage" of his cached documents but ONLY if they have it
> already cached... I mean I don't want to download everything from his
> squid... just the cached objects... I know this has to do with the
> sibling/parent/child thing... but it really isn't clear to me what
> does what and which of the squid.conf options should be set to
> actually "get the job done".
>
> 4. (And last ... I promise) I truly am silly enough to believe that
> all the above will be answered... so I also would like to ask what
> the ideal squid.conf memory/disk ratio would be... I have up to 6GB
> of disk for cached objects to spare... and what I already have in is
> 30MB ram and 4GB for cached objects... how does it sound?
>
>
> THANKS A MILLION TIMES TO ANYONE THAT HAS REACHED THIS LINE (Reading...) :)
>
> And my really Linux-loving hugs to anyone that will answer this... or
> help in any way!!!
>
>
> Andreas Skilitsis
> Soon-To-Be Linux-Lover
> MacOS Networks Admin (for now)
>
>
> ___
> Andreas Skilitsis
> macstar@avalon.gr
>
> ___
> - How many Microsoft engineers does it take to screw in a lightbulb?
> - None. They just redefine darkness as the standard.
> ___