Re: hierarchy_stoplist and single parent (fix) (fwd)

From: Peter Childs <pjchilds@dont-contact.us>
Date: Tue, 18 Jun 1996 01:58:36 +0930 (CST)

 This was intended for squid-users@ but was sent to squid@
 by mistake...

----- Forwarded message from Peter Childs -----

 Gday!

> Correct me if I am wrong
> * All HTTP traffic MUST go through your parent cache
> (inside a non-forwarding firewall)
> * All FTP traffic should go directly, since your parent
> has broken FTP proxy support.
>
> I don't get it. Are you allowed to go directly to hosts
> outside your firewall or not?

 Sure... the firewall just doesn't allow connections from
 anything inside to port 80 on the outside... (the parent proxy
 we use for web is on port 8080 outside)

> ---- Alternative 1. ----
> A forwarding firewall, that accepts all traffic,
> but it is preferred that HTTP go through your parent.
>
> then you can use
> cache_host parent...
> local_domain your.domain
> hierarchy_stoplist ftp://
>
> When using this setup, all cacheable requests go to the parent;
> all private (http_stop, pragma: no-cache, If-Modified-Since) or
> ftp requests go directly to the source.

 This one's out because all http:// requests *must* go to the parent
 cache, cacheable or not....
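
 As an aside, the routing behaviour Henrik describes for Alternative 1
 (stoplist hits and uncacheable requests go direct, everything else via
 the parent) can be sketched roughly like this. This is a hedged sketch
 of the described behaviour, not Squid's actual source; the function
 name and the cacheable flag are my own.

```python
def route_request(url, stoplist, cacheable=True):
    """Decide 'direct' or 'parent' for a request, mimicking the
    hierarchy_stoplist behaviour described above (an illustration,
    not Squid's real peer-selection code)."""
    if not cacheable:
        return "direct"      # private / no-cache requests go direct
    if any(token in url for token in stoplist):
        return "direct"      # stoplist hit: bypass the parent
    return "parent"          # everything else goes via the parent

# With 'hierarchy_stoplist ftp://' only FTP bypasses the parent:
stoplist = ["ftp://"]
print(route_request("http://www.fred.com/a.html", stoplist))  # parent
print(route_request("ftp://ftp.cdrom.com/", stoplist))        # direct
```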

> ---- Alternative 2 ----
> If your firewall is not forwarding outgoing traffic other
> than ftp, then you either have to fix the parent proxy to
> handle ftp proxying, or hack the Squid sources as you did.
>
> The last part of your patch can be replaced by
> hierarchy_stoplist ftp://
> or if you don't want to cache ftp
> ftp_stoplist .*

 I think this one is the way to go... I'm happy with the results
 and it's sending all the requests to the right places :)

> ---- Alternative 3 -----
> If all you want to do is to give an error page on ftp requests
> outside your firewall, then you can use the following:

 Nope... that wouldn't impress everyone :)

> Henrik Nordstrom

 Thanks for your assistance. Squid is a *great* product and it's
 performing really well over here.

 One thing you could look at, which I have noticed, is that
 requests for a page like

   http://www.fred.com/~sam/a.html

 and

   http://www.fred.com/%7Esam/a.html

 aren't recognised as the same page... I guess that would just
 be a matter of unescaping all the requests as they come in, before
 they get processed??
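
 The normalisation suggested above could be sketched like this, using
 Python's urllib.parse.unquote to stand in for whatever decoder the
 cache would really use. This is an illustrative sketch, not Squid
 code; as the comment notes, a real implementation would decode only
 "unreserved" characters such as '~', not delimiters like %2F.

```python
from urllib.parse import unquote

def normalise_url(url):
    """Decode percent-escapes so equivalent URLs map to the same
    cache key.  A sketch of the suggestion above; production code
    should only decode unreserved characters (e.g. %7E -> ~) and
    must leave delimiters such as %2F escaped."""
    return unquote(url)

a = normalise_url("http://www.fred.com/~sam/a.html")
b = normalise_url("http://www.fred.com/%7Esam/a.html")
print(a == b)  # True
```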

 Another one is things like

  ftp://ftp.cdrom.com/

 and

  ftp://ftp.freebsd.org/

 which are the same site.... I guess there is no really good way to do
 this, since the lookups would be quite expensive (time-wise)..??
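
 One way to make that lookup cost explicit is to pass the resolver in
 as a parameter, so it can be cached or stubbed out. The sketch below
 is hypothetical (the function name, the alias table, and the idea of
 rewriting to a canonical hostname are mine, not anything Squid does);
 the alias mapping merely stands in for a DNS CNAME lookup.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_host_url(url, resolve):
    """Rewrite a URL's hostname to a canonical form using a caller-
    supplied resolver (e.g. a DNS lookup returning the primary name).
    The resolver is injected because, as noted above, real lookups
    are expensive and would need their own cache."""
    parts = urlsplit(url)
    host = resolve(parts.hostname)
    return urlunsplit((parts.scheme, host, parts.path,
                       parts.query, parts.fragment))

# Hypothetical alias table standing in for DNS CNAME resolution:
aliases = {"ftp.cdrom.com": "ftp.freebsd.org"}
resolve = lambda h: aliases.get(h, h)
print(canonical_host_url("ftp://ftp.cdrom.com/", resolve))
# ftp://ftp.freebsd.org/
```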

 Peter

----- End of forwarded message from Peter Childs -----

--
 Peter Childs  ---  http://www.imforei.apana.org.au/~pjchilds
   Active APANA SA Member  ---  Author PopWatch + Inf-HTML
  Email: pjchilds@imforei.apana.org.au   Fax: 61-8-82784742
Received on Mon Jun 17 1996 - 09:29:34 MDT
