RE: [squid-users] reverse proxy / virtual hosting

From: Francois Liot <fliot@dont-contact.us>
Date: Tue, 22 Jun 2004 15:21:54 +0200

I will try to be a bit clearer.

Here is the picture:

--TCP-------------SSL----------------------encapsulated protocol
                                            (could be HTTP...)

--IP:port---certificate used for handshake---deciphered protocol

In the case of HTTP, once deciphered, you can indeed retrieve all the HTTP
header variables (such as HTTP_HOST...).

The problem is the following:
you can map only a single certificate (per IP:port) for the SSL
handshake.

So having several possible HTTPS answers on a single IP:port (let's say
yourmachine:443) is not SSL compliant. As an ugly hack you can do it, but
only by using the same certificate for all your websites - users will then
see an error that https://mysite1 is encrypted with the https://mysite2
certificate...

Just as I told you, Apache suffers from the same limitation (it is
impossible to have name-based HTTPS virtual servers on a single IP:port).
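As a concrete sketch of that ugly-but-workable approach: since only one certificate can be served per IP:port, the certificate would have to cover every site name (e.g. a wildcard certificate). A hypothetical Squid 2.5-era accelerator fragment, with made-up file paths:

```
# Hypothetical squid.conf fragment (Squid 2.5-era accelerator syntax).
# One IP:port, one certificate; the cert must cover all the vhost
# names (e.g. *.ourcompany.com) or browsers will complain.
https_port 443 cert=/etc/squid/wildcard.pem key=/etc/squid/wildcard.key
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_uses_host_header on
```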

Regards

Francois Liot

On Tue, 2004-06-22 at 15:02, Chris Perreault wrote:
> -----Original Message-----
> From: Dan DeLong [mailto:ddelong@custdata.com]
> Sent: Tuesday, June 22, 2004 8:42 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] reverse proxy / virtual hosting
>
>
> Hello,
>
> I currently have squid running as a reverse proxy. I have a number of squid
> instances running to handle a number of different websites. Each squid
> instance listens on its own IP address and handles the SSL cert for the
> incoming web request. My goal is to have squid listen on one address and
> handle multiple websites, in essence doing virtual hosting. Can this be done
> with squid? If so, can you provide any direction on how to set squid up to
> do this?
>
> Thanks.
>
>
> ~~~~~~~~~~~~~~~~~~~~~~
> ~~~~~~~~~~~~~~~~~~~~~~
>
> We are looking to set up the same environment here: multiple back end
> webservers being handled by a reverse proxy. Users would go to
> www.ourcompany.com/extranet, www.ourcompany.com/intranet,
> www.ourcompany.com/web2, etc., with a mapping created for each of those
> various webservers. By default, www.ourcompany.com would send them to the
> main webserver, a homegrown portal-type web interface, with links to the
> other webservers.
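The path-based mapping described above can be sketched without a redirector, using cache_peer_access rules (directive names as in Squid 3.x with originserver peers; the backend host names below are invented for illustration):

```
# Hypothetical squid.conf sketch: route by URL path prefix to
# different origin servers. Host names are made up.
cache_peer portal-server.internal   parent 80 0 no-query originserver name=portal
cache_peer extranet-server.internal parent 80 0 no-query originserver name=extranet

acl extranet_path urlpath_regex ^/extranet
cache_peer_access extranet allow extranet_path
cache_peer_access portal   deny  extranet_path
cache_peer_access portal   allow all
```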
>
> On 2.5stable5 I accomplished this using squidGuard as a redirector. The
> problem we ran into was when we tried to add in SSL and LDAP authentication,
> so right now we are messing with squid-3.0.pre3. Yesterday we made good
> progress (ie: no other issues got in the way and I got to work on this:))
> and got the LDAP authentication and SSL working, with it connecting to one
> back end webserver, having defined that in the cache_peer and acl conf
> lines. I'm hoping to have time, over the next few days, to get squidguard
> working with this configuration. I'm sure what you want to do can be done,
> and am pretty sure people have done it before. Documentation seems to be
> lacking on exactly what steps were taken to do so though. Once I get this
> figured out I'll post the conf file and what steps were taken so it aids
> others. I've spent a lot of time researching this, over the last month or
> two, but having only spent 2 months with squid I am far from an expert on
> this. I got my company to fork over some cash to an outside consultant, and
> I've been really happy with the one we went with, which was listed on the
> squid-cache.org site among those offering paid assistance. (No idea what the
> protocol here is on offering plugs for a job well done, so I won't mention
> which company we went with.)
>
> If you want to get to the point where you just proxy the traffic to multiple
> back end webservers, squidGuard will do the trick for you. If you are up to
> the task, you can write your own redirector program too. The
> redirector_program conf line is where you hook that in.
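If you do write your own redirector, it is just a program that reads request lines on stdin and writes the (possibly rewritten) URL back on stdout, one per line. A minimal sketch in Python, with invented backend host names (the path-to-backend mapping is purely illustrative):

```python
#!/usr/bin/env python
# Hypothetical minimal Squid redirector: maps URL path prefixes to
# internal backend servers. Host names below are assumptions, not
# taken from this thread.
import sys

# path prefix -> backend host (illustrative values)
BACKENDS = {
    "/extranet": "extranet-server.internal",
    "/intranet": "intranet-server.internal",
    "/web2":     "web2-server.internal",
}

def rewrite_url(url):
    """Return the rewritten URL, or the original if no prefix matches."""
    # Split "scheme://host/path" into its parts.
    scheme, _, rest = url.partition("://")
    host, _, tail = rest.partition("/")
    path = "/" + tail
    for prefix, backend in BACKENDS.items():
        if path.startswith(prefix):
            # Strip the prefix and point at the backend server.
            return "%s://%s%s" % (scheme, backend, path[len(prefix):] or "/")
    return url

if __name__ == "__main__":
    # Squid feeds one request per line: "URL ip/fqdn ident method".
    # Reply with the URL on stdout, flushed immediately (unbuffered).
    for line in sys.stdin:
        fields = line.split()
        if fields:
            sys.stdout.write(rewrite_url(fields[0]) + "\n")
            sys.stdout.flush()
```

The flush after each line matters: Squid waits for the redirector's answer per request, so buffered output would stall the proxy.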
Received on Tue Jun 22 2004 - 07:22:05 MDT

This archive was generated by hypermail pre-2.1.9 : Thu Jul 01 2004 - 12:00:03 MDT