Re: [squid-users] getting https pages from peer on ssl-bump mode

From: Oguz Yilmaz <oguzyilmazlist_at_gmail.com>
Date: Tue, 9 Oct 2012 09:45:35 +0300

Thank you for the detailed explanations.

On Mon, Oct 8, 2012 at 1:25 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> On 08.10.2012 01:16, Oguz Yilmaz wrote:
>>
>> I am trying SSL-Bump. I am using Squid 3.1.21.
>>
>> First of all, I got the CANNOT FORWARD error page. When I debugged, I found:
>>
>> 2012/10/07 14:27:49.380| fwdConnectStart: Ssl bumped connections
>> through parrent proxy are not allowed
>> 2012/10/07 14:27:49.380| forward.cc(286) fail: ERR_CANNOT_FORWARD
>> "Service Unavailable"
>>
>>
>> Then I added an always_direct rule and was able to reach the HTTPS site.
>>
>> acl HTTPS proto HTTPS
>> always_direct allow HTTPS
>>
>>
>> According to the message above and a reply from Amos in another thread,
>> Squid stopped fetching HTTPS over peers because "it does not again
>> encrypt the ssl connection for the peer". Fetching HTTPS pages over
>> peers was the previous behaviour, and I do not understand why Squid
>> gets the pages directly instead of from the peers. I assume it is a
>> software architecture issue.
>
>
> Without SSL-Bump the HTTPS traffic is seen by Squid as a CONNECT request
> wrapping encrypted binary data. The CONNECT request and data can be safely
> sent to a peer and the reply shunted straight back to the client. This is
> otherwise known as a blind tunnel / binary tunnel through the HTTP proxy.
> It can be done safely whether or not the peer supports SSL, or even
> whether your own proxy supports SSL.
>
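> For illustration, the wrapper the client sends to the proxy looks roughly
> like this (the host name is only a placeholder); everything after the
> blank line is opaque encrypted bytes as far as the proxy is concerned:
>
>   CONNECT www.example.com:443 HTTP/1.1
>   Host: www.example.com:443
>
>   (TLS handshake and encrypted application data, relayed byte-for-byte)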
>
> With SSL-Bump the CONNECT wrapper request is removed and the encrypted data
> decrypted. THEN Squid handles the decrypted request almost as if it had been
> sent to an https_port. Squid does not support adding the CONNECT wrapper
> back when passing the request to non-SSL peers. If the request were relayed
> out to a peer, the HTTPS would be decrypted and then sent "in the clear" to
> that peer - seriously breaking the security of HTTPS. (You say earlier Squid
> did that; we know, and it was fixed fairly soon after it was found, but
> there are still a few releases which do it.)
> In a reverse-proxy we can safely assume that peers are part of a trusted
> backend system for the reverse-proxy. For the corporate situations where
> SSL-Bump is used we CANNOT make that assumption safely, even if the peer has
> the SSL connection options configured, so for now Squid must block relaying
> to peers.
>
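> As a rough sketch of the workaround you already found (the port number and
> certificate path are only placeholders; the http_port option is spelled
> sslBump in 3.1 and ssl-bump in 3.2 and later):
>
>   http_port 3128 sslBump cert=/etc/squid/proxyCA.pem
>   ssl_bump allow all
>   acl HTTPS proto HTTPS
>   always_direct allow HTTPS
>
> The always_direct rule forces the bumped requests to go DIRECT, so Squid
> never has to relay the decrypted traffic to a peer.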
>
>
>
>>
>> Is this still the situation in the current code (3.HEAD)? Are there any
>> projects to implement fetching SSL pages over peers?
>
>
> All current Squid releases share the above behaviour. SSL-Bump in 3.1 was
> experimental and rather limited in what it could do. I recommend using at
> least 3.2 for less client annoyance, and preferably 3.3 for the best
> SSL-Bump behaviour (server-first bumping fixes a few other security
> problems).
>
> As to work underway: I have made some effort towards re-wrapping CONNECT
> on outbound requests for another project unrelated to SSL-Bump but sharing
> the same requirement. It is still in the planning stages with no timeline
> for any code. Any contributions toward that would be welcome.
>
>
>
>> Because this mode obliges me
>> to choose between:
>> a- doing HTTPS filtering in Squid, and not forwarding HTTPS to
>> DansGuardian (I use HTTPS domain name filtering on DG)
>> b- not doing HTTPS filtering in Squid, and continuing with HTTPS
>> domain name filtering on DansGuardian.
>>
>
> So far as I'm aware, anything you can do in DG can also be done in Squid,
> so (a) is your best option.
>
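> For example, once the traffic has been bumped the decrypted requests can be
> matched with ordinary ACLs, so a dstdomain list blocks HTTPS sites just like
> HTTP ones. A minimal sketch (the domain is only an illustration):
>
>   acl blocked_sites dstdomain .example-blocked.com
>   http_access deny blocked_sites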
>
>
>>
>>
>> 2012/10/07 14:35:50.142| peerSelectCallback: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| Failed to select source for
>> 'https://www.haberturk.com/'
>> 2012/10/07 14:35:50.142| always_direct = -1
>
>
> Hmm. -1 here is strange. It means some lookup (authentication, IDENT, or an
> external ACL) is still being waited for. Scan your whole config for
> always_direct lines and check their order carefully.
>
> The "always_direct allow HTTPS" should have produced "1" there and made your
> Squid use the DNS results for www.haberturk.com instead of ERR_CANNOT_FORWARD.
>
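> For example (with hypothetical ACL names), if an earlier line references an
> ACL that needs an asynchronous lookup:
>
>   acl authed proxy_auth REQUIRED
>   always_direct allow authed
>   always_direct allow HTTPS
>
> the rules are checked top-down, so the check can be left waiting on the
> authentication lookup before the plain HTTPS rule is ever reached.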
>
> Amos
>
>
>> 2012/10/07 14:35:50.142| never_direct = 0
>> 2012/10/07 14:35:50.142| timedout = 0
>> 2012/10/07 14:35:50.142| fwdStartComplete: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| fwdStartFail: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| forward.cc(286) fail: ERR_CANNOT_FORWARD
>> "Service Unavailable"
>> https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| StoreEntry::unlock: key
>> '31F6E0CCC4924D82F5F0070DE9555597' count=2
>> 2012/10/07 14:35:50.142| FilledChecklist.cc(168) ~ACLFilledChecklist:
>> ACLFilledChecklist destroyed 0x91502d0
>> 2012/10/07 14:35:50.142| ACLChecklist::~ACLChecklist: destroyed 0x91502d0
>> 2012/10/07 14:35:50.142| forward.cc(164) ~FwdState: FwdState
>> destructor starting
>> 2012/10/07 14:35:50.142| Creating an error page for entry 0x9152990
>> with errorstate 0x91504a0 page id 13
>> 2012/10/07 14:35:50.142| StoreEntry::lock: key
>> '31F6E0CCC4924D82F5F0070DE9555597' count=3
>> 2012/10/07 14:35:50.142| errorpage.cc(1075) BuildContent: No existing
>> error page language negotiated for ERR_CANNOT_FORWARD. Using default
>> error file.
>>
>> Best Regards,
>>
>>
>>
>> --
>> Oguz YILMAZ
>
>