Re: [squid-users] Too many TCP_DENIED/407 when using Kerberos authentication

From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
Date: Wed, 02 Oct 2013 00:51:39 +0300

Hey,

On 10/01/2013 03:33 PM, Carlos Defoe wrote:
> One important thing to me is to pick a list of, let's say, top 50
> accessed websites that can be whitelisted, and take it out of
> authentication process.
This is a very nice idea.
You can also whitelist the IPs of specific machines, if you have
MAC+IP-level security mechanisms in place.
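A minimal squid.conf sketch of that kind of bypass, assuming a
hypothetical ACL name and file path (neither comes from this thread),
might look like:

```
# Hypothetical list of the top accessed sites, one domain per line
acl whitelist_sites dstdomain "/etc/squid/whitelist_sites.txt"

# Allow whitelisted destinations before any proxy_auth ACL is checked,
# so these requests never trigger a 407 challenge
http_access allow whitelist_sites

# Everything else still requires authentication
acl authed_users proxy_auth REQUIRED
http_access allow authed_users
http_access deny all
```

The ordering matters: the allow rule for the whitelist must come before
the proxy_auth ACL, otherwise Squid will still challenge those requests.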

Something to consider:
Say a server hosts 10 users over RDP; what level of restriction would
you use for that?
If you allow the whole IP, you have a problem when there is no
auth...
Using a forward proxy with a strict authentication policy reduces the
ability to impersonate another user at the application level.
You can see in the logs which user's password/access/usage was
compromised, even if the person operating the computer is violating
company policy.

As I stated before, if an employer doesn't trust his workers/employees,
he is in a very bad position and might as well restrict even access to
the bathroom/toilet.

Eliezer
>
> I didn't measure, but doing that will reduce a lot of work, not only
> with those 407 messages, but with the entire authentication process.
>
> On Tue, Oct 1, 2013 at 8:47 AM, Brendan Kearney <bpk678_at_gmail.com> wrote:
>> On Tue, 2013-10-01 at 14:14 +0330, Hooman Valibeigi wrote:
>>> I understand the premise of the challenge/response protocol. Failing
>>> the first request looks fine as long as it occurs only once and not
>>> for every page you visit.
>>>
>>> I wonder if administrators would be happy with the fact that users
>>> have to send 2 requests to fetch an object, 40% of the time on a
>>> browser that's been open the whole day. Could I blame the browser for
>>> not learning how it should talk to the proxy?
>>>
>>> Apart from the waste of bandwidth (although negligible), the other
>>> problem is that logs will be cluttered and full of garbage which also
>>> makes access/usage statistics inaccurate.
>>
>> acl AuthRequest http_status 407
>> access_log ... !AuthRequest ...
>>
>> i use the above to keep auth requests out of my logs. i don't care to
>> see that expected behavior is happening (given the rate at which it
>> happens and how cluttered the logs become when 407s are logged).
>>
>> i work with another proxy vendor in my job and they have functionality
>> to cache the users credentials for a given period of time and use
>> something like the users IP as a surrogate credential for the given
>> credential cache period. i have not dug into the authentication helpers
>> for squid that deeply, but do they have a similar functionality?
>>
>> as for failing on the first request, the browser does not learn that it
>> has to provide auth, and does not begin doing so on its own. the
>> challenge for auth is always a function of the proxy and a browser does
>> not assume it will always face an auth challenge.
>>
>> as an anecdotal argument, at work i recently went through a migration
>> from NTLM auth only to Kerberos auth as primary with NTLM as secondary
>> auth. because we were using an external device to handle the NTLM
>> challenges between the proxies and AD, along with a couple of other
>> performance-sapping, latency-introducing options, we significantly
>> impacted browsing with this configuration.
>>
>> NTLM is very chatty, and authenticating every single object that an HTTP
>> method is issued for (GET, or whatever) means the "cost" of browsing
>> goes up significantly. we moved to Kerberos which reduces load on the
>> proxies and AD, we now have the proxies talking directly to AD instead
>> of the external device to speed up the auth process overall, and we
>> leveraged the credential caching functionality to further reduce load
>> and quicken the overall user experience when browsing the web.
>>
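On the credential-caching question quoted above: Squid does expose
similar tunables. A minimal squid.conf sketch, assuming Squid 3.x
directive names (the values shown are illustrative, not recommendations):

```
# How long Squid caches a verified set of credentials before
# re-asking the auth helper
authenticate_ttl 1 hour

# How long a set of credentials may stay tied to a single source IP;
# raising this approximates "IP as surrogate credential" behavior
authenticate_ip_ttl 60 seconds

# For Basic auth specifically, helper results are cached this long
auth_param basic credentialsttl 2 hours
```

Note that with Negotiate/Kerberos the browser authenticates per
connection, so keep-alive connections already avoid re-authenticating
every object; the TTLs above mainly affect how often helpers are
consulted.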
Received on Tue Oct 01 2013 - 21:51:50 MDT

This archive was generated by hypermail 2.2.0 : Wed Oct 02 2013 - 12:00:04 MDT