Re: [squid-users] Too many TCP_DENIED/407 when using Kerberos authentication

From: Carlos Defoe <carlosdefoe_at_gmail.com>
Date: Tue, 1 Oct 2013 09:33:33 -0300

One important thing, to me, is to pick a list of, let's say, the top 50
most-accessed websites that can be whitelisted, and take them out of the
authentication process.

I didn't measure it, but doing that will cut out a lot of work, not only
the 407 round trips but the entire authentication process for those sites.
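
In squid.conf that could look something like the following sketch (the
file path and acl name here are just placeholders):

# domains listed one per line in the file
acl whitelist dstdomain "/etc/squid/top_domains.txt"
http_access allow whitelist

The allow rule has to come before any http_access rule that triggers
proxy_auth, so requests to whitelisted sites never receive the 407
challenge at all.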

On Tue, Oct 1, 2013 at 8:47 AM, Brendan Kearney <bpk678_at_gmail.com> wrote:
> On Tue, 2013-10-01 at 14:14 +0330, Hooman Valibeigi wrote:
>> I understand the principle of the challenge/response protocol. Failing
>> the first request is fine as long as it happens only once, and not for
>> every page you visit.
>>
>> I wonder if administrators would be happy with the fact that users
>> have to send 2 requests to fetch an object, 40% of the time, on a browser
>> that has been open the whole day. Can I blame the browser for not
>> learning how it should talk to the proxy?
>>
>> Apart from the waste of bandwidth (negligible, admittedly), the other
>> problem is that the logs become cluttered with garbage, which also
>> makes access/usage statistics inaccurate.
>
> acl AuthRequest http_status 407
> access_log ... !AuthRequest ...
>
> I use the above to keep auth requests out of my logs. I don't care to
> see that expected behavior is happening (given the rate at which it
> happens and how cluttered the logs become when 407s are logged).
>
> I work with another proxy vendor in my job, and their product has
> functionality to cache a user's credentials for a given period of time
> and use something like the user's IP as a surrogate credential for the
> duration of the credential cache period. I have not dug into the
> authentication helpers for Squid that deeply, but do they have similar
> functionality?
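>
> (From a quick look at squid.conf.documented, the closest knobs appear
> to be these two directives; values below are just a sketch to tune:)
>
> authenticate_ttl 1 hour
> authenticate_ip_ttl 60 seconds
>
> (authenticate_ttl controls how long checked credentials stay in
> Squid's cache, and authenticate_ip_ttl how long the same client IP is
> assumed to be the same user.)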
>
> As for failing on the first request: the browser does not learn that it
> has to provide auth, and does not begin doing so on its own. The
> challenge for auth is always issued by the proxy, and a browser does
> not assume it will always face an auth challenge.
>
> As an anecdotal data point: at work I recently went through a migration
> from NTLM-only auth to Kerberos as the primary with NTLM as the
> fallback. Because we were using an external device to handle the NTLM
> challenges between the proxies and AD, along with a couple of other
> performance-sapping, latency-introducing options, that configuration
> significantly degraded browsing.
>
> NTLM is very chatty, and authenticating every single object an HTTP
> method is issued for (GET, or whatever) drives the "cost" of browsing
> up significantly. Moving to Kerberos reduced load on the proxies and
> AD; we now have the proxies talking directly to AD instead of going
> through the external device, which speeds up the auth process overall,
> and we leveraged the credential caching functionality to further reduce
> load and improve the overall user experience when browsing the web.
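>
> For anyone attempting a similar move, the Squid side of a Kerberos
> setup is roughly the following sketch (the proxy hostname, realm, and
> helper path are placeholders for your own environment):
>
> auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM
> auth_param negotiate children 20
> auth_param negotiate keep_alive on
> acl authenticated proxy_auth REQUIRED
> http_access allow authenticated
>
> The helper needs read access to a keytab containing the HTTP/ service
> key, typically pointed to via KRB5_KTNAME in the startup environment.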
>
Received on Tue Oct 01 2013 - 12:33:42 MDT
