Re: [squid-users] Re: kerberos authentication - performance tuning

From: guest01 <guest01_at_gmail.com>
Date: Wed, 16 Feb 2011 13:28:29 +0100

Hi,

We had to bypass Kerberos authentication for now; most of our users
are now authenticated by IP (there are already more than 10000 unique
IPs in my Squid logs). IIRC, disabling the replay cache did not help
much. Right now the load average is 0.4 (authenticating about 9000
users by IP and 1000 with Kerberos) at approx 450 requests/second
across 2 strong servers, which looks pretty good.
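
In case it helps anyone else, the IP bypass is just plain squid.conf
ACLs. A minimal sketch (the subnets and ACL names below are made up,
adjust to your network):

  # Hypothetical internal ranges that we trust by source IP
  acl ip_users src 10.0.0.0/8 192.168.0.0/16
  http_access allow ip_users
  # Everybody else still has to pass Kerberos
  acl kerb_users proxy_auth REQUIRED
  http_access allow kerb_users
  http_access deny all

For the replay cache we followed the Squid wiki and exported
KRB5RCACHETYPE=none in the environment the helpers inherit (set in
the squid init script); as said above, it did not help much here.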

What do you think? Could the SMP functionality of Squid 3.2 reduce our
load significantly? At the moment we run multiple independent Squid
processes per server (4 Squid instances, 16 CPUs), but I don't see any
way (except adding more hardware) to authenticate >10000 users with
Kerberos.

regards

On Sat, Feb 12, 2011 at 2:09 PM, Markus Moeller <huaraz_at_moeller.plus.com> wrote:
> Hi Peter
>
>> "Nick Cairncross" <Nick.Cairncross_at_condenast.co.uk> wrote in message
>> news:C9782338.5940F%nick.cairncross_at_condenast.co.uk...
>> On 09/02/2011 09:34, "guest01" <guest01_at_gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We are currently using Squid 3.1.10 on RHEL5.5 and Kerberos
>>> authentication for most of our clients (authorization via an ICAP
>>> server). At the moment, we are serving approx 8000 users with two
>>> servers. Unfortunately, we have performance troubles with our Kerberos
>>> authentication. Load values are way too high:
>>>
>>> 10:19:58 up 16:14,  2 users,  load average: 23.03, 32.37, 25.01
>>> 10:19:59 up 15:37,  2 users,  load average: 58.97, 57.92, 47.73
>>>
>>> Peak values have been >70 for the 5min interval. At the moment, there
>>> are approx 400 hits/second (200 per server). We already disabled
>>> on-disk caching. Avg service time for Kerberos is up to 2500ms
>>> (which is quite long).
>>>
>>> Our Kerberos configuration looks pretty simple:
>>> #KERBEROS
>>> auth_param negotiate program /opt/squid/libexec/negotiate_kerberos_auth -s HTTP/fqdn -r
>>> auth_param negotiate children 30
>>> auth_param negotiate keep_alive on
>>>
>>> Is there any way to add further caching or something like that?
>>>
>>> For testing purposes, we authenticated a certain subnet by IP and load
>>> values decreased to <1. (Unfortunately, this is not possible for
>>> everyone, because every user gets a policy assigned by username.)
>>>
>>> Any ideas, anyone? Are there any Kerberos-related benchmarks
>>> available (I could not find any)? Maybe this is not a problem but
>>> simply a limitation, and we have to add more servers?
>>>
>>> Thanks!
>>>
>>> best regards
>>> Peter
>>
>> Peter,
>>
>> I have pretty much the same setup as you - just 3.1.8, though only 700
>> users.
>>
>> Have you disabled the replay cache:
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
>> But beware of a memory leak (depending on your libs of course):
>>
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/Intermittent-SquidKerbAuth-Cannot-allocate-memory-td3179036.html
>> I have a call outstanding with RH at the moment.
>>
>
> Could you try disabling the replay cache? Did it improve the load?
>
>> Are your rules repeatedly requesting authentication unnecessarily when it's
>> already been done? Amos was very helpful when advising on this (search for
>> the post..)
>>
>> 8000 users... only 30 helpers? What does cachemgr say about negotiate
>> helper usage stats, timings/sec, etc.?
>> Is your krb5.conf using the nearest KDC in its own site?
>>
>
> The KDC is only important for the client. The server (Squid) never talks to
> the KDC.
>
>> Some load testers out there incorporate Kerberos load testing.
>>
>> Just my thoughts..
>>
>> Nick
>>
>>
>
>
>
Received on Wed Feb 16 2011 - 12:29:37 MST
