Re: [squid-users] Heavy load squid with high CPU utilization...

From: <david_at_lang.hm>
Date: Sat, 26 Mar 2011 14:06:10 -0700 (PDT)

In my testing over the last couple of weeks, I've found that newer squid
versions take significantly more CPU than the older versions, which
translates into significantly less capacity.

I didn't test 2.7, but in my tests:

3.0      4200 requests/sec
3.1.11   2100 requests/sec
3.2.0.5  1400 requests/sec (able to scale up to 2900 requests/sec using
         4 CPU cores; beyond that it dropped again)
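
If the 4-core figure above was obtained with squid 3.2's SMP support (an
assumption on my part; the test setup is not described here), the relevant
squid.conf directive would be something like:

  # run one worker process per CPU core (squid 3.2+ only)
  workers 4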

David Lang

On Sat, 26 Mar 2011, Dejan Zivanic wrote:

> With squid 3.1.11, CPU usage of the squid process is 100% from 10 AM to 10 PM...
>
> I will now try 2.7.STABLE9. I just don't know what the problem could be.
>
>
>
> On 23.3.11. 16.24, Marcus Kool wrote:
>>
>>
>> Zivanic Dejan wrote:
>>> On 3/23/11 3:27 AM, Marcus Kool wrote:
>>>> Dejan,
>>>>
>>>> Squid is known to be CPU-bound under heavy load, and a quad core
>>>> running at 1.6 GHz is not the fastest. A 3.2 GHz dual core will
>>>> give you double the speed.
>>>>
>>>> The config parameter "minimum_object_size 10 KB"
>>>> prevents objects smaller than 10 KB from being written to disk.
>>>> I am curious why you chose this value and, if you benchmarked it,
>>>> whether you can share the results.
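>>>>
>>>> For reference, a minimal squid.conf sketch of the directive under
>>>> discussion (the 10 KB value is taken from your configuration; the
>>>> default is 0 KB, i.e. no lower bound):
>>>>
>>>>   # objects smaller than this are never stored in the disk cache
>>>>   minimum_object_size 10 KB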
>>>>
>>>> The mean object size is 53 KB, and the parameter
>>>> maximum_object_size_in_memory 50 KB
>>>> implies that a relatively large number of hot objects do not stay
>>>> in memory. The memory hit % is low and the disk hit % is high, so
>>>> maximum_object_size_in_memory should be increased. I suggest 96 KB;
>>>> monitor the memory hit % and increase it further if necessary.
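>>>>
>>>> For example:
>>>>
>>>>   # allow hot objects up to ~96 KB to stay in the memory cache
>>>>   # (was 50 KB; raise further if the memory hit % stays low)
>>>>   maximum_object_size_in_memory 96 KB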
>>>>
>>> increased
>>
>>>> client_persistent_connections and server_persistent_connections
>>>> are off. The default is on and usually gives better performance.
>>>> Why are they off?
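>>>>
>>>> For example, restoring the defaults:
>>>>
>>>>   client_persistent_connections on
>>>>   server_persistent_connections on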
>>> changed
>>>> TCP window scaling is off. This is a performance penalty for
>>>> large objects, since squid goes through the select/epoll loop a
>>>> lot more because objects arrive in more, smaller pieces.
>>>> Why is it off?
>>> I activated scaling.
>>
>>>> If you have a good reason to keep it off, I recommend using the
>>>> maximum fixed TCP window size of 64 KB (squid parameter
>>>> tcp_recv_bufsize) to reduce the number of calls to select/epoll.
>>>>
>>> With scaling on, should I set tcp_recv_bufsize to 64 KB?
>>
>> Your TCP scaling options are:
>> net.ipv4.tcp_rmem = 4096 87380 16777216
>> net.ipv4.tcp_wmem = 4096 87380 16777216
>> No. With scaling on, your settings are OK, although the maximum
>> values are a bit high. To save memory, you could set tcp_recv_bufsize
>> to anything reasonable; this depends mostly on the average delay.
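>>
>> As an illustration (assuming a Linux kernel; the window-scaling sysctl
>> below is an addition here, not something quoted from your config):
>>
>>   # enable TCP window scaling (RFC 1323)
>>   net.ipv4.tcp_window_scaling = 1
>>   # min / default / max socket buffer sizes in bytes
>>   net.ipv4.tcp_rmem = 4096 87380 16777216
>>   net.ipv4.tcp_wmem = 4096 87380 16777216
>>
>> and, if you want to cap squid's per-connection receive buffer to save
>> memory, something like this in squid.conf:
>>
>>   # cap squid's receive buffer (default 0 = use the OS default)
>>   tcp_recv_bufsize 64 KB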
>>
>>>> You use one disk solely for the cache. This can be improved by
>>>> using a battery-backed disk I/O controller with a 256 MB cache.
>>>> And the obvious: more disks are good for overall performance.
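>>>>
>>>> For example, spreading the cache over several spindles might look
>>>> like this (the paths and sizes here are only placeholders):
>>>>
>>>>   cache_dir aufs /cache1 100000 16 256
>>>>   cache_dir aufs /cache2 100000 16 256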
>>>>
>>>> Marcus
>>
>> Of course, I am interested in feedback on what the configuration
>> changes mean for performance.
>>
>> Marcus
>>
>
>