[squid-users] High CPU usage for large object

From: NGUYEN, KHANH, ATTSI <nguyenkt@dont-contact.us>
Date: Tue, 7 Aug 2007 13:25:37 -0400

Hi,

I am using squid 2.6 on Linux AS version 4, update 3.

Hardware: Dell 2850, 4 GB memory, 6 x 72 GB disks. No RAID; each disk is one mount point.

Squid basic configuration:

cache_mem 2 GB
maximum_object_size 5096 MB
maximum_object_size_in_memory 100 MB
cache_replacement_policy lru
Six cache_dir entries, one per disk.
The cache server is configured as a reverse proxy in front of an Apache server; a rough sketch of the configuration is below.
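For reference, a minimal squid.conf sketch along those lines (the /cacheN mount points, cache_dir sizes, and the origin server address are placeholders, not my exact values; the aufs store type is assumed because of --enable-async-io):

    http_port 80 accel defaultsite=www.example.com
    cache_peer 192.168.1.10 parent 80 0 no-query originserver

    cache_mem 2 GB
    maximum_object_size 5096 MB
    maximum_object_size_in_memory 100 MB
    cache_replacement_policy lru

    # one aufs cache_dir per disk (six disks, one mount point each)
    cache_dir aufs /cache1 60000 16 256
    cache_dir aufs /cache2 60000 16 256
    cache_dir aufs /cache3 60000 16 256
    cache_dir aufs /cache4 60000 16 256
    cache_dir aufs /cache5 60000 16 256
    cache_dir aufs /cache6 60000 16 256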

Squid compile options: --enable-follow-x-forwarded-for, --enable-async-io, --enable-auth, --disable-wccp, --enable-snmp, --enable-x-accelerator-vary, --enable-removal-policies=lru
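Spelled out, the configure invocation was roughly the following (reconstructed from the option list above; prefix and any other flags omitted):

    ./configure \
        --enable-follow-x-forwarded-for \
        --enable-async-io \
        --enable-auth \
        --disable-wccp \
        --enable-snmp \
        --enable-x-accelerator-vary \
        --enable-removal-policies=lru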

When I request a 4 GB object from the Squid server (the object is already cached) versus from an Apache server (version 2.2.0), the CPU usage of the squid process is at least three times that of the Apache process. The object size exceeds maximum_object_size_in_memory, so it has to be read from disk each time it is requested, and Squid presumably has some extra overhead for that. However, three times the CPU seems unusual. Does anybody have suggestions on tuning Squid or the OS to serve large objects better? I also notice that CPU usage takes a big hit once the object is larger than 1 MB.
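For what it is worth, the kind of comparison I am describing can be reproduced roughly like this (hostnames and the object path are placeholders), while watching the squid and httpd processes in top from another terminal:

    # fetch the already-cached 4 GB object through Squid
    curl -o /dev/null http://squid-host/files/big-4gb.bin

    # fetch the same object directly from Apache
    curl -o /dev/null http://apache-host/files/big-4gb.bin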

Thanks,
Khanh