[squid-users] Strange Squid Behavior in Serving Large Files

From: Snowy <xuefeng18@dont-contact.us>
Date: Wed, 19 Jun 2002 10:11:27 +0800

Hi, Henrik,

I have been using Squid as a transparent proxy for our company via WCCP
for quite some time. Recently, during some benchmark tests, I found a
mysterious slowdown in Squid. I set "maximum_object_size" to 20MB and
tested retrieving some large files, say 8MB and 16MB, from an origin
server in our lab. Squid's "access.log" shows that when a client
requested these large files for the first time, which resulted in
"TCP_MISS" entries in the log, the transfer took only a few seconds. But
when the client requested the same large files again, which produced
"TCP_HIT" entries, the response time increased almost 10-fold, to 50-100
seconds. Checking the system resources, I found that the CPU was idle
most of the time. RAM also looked fine, with no sign of VM swapping. The
problem can be partially avoided by increasing
"maximum_object_size_in_memory" to 20MB, which results in a
"TCP_MEM_HIT". Does this mean that Squid is very bad at serving large
cached files from disk? What could the possible causes be?
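For reference, the relevant squid.conf settings I am describing look roughly like this (a sketch of my test configuration, not the full file; the second line is the workaround, since the default in-memory object limit is only a few KB):

```
# Allow objects up to 20 MB to be cached at all (on disk)
maximum_object_size 20 MB

# Workaround: also allow them into the memory cache,
# so repeat requests become TCP_MEM_HIT instead of TCP_HIT
maximum_object_size_in_memory 20 MB
```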

The Squid version I am using is 2.4STABLE6, on Linux kernel v2.4.18. Did
I overlook or miss anything?

Thanks a lot!

Sincerely,
Snowy
Received on Tue Jun 18 2002 - 20:22:37 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:08:43 MST