Re: [squid-users] Strange Squid Behavior in Serving Large Files

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Wed, 19 Jun 2002 23:42:29 +0200

How many requests/second is hitting your disks?

What do the standard system management tools say about disk I/O latency
while you are running these benchmarks?

In the tests I have done there has been no significant slowdown
unless the disks are seriously overloaded. An idle system is mainly
limited by the 100Mbps networking on cache hits.
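
To make the numbers concrete, here is a minimal sketch of the
requests/second arithmetic (the counter values below are invented for
illustration; on Linux the real per-device counters come from tools
such as iostat):

```shell
# Sketch: disk requests/second from two samples of a disk's completed-I/O
# counters taken 10 seconds apart. The numbers are made up for illustration;
# real values come from e.g. `iostat -d` on Linux.
reads1=120400;  writes1=88200    # first sample
reads2=121000;  writes2=88650    # second sample, 10 seconds later
interval=10
iops=$(( ( (reads2 - reads1) + (writes2 - writes1) ) / interval ))
echo "disk requests/second: $iops"   # 105 with these sample values
```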

Regards
Henrik

Snowy wrote:

> I have been using Squid as a transparent proxy for our company using WCCP
> for quite some time. Recently, during some benchmark tests, I found
> a mysterious slowdown in Squid. I set "maximum_object_size" to 20MB and
> tested retrieving some large files, say 8MB and 16MB, from an origin
> server in our lab. I found from Squid's "access.log" file that when a
> client requested these large files for the first time, which resulted in
> "TCP_MISS" in the log file, the transfer took only a few seconds. But when
> the client accessed those large files again, which caused "TCP_HIT" in the
> log file, the response time increased almost 10-fold to 50-100 seconds.
> Checking the system resources, I found that the CPU is idle most of the
> time. The RAM is also OK, with no indication of VM swapping. This problem
> can be partially avoided when I increase "maximum_object_size_in_memory"
> to 20MB, which results in a "TCP_MEM_HIT". Does this mean that Squid is
> very bad at serving cached large files? What could be the possible causes?
>
> The Squid version I am using is 2.4STABLE6 with Linux kernel v2.4.18. Did I
> overlook or miss anything else?
>
> Thanks a lot!
>
> Sincerely,
> Snowy
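
For reference, the two directives discussed above look like this in
squid.conf (the 20 MB values are the ones from the test described; this
is a sketch of the relevant lines, not a complete configuration):

```
# squid.conf excerpt -- values from the test described above (sketch only)
maximum_object_size 20 MB            # largest object cached to disk
maximum_object_size_in_memory 20 MB  # largest object kept in the memory cache
```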
Received on Wed Jun 19 2002 - 16:18:53 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:08:44 MST