[squid-users] retrieving 1 file with 50 concurrent connections from memory cache in reverse proxy is really slow?

From: wheres theph <wherestheph_at_gmail.com>
Date: Wed, 14 May 2008 15:15:25 -0700

I set up Squid as a reverse proxy that round-robins to 2 web servers (one
of which is the same machine Squid runs on), and things appear to work
fine for a single user browsing casually. However, an ApacheBench run with
50 concurrent users against the same URL times out:

***********************
# ab -c 50 -n 250 -H "Accept-Encoding: gzip,deflate" http://www.domain.com/images/20k.gif
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking domain.com (be patient)
Completed 100 requests
Completed 200 requests
apr_poll: The timeout specified has expired (70007)
Total of 238 requests completed

***********************

On the first access of the gif file, access.log shows TCP_MISS, as
expected. Subsequent accesses show TCP_MEM_HIT, also as expected.

No errors show up in squid.out. Any ideas why serving a static image
from memory is so slow that ApacheBench times out?
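To confirm that the benchmarked requests really are being served as memory
hits rather than misses, one way is to tally the result codes in the
access log; a sketch, assuming Squid's native log format and the default
CentOS log path (adjust the path to your setup):

```shell
# Field 4 of Squid's native access.log is "result_code/status"
# (e.g. TCP_MEM_HIT/200); strip the status and count each code.
awk '{print $4}' /var/log/squid/access.log | cut -d/ -f1 | sort | uniq -c | sort -rn
```

If the counts are dominated by TCP_MEM_HIT after the first request, the
cache itself is behaving, and the slowdown lies elsewhere.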

My round robin setup is pretty standard I think:
*************
http_port 12.34.56.1:80 vhost defaultsite=www.domain.com
http_port 3128
cache_peer 12.34.56.2 parent 80 0 no-query originserver round-robin
cache_peer 127.0.0.1 parent 80 0 no-query originserver round-robin

cache_dir null /tmp

url_rewrite_host_header off

acl our_sites dstdomain www.domain.com
http_access allow our_sites

maximum_object_size_in_memory 1024 KB
*************
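Not from the original post, but worth checking alongside this config:
under 50-way concurrency Squid can be starved of memory cache or file
descriptors even when single requests are fast. A hedged sketch of
directives one might add (the directive names exist in Squid 2.6; the
values are illustrative assumptions, not recommendations):

```
cache_mem 64 MB            # in-memory object cache; the default is only 8 MB
max_filedescriptors 4096   # raise the FD limit, if the OS ulimit allows it
half_closed_clients off    # drop half-closed client connections promptly
```

With `cache_dir null`, `cache_mem` bounds the entire cache, so an
undersized value would force misses under load.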

I am using CentOS 5 with Squid 2.6.STABLE6-5.el5_1.
Received on Wed May 14 2008 - 22:15:28 MDT