Re: [squid-users] Tuning Squid for large user base

From: James MacLean <macleajb@dont-contact.us>
Date: Sun, 7 Mar 2004 09:13:31 -0400 (AST)

On Sun, 7 Mar 2004, Henrik Nordstrom wrote:

> On Sat, 6 Mar 2004, James MacLean wrote:
> > This is _definitely_ the case. We have a rate limited (QoS via CISCO) 6MBs
> > link. It's on a 100Mb/s ethernet circuit. It runs _full_ all day. Hence
> > the idea to apply Squid. We are using squid now at over 300 educational
> > sites and have had great success with it.
>
> Is the above 6 Mbit/s or 6 MByte/s?
>
> 6 Mbit/s is not very much for a Squid to handle in terms of traffic
> volume. 6 MByte/s is a different story.

6MBytes. 620+ sites. Thousands of client computers :).
 
> Use of "half_closed_clients off", "quick_abort_min 0 KB" and
> "quick_abort_max 0 KB" is recommended in such situations. If extreme, then
> disabling the use of persistent connections also helps.

I have not tried the quick_abort settings and will give them a go tomorrow. The
same goes for persistent connections, though as always it is important that the
clients see as little change as possible when implementing this ;).
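For the record, here is roughly what I plan to drop into squid.conf tomorrow
based on the above; this is only a sketch of the suggestion, not something we
have tested yet, and the persistent-connection lines would only go in if things
still look extreme:

# Abort the server-side fetch as soon as the client goes away
quick_abort_min 0 KB
quick_abort_max 0 KB

# Don't hang on to half-closed client connections
half_closed_clients off

# Only if things are still extreme: turn off persistent connections
client_persistent_connections off
server_persistent_connections off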
 
> > for their web pages is quicker than everyone being proxied by squid. This
> > was the first time we had actually seen Squid act this way and obviously
> > have been trying to pick out what we have been doing wrong. Some slight
> > delay because of the proxy is fine, but as you watch, the traffic to the
> > Internet drops and client response time jumps :(.
> You need to determine why this happens. There are a couple of different
> scenarios requiring different actions.
>
> Things you need to monitor are
>
> * CPU usage

Certainly this goes up, and it is mostly on one CPU, which as I understand it is
to be expected. Load gets over 2, but was staying under 3.

> * Number of active filedescriptors

That climbs fast, but it does peak.

> * vmstat activity

This I have not watched yet. To do for tomorrow.

> * cache.log messages

Nothing that strikes me other than some entries like these:

2004/03/05 16:01:24| parseHttpRequest: Requestheader contains NULL characters
2004/03/05 16:01:24| clientReadRequest: FD 2727 Invalid Request
2004/03/05 16:01:34| urlParse: Illegal character in hostname 'www.www.picher's.com.org'
2004/03/06 06:10:30| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 06:10:30| clientReadRequest: FD 16 Invalid Request
2004/03/06 07:39:47| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 07:39:47| clientReadRequest: FD 14 Invalid Request
2004/03/06 17:11:06| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 17:11:06| clientReadRequest: FD 14 Invalid Request
2004/03/06 19:57:31| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 19:57:31| clientReadRequest: FD 14 Invalid Request
2004/03/06 20:10:04| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 20:10:04| clientReadRequest: FD 15 Invalid Request
2004/03/06 21:42:45| CACHEMGR: admin@142.227.51.1 requesting 'menu'
2004/03/06 21:42:47| CACHEMGR: admin@142.227.51.1 requesting 'info'
2004/03/06 22:18:56| parseHttpRequest: Requestheader contains NULL characters
2004/03/06 22:18:56| clientReadRequest: FD 14 Invalid Request

> > . Squid slows way down when its upstream request pipe is full, or
> > . There is a certain number of open FDs that, when we go beyond it, makes
> > Squid start to stall?
>
> Both apply. Often together to make things even worse..
>
> * pipe is full, causing lag on fetching objects
> * the lag causing more and more clients to queue up increasing the
> filedescriptor usage
> * the increased filedescriptor usage increases CPU usage, and when
> reaching 100% Squid starts to lag due to a shortage of CPU time
> * the increased filedescriptor usage may also make Squid or your system
> run short of filedescriptors, forcing Squid to stop accepting new requests
> for a while, further increasing the lag. This condition is always logged in
> cache.log should it occur.

Originally we saw these, hence upping the FDs, and now we don't see this
message anymore.

The pipe is full regardless of having Squid up, but without Squid the
client response time is much more favorable. Maybe 10,000 requests spread
over many clients work better over the pipe than all those requests coming
from only Squid?
 
> The same symptoms can also be seen due to swap activity. If your system
> for some reason runs short on memory it will start to swap, and Squid is
> hit very badly from swap activity.

Originally it was swapping when it loaded up, but now, especially with
no_cache deny all during testing, it is not swapping at all. Also, when
the upstream link was not full and there were fewer clients, it seemed to
work as expected.
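For reference, the testing setup is essentially the stock config plus something
like the following; the cache_mem figure here is only an example value, not
necessarily what we run:

# "all" is normally already defined in the default squid.conf
acl all src 0.0.0.0/0.0.0.0

# Bypass the disk cache entirely while testing
no_cache deny all

# Keep Squid's own memory use modest so the box never swaps
cache_mem 64 MB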

> > Are the 3 cache_dir's per box on different channels... for speed?
> Squid does not use very much bandwidth to the drives so multiple channels
> rarely make much difference.
> What Squid uses mostly for the cache is seek time, so it is important
> each drive can operate independently. What this means is that certain IDE
> configurations where only one drive per channel can be processing commands
> at a time are not suitable for Squid.

Ah, OK. In our case we are using RAIDed SCSI, and with that it sounds like
one diskd cache_dir line should suffice?
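That is, a single line along these lines, where the path, size and L1/L2
numbers below are only placeholders for our real values:

# One diskd cache_dir on the SCSI RAID volume
# (path, size in MB, and L1/L2 directory counts are placeholders)
cache_dir diskd /cache 20000 16 256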
 
> The filedescriptor usage on the other hand represents what is going on
> right now and is not impacted by history.
> But the filedescriptor usage does look a bit high even if assuming all
> those clients are very active right now.

It appears that requests not being serviced fast enough by the uplink are
adding to this congestion. I wonder if running multiple Squids on the one
PC would be more effective than one instance with so many open FDs. I'm
guessing that whatever FD relief that gives would be lost by running
independent caches?
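If we did try multiple instances, I imagine the second Squid would only need
its own port, cache_dir, pid file and logs, something like the sketch below;
every port, path and name here is made up, not our actual setup:

# Second Squid instance on the same box (hypothetical values)
http_port 3129
icp_port 3131
cache_dir diskd /cache2 20000 16 256
pid_filename /var/run/squid2.pid
cache_access_log /var/log/squid2/access.log
cache_log /var/log/squid2/cache.log

# Optionally let the two instances share hits as ICP siblings
# cache_peer 127.0.0.1 sibling 3128 3130 proxy-only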
 
> Regards
> Henrik

Thanks for the information,
JES

-- 
James B. MacLean        macleajb@ednet.ns.ca
Department of Education 
Nova Scotia, Canada
     