Re: [SQU] Memory/CPU Usage

From: Barry Kostjens <bkostjens@dont-contact.us>
Date: Thu, 19 Oct 2000 09:38:43 +0200

On Thursday 19 October 2000 08:56, Simon Bryan wrote:

> Hi,
> While investigating why our internet connection is abnormally slow, I
> noticed that when I view the 'top' process list as root, Squid is using up
> to 10% of CPU and 15% of memory; however, when viewed as a user the figures
> are 98% for CPU and 39% for memory. Does this indicate the need to increase
> RAM?

32 MB is not enough for a Linux box running Squid, at least if your 'cache'
partition is bigger than a couple of hundred MB...

>
> Given that we have a PII with 32 MB RAM, what Squid parameters would
> make the biggest difference to its speed (latency)? We are using NCSA

cache_dir ufs /proxy/cache 1000 16 256

With less RAM you have to decrease your cache_dir size.
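Something like this could work on a 32 MB box (the sizes below are just a
guess on my part, tune them to your own traffic):

  cache_dir ufs /proxy/cache 300 16 256
  cache_mem 4 MB
  maximum_object_size 4096 KB

A 300 MB cache_dir keeps the in-memory object index down to a few MB, which
leaves room for the OS, the Squid binary and in-transit objects.

You can calculate your RAM needs by following the Squid user manual: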

[snip]

RAM requirements

Squid keeps an in-memory table of the objects it stores on disk. Because of
the way that Squid checks if objects are in the file store, fast access to
this table is very important. Squid slows down dramatically when parts of the
table are in swap.

Since Squid is one large process, swapping is particularly bad. If the
operating system has to swap data, Squid is placed on the 'sleeping tasks'
queue, and cannot service other established connections. (? hmm. it will
actually get woken up straight away. I wonder if this is relevant ?)

Each object stored on disk uses about 75 bytes (? get exact value ?) of RAM
in the index. The average size of an object on the Internet is about 13 KB, so
if you have a gigabyte of disk space you will probably store around 80 000
objects.

At 75 bytes of RAM per object, 80 000 objects require about six megabytes of
RAM. If you have 8 GB of disk you will need 48 MB of RAM just for the object
index. It is important to note that this excludes memory for your operating
system, the Squid binary, memory for in-transit objects and spare RAM for the
disk cache.

So, what should the sustained throughput of your disks be? Squid tends to
read in small blocks, so throughput is of lesser importance than random seek
times. Generally, disks with fast seeks also have high throughput, and most
disks (even IDE disks these days) can transfer data faster than clients can
download it from you. Don't blow a year's budget on really high-speed disks;
go for lower seek times instead, or add more disks.

[snip]

Just watch the average object size: the manual says 13 KB, but my experience
is more like 8 KB...
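If you want to redo that arithmetic for your own cache_dir, here is a quick
back-of-the-envelope sketch. It just reuses the 75 bytes/object figure from
the manual quoted above; the average object size is whatever you plug in:

  # Rough estimate of the RAM Squid needs for its object index.
  # 75 bytes/object and the average object size are estimates, not exact values.
  def index_ram_mb(disk_mb, avg_object_kb=8.0, bytes_per_object=75):
      objects = disk_mb * 1024.0 / avg_object_kb         # objects that fit on disk
      return objects * bytes_per_object / (1024 * 1024)  # index size in MB

  # A 1000 MB cache_dir: ~5.6 MB of index RAM at 13 KB/object,
  # ~9.2 MB at 8 KB/object.
  print(index_ram_mb(1000, 13), index_ram_mb(1000, 8))

On a 32 MB machine that index, plus cache_mem and the OS, is exactly why you
want to keep the cache_dir small.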

-- 
greets,
Barry Kostjens | RedHat Certified Engineer
 [Internet Limburg | http://www.ilimburg.nl]
--
To unsubscribe, see http://www.squid-cache.org/mailing-lists.html
Received on Thu Oct 19 2000 - 01:46:38 MDT
