Squid Decreasing Performance

From: Jose Pereira Van-Dunem <jvd@dont-contact.us>
Date: Wed, 28 Apr 1999 09:39:37 +0100

Hello Oskar,

I think I understood your very helpful explanation (below) concerning Squid's
behaviour and performance in relation to object size, disk space, and RAM size
(cache_mem)!

You wrote:

>Squid uses memory as it adds data to the on-disk store. If your
>store is too big, Squid's memory usage will exceed your RAM. When this
>happens, your machine will start to swap.
>Whenever Squid locates an object on disk, you will end up swapping
>large amounts of RAM... you don't want this.

>The average size of requested objects (on most caches) is 13 KB.
> You need about 6 megs of ram per gig of disk.

I am running squid using the default settings:

> cache_mem 8
> cache_dir /usr/local/squid/cache 100 16 256

But my machine is swapping, and it slows the whole of Squid down!

I run 'vmstat 1' and watch the 'si' and 'so' values:

si = 48 so = 32
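As a rough sanity check, the rule of thumb from the quoted explanation (~13 KB average object, ~75 bytes of index RAM per object) can be applied to the 100 MB cache_dir above in a short Python sketch (the variable names are mine):

```python
# Estimate the in-memory index cost of a 100 MB cache_dir, using the
# figures from the quoted mail: ~13 KB/object, ~75 bytes of index RAM each.
cache_dir_mb = 100
objects = cache_dir_mb * 1024 * 1024 // (13 * 1024)  # roughly 7,900 objects
index_kb = objects * 75 / 1024                        # index RAM in KB

print(objects, round(index_kb))  # the index alone is well under 1 MB
```

So for a cache this small, the on-disk index cannot explain the swapping by itself; something else on the machine must be consuming the RAM.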

I also tried to set

> cache_mem 1
> cache_dir /usr/local/squid/cache 100 16 256

Then I shut down the machine and restarted Squid, but it started swapping again.

The size of my /usr/local/squid/cache/core file is 14,737,408 bytes,
and the size of my /usr/local/squid/cache/swap.state is 3,494,832 bytes.

Any ideas to help me, please!



>> José Pereira Van-Dúnem Email: jvd@ebonet.net
>> EBONet - Dep Servicos Internet
>> R. dos Enganos,Nr 1, 1o, 23 Phone: (244-2) 336533/334329
>> Kinaxixi-Luanda-Angola Fax: (244-2) 390995
>> http://www.ebonet.net PO Box: 3110
-----Original Message-----
From: Oskar Pearson <oskar@linux.org.za>
To: Patrick Kormann <pkormann@datacomm.ch>
Cc: squid-users@ircache.net <squid-users@ircache.net>
Date: Friday, April 09, 1999 10:17 PM

>Hi Patrick
>> It seems I still haven't fully understood the sense of cache_mem and
>> maybe cache_dir. I always get quite short on memory when I cache a lot
>> (even if I don't). Why can you say that cache_mem 48 means half of the
>aah: one of the most common problems!
>Let's ignore 'cache_mem' for a while.
>When you install Squid and start it, it uses a small amount of memory
>for in-memory tables and the like. As objects are added to disk, it
>uses some memory to keep track of which objects are there (it can't do
>a linear search of the entire disk, so it keeps a structured table of
>objects in memory.)
>Each object added uses something like 75 bytes of memory.
>If you check the store_avg_object_size value in squid.conf, you will
>find that the average size of requested objects (on most caches) is
>13 KB.
>If you have a gig of disk space (that's 1024*1024*1024 bytes), you can
>thus fit ((1024*1024*1024) / (13*1024)) objects on it, which comes to
>about 80 000. If each object uses 75 bytes of ram, you need about 6
>megs of ram per gig of disk. Note that this includes things like:
>o Squid: the binary
>o Squid: space for parts of objects that are "in transit"
>o memory leaks
>o operating system buffering
>o network buffers
>o other programs on the machine
>So, I would guess that 7 megs of RAM per gig of disk would be fine for
>most people.
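The arithmetic above can be written out as a small worked sketch, assuming the mail's own figures (13 KB average object, 75 bytes of index RAM each; the function name is mine):

```python
# Rule-of-thumb index memory for a Squid cache store, using the figures
# from the mail: ~13 KB average object, ~75 bytes of in-memory index per
# on-disk object.
AVG_OBJECT_SIZE = 13 * 1024    # bytes, per store_avg_object_size
INDEX_BYTES_PER_OBJECT = 75    # in-memory metadata per on-disk object

def index_ram_mb(disk_gb: float) -> float:
    """Approximate RAM (in MB) needed just to index disk_gb GB of cache."""
    objects = disk_gb * (1024 ** 3) / AVG_OBJECT_SIZE
    return objects * INDEX_BYTES_PER_OBJECT / (1024 ** 2)

print(round(index_ram_mb(1), 1))  # roughly 5.8, i.e. "about 6 megs" per GB
```

Remember this covers only the index; the mail's "7 megs per gig" guess adds headroom for the binary, in-transit objects, leaks, and OS buffering.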
>> cache_dir will be put in memory? Would a cache_mem of 1 be adequate as
>> well? Could you give me a hint how I should set up cache_mem and
>> cache_dir when I have:
>Ok: now, let's consider the 'cache_mem' value.
>Opening files on disk is Slooooowwww compared to sending them from
>memory (even if the OS has the file in the cache, you will find that
>the 'open()' command slows things down.) It's worth taking the
>most-requested objects and storing them in ram if your cache is
>loaded. So that you don't end up using all of the ram in the machine,
>you can set a 'maximum size' allocated to popular objects. This is the
>cache_mem value.
>The person in the previous mail only had 100 MB of disk for his cache.
>Since he had cache_mem set to 48 MB, he was keeping almost 50% of his
>cache store in memory... and it's an incredibly small cache.
>So, he should store only 1 MB instead, and use the memory for other
>> About 15 GB available for squid-cache
>> 256 MB of RAM
>Hmm. 15*7 = 105. So, 105 MB will be used straight off. You have
>about 1.2 million objects (I think.)
>> No other services running on the machine
>> Right now more than 2300 requests per proxy in busy times (there are 4
>> proxies, all the same config, all about the same amount of requests)
>> (I guess, not sure about that) something over 10'000 ICP messages per
>> minute.
>With that many ICP requests, you will use lots of network buffers.
>That's 166 per second. Kernels often don't like lots and lots of very
>small packets, so you should ensure that you aren't using almost all
>of your RAM. You also want lots of memory free for buffering: your
>disk access time is almost always a limiting factor on caches. Of your
>1.2 million objects, you will probably find that only a few hundred are
>actually hit incredibly often. Let's allocate some memory for the 1000
>most requested objects (remember that each object is 13 KB or so): that's
>1000*13 KB = 13000 KB... about 13 megabytes. So, set cache_mem to 15,
>run Squid, and leave the rest as is.
Received on Wed Apr 28 1999 - 02:25:56 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:45:59 MST