Re: 20GB IDE and Redhat 6.0

From: Jon Kay <>
Date: Sat, 02 Oct 1999 11:26:23 +0000

Henrik Nordstrom wrote:
> One of the Alpha 2.2.STABLE4 Squid servers I maintain with 20GB of cache
> (1.5-1.7 million objects) and quite high load (130-160 requests/s) uses
> little more than 300MB of memory for the Squid process, then the OS
> needs some memory for disk buffers and network buffers, and other
> applications like the DNS server, resolvers, redirectors and other
> accessories require their share. It all sums up to a well balanced
> server at 512MB.
> A review of the figures indicates that much of this may be attributed
> to the 64 bit nature of the Alpha machines causing Squid's internal data
> structures to require quite a bit more memory (a rough estimate gives
> ~40% larger, more details on Monday). On a typical 32-bit processor the
> memory requirements of Squid may be something like 80MB less than this,
> and if you don't run a DNS server or other memory hungry processes you
> may save something on the order of an additional 50MB compared to this
> machine.
> So for another CPU with the same configuration you may survive with only
> 384MB.

All right, but numbers like '8MB per gig' have been bruited around a lot, and
not without reason. Even 384MB is a lot more than 160MB. Would you care to post
a cachemgr.cgi run, maybe even a netstat -m run as well against that cache, so
we can see something of what's going on?
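For what it's worth, here is a back-of-the-envelope sketch of where the gap comes from, using only the figures mentioned in this thread (the '8MB per gig' rule of thumb and Henrik's ~40% 64-bit overhead estimate); the function name and the exact scaling are my assumptions, not anything measured:

```python
# Rough Squid index-memory sizing sketch based on the numbers in this thread.
# The 8 MB-per-GB-of-cache rule of thumb and the ~40% 64-bit overhead are
# from the discussion; treating the overhead as a flat multiplier is an
# assumption for illustration only.

def index_memory_mb(cache_gb, mb_per_gb=8.0, word64=False):
    """Estimate Squid's in-core metadata size for a given cache size."""
    base = cache_gb * mb_per_gb       # 32-bit rule of thumb: ~8 MB per GB
    if word64:
        base *= 1.4                   # ~40% larger structures on Alpha
    return base

# 20 GB cache, as on Henrik's Alpha server:
print(index_memory_mb(20))               # 32-bit estimate: ~160 MB
print(index_memory_mb(20, word64=True))  # 64-bit estimate: ~224 MB
```

Even the 64-bit figure falls well short of 300MB+, which is why seeing the actual cachemgr.cgi breakdown would be interesting.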

I wonder if it's due to the high workload. Are request-related data structures
and connection counts growing pathologically?

Jon Kay                                         (512) 420-9025
'push done right'                                  Squid consultants
Received on Sat Oct 02 1999 - 10:43:54 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:48:42 MST