Re: [squid-users] Squid + FreeBSD and datasize segment limit

From: Joe Cooper <joe@dont-contact.us>
Date: Fri, 15 Mar 2002 22:12:21 -0600

Hmmm... Why not just shrink your cache_dir total to something more
reasonable? 96GB is /huge/ (really, I mean gigantic, elephantine,
gargantuan, ummm...kinda big) for a 1.5GB RAM box, and it is entirely
expected behavior for the Squid process to grow to about 1GB in size,
plus cache_mem and whatever other memory allocations Squid needs to
support your load.
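
To put rough numbers on that: Squid 2.x needs on the order of 10MB of RAM
per GB of cache_dir just for the in-core object index, so 96GB of cache_dir
works out to something like 960MB of index before cache_mem and connection
overhead are even counted, which already blows past a 768MB datasize limit.
A cache_dir total sized to fit would look roughly like this in squid.conf
(one directory per disk; the paths and the 15000MB-per-disk figure are
only illustrative):

  cache_mem 128 MB
  # ~15GB per spindle => ~45GB total => roughly 450MB of index plus
  # cache_mem, which fits comfortably under a 768MB process size limit
  cache_dir ufs /cache1 15000 16 256
  cache_dir ufs /cache2 15000 16 256
  cache_dir ufs /cache3 15000 16 256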

That said, I don't know of a way to make Squid gracefully handle hitting
an OS limit. Avoiding hitting it is the best advice I have to offer,
along with the warning that a Squid process that large on a box with
that little memory will perform quite badly.

Francis Vidal wrote:
> Hi,
>
> We're running Squid 2.4-STABLE4 on FreeBSD-STABLE systems and we're
> nearing the kernel-default 512MB datasize segment limit. We've already
> bumped it to 768MB by re-compiling the kernel and setting
> MAXDSIZ=(768UL*1024*1024). I've included the specs below.
>
> Squid seems to just gobble up whatever memory is available and restarts
> when there's no more memory, but it does that over and over again. Is
> there a graceful way of making Squid stay up and running even if it
> reaches the datasize segment limit? One thing we noticed is that when
> this happens, Squid disconnects itself from the WCCP group and does not
> rejoin after it restarts (i.e., after the datasize limit is reached).
>
> --[ machine specification ]-----------------------------------------------
>
> Memory: 1.5GB
> cache_dir total: 96GB (spread over 3 HDDs)
> cache_mem setting: 128MB
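
(For anyone hitting the same wall: the datasize bump Francis describes
goes into the FreeBSD kernel configuration file, roughly like the lines
below, followed by a kernel rebuild and reboot. The 768MB value simply
mirrors his setting; DFLDSIZ is the default per-process datasize and is
commonly raised alongside MAXDSIZ.)

  options    MAXDSIZ="(768UL*1024*1024)"
  options    DFLDSIZ="(768UL*1024*1024)"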

-- 
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support