Re: Does Squid push the data-segment size limit?

From: Dancer <dancer@dont-contact.us>
Date: Thu, 05 Feb 1998 10:31:15 +1000

Michael Pelletier wrote:

> I'm trying to diagnose some problems I'm having with squid dying due to
> memory allocation errors, and my BSD/OS 3.1 system allows you to set a
> limit on "datasize", defined in the manpage as:
>
> datasize
> The maximum size of the data segment for a process; this defines
> how far a program may extend its break with the sbrk(2) system
> call.
>
> I'm not 100% sure what this means, as I've not tinkered with sbrk()
> before. Does Squid need this value to be set quite large for a 3GB cache
> in order to allow enough space for the hot object cache and the database
> indices? Or am I running out of memory due to contention with other
> processes?

Programs get started with a default 'heap': enough space to store global
variables and whatnot, plus a pool for malloc(). That's their data segment. If
you keep mallocing chunks of memory, the pool runs out of space, and malloc()
calls sbrk() to extend the data segment and get more.
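A quick way to watch this happen is to ask sbrk(0) for the current break before
and after a big malloc(). Just a sketch (and a malloc that serves large requests
with mmap() won't move the break), but with a classic sbrk-based malloc you'll
see the break jump:

/* watch the program break grow as malloc() extends the data segment */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before, *after, *p;

    before = sbrk(0);                /* current end of the data segment */
    p = malloc(8 * 1024 * 1024);     /* force malloc to ask for more */
    after = sbrk(0);

    printf("break before: %p\n", before);
    printf("break after:  %p\n", after);    /* higher: the segment grew */
    free(p);
    return 0;
}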

The key thing here is that space allocated with sbrk() _cannot_ be given back to
the operating system until the process _terminates_. Once your data-segment size
has been increased, that's it: as long as your process runs, that memory is
assigned to it, and it cannot be given to any other process. You can free() every
byte, but you're still stuck with it.
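You can see both halves of this in one sitting: free() a big block and the break
stays put, and once sbrk() can't extend the segment past the datasize limit,
malloc() starts returning NULL, which is exactly the sort of allocation failure
that kills squid. A sketch, assuming a classic sbrk-based malloc;
getrlimit(RLIMIT_DATA) reads the same limit your 'datasize' knob sets:

/* Run this under a datasize limit (csh: limit datasize 32m), or the
 * final loop will grind through all your swap. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    size_t total = 0;
    void *p;

    getrlimit(RLIMIT_DATA, &rl);
    printf("datasize limit: %lu bytes\n", (unsigned long)rl.rlim_cur);

    p = malloc(16 * 1024 * 1024);
    free(p);                                    /* every byte given back... */
    printf("break after free: %p\n", sbrk(0)); /* ...but the break stays high */

    /* keep allocating until sbrk() can no longer extend the segment */
    while (total < rl.rlim_cur && malloc(1024 * 1024) != NULL)
        total += 1024 * 1024;
    printf("malloc() gave up after about %lu MB\n",
           (unsigned long)(total >> 20));
    return 0;
}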

(For reference, I need about 64MB of real RAM + 10MB of swap for squid 1.1.18 to
manage a 3GB cache.)

D

--
Did you read the documentation AND the FAQ?
If not, I'll probably still answer your question, but my patience will
be limited, and you take the risk of sarcasm and ridicule.