RE: Squid dying ...

From: } <Bike@dont-contact.us>
Date: Thu, 16 Jul 1998 16:04:49 +0800

>(CHILD_MAX is for Apache, if you're running that too...) You can set
>the DFLDSIZ (the default datasize) and the MAXDSIZ to whatever suits
>your machine's memory configuration. However, I have had problems
>getting Squid to function correctly on my Intel Pentium-133 with a
>datasize of greater than 128MB, even though I have 256MB available on
>the system. Let me know if it works for you. Squid will generally
>stay up and running for at least a week, and usually more like 3-4
>weeks, with 128MB limit and a 24MB hot cache size on a 3,750MB cache,
>so I don't worry about it too much -- RunCache restarts it
>automatically anyway.

You mean your squid automatically restarts every 3-4 weeks? How did
you do that? My squid has been running for three days and now occupies
about 24MB of memory. I don't know whether my squid will die after it
grows to 64MB. Is there any way to keep squid running as long as
possible?

I am running squid 1.1.22 on FreeBSD 2.2.5. My machine will be a PII-300
with 512MB RAM and a 54GB cache disk. (It's now a K6-233 with 128MB RAM
and a 1GB cache disk.) It's important for us to keep the system stable.

Thanks.

-----Original Message-----
From: Michael Pelletier [mailto:mikep@comshare.com]
Sent: Thursday, July 09, 1998 9:47 PM
To: Michael Beckmann
Cc: Paresh Kumar; Squid List
Subject: Re: Squid dying ...

On Thu, 9 Jul 1998, Michael Beckmann wrote:

> Paresh Kumar wrote:
>
> > I've got this situation where squid dies every coupla hours with
> > this message: "FATAL: xmalloc: unable to allocate 4096 bytes"
> >
> > We are using BSD/OS 3.0 with 128Mb RAM. I have been monitoring the
> > amount of free RAM and there's usually 30-40M free when it dies.
>
> I think what matters here is not the free RAM, but rather the memory
> available to the Squid process. Look at the size of the squid
> process, I suspect it dies when it reaches 64 MB.

This is correct. What matters here is the "datasize" parameter. If
you enter the command "ulimit -a" in Korn shell, you get something
like:

time(cpu-seconds) unlimited
file(blocks) unlimited
coredump(blocks) unlimited
data(kbytes) 16384
stack(kbytes) 131072
lockedmem(kbytes) unlimited
memory(kbytes) unlimited
nofiles(descriptors) 13196
processes 4116
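If the data(kbytes) figure there is too small, you can also try raising
the soft limit from the shell before starting Squid. A sketch in
Bourne/Korn shell syntax (the flag letter and output wording vary a
little between systems, and raising past the kernel's hard limit will
fail):

```shell
# Check the current data-segment soft limit (in kbytes, or "unlimited").
echo "datasize before: $(ulimit -d)"

# Try to raise the soft limit up to the hard limit set by the kernel
# (MAXDSIZ on BSD/OS); this is refused if the hard limit is already hit.
ulimit -d unlimited 2>/dev/null || echo "could not raise datasize"

echo "datasize after: $(ulimit -d)"
```

This only helps up to the kernel's hard limit, which is why the kernel
options below matter.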

In BSD/OS you set the MAXDSIZ parameter in the kernel configuration:

inet-prime$ cd /usr/src/sys/386/conf
inet-prime$ vi INET-PRIME

Around line 63 of the configuration file, you see a line that says:

# support for large routing tables, e.g. gated with full Internet routing:
options "KMEMSIZE=\(16*1024*1024\)"
options "DFLSSIZ=\(8*1024*1024\)"

Right there's a good place to add the following lines:

options "DFLDSIZ=\(128*1024*1024\)"
options "MAXDSIZ=\(256*1024*1024\)"
options "CHILD_MAX=256"

(CHILD_MAX is for Apache, if you're running that too...) You can set
the DFLDSIZ (the default datasize) and the MAXDSIZ to whatever suits
your machine's memory configuration. However, I have had problems
getting Squid to function correctly on my Intel Pentium-133 with a
datasize of greater than 128MB, even though I have 256MB available on
the system. Let me know if it works for you. Squid will generally
stay up and running for at least a week, and usually more like 3-4
weeks, with 128MB limit and a 24MB hot cache size on a 3,750MB cache,
so I don't worry about it too much -- RunCache restarts it
automatically anyway.

In order to allow Squid to max out this limit as set in the kernel,
you should add the following to the RunCache script:

unlimit; unlimit datasize

Right up at the beginning of the script. I found while running
news.daily that for some reason, when you do "unlimit" and then switch
users, the datasize clicks back to its original minuscule value.
Hence the addition of "unlimit datasize" after the "unlimit," just to
be on the safe side.
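If your RunCache runs under a Bourne-style shell rather than csh, the
equivalent of "unlimit; unlimit datasize" is a ulimit call. A sketch of
what the top of such a wrapper might look like (the exact flag letter
may differ on BSD/OS):

```shell
#!/bin/sh
# Top of a RunCache-style wrapper (sketch; assumes a Bourne-style shell --
# "unlimit; unlimit datasize" is the csh spelling of the same thing).
ulimit -d unlimited 2>/dev/null    # raise the data-segment soft limit
ulimit -d                          # print the effective limit as a sanity check

# ... the existing RunCache restart loop continues below ...
```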

        -Mike Pelletier.
Received on Thu Jul 16 1998 - 01:06:53 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:41:08 MST