Re: Squid process sizes growing

From: James R Grinter <jrg@dont-contact.us>
Date: Sun, 27 Oct 1996 13:58:32 +0000

On Sun 27 Oct, 1996, Oskar Pearson <oskar@is.co.za> wrote:
>2*Internet Explorer (2* about ?4 megs?)
>1*quake (12 megs)
>1*Netscape (SGI Version) (3 megs?)
>1*racing game (12 megs)
>1*Microsoft techCD update (15 megs -if my memory serves me right)
>Plus the normal hits...
>
>So I had no reason to complain if it was using more ram than I had told
>it to.

Yes, that's definitely an issue. Squid also uses its own routines to
load an item from disk into RAM as quickly as possible, all at once,
so large files in your cache are quite a problem for cache hits as
well.

When your cache clients are all slow readers (modem users, typically),
that further complicates the problem.
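
To make that concrete, here's roughly the pattern (my own sketch, not
Squid's actual code): the whole object gets slurped into one buffer,
which then sits in core until the slowest client has drained it.

    /* Hypothetical sketch of whole-object loading: the entire file
     * ends up in one malloc'd buffer for as long as any client
     * still needs it. */
    #include <stdio.h>
    #include <stdlib.h>

    char *slurp_object(const char *path, long *len)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return NULL;

        fseek(f, 0, SEEK_END);
        *len = ftell(f);              /* size of the cached object */
        rewind(f);

        char *buf = malloc(*len);     /* whole object in RAM at once */
        if (buf && fread(buf, 1, *len, f) != (size_t)*len) {
            free(buf);
            buf = NULL;
        }
        fclose(f);
        return buf;   /* caller frees only after the client is done */
    }

    int main(int argc, char **argv)
    {
        long len;
        char *obj;

        if (argc < 2 || !(obj = slurp_object(argv[1], &len)))
            return 1;
        printf("holding %ld bytes in core until the client finishes\n",
               len);
        free(obj);
        return 0;
    }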

>REALLY badly. When squid requests 1 Meg, and then de-allocates it, the
>OS doesn't take the ram back... It can end up only taking about 1/3
>back.

I think that's true of most malloc/free implementations (someone
correct me here if you know of specific ones that do give memory
back). But by itself that isn't so much of a problem: all those pages
will just get swapped out, and everyone has lots of swap, right?
Things degrade when the amount of memory Squid wants to use is larger
than the amount of real memory you have, and it doesn't take long to
get there if many people are downloading large files.
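
If you want to watch this happen, something like the following (my
own experiment, not anything from Squid; results depend on your
malloc) compares the program break before and after a malloc/free
cycle:

    /* On traditional sbrk-based mallocs (typical in 1996) the break
     * stays up after free(); a modern allocator may behave
     * differently (mmap for large requests, trimming the heap top). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NBLOCKS 256
    #define BLKSIZE 4096   /* small blocks, so the heap is used */

    int main(void)
    {
        char *blocks[NBLOCKS];
        int i;

        void *before = sbrk(0);          /* current program break */

        for (i = 0; i < NBLOCKS; i++) {
            blocks[i] = malloc(BLKSIZE);
            blocks[i][0] = 1;            /* touch the page */
        }
        void *during = sbrk(0);

        for (i = 0; i < NBLOCKS; i++)
            free(blocks[i]);
        void *after = sbrk(0);

        printf("break before malloc: %p\n", before);
        printf("break while in use:  %p\n", during);
        printf("break after free:    %p\n", after);
        /* If 'after' equals 'during', the freed pages stayed with
         * the process: the behaviour Oskar describes. */
        return 0;
    }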

What might help would be the implementation of some sort of paging
algorithm inside Squid, so that only chunks of a file at a time are
held in actual memory. Memory usage would then be much closer to
linear in the number of concurrent clients. This does mean that
multiple concurrent readers of a URL would be treated differently, but
that could be a good thing, as the code currently allows a stalled
reader to stall all the others.
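
Roughly what I have in mind (a hypothetical sketch, not a patch; all
names are made up) is each client holding only a fixed window of the
object, refilled from disk as that client drains it, so memory use is
one chunk per concurrent client rather than one whole object:

    #include <stdio.h>

    #define CHUNK_SIZE 4096

    struct client_stream {
        FILE  *object;              /* the cached object on disk */
        long   offset;              /* how far this client has read */
        char   window[CHUNK_SIZE];  /* the only in-core piece */
        size_t avail;               /* valid bytes in window */
    };

    /* Refill this client's window from its own offset; a stalled
     * client simply stops calling this, holds at most one chunk,
     * and no longer blocks anyone else. */
    static size_t stream_fill(struct client_stream *cs)
    {
        fseek(cs->object, cs->offset, SEEK_SET);
        cs->avail = fread(cs->window, 1, CHUNK_SIZE, cs->object);
        cs->offset += (long)cs->avail;
        return cs->avail;
    }

    int main(int argc, char **argv)
    {
        struct client_stream cs = { 0 };

        if (argc < 2 || !(cs.object = fopen(argv[1], "rb")))
            return 1;
        while (stream_fill(&cs) > 0)
            fwrite(cs.window, 1, cs.avail, stdout);  /* "send" a chunk */
        fclose(cs.object);
        return 0;
    }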

(This starts to become almost a userland reimplementation of mmap(),
except that the kernel just might be better at the memory handling and
at scheduling reads from disk. I know, however, that Harvest didn't
use mmap() because of the expense of the page faults.)
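
For comparison, the mmap() version would look something like this
(sketch only, error handling trimmed): the kernel pages the object in
lazily, but the first touch of each page costs a fault.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;
        int fd;

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        fstat(fd, &st);

        /* Map the object; pages come in on demand as they're read. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED,
                          fd, 0);
        if (data == MAP_FAILED)
            return 1;

        fwrite(data, 1, st.st_size, stdout);  /* each new page faults in */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }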

[hopefully that all makes sense, someone tell me if it doesn't]

James.