Re: Proxy of large files

From: Duane Wessels <>
Date: Tue, 21 Jan 97 08:19:29 -0800

writes:

>On Mon 20 Jan, 1997, Richard Roderick <> wrote:
>>Setting maximum_object_size to X will prevent squid from keeping a copy of
>>files larger than X. Many of us set X to a smaller number because squid
>>keeps a copy of the object in memory while it downloads, which tends to
>>lead to out-of-memory problems.
>maximum_object_size is supposed to limit an object's memory use: once the
>limit is exceeded, the object goes into 'delete behind' mode and is marked
>private.
>As an aside, I'm not sure this always works. I noticed yesterday on our
>production 1.0.20-based cache that there was a large amount of memory in
>use (400MB), with what appeared to be 180MB tied up in a single object
>(from looking at VM Objects): a broken server push script, I think.
>At the point where the object goes into delete behind, there was a
>secondary problem of multiple readers being able to stall a fetch;
>but I'm not sure if that problem still exists in 1.1?
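For context, the limit Richard mentions is the maximum_object_size
directive in squid.conf. A minimal sketch (the value here is illustrative,
not a recommendation; I believe the unit is kilobytes in the 1.1 config):

    # squid.conf: don't keep objects larger than ~8 MB (value in KB)
    maximum_object_size 8192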

Yes, this problem does still exist. It's one of the reasons to
switch to the disk-only storage model.

Does anyone have good or bad things to say about the 'NOVM' version
of squid-1.1.4? Does anyone think that switching over to the disk-only
approach is the wrong thing to do?

Duane W.
Received on Tue Jan 21 1997 - 08:36:23 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:34:08 MST