Re: [squid-users] squid not caching big files.

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sun, 16 Feb 2014 02:53:55 +1300

On 16/02/2014 1:31 a.m., Михаил Габеркорн wrote:
> Good day!
>
>
> my Squid can't cache (store to cache) big files >200 MB.
>
> What am I doing wrong?

32-bit build of Squid?

32-bit operating system drivers? (unlikely, but maybe).

Any particular URLs being troublesome?

<snip>

> cache_mem 1800 MB
>
> maximum_object_size 999 GB
>
> minimum_object_size 0
>
> cache_dir ufs /var/cache 30000000 96 984 min-size=0 max-size=300000000
>

I'd make that L1 value 128 or 256 subdirectories instead of 96, and use
fewer L2 subdirectories. The point of the L1/L2 split is to flatten the
lookup time across the 2^27 objects Squid can put in there.
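
For example, a flatter layout over the same disk budget as the line
quoted above might look like this (the 256/256 values are illustrative,
not a tuned recommendation for your hardware):

  cache_dir ufs /var/cache 30000000 256 256 min-size=0 max-size=300000000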

Also note that Squid's 2^27 object count limit is absolute. There is no
use in configuring a 30TB cache size if the objects stored inside it
will only ever occupy 2TB of it.
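
As a rough illustration of why that matters (the 16 KB average object
size below is an assumption made for the arithmetic, not a measured
figure):

  2^27 objects = 134,217,728 objects
  134,217,728 objects x 16 KB each ≈ 2 TB of usable cache

so a 30TB cache_dir holding mostly small objects can never actually be
filled before the object count limit is reached.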

Also note that Squid is best served by one cache_dir per physical
device. Splitting one spindle of spinning media into multiple partitions
only causes problems, and likewise joining devices with RAID / LVM for
extra-large storage mostly leads to problems.
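
A sketch of the one-directory-per-disk layout in squid.conf, assuming
two separate physical disks mounted at /cache1 and /cache2 (the mount
points and sizes here are hypothetical):

  # one cache_dir per physical disk, no RAID / LVM underneath
  cache_dir ufs /cache1 500000 128 256
  cache_dir ufs /cache2 500000 128 256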

<snip>
>
> refresh_pattern . 900000000 80% 900000000 override-expire
> override-lastmod reload-into-ims ignore-no-cache ignore-private
> ignore-auth

Several seriously bad ideas in the above.

If you are using a Squid older than 3.4, please consider upgrading
rather than using that type of refresh_pattern. The latest Squid can
properly cache a much larger share of traffic without:
 a) leaking any user's private information to other users, or
 b) damaging the way pages display, or
 c) "corrupting" web application processing.

Newer releases will also protect you from N-bit integer wrap on those
time limits and from exceeding the HTTP protocol's recommended (BCP)
storage limits.
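
For comparison, the conservative refresh_pattern rules shipped in the
stock squid.conf look roughly like this, and are a far safer starting
point than the huge override values quoted above:

  refresh_pattern ^ftp:             1440   20%   10080
  refresh_pattern ^gopher:          1440    0%    1440
  refresh_pattern -i (/cgi-bin/|\?)    0    0%       0
  refresh_pattern .                    0   20%    4320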

HTH
Amos