Re: [squid-users] Questions about large files

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 14 Oct 2004 10:19:52 +0200 (CEST)

On Wed, 13 Oct 2004, Brennon Church wrote:

> I'm downloading a CD image at 600M. Part-way through a download, the
> download manager will tell me that the file is done. When I look at the file,
> I see that it's not; it could be 20 Megs or it could be 500 Megs.

Anything in cache.log?

What is said in access.log?

> 2) After those times where a larger file (again, 600M or so) succeeds, the
> object is there, and I'm able to download the file again from the cache
> rather than directly from the site. Shortly afterwards, however, the object
> is overwritten by something else.

What cache_replacement_policy are you using?
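
The default lru policy evicts whatever has gone unreferenced the longest,
so ordinary browsing traffic can quickly push a 600M object out again. As
an illustration only (not a recommendation for your setup), a policy that
favours keeping large, frequently requested objects would look like this
in squid.conf:

  # heap LFUDA optimises byte hit ratio: popular large objects stay cached
  cache_replacement_policy heap LFUDA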

Also enable log_mime_hdrs and make sure it is not your client that forces
Squid to disregard the cache; with log_mime_hdrs enabled, the request
headers recorded in access.log will show this.
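
For example, the squid.conf line is simply:

  log_mime_hdrs on

With it enabled, each access.log entry gets two extra bracketed fields
containing the request and reply headers, so you can see whether the
download manager sends things like "Cache-Control: no-cache",
"Pragma: no-cache" or Range requests, any of which can keep the full
object from being served out of (or stored in) the cache.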

It could also be that the server publishing the large file puts a limit on
how long it may be cached, or it may be a matter of your refresh_pattern
settings.
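
As a rough illustration only (the regex and the times here are invented,
not a recommendation), a rule allowing ISO images to be considered fresh
for up to 30 days would go before the catch-all default:

  # refresh_pattern [-i] regex  min  percent  max   (min/max in minutes)
  refresh_pattern -i \.iso$   1440  90%  43200
  refresh_pattern .              0  20%   4320

Patterns are tried in order and the first match wins, so the specific rule
has to come before the generic one.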

Verify the object with the cacheability check engine if unsure. It
explains many of the details of how caches determine how long an object
may be cached.
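
The headers that matter are those on the reply from the origin server. A
reply that may be cached and stays fresh for a day would carry something
like (values invented for illustration):

  HTTP/1.1 200 OK
  Date: Thu, 14 Oct 2004 08:00:00 GMT
  Last-Modified: Mon, 11 Oct 2004 08:00:00 GMT
  Cache-Control: max-age=86400

If there is no Cache-Control or Expires at all, Squid falls back to your
refresh_pattern heuristics, estimating freshness from Last-Modified.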

Regards
Henrik
Received on Thu Oct 14 2004 - 02:19:54 MDT
