Re: [squid-users] can't get squid to cache

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Wed, 9 Jul 2008 15:22:21 +1200 (NZST)

>
> Hey guys,
>
> I've got a proprietary web application that we use as a back end for
> other applications, and I want to do some aggressive caching with squid,
> as a test, to reduce the load on the back end.
>
> I spent 2-3 days googling and reading the archives, but nothing I do
> or try seems to help! :(
>
> Here's the original back-end request (for one image in that app):
>
> ------------------------------------
> [angelo_at_zvr-web-04 ~]$ wget -S --spider
> http://10.94.206.34:8000/stats_components/collapseon.gif
> --00:19:07-- http://10.94.206.34:8000/stats_components/collapseon.gif
> => `collapseon.gif'
> Connecting to 10.94.206.34:8000... connected.
> HTTP request sent, awaiting response...
> HTTP/1.0 200 OK
> Content-Type: image/gif
> Content-Length: 64
> Length: 64 [image/gif]
> 200 OK
> ------------------------------------
>
> As you can see, it's missing all cache-control headers, as well as the
> Expires and Last-Modified headers.
>
> This is what my squid config looks like (now running 2.6STABLE16; tried
> 3.0RC1 too):
>
> ------------------------------------
> hierarchy_stoplist cgi-bin
> acl QUERY urlpath_regex cgi-bin
>
> shutdown_lifetime 1 second
>
> acl all src 0.0.0.0/0.0.0.0
> cache allow all
>
> #400GB disk cache
> cache_dir ufs /usr/local/squid/cache 409600 16 256
>
> maximum_object_size 5 MB
> cache_mem 1024 MB
> cache_swap_low 90
> cache_swap_high 95
> maximum_object_size_in_memory 512 KB
>
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap LFUDA
>
> http_port 8000 vhost vport
> cache_peer 10.94.206.34 parent 8000 0 no-query originserver
>
> http_access allow all
>
> minimum_expiry_time 3600 seconds
> refresh_pattern . 3600 100% 3600 ignore-no-cache ignore-reload override-expire override-lastmod

Those ignore and override options have no effect when the control headers
are missing, as you noted from your app.
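
If you want to see how squid actually applies those refresh rules to each
response, one option (a rough, untested sketch) is to raise the debug level
on the refresh-calculation section and watch cache.log:

  # log refresh/staleness decisions in more detail (debug section 22)
  debug_options ALL,1 22,3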

>
> access_log /var/log/squid/access.log squid
> cache_log /var/log/squid/cache.log
> cache_store_log /var/log/squid/store.log

Um, since you have the store.log, check it to see what squid is actually
saving to the cache.
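
A quick check, assuming the log path from your config, would be something
like:

  grep collapseon.gif /var/log/squid/store.log

Entries tagged SWAPOUT mean the object was written to the disk cache;
RELEASE means squid discarded it, which is what the store.log you quote
further down shows.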

>
> strip_query_terms off
> ------------------------------------
>
> This was the most aggressive config I could find, and I expected the
> refresh_pattern line to force squid to cache...
>
> But all my access.log file keeps saying is:
> ------------------------------------
> 1215555215.645 1 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 1215555217.096 92 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 1215555217.940 1 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 1215555218.718 2 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> ------------------------------------
>
> And in the store.log:
> ------------------------------------
> 1215555215.645 RELEASE -1 FFFFFFFF 98DDCD4857BAF3122EE99EB25E4C3800 200
> -1 -1 -1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 1215555217.096 RELEASE -1 FFFFFFFF A3AE2ED993B031DBD93CF74E2BD64BC5 200
> -1 -1 -1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 1215555217.940 RELEASE -1 FFFFFFFF FFFE8387EBAB471EC045EFA51F9AE472 200
> -1 -1 -1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 1215555218.718 RELEASE -1 FFFFFFFF EA490C98564ABDF390D216E2C3DC210E 200
> -1 -1 -1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> ------------------------------------
>
>
> Does anyone have any ideas on why Squid won't cache the requests?

Squid can cache _objects_, but HEAD requests and GET requests are
different: a GET response carries the object body, a HEAD response does not.

If you can get the front-end apps (or even your testing spider) to pull
the full object into cache with a GET, I suspect the MISSes would reduce.
Something along the lines of the test below should show it.
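
As a rough test (adjust the host and port to your setup), drop --spider so
wget sends a full GET, fetch the same URL twice, and the second request
should show up as TCP_HIT or TCP_MEM_HIT in access.log if the object was
cached:

  wget -S -O /dev/null http://localhost:8000/stats_components/collapseon.gif
  wget -S -O /dev/null http://localhost:8000/stats_components/collapseon.gif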

Amos
Received on Wed Jul 09 2008 - 03:22:26 MDT
