Re: [squid-users] Squid is not caching content in reverse proxy mode

From: bnichols <mrnicholsb_at_gmail.com>
Date: Tue, 3 Jul 2012 18:54:32 -0700

On Wed, 04 Jul 2012 11:52:31 +1200
Amos Jeffries <squid3_at_treenet.co.nz> wrote:

> On 04.07.2012 09:00, Abhishek Chanda wrote:
> > Hi all,
> >
> > I am trying to configure Squid as a caching server.
>

If you are new to Squid, you might have better luck pasting the full
contents of your config file here and asking us to review it for any
issues.

 
> Squid version?
>
> > I have a LAN where the web server (Apache) is at 192.168.122.11,
> > Squid is at 192.168.122.21, and my client is at 192.168.122.22.
> > The problem is, when I look at Squid's access log, all I see are
> > TCP_MISS messages. It seems Squid is not caching at all. I checked
> > that the cache directory has all the proper permissions. What else
> > could be going wrong here?
>
> * Your web server may be informing squid no response is cacheable.
>
> * Your web server may not be supplying cache-control expiry
> information correctly. So Squid can't store it.
>
> * Your clients may be informing Squid the content they need is
> outdated and needs updating.
>
> * You may be looking at logged URLs with hidden/truncated query
> strings, when in fact there are no repeat requests in your traffic.
>
>
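A quick way to check the first two points above is to ask the Apache
origin directly for its response headers and look for Cache-Control,
Expires, Last-Modified and ETag. Using the addresses from the original
post (just a sketch, adjust to your setup):

  # HEAD request to the origin; show only the caching-related headers
  curl -sI http://192.168.122.11/ | egrep -i 'cache-control|expires|last-modified|etag'

If neither Cache-Control nor Expires comes back, Squid has to fall back
on the refresh_pattern heuristics shown further down in the config.
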
> > Here is my squid config:
> >
> > acl manager proto cache_object
> > acl localhost src 127.0.0.1/32 ::1
> > acl to_localhost dst 127.0.0.1/8 0.0.0.0/32 ::1
> > acl SSL_ports port 443
> > acl Safe_ports port 80
> > acl Safe_ports port 21
> > acl Safe_ports port 443
> > acl Safe_ports port 70
> > acl Safe_ports port 210
> > acl Safe_ports port 1025-65535
> > acl Safe_ports port 280
> > acl Safe_ports port 488
> > acl Safe_ports port 591
> > acl Safe_ports port 777
> > acl CONNECT method CONNECT
> > http_access allow all
>
> "allow all" at the top of the security rules.
>
>
> Run this command:
> squidclient -p 3128 -h <your-squid-IP> -j google.com /
>
> You should NOT be able to retrieve content for anyone else's website
> through a properly configured reverse-proxy.
>
> Please notice the http_access and cache_peer_access rules in
> http://wiki.squid-cache.org/ConfigExamples/Reverse/VirtualHosting
>
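For what it's worth, the access rules on that wiki page boil down to
something like the sketch below. The hostname is a placeholder for
whatever public site name you actually serve; the key points are that
the "allow" for your own site comes before a final "deny all", and
there is no bare "allow all" anywhere:

  # accept only requests for the published site(s)
  acl our_sites dstdomain www.example.com
  http_access allow our_sites
  http_access deny all

  # and only relay those requests to the origin peer
  cache_peer_access webserver allow our_sites
  cache_peer_access webserver deny all
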
>
> > http_access allow manager localhost
> > http_access deny manager
> > http_access deny !Safe_ports
> > http_access deny CONNECT !SSL_ports
> > http_access allow localhost
> > http_access deny all
> > http_port 3128 accel defaultsite=cona-proxy vhost
>
> HTTP uses port 80, not port 3128.
>
> This is wrong unless all your public website URLs look like:
>
> http://example.com:3128/something
>
>
> It is a bad idea to test using a setup different from your intended
> production configuration. The handling of ports and IPs changes
> radically between the traffic modes.
> ... in this setup squid will default to passing
> "http://cona-proxy:3128/" as the URL details to your origin servers,
> with the "cona-proxy" replaced by whatever is in the Host: header
> *if* one is provided.
>
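In practice a production accelerator usually listens on port 80 with
the real public site name, along the lines of the following sketch
(www.example.com being a placeholder here):

  # listen on the standard HTTP port; defaultsite= is only used when a
  # client sends no Host header, vhost makes Squid honour Host when given
  http_port 80 accel defaultsite=www.example.com vhost
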
>
> > cache_peer 192.168.122.11 parent 80 0 no-query originserver
> > login=PAS
>
> "PAS" --> "PASS"
>
> > name=webserver
> > cache_dir ufs /var/spool/squid3 100 16 256
>
> 100MB cache.
>
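Taking those two corrections together, the relevant lines might look
something like this sketch (10000 MB is just an illustrative size, pick
one that fits your disk):

  # relay to the Apache origin, passing client login credentials through
  cache_peer 192.168.122.11 parent 80 0 no-query originserver login=PASS name=webserver

  # roughly 10 GB of on-disk cache instead of 100 MB
  cache_dir ufs /var/spool/squid3 10000 16 256
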
> > coredump_dir /var/spool/squid3
> > refresh_pattern ^ftp: 1440 20% 10080
> > refresh_pattern ^gopher: 1440 0% 1440
> > refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> > refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
> > refresh_pattern . 0 20% 4320
> > always_direct allow all
>
>
> Here is one definite problem:
>
> "always_direct allow all" --> AKA, "do not use any cache_peer
> settings. Ever."
>
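If everything is meant to be fetched via the origin peer defined above,
the simplest fix is to delete that always_direct line. Being explicit
about the routing is also common; a sketch:

  # remove:  always_direct allow all
  # force all requests to be routed through the configured cache_peer(s)
  never_direct allow all
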
>
> > acl server_users dstdomain cona-proxy
> > http_access allow server_users
> > cache_peer_access webserver allow server_users
> > cache_peer_access webserver deny all
> >
> > In all machines, cona-proxy points to 192.168.122.21 (added that in
> > /etc/hosts)
>
> Bad. defaultsite= should be an FQDN which can appear on public URLs.
> If you need to specify it in your hosts file, it is not a name that
> public clients will ever send in their requests.
>
> > Output of curl -v 192.168.122.11 from 192.168.122.22
> >
> > * About to connect() to 192.168.122.11 (#0)
> > * Trying 192.168.122.11... connected
>
> Notice how this is going directly to the .11 machine. The proxy is
> never contacted.
>
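To exercise the proxy from 192.168.122.22, point curl at the Squid box
and port instead, and supply the Host name Squid expects. With the
addresses and port from the original post, something like:

  # send the request to Squid on .21:3128, asking for the cona-proxy site
  curl -v -H "Host: cona-proxy" http://192.168.122.21:3128/

The request should then appear in access.log, and repeating it is how
you would look for a TCP_HIT rather than another TCP_MISS.
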
>
> >> GET / HTTP/1.1
> >> User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0
> >> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> >> Host: 192.168.122.11
> >> Accept: */*
> >>
> > < HTTP/1.1 200 OK
> > < Date: Mon, 02 Jul 2012 05:48:50 GMT
> > < Server: Apache/2.2.22 (Ubuntu)
> > < Last-Modified: Tue, 19 Jun 2012 23:04:25 GMT
> > < ETag: "27389-b1-4c2db4dc2c182"
> > < Accept-Ranges: bytes
> > < Content-Length: 177
> > < Vary: Accept-Encoding
> > < Content-Type: text/html
> > < X-Pad: avoid browser bug
> > <
>
> This response involves variant handling (the Vary: header) and, since
> it carries no Cache-Control or Expires, heuristic age calculation.
>
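If you would rather not depend on heuristics at all, the origin can be
told to send explicit freshness information. On Apache that is normally
done with mod_expires / mod_headers; a minimal sketch, assuming both
modules are enabled and using an arbitrary ten-minute lifetime:

  # inside the relevant <VirtualHost> or <Directory> block on 192.168.122.11
  ExpiresActive On
  ExpiresByType text/html "access plus 10 minutes"
  Header append Cache-Control "public"
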
> I suggest using the tool at redbot.org on your URLs to find out why
> they are a MISS. It will scan for a large number of cases and report
> the reasons behind any caching problems, along with the cache times.
>
>
> Amos