Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sat, 13 Feb 2010 20:48:45 +1300

Tory M Blue wrote:
> Squid 2.7Stable7
> F12
> AUFS on a ext3 FS
> 6gigs ram
> dual proc
> cache_dir aufs /cache 32000 16 256
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/vda2 49G 3.8G 42G 9% /cache
>
> configure options: '--host=i686-pc-linux-gnu'
> '--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
> '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
> '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
> '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
> '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
> '--mandir=/usr/share/man' '--infodir=/usr/share/info'
> '--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
> '--localstatedir=/var' '--datadir=/usr/share/squid'
> '--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid'
> '--with-pidfile=$(localstatedir)/run/squid.pid'
> '--disable-dependency-tracking' '--enable-arp-acl'
> '--enable-follow-x-forwarded-for'
> '--enable-auth=basic,digest,ntlm,negotiate'
> '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,getpwnam,multi-domain-NTLM,SASL,squid_radius_auth'
> '--enable-ntlm-auth-helpers=no_check,fakeauth'
> '--enable-digest-auth-helpers=password,ldap'
> '--enable-negotiate-auth-helpers=squid_kerb_auth'
> '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
> '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
> '--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
> '--enable-ident-lookups' '--with-large-files'
> '--enable-linux-netfilter' '--enable-referer-log'
> '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
> '--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
> '--enable-wccpv2' '--enable-esi' '--with-aio'
> '--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl'
> '--with-openssl' '--with-pthreads' 'build_alias=i686-pc-linux-gnu'
> 'host_alias=i686-pc-linux-gnu' 'target_alias=i386-redhat-linux'
> 'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -march=i386
> -mtune=i686' 'LDFLAGS=-pie'
>
> No load to speak of, very little iowait. Threads were configured as the default.
>
> This is running on a striped pair of SSDs, and it's only a test script
> (ya, it's hitting it a bit hard), but nothing that Squid nor my
> hardware should have an issue with.

What exactly is this test script doing then?

How many requests, of what type, is it pumping into squid? over what
time period?
  (Going by the cachemgr dump below, I see your Squid is processing an
average of 1190 requests per second, but how intensive the peak load is
remains unknown.)
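For what it's worth, that ~1190 figure appears to fall out of the counters
quoted later in this mail: total client requests divided by accumulated CPU
time (wall_time is negative in that dump, so cpu_time is the only usable
divisor). This is my reading of where the number comes from, not a statement
of how cachemgr itself computes rates:

```python
# Values copied from the "Totals since cache startup" dump below.
client_http_requests = 810672   # client_http.requests
cpu_time = 681.173445           # cpu_time, in seconds

# Average request rate over the measured CPU time.
avg_rate = client_http_requests / cpu_time
print(round(avg_rate))  # -> 1190
```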

If the disks can take it, you could bump --with-aufs-threads=N up a bit
and raise the ceiling. The default is 16 threads per cache_dir configured.
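For example, rebuilding with the option added to the existing configure line
(the value 32 here is purely illustrative; tune it to what your disks can
sustain):

```shell
# Build-time option: more async I/O threads per cache_dir.
# 32 is an arbitrary example value, not a recommendation.
./configure --enable-storeio=aufs,diskd,ufs --with-aufs-threads=32  # plus your other existing options
```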

Queue congestion starts appearing if there are 8 operations queued and
not yet handled by the IO threads. The limit is then doubled (to 16)
before the next warning appears. And so on...

Congestion can be hit easily by a sudden peak in load, until Squid
adjusts to your regular traffic. If the queues fill too much you get the
more serious "Disk I/O overloading" warning instead.
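The escalating-threshold behaviour described above can be sketched roughly
as follows. This is illustrative only, not the actual code from Squid's
aiops.c; the function and variable names are made up for the example:

```c
#include <stdio.h>

/* Warning threshold for queued-but-unserviced disk ops.
 * Starts at 8, as described above. */
static int congestion_warning_level = 8;

/* Check the pending-op queue depth; warn and double the
 * threshold so the next warning only fires under heavier
 * congestion. Returns 1 if a warning was emitted, else 0. */
static int check_queue(int queued_ops)
{
    if (queued_ops > congestion_warning_level) {
        printf("squidaio_queue_request: WARNING - Queue congestion\n");
        congestion_warning_level *= 2;  /* 8 -> 16 -> 32 -> ... */
        return 1;
    }
    return 0;
}
```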

>
> I've searched and really there does not appear to be a solid answer,
> except running out of cpu or running out of iops, neither "appears" to
> be the case here. Figured if it was a thread issue, I would see a
> bottleneck on my server? (ya?). Also, if it only happens a couple

Queue congestion is what I'd call a leaf thread/process bottleneck.

> of times, ignore it. This is just some testing and I believe this
> congestion is possibly causing the 500 errors I'm seeing while running
> my script.

Which are?

>
> Any pointers, where to look, etc? 2.7stable6 on an fc6/xen kernel had
> no such issues (yes, the SSDs are a new variable, but otherwise the
> hardware is identical).
>
> Thanks
> Tory
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 13.13 0.00 40.66 4.80 0.00 41.41
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> vda 1539.50 11516.00 6604.00 23032 13208
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 13.60 0.00 38.29 11.08 0.00 37.03
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> vda 1385.07 11080.60 0.00 22272 0
>
>
> [@cache01 ~]$ free
> total used free shared buffers cached
> Mem: 5039312 421768 4617544 0 50372 183900
> -/+ buffers/cache: 187496 4851816
> Swap: 7143416 0 7143416
>
> Totals since cache startup:
> sample_time = 1266055655.813016 (Sat, 13 Feb 2010 10:07:35 GMT)
> client_http.requests = 810672
> client_http.hits = 683067
> client_http.errors = 0
> client_http.kbytes_in = 171682
> client_http.kbytes_out = 2145472
> client_http.hit_kbytes_out = 1809060
> server.all.requests = 127606
> server.all.errors = 0
> server.all.kbytes_in = 321960
> server.all.kbytes_out = 38104
> server.http.requests = 127606
> server.http.errors = 0
> server.http.kbytes_in = 321960
> server.http.kbytes_out = 38104
> server.ftp.requests = 0
> server.ftp.errors = 0
> server.ftp.kbytes_in = 0
> server.ftp.kbytes_out = 0
> server.other.requests = 0
> server.other.errors = 0
> server.other.kbytes_in = 0
> server.other.kbytes_out = 0
> icp.pkts_sent = 0
> icp.pkts_recv = 0
> icp.queries_sent = 0
> icp.replies_sent = 0
> icp.queries_recv = 0
> icp.replies_recv = 0
> icp.query_timeouts = 0
> icp.replies_queued = 0
> icp.kbytes_sent = 0
> icp.kbytes_recv = 0
> icp.q_kbytes_sent = 0
> icp.r_kbytes_sent = 0
> icp.q_kbytes_recv = 0
> icp.r_kbytes_recv = 0
> icp.times_used = 0
> cd.times_used = 0
> cd.msgs_sent = 0
> cd.msgs_recv = 0
> cd.memory = 0
> cd.local_memory = 487
> cd.kbytes_sent = 0
> cd.kbytes_recv = 0
> unlink.requests = 0
> page_faults = 1
> select_loops = 467112
> cpu_time = 681.173445
> wall_time = -40015.496720
> swap.outs = 126078
> swap.ins = 1366134
> swap.files_cleaned = 0
> aborted_requests = 0

-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
   Current Beta Squid 3.1.0.16
Received on Sat Feb 13 2010 - 07:48:55 MST

This archive was generated by hypermail 2.2.0 : Sat Feb 13 2010 - 12:00:04 MST