RE: Disk full errors with Squid2.3S1 and Solaris

From: Mark Bizzell <bizzell@dont-contact.us>
Date: Mon, 21 Feb 2000 10:22:11 +1000

David,

Thanks to everyone for your responses.

I have decreased my cache_dir size from 1800 (squid 2.2S5) down to 1200
(squid 2.3S1), and this seems to be keeping the disks at 90-94% full with
0.5% fragmentation.
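
For reference, that change corresponds to a squid.conf line along these
lines (a sketch only - the "ufs" store type and the cache path here are
assumptions, not taken from the original config):

  # hypothetical example - adjust the path and store type to your build
  cache_dir ufs /opt/squid/cache/dir1 1200 16 256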

Seems a strange workaround but at least I won't have to do a rebuild this
week.

Mark.

-----Original Message-----
From: David J N Begley [mailto:david@avarice.nepean.uws.edu.au]
Sent: Thursday, 17 February 2000 7:44
To: Mark Bizzell
Cc: 'squid-users@ircache.net'
Subject: Re: Disk full errors with Squid2.3S1 and Solaris

On Wed, 16 Feb 2000, Mark Bizzell wrote:

> I have recently upgraded our campus proxy servers from squid2.2S5 to
> 2.3S1. Since the upgrade we've had a lot of trouble with the disks
> filling up - regardless of the cache_dir size.

The usual Solaris approach is to run "/usr/bin/df -g" and "/usr/sbin/fsck -n"
on the cache disks - for example:

  /usr/bin/df -g /opt/squid/cache/dir1
  /usr/sbin/fsck -n /dev/rdsk/c1t2d0s7

In the first case, you're looking for free space/inodes - in the second, the
fragmentation level (I can't recall if the same numbers are available in
fsck output). You should not only have free space/inodes, but fragmentation
shouldn't be high enough to impair the creation of new files.
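
A quick way to sweep every cache slice at once (a sketch only - the mount
points and raw device below are placeholders based on the example above, so
substitute your own):

  #!/bin/sh
  # Report free space/inodes for each cache directory, then
  # fragmentation for the cache slice (read-only check).
  for dir in /opt/squid/cache/dir1 /opt/squid/cache/dir2; do
    /usr/bin/df -g "$dir"
  done
  /usr/sbin/fsck -n /dev/rdsk/c1t2d0s7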

I think from your previous post you've already noted that you have free
space and free inodes, and that fragmentation is low, so something else
must be amiss (are you using file system tools to measure these, or relying
on Squid's own measurements?).

> Squid configuration
> ./configure --enable-icmp --enable-async-io --enable-snmp
> --enable-cache-digests

Have you built a version sans async I/O, "just in case"?
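
Going by the configure line quoted above, a rebuild without async I/O would
look something like:

  ./configure --enable-icmp --enable-snmp --enable-cache-digests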

> I have tried the valid combinations of
> newfs -m 2 -b [8192|4096] -f [1024|512] -o space <rdev>
> and also reduced the size of the cache_dir from 1800 down to 1500

My cache disks (Solaris 2.5.1 and 7, running Squid 2.2.STABLE5 with async
I/O) are run up to 96% capacity (both) and retain under 1% fragmentation
(Sol7 box); as you've noted above, "-o space" is critical when you don't
have enough spindles. With six spindles I can use "-o time" and Solaris
2.5.1 automatically varies between space/time optimisation across the
disks, though fragmentation does rise to about 7% (interestingly, Solaris 7
doesn't appear to possess this same functionality, letting fragmentation
rise to 17% before Solaris craps out complaining the disks are full).

Under Solaris 7, I built the latest cache disks using:

  newfs -v -i 8192 -m 2 -o space -r 7200 <rdev>

By altering the bytes/inode ratio, I hoped to reduce the number of inodes
(most of which were unused anyway) and increase the amount of data space.
At 96% data capacity, inode usage is about 54% (YMMV).
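
As a rough worked example (the 2 GB slice size is assumed purely for
illustration), "-i 8192" allocates half as many inodes as "-i 4096":

  2 GB slice = 2147483648 bytes
  2147483648 / 8192 bytes-per-inode = 262144 inodes
  2147483648 / 4096 bytes-per-inode = 524288 inodes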

> We have 2 squid proxies in a sibling relationship. These communicate with
> 2 parent squid proxies. Could the transfer of the cache_digests be
> filling up the disks?

It shouldn't (it certainly isn't on our 2.2.STABLE5 boxes, which rely
predominantly on cache digests with most of ICP turned off) - cache digests
should be in memory; I'm not aware of them finding their way into the
persistent disk cache. Even if they did, the file system tools above should
tell you the *real* disk usage, regardless of contents.
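
One way to cross-check Squid's own accounting against the file system (path
as in the df example above):

  /usr/bin/du -sk /opt/squid/cache/dir1   # blocks actually used on disk
  /usr/bin/df -k /opt/squid/cache/dir1    # file system's view of the slice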

Cheers..

dave