Re: Disk full again. Inodes are available.

From: Thanos Siaperas <thanus@dont-contact.us>
Date: Tue, 27 Jun 2000 10:41:44 +0300

Henrik Nordstrom wrote:
>
> Igor A.Klenin wrote:
>
> > Right now the cache does work. But when only ~780 MBytes are left on
> > the /cache partition, the cache will crash.
>
> Could it be that it runs out of space in the log directory? Squid will
> not be very happy if it cannot write to cache.log, access.log and some
> other log files...
>
> Also make sure your filesystem is tuned for space, not time. There has
> been an issue where the filesystem fragmentation caused by Squid leaves
> Solaris UFS filesystems with only fragments of blocks available. df
> still reports a lot of free space, but only small files can be created;
> any attempt to create larger files fails with "filesystem full, no
> space available".
>
> --
> Henrik Nordstrom
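
Regarding the space/time tuning Henrik mentions: on Solaris you can check
which way a UFS filesystem is currently tuned, and switch it, with something
like the following (the device name is only an example; tunefs is best run
with the filesystem unmounted):
# fstyp -v /dev/rdsk/c0t1d0s5 | grep -i optim    (shows the current "optim" setting)
# tunefs -o space /dev/rdsk/c0t1d0s5             (switch to space optimization)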

After checking the log files "squid.out" and "cache.log", try
# fsck -n /cache
and see what fragmentation it reports at the end.
It will probably be ~15%, which means that 15% of the disk is unusable :(
Also see,
http://www.squid-cache.org/Doc/FAQ/FAQ-14.html#ss14.1

BTW, my cache is 50 GB on Solaris UFS, resulting in 7.5 GB of unusable space :((
Optimizing for space is not an option, since our first priority is for the cache
to be fast.
The only defragmentation method I can think of is to ufsdump/ufsrestore the
cache filesystems, roughly as sketched below.
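
With squid stopped, something along these lines should do it, copying the
cache onto a freshly newfs'ed slice and then remounting it in place (the
device names and the /newcache mount point are only examples):
# newfs /dev/rdsk/c0t2d0s5                       (fresh, empty filesystem)
# mount /dev/dsk/c0t2d0s5 /newcache
# ufsdump 0f - /cache | (cd /newcache && ufsrestore rf -)
# umount /newcache ; umount /cache
# mount /dev/dsk/c0t2d0s5 /cache                 (same cache_dir path as before)
The restored copy is written into an empty filesystem, so it should come back
largely unfragmented, but the fragmentation will of course build up again as
squid keeps churning small objects.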
Are there any plans to implement something like the cyclic buffers of inn2.3,
perhaps with direct access to the disks rather than going through a filesystem?
Would a different caching algorithm cooperate better with Solaris UFS?

Do any fellow squid managers have any more ideas or solutions on this matter?

Thanks,

Thanos Siaperas
NOC / AUTH / Greece
thanus@auth.gr